Produced by Reuters Plus for Real Time Business
As organizations around the world adopt generative AI (GenAI) as part of their processes, many employees and business leaders are rightly concerned about how to use this powerful technology responsibly. While artificial intelligence (AI) can fuel productivity and inform decision-making, it comes with potential risks and pitfalls — including bias amplification, security vulnerabilities and hallucinations.
Disclaimer: The Reuters news staff had no role in the production of this content. It was created by Reuters Plus, the brand marketing studio of Reuters. To work with Reuters Plus, contact us here.
Addressing AI Risks: Preventing Bias and Achieving Ethical AI Use
Business leaders must educate themselves and their employees about potential risks and how to guard against them. Becoming wise to GenAI’s weaknesses — and how to offset them — is a key component of a successful AI adoption strategy.
AI Adoption: Areas of Concern
Employees and business leaders share several areas of concern regarding AI. While those concerns vary based on where organizations are in the adoption journey, a few main pain points include:
Job displacement
As AI-powered tools speed up productivity and drastically cut down on rote tasks such as data entry, some workers fear they will lose their jobs.
“Employees are undoubtedly concerned about the disruption and job displacement; however, we also see many who embrace this as an opportunity,” says Kapish Vanvaria, EY Americas Risk Leader.
Talent gaps
Enterprise leaders are worried about finding enough workers who are well-versed in AI tools and methods.
“Concern about a lack of talent in AI is a high priority for many leaders,” says Samta Kapoor, EY Americas AI Energy and Trusted AI Leader. “Filling that gap is important for scaling up and driving value with AI.”
Bias
AI bias can creep in at any phase in the lifecycle — from data collection to design to algorithmic function. “We’ve seen big companies in the news when their AI model had eliminated a segment of society primarily because of the data they were using to train the models,” says Kapoor. “Companies should be very worried about reputation loss and regulations if their AI systems are biased and proper controls are not put in place.”
Deepfakes and hallucinations
Unless they are adequately trained on quality data sets, AI algorithms can hallucinate (present false information) — and bad actors can harness the technology to create misleading images, audio and videos. “It’s important to acknowledge the limitations of current AI solutions and implement robust testing, validation and monitoring for cyber threats,” says Vanvaria.
Defining Ethical AI
Ethical or responsible AI use can be difficult to define without a solid set of objective standards. Within an organization, creating a framework with clear policies and procedures about how AI can and cannot be used is a great place to start, says Kapoor.
“It’s important to have strong governance. Bringing the right stakeholders together from the start is key,” says Kapoor. Take time to define terms and ensure employees in every layer of your organization understand the framework. “For example, when your organization defines fairness, what does it mean to your data scientist? What does it mean to your CEO, who might be using some form of AI in different use cases?”
To make ethical AI less subjective, Vanvaria suggests grounding an organization’s AI usage framework on existing regulations and recognized standards and guidelines, such as the NIST AI Risk Management Framework (AI RMF) and the European Union AI Act.
“Quantify confidence in AI solutions through metrics and benchmarking when possible,” says Vanvaria. “It’s also essential to make sure AI solutions are continuously monitored, and that humans are ultimately accountable for AI outcomes.”
5 Steps to Ensure Responsible AI Use
As your organization creates an AI strategy, here are five steps you can take right now to help ensure responsible use:
1. Take a “responsible AI by design” approach to mitigate risks.
Weave responsible AI principles into your overall framework, integrating clear boundaries and priorities into your development lifecycle. “For example, create technical controls for development teams, conduct impact assessments and do regular fairness testing,” says Vanvaria.
“Orchestrate all these tasks with an operating model that works for your organization, with the right roles coming together at the right times.”
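To make the idea of regular fairness testing concrete, here is a minimal illustrative sketch in Python of the kind of automated check a development team might run. The group and outcome field names, the sample data and the four-fifths threshold are assumptions for the example, not part of any specific framework mentioned above.

```python
# Illustrative fairness check: compare positive-outcome rates across groups.
# Column names, data and the 0.8 ("four-fifths" rule of thumb) cutoff are assumptions.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Made-up scoring results used only to demonstrate the check.
scores = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "A"],
    "approved": [1, 0, 1, 1, 0, 1],
})

ratio = disparate_impact_ratio(scores, "group", "approved")
if ratio < 0.8:
    print(f"Potential bias detected (ratio={ratio:.2f}); route to fairness review.")
else:
    print(f"Approval rates look balanced (ratio={ratio:.2f}).")
```

A check like this could run as one of the technical controls in the development lifecycle, with failures routed to the impact-assessment process described above.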
2. Establish a responsible AI framework grounded in industry standards.
Develop a deep understanding of existing and emerging industry standards for AI.
“Make sure your AI framework takes different AI usage patterns into account,” says Vanvaria. “For example, using enterprise ChatGPT versus developing GenAI internally are different types of AI use.”
3. Invest in technology capabilities for continuous monitoring.
Set up systems that will monitor your AI models and data sets constantly, checking for inconsistencies, bias and anomalies that could indicate a cybersecurity threat.
“Once your models are operationalized, how are you going to have controls that will ensure that model and data drifts are not happening?” says Kapoor.
To offset risks, build technical guardrails that highlight problems and train your algorithms to minimize bad output. Some examples include ModelOps platforms, automated testing and other monitoring solutions.
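As an illustration of what one such monitoring control could look like, the sketch below compares a production feature's distribution against its training baseline to flag data drift. The synthetic feature values, sample sizes and significance threshold are assumptions made for the example.

```python
# Illustrative drift check: two-sample Kolmogorov-Smirnov test comparing a live
# feature's distribution with the training baseline. Thresholds are assumptions.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live distribution differs significantly from the baseline."""
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # values seen at training time
live = rng.normal(loc=0.4, scale=1.0, size=5_000)      # shifted values arriving in production

if detect_drift(baseline, live):
    print("Data drift detected: trigger review or retraining workflow.")
else:
    print("No significant drift detected.")
```

In practice, a ModelOps platform would schedule checks like this across many features and models, alerting the responsible team when drift or anomalies appear.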
4. Work to ensure ongoing transparency and accountability.
At every level, keep the lines of communication open to help ensure trust in AI systems. “Inform users that they may be interacting with AI systems, explain how decisions are being made by the AI system and leverage confidence scores and human-in-the-loop to evaluate AI system decision-making,” says Vanvaria.
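One simple way to pair confidence scores with human-in-the-loop review, sketched here with assumed field names and an illustrative 0.85 threshold, is to auto-apply only high-confidence AI decisions and queue the rest for a person to check.

```python
# Illustrative human-in-the-loop gate: auto-apply a model decision only when its
# confidence clears a threshold; otherwise queue it for human review.
# The record fields and the 0.85 threshold are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Decision:
    record_id: str
    label: str
    confidence: float

def route(decision: Decision, threshold: float = 0.85) -> str:
    """Return 'auto' for high-confidence decisions, 'human_review' otherwise."""
    return "auto" if decision.confidence >= threshold else "human_review"

for d in [Decision("a1", "approve", 0.97), Decision("a2", "deny", 0.62)]:
    print(d.record_id, d.label, route(d))
```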
5. Create a rigorous training program anchored in real-world scenarios.
Build a culture of awareness in your organization, with AI training sessions that consider real scenarios of what could go wrong — and how to mitigate those risks. “The more hands-on you can make the training, the less anxiety employees will have,” says Kapoor. “And the more AI tools you can give them access to, the more they will know what to expect and how to add value to the organization.”
The Future of AI Governance
As businesses continue to integrate AI at every level, successful governance will depend on making sure legal, compliance, risk, IT and business leaders have a seat at the table when decisions are made. “Because of the enhanced risks of AI, they need to act in collaboration to help ensure that every angle is understood and addressed,” says Kapoor. Many enterprises and large corporations are adopting a hub-and-spoke model for AI use across sites and branch offices. “Corporations need to have some kind of central governance to make sure all these pieces of the puzzle are fitting together well,” says Kapoor.
While it might seem ironic, AI itself could be a helpful tool for AI governance. Algorithms can be used to test each other for bias and errors, and with an exponential rise in AI-related cyber crime, organizations may be wise to use AI-powered cybersecurity tools to detect malicious intent.
Still, keeping a human in the loop will remain a crucial component of any responsible AI framework. “It’s important to keep human oversight front and center,” says Vanvaria. “It’s part of maintaining transparency, which is a key component for building trust in AI systems.”
The views reflected in this article are the views of the author and do not necessarily reflect the views of Ernst & Young LLP or other members of the global EY organization.