Bias: AI's Achilles' Heel
Bias can result in undesirable AI outcomes. A recent DataRobot survey says organizations are most concerned about its impact on trust and their reputations.
Business and IT leaders are realizing that artificial intelligence must be implemented carefully to avoid unwanted results. In the race to implement AI over the past several years, most organizations have not addressed risk management adequately, and in some cases that lapse has resulted in headline news the company could have avoided had it exercised more care.
People are paying attention to those headlines and taking them to heart. AI has moved from the initial hype phase, in which proponents tend to focus only on the positive aspects, to what Gartner calls the "trough of disillusionment" and what Geoffrey Moore called "the chasm," in which the drawbacks of the technology become too apparent to ignore and cause organizations to proceed with greater caution.
Bias is AI's Achilles' heel, and it's been the 800-pound gorilla in the room waiting to be recognized by people other than data scientists.
Machine learning platform provider DataRobot recently published a report on the topic, entitled "The State of AI Bias in 2019." The report found that 42% of the 350 enterprise respondents surveyed are "very" to "extremely" concerned about AI bias. Interestingly, about twice that number (83%) said their companies have established AI guidelines and are taking steps to avoid bias. In addition, 85% believe AI regulation would help, and 93% plan to invest more in AI bias prevention initiatives in the next 12 months. Respondents' two biggest concerns about bias are compromised brand reputation and loss of customer trust.
It should be noted that "avoiding" and "preventing" bias, as stated in the report, are far less practical goals than minimizing it, given the many types of bias, the varying contexts in which AI is applied, and the numerous ways bias can seep into and aggregate across AI systems. Also, survey participants were limited to the U.S. and U.K.
"Bias is nothing new. Humans [are biased] all of the time. The difference with AI is that it happens at a bigger scale and it's measurable," said Colin Priest, Vice President of AI Strategy at DataRobot. "Another thing is that bias is not an absolute. It depends on the use case and on the values of the people applying it."
For example, while Western cultures tend to value equal opportunity, there are valid reasons why insurance companies charge young males more for car insurance and elderly people more for life insurance.
However, gaffes like Amazon's hiring algorithm and the Apple Card's credit limits, both of which discriminated against women, got the general public thinking about AI bias and its impact on fairness.
"It's actually mathematically impossible to get rid of all types of bias, whether that be from AI or humans because more than 20 types are mathematically impossible to achieve concurrently," Priest said.
Fairness is key
Fairness is the goal when it comes to minimizing bias, but the definition of "fair" varies depending on the context.
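To see why the definition matters, here is a minimal sketch, not from the report, that applies two common statistical definitions of fairness to the same toy predictions. The group labels, outcomes and decisions are invented for illustration; the point is that one definition can be satisfied while another is violated.

```python
# Two common fairness definitions applied to the same toy predictions.
# Data is invented purely for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # actual outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # model decisions
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # sensitive attribute

def selection_rate(pred, mask):
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    positives = mask & (true == 1)
    return pred[positives].mean()

a, b = group == "A", group == "B"

# Demographic parity: do both groups receive positive decisions at similar rates?
parity_gap = abs(selection_rate(y_pred, a) - selection_rate(y_pred, b))

# Equal opportunity: do qualified members of both groups get approved at similar rates?
opportunity_gap = abs(true_positive_rate(y_true, y_pred, a)
                      - true_positive_rate(y_true, y_pred, b))

print(f"demographic parity gap: {parity_gap:.2f}")   # 0.00 here
print(f"equal opportunity gap:  {opportunity_gap:.2f}")  # 0.33 here
```

On this toy data the selection rates are identical across groups while the true positive rates are not, so the "fair" verdict depends entirely on which definition the organization has decided fits its context.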
"There's a real gap between what values are and how to actually put them into practice," said Priest. "I've been seeing ethical principles coming from governments around the world, from the EU, and the IEEE. If you read them, they pretty much come to a consensus. They sound reasonable, but when you get to the details though, I don't know what to do."
Organizations need to consider who they need to protect, the level of disclosure they're willing to give stakeholders about AI use, how AI could affect stakeholders, and whether stakeholders can question AI outcomes. Governance is also important to ensure that AI is behaving consistently with the organization's values.
"No matter what system you build, it's going to be skewed in some way, so how can we build these systems in a way that leads to a fair outcome that reduces the amount of bias that's reflected?" said Kirsten Lloyd, associate at Booz Allen Hamilton.
Guidelines should align with the organization's values, which raises the question of who should articulate them. The answer is not one person or the C-suite acting in isolation, but a diverse team representing different kinds of expertise, such as data scientists, business domain experts, systems designers and ethicists, each of whom brings a perspective the others may not have considered.
"[Bias] will be a lot easier problem to address if you're thinking about it from the beginning rather than as an afterthought," said Booz Allen Hamilton's Lloyd.
Specifically, three important considerations are: What does the organization value? Does the project or initiative align with the business objectives and values articulated? What does "fair" mean in the specific context in which the AI will be used?
"Define what unfair bias is. You can't minimize something if you've never bothered to define it," said DataRobot's Priest. "I don't start by looking at the data. Data is data. It's when you start deriving patterns from it and deriving decisions that bias occurs, so I wait until I build a model and then I say is that model using any sensitive attributes? If so, show me the patterns of what it is doing with those sensitive attributes. I want to check whether those patterns that it's using are consistent with the values that I've got." Next, he looks for indirect discrimination, whether any other attributes can predict any of the sensitive attributes (serve as a proxy for sensitive data), and how the predictions were made.
Priest also suggests managing AIs like people: each should have a job description that explains its goal and what it should be doing, as well as assigned KPIs so it can be managed properly.
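A minimal sketch of what that could look like in practice follows; the KPI names, values and thresholds are invented, and a real deployment would pull the values from monitoring rather than hard-code them.

```python
# Illustrative "KPIs for an AI" check. Metric names and thresholds are hypothetical.
kpis = {
    "approval_accuracy":   {"value": 0.91, "minimum": 0.85},
    "gender_approval_gap": {"value": 0.06, "maximum": 0.05},
    "decisions_reviewed":  {"value": 0.02, "minimum": 0.01},
}

for name, kpi in kpis.items():
    too_low  = "minimum" in kpi and kpi["value"] < kpi["minimum"]
    too_high = "maximum" in kpi and kpi["value"] > kpi["maximum"]
    status = "NEEDS ATTENTION" if (too_low or too_high) else "ok"
    print(f"{name}: {kpi['value']:.2f} ({status})")
```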
Watch for bias in third-party data
Organizations are supplementing their own data with third-party data, and just as with their own data, they need to consider bias.
"If you look at what happened with the Apple Card, you've got multiple AIs in place there. But what was horrible there ... was a lack of AI governance for both Goldman Sachs and Apple. Apple just washed their hands," said Priest. "If you are using a third party AI, it's up to you to apply governance. You should be checking its behavior before you use it. You should be insisting on sufficient disclosure that individual decisions and the reasons for them can be provided. You should be looking at processes for customers to question the decisions."
Organizations will be held responsible for what their AIs do, just as they are responsible for what their employees do. Therefore, governance must extend to third-party data, and it should reflect what have emerged as common AI principles among governments, standards bodies and other organizations, including transparency (aka explainability) and accountability (who's responsible when something goes wrong).
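One simple way to check a third-party model's behavior before adopting it, assuming the vendor exposes some way to score a single record, is a counterfactual test: score matched records that differ only in a sensitive attribute and count how often the decision flips. The model object, fields and values below are assumptions for illustration, not a real vendor API.

```python
# Hypothetical pre-adoption check for a third-party model: does the decision
# change when only the sensitive attribute changes? The predict() interface
# and record fields are assumptions, not a real vendor API.
import copy

def counterfactual_flip_rate(model, records, attribute, alternative):
    flips = 0
    for record in records:
        twin = copy.deepcopy(record)
        twin[attribute] = alternative   # same applicant, only the sensitive value changed
        if model.predict(record) != model.predict(twin):
            flips += 1
    return flips / len(records)

applicants = [
    {"income": 54000, "credit_history_years": 6,  "gender": "female"},
    {"income": 88000, "credit_history_years": 12, "gender": "female"},
]

# flip_rate = counterfactual_flip_rate(third_party_model, applicants, "gender", "male")
# A non-trivial flip rate means decisions change with the sensitive attribute alone,
# which is exactly the behavior governance should catch before deployment.
```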
Regulation: yea or nay?
The DataRobot survey participants all work for enterprises, which explains the high degree (85%) of interest in regulation. Vendors are split on the issue. Some fear that regulation will negatively impact innovation.
"I was surprised by the percentages and then I realized it's got to do with the verticals. It's biased towards the financial industry. The financial services industry is highly regulated so they're used to having this sort of stuff. [However,] in my experience that vertical is focused too much on regulatory compliance and not on what is right and what is wrong, so they only protect attributes that are protected attributes by law," said Priest.
Priest has been working recently with the Singapore government, which is creating an optional governance checklist for organizations creating, deploying, and managing AI. The feedback on it was mixed because some people thought it was too prescriptive and others thought it wasn't prescriptive enough.
If guidelines aren't prescriptive enough, people don't know how to apply them. If they're too prescriptive, they can come across as a mandate. Another problem governments struggle with is the fact that government officials are not AI experts. While they are actively tapping AI experts for help, there's still a gap that needs to be bridged.
"I think, eventually, as organizations evolve in terms of self-regulation, there will be more of a consensus around what's needed. I think it's a little bit too early for a comprehensive [government] policy because there's so much that still needs to be figured out," said Booz Allen Hamilton's Lloyd.
Bottom line
Bias is an important issue that enterprises need to address sooner rather than later if they want to use AI responsibly. While organizations tend to be well aware of AI's potential benefits, they've been less focused on the potential risks. That trend is shifting as more business and IT leaders articulate concern about AI bias and take steps to manage the issue proactively from both business and technical points of view.
For more on AI bias and ethics, check out these articles:
How Applying Labels and Other Tactics Keep AI Transparent
Why Businesses Should Adopt an AI Code of Ethics -- Now
Why It's Nice to Know What Can Go Wrong with AI