Tools Tackle AI's Bias, Trust Problem

AI and machine learning deployments are hitting the mainstream in enterprises, but executives still hesitate to blindly accept insights from inside the "black box" without seeing the logic behind them.

Jessica Davis, Senior Editor

September 20, 2018


Is your algorithm fair and unbiased? How can you be sure that the insights it offers are correct? These questions have been asked with increasing frequency over the past year. That's because when it comes to machine learning, data goes into a "black box" and insights emerge on the other side. The algorithm itself sits inside this so-called black box. No one can see inside it. No one knows why the algorithm arrived at one conclusion or another. Should you follow its advice without knowing how it got to that conclusion?

It all comes down to trust. Can you put blind faith in algorithms?

According to a new survey of 305 business executives conducted by Forbes Insights and sponsored by Accenture Applied Intelligence, Intel, and SAS, 43% of organizations with successful AI deployments have a process in place for augmenting or overriding questionable results, compared with 28% of less successful organizations.

Executives are paying more attention to this black box and how to deal with it.

On one level, there's a concern about whether the insights the algorithm produces are fair or biased. Are individuals being filtered out of hiring decisions because of the university they attended, their age, their gender? Organizations may need to expose the logic to regulatory or compliance bodies.

Beyond the issue of fairness, executives need to know why algorithms reach certain conclusions before they can act on them. A Deloitte report points out that business leaders often hesitate to place blind faith in a result that cannot be explained. Is the algorithm telling you to discontinue a certain successful product line? If so, you probably want to know why it's telling you to do that before you actually shut down manufacturing.

Organizations need to be able to look inside the black box to ascertain how the machine arrived at certain decisions. More vendor companies, including Adobe and IBM Watson AI, are talking about these issues and even offering tools to address them.

"You can't just address bias in the traditional way of looking what your engineers are putting into the software," said Anil Kamath at Adobe in an interview with InformationWeek. "You also have to address bias in the data."

If you are a marketer, your campaigns will suffer if they rely on insights driven by biased or irrelevant data.

For instance, if you have only collected data from North American customers and rely on those insights to create a campaign for European customers, chances are it won't be as effective as it could be.
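
To make that concrete, here is a minimal, hypothetical sketch of the kind of coverage check a team might run before trusting a model in a new market. The data, the "region" column, and the 20% threshold are illustrative choices, not taken from any vendor's tooling:

```python
# Hypothetical sketch: check whether training data covers the market a
# model will serve. Data, column name, and threshold are illustrative.
import pandas as pd

training = pd.DataFrame({
    "region": ["NA"] * 950 + ["EU"] * 50,  # toy data: mostly North American
})

# Share of training rows per region.
coverage = training["region"].value_counts(normalize=True)

for region in ("NA", "EU"):
    share = coverage.get(region, 0.0)
    if share < 0.20:  # arbitrary illustrative threshold
        print(f"Warning: only {share:.0%} of training rows are from {region}")
# Warning: only 5% of training rows are from EU
```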

To help organizations get a better picture of what goes on inside the black box, Adobe introduced a technique it calls MAGIX, or Model Agnostic Globally Interpretable Explanations. Adobe said in a blog post introducing the technique that "MAGIX finds rules that define automated segments that explain the patterns the model uses across the board…Think of MAGIX as an interpretation layer that sits on top of the machine learning model."
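
The blog post doesn't detail MAGIX's internals, but the general idea of a global, model-agnostic interpretation layer can be sketched by fitting a simple, readable surrogate model to a black box's own predictions. The sketch below uses a shallow scikit-learn decision tree rather than MAGIX's actual rule-mining method, so it is purely illustrative of the concept:

```python
# Illustrative sketch of a global, model-agnostic interpretation layer.
# This is NOT Adobe's MAGIX; it shows the general idea by training a
# shallow surrogate tree to mimic a black-box model, then printing the
# human-readable rules the surrogate learned.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)  # opaque model

# Fit the surrogate on the black box's predictions, not the true labels,
# so its rules describe the model's behavior rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```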

IBM this week introduced a new software service that it says will enable businesses to monitor AI "under the hood" as it operates, providing a dashboard view inside the black box and helping companies comply with regulations such as GDPR. IBM Research is also releasing to the open source community an AI bias detection and mitigation toolkit that offers tools and education that the company says encourage global collaboration around addressing bias in AI.
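
Toolkits of this kind implement fairness metrics such as disparate impact, the ratio of favorable-outcome rates between groups. The hypothetical sketch below computes that metric directly in pandas rather than through IBM's own API; the data and the 0.8 rule-of-thumb threshold are illustrative:

```python
# Hypothetical sketch of one common bias metric, disparate impact: the
# ratio of favorable-outcome rates between an unprivileged group (A) and
# a privileged group (B). Data is made up; this is not IBM's toolkit API.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,                  # protected attribute
    "approved": [1] * 40 + [0] * 60 + [1] * 70 + [0] * 30,  # model outcomes
})

rate_a = df.loc[df.group == "A", "approved"].mean()  # 0.40
rate_b = df.loc[df.group == "B", "approved"].mean()  # 0.70

disparate_impact = rate_a / rate_b
print(f"Disparate impact: {disparate_impact:.2f}")   # 0.57

# A common rule of thumb flags values below 0.8 as potential bias.
if disparate_impact < 0.8:
    print("Potential bias detected against group A")
```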

Ritika Gunnar, VP for Watson Data and AI, told InformationWeek in an interview that in order to get value from AI, businesses need to infuse it into their most essential applications and workloads. But one of the top barriers to doing that is concern about whether the insights can be trusted and whether the information needed for compliance can be exposed.

IBM's software service, which offers trust and transparency services on the IBM Cloud, works with models built in a variety of machine learning frameworks and AI environments beyond Watson, including TensorFlow, SparkML, AWS SageMaker, and AzureML.

IBM's announcement coincides with the release of a study, conducted by the IBM Institute for Business Value, that surveyed 5,000 C-suite executives. The AI 2018 report found that 82% of enterprises, and 93% of high-performing enterprises, are now considering or moving ahead with AI adoption with a focus on revenue generation. Among the barriers, 60% fear liability issues and 63% say they lack the skills to harness AI's potential.

"Organizations believe AI is essential to business, yet when you look at the main factor to prevent going ahead with production, it's trust," Gunnar said. "…being able to have explainability gives the business confidence.

Looking for more on AI, bias, and trust? Check out these stories:


AI & Machine Learning: An Enterprise Guide

Debiasing Our Statistical Algorithms Down to Their Roots

Ethics, Privacy Issues Highlight Strata Data Conference

How Machine Learning, Classification Models Impact Marketing Ethics

How AI and Intelligent Automation Impact HR Practices


About the Author

Jessica Davis

Senior Editor

Jessica Davis is a Senior Editor at InformationWeek. She covers enterprise IT leadership, careers, artificial intelligence, data and analytics, and enterprise software. She has spent a career covering the intersection of business and technology. Follow her on Twitter: @jessicadavis.

