How to Build Trust with Artificial Intelligence - InformationWeek

Commentary
5/29/2019 07:00 AM
Cathy Cobey, EY, Global Trusted Artificial Intelligence Advisory Leader


Chief information officers and other IT leaders can reduce or amplify risks depending on how they answer these five questions.

What is the leading barrier to the adoption of artificial intelligence? I believe it is a lack of trust. CIOs, enterprise leaders and developers want to have confidence in their AI systems and to build trust with their external stakeholders. Yet we know from experience, and from several high-profile examples, that there are serious risks when AI is used without a robust governance and ethical framework.

An increasing number of industries are starting to embrace AI because it can help people do their jobs better and more efficiently. But will doctors continue to trust AI as a diagnostic tool if the result is a misdiagnosis that ultimately puts the patient’s health at risk? Will car dealers use AI to determine the creditworthiness of customers even if good buyers get denied, leading to lost sales?

Below are five key questions every company should ask when working with developers to design an AI agent. How your organization answers these questions can either reduce or amplify risks -- and greatly impact the trustworthiness of AI.

What are your AI goals?

For example, you may develop an AI agent -- a program that can make decisions or perform a service based on its environment, user input and experiences -- to see, recognize and classify images. If that technology is used to classify pictures in an online photo album, it has a much lower risk profile than if it is used to detect objects in front of an autonomous vehicle that must decide whether to stop or go.

How complex is what you’re trying to achieve?

It’s important to consider how many different capabilities are required to achieve the AI agent’s goal. Sensing and perceiving meaning from video is more complex than doing the same from still images. Utilizing unstructured textual data requires the use of optical character recognition (OCR) and natural language processing (NLP) to first read and then contextualize the data. The more capabilities required of the AI agent, the greater the risk of systemic failure.
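The chained-capability risk described above can be made concrete with a minimal sketch. The two stage functions below are hypothetical stand-ins, not calls to any real OCR or NLP library; the point is only that each added stage is another place the pipeline can fail.

```python
# Illustrative sketch only: run_ocr and run_nlp are hypothetical stand-ins
# that simulate an OCR stage followed by an NLP stage.

def run_ocr(image_bytes: bytes) -> str:
    """Stage 1 (simulated): extract raw text from a scanned document."""
    return "Applicant reports annual income of 85,000."

def run_nlp(text: str) -> dict:
    """Stage 2 (simulated): contextualize the raw text, e.g. pull out figures."""
    tokens = text.replace(",", "").rstrip(".").split()
    amounts = [int(t) for t in tokens if t.isdigit()]
    return {"text": text, "amounts": amounts}

def process_document(image_bytes: bytes) -> dict:
    # Chaining the stages: an error in OCR propagates into NLP, which is
    # why more required capabilities mean greater risk of systemic failure.
    return run_nlp(run_ocr(image_bytes))

result = process_document(b"<scanned-image>")
print(result["amounts"])  # the figure recovered from the simulated scan
```

If the OCR stage misreads "85,000" as "35,000", the NLP stage will contextualize the wrong figure with full confidence -- the failure compounds silently.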

Is your environment stable or variable?

For an AI agent that predicts the creditworthiness of loan applicants, the environment may be stable if the data is in a structured format with little variability. By contrast, an autonomous vehicle operating on the open road faces an environment that is highly variable and unpredictable. That variability significantly increases the risk of prediction error, because the AI agent is operating in a dynamic environment and uncharted territory.

How could bias be introduced into your AI’s predictions?

AI agents that are designed to make predictions about people often carry a risk of bias. When AI agents process data on people, developers need to consider a long list of individual characteristics -- such as ethnicity, age, gender and sexual orientation -- and determine whether these characteristics could affect the decisions or actions of the AI agent.

For each AI use case, it’s important to consider the diversity of the full population and to take measures to monitor how well the AI agent performs across all of these groups.
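One simple way to operationalize that monitoring is to disaggregate an accuracy metric by group and watch the gap between the best- and worst-served groups. A minimal sketch, with invented data and group labels:

```python
# Hedged sketch: measure an AI agent's accuracy separately for each
# demographic group. Records and group names are invented for illustration.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}

records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = accuracy_by_group(records)
gap = max(rates.values()) - min(rates.values())
print(rates, gap)  # a large gap flags uneven performance across groups
```

An aggregate accuracy of 62.5% here hides the fact that one group is served at 75% and another at only 50%; tracking the per-group rates, not just the average, is what surfaces the disparity.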

What is the level of human involvement?

The fifth condition, and one which is commonly used today to mitigate the risks of AI, is the level of human involvement in decisions or actions taken by the AI agent. Many organizations today are using AI to augment human operators: the AI agents provide insights, but the human operators still make the final decision and execute the final action. As the level of autonomy of AI agents increases, so too must the rigor of the mechanisms that continuously monitor their performance.
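A common pattern for this augmentation model is confidence-based routing: the agent acts on its own only when it is sufficiently sure, and otherwise hands the case to a human. The sketch below is illustrative only -- the threshold value and function names are assumptions, not any specific product's API.

```python
# Illustrative human-in-the-loop routing sketch. The 0.90 threshold is an
# assumption for demonstration; in practice it would be set per use case.
REVIEW_THRESHOLD = 0.90  # below this confidence, a human decides

def route_decision(prediction: str, confidence: float) -> str:
    if confidence >= REVIEW_THRESHOLD:
        return f"auto:{prediction}"        # AI acts autonomously
    return f"human_review:{prediction}"    # AI only suggests; human decides

print(route_decision("approve", 0.97))
print(route_decision("approve", 0.62))
```

Raising the threshold shifts more decisions to humans (lower autonomy, lower risk); lowering it does the reverse, which is why the threshold itself should be governed and periodically reviewed.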

It’s clear that most organizations want to harness AI’s full potential to fuel future growth, but to do so, they will need to adopt governance and ethical standards that embed trust and security in their systems.

Drawing on 25 years as a technology risk advisor, Cathy Cobey is EY's Global Trusted AI Advisory Leader. She leads a global team that considers the ethical and control implications of artificial intelligence and autonomous systems. Cobey leverages her unique background as a CPA and her involvement with the EY Climate Change & Sustainability practice to consider the full spectrum of technological and societal implications in intelligent automation development. She also serves on several technical advisory committees to develop industry and regulatory standards for emerging technology.
