AI Accountability: Proceed at Your Own Risk

A new report suggests that to improve AI accountability, enterprises should tackle third-party risk head-on.

John Edwards, Technology Journalist & Author

September 8, 2020


A report issued by technology research firm Forrester, "AI Aspirants: Caveat Emptor," highlights the growing need for third-party accountability in artificial intelligence tools.

The report found that a lack of accountability in AI can result in regulatory fines, brand damage, and lost customers, all of which can be avoided by performing third-party due diligence and adhering to emerging best practices for responsible AI development and deployment.

The risks of getting AI wrong are real and, unfortunately, they're not always directly within the enterprise's control, the report observed. "Risk assessment in the AI context is complicated by a vast supply chain of components with potentially nonlinear and untraceable effects on the output of the AI system," it stated.

Most enterprises partner with third parties to create and deploy AI systems because they don’t have the necessary technology and skills in house to perform these tasks on their own, said report author Brandon Purcell, a Forrester principal analyst who covers customer analytics and artificial intelligence issues. "Problems can occur when enterprises fail to fully understand the many moving pieces that make up the AI supply chain. Incorrectly labeled data or incomplete data can lead to harmful bias, compliance issues, and even safety issues in the case of autonomous vehicles and robotics," Purcell noted.

Danger ahead

The highest-risk AI use cases are the ones in which a system error leads to negative consequences. "For example, using AI for medical diagnosis, criminal sentencing, and credit determination are all areas where an error in AI can have severe consequences," Purcell said. "This isn't to say we shouldn't use AI for these use cases -- we should -- we just need to be very careful and understand how the systems were built and where they're most vulnerable to error." Purcell added that enterprises should never blindly accept a third party's promise of objectivity merely because it's a computer making the decisions. "AI is just as susceptible to bias as humans because it learns from us," he explained.
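Purcell's point about bias can be made concrete. The sketch below shows one common way a team might probe a vendor-supplied decision model for group-level bias: compare approval rates across demographic groups and flag a large gap. It is illustrative only, not a method prescribed by the Forrester report; the data fields, the stand-in model, and the rough 0.8 threshold are all assumptions for the example.

    # Minimal sketch: probing a third-party decision model for group-level
    # bias. Field names, the stand-in model, and the ~0.8 threshold are
    # illustrative assumptions, not prescriptions from the report.
    from collections import defaultdict

    def approval_rates(records, predict):
        """Approval rate per demographic group for a binary decision model."""
        approved = defaultdict(int)
        total = defaultdict(int)
        for person in records:
            group = person["group"]
            total[group] += 1
            approved[group] += predict(person)  # 1 = approve, 0 = deny
        return {g: approved[g] / total[g] for g in total}

    def disparate_impact(rates):
        """Ratio of lowest to highest group approval rate; 1.0 means parity."""
        return min(rates.values()) / max(rates.values())

    # Stand-in "model" that approves purely on income -- a real audit would
    # call the vendor's model here instead.
    records = [
        {"group": "A", "income": 65000},
        {"group": "A", "income": 42000},
        {"group": "B", "income": 39000},
        {"group": "B", "income": 35000},
    ]
    rates = approval_rates(records, lambda p: 1 if p["income"] >= 40000 else 0)
    print(rates)                    # per-group approval rates
    print(disparate_impact(rates))  # values far below ~0.8 warrant scrutiny

A check this simple won't catch every failure mode, but it illustrates why auditors need access to the model and representative data rather than a vendor's assurance of objectivity.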


Third-party risk is nothing new, yet AI differs from traditional software development due to its probabilistic and nondeterministic nature. "Tried-and-true software testing processes no longer apply," Purcell warned, adding that companies adopting AI will experience third-party risk most significantly in the form of deficient data that "infects AI like a virus." Overzealous vendor claims and component failure, leading to systemic collapse, are other dangers that need to be taken seriously, he advised.
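As a rough illustration of catching "deficient data" before it infects a model, the Python sketch below runs two basic checks on vendor-supplied labeled data: missing required fields and badly skewed label distributions. The field names and the 5% threshold are illustrative assumptions, not recommendations from the report.

    # Minimal sketch: sanity-checking vendor-supplied labeled data before
    # training. Field names and thresholds are illustrative assumptions.
    from collections import Counter

    def audit_labeled_data(rows, required_fields=("text", "label"),
                           min_label_share=0.05):
        """Return a list of human-readable problems found in the dataset."""
        problems = []
        incomplete = [i for i, r in enumerate(rows)
                      if any(r.get(f) in (None, "") for f in required_fields)]
        if incomplete:
            problems.append(f"{len(incomplete)} row(s) with missing fields")

        labels = Counter(r["label"] for r in rows if r.get("label"))
        for label, count in labels.items():
            share = count / len(rows)
            if share < min_label_share:
                problems.append(f"label {label!r} underrepresented ({share:.1%})")
        return problems

    rows = [{"text": "ok", "label": "pos"}] * 98 + [
        {"text": "", "label": "neg"},
        {"text": "bad", "label": "neg"},
    ]
    print(audit_labeled_data(rows))  # flag issues before any training run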

Preventative steps

Purcell urged performing due diligence on AI vendors early and often. "Much like manufacturers, they also need to document each step in the supply chain," he said. He recommended that enterprises bring together diverse groups of stakeholders to evaluate the potential impact of an AI-generated slip-up. "Some firms may even consider offering 'bias bounties,' rewarding independent entities for finding and alerting you to biases."

The report suggested that enterprises embarking on an AI initiative select partners that share their vision for responsible use. Most large AI technology providers, the report noted, have already released ethical AI frameworks and principles. "Study them to ensure they convey what you strive to condone while you also assess technical AI requirements," the report stated.

Effective due diligence, the report observed, requires rigorous documentation across the entire AI supply chain. It noted that some industries are beginning to adopt the software bill of materials (SBOM) concept, a list of all of the serviceable parts needed to maintain an asset while it's in operation. "Until SBOMs become de rigueur, prioritize providers that offer robust details about data lineage, labeling practices, or model development," the report recommended.
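To make the SBOM idea concrete for AI, the sketch below shows the kind of manifest an enterprise might request from a provider, covering the data lineage, labeling practices, and model-development details the report calls out. Every field name and value here is a hypothetical example, not an established standard format.

    # Minimal sketch: an SBOM-style manifest for an AI system, capturing
    # the data-lineage, labeling, and model-development details the report
    # says to ask providers for. All fields are illustrative assumptions.
    import json

    ai_bill_of_materials = {
        "model": {
            "name": "credit-risk-scorer",
            "version": "2.3.1",
            "developed_by": "Example Vendor Inc.",
            "training_framework": "scikit-learn 0.23",
        },
        "data_lineage": [
            {
                "dataset": "loan-applications-2019",
                "source": "internal CRM export",
                "collected": "2019-01 through 2019-12",
                "known_gaps": "applicants under 21 underrepresented",
            },
        ],
        "labeling": {
            "practice": "dual annotation with adjudication",
            "labeled_by": "third-party labeling service",
            "inter_annotator_agreement": 0.87,
        },
        "evaluation": {
            "holdout_accuracy": 0.91,
            "fairness_checks": ["disparate impact by protected class"],
        },
    }

    print(json.dumps(ai_bill_of_materials, indent=2))

Even an informal manifest like this gives buyers something auditable to hold a provider to, which is the point of prioritizing vendors that document their pipelines.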

Enterprises should also look internally to understand and evaluate how AI tools are acquired, deployed, and used. "Some organizations are hiring chief ethics officers who are ultimately responsible for AI accountability," Purcell said. In the absence of that role, AI accountability should be considered a team sport. He advised data scientists and developers to collaborate with internal governance, risk, and compliance colleagues to help ensure AI accountability. "The folks who are actually using these models to do their jobs need to be looped in, since they will ultimately be held accountable for any mishaps," he said.

Takeaway

Organizations that don’t prioritize AI accountability will be prone to missteps that lead to regulatory fines and consumer backlash, Purcell said. "In the current cancel culture climate, the last thing a company needs is to make a preventable mistake with AI that leads to a mass customer exodus."

Cutting corners on AI accountability is never a good idea, Purcell warned. "Ensuring AI accountability requires an initial time investment, but ultimately the returns from more performant models will be significantly greater," he said.

To learn more about AI and machine learning ethics and quality, read these InformationWeek articles:

Unmasking the Black Box Problem of Machine Learning

How Machine Learning is Influencing Diversity & Inclusion

Navigate Turbulence with the Resilience of Responsible AI

How IT Pros Can Lead the Fight for Data Ethics

About the Author

John Edwards

Technology Journalist & Author

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.

