Are You Ready to Manage AI Risks? - InformationWeek


Lisa Morgan

Are You Ready to Manage AI Risks?

Most companies have IT-related risk management programs, but they need to be updated to include the nuances of AI.

Image: Shutterstock

Your company is likely thinking about, evaluating or implementing some form of AI, but is your risk management function equipped to handle it? Likely not, because it probably hasn’t considered what could possibly go wrong with a self-learning system.

"It's no longer just about technical people sitting in a room looking for software bugs, but conscious thinking about where and how I'm using AI and machine learning. Am I comfortable with that? How should we be doing that?" said Steven Mills, associate director of Artificial Intelligence and Machine Learning at global management consulting firm Boston Consulting Group.

Failure to focus on the potential risks -- and thinking only of the potential benefits -- is a classic symptom of early-market technology adoption. Later, when the risks become evident, that initial idealism gives way to a sounder enterprise strategy.

“It's far easier to instill and institutionalize AI-related risk management processes now rather than rearchitecting everything after something goes wrong,” said Mills. “If you plan to use AI at enterprise scale, the ideal time to think about the governance issues is now.”

In fact, many organizations don't have an AI strategy; they have a piecemeal, use-case-focused approach to AI adoption. That myopic approach overlooks the fact that algorithms, like software applications, operate within a larger ecosystem.

"AI involves a technical problem we call entanglement. If algorithm A's output is the input to algorithm B, and algorithm B's output is input for algorithm C and they're using different data, it's a tangled mess," said Mills. "If you're not careful, you make one small change, it cascades and breaks everything. So, you have to think about the infrastructure you have and plan ahead so you don't create these issues."

AI should reflect your company’s values

Individual businesses have to decide for themselves what kinds of outcomes they want AI to drive. To ensure those outcomes, they need to think critically about fairness.

Steve Mills, BCG

"Since there's no one definition of fairness, each organization is going to have to think about what it means to them and the reason they're going to have to do this is it's a multi-objective problem," said Mills. "If you think about all the definitions of fairness, you can't satisfy all of them, so you need to have an active discussion about the tradeoffs."

A related issue is algorithmic bias, which leads to problems such as unintentional sex and racial discrimination. Because algorithms affect people's everyday lives, lawmakers are getting involved, including the two senators who recently proposed the Algorithmic Accountability Act. In the meantime, organizations are wise to define what "responsible AI use" means for them beyond regulatory compliance. In fact, the operative word is "governance."

Various levels of intelligence are seeping into enterprises in various forms, including chatbots, virtual assistants, and robotic process automation (RPA), plus all the AI being embedded in enterprise software, systems, and devices. The shift to an AI-enabled enterprise necessitates cultural change, including entirely new ways of working and multidisciplinary teams, but not every organization is doing that yet.

Learn about machine learning at Interop

Mills will kick off the Emerging Tech track at Interop with the session Seeing Through the Fog: Demystifying Machine Learning, a real-world primer for business and IT executives on May 22 at 9:00 a.m. – 9:45 a.m. You’ll find out what’s hype, what isn’t and what other companies are doing to compete more effectively.

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include ... View Full Bio