Your company is likely thinking about, evaluating or implementing some form of AI, but is your risk management function equipped to handle it? Likely not, because it probably hasn’t considered what could possibly go wrong with a self-learning system.
"It's no longer just about technical people sitting in a room looking for software bugs, but conscious thinking about where and how I'm using AI and machine learning. Am I comfortable with that? How should we be doing that?" said Steven Mills, associate director of Artificial Intelligence and Machine Learning at global management consulting firm Boston Consulting Group.
Focusing only on the potential benefits while ignoring the potential risks is a classic symptom of early-market technology adoption. Later, when the risks become evident, the initial idealism gives way to a sound enterprise strategy.
“It's far easier to instill and institutionalize AI-related risk management processes now rather than rearchitecting everything after something goes wrong,” said Mills. “If you plan to use AI at enterprise scale, the ideal time to think about the governance issues is now.”
In fact, many organizations don’t have an AI strategy; they have a piecemeal, use-case-focused approach to AI adoption. This myopic approach overlooks the fact that algorithms, like software applications, operate within a larger ecosystem.
"AI involves a technical problem we call entanglement. If algorithm A's output is the input to algorithm B, and algorithm B's output is input for algorithm C and they're using different data, it's a tangled mess," said Mills. "If you're not careful, you make one small change, it cascades and breaks everything. So, you have to think about the infrastructure you have and plan ahead so you don't create these issues."
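The cascading failure Mills describes can be made concrete with a toy pipeline. The sketch below is hypothetical (the models, threshold, and decision logic are invented for illustration): three chained "models" where one small upstream tuning change flips a downstream decision, even though nothing downstream changed.

```python
# Minimal sketch (hypothetical pipeline) of the "entanglement" problem:
# downstream models consume upstream model outputs, so a small upstream
# change cascades through the chain.

def model_a(x):
    # Upstream score. Imagine this threshold was recently "tuned" from 0.5 to 0.6.
    THRESHOLD_A = 0.6
    return 1.0 if x > THRESHOLD_A else 0.0

def model_b(a_out, y):
    # B treats A's output as an input feature -- an implicit, often
    # undocumented contract between the two systems.
    return a_out * y

def model_c(b_out):
    # C assumes B's output is on the scale A used to produce.
    return "approve" if b_out >= 0.5 else "deny"

decision = model_c(model_b(model_a(0.55), 1.0))
# Under the old threshold (0.5) this input was approved; after the tweak
# to model_a it is denied -- yet nothing in B or C changed.
```

Mapping these dependencies before making changes, rather than after something breaks, is exactly the kind of infrastructure planning Mills recommends.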
AI should reflect your company’s values
Individual businesses have to decide for themselves what kinds of outcomes they want AI to drive. To ensure those outcomes, they need to think critically about fairness.
"Since there's no one definition of fairness, each organization is going to have to think about what it means to them and the reason they're going to have to do this is it's a multi-objective problem," said Mills. "If you think about all the definitions of fairness, you can't satisfy all of them, so you need to have an active discussion about the tradeoffs."
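Mills' multi-objective point can be illustrated with toy numbers (the group names and counts below are invented for illustration, not drawn from any real system). Two common fairness definitions are compared: demographic parity, which compares selection rates across groups, and equal opportunity, which compares true-positive rates. The same predictions can satisfy one definition while violating the other.

```python
# Hypothetical toy numbers showing that fairness definitions conflict:
# a model can satisfy demographic parity yet violate equal opportunity.

groups = {
    # name: (n_selected, n_total, true_positives, actual_positives)
    "group_a": (5, 20, 4, 8),
    "group_b": (5, 20, 3, 4),
}

for name, (sel, total, tp, pos) in groups.items():
    selection_rate = sel / total  # demographic parity compares these
    tpr = tp / pos                # equal opportunity compares these
    print(f"{name}: selection rate={selection_rate:.2f}, TPR={tpr:.2f}")

# Selection rates match (0.25 vs 0.25), so demographic parity holds -- yet
# the true-positive rates differ (0.50 vs 0.75), so equal opportunity is
# violated. When base rates differ across groups, some tradeoff between
# definitions is unavoidable, which is why Mills calls for an active
# discussion of which tradeoffs the organization will accept.
```

This is the "active discussion" in practice: an organization must pick which definition (or weighted compromise) reflects its values, because the math rules out satisfying all of them at once.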
A related issue is algorithmic bias, which can result in unintentional gender and racial discrimination. Because algorithms affect people's everyday lives, lawmakers are getting involved, including the two senators who recently proposed the Algorithmic Accountability Act. In the meantime, organizations are wise to define what "responsible AI use" means for them beyond regulatory compliance. In fact, the operative word is "governance."
Various levels of intelligence are seeping into enterprises in various forms, including chatbots, virtual assistants, and robotic process automation (RPA), plus all the AI being embedded in various types of enterprise software, systems, and devices. The shift to an AI-enabled enterprise necessitates cultural change, including entirely new ways of working and multidisciplinary teams, but not every organization is doing that yet.
Learn about machine learning at Interop
Mills will kick off the Emerging Tech track at Interop with the session Seeing Through the Fog: Demystifying Machine Learning, a real-world primer for business and IT executives, on May 22 from 9:00 a.m. to 9:45 a.m. You’ll find out what’s hype, what isn’t, and what other companies are doing to compete more effectively.
Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit.