IT risk management is a mature topic, but it continues to evolve with technology. As rules-based systems are supplemented with self-learning systems, IT departments, risk managers and business leaders need to update their thinking.

Lisa Morgan, Freelance Writer

September 17, 2018


IT risk management has historically focused on known risks stemming from systems, software, and operations. The purpose of IT risk management has been to ensure the availability of systems and software.

Today's risk management strategies also take safety and security into account. Adding self-learning systems to the mix changes an organization's risk profile.

"Once you start rolling out AI, you're introducing a whole new set of risks in the course of running your business," said Keith Strier, global and Americas Artificial Intelligence leader and Global Technology Sector digital leader at global professional services firm EY. "Maintaining the performance, stability and security of those intelligent systems is very different than the risks that are posed by traditional ERP and other enterprise systems."

Beware of biased data

The danger is that the color-coded dashboards that reflect traditional KPIs aren't enough to alert IT organizations to other risks. For example, system performance may be fine, but the underlying data is biased or perhaps an algorithm has been corrupted.

"When you're getting into data quality and potentially looking for things like bias, you have to ask different questions, look for different things," said Strier. "It requires an evolution of the very definition of what IT risk is and how you measure it."

After all, biased AI is the direct result of biased data. Even though there is growing general awareness of the potential for biased data, that factor may not yet be an integral part of an IT risk management strategy.

"If it's not on your radar, you're not protecting [against] it or measuring it," said Strier. "Maybe managing bias remains the job of data scientists, but IT has to be aware of it, account for it and include it in the overall [risk] profile. Companies are pursuing AI pervasively without considering [the] risks."

Overall system complexity rises

Every time new technologies, products, services and solutions are added, the overall IT infrastructure becomes more complex. As the infrastructure complexity evolves, so must IT risk strategies and the associated metrics.


"If you spent the last 20 years managing IT network performance and security, the performance and accessibility of enterprise systems and laptop security, you know what that world looks like," said Strier. "Now companies are adopting intelligence as part of the value chain. If you don't evolve the complexity of the risk model, you're not doing your job. It's a pretty big gap for companies."

Organizations are deploying AI across the business to reduce security risks, improve HR, enhance customer experience and drive better ROI from marketing campaigns. An important but oft-overlooked factor is that AI instances can have dependencies on other systems, and they may be interacting with other AI instances.

"The ecosystem of AI is most tangibly seen on Wall Street with flash crashes. A lot of the big banks and trading companies have moved to algorithmic trading. Massive systems are trading billions of shares per second," said Strier. "It's not human traders trying to outwit each other, it's AI working at the speed of light against another AI."

Those particular systems are operating autonomously, so a crash can result in cascading failures.
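Strier's flash-crash example also points to the standard mitigation: circuit breakers that halt autonomous activity when interactions start to spiral. As a minimal sketch (the price-move threshold, time window and interfaces are illustrative assumptions, not how any exchange actually operates):

```python
import time
from collections import deque

class CircuitBreaker:
    """Halts autonomous trading when price moves more than `max_move`
    (as a fraction) within `window` seconds -- a crude stand-in for
    the safeguards placed around algorithmic trading. Thresholds are
    illustrative assumptions."""
    def __init__(self, max_move=0.05, window=60.0):
        self.max_move, self.window = max_move, window
        self.ticks = deque()   # (timestamp, price)
        self.halted = False

    def on_price(self, price, now=None):
        now = time.time() if now is None else now
        self.ticks.append((now, price))
        # Drop ticks that have aged out of the window.
        while self.ticks and now - self.ticks[0][0] > self.window:
            self.ticks.popleft()
        lo = min(p for _, p in self.ticks)
        hi = max(p for _, p in self.ticks)
        if lo > 0 and (hi - lo) / lo > self.max_move:
            self.halted = True   # stop the algo; page a human
        return self.halted

breaker = CircuitBreaker()
for t, price in enumerate([100.0, 99.5, 98.0, 94.0]):  # simulated crash
    if breaker.on_price(price, now=float(t)):
        print(f"t={t}: trading halted at {price}")
        break
```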

"We don't understand all the risks we're creating and I don't think we have any way of truly understanding them," said Strier. "If you're applying AI across your organization, it' not just whether those are secure but what are the interactions with other internal systems and external systems?"

Already, there are outcries about transparency as it relates to discrete AI instances. Network-effect transparency is a more complex and difficult problem, the fallout of which is not completely known or predictable. Right now, the trend is to focus on the opportunities AI enables without placing equal emphasis on the risks.

Trust is important

You may be deploying AI, but can you trust it? In traditional IT, trust has primarily hinged on whether the system performed as expected, was safe, accessible and secure. All of those things apply to AI systems, but can you trust their analysis, conclusions, and decisions?

Even in the case of autonomous systems, there is growing chatter about "the need for a human in the loop" as a safeguard. Supervisory AI instances are being suggested as another, complementary failsafe, one that can operate at the scale and speed of other AI instances. The purpose of supervisory AI is to monitor other AI instances to ensure they are doing what they're intended to do and, if they are not, to issue alerts and perhaps take some level of remedial action.

Robert Donaldson, a retired computer scientist, risk management executive and former CIO of the Pennsylvania Department of Revenue, thinks that AI trust and safety could be better assured if automated and intelligent systems had an immutable purpose – that is, if each system were designed for a specific purpose and used only for that purpose. If a system deviated from its purpose for any reason, it would shut itself down, or it would send an alert when it started to deviate.

Of course, the alerting mechanism would need to be calibrated in a way that minimizes false positives. Otherwise, the alerts would eventually be disregarded.
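Neither Donaldson nor Strier specifies an implementation, but the two ideas compose naturally: a supervisory process scores a system's drift from its declared purpose, alerts past one calibrated threshold, and halts the system past a higher one. The sketch below is a hypothetical illustration; the deviation score and both thresholds are assumptions that would have to come from domain-specific monitoring.

```python
from dataclasses import dataclass

@dataclass
class PurposeGuard:
    """Supervisory watchdog in the spirit of Donaldson's 'immutable
    purpose': score how far a monitored system has drifted from its
    declared purpose, alert past one threshold, halt past a higher
    one. Scores and thresholds here are illustrative assumptions."""
    purpose: str
    alert_at: float = 0.3      # calibrated to keep false positives rare
    shutdown_at: float = 0.7

    def check(self, deviation_score: float) -> str:
        # deviation_score in [0, 1] would come from domain-specific
        # monitors (output drift, out-of-scope requests, etc.)
        if deviation_score >= self.shutdown_at:
            return "SHUTDOWN"  # self-terminate, per the immutable-purpose idea
        if deviation_score >= self.alert_at:
            return "ALERT"     # notify humans; keep running
        return "OK"

guard = PurposeGuard(purpose="classify support tickets")
for score in (0.1, 0.4, 0.8):
    print(score, guard.check(score))
```

The hard engineering problem is exactly the one the article flags: setting `alert_at` so that alerts stay rare enough to be taken seriously.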

"Trust is the foundational business case for evolving and understanding the new risk landscape. Failing to measure, monitor and mitigate against these risks will lead to a lack of trust and lack of value and then you'll question the whole purpose of your investment," said EY's Strier. "You can't understand how to build trust until you understand the risks and I think that's good business motivator for developing a new risk framework."

For more about risk, AI, and bias, check out these articles:

AI Is a Powerful Ally in Public Safety - Responsible Use Is Paramount

Ethics, Privacy Issues Highlight Strata Data Conference

AIOps to Drive Big IT Pivot

Panacea or Alchemy? The Truth About AI

About the Author(s)

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.

