Is Your Organization Vulnerable to Shadow AI?
Who knows what evil lurks in the heart of unauthorized AI? The shadow-savvy IT leader knows.
At a Glance
- Shadow AI refers to AI programs used without approval.
- Biggest danger: Sensitive data falling into wrong hands.
- Most common shadow AI source: Noble intentions.
Lurking within the dark recesses of every enterprise are the developers and other tech-savvy IT team members who want to push the envelope. Such individuals aren’t willing to wait for IT to stamp its seal of approval on a new software tool. Working surreptitiously and without oversight, they plow forward with their unsanctioned technology, regardless of potential security implications, often sharing their innovation with unsuspecting colleagues.
When it comes to shadow artificial intelligence deployments, the stakes grow even higher. The term shadow AI refers to AI programs used without the approval or knowledge of organization leaders. For years, CIOs and IT managers have struggled with shadow IT -- the use of any software tool not specifically authorized by IT bosses. “This same issue is now presenting itself in the form of AI tools,” says Tommy Gardner, CTO with HP Federal.
Shadow AI typically surfaces when scientists and software engineers jump onto new open source technologies and apply them in their work, often at first just to try them out, says Scott Zoldi, chief analytics officer at credit scoring service FICO. “Once established, it’s used consistently without being brought back into any formal AI model governance processes.” Organizations grappling with shadow AI often don’t understand who created it, how it’s being used, and for what purposes, all of which can create considerable risks.
Danger Ahead
Perhaps the biggest danger associated with unaddressed shadow AI is that sensitive enterprise data could fall into the wrong hands. This poses a significant risk to privacy and confidentiality, cautions Larry Kinkaid, a consulting manager at BARR Advisory, a cybersecurity and compliance solutions provider. “The data could be used to train AI models that are commingled, or worse, public, giving bad actors access to sensitive information that could be used to compromise your company’s network or services.” There could also be serious financial repercussions if the data is subject to legal, statutory, or regulatory protections, he adds.
Organizations dedicated to responsible AI deployment and use follow strong, explainable, ethical, and auditable practices, Zoldi says. “Together, such practices form the basis for a responsible AI governance framework.” Shadow AI occurs out of sight and beyond AI governance guardrails. When used to make decisions or impact business processes, it usually doesn’t meet even basic governance standards. “Such AI is ungoverned, which could make its use unethical, unstable, and unsafe, creating unknown risks,” he warns.
It may seem harmless when employees download an AI-powered program to their work laptop without permission. Concern arises, however, when that program contains an exploitable vulnerability. “If the IT team isn’t aware that these programs are in use, they can’t take appropriate preventive measures to get ahead of the issue and company data could be compromised,” Gardner explains.
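One way IT teams get ahead of the issue Gardner describes is to inventory endpoints against a watchlist of known AI tools. Below is a minimal sketch of that idea in Python, limited to locally installed Python packages; the watchlist and approved-list names are illustrative assumptions, not recommendations from the article’s sources.

```python
# Minimal sketch: flag locally installed Python packages commonly associated
# with AI tooling that IT has not yet approved. Both name sets below are
# hypothetical placeholders, not a vetted inventory.
from importlib.metadata import distributions

WATCHLIST = {"openai", "anthropic", "transformers", "langchain"}
APPROVED = {"transformers"}  # hypothetical: packages IT has already sanctioned

def find_unapproved_ai_packages() -> list[str]:
    """Return watchlisted AI packages that are installed but not approved."""
    installed = {
        dist.metadata["Name"].lower()
        for dist in distributions()
        if dist.metadata["Name"]
    }
    return sorted((installed & WATCHLIST) - APPROVED)

if __name__ == "__main__":
    for name in find_unapproved_ai_packages():
        print(f"Unapproved AI-related package found: {name}")
```

A real deployment would run checks like this from endpoint-management tooling and would also cover browser extensions and standalone apps, but the watchlist-versus-approved-list comparison is the core of the technique.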
Potential Suspects
AI is a transformative technology with the power to fundamentally improve the way businesses use data, as well as manage customer interactions. “When leveraging AI properly, organizations can make data-informed decisions at super-human speed,” Zoldi says. “However, with shadow AI, creators often don’t understand or anticipate the inherent dangers that accompany AI use.”
The most common shadow AI source is hard-working employees or teams who download one or more unauthorized apps with the noble intention of using AI to solve immediate issues or investigate new processes. Yet while using shadow AI-powered productivity tools or editing features to accomplish a task may seem relatively harmless, serious vulnerabilities can arise when users fail to follow a formal approval process, Gardner explains. “While these tools can be great options to solve a problem in the short term, they pose a potential security issue for the organization,” he notes.
Defensive Measures
Besides providing security awareness training to employees, organizations aiming to mitigate risks associated with shadow AI should establish clear governance through acceptable use policies, Kinkaid advises. “Ensure that the use of AI is addressed within formal risk assessments,” he adds.
Zoldi stresses the importance of alerting the entire enterprise to shadow AI’s danger. “AI can be used in any department, from legal and HR to software development, sales, business planning, customer support, and beyond.”
It’s also important for AI algorithms to be accessed through a private network supported by controlled security practices, Zoldi says. “This means, as a first-line defense, blocking access to open source code.” Users who want to leverage open source AI need to go through a controlled review process.
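To illustrate that first-line defense, here is a hedged sketch of a default-deny egress check that only permits AI endpoints that have passed the review process Zoldi describes. The hostnames are invented for the example; in practice the rule would live in a proxy or firewall rather than application code.

```python
# Minimal sketch of a default-deny egress policy for AI services: a host is
# reachable only if it has cleared the organization's review process.
# The approved host below is a hypothetical private endpoint.
APPROVED_AI_HOSTS = {"ai.internal.example.com"}

def is_egress_allowed(host: str) -> bool:
    """Default-deny: permit a host only if it is on the reviewed allowlist."""
    return host.lower().rstrip(".") in APPROVED_AI_HOSTS

if __name__ == "__main__":
    for host in ("api.openai.com", "ai.internal.example.com"):
        verdict = "allow" if is_egress_allowed(host) else "block (pending review)"
        print(f"{host}: {verdict}")
```

The design choice that matters is the default: unknown AI endpoints are blocked until reviewed, rather than allowed until noticed.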
A Single Possible Benefit
Despite the danger, shadow AI can offer a significant benefit once it’s been detected, identified, and brought under governance. “It can be useful in providing a starting point on which to build and modify AI ideas,” Kinkaid states.