Commentary
2/12/2019 07:00 AM
Jim Carnes, Chief Security Architect, Ciena

Protecting Us from AI's Dark Side

The promise of AI is real, but the potential dangers are just as real. In the end, countermeasures will alleviate those dangers.

Artificial intelligence, and now augmented intelligence, have received a lot of attention. While some aspects may be overhyped, the technology is certain to be part of our future. Adobe's 2018 Digital Trends report found that while just 15% of enterprises are using AI today, 31% have it on their agenda for the next 12 months.

Advancements in AI, including machine learning and neural networks, are marching us toward a more interconnected and automated future. Even as the technology matures, the science behind AI is still trying to understand how the human mind works and to replicate that understanding in ways that improve our daily lives.

Considerations for an AI future

Despite the best of intentions for AI, computer systems may ultimately develop in ways we never intended. This was illustrated during the 2017 Neural Information Processing Systems conference, when researchers presented an AI-based image-mapping system that learned to produce results by hiding source data via steganography. The system produced the results the researchers were looking for, but only by "cheating": concealing the data it needed to "succeed."
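For readers unfamiliar with the mechanism, the following is a minimal sketch of least-significant-bit steganography, the general idea of hiding one signal inside another. It illustrates the concept only; the function names are placeholders, and this is not the actual encoding the NeurIPS system learned.

# Minimal LSB steganography sketch (illustrative; not the NeurIPS system).
import numpy as np

def hide(cover: np.ndarray, secret: np.ndarray, bits: int = 2) -> np.ndarray:
    """Store the top `bits` of each secret pixel in the bottom `bits` of cover."""
    mask = (0xFF << bits) & 0xFF          # zero out the cover's low bits
    return (cover & mask) | (secret >> (8 - bits))

def reveal(stego: np.ndarray, bits: int = 2) -> np.ndarray:
    """Recover an approximation of the hidden image from the low bits."""
    return stego << (8 - bits)            # uint8 shift drops the cover bits

cover = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
secret = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
stego = hide(cover, secret)               # looks almost identical to cover
error = np.abs(reveal(stego).astype(int) - secret.astype(int)).max()
print(error)                              # stays below 2 ** (8 - bits)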

As AI becomes more widespread, the notion that this technology could be used for malicious purposes becomes more real. One recent example to consider is the effect of AI-driven social media bots on the 2016 US election.

Ultimately, AI will "learn" from the data we provide it to train on. We need to be sure we are not biasing the results by selecting data that fits our preconceived understanding of what is appropriate. Consider the example in this research, where a system was trained to misinterpret road signs, turning stop signs into speed limit signs.
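To make the road-sign attack concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one common way such misclassifications are crafted. The cited research did not publish this code; the classifier, label index, and image tensor below are hypothetical placeholders.

# FGSM sketch (hypothetical model and inputs; illustrative only).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Nudge each pixel in the direction that increases the model's loss,
    so a correctly classified sign (e.g., a stop sign) gets misread."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # One signed-gradient step per pixel, then clamp to a valid image range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: 'classifier' is any trained sign classifier and
# 'stop_sign' a normalized [1, 3, H, W] tensor; label 14 = "stop" is assumed.
# adv = fgsm_perturb(classifier, stop_sign, torch.tensor([14]))
# classifier(adv).argmax()  # may now report a speed limit class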

The security industry also has work to do to create more defense mechanisms, as advanced attacks on data integrity are increasingly difficult to detect and defend against. As a start, Google's publication of its ethical principles for AI acknowledged both the potential and the risk of the technology.

AI’s role in defending the network

On the flip side, AI will continue to play a role in defending against other threats. Organizations are already developing AI-based products that provide threat hunting, attack analysis, and incident response to proactively search for potential issues.

Another example is the recent work by the New York Power Authority to integrate AI into its power grid. While the system can certainly aid in detecting anomalies, it is critical to control how the system responds to threats. As Rob Lee stated, "You don't want your grid operators, the humans that are controlling the grid, to become so dependent on the machine learning or AI model that they forget how to do their jobs."
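To give a sense of what anomaly detection over grid telemetry can look like, below is a minimal rolling z-score sketch. The window, threshold, and frequency readings are illustrative assumptions, not details of NYPA's system, and in the spirit of Lee's point it only flags anomalies for a human operator rather than acting on the grid.

# Rolling z-score anomaly detector sketch (not NYPA's actual system).
from collections import deque
import statistics

def detect_anomalies(readings, window=30, threshold=3.0):
    """Yield (index, value) for readings far outside the recent norm.
    Flags for a human operator; never acts on the grid itself."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) >= 2:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                yield i, value
        history.append(value)

# Hypothetical grid frequency readings (Hz) with one injected spike.
stream = [60.0, 60.01, 59.99, 60.02, 59.98] * 6 + [61.5]
print(list(detect_anomalies(stream)))  # -> [(30, 61.5)]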

Minimizing AI risks

To be clear, AI can be used for good. But it is important for organizations to understand the full picture and keep in mind what can be done to mitigate the risks.

Legislation: While research and experimentation should continue unfettered to the greatest extent possible, the consequences of failure should be defined before AI-based systems are given control of essential areas such as critical infrastructure, finance, healthcare, and cyber defense.

Bound the capabilities: One possible control would be first to limit the "box" within which the AI system can operate, mitigating unintended consequences. If those limits are identified by red-teaming worst-case scenarios, controls can be established that dictate exactly what the system can do.

Examples of such controls would be preventing the automated delivery of medical treatment in response to a detected healthcare risk, or limiting the size and frequency of financial transactions in response to abnormal market conditions.
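As an illustration, here is a minimal sketch of such a bounding guard for the financial case. The limits, class name, and trade sizes are hypothetical assumptions; in practice the thresholds would come from the red-teamed worst cases described above.

# "Bounded box" guard around an AI trading agent (illustrative limits).
from dataclasses import dataclass, field
import time

@dataclass
class TradeGuard:
    max_trade_size: float = 10_000.0   # largest single order allowed
    max_trades_per_minute: int = 5     # frequency cap for abnormal markets
    _timestamps: list = field(default_factory=list)

    def allow(self, trade_size: float) -> bool:
        """Return True only if the AI's proposed trade stays inside the box."""
        now = time.monotonic()
        self._timestamps = [t for t in self._timestamps if now - t < 60]
        if trade_size > self.max_trade_size:
            return False               # size limit breached
        if len(self._timestamps) >= self.max_trades_per_minute:
            return False               # frequency limit breached
        self._timestamps.append(now)
        return True

# The AI proposes; the guard disposes.
guard = TradeGuard()
for proposed in (5_000.0, 25_000.0):
    print(proposed, "->", "execute" if guard.allow(proposed) else "blocked")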

Keep the human at the wheel: Much like today's research with self-driving cars, automated responses to network threats that could be disruptive should, for now, be left to a human's decision. Augmented intelligence combines the speed of machine intelligence with the intuition and control of humans. Coupled with this is the need for the network to adapt to external stimuli. The AI system can recommend a course of action, or several; it could even predict the outcome of a particular choice. But ultimately the "red button" needs to be pushed by a human.
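One way to structure that hand-off is sketched below. The Recommendation type and the console prompt are illustrative assumptions, not a real product's interface.

# Human-in-the-loop sketch: the model recommends, a person approves.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str             # e.g., "isolate host 10.0.0.7"
    predicted_outcome: str  # the model's forecast if the action is taken
    confidence: float       # 0.0 - 1.0

def respond_to_threat(recommendations: list) -> None:
    for rec in sorted(recommendations, key=lambda r: -r.confidence):
        print(f"AI suggests: {rec.action}")
        print(f"  predicted outcome: {rec.predicted_outcome}"
              f" (confidence {rec.confidence:.0%})")
        # The "red button": nothing disruptive happens without a human yes.
        if input("Execute? [y/N] ").strip().lower() == "y":
            print(f"Operator approved: executing '{rec.action}'")
            break
        print("Operator declined; showing next option.")

respond_to_threat([
    Recommendation("isolate host 10.0.0.7", "contains lateral movement", 0.92),
    Recommendation("block outbound port 445", "stops exfiltration path", 0.71),
])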

Experiment and deploy more: The best mitigation for risk may in fact be to accelerate research and testing in controlled areas. Using the systems in simulated or replicated environments will enable researchers to better recognize unintended events or responses when they occur and to better understand how to mitigate them.

Training and deliberate design: Published frameworks on both the social and technical impacts of AI should be made available to everyone.

The promise of AI is real, but the potential dangers are just as real. As with countless other advances we have made, countermeasures will reveal themselves, and the benefits of the "new" will ultimately overtake the risk. This, too, will be the case with AI.

Jim Carnes is the Chief Security Architect of Ciena, a supplier of telecommunications networking equipment, software and services.
