Protecting Us from AI’s Dark Side
The promise of AI is real, but the potential dangers are just as real. In the end, countermeasures will keep those dangers in check.
Artificial intelligence, and now augmented intelligence, have received a lot of attention. While some aspects may be overhyped, the technology is a certain part of our future. Adobe’s 2018 Digital Trends report found that while just 15% of enterprises are using AI today, 31% have it on their agenda for the next 12 months.
Advancements in AI, including machine learning and neural networks, are marching us toward a more interconnected and automated future. While the technology continues to mature, the science behind AI is still working to understand how the human mind works and to replicate it in ways that improve our daily lives.
Considerations for an AI future
Despite the best of intentions for AI, computer systems may ultimately develop in ways we never intended. This was illustrated at the 2017 Neural Information Processing Systems conference, when researchers presented an AI-based image-mapping system that had learned to produce results by hiding source data via steganography. The system produced the results the researchers were looking for, but only by “cheating” and hiding the data it needed to “succeed.”
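To see how data can be smuggled inside an image unnoticed, consider classic least-significant-bit (LSB) steganography. The sketch below is a minimal illustration of that general technique, not the specific encoding the researchers uncovered; the function names are my own.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the LSBs of the image."""
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_lsb(cover, b"hidden source data")
assert extract_lsb(stego, 18) == b"hidden source data"
```

Because only the lowest bit of each pixel changes, the altered image is visually indistinguishable from the original, which is exactly what makes this kind of “cheating” hard to spot.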
As AI becomes more widespread, the notion that this technology could be used for malicious purposes becomes more real. One recent example to consider is the effect of AI-driven social media bots on the 2016 US election.
Ultimately, AI will “learn” based on the data we give it to train on. We need to be sure we are not biasing the results by selecting data that fits our preconceived understanding of what is appropriate. Consider the research in which a system was trained to misinterpret road signs, classifying stop signs as speed limit signs; the sketch below illustrates the underlying idea.
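Here is a hedged sketch of how poisoned training data skews a model, using a toy scikit-learn classifier; the features, labels, and numbers are fabricated purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy 2-D features for two sign classes: 0 = stop sign, 1 = speed limit sign.
X = rng.normal(loc=[[0, 0]] * 200 + [[3, 3]] * 200, scale=1.0)
y = np.array([0] * 200 + [1] * 200)

clean = LogisticRegression().fit(X, y)

# Poison the training set: relabel most stop signs as speed limit signs.
y_poisoned = y.copy()
flipped = rng.choice(200, size=140, replace=False)
y_poisoned[flipped] = 1

poisoned = LogisticRegression().fit(X, y_poisoned)

probe = np.array([[0.0, 0.0]])  # a typical stop sign
print("clean model:   ", clean.predict(probe))     # [0] -- stop sign
print("poisoned model:", poisoned.predict(probe))  # likely [1] -- misread
```

The model faithfully learns whatever the labels say, which is why curating and auditing training data matters as much as the algorithm itself.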
The security industry also has work to do to create more defense mechanisms, as advanced attacks on data integrity are increasingly difficult to detect and defend against. As a start, Google’s publication of its ethical principles on AI showed both the potential and the risk of the technology.
AI’s role in defending the network
On the flip side, AI will continue to play a role in defending against other threats. Organizations are already developing AI-based products that provide threat hunting, attack analysis, and incident response, proactively searching for potential issues.
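As a hedged illustration of the kind of model such products might build on, the sketch below trains scikit-learn’s IsolationForest on synthetic network-flow features and flags an outlier; the features and numbers are assumptions for illustration, not any vendor’s actual method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Synthetic flow records: [bytes sent, duration in seconds, distinct ports].
normal_flows = rng.normal(loc=[5_000, 30, 3], scale=[1_000, 10, 1],
                          size=(1_000, 3))

detector = IsolationForest(contamination=0.01, random_state=1).fit(normal_flows)

# A burst of traffic touching many ports, as in a scan or exfiltration attempt.
suspect = np.array([[90_000, 2, 40]])
print(detector.predict(suspect))  # -1 marks the flow as anomalous
```

A real product would layer enrichment, correlation, and analyst workflows on top, but anomaly scoring of this general shape is a common starting point.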
Another example is the New York Power Authority’s recent work to integrate AI into its power grid. While the system can certainly aid in detecting anomalies, it is critical to control how the system responds to threats. As Rob Lee stated, “You don't want your grid operators, the humans that are controlling the grid, to become so dependent on the machine learning or AI model that they forget how to do their jobs."
Minimizing AI risks
To be clear, AI can be used for good. But it is important for organizations to understand the full picture and keep in mind what can be done to mitigate the risks.
Legislation: While research and experimentation should continue as unfettered as possible, the consequences of failure should be defined before AI-based systems are given control of essential areas such as critical infrastructure, finance, healthcare, and cyber defense.
Bound the capabilities: One possible control is to first limit the “box” within which the AI system can operate, mitigating unintended consequences. If limits are established by red-teaming worst-case scenarios, controls can be put in place that dictate exactly what the system can do.
Examples of such limits would be preventing the automated delivery of medical treatment in response to a detected healthcare risk, or capping the size and frequency of financial transactions in response to abnormal market conditions, as in the sketch below.
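As a hedged sketch of what such a “box” might look like in code, the guard below enforces hard limits on the size and rate of transactions an automated system may issue; the class and its limits are hypothetical illustrations, not a real product’s API.

```python
import time

class TransactionGuard:
    """Hard 'box' around an automated trading agent: caps size and rate."""

    def __init__(self, max_amount: float, max_per_minute: int):
        self.max_amount = max_amount
        self.max_per_minute = max_per_minute
        self._timestamps: list[float] = []

    def approve(self, amount: float) -> bool:
        now = time.monotonic()
        # Keep only timestamps inside the 60-second window.
        self._timestamps = [t for t in self._timestamps if now - t < 60]
        if amount > self.max_amount:
            return False  # exceeds the per-transaction cap
        if len(self._timestamps) >= self.max_per_minute:
            return False  # exceeds the rate cap
        self._timestamps.append(now)
        return True

guard = TransactionGuard(max_amount=10_000.0, max_per_minute=5)
for proposed in (2_500.0, 50_000.0):
    action = "execute" if guard.approve(proposed) else "block and escalate"
    print(f"proposed ${proposed:,.0f}: {action}")
```

The key design choice is that the limits live outside the model: no matter what the AI system recommends, the guard cannot be talked past its hard-coded bounds.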
Keep the human at the wheel: Much as with today’s research into self-driving cars, potentially disruptive automated responses to network threats should, for now, be left to a human’s decision. Augmented intelligence combines the speed of machine intelligence with the intuition and control of humans. Coupled with this is the need for the network to adapt to external stimuli. The AI system can recommend a course of action, or several; it can even predict the outcome of a particular choice. But ultimately the “red button” needs to be pushed by a human, as sketched below.
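A minimal sketch of that “red button” pattern, assuming a hypothetical recommend-and-confirm flow: the system proposes a response and predicts its outcome, but nothing executes without an explicit human yes.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    predicted_outcome: str

def recommend(alert: str) -> Recommendation:
    # A real system would rank several candidate responses; this is a stub.
    return Recommendation(
        action=f"isolate host implicated in '{alert}'",
        predicted_outcome="threat contained; 2 services briefly degraded",
    )

def respond(alert: str) -> None:
    rec = recommend(alert)
    print(f"Recommended: {rec.action}")
    print(f"Predicted outcome: {rec.predicted_outcome}")
    # The 'red button': execution requires explicit human confirmation.
    if input("Execute? [y/N] ").strip().lower() == "y":
        print("Executing response...")
    else:
        print("Held for operator review.")

respond("anomalous lateral movement from 10.0.0.7")
```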
Experiment and deploy more: The best mitigation for risk may in fact be to accelerate research and testing in controlled settings. Running the systems in simulated or replicated environments will let researchers recognize when unintended events or responses occur and better understand how to mitigate them; a simple harness is sketched below.
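One hedged sketch of such controlled testing: replay an automated policy against simulated scenarios and log any response that deviates from a vetted baseline. The events, policies, and baseline here are invented for illustration.

```python
# Replay an automated policy against simulated scenarios and flag surprises.
BASELINE = {
    "port scan detected": "alert operator",
    "malware signature match": "quarantine file",
    "login burst from new region": "require MFA",
}

def policy_under_test(event: str) -> str:
    # Stand-in for the AI system being evaluated in the sandbox.
    learned = {
        "port scan detected": "alert operator",
        "malware signature match": "quarantine file",
        "login burst from new region": "lock all accounts",  # overreaction
    }
    return learned.get(event, "no action")

for event, expected in BASELINE.items():
    actual = policy_under_test(event)
    status = "ok" if actual == expected else f"UNEXPECTED (wanted '{expected}')"
    print(f"{event}: '{actual}' -> {status}")
```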
Training and deliberate design: Published frameworks covering both the social and the technical impacts of AI should be made available to everyone.
The promise of AI is real, but the potential dangers are just as real. As has been the case with countless other advances, countermeasures will reveal themselves, and the benefits of the “new” will ultimately overtake the risk. This, too, will be the case with AI.
About the Author
Jim Carnes is the Chief Security Architect of Ciena, a supplier of telecommunications networking equipment, software, and services.