You don’t need to buy into the notion that artificial intelligence (AI) is a so-called “existential threat” to recognize that the technology has its downsides.
Some of AI’s risks may stem from design limitations in a specific build-out of the technology. Others may be due to inadequate runtime governance over live AI applications. Still others may be intrinsic to the technology’s inscrutable “black-box” complexity. And let’s not forget the trend toward AI’s weaponization, which poses an existential threat any way you look at it.
One of the most vibrant fields of high-tech research is what’s often called “AI safety” (or, alternatively, “friendly AI” or “AI risk management”). Generally, AI safety is concerned with the myriad ways in which the technology may adversely affect society. The AI safety community is developing technological, procedural, regulatory, and other guardrails to mitigate the most worrisome threats.
As a mainstream preoccupation, AI safety has become inescapable in the popular press, the blogosphere, and technical journals. It is now a popular topic on the main stage at tech conferences. AI safety researchers can tap into a growing pool of grants that fund innovative approaches for addressing the problem. Some of the research money comes from the same foundations that are addressing other types of existential threats, including global warming, nuclear weapons, and biotechnology. Research is coming from across the AI community, from institutes around the globe, and from big technology companies. Among the most noteworthy AI safety research initiatives is a nonprofit sponsored by Elon Musk and other Silicon Valley movers and shakers.
Key AI safety research topics include the following:
· Can we prevent AI from invading people’s privacy?
· Can we eliminate socioeconomic biases that may be baked into AI-driven applications?
· Can we engineer AI algorithms so that there’s always a clear indication of human accountability, responsibility, and liability for their outcomes?
· Can we build ethical and moral principles into AI algorithms so that they factor the full range of human considerations into decisions that may have life-or-death consequences?
· Can we automatically align AI applications with stakeholder values, or at least build in the ability to compromise in exceptional cases, thereby preventing the emergence of rogue bots in autonomous decision-making scenarios?
· Can we throttle AI-driven decision-making in circumstances where the uncertainty is too great to justify autonomous actions? (A minimal sketch of this idea follows this list.)
· Can we institute fail-safe procedures so that humans may take back control when automated AI applications reach the limits of their competency?
· Can we protect AI applications from adversarial attacks that are designed to exploit vulnerabilities in their underlying statistical algorithms?
· Can we design AI algorithms that fail gracefully, rather than catastrophically, when the data they encounter departs significantly from the circumstances for which they were trained?
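To make the “throttling” idea concrete, here is a minimal sketch, in Python, of one way an application might gate autonomous action on a model’s confidence and hand control back to a person when certainty is lacking. Every name in it (predict_proba, escalate_to_human, apply_decision, the threshold value) is an illustrative placeholder rather than any particular vendor’s API, and a production system would need far more than a single confidence check.

# A minimal sketch of uncertainty-gated decision-making: the model's
# prediction is acted on autonomously only when its confidence clears a
# threshold; otherwise the case is deferred to a human reviewer.
# All names here are hypothetical stand-ins, not a real library's API.

CONFIDENCE_THRESHOLD = 0.95  # tuned per application and risk tolerance

def decide(model, case, escalate_to_human, apply_decision):
    """Act autonomously only when the model is sufficiently certain."""
    probabilities = model.predict_proba(case)  # e.g., per-class probabilities
    confidence = max(probabilities)
    if confidence < CONFIDENCE_THRESHOLD:
        # Uncertainty is too great to justify autonomous action:
        # hand control back to a person (the fail-safe path).
        return escalate_to_human(case, probabilities)
    return apply_decision(case, probabilities)

The interesting design choice is where to set the threshold: too low and the system acts on shaky predictions, too high and it escalates so often that the automation loses its value.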
AI safeguards will almost certainly find their way into future waves of commercial devices, applications, and cloud services. AI safety is also the focus of a growing curriculum that’s essential study for the next generation of data scientists and other application developers.
But we’d be naïve to imagine that society can ever fully protect itself from all the adverse consequences that may befall us from our AI inventions. No matter how smart humanity becomes in perfecting the state of the art in AI safety, we’re not likely to rid ourselves entirely of algorithmic insensitivity. If nothing else, the probabilistic underpinnings of AI — along with its staggering complexity, versatility, and autonomy — practically guarantee that its behavior can never be entirely predicted or controlled in advance in every real-world circumstance.
As AI remakes the human experience, we’ll have to revisit and recalibrate its guardrails to keep its worst tendencies in check.