As autonomous and intelligent systems find their way into a rapidly expanding number of applications, many business and technology leaders are beginning to ask the same question: Can you trust them?
Although AI technology has been around for decades, many of its current applications are new and need time to mature, observed Deborah Adleman, director and U.S. and Americas data protection leader for global business consulting firm EY. Adleman will lead the session "Can You Trust Your Autonomous and Intelligent Systems?" at Interop on May 22 from 2:30 pm to 3:20 pm.
Ethical considerations, more than anything else, can raise issues that have historically not been mainstream concerns for technology, including moral behavior, fairness, bias, transparency and respect, Adleman warned. "Other social considerations include workforce disruption, skills retraining, discrimination and environmental effects," she added.
Understanding the risks
Organizations need to understand the full spectrum of risk AI can introduce and establish appropriate continuous monitoring mechanisms, Adleman said. "AI developers and operators also need to recognize that the threats from AI go beyond just operational challenges, and may also lead to customer dissatisfaction, negative media attention, adversarial attacks, employee churn, regulatory scrutiny and reduced profitability," she explained.
Governments and regulators are just now beginning to work closely with industry to understand exactly how AI is being developed and used, as well as the applicability of current laws and regulations to the technology. "They recognize their role to protect the public, but also want to make sure that they don’t stifle innovation," Adleman said. "With the fast pace that AI is evolving, they are currently relying more heavily on guiding principles rather than prescriptive rules and have established advisory committees to further study what additional safeguards are needed."
AI systems are, by their nature, designed to operate with incomplete information and to make a prediction based on the information available. "Just like humans, they will sometimes make a mistake," Adleman observed. "But with the right monitoring and corrective actions in place, these errors can be detected and corrected in a timely manner." Adleman also feels that the importance of AI monitoring systems cannot be overstated. "So, if the right conditions are in place, including that the AI is designed and trained appropriately for the goals it's designed to achieve, we at EY do believe it’s possible to create trustworthy AI systems."
More work ahead
While there's a consensus across industry, governments, academia and the general public that AI needs to be made ethical and trustworthy, AI development is currently outpacing the governance structures needed to ensure that the technology is transparent, unbiased, accurate, secure, and auditable, Adleman said. "Further work is required to enhance existing governance models to address the broader social, regulatory, reputational and ethical implications of AI," she noted.
Check out Deborah Adleman's session at Interop 2019, "Can You Trust Your Autonomous and Intelligent Systems?" at the Mirage in Las Vegas.
John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic ...