April 30, 2019
Deborah Adleman, EY
As autonomous and intelligent systems find their way into a rapidly expanding number of applications, many business and technology leaders are beginning to ask the same question: Can you trust them?
Although AI technology has been around for decades, many of its current applications are new and need time to mature, observed Deborah Adleman, director and U.S. and Americas data protection leader for global business consulting firm EY. Adleman will lead the session "Can You Trust Your Autonomous and Intelligent Systems?" at Interop on May 22 from 2:30 pm to 3:20 pm.
More than anything else, ethical considerations raise issues that have historically not been mainstream concerns for technology, including moral behavior, fairness, bias, transparency and respect, Adleman warned. "Other social considerations include workforce disruption, skills retraining, discrimination and environmental effects," she added.
Understanding the risks
Organizations need to understand the full spectrum of risk AI can introduce and establish appropriate continuous monitoring mechanisms, Adleman said. "AI developers and operators also need to recognize that the threats from AI go beyond just operational challenges, and may also lead to customer dissatisfaction, negative media attention, adversarial attacks, employee churn, regulatory scrutiny and reduced profitability," she explained.
Governments and regulators are just now beginning to work closely with industry to understand exactly how AI is being developed and used, as well as the applicability of current laws and regulations to the technology. "They recognize their role to protect the public, but also want to make sure that they don’t stifle innovation," Adleman said. "With the fast pace that AI is evolving, they are currently relying more heavily on guiding principles rather than prescriptive rules and have established advisory committees to further study what additional safeguards are needed."
AI systems are, by their nature, designed to operate with incomplete information and to make a prediction based on the information available. "Just like humans, they will sometimes make a mistake," Adleman observed. "But with the right monitoring and corrective actions in place, these errors can be detected and corrected in a timely manner." Adleman also feels that the importance of AI monitoring systems cannot be overstated. "So, if the right conditions are in place, including that the AI is designed and trained appropriately for the goals it's designed to achieve, we at EY do believe it’s possible to create trustworthy AI systems."
More work ahead
While there's a general consensus across industry, governments, academia and the general public that AI needs to be made ethical and trustworthy, AI development is currently outpacing the governance structures needed to ensure that the technology is transparent, unbiased, accurate, secure, and auditable, Adleman said. "Further work is required to enhance existing governance models to address the broader social, regulatory, reputational and ethical implications of AI," she noted.
Check out Deborah Adleman's session at Interop 2019, Can You Trust Your Autonomous and Intelligent Systems? at the Mirage in Las Vegas.
About the Author(s)
Technology Journalist & Author
John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic Design. He has also written columns for The Economist's Business Intelligence Unit and PricewaterhouseCoopers' Communications Direct. John has authored several books on business technology topics. His work began appearing online as early as 1983. Throughout the 1980s and 90s, he wrote daily news and feature articles for both the CompuServe and Prodigy online services. His "Behind the Screens" commentaries made him the world's first known professional blogger.