AI Regulation: Has the Time Arrived? - InformationWeek


Commentary
2/24/2020 08:00 AM
John Edwards

Most of the world's leading democracies want to investigate the impact of AI's exponential growth. The US isn't one of them.



Image: Sdecoret - stock.adobe.com

Is artificial intelligence getting too smart (and intrusive) for its own good? A growing number of nations have concluded that it's time to take a close look at AI's impact on an array of critical issues, including privacy, security, human rights, crime, and finance.

A proposal for an international oversight panel, the Global Partnership on AI, already has the support of six members of the Group of Seven (G7), an international organization composed of nations with the largest and most advanced economies. The G7's dominant member, the United States, remains the only holdout, claiming that regulation could hamper the development of AI technologies and hurt US businesses.

The case for regulation

The Global Partnership on AI and the OECD's G20 AI principles represent a good first step toward building a worldwide AI regulatory structure, noted Robert L. Foehl, an executive-in-residence for business law and ethics at Ohio University. "However, they also illustrate the challenges in developing over-arching, comprehensive regulation in this area," he added.

Robert Foehl

The US has taken the position that the Global Partnership on AI, as envisioned by its proponents, would be overly bureaucratic and stifling to AI innovation and development. Foehl, however, isn't surprised that any attempt at regulating AI will encounter at least some resistance. "It's an enormous challenge for governments to wrest themselves away from thinking and acting primarily in terms of shorter-term economic advantages for their particular country to thinking and acting for the benefit of humanity as a whole," he observed. "We have seen this previously with the issue of global climate change."

Chris McClean, global lead for digital ethics at Avanade, a joint venture between Microsoft and Accenture offering AI and other business services, believes that any technology that impacts mental and physical health, safety, education, financial well-being, and access to opportunity requires some form of government oversight. "The debate should only be about the nature of regulation," he stated.

Conflicting views

Regulating AI while simultaneously supporting an innovation-rich environment promises to be a delicate balancing act. "Lawmakers must be careful not to over-legislate and to allow for innovation and advancements in AI," said Attila Tomaschek, a digital privacy expert at ProPrivacy.com, a privacy education and review website. "However, protecting the public good is obviously a top priority, and regulations must be robust enough to ensure that that priority is successfully achieved, all while working to avoid establishing insurmountable barriers to innovation and AI development."

Chris McClean

Kimberly Nevala, a strategic advisor at analytics software and service provider SAS, also believes that AI innovation shouldn't take a back seat to regulation. "Done properly, regulation provides the guardrails, common rules of the road, and mechanisms to identify and respond when solutions are in danger of veering out of accepted boundaries," she explained. "Regulations also serve as an initial brake, forcing conversations about ethics, appropriate use, and so on early in the process when it's easier to course correct."

Braden Perry, a litigation, regulatory and government investigations attorney with law firm Kennyhertz Perry, believes that some form of regulation is inevitable. Exactly how government mandates will affect the AI industry depends largely on the course regulators decide to take. "A hasty attempt to rein in every potential for wrongdoing would likely fail and cause more damage than good to the technology," he said.

Kimberly Nevala

Regulation risks

Karen Silverman, a partner at international business law firm Latham & Watkins, noted that regulation risks include stifling beneficial innovation, picking business winners and losers without any basis, and making it more difficult for start-ups to succeed. She added that ineffective, erratic, and uneven regulatory efforts or enforcement may also lead to unintended ethics issues. "There's some work [being done] on transparency and disclosure standards, but even that is complicated, and ... to get beyond broad principles, needs to be done on some more industry- or use-case specific basis," she said. "It's probably easiest to start with regulations that take existing principles and read them onto new technologies, but this will leave the challenge of regulating the novel aspects of the tech, too."

Braden Perry

On the other hand, a well-designed regulatory scheme that zeroes in on bad actors and doesn't overregulate the technology would likely mark a positive change for AI and its supporters, Perry said. "This would require a collaborative effort between legislators, regulators, and the industry," he noted.

To protect their interests, AI developers would be wise to adopt protective measures before regulations are thrust upon them. Self-regulation, as opposed to government intervention, is always better, Perry observed. "The industry certainly needs to take regulation seriously," he said. "The last thing any industry wants is regulation by enforcement in which agencies decide that some practices should have been illegal and, instead of declaring it illegal from now on through rulemaking, go back and prosecute the people who were doing it before."

Attila Tomaschek

Yet another point to consider is the impact additional oversight and tighter rules would have on startups. "The tech giants already have huge legal teams, internal auditors, and other compliance infrastructure [assets] to meet new demands," McClean explained. "If new regulations place the same level of burden on companies, regardless of their size or influence, it could effectively stifle competition and innovation."

For more on AI and analytics regulation, ethics, and concerns, check out these articles:

AI & Machine Learning: An Enterprise Guide

Recognize Behavioral Tracking’s Opportunities and Pitfalls

Harnessing Big Data: Can Our Laws and Policies Keep Up?

The Facial Recognition Debate

Bias: AI's Achilles' Heel

John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic ...