Is artificial intelligence getting too smart (and intrusive) for its own good? A growing number of nations have concluded that it's time to take a close look at AI's impact on an array of critical issues, including privacy, security, human rights, crime, and finance.
A proposal for an international oversight panel, the Global Partnership on AI, already has the support of six members of the Group of Seven (G7), an international organization composed of the nations with the largest and most advanced economies. The G7's dominant member, the United States, remains the only holdout, claiming that regulation could hamper the development of AI technologies and hurt US businesses.
The case for regulation
The Global Partnership on AI, along with the OECD-backed G20 AI principles, represents a good first step toward building a worldwide AI regulatory structure, noted Robert L. Foehl, an executive-in-residence for business law and ethics at Ohio University. "However, it also illustrates the challenges in developing over-arching, comprehensive regulation in this area," he added.
The US has taken the position that the Global Partnership on AI, as envisioned by its proponents, would be overly bureaucratic and stifling to AI innovation and development. Foehl, however, isn't surprised that any attempt at regulating AI will encounter at least some resistance. "It's an enormous challenge for governments to wrest themselves away from thinking and acting primarily in terms of shorter-term economic advantages for their particular country to thinking and acting for the benefit of humanity as a whole," he observed. "We have seen this previously with the issue of global climate change."
Chris McClean, global lead for digital ethics at Avanade, a joint venture between Microsoft and Accenture offering AI and other business services, believes that any technology that impacts mental and physical health, safety, education, financial well-being, and access to opportunity requires some form of government oversight. "The debate should only be about the nature of regulation," he stated.
Regulating AI while simultaneously supporting an innovation-rich environment promises to be a delicate balancing act. "Lawmakers must be careful not to over-legislate and to allow for innovation and advancements in AI," said Attila Tomaschek, a digital privacy expert at ProPrivacy.com, a privacy education and review website. "However, protecting the public good is obviously a top priority, and regulations must be robust enough to ensure that that priority is successfully achieved, all while working to avoid establishing insurmountable barriers to innovation and AI development."
Kimberly Nevala, a strategic advisor at analytics software and service provider SAS, also believes that AI innovation shouldn't take a back seat to regulation. "Done properly, regulation provides the guardrails, common rules of the road, and mechanisms to identify and respond when solutions are in danger of veering out of accepted boundaries," she explained. "Regulations also serve as an initial brake, forcing conversations about ethics, appropriate use, and so on early in the process when it's easier to course correct."
Braden Perry, a litigation, regulatory, and government investigations attorney with law firm Kennyhertz Perry, believes that some form of regulation is inevitable. Exactly how government mandates will affect the AI industry depends largely on the course regulators decide to take. "A hasty attempt to rein in every potential for wrongdoing would likely fail and cause more damage than good to the technology," he said.
Karen Silverman, a partner at international business law firm Latham & Watkins, noted that the risks of regulation include stifling beneficial innovation, arbitrarily picking business winners and losers, and making it more difficult for start-ups to succeed. She added that ineffective, erratic, or uneven regulatory efforts and enforcement may also lead to unintended ethics issues. "There's some work [being done] on transparency and disclosure standards, but even that is complicated, and ... to get beyond broad principles, needs to be done on some more industry- or use-case specific basis," she said. "It’s probably easiest to start with regulations that take existing principles and read them onto new technologies, but this will leave the challenge of regulating the novel aspects of the tech, too."
On the other hand, a well-designed regulatory scheme that zeroes in on bad actors and doesn't overregulate the technology would likely mark a positive change for AI and its supporters, Perry said. "This would require a collaborative effort between legislators, regulators, and the industry," he noted.
To protect their interests, AI developers would be wise to adopt protective measures before regulations are thrust upon them. Self-regulation, as opposed to government intervention, is always better, Perry observed. "The industry certainly needs to take regulation seriously," he said. "The last thing any industry wants is regulation by enforcement in which agencies decide that some practices should have been illegal and, instead of declaring it illegal from now on through rulemaking, go back and prosecute the people who were doing it before."
Yet another point to consider is the impact additional oversight and tighter rules would have on startups. "The tech giants already have huge legal teams, internal auditors, and other compliance infrastructure [assets] to meet new demands," McClean explained. "If new regulations place the same level of burden on companies, regardless of their size or influence, it could effectively stifle competition and innovation."
John Edwards is a veteran business technology journalist. His work has appeared in The New York Times, The Washington Post, and numerous business and technology publications, including Computerworld, CFO Magazine, IBM Data Management Magazine, RFID Journal, and Electronic ...