What Just Broke: Is Self-Regulation the Answer to AI Worries?
A tale of two approaches to oversight and governance of rapidly growing generative AI, based on comments from Sam Altman and Dr. Michio Kaku.
We face a multitude of roads ahead when it comes to generative artificial intelligence, some with highly restrictive guardrails and others with nothing to prevent calamity. This episode is a quick discussion of two schools of thought on regulating AI as it proliferates across the public and private sectors.
Last week, a pair of events unfolded that spoke to the regulatory debate over the implementation of generative AI. Sam Altman, CEO of OpenAI, testified before Congress about the implications of AI continuing to spread and grow without regulation.
Then, at a private event held in New York, theoretical physicist Michio Kaku gave his perspective on who should be involved in establishing guidelines for AI.
“We want the industry to be self-policing,” Kaku said. “We don’t want bureaucrats to come in and grandstand so they can get re-elected.”
Altman, in contrast, called for regulation from lawmakers. My colleague Shane Snider’s story reports that Altman wanted lawmakers to step in and establish parameters to stop AI creators from harming the world. Altman is quoted as saying, “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.”
Is industry-only self-policing the way to go, or is a public-private approach to AI regulation what we need? Should it be entirely up to lawmakers and the agencies they assign to the task?
What to Read Next:
OpenAI CEO Sam Altman Pleads for AI Regulation
What Just Broke?: Should AI Come with Warning Labels?
What Just Broke?: Digital Ethics in the Time of Generative AI