As Regulators Escalate Oversight, Who Are the AI Heroes?

More laws and scrutiny of AI may be on the way, but that does not mean every implementation of the technology is driven by nefarious intent.

Joao-Pierre S. Ruth, Senior Editor

October 14, 2024

With concerns about and regulation of artificial intelligence on the rise, it can be easy to focus on questionable practices associated with the recent boom in AI technology. Yet there are ways to further AI’s use that follow best practices, without putting privacy at risk or stirring other fears.

Lawmakers at the state, national, and international levels continue to draft policy meant to ensure public safety, protect privacy and ownership of original content, fight misinformation, and address a plethora of other concerns AI now raises.

But does AI have to be “the bad guy” to be innovative and useful? How much more scrutiny is on the way for AI? Are there uses for AI that do not raise concerns of risk to the public, disruption of society, or harm to creatives behind original content?

In this episode of DOS Won’t Hunt, Octavian Udrea, chief scientist with Code Metal; Kjell Carlsson, head of AI strategy with Domino Data Lab; and Sohrob Kazerounian, distinguished AI researcher with Vectra AI, share their perspectives on AI development and use with ethics and regard for the public good in mind.

Listen to the full podcast here.

About the Author

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.

