Voluntary, private-sector guidelines on the ethical use of generative AI are emerging, but no US government body currently regulates the technology.

Joao-Pierre S. Ruth, Senior Editor

March 14, 2023


The growing potential of generative AI is clouded by its possible harms, prompting some calls for regulation.

ChatGPT and other generative AI tools have taken center stage in innovation, with companies racing to introduce their own twists on the technology. Questions about the ethics of AI have escalated in turn, given the ways the technology could spread misinformation, support hacking attempts, or raise doubts about the ownership and validity of digital content.

The issue of ethics and AI is not new, according to Cynthia Rudin, the Earl D. McLean, Jr. Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics at Duke University.

She says AI recommender systems have already been blamed for such ills as contributing to depression among teenagers, amplifying the hate speech that spurred the 2017 Rohingya massacre in Myanmar, spreading vaccine misinformation, and circulating the propaganda that contributed to the January 6, 2021, insurrection in the United States.

“If we haven’t learned our lesson about ethics by now, it’s not going to be when ChatGPT shows up,” says Rudin.

How the Private Sector Approaches Ethics in AI

Companies might claim they use AI ethically, she says, but more could be done. For example, Rudin says companies tend to claim that putting limits on speech that contributes to human trafficking or vaccine misinformation would also eliminate content the public would not want removed, such as critiques of hate speech or retellings of someone’s experiences confronting bias and prejudice.

“Basically, what the companies are saying is that they can’t create a classifier, like they’re incapable of creating a classifier that will accurately identify misinformation,” she says. “Frankly, I don’t believe that. These companies are good enough at machine learning that they should be able to identify what substance is real and what substance is not. And if they can’t, they should put more resources behind that.”
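The classifier Rudin refers to is a standard machine learning task: scoring a piece of text for how likely it is to be misinformation. As a rough, illustrative sketch only -- the library choice, placeholder posts, and labels below are assumptions for the example, not anything the companies described -- a minimal text classifier might look like this:

```python
# Minimal sketch of the kind of text classifier Rudin describes,
# assuming a labeled corpus of posts (the examples here are placeholders).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Placeholder training data: post text paired with a misinformation label.
posts = [
    "Vaccines contain microchips that track you",        # flagged
    "The health agency updated its booster guidance",    # not flagged
    "This one pill cures every known disease",           # flagged
    "Local clinic extends weekend vaccination hours",     # not flagged
]
labels = [1, 0, 1, 0]  # 1 = misinformation, 0 = legitimate

# TF-IDF features feeding a logistic regression classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(posts, labels)

# Score a new post; real systems would add human review before any removal.
print(model.predict_proba(["Miracle cure suppressed by doctors"])[0][1])
```

The pipeline itself is simple; the hard part, and the crux of the dispute Rudin describes, is assembling labeled data and handling edge cases such as distinguishing hate speech from critiques of hate speech.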

Rudin’s top concerns about AI include the circulation of misinformation, ChatGPT being put to work to help terrorist groups recruit and fundraise on social media, and facial recognition being paired with pervasive surveillance. “I’m on the side of thinking we need to regulate AI,” she says. “I think we should develop something like the Department of Transportation but for AI.”

She is keeping her eye on Rep. Ted W. Lieu’s efforts, which include a push in Congress for a nonpartisan commission to provide recommendations on how to regulate AI.

For its part, Salesforce recently published its own set of guidelines, which lays out the company’s intent to focus on accuracy, safety, honesty, empowerment, and sustainability in the development of generative AI. It is an example of the private sector drafting a roadmap for itself in the absence of a cohesive industry consensus or national regulations to guide the implementation of the fast-growing technology.

“Because this is so rapidly evolving, we continue to add more details to it over time,” says Kathy Baxter, principal architect of ethical AI with Salesforce. She says meetings and exercises are held with each team to foster an understanding of the meaning behind the guidelines.

Baxter says there is a community of her peers from other companies that gets together for workshops with speakers from industry, nonprofits, academia, and government to talk about such issues and how the organizations handle them. “We all want good, safe technology,” she says.

Sharing Perspectives on AI Ethics

Salesforce is also sharing its perspective on AI with its customers, including teaching sessions on data ethics and AI ethics. “We first introduced our guidelines for how we’re building generative AI responsibly,” Baxter says, “but then we followed up with, ‘What can you do?’”

The first recommendation was to go through all the data and documents that will be used to train the AI to ensure they are accurate and up to date.
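One narrow, illustrative slice of such an audit could be automated: flagging source documents whose last-modified dates fall outside an acceptable window. The directory layout and 18-month threshold below are assumptions for the sketch, and accuracy itself still requires human review:

```python
# Illustrative sketch: flag training documents that look stale by modification date.
# The folder name and 18-month threshold are assumptions for the example.
from datetime import datetime, timedelta
from pathlib import Path

CORPUS_DIR = Path("training_docs")    # hypothetical folder of source documents
MAX_AGE = timedelta(days=548)         # roughly 18 months

stale = []
for doc in CORPUS_DIR.glob("**/*.txt"):
    modified = datetime.fromtimestamp(doc.stat().st_mtime)
    if datetime.now() - modified > MAX_AGE:
        stale.append((doc, modified.date()))

# Surface candidates for human review; freshness alone does not prove accuracy.
for path, date in stale:
    print(f"Review: {path} (last modified {date})")
```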

“For the EU AI Act, they’re now talking about adding generative AI into their description of general-purpose AI,” she says. “This is one of the problems when you’ve got these massive, uber sets of regulation: it takes a long time for everybody to come to an agreement. The technology is not going to wait for you. The technology just keeps on evolving and you’ve got to be able to respond and keep updating those regulations.”

The National Institute of Standards and Technology (NIST), Baxter says, is an important organization in this space, with efforts such as the AI Risk Management Framework team, which she is volunteering her time to be a part of. “Right now, that framework isn’t a standard, but it could be,” Baxter says.

One element she believes should be brought to the discussion on AI ethics is datasets. “The dataset that you train those foundation models on, most often they are open-source datasets that have been compiled over the years,” Baxter says. “They haven’t been curated to pull out bias and toxic elements.” That bias can then be reflected in the generated results.
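What that curation could look like in practice varies widely. As a deliberately simplified sketch -- the blocklist approach and placeholder patterns are illustrative assumptions, and real pipelines lean on trained toxicity classifiers and human review rather than keyword matching -- a pre-training screen might separate documents that hit known toxic patterns from the rest of the corpus:

```python
# Simplified sketch of screening an open-source text corpus before training.
# The pattern list is a placeholder; production pipelines use trained toxicity
# classifiers and human reviewers rather than simple keyword matching.
import re

BLOCKLIST = [r"\bslur_placeholder\b", r"\bthreat_placeholder\b"]  # illustrative only
PATTERNS = [re.compile(p, re.IGNORECASE) for p in BLOCKLIST]

def screen_corpus(documents):
    """Split documents into (kept, flagged_for_review) based on blocklist hits."""
    kept, flagged = [], []
    for doc in documents:
        if any(p.search(doc) for p in PATTERNS):
            flagged.append(doc)
        else:
            kept.append(doc)
    return kept, flagged

corpus = [
    "An ordinary news article about local elections.",
    "A post containing a slur_placeholder aimed at a group.",
]
kept, flagged = screen_corpus(corpus)
print(len(kept), "kept;", len(flagged), "flagged for human review")
```

A keyword screen like this is crude and misses context, which is why curation at the scale of foundation-model datasets remains the unsolved problem Baxter points to.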

Can Policy Resolve Legacies of Inequity Baked into AI?

“Areas of ethical concern related to AI, generative AI -- there is the classic and not well-solved-for challenge of structural bias,” says Lori Witzel, TIBCO Software’s director of thought leadership, referring to bias in systems by which training data is gathered and aggregated. This includes historic legacies that can surface in the training data.

The composition of the teams doing development work on the technology or on the algorithms could also introduce bias, she says. “Maybe not everybody was in the room on the team who should have been in the room,” Witzel says, referring to exclusions that can mirror societal inequities and leave out certain voices.

There are also issues with creator and intellectual property rights when content is produced by generative AI that was trained on the intellectual property of others. “Who owns the output? How did the IP get into the system to allow the technology to build that?” Witzel asks. “Did anybody need to give permission for that data to be fed into the training system?”

There is obvious excitement about this technology and where it might lead, she says, but there can be a tendency to overpromise on what may be possible versus what will be feasible. Questions of transparency and honesty in the midst of such a hype cycle remain to be answered as technologists forge ahead with generative AI’s potential. “Part of the fun and scariness of our cultural moment is the pace of technology is outstripping our ability to respond societally with legal frameworks or accepted boundaries,” Witzel says.


About the Author

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Follow him on Twitter: @jpruth.

