AI’s Equality Problem Needs a Solution: Experts

While lauding AI’s potential for advancements in all areas of life, one prominent group of experts is trying to raise awareness about problems with equality.

Shane Snider, Senior Writer, InformationWeek

July 27, 2023

3 Min Read

Leading experts in the fields of artificial intelligence and data ethics on Wednesday said bias and equality are top concerns as companies and organizations grapple with the race to adopt new and powerful AI tools.

Miriam Vogel, president and CEO of EqualAI, hosted a press conference to discuss new developments in AI with Cathy O’Neil, CEO and founder of AI risk management firm ORCAA, and Reggie Townsend, vice president of the Data Ethics Practice at SAS Institute. Vogel is also chair of the White House National AI Advisory Committee.

“We are excited about the ways that AI can and will be a powerful tool to advance our lives and our economy,” Vogel said to kick off the talk. “But we are also extremely mindful that we need to be hyper-vigilant and ensure that the AI we are supporting does not perpetuate and mass produce historical and new forms of bias and discrimination.”

Several studies (including reports from Harvard University and the National Institute of Biomedical Imaging and Bioengineering, among others) have found significant evidence of unintended racial and gender biases baked into AI models that could have a profound impact on society. These biases can affect everything from facial recognition systems used in criminal justice to AI models that infer race from medical images and then use that information in biased ways. The list goes on and on.

Four Factors for Responsible AI

Vogel said responsible AI governance comes down to four key factors: employee demand for responsible AI, brand integrity, consideration for all impacted parties (including consumers), and liability for everyone involved. “Employees don’t want to work for a company that’s doing harm,” she said.

Companies must remain vigilant to ensure that any AI tools used reflect the organization’s values, Vogel added.

According to O’Neil, care must be taken in data collection and the way AI tools are used. “The harms often do fall to the people who historically have been marginalized,” she said. “AI, even though it also helps us so much in so many ways -- it makes our lives easier and faster and more convenient, and it can make companies more efficient -- it also has this sort of underside, this dark side.”

O’Neil said that while companies may not intend to build harmful mechanisms through AI, they might believe that using third-party vendors’ technologies shields them from responsibility. “So they might feel insulated legally, even when they are not.”

AI Fixes Not ‘Rocket Science’

The ethical dilemmas do have a solution, O’Neil said. “The good news is that it actually isn’t rocket science. It’s actually possible to anticipate what the problems are, and it isn’t impossible to put up guardrails around your AI to make sure that people are protected from harm.”
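To make the idea of a “guardrail” concrete, here is a minimal, hypothetical sketch of one such check: auditing a model’s decisions for disparate outcomes across demographic groups before rollout. The group labels, sample data, and four-fifths threshold below are illustrative assumptions, not a description of any tool the panelists use.

```python
# Hypothetical sketch: a minimal pre-deployment "guardrail" that checks a
# model's decisions for disparate outcomes across demographic groups.
# Group labels, data, and threshold are illustrative only.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the favorable-decision rate for each group.

    decisions: list of (group, outcome) pairs, where outcome is 1 for a
    favorable decision (e.g., loan approved) and 0 otherwise.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the commonly cited "four-fifths rule")."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Illustrative audit log: model decisions tagged with an applicant group.
audit_log = [("group_a", 1)] * 80 + [("group_a", 0)] * 20 \
          + [("group_b", 1)] * 50 + [("group_b", 0)] * 50

flagged = disparate_impact_check(audit_log)
if flagged:
    # In a real pipeline, this result could block rollout or trigger review.
    print("Potential disparate impact detected:", flagged)
```

A check like this is deliberately simple; it only anticipates one class of problem (unequal selection rates), but it illustrates O’Neil’s point that basic protections can be automated rather than left to chance.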

Several prominent tech companies leading AI development last week agreed to voluntary guardrails announced by the Biden Administration. Seven companies pledged safeguards: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. Vogel said the pledges were a good first step, but more work needs to be done to establish firm rules. While one of the seven safeguarding commitments included research on potential societal risks, equality and bias concerns were not specifically mentioned.

Fielding a question from InformationWeek, SAS Institute’s Townsend said it would be important for any regulatory effort to include input from marginalized groups. “These organizations … need to ensure there’s adequate representation of voices at the table. To make sure there are people at the table who have lived experiences -- and I’m sure that every one of these organizations has those folks on staff. And I would love for them to amplify those voices.”


About the Author

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology, and much more. He was a reporter for the Triangle Business Journal and the Raleigh News and Observer, and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
