The State of AI Bias and How Businesses Can Reduce Risk

Here’s a look at the trust aspect of AI, and how business and IT leaders implementing AI projects can prevent the consequences of bias.

Ravi Mayuram, CTO, Couchbase

April 28, 2023


Artificial intelligence continues to make headlines as more people discover the capabilities of tools like OpenAI’s DALL-E 2 and ChatGPT. These futuristic tools can take a prompt and return an intelligent textual or visual response.

While it may seem like magic, it’s important to understand that these tools aren’t perfect and come with their own set of limitations and risks. Thus, the consequences that come with the democratization of AI should be carefully considered.

From a business perspective, AI adoption continues to grow. AI innovations have delivered significant benefits to organizations: streamlining processes, improving efficiencies, and augmenting human intelligence. According to Forrester, “rapid progress in areas of fundamental AI research, novel applications of existing models, the adoption of AI governance and ethics frameworks and reporting, and many more developments will make AI an intrinsic part of what makes a successful enterprise.”

In fact, spending on AI software is forecast to grow from $33 billion in 2021 to $64 billion in 2025, twice as fast as the overall software market.

As adoption in the enterprise continues to rise, business leaders should understand one of its growing concerns: AI bias. Algorithmic bias occurs when human biases make their way into the data and models behind algorithms. More than 50% of organizations are concerned about the potential for AI bias to hurt their business, yet nearly three-quarters of businesses haven’t taken steps to reduce bias in their datasets. This article examines the trust aspect of AI and explores how business and IT leaders implementing AI projects can prevent the consequences of bias.

The Impact of AI Bias

Many believe that ChatGPT has the potential to erode Google’s mindshare. In response, Google CEO Sundar Pichai noted that customers trust Google’s search results and that “you can imagine for search-like applications, the factuality issues are really important and for other applications, bias and toxicity and safety issues are also paramount.”

For example, it was revealed that phrasing a question to ChatGPT in a certain way could elicit highly offensive and biased output (e.g., ChatGPT ranked who should be tortured based on country of origin).

On the enterprise side, a biased dataset can do real damage. It can lead to poor decision-making based on skewed or harmful predictions, and even to legal ramifications given the ethical sensitivities around AI bias. Think of AI-infused hiring practices that are biased against female applicants. The fallout can also include damage to a company’s reputation and credibility, as well as lost opportunities due to inaccurate forecasts.

With 36% of businesses reporting that AI bias negatively impacted them, resulting in lost revenue and customers, it’s not surprising that a loss of customer trust was the main concern reported around the risk of AI bias (56%).

What Is the Issue With AI Bias?

When AI produces offensive results, it can usually be attributed to models trained on datasets containing countless examples of questionable and problematic content. And if you look at the history of the internet, you’ll know there is a lot of this type of content, including misinformation and hate speech. This is the data on which many popular AI models are trained. These bots then generate even more data, further polluting the source.

We are going to end up in a world where we rate pre- and post-AI-era data differently. But where do the biases begin? They can typically be traced back to biased datasets, or to datasets that underrepresent or ignore whole populations. These biases in the sample sets on which AI models are trained are what lead to untrustworthy AI.
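
To make the underrepresentation point concrete, here is a minimal Python sketch of the kind of representation audit a data scientist might run before training. The record format, group field, and reference shares are hypothetical stand-ins; a real audit would compare against census or domain-specific baselines.

```python
# Hypothetical representation audit: compare how often each group appears
# in a training sample against a reference population, and flag groups
# that fall below a chosen tolerance. All names and numbers are illustrative.
from collections import Counter

def representation_gaps(records, group_key, reference, tolerance=0.5):
    """Flag groups whose share of the sample falls below
    `tolerance` times their share of the reference population."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    flagged = {}
    for group, expected_share in reference.items():
        observed_share = counts.get(group, 0) / total
        if observed_share < tolerance * expected_share:
            flagged[group] = (observed_share, expected_share)
    return flagged

# Illustrative sample in which group "B" is badly underrepresented.
sample = [{"group": "A"}] * 900 + [{"group": "B"}] * 100
reference = {"A": 0.6, "B": 0.4}  # assumed population shares

for group, (seen, expected) in representation_gaps(sample, "group", reference).items():
    print(f"{group}: {seen:.0%} of sample vs. {expected:.0%} of population")
```

A check like this won’t catch every bias, but it surfaces the most basic failure mode, whole populations missing from the sample, before a model ever sees the data.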

How to Eliminate AI Bias

The conversation tends to focus on how to eliminate bias. That is inherently hard to solve, because fundamental human bias is baked into the question itself. If today’s AI tools can help identify bias, that alone would be a great step forward.

The bias inherent in AI systems is a combination of human bias and bias in the data itself. De-biasing humans is a superhuman effort; de-biasing AI models is a more tractable one. We need to train data scientists to curate data better and to ensure ethical practices are followed in collecting and cleansing it. They should also be custodians who preserve high-quality data.
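
As one illustration of what such a cleansing step might look like, here is a hedged Python sketch that de-duplicates records and drops entries matching a simple blocklist before training. The blocklist terms and record format are assumptions for illustration; production pipelines would rely on trained toxicity and misinformation classifiers plus provenance checks rather than keyword matching.

```python
# Hypothetical cleansing pass: drop exact duplicates (after whitespace/case
# normalization) and records that match a naive blocklist. The blocklist is
# a placeholder for real toxicity/misinformation classifiers.

BLOCKLIST = {"slur_example", "spam_example"}  # illustrative placeholder terms

def cleanse(records):
    seen = set()
    kept = []
    for text in records:
        normalized = " ".join(text.lower().split())
        if normalized in seen:
            continue  # exact duplicate after normalization
        if any(term in normalized for term in BLOCKLIST):
            continue  # matches the blocklist
        seen.add(normalized)
        kept.append(text)
    return kept

raw = ["Good record", "good   record", "this is spam_example content"]
print(cleanse(raw))  # -> ['Good record']
```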

As for the underrepresentation of people, the best solution is transparency. By ensuring data is open and available to as many data scientists as possible, we can ensure that more diverse groups of people sample the data and point out its inherent biases. Further, using these experiences, we can build AI models that train the trainer, so to speak. That would also automate the inspection itself, since it will be impossible for humans to vet data at these volumes; a sketch of one such automated check follows.
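
Here is a minimal sketch, assuming a model’s predictions and group labels are available side by side, of one such automated check: compute each group’s rate of positive outcomes and report the largest gap (a simple demographic-parity test, one of several fairness metrics an automated inspector might track). The groups, predictions, and any alert threshold are illustrative.

```python
# Hypothetical automated bias check: measure each group's rate of positive
# model outcomes and report the largest gap (demographic parity difference).

def demographic_parity_gap(predictions, groups):
    totals = {}
    for pred, group in zip(predictions, groups):
        count, positives = totals.get(group, (0, 0))
        totals[group] = (count + 1, positives + pred)
    rates = {g: pos / count for g, (count, pos) in totals.items()}
    return rates, max(rates.values()) - min(rates.values())

# Illustrative hiring-style predictions (1 = advance, 0 = reject).
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates, gap = demographic_parity_gap(preds, groups)
print(rates)               # {'A': 0.8, 'B': 0.2}
print(f"gap = {gap:.2f}")  # a gap this large would warrant human review
```

Demographic parity is only one lens; depending on the application, an automated reviewer might also track equalized odds or calibration across groups. The point is that once such checks are codified, they scale to data volumes no human team could inspect.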

Moving Forward

Just as academia, industry, and governments came together to accelerate the development of coronavirus vaccines by sharing knowledge and reducing red tape, we can solve this issue as well. That is the level of urgency and collaboration required. We have shown that we can do this already. My unbiased opinion is that we can do it again!

About the Author(s)

Ravi Mayuram

CTO, Couchbase

Ravi Mayuram is CTO of Couchbase (NASDAQ: BASE), provider of a leading cloud database platform for enterprise applications that 30% of the Fortune 100 depend on.

