
What We Can Do About Biased AI

Biased artificial intelligence is a real issue. But how does it occur, what are the ramifications, and what can we do about it?

Guest Commentary

June 11, 2021


Artificial intelligence is blazing new trails into many industries. With applications this varied and far-reaching, it can be worrying to learn that AI's automated processes can be biased: an AI system can absorb and reproduce its own set of prejudices.

While we currently tend to trust technology implicitly, finding out that it can be at fault is a hard pill to swallow.

Many have suggested more research into establishing a set of AI ethics: the values, principles, and techniques needed to ensure that AI systems conduct all of their processes ethically.

As with any new development, there is a period in which progress, discoveries, and rules need to be established. There is nothing new about that, but the rate at which AI is developing puts it in a class of its own. So, before you get worked up and start a revolt against the machines, let's learn more about what AI bias is.

What is AI Bias?

As we know all too well, humans can be biased. It's arguably unavoidable. Unfortunately, those biases can spill over when developers write the code for a system, and in some cases they can be amplified by the AI itself.

While bias can stem from human error, it can also come from missing or incomplete data used to train the AI.

These prejudices can also result from simple oversight, such as a system mimicking old trends, or, in some cases, from teams that aren't diverse enough to spot the issues.

A famous case in point is Amazon's biased recruiting tool.

Amazon Case Study

In 2014, Amazon set out to automate its recruiting process. As you can imagine, a company of that scale faces countless hours of resume review. Its solution was an AI program that would review job applicants' resumes and feed recruiters a score.

While this did whittle down the list, by the following year Amazon had realized there was an issue: the system was not rating female candidates equally with male candidates.

This learned behavior came down to the historical data Amazon had fed the system from the previous 10 years. Because that workforce was about 60% male, the system incorrectly inferred that the company preferred men. Once the problem was discovered, the company quickly reverted to manually reading resumes.
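To see how a model can pick up this kind of bias without anyone programming it in, here is a toy sketch. The resumes and outcomes below are entirely hypothetical (they are not Amazon's data or algorithm); the point is that a simple word-scoring model trained on a skewed history learns to penalize words that merely correlate with the under-hired group.

```python
from collections import Counter
from math import log

# Hypothetical historical screening outcomes, skewed toward male hires.
# Words that correlate with women's resumes end up tied to rejection.
historical = [
    ("captain chess club", "hired"),
    ("software engineering lead", "hired"),
    ("software project lead", "hired"),
    ("captain women's chess club", "rejected"),
    ("women's coding society member", "rejected"),
]

hired_words, rejected_words = Counter(), Counter()
for resume, outcome in historical:
    (hired_words if outcome == "hired" else rejected_words).update(resume.split())

def word_score(word):
    """Smoothed log-odds of a word appearing in hired vs. rejected resumes."""
    return log((hired_words[word] + 1) / (rejected_words[word] + 1))

def resume_score(resume):
    """Sum of per-word scores -- the 'recruiting score' of this toy model."""
    return sum(word_score(w) for w in resume.split())

# The model penalizes the word "women's" purely because of the skewed history.
print(word_score("women's"))  # negative
print(word_score("lead"))     # positive
```

Nothing in the code mentions gender; the prejudice comes entirely from the training data, which is exactly why the data-review steps below matter.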

While this illustrates how biases can creep into the systems, how do we go about laying the groundwork of establishing ethical AI systems?

What Will AI Ethics Look Like?

As you would expect, this is an all-encompassing question. Please don’t quote Asimov's three laws of robotics here.

Finding out what AI ethics look like, and how they can be built into a genuinely unbiased system, takes several nuanced steps:

1. Do in-depth reviews of data

As I mentioned, making sure your AI is trained on the right data is key. This review process should be conducted by an independent body, which in turn will create a new sphere of specialists honing their skills.
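One concrete piece of such a review is checking whether the training data represents each group fairly. A minimal sketch, with an assumed record format and an assumed 80% threshold, might look like this:

```python
from collections import Counter

def audit_representation(records, field, threshold=0.8):
    """Flag groups under-represented relative to the largest group.

    `records` is a list of dicts; `field` names the attribute to audit.
    Returns {group: share_relative_to_largest} for flagged groups.
    The field name and threshold here are illustrative assumptions.
    """
    counts = Counter(r[field] for r in records)
    largest = max(counts.values())
    return {g: n / largest for g, n in counts.items() if n / largest < threshold}

# Hypothetical training set skewed 60/40, echoing the Amazon example:
data = [{"gender": "male"}] * 60 + [{"gender": "female"}] * 40
print(audit_representation(data, "gender"))
```

A real independent review would go far deeper (label quality, proxy variables, sampling methodology), but even a check this simple would have flagged the skew in the case study above.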

2. Invest in creating a framework tailored to your industry

Brainstorming the ethical problems your industry faces, and building those lessons into the system, can help identify issues. Simply doing this can even surface steps to address the same issues in the real world.

3. Use lessons learned in other ethical industries

Some industries have already had to tackle these ethical conversations. The medical field jumps out as one where in-depth discussions about technology have already taken place. When companies cross-pollinate skills, it drives innovation.

4. Have ethical discussion and awareness drives

Simply having the conversation makes individuals aware of the issue, and hosting awareness drives helps educate people about their own prejudices, which is always a plus for humanity as a whole.

5. Monitor results

As the Amazon case study shows, the number one step in keeping AI ethical is to monitor it: always review the data going into the system and the results coming out of it.

While it’s great to establish ethics, there needs to be a proper regulatory body in place that ensures that AI doesn’t get out of hand.

Regulating AI

It shouldn't come as a surprise, but the government might be out of its depth on this one. We need professionals who can advise on and make critical regulation decisions.

This isn't the first time. The Food and Drug Administration (FDA), the Securities and Exchange Commission (SEC), and the Environmental Protection Agency (EPA) were each founded after events made it clear that someone needed to regulate the situation.

Experts in the AI community agree that a regulatory board is needed to address the biggest issues, including problems we're all familiar with, such as deepfakes and facial recognition. With deepfakes, the danger is that fabricated statements can be distributed across the globe, causing chaos.

If experts and government work together, an effective body can ensure that AI doesn't harm the general public but rather uplifts the world.

What We Can Do For a Better AI Future

AI is here to stay. We are currently living through a pivotal time. It’s growing faster every day, and the applications are getting more in-depth and expanding across more industries.

It's our duty to monitor how AI is applied in our lives, report any problems we identify, and become solution-oriented users who can offer constructive paths forward.

We’re all in this together.



Jody Glidden is the founder and CEO of Introhive. Founded in 2012, Introhive is the fastest-growing B2B relationship intelligence service and data management platform. The company was recently recognized by Deloitte's Fast 50 and Fast 500 Awards programs and was named the MarTech 2020 Breakthrough Award winner for Best CRM Innovation. Jody is an experienced business leader with start-up tenacity, public company rigor, and an innovative passion for technology. Introhive is the fourth company he's been involved in founding and building, with three successful exits including Chalk Media, icGlobal, and Scholars.com.

About the Author(s)

Guest Commentary


The InformationWeek community brings together IT practitioners and industry experts with IT advice, education, and opinions. We strive to highlight technology executives and subject matter experts and use their knowledge and experiences to help our audience of IT professionals in a meaningful way. We publish Guest Commentaries from IT practitioners, industry analysts, technology evangelists, and researchers in the field. We are focusing on four main topics: cloud computing; DevOps; data and analytics; and IT leadership and career development. We aim to offer objective, practical advice to our audience on those topics from people who have deep experience in these topics and know the ropes. Guest Commentaries must be vendor neutral. We don't publish articles that promote the writer's company or product.
