What Do We Do About Racist Machines?

We will never rid ourselves of all our biases overnight. But we can pass on a legacy in AI that is sufficiently aware of the past to foster a more just and equitable society.

Guest Commentary

December 28, 2020

Enterprise AI traditionally treats all data as good data. But that’s not always true. As investors think through IPOs and technology strategy, we need to take the injustice embedded in artificial intelligence seriously.

Artificial intelligence has benefited enormously from the mass of data accessible via social media, smartphones, and other online technologies. Our ability to extract, store, and compute on data -- specifically unstructured data -- is a game changer. Searches, clicks, photos, videos, and other data train machines to learn how humans devote their attention, acquire knowledge, spend and invest money, play video games, and otherwise express themselves.

Every aspect of the technology experience has a bias component. Communities take for granted the exclusion of others, a product of tradition and local history. The legacy of structural racism is not far below the surface of politics, finance, and real estate. Never experiencing or observing bias, if that is even possible, is itself a form of privilege. Such bias -- let’s call it racism -- is inescapable.

Laws have been on the books for more than half a century to remove apparent bias. The Fair Housing Act of 1968 and the Equal Credit Opportunity Act of 1974 were foundational to ensuring equal access and opportunity for all Americans. In theory, technology should have reinforced equality because programs and algorithms are colorblind.

University of California, Berkeley researchers who analyzed nearly 7 million 30-year mortgages found that Latinx and African-American borrowers pay 7.9 and 3.6 basis points more in interest for home-purchase and refinance mortgages, respectively, because of discrimination. Lending discrimination currently costs African-American and Latinx borrowers $765 million in extra interest per year.

FinTech algorithms discriminate 40% less than face-to-face lenders: Latinx and African-American borrowers pay 5.3 basis points more in interest for purchase mortgages and 2.0 basis points more for refinance mortgages originated on FinTech platforms. Despite the reduction, the finding that even FinTechs discriminate is important.
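To make those basis points concrete, here is a back-of-the-envelope sketch. The $250,000 loan balance below is hypothetical, not from the Berkeley study; one basis point is one hundredth of a percentage point, so a rate premium translates directly into extra dollars of interest each year.

```python
# Rough illustration of what a basis-point rate premium costs a borrower.
# The balance is hypothetical; the premiums are the purchase-mortgage
# figures cited above (7.9 bps face-to-face, 5.3 bps FinTech).

def extra_annual_interest(balance: float, bps: float) -> float:
    """Extra interest paid per year from a rate premium of `bps` basis points."""
    return balance * bps / 10_000  # 1 basis point = 0.01% = 1/10,000

balance = 250_000  # hypothetical outstanding balance
for label, bps in [("face-to-face purchase", 7.9), ("FinTech purchase", 5.3)]:
    print(f"{label}: +{bps} bps -> ${extra_annual_interest(balance, bps):,.2f} per year")
```

At that balance, 7.9 basis points works out to roughly $198 a year per borrower; spread across millions of loans, premiums of this size add up to the hundreds of millions in extra interest cited above.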

The data, and the predictions and recommendations that AI makes from it, are prejudiced by the humans who use sophisticated mathematical models to query that data. Nicol Turner Lee of the Brookings Institution has found in her research that a lack of racial and gender diversity among the programmers designing the training sample leads to bias.

The AI apple does not fall far from the tree

AI models in financial services are largely auto-decisioning, meaning the training data drives a managed decision algorithm with little human review of individual outcomes. Using past data to make future decisions often perpetuates an existing bias.
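A minimal sketch of that feedback loop, with invented data: suppose historical approvals penalized applicants from one group even at equal incomes. A model trained on those decisions learns the penalty and applies it to new, identically qualified applicants.

```python
# Hypothetical illustration: a model trained on biased historical decisions
# reproduces the bias. The data and decision rule are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
income = rng.normal(50, 10, n)     # hypothetical income score
# Historical rule: approve on income, minus an unjustified penalty for group B.
approved = income - 5 * group + rng.normal(0, 2, n) > 48

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Score two applicants with identical incomes who differ only by group.
test = np.array([[50.0, 0], [50.0, 1]])
for g, p in zip("AB", model.predict_proba(test)[:, 1]):
    print(f"group {g}: predicted approval probability at equal income = {p:.2f}")
```

Nothing in the model is explicitly prejudiced; it simply fits the past faithfully, penalty included.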

In 2016, Microsoft’s chatbot Tay promised to act like a hip teenage girl but quickly learned to spew vile racist rhetoric. Shortly after the chatbot’s launch, trolls from the hatemongering website 4chan inundated Tay with racist, misogynistic, and antisemitic messages. The influx skewed the chatbot’s view of the world.

Racist labeling and tags have been found in massive AI photo databases, for example. The Bulletin of the Atomic Scientists recently warned of malicious actors poisoning more datasets in the future. Racist algorithms have discredited facial recognition systems that were supposed to identify criminals. Even the Internet of Things is not immune: an automatic bathroom soap dispenser reportedly dispensed only onto white hands because its sensors were never calibrated for darker skin.

The good news is that humans can try to stop other humans from feeding inappropriate material into AI. It’s no longer realistic to develop AI without erecting barriers to prevent malicious actors -- racists, hackers, or anyone else -- from manipulating the technology. We can do more, however. Proactively, AI developers can speak with academics, urban planners, community activists, and leaders of marginalized groups to incorporate social justice into their technologies.

Review the data

An interdisciplinary review of data against social justice criteria, combined with the common sense of a more open mind, might reveal subtly racist elements in AI datasets. These elements might be invisible to AI developers but evident to anyone from communities outside the developers’ backgrounds. Changing this data can have significant impact: improving education, healthcare, income levels, policing, homeownership, employment opportunities, and other benefits of an economy with a level playing field.
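One concrete starting point for such an audit, sketched below on invented records, is the “four-fifths rule,” a disparate-impact screen borrowed from US employment guidelines: flag the dataset when one group’s positive-outcome rate falls below 80% of another group’s.

```python
# Minimal disparate-impact screen over invented loan-decision records.
def selection_rates(records, group_key, outcome_key):
    """Positive-outcome rate per group."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + int(r[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

records = [  # hypothetical audit sample
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]
rates = selection_rates(records, "group", "approved")
ratio = min(rates.values()) / max(rates.values())
print(rates)
print(f"impact ratio = {ratio:.2f}", "-> FLAG for review" if ratio < 0.8 else "-> OK")
```

A flag is not proof of discrimination, only a signal that the interdisciplinary reviewers described above should look closer.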

Members of Black and other minority communities, including those working in AI, are now eager to discuss such issues. The even better news is that among the people we engage in those communities are potential customers who represent growth.

Bias is human. But we can do better

Trying to vanquish bias in AI is a fool’s errand, because humans are and have always been biased in some way. Bias can be a survival tool, a form of learning, a way of making snap judgments based on precedent. Biases against certain insects, animals, and locations can reflect deep communal knowledge. Unfortunately, biases can also strengthen racist narratives that dehumanize people at the expense of their human rights. Those we can root out.

We will never rid ourselves of all our biases overnight. But we can pass on a legacy in AI that is sufficiently aware of the past to foster a more just and equitable society.

Ishan Manaktala is a partner at SymphonyAI, a private equity fund and operating company whose portfolio includes Symphony MediaAI, Symphony AyasdiAI, and Symphony RetailAI. He is the former COO of Markit and CoreOne Technologies, and at Deutsche Bank he was the global head of analytics for the electronic trading platform.

