Data Management // AI/Machine Learning
Commentary
12/28/2020 07:00 AM
Ishan Manaktala, Partner, SymphonyAI

What Do We Do About Racist Machines?

We will never rid ourselves of all our biases overnight. But we can pass on a legacy in AI that is sufficiently aware of the past to foster a more just and equitable society.

Enterprise AI traditionally treats all data as good data, but that's not always true. As investors think through IPOs and technology strategy, we need to take the injustice embedded in artificial intelligence seriously.

Artificial intelligence has benefited enormously from the mass of data accessible via social media, smartphones, and other online technologies. Our ability to extract, store, and compute on data -- specifically unstructured data -- is a game changer. Searches, clicks, photos, videos, and other data train machines to learn how humans devote their attention, acquire knowledge, spend and invest money, play video games, and otherwise express themselves.

Image: momius - stock.adobe.com

Every aspect of the technology experience has a bias component. Communities take for granted the exclusion of others because of tradition and local history. The legacy of structural racism lies not far below the surface of politics, finance, and real estate. Never experiencing or observing bias, if that is even possible, is itself a form of privilege. Such bias -- call it racism -- is inescapable.

Laws have been on the books for more than 50 years to remove overt bias. The Equal Credit Opportunity Act of 1974 and the Fair Housing Act of 1968 were foundational in ensuring equal access and opportunity for all Americans. In theory, technology should have reinforced that equality, because programs and algorithms are color-blind.

University of California, Berkeley researchers who analyzed nearly 7 million 30-year mortgages found that Latinx and African-American borrowers pay 7.9 and 3.6 basis points more in interest for home-purchase and refinance mortgages, respectively, because of discrimination. Lending discrimination currently costs African-American and Latinx borrowers $765 million in extra interest per year.

FinTech algorithms discriminate 40% less than face-to-face lenders: Latinx and African-American borrowers pay 5.3 basis points more in interest for purchase mortgages and 2.0 basis points more for refinance mortgages originated on FinTech platforms. Despite the reduction, the finding that even FinTechs discriminate is important.
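To put those numbers in perspective: a basis point is one hundredth of a percentage point, so the extra cost scales with the loan balance. A quick sketch in Python (the $300,000 balance below is a hypothetical for illustration, not a figure from the Berkeley study):

    # Extra annual interest paid because of a rate premium, in dollars.
    # One basis point = 0.01 percentage points = 1/10,000 of the balance.
    def extra_annual_interest(balance: float, basis_points: float) -> float:
        return balance * basis_points / 10_000

    # Hypothetical $300,000 purchase mortgage with a 7.9 bps premium:
    print(extra_annual_interest(300_000, 7.9))  # 237.0 -> about $237 per year

Small per-loan, but multiplied across millions of mortgages it produces the hundreds of millions in extra interest the researchers documented.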

The data, and the predictions and recommendations AI draws from it, are prejudiced by the humans using sophisticated mathematical models to query that data. Nicol Turner Lee of the Brookings Institution has found in her research that a lack of racial and gender diversity among the programmers designing the training sample leads to bias.

The AI apple does not fall far from the tree

AI models in financial services are largely auto-decisioning: training data is used in the context of a managed decision algorithm. Using past data to make future decisions often perpetuates existing bias.
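That feedback loop is easy to reproduce. Here is a minimal synthetic sketch (all data, thresholds, and group labels are invented for illustration, not drawn from any production system) showing that a model trained on biased historical approvals learns the bias even when two groups are equally creditworthy:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)        # 0 = majority, 1 = minority (synthetic)
    credit = rng.normal(650, 50, n)      # identical creditworthiness for both groups

    # Historical decisions encode bias: an extra hurdle applied to group 1.
    approved = (credit - 25 * group + rng.normal(0, 10, n)) > 640

    X = np.column_stack([credit, group])
    model = LogisticRegression(max_iter=1000).fit(X, approved)

    # At an identical credit score, the trained model reproduces the old bias:
    test = np.array([[650, 0], [650, 1]])
    print(model.predict_proba(test)[:, 1])  # lower approval probability for group 1

Nothing in the algorithm is malicious; it simply learns the pattern in the past decisions it was handed.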

In 2016, Microsoft's chatbot Tay promised to act like a hip teenage girl but quickly learned to spew vile racist rhetoric. Trolls from the hatemongering website 4chan inundated Tay with racist, misogynistic, and antisemitic messages shortly after the chatbot's launch, and the influx skewed the chatbot's view of the world.

Racist labels and tags have been found in massive AI photo databases, for example. The Bulletin of the Atomic Scientists recently warned of malicious actors poisoning more datasets in the future. Racist algorithms have discredited facial recognition systems that were supposed to identify criminals. Even the Internet of Things is not immune: a bathroom hand soap dispenser reportedly dispensed only onto white hands because its sensors were never calibrated for dark skin.

The good news is that humans can try to stop other humans from feeding inappropriate material into AI. It is no longer realistic to develop AI without erecting barriers that prevent malicious actors -- racists, hackers, or anyone else -- from manipulating the technology. We can do more, however. Proactively, AI developers can speak with academics, urban planners, community activists, and leaders of marginalized groups to incorporate social justice into their technologies.

Review the data

An interdisciplinary approach that reviews data against social justice criteria, combined with the common sense of a more open mind, can reveal subtly racist elements of AI datasets. These elements may be invisible to AI developers but evident to anyone from communities outside the developers' backgrounds. Changing this data can have significant impact: improving education, healthcare, income levels, policing, homeownership, employment opportunities, and other benefits of an economy with a level playing field.
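One concrete, widely used starting point for such an audit (a generic illustration, not a method prescribed in this article) is to compare outcome rates across groups and flag gaps using the "four-fifths rule" from US employment law. The decision log and column names below are hypothetical:

    import pandas as pd

    # Hypothetical decision log; column names are illustrative assumptions.
    df = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [1, 1, 0, 1, 0, 0, 0, 1],
    })

    rates = df.groupby("group")["approved"].mean()
    print(rates)  # A: 0.67, B: 0.40

    # Four-fifths rule: flag when any group's rate falls below
    # 80% of the most favored group's rate.
    ratio = rates.min() / rates.max()
    print(f"impact ratio: {ratio:.2f}")  # 0.60 -> warrants review

A failing ratio doesn't prove discrimination by itself, but it tells reviewers exactly where to look closer.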

Members of Black and other minority communities, including those working in AI, are now eager to discuss such issues. The even better news is that among the people we engage in those communities are potential customers who represent growth.

Bias is human. But we can do better

Trying to vanquish bias in AI is a fool's errand; humans are, and have always been, biased in some way. Bias can be a survival tool, a form of learning, a way of making snap judgments based on precedent. Biases against certain insects, animals, and locations can reflect deep communal knowledge. Unfortunately, biases can also strengthen racist narratives that dehumanize people at the expense of their human rights. Those we can root out.

We will never rid ourselves of all our biases overnight. But we can pass on a legacy in AI that is sufficiently aware of the past to foster a more just and equitable society.

Ishan Manaktala is a partner at SymphonyAI, a private equity fund and operating company whose portfolio includes Symphony MediaAI, Symphony AyasdiAI, and Symphony RetailAI. He is the former COO of Markit and CoreOne Technologies, and at Deutsche Bank he was the global head of analytics for the electronic trading platform.
