It’s Up to Smart Humans to Stop Being Stupid About AI

As AI becomes a bigger and perhaps transformative technological force, it’s crucial to recognize that artificial intelligence is only as good as the genuine human thinking that goes into it.

Guest Commentary

February 25, 2019


2018 was the year AI seemed to go awry. To name a few examples: the controversy over Amazon’s facial recognition tool; the disclosure that YouTube’s machine learning system recommended conspiracy-theory videos to visitors; disturbing revelations about health insurers’ use of AI to predict people’s healthcare costs; and the arrest of a Palestinian man after Facebook’s algorithmic translation feature rendered his “good morning” post as “attack them.”

Will 2019 bring more of the same? It will if companies keep plowing through the tricky intersection of data and morality. As last year’s headlines proved, AI programs can be skewed by bias or simple oversight, so it’s important that the intelligent humans who develop artificial intelligence applications build ethics and values into their work.

We live in a time of data idolatry. A belief has permeated society that if you just look closely enough at the data, you can find the truth. In reality, this notion is ridiculous and dangerous.

Data will answer only the questions people ask, and only in the way they ask them. Even very smart people can end up encoding their own biases, often subconsciously, into algorithms or machine learning models.

It’s extraordinarily difficult to design a model that can determine unbiased truth, because human bias, whether explicit or implicit, will almost always be coded into these systems in some way.

In her 2016 book Weapons of Math Destruction, Cathy O’Neil called this the “tyranny of the algorithm.” Most vexing, she wrote, is that they propagate discrimination. “If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a toxic cocktail for democracy.”
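O’Neil’s lending example is worth making concrete. Below is a minimal, hypothetical Python sketch of how a supposedly neutral feature like a zip code can encode bias into a model; every name and number in it is invented for illustration and is not drawn from her book.

```python
# Hypothetical sketch: how a "neutral" feature can act as a proxy for bias.
# All names and numbers here are invented for illustration.

# Invented historical default rates by zip code. Rates like these are
# shaped by decades of unequal access to credit, not by the applicants
# being scored today.
HISTORICAL_DEFAULT_RATE = {
    "60601": 0.04,  # affluent neighborhood
    "60621": 0.22,  # historically redlined neighborhood
}

def risk_score(income: float, debt: float, zip_code: str) -> float:
    """Toy lending score (lower is better). The zip-code term silently
    imports the neighborhood's history into an individual's score."""
    debt_to_income = debt / max(income, 1.0)
    neighborhood_penalty = HISTORICAL_DEFAULT_RATE.get(zip_code, 0.10)
    return 0.5 * debt_to_income + 0.5 * neighborhood_penalty

# Two applicants with identical personal finances:
a = risk_score(income=45_000, debt=9_000, zip_code="60601")
b = risk_score(income=45_000, debt=9_000, zip_code="60621")
print(f"{a:.2f} vs {b:.2f}")  # 0.12 vs 0.21
```

Nothing in the function mentions race or class, yet the zip-code term quietly imports both, and an applicant penalized by it has no individual behavior to change.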

Another dangerous fallacy swirls around data: We think that data-based approaches will inherently be more objective, simply by virtue of being data. This is a serious blind spot. When well-intentioned humans encode their biases into the algorithms they design, the people who have to live with the results have little room to appeal because, after all, machines are “objective.”

The operating principle across society needs to be: All data are people until proven otherwise.

As AI becomes a bigger and perhaps transformative technological force, it’s crucial to recognize that artificial intelligence is only as good as the genuine human thinking that goes into it. Put another way: Crap in, crap out.

The central question amid the march of the machines is how to accept that these biases are real and shape data-driven insights, while still recognizing the value and potential of data. Intuition, experience, and context must have an equal place alongside data science.

Some good ideas for addressing this are floating around. Omoju Miller, a senior machine learning data scientist at GitHub, has suggested that computer scientists adopt a code, similar to the medical Hippocratic oath, governing ethical behavior in AI and data analytics.

The Association for Computing Machinery publishes a useful Code of Ethics. And the first of the “Ten simple rules for responsible big data research,” published by a group that includes noted researcher and academic Kate Crawford, is to acknowledge that “all data are people until proven otherwise” and that misuse of data can do harm.

Brent Hecht of Northwestern University has proposed that the computer science community change its peer-review process to require researchers to disclose any possible negative societal consequences of their work in their papers, or risk rejection. Without such a measure, he says, computer scientists will keep developing applications without weighing their impact on people and society.

AI is one of the most hyped technologies ever, and, when applied correctly, for good reason. To name just two of many examples: the FarmLogs system that tracks weather and soil conditions to help farmers maximize crop yields, and the New York University machine learning program that can distinguish, with 97% accuracy, two lung cancer types, adenocarcinoma and squamous cell carcinoma, that doctors often struggle to differentiate.

But AI’s practitioners must learn to consistently consider the quality of their work in ways that show empathy for other human beings.

Artificial intelligence has enormous promise. But it’s up to humans to think long and hard about the ethics of every AI program they create.

There really is no algorithmic way to ensure ethical behavior. It’s on us, the humans, to reflect deeply on our goals and to understand whether the ends really justify the AI means.

Christian Beedgen is co-founder and chief technology officer at Sumo Logic, a cloud-native machine data analytics platform provider.

