Why It's Nice to Know What Can Go Wrong with AI

The mistakes and lessons learned early in the lifecycle of AI will serve all of us well in the future.

James M. Connolly, Contributing Editor and Writer

November 11, 2019


I'm not wishing ill on anyone, whether it's in the form of bias in an AI-driven credit system or a false positive from a facial recognition system. Unfortunately, those things are going to happen. The real question is whether we learn from such events.

It's a fact that bad things happen when we adopt new technologies, and artificial intelligence is no different in that regard. What is different is that we -- as a society and a tech industry -- are recognizing the dangers early in AI's life cycle, and a minimum of issues today is far better than massive damage down the road. That struck me while reading Jessica Davis's report from last month's Gartner IT Symposium, AI Ethics: Where to Start.

The advice from Gartner led me to recall the often-forgotten issues we encountered with technologies of the past, when we ignored the flaws, even the evils, associated with them.

For example, business loves its email. It lets us have our say with someone without walking to their office, calling them, or sending interoffice mail that could take three days to arrive. In the late 1980s, when email adoption was spreading everywhere, I spoke with four or five CIO-level executives, asking whether they had guidelines for how email should be used. A couple had some rules about protecting company secrets, but none of them had thought through issues such as how communicating through email differs from voice communication, where inflection carries so much value.

Worse, we learned only later that email offered a wide-open door for malware and phishing.

Then instant messaging came along, initially with no archiving capability. People could say anything on IM, and there was no real record of it. Plenty of us witnessed a coworker airing complaints about a manager, in a flame mail that began with a mental lapse: the boss was on the brain, so his name went into the "To:" field instead of the intended coworker's. Oops. (I received a couple of those.)

When the cell phone came along, the only real evils we recognized were the threat of brain cancer and the fact that phones were a way to sneak a camera into a secure facility. We didn't anticipate the downsides of an always-connected life, the societal and family impact of people not speaking face to face, or the public safety hazards: knuckleheads walking into traffic or light poles because they're so focused on their phones.

A lot of those emerging tech issues were addressed too late or not at all. However, with AI we have an opportunity to face issues such as bias, data quality, and the impact on the workforce in a more timely manner. Rather than ignoring them, we can use just a bit more care in how we employ AI.

First, we have to give warnings about AI's flaws a fair hearing and consider what just a little more care or transparency might offer. Don't turn a blind eye and rush AI apps into production -- damn the torpedoes, full speed ahead.

Ask yourself whether you are drawing on biased data or whether your algorithm favors a specific gender, race or age group. Understand what business or customer actions, good or bad, might result from your AI's output. Explore whether automation initiatives are intended simply to replace the cost of human workers, or whether automation will make operations more productive. Maybe take a little more time deciding not only how facial recognition will be used but also how it shouldn't be used.
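For teams wondering where such a check even begins, even a crude first pass can surface trouble. Below is a minimal sketch in Python, using pandas, of one simple audit: comparing approval rates across a protected attribute. The column names (gender, approved) and the sample data are hypothetical illustrations, and the 80% threshold is a rough screen inspired by the EEOC's four-fifths rule, not a substitute for a real fairness review.

```python
import pandas as pd

# Hypothetical lending decisions; in practice, load your model's outputs.
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,   0,   0,   1,   1,   1,   0,   1],
})

# Approval rate per group: a first-pass "demographic parity" check.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Flag any group whose approval rate falls below 80% of the best group's
# rate -- a rough screen inspired by the EEOC's four-fifths rule.
worst, best = rates.min(), rates.max()
if best > 0 and worst / best < 0.8:
    print(f"Possible disparate impact: ratio = {worst / best:.2f}")
else:
    print("No disparity flagged by this crude screen.")
```

A screen like this won't prove or disprove bias -- proxy variables, intersectional effects, and base-rate differences all need deeper analysis -- but it's the kind of question a team should be able to answer before shipping.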

(The latest example of alleged discrimination came Monday with a series of revelations that the "Apple Card" -- a partnership of Apple and Goldman Sachs -- was offering men higher credit limits than their spouses, even when the couples shared all assets. Apple co-founder Steve Wozniak's credit limit, for example, was set at 10 times his wife's, even though, as he noted, they share all their assets. According to CNN, Apple said credit limits are set by algorithm.)

Maybe even ask whether the application you are considering truly benefits your organization, your customers and employees, or even humankind.

Much of how we deal with the evils or flaws of AI comes down to knowledge aforethought, well-thought-out policies, and education. The benefits of AI can be significant, but we have an opportunity to recognize that careers, relationships, reputations and more are at risk. So, a bit more care couldn't hurt, even if it means a delay of a few months or years. The need for doing it right far outweighs the need for doing it right now.

I welcome your feedback.

About the Author

James M. Connolly

Contributing Editor and Writer

Jim Connolly is a versatile and experienced freelance technology journalist who has reported on IT trends for more than three decades. He was previously editorial director of InformationWeek and Network Computing, where he oversaw the day-to-day planning and editing on the sites. He has written about enterprise computing, data analytics, the PC revolution, the evolution of the Internet, networking, IT management, and the ongoing shift to cloud-based services and mobility. He has covered breaking industry news and has led teams focused on product reviews and technology trends. He has concentrated on serving the information needs of IT decision-makers in large organizations and has worked with those managers to help them learn from their peers and share their experiences in implementing leading-edge technologies through such publications as Computerworld. Jim also has helped to launch a technology-focused startup, as one of the founding editors at TechTarget, and has served as editor of an established news organization focused on technology startups at MassHighTech.
