

Commentary
9/17/2020 07:00 AM
Mark Runyon, Principal Consultant, Improving

3 Hurdles to Overcome for AI and Machine Learning

Artificial intelligence and machine learning have distinct limitations. Businesses looking to implement AI need to understand where these boundaries are drawn.

Although we are still in the infancy of the AI revolution, there seems to be little artificial intelligence can't do. From business dilemmas to societal issues, it is being asked to solve thorny problems that lack traditional solutions. Given that seemingly endless promise, are there any limits to what AI can do?

Image: Viktor - stock.adobe.com

Yes, artificial intelligence and machine learning (ML) do have some distinct limitations. Any organization looking to implement AI needs to understand where these boundaries are drawn so it doesn't get into trouble by assuming artificial intelligence is something it's not. Let's take a look at three key areas where AI gets tripped up.

1. The problem with data

AI is powered by machine learning algorithms. These algorithms, or models, churn through massive amounts of data to recognize patterns and draw conclusions. The models are trained with labeled data that mirrors the countless scenarios the AI will encounter in the wild. For example, doctors must tag each X-ray to denote whether a tumor is present and, if so, what type. Only after reviewing thousands of X-rays can an AI correctly label new ones on its own. Collecting and labeling that data is an extremely time-intensive process for humans.
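To make the idea concrete, here is a minimal sketch of learning from labeled data. The two numeric features and the labels are made up for illustration, and a simple nearest-neighbor rule stands in for a real model:

```python
import math

# Hypothetical labeled training set: each "scan" is reduced to two
# numeric features (say, mass density and edge irregularity), tagged
# by a radiologist. Real systems need thousands of labeled images.
labeled_scans = [
    ((0.9, 0.8), "tumor"),
    ((0.8, 0.9), "tumor"),
    ((0.2, 0.1), "no tumor"),
    ((0.1, 0.2), "no tumor"),
]

def classify(scan):
    """Label a new scan by copying its nearest labeled neighbor (1-NN)."""
    return min(labeled_scans,
               key=lambda example: math.dist(scan, example[0]))[1]

print(classify((0.85, 0.75)))  # near the tumor examples -> "tumor"
print(classify((0.15, 0.15)))  # near the healthy examples -> "no tumor"
```

Every prediction here is only as good as the human-supplied labels, which is exactly why the tagging burden matters.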

In some cases, we lack enough data to adequately build the model. Autonomous automobiles are having a bumpy ride dealing with all the challenges thrown at them. Consider a torrential downpour where you can't see two feet in front of the windshield, much less the lines on the road. Can AI navigate these situations safely? Trainers are logging hundreds of thousands of miles to surface these hard use cases, observe how the algorithm reacts, and adjust it accordingly.

Other times, we have enough data, but we unintentionally taint it by introducing bias. Consider racial arrest records for marijuana possession: a Black person is 3.64 times more likely to be arrested than a white person. Taken alone, that could lead us to conclude that Black people are heavier marijuana users. Yet without analyzing usage statistics, we would miss that usage differs between the races by a mere 2%. We draw the wrong conclusions when we don't account for inherent biases in our data, and the damage compounds when we share flawed datasets.
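The arithmetic behind that trap is easy to demonstrate. The sketch below uses the article's 3.64x arrest figure alongside hypothetical usage rates chosen to differ by roughly the 2 points cited:

```python
# Figure from the article: Black Americans are arrested for marijuana
# possession at 3.64x the rate of white Americans.
arrest_rate_ratio = 3.64

# Hypothetical usage rates, chosen only to differ by ~2 percentage
# points as the article describes.
white_usage = 0.16
black_usage = 0.18

usage_ratio = black_usage / white_usage
print(f"arrest disparity: {arrest_rate_ratio:.2f}x")
print(f"usage disparity:  {usage_ratio:.2f}x")

# A model trained on arrest records alone would "learn" a 3.64x
# difference, most of which the usage data cannot explain.
unexplained_gap = arrest_rate_ratio / usage_ratio
print(f"unexplained gap:  {unexplained_gap:.2f}x")
```

A model fed only the arrest records inherits the full disparity as if it were signal; the second dataset is what exposes the bias.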

Whether the issue is the manual nature of labeling data or a lack of quality data, there are promising solutions. Reinforcement learning, which trains a model through positive and negative rewards rather than labeled examples, could one day shift humans from taggers to supervisors. When data is missing, virtual simulations may bridge the gap by recreating the target environment so the model can learn outside the physical world.
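As an illustration of reward-driven training, here is a minimal Q-learning sketch on a toy problem (a five-cell corridor, invented for this example): the agent receives +1 for walking off the right end and -1 for the left, and no state is ever hand-labeled.

```python
import random

random.seed(0)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration
q = {(s, a): 0.0 for s in range(5) for a in (-1, +1)}

for _ in range(500):
    s = 2                       # each episode starts in the middle cell
    while 0 <= s <= 4:
        # Mostly exploit the best-known action, occasionally explore.
        if random.random() < EPSILON:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: q[(s, act)])
        s2 = s + a
        # Positive reinforcement on the right edge, negative on the left.
        r = 1 if s2 > 4 else (-1 if s2 < 0 else 0)
        future = 0.0 if not 0 <= s2 <= 4 else max(q[(s2, -1)], q[(s2, +1)])
        q[(s, a)] += ALPHA * (r + GAMMA * future - q[(s, a)])
        s = s2

# After training, the learned policy in every cell points right (+1).
policy = [max((-1, +1), key=lambda act: q[(s, act)]) for s in range(5)]
print(policy)
```

The rewards alone shape the behavior, which is the sense in which a human's role shifts from labeling every example to supervising the reward signal.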

2. The black box effect

Any traditional software program is underpinned by explicit logic: a set of inputs fed into the system can be traced through to see how it triggers the results. AI isn't as transparent. Built on neural networks, its end results can be hard to explain. We call this the black box effect. We know it works, but we can't tell you how. That causes problems. When a candidate fails to get a job or a criminal receives a longer prison sentence, we have to show that the algorithm was applied fairly and is trustworthy. A web of legal and regulatory entanglements awaits us when we can't explain how these decisions were made within the caverns of these large deep learning networks.

The best way to overcome the black box effect today is to break the algorithm down feature by feature, feeding it varied inputs to see what difference each one makes. In a nutshell, it's humans interpreting what the AI is doing. This is hardly an exact science; more work needs to be done to get AI across this sizable hurdle.
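That perturb-and-observe approach can be sketched in a few lines. The "model" below is a stand-in with hypothetical features and weights; the technique is a crude one-at-a-time sensitivity analysis:

```python
def black_box(income, debt, zip_risk):
    # Stand-in for an opaque scoring model (hypothetical weights);
    # in practice this would be a trained neural network we cannot read.
    return 0.6 * income - 0.3 * debt + 0.1 * zip_risk

baseline = black_box(income=1.0, debt=1.0, zip_risk=1.0)

def sensitivity(feature):
    """Re-score with one feature nudged up 10% and measure the shift."""
    inputs = {"income": 1.0, "debt": 1.0, "zip_risk": 1.0}
    inputs[feature] *= 1.1
    return abs(black_box(**inputs) - baseline)

for name in ("income", "debt", "zip_risk"):
    print(f"{name:10s} sensitivity: {sensitivity(name):.3f}")
```

Ranking the features by how much the output moves gives a rough, human-readable account of what the model cares about, without ever opening the box.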

3. Generalized systems are out of reach

Anyone worried that AI will take over the world in some Terminator-style future can rest comfortably. Artificial intelligence is excellent at pattern recognition, but you can't expect it to operate on a higher level of consciousness. Steve Wozniak called this the coffee test: can a machine enter a normal American home and make a cup of coffee? That includes finding the coffee grounds, locating a mug, identifying the coffee machine, adding water, and hitting the right buttons. This capability is referred to as artificial general intelligence, where AI makes the leap to simulate human intelligence. While researchers work diligently on the problem, others question whether AI will ever achieve it.

AI and ML are evolving technologies. Today's limitations are tomorrow's successes. The key is to continue to experiment and find where we can add value to the organization. Although we should recognize AI's limitations, we shouldn't let them stand in the way of the revolution.

Mark Runyon works as a principal consultant for Improving in Atlanta, Georgia. He specializes in the architecture and development of enterprise applications, leveraging cloud technologies. Mark is a frequent speaker and contributing writer for the Enterprisers Project.
