Machine Learning's Greatest Weakness is Humans - InformationWeek
Commentary
Lisa Morgan  |  6/8/2017 07:00 AM

Modeling artificial intelligence on the human brain is modeling it on a flawed model.

Machine learning, and deep learning and cognitive computing in particular, attempts to model the human brain. That seems logical, because the most effective way to establish mutual understanding with humans is to mimic them. But as everyday experience shows, machine intelligence isn't perfect, and neither is human intelligence.

Still, understanding human behavior and emotion is critical if machines are going to mimic humans well. Technologists know this, so they're working hard to improve natural language processing, computer vision, speech recognition, and other capabilities that will enable machines to better understand humans and behave more like them.

I imagine that machines will never emulate humans perfectly because they will be able to rapidly identify the flaws in our thinking and behavior and improve upon them. To behave exactly like us would be illogical and ill-advised.

From an analytical perspective, I find all of this fascinating because human behavior is linear and non-linear, rational and irrational, logical and illogical. If you study us at various levels of aggregation, it's possible to see patterns in how we behave as a species, why we fall into certain groups, and why we behave the way we do as individuals. I think it would be very interesting to compare what machines have to say about all of that with what psychologists, sociologists, and anthropologists have to say.

Right now we're at the point where we believe that machines need to understand human intelligence. Conversely, humans need to understand machine intelligence.

Why AI is Flawed

Human brain function is not infallible. Our flaws present challenges for machine learning: machines have the capacity to make the same mistakes we do and exhibit the same biases we do, only faster. Microsoft's infamous Tay Twitter bot is a good example of that.

Then, when artificial emotional intelligence is modeled on human emotion, the results can be entertaining, provocative, or even dangerous.

Training machines, whether for supervised or unsupervised learning, begins with human input at least for now. In the future, the necessity for that will diminish because a lot of people will be teaching machines the same things. The redundancy will indicate patterns that are easily recognizable, repeatable and reusable. Open source machine learning libraries are already available, but there will be many more that approximate some aspect of human brain function, cognition, decision-making, reasoning, sensing and much more.
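The point about training beginning with human input can be made concrete with a toy example. The sketch below is a deliberately minimal word-count "sentiment" classifier, not any specific library's algorithm; the labeled phrases are invented for illustration. The key observation is that everything the model "knows" comes from the human-supplied labels, so any bias in those labels is reproduced at prediction time.

```python
# Minimal sketch of supervised learning: a human supplies labeled examples,
# and the machine extracts a reusable pattern from them. The toy "sentiment"
# data below is invented for illustration.
from collections import Counter

# Human-labeled training data -- the "teaching" step described above.
training = [
    ("great product", "positive"),
    ("love it", "positive"),
    ("terrible service", "negative"),
    ("awful experience", "negative"),
]

# "Training": count how often each word appears under each label.
word_counts = {"positive": Counter(), "negative": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def predict(text):
    """Score each label by how many of its known words appear in the text."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

# The model simply mirrors the patterns its human teachers supplied --
# including any biases baked into the labels.
print(predict("love this product"))  # prints "positive"
```

Real open source libraries replace the word counting with far more sophisticated statistics, but the dependency is the same: the quality of the output is bounded by the quality of the human input.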

Slowly but surely, we're creating machines in our own image.
