Machine Learning's Greatest Weakness is Humans

Modeling artificial intelligence on the human brain is modeling it on a flawed model.

Lisa Morgan

June 8, 2017

2 Min Read

Machine learning, and deep learning and cognitive computing in particular, attempts to model the human brain. That seems logical, because the most effective way to establish bilateral understanding with humans is to mimic them. As we observe in everyday experience, machine intelligence isn't perfect, and neither is human intelligence.

Still, understanding human behavior and emotion is critical if machines are going to mimic humans well. Technologists know this, so they're working hard to improve natural language processing, computer vision, speech recognition, and other capabilities that will enable machines to better understand humans and behave more like them.

I imagine that machines will never emulate humans perfectly because they will be able to rapidly identify the flaws in our thinking and behavior and improve upon them. To behave exactly like us would be illogical and ill-advised.

From an analytical perspective, I find all of this fascinating because human behavior is linear and non-linear, rational and irrational, logical and illogical. If you study us at various levels of aggregation, it's possible to see patterns in the way humans behave as a species, why we fall into certain groups, and why we behave the way we do as individuals. I think it would be very interesting to compare what machines have to say about all of that with what psychologists, sociologists, and anthropologists have to say.

Right now we're at the point where we believe that machines need to understand human intelligence. Conversely, humans need to understand machine intelligence.

Why AI is Flawed

Human brain function is not infallible. Our flaws present challenges for machine learning, namely that machines have the capacity to make the same mistakes we do and exhibit the same biases we do, only faster. Microsoft's infamous Twitter bot, Tay, is a good example of that.

Then, when artificial emotional intelligence is modeled on human emotion, the results can be entertaining, inflammatory, or even dangerous.

Training machines, whether for supervised or unsupervised learning, begins with human input, at least for now. In the future, the necessity for that will diminish because many people will be teaching machines the same things. The redundancy will reveal patterns that are easily recognizable, repeatable, and reusable. Open source machine learning libraries are already available, but there will be many more that approximate some aspect of human brain function: cognition, decision-making, reasoning, sensing, and much more.
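To make the idea of human input in supervised learning concrete, here is a minimal sketch: humans supply labeled examples, and the machine generalizes from them to label new cases. The scenario, feature names, and data below are hypothetical, purely for illustration.

```python
def nearest_neighbor(train, point):
    """Label `point` with the label of the closest human-labeled example."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], point))[1]

# Human-provided training data: (hours of sunlight, rainfall in mm) -> outcome
labeled = [
    ((8, 20), "good"),
    ((9, 15), "good"),
    ((3, 60), "poor"),
    ((2, 70), "poor"),
]

# The machine's answer for an unseen case reflects the humans' judgments
print(nearest_neighbor(labeled, (7, 25)))  # prints "good"
```

The point of the sketch is that the machine's output is only as sound as the labels the humans provided; biased or mistaken labels propagate directly into the model's predictions.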

Slowly but surely, we're creating machines in our own image.

About the Author(s)

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.
