AI's Next Brain Wave - InformationWeek

New research in artificial intelligence could lay the groundwork for computer systems that learn from their users and the world around them. Part four in The Future Of Software series.

Artificial intelligence, a field that has tantalized social scientists and high-tech researchers since the dawn of the computer industry, had lost its sex appeal by the start of the last decade. After a speculative boom in the '80s, attempts to encode humanlike intelligence into systems that could categorize concepts and relate them to each other didn't pan out, and "expert systems" packed with rules derived from human authorities couldn't translate their expertise beyond the subject matter for which they were programmed. Even when Deep Blue, an IBM chess-playing computer that could evaluate some 200 million board positions per second, defeated grandmaster Garry Kasparov in 1997, the triumph didn't lead to an artificial-intelligence renaissance.

Now a new generation of researchers hopes to rekindle interest in AI. Faster and cheaper processing power, memory, and storage, along with the rise of statistical techniques for analyzing speech, handwriting, and the structure of written texts, are spurring new developments, as is the willingness of today's practitioners to trade perfection for practical solutions to everyday problems. Researchers are building AI-inspired user interfaces, systems that can perform calculations or suggest passages of text in anticipation of what users will need, and software that tries to mirror people's memories to help them find information amid digital clutter. Much of the research employs Bayesian statistics, a branch of mathematics that weighs new evidence against prior beliefs, updating a system's conclusions as data accumulates rather than letting a single surprising result overturn historical knowledge. Some of the new AI research also falls into an emerging niche of computer science: the intersection of artificial intelligence and human-computer interaction.
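As a rough illustration of the Bayesian updating behind much of this work, consider a hypothetical spam filter; the numbers here are invented purely for the example. The system starts with a prior belief about how common spam is, then revises that belief when a telltale word appears in a message:

```python
# Bayes' rule in miniature: revise a prior belief in light of new evidence.
# All numbers are hypothetical, chosen only to illustrate the calculation.
prior_spam = 0.20          # prior belief: 20% of mail is spam
p_word_given_spam = 0.60   # the word "winner" appears in 60% of spam...
p_word_given_ham = 0.05    # ...but in only 5% of legitimate mail

# Total probability of seeing the word at all:
# P(word) = P(word|spam)P(spam) + P(word|ham)P(ham)
p_word = p_word_given_spam * prior_spam + p_word_given_ham * (1 - prior_spam)

# Bayes' rule: P(spam|word) = P(word|spam) * P(spam) / P(word)
posterior_spam = p_word_given_spam * prior_spam / p_word
print(round(posterior_spam, 2))  # 0.75 -- the belief jumps from 20% to 75%
```

One word of evidence moves the estimate substantially, but a strong prior still matters: if spam were believed to be rare, the same evidence would yield a much lower posterior.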

Several industry trends also are helping move AI up on labs' agendas. The emerging field of wireless sensor networks, which have the potential to collect vast amounts of data about industrial operations, the ecosystem, or conditions in a building or home, could benefit from the use of AI techniques to interpret their data. The Pentagon also continues to fund AI research, partly to lay the groundwork for intelligent vehicles and robots; in October, the Defense Advanced Research Projects Agency will run its second "grand challenge" robotic vehicle race through the desert, likely in California or Nevada. And last month, Jeff Hawkins and Donna Dubinsky, the founders of Palm Computing and Handspring, and inventors of the Palm Pilot, shone a spotlight on artificial intelligence with the launch of a company called Numenta, formed to build software that mimics the human brain's memory. The company's prototype software can distinguish between drawings of animals, for example, a task computers typically can't do.

InformationWeek took a look at four research labs working in artificial intelligence, at IBM, Intel, Microsoft, and Xerox subsidiary Palo Alto Research Center. Instead of leading to another round of outsize expectations, this generation of research likely could lay the groundwork for a new breed of computer systems that learn from their users and the world around them.

IBM: Firing Neurons
As the excitement about traditional AI waned in the late '80s, development of artificial neural networks picked up steam. Instead of manipulating and relating symbols about concepts in the world, neural networks operate on lists of numbers representing problems and potential solutions. These artificial neurons can learn relationships from a training set of example solutions, and they eventually became stacked into "layers," so the output of one neural network could form the input of another. Researchers at IBM's Thomas J. Watson Research Center in Yorktown Heights, N.Y., are trying to make the model even more complex, building layered neural networks that behave according to biological characteristics of the nervous systems of vertebrates.
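The layering described above can be sketched in a few lines: each layer of artificial neurons transforms a list of numbers, and one layer's output becomes the next layer's input. This is a generic textbook sketch with random, untrained weights, not a rendering of IBM's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each neuron sums its weighted inputs, then passes the result
    # through a nonlinear "activation" (here, a sigmoid squashing to 0..1).
    return 1.0 / (1.0 + np.exp(-(inputs @ weights + biases)))

x = np.array([0.5, -1.0, 0.25])                  # 3 input values
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)    # layer 1: 3 inputs -> 4 neurons
w2, b2 = rng.normal(size=(4, 2)), np.zeros(2)    # layer 2: 4 inputs -> 2 neurons

hidden = layer(x, w1, b1)        # the first layer's output...
output = layer(hidden, w2, b2)   # ...becomes the second layer's input
print(output.shape)              # (2,)
```

Training consists of adjusting the weights and biases until the final layer's output matches the examples in the training set; the layering is what lets later layers build on features the earlier ones extract.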

The four-year program, called systems neurocomputing, is far-reaching; IBM is funding it under a category it calls adventurous research. Charles Peck, program director of neural computing research, has a background in neuroscience, mathematics, and artificial intelligence, and research staff member James Kozloski has a Ph.D. in neuroscience from the University of Pennsylvania, where he studied the nervous systems of African river fish.

Systems neurocomputing aims to address a conundrum in AI: It's virtually impossible to write programs that know in advance all the unfamiliar elements of every task they may encounter. Programming languages such as C express fixed procedures--sequences of instructions--for computers to follow. But that doesn't help with a task such as reading a news article: There's no way to know ahead of time what the words in the story will be or the order in which they'll appear, so there's no way to program for them. "Computer science has run up against a brick wall," Peck says. "The problems where we know all the details are fewer in number than those we encounter on a daily basis. But we don't know how to write a program to do that. ... Any technology that requires you to specify everything you know up front is going to be doomed." So far, the researchers have written two in-house programming languages to test how multilayered neural networks might get around this problem.

In March, the researchers showed in a published paper that they could recognize patterns and avoid a problem called the superposition catastrophe that trips up neural networks when the systems believe inputs from two sources of information come from the same place or can be averaged together, according to Kozloski. Due next is a demonstration that would digitally re-create a person's ability to perceive a broken line as complete. That could advance understanding of how to build computers that can see, he says.
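The superposition catastrophe is easiest to see with a toy example (this illustration is ours, not the paper's formulation): once the features of two objects are merged into a single representation, the information about which features belonged together is lost.

```python
# Two objects, each described by a set of active features.
red_circle  = {"red", "circle"}
blue_square = {"blue", "square"}

# Superimpose them: present both objects at once, activating all features.
merged = red_circle | blue_square

# The identical merged representation arises from a different pair of
# objects -- a red SQUARE and a blue CIRCLE -- so the "binding" between
# color and shape cannot be recovered from the superimposed pattern.
other_merged = {"red", "square"} | {"blue", "circle"}
print(merged == other_merged)  # True
```

A network that averages or pools inputs from two sources in this way can no longer tell which combination of features it actually saw, which is the failure mode the IBM researchers report avoiding.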
