
Researchers Teach Computers To See As Humans Do

Can computers be taught to see just like people? Scientists at MIT's Center for Biological and Computational Learning think so.

Researchers are tackling computerized visual recognition by using mathematical models that work the same way our brains process images. This approach is fundamentally different from current visual recognition methods and could result in search tools that can identify people's faces in seconds.

The scientists work with the center's neurophysiologists, who study how the brain sorts images, down to how the smallest part of an image stimulates a photoreceptor in the eye and induces neurons to fire in a specific pattern. At MIT and elsewhere, computer scientists are building mathematical models of the neural firing patterns evoked by particular objects: cars, faces, and buildings. Eventually, the hope is that when a computer sees a car, it will respond by comparing the neural pattern it computes against patterns from earlier car sightings, just as humans do.
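The article doesn't describe the models themselves, but the matching idea, comparing a newly computed pattern against remembered ones, can be sketched in a few lines of Python. Everything below is hypothetical: the eight-number "firing pattern" vectors stand in for whatever representation the real models produce, and cosine similarity stands in for whatever comparison they actually use.

import numpy as np

# Hypothetical "firing pattern" vectors. In the real system these would come
# from a model of neural responses; here they are made-up 8-dimensional examples.
stored_patterns = {
    "car":      np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.1, 0.6, 0.3]),
    "face":     np.array([0.1, 0.9, 0.2, 0.8, 0.1, 0.7, 0.2, 0.6]),
    "building": np.array([0.5, 0.5, 0.1, 0.1, 0.9, 0.9, 0.4, 0.4]),
}

def cosine_similarity(a, b):
    """Similarity between two pattern vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(pattern):
    """Return the stored label whose remembered pattern best matches the new one."""
    best_label, best_score = None, -1.0
    for label, stored in stored_patterns.items():
        score = cosine_similarity(pattern, stored)
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# A new viewing whose pattern resembles earlier "car" viewings.
new_pattern = np.array([0.85, 0.15, 0.75, 0.25, 0.65, 0.15, 0.55, 0.35])
print(recognize(new_pattern))  # -> ('car', ...)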

That's different from current visual recognition technology, which has grown into a $7 billion industry, according to David Lowe, a University of British Columbia computer science professor. Today, programmers use statistical learning techniques dating back a half century to teach a computer that certain images contain trees and others don't. Pixel by pixel, the computer scrutinizes each image and statistically works out which characteristics trees share with one another but not with other objects.
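As a rough illustration of that pixel-by-pixel statistical approach, here is a toy classifier in Python. The article names no specific algorithm, so logistic regression trained by gradient descent is an assumption, and the four-pixel "images" are fabricated for the example.

import numpy as np

rng = np.random.default_rng(0)

# Toy 4-pixel "images": trees have higher pixel values here, non-trees lower.
# Real systems scrutinize thousands of pixels; the idea is the same.
trees     = rng.normal(loc=0.8, scale=0.1, size=(50, 4))
non_trees = rng.normal(loc=0.3, scale=0.1, size=(50, 4))
X = np.vstack([trees, non_trees])
y = np.array([1] * 50 + [0] * 50)  # 1 = tree, 0 = not a tree

# Logistic regression: the classifier statistically learns which pixel
# characteristics trees share with each other but not with other objects.
w, b = np.zeros(4), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "tree"
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

# Classify a new image: bright, tree-like pixels should score high.
new_image = np.array([0.75, 0.85, 0.8, 0.7])
prob_tree = 1.0 / (1.0 + np.exp(-(new_image @ w + b)))
print(f"P(tree) = {prob_tree:.2f}")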

Statistical learning systems recognize only one type of image, such as a product on an assembly line, says Stan Bileschi, an MIT researcher. An approach based on how the brain functions, by contrast, would allow software that recognizes many kinds of images. To index them, a user would tag one or two images of a specific item or scene, and the system would then find every such image in a database.
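That tag-and-find workflow amounts to a similarity search. The sketch below assumes each database image has already been reduced to a feature vector; the vectors, image names, and the 0.95 threshold are all invented for illustration.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical feature vectors for images already stored in a database.
database = {
    "IMG_001": np.array([0.9, 0.1, 0.2]),
    "IMG_002": np.array([0.8, 0.2, 0.1]),
    "IMG_003": np.array([0.1, 0.9, 0.8]),
    "IMG_004": np.array([0.2, 0.8, 0.9]),
}

def find_matches(tagged_examples, threshold=0.95):
    """Return database images resembling the user's one or two tagged examples."""
    prototype = np.mean(tagged_examples, axis=0)  # average the tagged examples
    return [name for name, vec in database.items()
            if cosine(vec, prototype) >= threshold]

# The user tags two pictures of the same scene; the system finds the rest.
tagged = [np.array([0.85, 0.15, 0.15]), np.array([0.95, 0.05, 0.2])]
print(find_matches(tagged))  # -> ['IMG_001', 'IMG_002']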

Bileschi expects neuron-based imaging technology to aid the development of more sophisticated surveillance software and to help neurologists interpret radiological images.

So far, scientists understand what happens during the first milliseconds of neurons firing in response to an image, but they know little about the feedback the brain then sends about it. A person seeing a blurry shape on a road, for instance, doesn't immediately recognize it as a car but knows it probably is one. Developing software that mimics that behavior will require more sophisticated imaging technologies.
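Software that hedges the way a person does could report a probability over labels rather than a single hard answer. The sketch below is one hypothetical way to express that, applying a softmax to made-up match scores; nothing in it comes from the MIT work.

import numpy as np

def soft_recognize(scores, temperature=0.1):
    """Turn raw match scores into probabilities, so a blurry image yields
    'probably a car' instead of a definite label."""
    logits = np.array(list(scores.values())) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return {label: float(p) for label, p in zip(scores, probs.round(2))}

# Low, noisy match scores, as a blurry roadside image might produce.
blurry_scores = {"car": 0.42, "truck": 0.35, "building": 0.10}
print(soft_recognize(blurry_scores))
# -> {'car': 0.65, 'truck': 0.32, 'building': 0.03}: a likely guess, not a certainty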
