Commentary
3/30/2018 08:00 AM
James Kobielus

Neuroevolution Will Push AI Development to the Next Level

Next up for developing artificial intelligence systems is automated neural-net architecture search.

Evolution is the historical process that created the intelligence behind these words. It’s also responsible for spawning the neural connections that readers are using to grasp what’s being expressed.

Any serious effort to develop “artificial general intelligence” must at some point recapitulate the evolutionary process within which neural networks took shape and became attuned to the world around them. Artificial intelligence researchers have been developing more sophisticated “neuroevolution” approaches for many years. Now, it would seem, the time is right for these approaches to enter the mainstream of commercialized AI in a big way.

As AI becomes the driving force behind robotics, more developers are exploring alternative approaches for training robots to master the near-endless range of environmental tasks for which they’re being designed. There is fresh interest in approaches that can train robots to walk as well as humans, swim like dolphins, swing from trees like gibbons, and maneuver with the aerial agility of bats. As I noted here, the robotics revolution has spurred AI researchers to broaden the scope of intelligence to encompass any innate faculty that enables any entity to explore, exploit, adapt, and survive in some environment.

Image: Shutterstock

In this new era, we’re seeing more research focused on evolutionary algorithms, which are designed to help neural nets automatically evolve their internal structures and connections through trial-and-error training scenarios. More broadly, there is an intensifying commercial and research focus on “neurorobotics,” as well as on overlapping fields such as reinforcement learning, embodied cognition, swarm intelligence, and multi-objective decision making.
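To make that idea concrete, here is a minimal sketch of the basic evolutionary loop (my own illustration, not code from any project mentioned here): each genome encodes a tiny one-hidden-layer network, mutation either perturbs weights or grows the hidden layer, and truncation selection keeps the fittest genomes for the next generation. The toy regression task and every hyperparameter are placeholders.

import numpy as np

rng = np.random.default_rng(0)

def new_genome(hidden=4):
    # Genome: two weight matrices for a one-hidden-layer network.
    return {"w1": rng.normal(0, 1, (1, hidden)),
            "w2": rng.normal(0, 1, (hidden, 1))}

def mutate(genome):
    # Either a structural mutation (grow the hidden layer) or a weight mutation.
    child = {k: v.copy() for k, v in genome.items()}
    if rng.random() < 0.2:
        child["w1"] = np.hstack([child["w1"], rng.normal(0, 1, (1, 1))])
        child["w2"] = np.vstack([child["w2"], rng.normal(0, 1, (1, 1))])
    else:
        child["w1"] += rng.normal(0, 0.1, child["w1"].shape)
        child["w2"] += rng.normal(0, 0.1, child["w2"].shape)
    return child

def fitness(genome, x, y):
    # Negative mean-squared error on a toy regression task.
    pred = np.tanh(x @ genome["w1"]) @ genome["w2"]
    return -np.mean((pred - y) ** 2)

x = np.linspace(-2, 2, 64).reshape(-1, 1)
y = np.sin(x)                                    # toy target to approximate
population = [new_genome() for _ in range(20)]
for _ in range(50):
    scored = sorted(population, key=lambda g: fitness(g, x, y), reverse=True)
    parents = scored[:5]                         # truncation selection
    population = parents + [mutate(parents[rng.integers(len(parents))])
                            for _ in range(15)]
print("best fitness:", fitness(scored[0], x, y))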

As Kenneth O. Stanley notes in this fascinating article, developers’ growing need for sophisticated techniques to accelerate neural-net architecture optimization has spurred convergence between the fields of neuroevolution and deep reinforcement learning. As he notes, researchers at OpenAI have developed a neuroevolution approach that boosts the performance of conventional deep reinforcement learning techniques on a variety of training tasks. In this way, researchers can go well beyond the traditional focus of AI training -- which takes a neural-net architecture as given and simply adjusts the weights among artificial neurons -- and use a simulated variant of “natural selection” to evolve the architecture itself through successive iterations.
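The OpenAI work Stanley cites is built on evolution strategies, a population-based search over network weights. Below is a hedged sketch of that style of update, not the actual research code: sample Gaussian perturbations of the current parameters, score each perturbed copy, and recombine the noise weighted by how well each copy scored. The reward function and hyperparameters here are stand-ins.

import numpy as np

rng = np.random.default_rng(1)

def reward(theta):
    # Stand-in for an episode return; a smooth function to maximize.
    return -np.sum((theta - 3.0) ** 2)

theta = np.zeros(10)                   # flattened policy parameters
sigma, alpha, pop_size = 0.1, 0.02, 50
for step in range(200):
    noise = rng.normal(0, 1, (pop_size, theta.size))   # one perturbation per "offspring"
    returns = np.array([reward(theta + sigma * n) for n in noise])
    advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
    theta += alpha / (pop_size * sigma) * noise.T @ advantages   # weighted recombination
print("final reward:", reward(theta))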

Stanley suggests that neuroevolution might soon become a standard capability in the DevOps toolkit of every practicing data scientist. He describes a hypothetical scenario in which alternative neural-net architectures are iteratively generated, tested, and selected within a robotics simulation lab.
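One way to picture that generate/test/select loop in miniature (purely illustrative, and nothing like a real robotics simulator): propose candidate architectures near the current best, score each with a cheap fitness proxy, and promote the winner to seed the next round. The data, the proxy, and the search range are all placeholders.

import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)

def evaluate(hidden_units):
    # Cheap fitness proxy: random hidden layer plus a least-squares readout.
    w = rng.normal(0, 1, (1, hidden_units))
    h = np.tanh(x @ w)
    readout, *_ = np.linalg.lstsq(h, y, rcond=None)
    return -np.mean((h @ readout - y) ** 2)

best = 8                                          # initial guess for hidden-layer width
for _ in range(5):                                # generate near the incumbent, test, select
    candidates = [best] + [max(2, best + int(rng.integers(-8, 9))) for _ in range(10)]
    scores = {h: evaluate(h) for h in candidates}
    best = max(scores, key=scores.get)
print("selected hidden-layer width:", best)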

This is an increasingly feasible scenario for mainstream developers, according to Stanley, thanks to the steadily improving availability and price-performance of GPUs and other AI-optimized processors in the cloud. “Neuroevolution,” he states, “is just as eligible to benefit from massive hardware investment as conventional deep learning, if not more. The advantage for neuroevolution, as with all evolutionary algorithms, is that a population of artificial neural networks is intrinsically and easily processed in parallel. If you have 100 artificial neural networks in the population and 100 processors, you can evaluate all of those networks at the same time, in the time it takes to evaluate a single network. That kind of speed-up can radically expand the potential applications of the method.”
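The parallelism argument is easy to demonstrate. In the hedged sketch below, a population of 100 candidate “genomes” is scored with one process per available core, so total wall-clock time approaches that of a single evaluation as more processors become available; the fitness function is a trivial stand-in for a full network rollout.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def fitness(genome):
    # Placeholder for evaluating one neural network on its task.
    return -float(np.sum((np.asarray(genome) - 1.0) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    population = [rng.normal(0, 1, 32) for _ in range(100)]   # 100 candidate networks
    with ProcessPoolExecutor() as pool:                        # one evaluation per worker
        scores = list(pool.map(fitness, population))
    print("best score in population:", max(scores))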

Of course, no one is claiming that neuroevolution is a mature field or that this AI training approach is widely deployed in production environments. However, it is clear that these evolutionary techniques for neural-net architectural optimization will begin to enter the mainstream of “automated machine learning” approaches within the next three to five years. As I noted in this recent Wikibon report, there is a growing range of automation tools for the new generation of developers who deploy machine learning, deep learning, and other AI capabilities into production applications.

It’s only a matter of time before automated neural-net architecture search comes into AI developer toolchains. As it does, it will supplement the automated feature engineering, algorithm selection, and model training capabilities that are already there.

Jim is Wikibon's Lead Analyst for Data Science, Deep Learning, and Application Development. Previously, Jim was IBM's data science evangelist. He managed IBM's thought leadership, social and influencer marketing programs targeted at developers of big data analytics and machine learning.