There's a new style of art in the world, and it's called Inceptionism. Search giant Google has been training artificial neural networks, a family of statistical learning models inspired by the biological neural networks found in the brain, to generate images based on information its engineers provide.
Using the example of a fork, which needs a handle and a certain number of tines, Google can input that information, while withholding details like shape, size, or color, and see what the network comes up with.
The network's results show a fascinating process whereby an artificial intelligence attempts to create images using data -- the company had particular trouble trying to get a neural network to correctly render dumbbells.
The whole notion of machine learning and AI has been making a lot of news lately. Beyond the worries it causes some people, such as Bill Gates, AI seems to be something that businesses are taking a look at. A recent report found that AI can actually help create jobs, adding another wrinkle to the whole notion of what IT means to enterprises.
In Google's case, however, art seems to be taking precedence over commerce.
"Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture," a team of Google software engineers explained in a June 17 Google Research Blog post. "We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance."
While this technique can be applied to any kind of image, the results vary considerably depending on the image, because the features present in it bias the network towards certain interpretations.
The network's visualizations can be fascinating and surreal, as it often tries to interpret shapes as objects based on the information it thinks is relevant, much the way we turn clouds into animals with our imagination, by recognizing familiar features.
"If we apply the algorithm iteratively on its own outputs and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about," the team wrote in the blog post. "We can even start this process from a random-noise image, so that the result becomes purely the result of the neural network."
Those images were generated purely from random noise, using a network trained on images of places by the MIT Computer Science and Artificial Intelligence Laboratory.
When that happens, the results are positively Dalí-esque: a riot of bright hues and collage-like images that wouldn't look out of place under a blacklight in a college dorm room.
Google also provides high-resolution versions of these images in its Inceptionism gallery, including some fascinating variations on pointillist master Georges Seurat's famous "A Sunday Afternoon on the Island of La Grande Jatte" and expressionist icon Edvard Munch's most famous painting, "The Scream."
"The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training, the post concludes. "It also makes us wonder whether neural networks could become a tool for artists -- a new way to remix visual concepts -- or perhaps even shed a little light on the roots of the creative process in general."