Software engineers at Google have been analyzing the ‘dreams’ of their computers. And it turns out that androids do dream of electric sheep… and also pig-snails, camel-birds and dog-fish.
This conclusion was reached after testing the ability of Google’s servers to recognize and create images of commonplace objects – bananas and measuring cups, for example. The result is some tessellating, Escher-esque artwork with Dali-like quirks.
So, what’s the point of creating these bizarre images? Is it purely to gauge our future robot overlords’ artistic potential, or is there a more scientific reason? As it turns out, it is for science: Google wants to know how effectively its computers are learning.
Google’s artificial neural network is like a computer brain, inspired by the central nervous system of animals. When the engineers feed the network an image, the first layer of ‘neurons’ has a look at it. That layer then ‘talks’ to the next layer, which has a go at processing the image in turn. This is repeated 10 to 30 times, with each layer identifying key features and isolating them until the network has figured out what the image shows. The network then tells us what it has valiantly identified the object to be, sometimes with little success. This is the process behind image recognition.
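The layer-by-layer hand-off described above can be sketched in a few lines of Python. This is a toy, untrained network with random weights – every name and number here is illustrative, not Google’s actual system – but it shows the basic shape of the process: each layer transforms the previous layer’s output and passes it along, and a final layer picks the best-scoring category.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common 'activation function': keep positive signals, drop the rest.
    return np.maximum(0.0, x)

# A fake 8x8 grayscale "image", flattened into a vector of 64 pixels.
image = rng.random(64)

# Ten layers of random weights, mirroring the "10 to 30" passes described above.
layers = [rng.standard_normal((64, 64)) * 0.1 for _ in range(10)]

activation = image
for weights in layers:
    # Each layer processes the previous layer's output and hands it on.
    activation = relu(weights @ activation)

# A final layer scores three made-up categories; the highest score "wins".
categories = ["banana", "measuring cup", "fork"]
scores = rng.standard_normal((3, 64)) @ activation
print("Identified as:", categories[int(np.argmax(scores))])
```

Because the weights are random, the answer is meaningless – which is exactly the point of training: adjusting those weights until the scores line up with reality.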
The Google team then realized that they could reverse the process: give the artificial neural network an object and ask it to create an image of that object, built from whatever features it associates with it. When we ask for a picture of a fork, the computer should have learned that the defining features of a fork are two to four tines and a handle, while things like size, color and orientation matter far less. The images in the picture above were created to ascertain whether the computer has grasped this sort of distinction.
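Running the network in reverse can also be sketched in miniature. The idea – hedged heavily, since this toy uses a single layer rather than a deep network – is to start from random noise and repeatedly nudge the pixels in whatever direction raises the score for the category we asked for. The weights, sizes and step count below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

weights = rng.standard_normal((3, 64)) * 0.1  # 3 categories, 64 "pixels"
target = 2                                    # the category we ask for

image = rng.random(64)                        # start from random noise
for _ in range(100):
    # The target's score is weights[target] @ image, so the direction that
    # raises it fastest (its gradient w.r.t. the pixels) is weights[target].
    image += 0.1 * weights[target]

scores = weights @ image
print("Best-scoring category:", int(np.argmax(scores)))
```

After enough nudges, the requested category comes out on top – the deep-network version of this same loop is what produces the dream-like pictures in the article.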
Sometimes, the resulting images are not quite what you’d expect… Take this picture of a dumbbell, for example: