Google recently released the code for DeepDream, its visualization tool based on artificial neural networks. The network has been trained on millions of images and can be asked what it sees in new imagery:
“The results are intriguing—even a relatively simple neural network can be used to over-interpret an image, just like as children we enjoyed watching clouds and interpreting the random shapes. This network was trained mostly on images of animals, so naturally it tends to interpret shapes as animals. But because the data is stored at such a high abstraction, the results are an interesting remix of these learned features.”
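The over-interpretation Google describes comes from a simple mechanic: instead of adjusting the network, DeepDream adjusts the *image*, nudging its pixels so that features the network already responds to fire even more strongly. Here is a toy sketch of that idea, assuming a single hypothetical linear feature detector `w` in place of a trained network like Google's (a real run climbs the gradient of a whole layer's activations, but the update has the same shape):

```python
import numpy as np

# Toy sketch of DeepDream's core mechanic: gradient ascent on the
# input image. "w" is a stand-in feature detector, not Google's model.

rng = np.random.default_rng(0)
w = rng.standard_normal(64)       # hypothetical feature detector
image = rng.standard_normal(64)   # a flattened 8x8 "image"

def response(x):
    # How strongly the detector fires on x (squared activation).
    return (w @ x) ** 2

def dream(x, steps=50, lr=0.01):
    x = x.copy()
    for _ in range(steps):
        grad = 2.0 * (w @ x) * w  # gradient of (w·x)^2 w.r.t. the image
        # Normalized ascent step: amplify what the detector already sees.
        x += lr * grad / (np.abs(grad).mean() + 1e-8)
    return x

dreamed = dream(image)
# The detector fires more strongly on the nudged image than the original.
```

Repeating this for many detectors at once is what paints half-recognized animals and eyes into clouds of noise: whatever the network faintly "sees" gets amplified until it is visible to us too.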
I put some of my Musical Anatomy images through this network to see what would happen. Some of the results could be interpreted as representations of sound fluidly moving through the air. Some of the results, with hints of animal faces and peacock eyes, are dreamlike (if not nightmarish). Here are some examples:
Astor & Pollux
Mr. Tambourine Man
Created with Dreamscope.