Artificial Intelligence has always fascinated me: not only as a continuously evolving set of useful tools, but also as a field for experimentation. Some years ago, when neural-network-based solutions began popping up everywhere, it was a breakthrough. A silent one, back in those days.
The idea of a system learning and developing itself without the direct influence of a programmer was a beautiful one. And a terrifying one for some people, who saw a Terminator-esque danger: the total domination of machines smashing human skulls in an AI apocalypse.
Here is a takeaway, even before we start:
It’s not about AI itself being evil. It’s about us, about people, being enabled to do evil things using AI.
Personally, I was simply overwhelmed by the creative possibilities that AI opened up. Back in the early days of neural networks, I was captivated by Google Deep Dream and installed a virtual instance of it on my laptop (you can try it right now in the web-based version provided by Deep Dream developer Alex Mordvintsev).
To recap, what did Google Deep Dream do?
- analyze a photograph
- recognize familiar patterns and objects in it
- over several iterations, modify the original image with its own interpretation of the recognized objects, producing a kind of echo-chamber effect.
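The core trick behind those iterations is gradient ascent on the input image: instead of adjusting the network's weights, you adjust the pixels so that some layer's activations grow stronger, then repeat. Below is a minimal numpy sketch of that idea, not Deep Dream's actual implementation (which runs the ascent through a trained convolutional network such as Inception); here the "layer" is just a random linear map `W` standing in for learned feature detectors, and the "image" is a flat vector.

```python
import numpy as np

def dream_step(img, W, lr=0.01):
    """One gradient-ascent step on the image itself:
    nudge the pixels so the layer activations (W @ img) grow stronger."""
    act = W @ img                      # forward pass: "layer" activations
    grad = 2.0 * W.T @ act             # gradient of sum(act**2) w.r.t. img
    img = img + lr * grad / (np.abs(grad).max() + 1e-8)  # normalized step
    return np.clip(img, 0.0, 1.0)      # keep "pixel" values in range

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 64))       # toy stand-in for feature detectors
img = rng.random(64)                   # toy stand-in for an input image

losses = []
for _ in range(20):                    # iterate: the echo-chamber effect
    losses.append(float(np.sum((W @ img) ** 2)))
    img = dream_step(img, W)
```

Run on a real network, each iteration amplifies whatever the feature detectors respond to, which is exactly why half-recognized dogs and eyes keep surfacing in the output.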
To begin with, I just took a selfie.
My unnerving selfie full of eyes.
Then I ran various photos of mine through the virtual DeepDream installation:
Dogs. Lots of dogs. The trained neural network recognized dogs in almost everything.
By the way, the reason for so many dogs is well documented:
A neural network’s ability to recognize what’s in an image comes from being trained on an initial data set. In Deep Dream’s case, that data set is from ImageNet, a database created by researchers at Stanford and Princeton who built a database of 14 million human-labeled images. But Google didn’t use the whole database. Instead, they used a smaller subset of the ImageNet database released in 2012 for use in a contest… a subset which contained “fine-grained classification of 120 dog sub-classes.” (FastCompany)
Then I used my Merzmensch userpic (a selfie in a polished iron marble, shot in 2006):
The results were pretty weird: Brueghel- and Bosch-esque.
Here there was far more than just dogs: tin-can-like structures, caterpillar-ish creatures. It was mesmerizing to watch the alteration of the original photo, iteration by iteration.
Here was huge potential for new art forms, for new collaborations between machine and human. Because in this respect, a human doesn’t seem so distinguishable from a cybernetic entity:
We (humans and AI alike) recognize in what we perceive the subjects we already know. We take this perception for the only true one. We construct reality inside our own OS, whether that is a complex collaboration of brain and hormones or a sophisticated deep learning process.
DeepDream was just the beginning. Nowadays there is a huge variety of experiments and inventions around AI and creativity. Let’s brainstorm and think weirdly. In the following series, “AI & Creativity,” I will examine the art in the artificial.
Tell me, what do you think about creativity and Artificial Intelligence? Are they compatible?