un Projects is based on the unceded sovereign land and waters of the Wurundjeri and Boon Wurrung people of the Kulin Nation; we pay our respects to their Elders, past, present and emerging.

The Future of Art: Collaborating with Computers

by J. Rosenbaum

J. Rosenbaum, Xe (2016), digital print on MDF with Augmented Reality Interaction, 76.2 x 76.2 cm. Download the Recursion app to see the interaction.

A new brain, a new way of thinking, a silicon mind fresh for moulding. Neural networks only know what they have been taught: a blank slate, a clean canvas. But they are so much more than just a surface or a new medium to explore; Artificial Intelligence (AI) is the future of art, offering a way to collaborate with our machines to create new forms of art.

Do we need breath to create art? Do we need emotions and fleshy appendages to be able to realise our artistic visions? What is a brain but neurons sparking with the sum total of our knowledge, summoning words and pictures in response to our unspoken commands? What is a neural network but a digital brain, artificial neurons connected into a structured environment where they can learn from each other and pass information back and forth? Imagine having the power to work with something that can learn and be shaped and moulded by your teachings, something that can be fed images from a specific time or style while you watch it try to generate its own based on what it has learned: slowly at first, but gaining in skill and complexity; not mimicking exactly, but trying to create its own work based on the inspiration you have given it, the impulses you have shared and the knowledge you have imparted.
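
For the technically curious, that idea can be compressed into a few lines of Python. The sketch below, using NumPy with invented numbers standing in for real teachings, shows a tiny network of artificial neurons passing signals forward and adjusting its connections toward what it has been shown; real image-making networks are vastly larger, but they rest on the same loop.

```python
import numpy as np

# A tiny "digital brain": two layers of artificial neurons that pass signals
# forward and adjust their connections toward what they have been shown.
# The numbers here are invented stand-ins for real training material.

rng = np.random.default_rng(0)
inputs = rng.random((4, 3))        # four example "impressions", three features each
targets = rng.random((4, 1))       # what we would like the network to produce

w1 = rng.standard_normal((3, 5))   # connections from input neurons to hidden neurons
w2 = rng.standard_normal((5, 1))   # connections from hidden neurons to the output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(1000):
    hidden = sigmoid(inputs @ w1)  # neurons spark and pass information forward
    output = sigmoid(hidden @ w2)

    error = targets - output       # how far the output is from what it was taught
    d_out = error * output * (1 - output)
    d_hid = (d_out @ w2.T) * hidden * (1 - hidden)

    w2 += 0.5 * hidden.T @ d_out   # strengthen or weaken connections: learning
    w1 += 0.5 * inputs.T @ d_hid
```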

For some makers, creating art is a process of taking everything we have seen and experienced and interpreting it in visual form, turning it into something that others can see and appreciate. Neural networks create by absorbing everything they have been taught and working to create something new, something original. They are art students learning only what we teach them, and we are in the unique position of being able to help them grow.

DeepDream was the spark that ignited the revolution in AI-based art. A hallucination generator, it sees how something resembles something else and builds on its own assumptions to create something no one has ever seen before — a melange of psilocybin-induced critters with too many eyes and dogs' heads and insect legs. DeepDream works in a similar way to seeing pictures in clouds; the more one neuron sees something and passes it to the next, the more the next neuron agrees and reinforces that image. It loses the big picture to create a series of tiny ones inside the greater whole. The default model most people use for DeepDream comes from the Caffe Model Zoo: a network trained on images full of animals. As neural networks know only what they have been taught, DeepDream running on that model cannot help but see animal influences in everything. The creatures it creates are reminiscent of Bosch: demonic creations of nightmare and fantasy, a new surrealism.
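
The mechanism behind the hallucination is surprisingly compact. The rough sketch below, in Python with TensorFlow, takes a pretrained Inception network (standing in for the Caffe GoogLeNet the original DeepDream notebook used) and repeatedly nudges an image in the direction that most excites one of its layers; the layer choice, step size, step count and file name are illustrative assumptions, not canonical settings.

```python
import tensorflow as tf

# A rough sketch of the DeepDream idea: push an image in the direction that most
# excites one layer of a pretrained network, so whatever that layer half-sees
# gets amplified, pass after pass. InceptionV3 stands in for the Caffe GoogLeNet
# of the original notebook; "mixed4" and "clouds.jpg" are illustrative choices.

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
dream_model = tf.keras.Model(base.input, base.get_layer("mixed4").output)

img = tf.keras.utils.load_img("clouds.jpg", target_size=(512, 512))
img = tf.keras.applications.inception_v3.preprocess_input(
    tf.keras.utils.img_to_array(img))
img = tf.Variable(img[None, ...])            # the picture we let the network dream on

for step in range(50):
    with tf.GradientTape() as tape:
        activation = dream_model(img)
        loss = tf.reduce_mean(activation)    # how excited is this layer by what it sees?
    grad = tape.gradient(loss, img)
    grad /= tf.math.reduce_std(grad) + 1e-8
    img.assign_add(0.01 * grad)              # nudge the image toward the dream
    img.assign(tf.clip_by_value(img, -1.0, 1.0))
```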

In style transfer, the machine learns the art of making one work look as though it were made in the style of another, giving everyone the opportunity to have their portrait painted by Monet or Van Gogh. Artistically, however, style transfer is a way to break a work down into its components and rebuild it in the style of something else entirely. If you let go and allow the digital materiality to take over, the work becomes something new rather than a shallow imitation. It can become a collaborative effort of algorithmic technique and style, a new form of inspiration and imitation.
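
Under the hood, that breaking down and rebuilding is a pair of measurements: deep feature maps stand for content, and correlations between features (Gram matrices) stand for style. The sketch below follows the Gatys approach in Python with TensorFlow and a pretrained VGG19; the layer picks, file names and loss weighting are assumptions for illustration, not a definitive recipe.

```python
import tensorflow as tf

# A compressed sketch of Gatys-style transfer: measure "content" as deep feature
# maps and "style" as correlations between features (Gram matrices), then rebuild
# an image that keeps the first while borrowing the second.

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
vgg.trainable = False
style_layers = ["block1_conv1", "block2_conv1", "block3_conv1", "block4_conv1"]
content_layer = "block4_conv2"
extractor = tf.keras.Model(
    vgg.input, [vgg.get_layer(n).output for n in style_layers + [content_layer]])

def load(path):
    img = tf.keras.utils.img_to_array(
        tf.keras.utils.load_img(path, target_size=(512, 512)))[None, ...]
    return tf.keras.applications.vgg19.preprocess_input(img)

content, style = tf.constant(load("portrait.jpg")), tf.constant(load("monet.jpg"))

def gram(f):                                   # feature correlations = "style"
    _, h, w, c = f.shape
    flat = tf.reshape(f, (-1, c))
    return tf.matmul(flat, flat, transpose_a=True) / tf.cast(h * w, tf.float32)

style_targets = [gram(f) for f in extractor(style)[:-1]]
content_target = extractor(content)[-1]

image = tf.Variable(content)                   # start rebuilding from the content image
opt = tf.keras.optimizers.Adam(learning_rate=2.0)

for step in range(200):
    with tf.GradientTape() as tape:
        feats = extractor(image)
        style_loss = tf.add_n([tf.reduce_mean(tf.square(gram(f) - t))
                               for f, t in zip(feats[:-1], style_targets)])
        content_loss = tf.reduce_mean(tf.square(feats[-1] - content_target))
        loss = content_loss + 1e-2 * style_loss   # the balance is a judgement call
    grad = tape.gradient(loss, image)
    opt.apply_gradients([(grad, image)])
```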

The act of critique is critical to the artistic process. The artist must learn to assess their works and create according to their own success metrics. A Generative Adversarial Network (GAN) is an algorithmic answer to this: a generator creates works while a discriminator decides whether each one meets the criteria. The human is the puppet master, training the networks on the images they want them to work from and be inspired by. It takes thousands of images to train a GAN, and the curation of those images is a meditative process that involves knowing what you want to see but trusting the network enough to let go and see what it becomes, what it realises.
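
In code, that maker-and-critic relationship looks roughly like the sketch below: two networks, one generating and one judging, each trained against the other. Real image-making GANs are convolutional and trained on those thousands of curated images; here two tiny dense networks and random stand-in "images" just show the shape of the loop.

```python
import tensorflow as tf

# A bare-bones sketch of the generator/discriminator dance, with random noise
# standing in for a curated training set.

latent, img_size = 64, 28 * 28

generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(img_size, activation="tanh"),
])
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(img_size,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(2e-4)
d_opt = tf.keras.optimizers.Adam(2e-4)

for step in range(1000):
    real = tf.random.uniform((32, img_size), -1.0, 1.0)   # stand-in for a curated batch
    fake = generator(tf.random.normal((32, latent)))

    # The discriminator plays critic: real work should score 1, generated work 0.
    with tf.GradientTape() as tape:
        d_loss = bce(tf.ones((32, 1)), discriminator(real)) + \
                 bce(tf.zeros((32, 1)), discriminator(fake))
    grads = tape.gradient(d_loss, discriminator.trainable_variables)
    d_opt.apply_gradients(zip(grads, discriminator.trainable_variables))

    # The generator tries to make work the critic will accept as real.
    with tf.GradientTape() as tape:
        g_loss = bce(tf.ones((32, 1)),
                     discriminator(generator(tf.random.normal((32, latent)))))
    grads = tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(grads, generator.trainable_variables))
```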

J. Rosenbaum, Bark (2018), digital print on paper with Augmented Reality Interaction, 60.96 x 60.96 cm. Download the [un] natural app to see the interaction.

Every part of my artwork is digital. From the people, lovingly crafted in 3D and textured and rendered realistically, to the algorithms that abstract and alter them, to the Augmented Reality apps that bring them all together. My world is digital and my creations live inside my computer and on walls and in phones as a way to connect with them, a reminder that our worlds are interconnecting across a digital gap and becoming more computerised. My computer is my collaborative partner — I talk with it, cajole, argue. I paint ideas on my iPad and write on clouds.

I began my algorithmic artistic journey through style transfer, creating works on non-binary transness that explore the notion of transition without transitioning, the nature of not passing. It is about being who you are: your inspirations and the things you see become part of you, and when you look at yourself you see parts of everything you have consumed visually coming together. These works are post-gender, post-digital and post-human, but they also explore the inherent nature of neural networks and our own brains. Each work is hung on the wall, a traditional white box gallery concept: a print and its partner, an abstract laser-cut work. The realistic print would abstract, and the abstraction would reconstruct itself back into the print. The abstractions were all performed with style transfer and animated inside an Augmented Reality app, allowing you to share in the world of the works, however briefly, by using your phone to bridge the gap between the human world and the digital. A series of gods — beyond human, beyond binary notions of gender, they become an abstract notion. Or they go from being something abstract and barely understood to being a whole person once again.

My series of [un]natural works took algorithmic art further, creating photographic abstractions based on the idea of texturing, a key component of digital art and particularly of 3D art. The neural network would single out a section of a photograph and attempt to create a repeating pattern, using the photograph as a texture. I then used style transfer to reveal dancing figures among the leaves and emerging from the bark. Naiads, dryads and nereids: by placing a human face on the environmental concerns of today through an entirely digital medium, we see a possible way forward for AI to help us see the environment as a living, breathing entity requiring empathy.

Working with computers every day, we take them for granted: their ability to make our daily tasks easier, the possibilities for communicating all over the world and sharing our grand ideas with everyone who will listen, a void to scream into. But the collaborative possibilities of Artificial Intelligence are becoming more and more accessible — websites and apps such as deepart.io and deepdreamgenerator.com let anyone try making machine-learning-based artistic creations. If your interest is piqued by the possibilities inherent in teaching a silicon brain to create and think, you can delve further into GitHub and download machine learning frameworks such as TensorFlow and Caffe to create your own artworks using Python, JavaScript and Lua. Once you start down the rabbit hole of AI-based art, a new world opens up. The possibilities are exciting and only just beginning. Beautiful artworks can be created with websites and apps, but working with the code to create something new and unique is an amazing way to rethink how art is made, to work with your own machine to form new ideas and to look at art in a new way. Imagine being able to converse with your canvas to make something unique. Rather than making every decision yourself, you learn to trust the code and your settings; you learn to let go and learn from the neural network as it learns from you. You create, and as you create you are inspired to create more. And as you create more, the algorithms learn and inspire you further. A perfect collaborative cycle.
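
As one possible first step down that rabbit hole, assuming TensorFlow is installed, a few lines of Python will load a pretrained vision network and report what it "sees" in one of your own images; the same network's inner layers are the ones DeepDream and style transfer borrow. The image file name here is a placeholder.

```python
import numpy as np
import tensorflow as tf

# Load a pretrained classifier and ask what it "sees" in an image of your own.
# "my_artwork.jpg" is a placeholder path.

model = tf.keras.applications.InceptionV3(weights="imagenet")

img = tf.keras.utils.load_img("my_artwork.jpg", target_size=(299, 299))
batch = tf.keras.applications.inception_v3.preprocess_input(
    np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

predictions = model.predict(batch)
for _, label, score in tf.keras.applications.inception_v3.decode_predictions(
        predictions, top=3)[0]:
    print(f"{label}: {score:.2f}")
```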

J. Rosenbaum is an artist working at the intersection of art and technology with Augmented Reality, Artificial Intelligence and 3D modelling. They have a Master's degree in Contemporary Art from the University of Melbourne's Faculty of the VCA and MCM, and they speak about art and technology.