
Are there limits to a computer’s creativity? On June 1, Google launched Magenta, a project within the Google Brain group, to explore that question. Magenta is a research effort that uses artificial intelligence (AI) to create music, images and text. So far the group has about six researchers, and it plans to invite outside contributors through TensorFlow, Google’s open-source machine-learning library, to help tackle the problem of creative machines.

“Magenta has now generated its first piece of AI music,” according to Jason Freidenfelds of Google’s Global Communications & Public Affairs team. Elliot Waite at Google created the little ditty embedded below using an LSTM (long short-term memory) neural network:

[Audio: Magenta’s first machine-generated melody]

Machine-generated music has been done before, but not in this fashion: this melody was entirely self-learned. For this composition, the neural network was “primed with just four notes, C, C, G, G — think ‘twin-kle twin-kle,’” explains Freidenfelds. “The important parts here are memory and attention — the neural network has to be able to look over a collection of MIDI tunes to get a sense of what’s important to focus on, to either repeat it or change it. We added some drums just to hold it together, but the melody is machine-generated. We didn’t give [the AI] any rules about music, or any little rules-of-thumb to help it generate anything nice-sounding (as most previous machine-generated music has been done).”
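For readers curious about the mechanics, the sketch below illustrates the general idea in Python with TensorFlow: a small LSTM next-note model is primed with the MIDI pitches for C, C, G, G and then sampled one note at a time. This is a toy illustration, not Magenta’s actual model or code; the layer sizes and the generate helper are assumptions made for the example.

```python
# Illustrative sketch only -- not Magenta's actual code. It shows the general
# idea described above: an LSTM next-note model primed with four notes
# (C, C, G, G -> MIDI pitches 60, 60, 67, 67), then sampled step by step.
import numpy as np
import tensorflow as tf

VOCAB = 128  # MIDI pitches 0-127

# A small next-note prediction model (hypothetical architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.Dense(VOCAB),  # logits over the next pitch
])
# In practice the model would first be trained on a corpus of MIDI melodies,
# i.e. model.compile(...) / model.fit(...) on (sequence, next-note) pairs.

def generate(prime, steps=32, temperature=1.0):
    """Extend a priming sequence of MIDI pitches one note at a time."""
    melody = list(prime)
    for _ in range(steps):
        logits = model(np.array([melody]))[0, -1] / temperature
        next_note = tf.random.categorical(logits[None, :], 1)[0, 0].numpy()
        melody.append(int(next_note))
    return melody

print(generate([60, 60, 67, 67]))  # primed with "twin-kle twin-kle"
```

The point of the exercise, as Freidenfelds describes it, is that nothing in the model encodes musical rules: whatever structure appears in the output has to be learned from the MIDI training data itself.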

According to Popular Science, Douglas Eck, the researcher who started the Magenta project, said the group will first tackle algorithms that generate music and then move on to visual art. Eck’s inspiration for Magenta came from Google’s DeepDream project, in which researchers examined how their AI algorithms perceived objects by asking the networks to generate examples of them. Here is an example of what Google’s DeepDream neural network thinks leaves, landscapes and trees look like:

[Image: DeepDream’s rendering of leaves, landscapes and trees]

Researchers then fed the picture the network produced back in as the next input, and the neural network soon began to create an “endless stream of new impressions.” They call these images the neural network’s “dreams”: entirely original images built from what the network has learned about real objects. Below are examples of what happens when the neural network was asked to pull patterns out of white noise:

[Images: patterns pulled out of white noise by the iterative places205-GoogLeNet network]
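Under the hood, DeepDream-style images come from gradient ascent: the input image is nudged, step by step, in the direction that makes a chosen layer of the network fire more strongly, and the result is fed back in again. The sketch below is a minimal illustrative version in TensorFlow, assuming a pretrained InceptionV3 and an arbitrarily chosen layer (“mixed3”); it is not Google’s released DeepDream code.

```python
# Minimal, illustrative DeepDream-style loop -- not Google's released code.
# It nudges an image via gradient ascent so a chosen layer activates more
# strongly, then repeats; starting from noise yields "dream" patterns.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
# "mixed3" is an arbitrary mid-level layer chosen for illustration.
feature_extractor = tf.keras.Model(base.input, base.get_layer("mixed3").output)

def dream(image, steps=100, step_size=0.01):
    image = tf.Variable(image)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            activations = feature_extractor(image)
            loss = tf.reduce_mean(activations)     # how strongly does this layer fire?
        grads = tape.gradient(loss, image)
        grads /= tf.math.reduce_std(grads) + 1e-8  # normalize the step
        image.assign_add(step_size * grads)        # gradient *ascent*
        image.assign(tf.clip_by_value(image, -1.0, 1.0))
    return image.numpy()

# Start from white noise, as in the images above.
noise = np.random.uniform(-0.5, 0.5, size=(1, 299, 299, 3)).astype("float32")
dreamed = dream(noise)
```

Feeding the output back in as the next input, as the researchers describe, simply means calling a loop like this repeatedly on its own result.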

AI has typically had a hard time with things humans find easy, like recognizing objects in images and other tasks that rely on intuition. Google wants to change this with projects such as DeepDream and Magenta, which aim to teach AI to be as creative and intuitive as humans.

Eck also mentioned a potential Magenta app, which would showcase the music and art created by the project and gauge whether people like the work because it is interesting or because it holds real artistic value. The project has a GitHub page where the team will post code and other information about Magenta’s progress in the near future.

All Photos By Michael Tyka/Google. Music provided by Project Magenta/Google.
