
Google's 'Magenta' project will see if AIs can truly make art

Google's research aims to find the limits of computer creativity.

Google's next foray into the burgeoning world of artificial intelligence will be a creative one. The company has previewed a new effort, called Magenta, to teach AI systems to generate music and art. It'll launch officially on June 1st, but Google gave attendees at the annual Moogfest music and tech festival a preview of what's in store. As Quartz reports, Magenta comes from Google's Brain AI group -- which is responsible for many uses of AI in Google products like Translate, Photos and Inbox. It builds on previous efforts in the space, using TensorFlow -- Google's open-source library for machine learning -- to train computers to create art. The goal is to answer the questions: "Can machines make music and art? If so, how? If not, why not?"
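For a rough sense of what "training computers to create art" can involve in practice, here's a minimal next-note prediction model in TensorFlow. To be clear, Magenta's code isn't public yet, so everything in this sketch (the architecture, the layer sizes, the note vocabulary) is an illustrative assumption, not anything Google has shown:

```python
# Hypothetical sketch, not Magenta's actual code: a tiny model that learns
# to predict the next note in a sequence, the basic setup behind most
# neural music generators.
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 128  # MIDI pitches 0-127, treated as a token vocabulary
SEQ_LEN = 32      # how many previous notes the model sees at once

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(VOCAB_SIZE),  # logits over the next note
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# Real training data would be (sequence, next-note) pairs extracted from
# music; random integers stand in here just to show the expected shapes.
x = np.random.randint(0, VOCAB_SIZE, size=(1000, SEQ_LEN))
y = np.random.randint(0, VOCAB_SIZE, size=(1000,))
model.fit(x, y, epochs=1, batch_size=32)
```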

That's not an entirely new endeavor. Researchers and creatives have been generating music through technology for years. One notable name in the field is Dr. Nick Collins, a composer who uses machine learning to create songs, some of which were adapted in the making of a computer-generated musical launched earlier this year. Individuals have also created songs using publicly available recurrent neural network code, while companies like Jukedeck are already commercializing their models.

How Google's efforts in the space will differ from those that came before them is still unknown. From the brief demo at Moogfest, though, it appears Magenta will be similar to others. The most important part of the process is training, in which the AI absorbs and learns from a particular type of media -- at Moogfest, the focus was obviously music. Once it's trained, the network can be "seeded" with a few notes, then let loose to turn those notes into a full piece of music. The output of this process can generally be tweaked with variables that define how complex its calculations should be, and how "creative" or "safe" its output should be.
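In code, that seed-and-generate loop might look something like the sketch below, which reuses the toy model from earlier. The "creative" versus "safe" dial maps onto what's commonly called a sampling temperature; again, this is an assumption about how Magenta could work, not a confirmed detail:

```python
# Hypothetical continuation of the sketch above: seed the trained model
# with a few notes, then sample one note at a time. A higher temperature
# flattens the distribution ("creative"); a lower one sharpens it ("safe").
def generate(model, seed_notes, length=64, temperature=1.0):
    notes = list(seed_notes)
    for _ in range(length):
        context = np.array([notes[-SEQ_LEN:]])        # most recent notes
        logits = model.predict(context, verbose=0)[0]
        logits = logits / temperature                  # the creativity dial
        probs = tf.nn.softmax(logits).numpy().astype("float64")
        probs /= probs.sum()                           # renormalize for sampling
        notes.append(int(np.random.choice(VOCAB_SIZE, p=probs)))
    return notes

# Five seed notes (C D E F G as MIDI pitches), extrapolated into a melody.
melody = generate(model, seed_notes=[60, 62, 64, 65, 67], temperature=0.8)
```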

DeepDream, Google's visual AI that could transform photos into psychedelic art, worked on a similar principle, as do other neural networks like Char-RNN, which we used to train a writing bot. Douglas Eck, a Google Brain researcher who led the talk at Moogfest, said the ultimate aim was to see how well computers can create new works of art semi-independently.

A neural network demoed at Moogfest extrapolated five notes into a more complex melody.

Unless Google has made a significant breakthrough, it's likely Magenta will involve multiple distinct efforts across the fields it's looking into -- a single neural network wouldn't be able to create both music and visual art. At first, the focus will be on music, before moving on to visual arts with other projects.

Before working on Magenta, Eck was responsible for music search and recommendation for Google Play Music. Perhaps it should come as no surprise, then, that he's also interested in other uses for AI in music and the arts. If a computer can understand why you like to listen to a song at any given moment, it can better recommend others. This sort of user-specific, context-aware recommendation is something all music services want to offer, but none has really nailed yet. That research isn't part of Magenta, but it gives you an idea of how many uses AI can have in the field beyond "just" generating pieces.

As with DeepDream, Google will be working on Magenta out in the open, sharing its code and findings through developer resources like GitHub. The first public release will be a simple program that can be trained using MIDI files. It's not clear whether an equally simple way to generate new music from that training will also be available on June 1st, but Eck committed to regularly adding software to the Magenta GitHub page and updating its blog with progress reports.
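Google hasn't shown what that MIDI-trained program will accept, but the preprocessing step it implies is easy to picture. This sketch uses the third-party pretty_midi library, purely as an illustration, to flatten a MIDI file into the kind of pitch sequence a model like the one above could learn from:

```python
# Hypothetical preprocessing sketch (not Magenta code): flatten a MIDI file
# into a sequence of pitches using the third-party pretty_midi library.
import pretty_midi

def midi_to_pitches(path):
    pm = pretty_midi.PrettyMIDI(path)
    notes = []
    for instrument in pm.instruments:
        if instrument.is_drum:
            continue  # skip percussion; it carries no meaningful pitch
        notes.extend(instrument.notes)
    notes.sort(key=lambda n: n.start)  # order notes by onset time
    return [n.pitch for n in notes]    # MIDI pitch numbers, 0-127

pitches = midi_to_pitches("example.mid")
```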