
Notes & Lines

Notes & Lines discusses the intersection of math, science, and technology with performing and visual arts. Topics include bizarre instruments, technically-minded musicians, and cross-pollination of science and art.


Musical AI Takes Off

Posted July 28, 2016 12:00 AM by Hannes

Music is a no-brainer when it comes to AI research. It has a finite set of rules, a relatively limited scale, and fairly firm limits on what "sounds good," at least by a given researcher's subjective standard. The concept of musical AI is also at least 50 years old: well-known futurist Ray Kurzweil appeared with a music-writing computer of his own invention on I've Got a Secret in 1965. However outlandish a music-writing computer might have been in the 1960s, Kurzweil unfortunately stumped only one panelist.

This summer has seen a bumper crop of headlines about musical AI. The most notable example is probably IBM’s Watson, the Jeopardy-winning supercomputer. IBM is leveraging Watson to create Watson Beat, a new app designed to boost a musician’s creativity. A user feeds the app a musical snippet and a list of desired instrument sounds, and Watson more or less remixes and alters the sample, choosing its own tempo and doing its own orchestration. Richard Daskas, a composer working on the Beat project, says the app could be helpful for a DJ or composer experiencing “musician’s block” in that it “generate[s] ideas to create something new and different.” An IBM researcher working on the project says Watson Beat should be available commercially by the end of the year.
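
To make that workflow a little more concrete, here is a minimal Python sketch of the general idea: a short melody and a list of instruments go in, a tempo and a crudely re-orchestrated variation come out. It is purely illustrative; the note choices, tempo list, and "variation" logic are invented for the example and have nothing to do with IBM's actual Watson Beat code or API.

```python
import random

# Toy stand-in for the Watson Beat workflow described above: take a short
# melodic snippet and a list of desired instruments, choose a tempo, and
# hand each instrument a simple variation of the snippet.

def transpose(snippet, semitones):
    """Shift a snippet (a list of MIDI note numbers) by a fixed interval."""
    return [note + semitones for note in snippet]

def remix(snippet, instruments, seed=None):
    rng = random.Random(seed)
    tempo = rng.choice([90, 100, 110, 120, 132])      # the app "chooses its own tempo"
    arrangement = {}
    for i, instrument in enumerate(instruments):
        phrase = transpose(snippet, rng.choice([-12, -5, 0, 4, 7, 12]))
        if i % 2:                                     # alternate parts get their notes reordered
            rng.shuffle(phrase)
        arrangement[instrument] = phrase
    return tempo, arrangement

if __name__ == "__main__":
    melody = [60, 62, 64, 67, 65, 64, 62, 60]         # a C-major snippet as MIDI note numbers
    tempo, parts = remix(melody, ["piano", "bass", "strings"], seed=42)
    print("tempo:", tempo, "BPM")
    for instrument, phrase in parts.items():
        print(f"{instrument:>8}: {phrase}")
```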

Where there's a developing tech area, you can expect Google to be in the ring. A few months ago the tech giant released a 90-second melody created by its Magenta program. Google is using Magenta, a project it first announced in May, to apply its machine-learning systems to creating art and music. As with Watson Beat, a user feeds Magenta a series of notes, and the program expands them into a longer, more complex sample. Magenta relies on a trained neural network, an AI technique loosely inspired by the brain, to remix its inputs. Google's neural-network efforts have already tackled the visual arts: its DeepDream algorithm was the basis for a gallery show early in 2016.
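
To give a rough feel for the seed-to-continuation pattern, here is a small Python sketch that stretches a short series of notes into a longer line. It deliberately swaps Magenta's trained neural network for a first-order Markov chain built from the seed itself, so it illustrates the interaction (seed in, longer continuation out), not Google's model.

```python
import random
from collections import defaultdict

# Extend a short seed melody into a longer sample. A first-order Markov
# chain over the seed stands in for Magenta's trained neural network.

def build_transitions(seed_notes):
    """Record which note tends to follow which in the seed."""
    transitions = defaultdict(list)
    for current, nxt in zip(seed_notes, seed_notes[1:]):
        transitions[current].append(nxt)
    return transitions

def continue_melody(seed_notes, length=32, rng=None):
    rng = rng or random.Random()
    transitions = build_transitions(seed_notes)
    melody = list(seed_notes)
    while len(melody) < length:
        choices = transitions.get(melody[-1]) or seed_notes   # fall back to the seed at a dead end
        melody.append(rng.choice(choices))
    return melody

if __name__ == "__main__":
    seed = [60, 62, 64, 65, 67, 65, 64, 62]                   # rising-and-falling C-major fragment
    print(continue_melody(seed, length=24, rng=random.Random(7)))
```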

Recent research from Baidu, the Chinese tech giant, takes a different tack and combines AI, visual art, and music. The company’s experimental machine learning algorithms analyze patterns in visual artworks and map them to musical patterns, creating a kind of “soundtrack.” (Check out this video, in which the AI Composer tackles Van Gogh’s Starry Night and other images.) Baidu says the program first attempts to identify known objects such as animals or human faces within the image, and analyzes colors for perceived moods (red=passion, yellow=warmth, etc.). AI Composer contains a large musical library categorized by “feel,” and it draws upon these musical samples to piece together an original composition in the mood of the image.
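
As a back-of-the-envelope illustration of the color-to-mood step, here is a toy Python sketch that maps a dominant image color to a mood tag and pulls samples from a mood-indexed library. The mood table, anchor colors, and sample names are all invented for the example; Baidu has not published its mapping, and a real system would extract the dominant color from actual pixel data and learn the mood analysis rather than hard-code it.

```python
# Toy version of the "color -> mood -> sample" step described for AI Composer.
# The mood table and sample library below are invented for illustration.

MOODS = {
    "red": "passion",
    "yellow": "warmth",
    "blue": "calm",
    "green": "serenity",
}

SAMPLE_LIBRARY = {
    "passion": ["flamenco_guitar.wav", "driving_strings.wav"],
    "warmth": ["soft_brass.wav", "acoustic_pad.wav"],
    "calm": ["night_piano.wav", "slow_cello.wav"],
    "serenity": ["harp_arpeggio.wav", "woodwind_drone.wav"],
}

def nearest_color_name(rgb):
    """Map an (R, G, B) tuple to the closest of a few named anchor colors."""
    anchors = {"red": (220, 40, 40), "yellow": (230, 210, 60),
               "blue": (50, 80, 200), "green": (60, 170, 80)}
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(anchors, key=lambda name: dist(rgb, anchors[name]))

def pick_samples(dominant_rgb):
    """Return the perceived mood and candidate samples for a dominant color."""
    mood = MOODS[nearest_color_name(dominant_rgb)]
    return mood, SAMPLE_LIBRARY[mood]

if __name__ == "__main__":
    # Deep blues dominate Starry Night, so try a blue-ish average color.
    mood, samples = pick_samples((45, 70, 160))
    print(mood, samples)
```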

A large grain of salt is necessary when evaluating AI developments, at least in my opinion. It's exciting to see artificial neural networks doing their thing, but even allowing for the subjective nature of art and music, it's hard to see how Watson, Magenta, or AI Composer has yet produced anything "good" or worth listening to. Granted, they're all in the early stages, so who knows? Maybe we'll see a day when composers come up with basic ideas and let computers do the rest. I for one hope that day is still far over the horizon.

