AI-based piano controller lets anyone improvise, compose music

Technology News | By Rich Pell



Offering an eight-button interface in which pressing a button triggers a note that sounds until the button is released, the Piano Genie controller enables a user to perform on the piano like a trained musician by automatically predicting the next most probable note. It accomplishes this by decoding the user’s “performance” into the realm of plausible piano music in real time.

“To learn a suitable mapping procedure for this problem,” say the researchers in a paper on the project, “we train recurrent neural network autoencoders with discrete bottlenecks: an encoder learns an appropriate sequence of buttons corresponding to a piano piece, and a decoder learns to map this sequence back to the original piece. During performance, we substitute a user’s input for the encoder output, and play the decoder’s prediction each time the user presses a button.”

Piano Genie consists of a discrete sequential autoencoder. A bidirectional RNN encodes monophonic piano sequences (88-dimensional) into smaller discrete latent variables (4-dimensional in the paper’s illustration). The unidirectional decoder is trained to map the latents back to piano sequences. During inference, the encoder is replaced by a human improvising on buttons.
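In outline, such a model can be sketched in a few lines of TensorFlow. This is a minimal illustration, not the released implementation: the hidden size is a placeholder, the rounding trick with a straight-through gradient stands in for the paper’s integer-quantization bottleneck, and the real decoder additionally conditions on its own previous output and on note timing.

```python
import tensorflow as tf

NUM_KEYS = 88     # piano keys: the input/output vocabulary
NUM_BUTTONS = 8   # number of discrete bottleneck levels
RNN_UNITS = 128   # hypothetical hidden size; the paper's value may differ

# Encoder: a bidirectional LSTM reads a one-hot piano sequence and emits
# one real-valued score per timestep.
notes_in = tf.keras.Input(shape=(None, NUM_KEYS))
enc = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(RNN_UNITS, return_sequences=True))(notes_in)
scores = tf.keras.layers.Dense(1)(enc)  # shape: (batch, time, 1)

# Discrete bottleneck: quantize each score to one of 8 integer levels.
# Rounding has no gradient, so a straight-through estimator passes the
# gradient through unchanged during backpropagation.
def quantize(s):
    c = tf.clip_by_value(s, 0.0, NUM_BUTTONS - 1.0)
    return c + tf.stop_gradient(tf.round(c) - c)

buttons = tf.keras.layers.Lambda(quantize)(scores)

# Decoder: a unidirectional LSTM maps button sequences back to a
# distribution over the 88 keys at each timestep.
dec = tf.keras.layers.LSTM(RNN_UNITS, return_sequences=True)(buttons)
notes_out = tf.keras.layers.Dense(NUM_KEYS, activation="softmax")(dec)

autoencoder = tf.keras.Model(notes_in, notes_out)
autoencoder.compile(optimizer="adam", loss="categorical_crossentropy")
```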

To improve the interpretability of Piano Genie’s performance mechanics, “musically salient” constraints are imposed on the encoder’s outputs. The mapping between buttons and pitches is non-deterministic, but the performer can control the overall contour of the melody: pressing higher buttons plays higher notes, and pressing lower buttons plays lower notes. One way such a constraint might be expressed is sketched below.
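The sketch adds a penalty whenever the encoder’s pre-quantization score moves against the direction of the melody. This is a hypothetical form of the regularizer, written here for illustration; the function name and the exact hinge are assumptions, not the paper’s definition.

```python
import tensorflow as tf

def contour_penalty(scores, pitches):
    """Hypothetical contour regularizer: penalize timesteps where the
    encoder's button score moves opposite to the melody's pitch.

    scores:  (batch, time) encoder outputs before quantization
    pitches: (batch, time) MIDI pitch numbers, as floats
    """
    d_score = scores[:, 1:] - scores[:, :-1]
    d_pitch = pitches[:, 1:] - pitches[:, :-1]
    # The product of the deltas is negative exactly when score and pitch
    # move in opposite directions; the hinge keeps only those cases.
    return tf.reduce_mean(tf.nn.relu(-d_score * tf.sign(d_pitch)))
```

Added to the reconstruction loss with a small weight, a term like this nudges the encoder toward button sequences that rise and fall with the music.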

As there are no examples of performances on eight-button “pianos,” the researchers adopted an unsupervised strategy for learning the mappings. Specifically, they used an autoencoder setup, where an encoder learns to map 88-key piano sequences to eight-button sequences, and a decoder learns to map the button sequences back to piano music.

The researchers used a system comprising NVIDIA Tesla P100 GPUs and the cuDNN-accelerated TensorFlow deep learning framework to train a recurrent neural network on 1,400 classical piano performances by skilled pianists. At performance time, the user’s button presses replace the encoder’s output, and the decoder is evaluated in real time.
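In sketch form, that real-time loop feeds each button press through one step of the trained decoder and sounds the predicted key immediately. The names below (decoder_step, initial_state, button_events) are illustrative, not the project’s actual API.

```python
import numpy as np

def perform(decoder_step, initial_state, button_events):
    """Hypothetical performance loop: live button presses stand in for
    the encoder's output, one decoder step per press."""
    state = initial_state
    for button in button_events:        # integers 0..7 from the controller
        logits, state = decoder_step(button, state)
        note = int(np.argmax(logits))   # most probable of the 88 keys
        yield note                      # sent straight to a synthesizer
```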

“A non-musician could operate a system which automatically generates complete songs at the push of a button,” say the researchers, “but this would remove any sense of ownership over the result. We seek to sidestep these obstacles by designing an intelligent interface which takes high-level specifications provided by a human and maps them to plausible musical performances.”

As a result, say the researchers, the Piano Genie has an immediacy not shared by other work in this space – sound is produced the moment a player interacts with the software rather than requiring laborious configuration. Additionally, the player is kept in the improvisational loop as they respond to the generative procedure in real time.

Looking ahead, say the researchers, “we believe that the autoencoder framework is a promising approach for learning mappings between complex interfaces and simpler ones, and hope that this work encourages future investigation of this space.” For more, see the paper “Piano Genie” (PDF) and the Piano Genie online demo.

Related articles:
Yamaha prototypes VR gloves for musicians
Self-learning brain-inspired chip composes music
