Machine Intelligence Meets Music Composition

The Project:

Augmenting Human Creativity with Machine Intelligence


DeepJams is a graduate project at UC Berkeley exploring ways to augment traditional music composition with machine intelligence. We apply recent research in the field, along with modern machine learning toolkits and artificial neural networks, to train models that extend original human compositions with equally original machine-generated material.

Our group focuses on creating usable models that not only excel on quantitative dataset benchmarks, but also garner positive qualitative feedback in controlled user testing. To this end, we have created fully functional prototypes at every iteration of the academic effort, free to evaluate and use.

We envision this not only as an academic project (which we hope to publish) but also as a functional product with real use-cases and potentially licensable intellectual property.

  UC Berkeley Best Project

  Hal Varian Award

People are using DeepJams when...

1

Jamming out!

Jamming out solo on guitar or piano, and stuck or bored
It's hard to interact with a machine learning model with a guitar in your hand, so we made it as easy as uploading a file. Check out the snippet of one of the team members picking on a guitar, and the corresponding output.

Input:


Output:

2

Composing

Learning about music theory when composing songs
This was not a use case we initially imagined; it was brought to our attention by our more musically inclined friends. To further help composers, we now provide sheet music alongside the generated MIDI output. We have heard some really fun feedback on how folks are studying the chord progressions and melodies! Here is an example of generating the next 30 seconds of a classical piano piece; a short notation sketch follows the example.

Input:


Output:
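For readers curious how a generated MIDI file can be turned into readable notation for studying chord progressions and melodies, here is a minimal sketch using the open-source music21 library. The file names are placeholders, and this illustrates the general idea rather than our production pipeline.

```python
# Minimal sketch: turning a generated MIDI file into studyable notation
# with the open-source music21 library. File names are placeholders;
# this is an illustration, not the DeepJams production pipeline.
from music21 import converter

# Parse the machine-generated MIDI continuation.
score = converter.parse("generated_continuation.mid")

# Estimate the key and print a text outline of the notes and chords.
print("Estimated key:", score.analyze("key"))
score.chordify().show("text")

# Export MusicXML, which notation editors (e.g. MuseScore) can open as sheet music.
score.write("musicxml", fp="generated_continuation.musicxml")
```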

3

Driving engagement

Driving engagement for music apps and websites
Imagine listening to a song; now imagine endless variations and remixes. Check out what this can sound like for something like Sandstorm. You could even imagine playing both snippets at the same time… or off the beat :).

Input:


Output:

Let's Get Started

Two Easy Steps: Initiate & Generate

You can start using our application here. Just follow the two easy steps below or see the usage demo video on the right.

Step 1) Upload an MP3 file, much like this one (we all happen to be huge fans of Sandstorm)

Under the hood, we take care of all conversions, audio pipelines, and threshold cleanup.

Step 2) Listen to what the next few bars of your music can sound like
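For those who prefer to script the flow rather than use the web page, the same two steps boil down to a single HTTP request against our stateless REST backend. The sketch below is only illustrative, assuming Python with the requests library; the endpoint URL, form field, and file names are placeholders, not the production API.

```python
# Illustrative sketch of the two-step flow against a REST backend.
# The endpoint URL, form field, and file names are placeholders.
import requests

API_URL = "https://deepjams.example.com/api/generate"  # hypothetical endpoint

# Step 1: upload an MP3 file; conversions and cleanup happen server-side.
with open("my_jam.mp3", "rb") as audio_file:
    response = requests.post(API_URL, files={"audio": audio_file})
response.raise_for_status()

# Step 2: save the generated continuation so you can listen to it.
with open("my_jam_continuation.mid", "wb") as out_file:
    out_file.write(response.content)
print("Wrote my_jam_continuation.mid")
```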

The Application in Action



Our Philosophy:

Tooling to Harness Individual Creativity


Music is one of the most universal forms of expression. It can be pleasing, tear-jerking, or motivating. It accompanies us on journeys, at parties, at home, at work, and keeps us company in our solitude. It conjures memories, supports traditions, and sometimes even generates stereotypes. It is so woven into our lives, sometimes we don’t even notice it. All three of us are passionate about music, passionate about science, and even more passionate about furthering the application of science to music.

Composing music is challenging. Composers want to create great melodies, but each new piece needs to be unique. Very often, composers try to step outside their comfort zone (what they have done before) in order to create something "new". There is a fine balance between keeping your sound recognizable and surprising the listener.

With DeepJams, we are not creating from scratch. We create based on your input, allowing you to keep your signature and style. We focus on intelligent tooling for composers, built on state-of-the-art machine learning. All our tools are available for immediate use, and we love receiving feedback.

The Team

Byron Tang

UC Berkeley, MS, 2017
     

Byron is a Data Scientist at Pandora, working with, and (primarily) learning from, some of the smartest and most passionate music lovers. His favorite activity is going through all of his thumbed-up songs on Pandora and jamming out on his guitar or piano. He would love more jam sessions, but it's not always easy finding people to play with, hence this project :). Most importantly, he is the biggest Taylor Swift fan on the team.

Andrea Pope

UC Berkeley, MS, 2017
     

Andrea is a freelance data scientist with close to twenty years of consulting experience. Andrea studied music through college while pursuing her engineering degree. Combining music with machine learning brings all of her loves together. Andrea likes music best when it is live, and tries to catch shows as often as possible. From classical to classic rock, blues to pop, and jam bands to musicals (Lin Manuel Miranda is amazing!), if you are with Andrea, music is most likely playing.

Saif Ahmed

UC Berkeley, MS, 2017
     

Saif is a former Wall Street quant with fifteen years at hedge funds and three as CTO of an AI-driven diagnostics startup he co-founded. Before all this, Saif produced a rap EP back in high school, focusing on electronic beats before there was any real tooling (1995!). Merging his musical obsessions with his professional engineering, data science, and machine learning interests is a dream come true. This stuff shouldn't even count as homework!

Acknowledgements

This project is being conducted at the School of Information at UC Berkeley and is advised by Professor Joyce Shen and Professor Alberto Todeschini. We greatly appreciate their constant guidance and feedback throughout this journey. We thank Dr. D. Alex Hughes from UC Berkeley for his guidance on experimental validation.

We thank Startup@BerkeleyLaw, The Berkeley Center for Law, Business and the Economy, and UC Berkeley Intellectual Property and Industry Research Alliances (IPIRA) for their time and guidance on how to carry the project forward to a licensable state.

We thank Amazon Web Services for providing generous usage credits which allowed us to lease compute resources for our application.

On the Shoulders of Giants

Our work is built extensively upon recent research, open source projects, and various academic efforts.

Results and Validation:

The Beauty of Originality


"In some ways, music is a lot like literature. It can be pleasing, enlightening, even life-changing. A great song, like a great book, affects the listener in unanticipated ways and resonates long after the music has ceased. And, if done well, both music and literature have a rhythm and tone unique to their authors; you wouldn't confuse a passage from a Charles Dickens novel with one from a Harry Potter book, nor would you mistake a verse from A Hard Day's Night for one from a Lady Gaga song." --Leah Butterfield, 2014, www.bustle.com

Our project took great pains to quantitatively and qualitatively measure the originality of our model outputs. There is a fine balance -- the outputs need to extend the gist of the user's input, but also produce original music reflecting the style of the training corpora (without being too close or too different). We trained custom models on sub-genres of music.

We are aiming to publish a paper with quantitative results. However, no need to wait. Please start using the application and tell us what you think!

Frequently Asked Questions

Who owns the outputs?

We are currently not asserting any rights over the output, so you own your own outputs. Of course, please ensure you have the appropriate rights to the input content that you seed the process with. We are currently discussing with lawyers how far outputs must diverge from the input before the original content rights no longer apply.

Does the system work for all music?

All music will be processed, but the system works well on classical, and particularly on piano music. Generally, discrete inputs produce better outputs than ambient pieces.

Do you retain the content produced?

Yes, for now all inputs and outputs are stored to help debug and improve the system.

How can I help or participate?

Please use the system and provide critical feedback. Introductions to musical artists of any level would also be much appreciated.

What is the product roadmap?

We have built everything with a stateless REST architecture, but currently only have a web facade. We're exploring more extemporaneous facades better suited for casual musicians -- a smartphone app and an Alexa Skill. Let us know if you can help beta test.

What is the technology roadmap?

We are fairly scalable today, but we are in the midst of re-architecting the entire execution pipeline around serverless (Lambda) architectures to scale further.

Could the system become self aware?

No, that only happens in the movies. In reality, we spent dozens of nights and weekends just to get the system functional. We'd be pretty happy if it could just start doing stuff without effort. Until then, this is a human labour of love.

