
@magenta/music

This JavaScript implementation of Magenta's musical note-based models uses TensorFlow.js for GPU-accelerated inference. For the Python TensorFlow implementations, see the main Magenta repo.

Complete API documentation is available here.

Getting started

If you want to get hands-on with Magenta, we've put together an interactive tutorial that walks you through generating a short melody in the browser using a machine learning model.

Here are some examples of applications that have been built with @magenta/music. A more complete list is available on the Magenta site.

You can also try our hosted demos for each model and have a look at their code.

Usage

There are several ways to get @magenta/music in your JavaScript project, either in the browser, or in Node:

In the browser

The models and the core library are split into smaller ES6 bundles (not ESModules, unfortunately 😢), so that you can use a model independently of the rest of the library. These bundles don't package the Tone.js or TensorFlow.js dependencies (since that would risk downloading multiple copies on the same page). Here is an abbreviated example:

<html>
<head>
  ...
  <!-- You need to bring your own Tone.js for the player, and tfjs for the model -->
  <script src="https://cdnjs.cloudflare.com/ajax/libs/tone/14.7.58/Tone.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/tensorflow/1.2.8/tf.min.js"></script>
  <!-- Core library, since we're going to use a player -->
  <script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0/es6/core.js"></script>
  <!-- Model we want to use -->
  <script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0/es6/music_vae.js"></script>
</head>
<script>
  // Each bundle exports a global object with the name of the bundle.
  const player = new core.Player();
  //...
  const mvae = new music_vae.MusicVAE('https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small');
  mvae.initialize().then(() => {
    mvae.sample(1).then((samples) => player.start(samples[0]));
  });
</script>
</html>

We also have an ES5 bundle that contains all the models and the core functions, but using it in production is not recommended due to its size.
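
If you do want a single script tag for quick prototyping, here is a minimal sketch, assuming the bundle path dist/magentamusic.js and that it exports everything under a single mm global:

<script src="https://cdn.jsdelivr.net/npm/@magenta/music@^1.0.0/dist/magentamusic.js"></script>
<script>
  // In the ES5 bundle, Tone.js and TensorFlow.js are included, and all
  // models and core helpers are reachable from the `mm` namespace.
  const player = new mm.Player();
  const mvae = new mm.MusicVAE('https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small');
</script>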

In Node

You can add @magenta/music to your project with yarn (yarn add @magenta/music) or npm (npm install --save @magenta/music).

The node-specific bundles (that don't transpile the CommonJS modules) are under @magenta/music/node. For example:

const mvae = require('@magenta/music/node/music_vae');
const core = require('@magenta/music/node/core');

// Your code:
const model = new mvae.MusicVAE('/path/to/checkpoint');
const player = new core.Player();
model
  .initialize()
  .then(() => model.sample(1))
  .then(samples => {
    player.resumeContext();
    player.start(samples[0]);
  });

Example Commands

yarn install to install dependencies.

yarn test to run tests.

yarn build to produce the different bundled versions.

yarn run-demos to build and serve the demos, with live reload.

(Note: the default behavior is to build/watch all demos; you can build specific demos by passing a comma-separated list of demo names, e.g. yarn run-demos --demos=transcription,visualizer.)

Supported Models

We have made an effort to port our most useful models, but please file an issue if you think something is missing, or feel free to submit a Pull Request!

Piano Transcription w/ Onsets and Frames

OnsetsAndFrames implements Magenta's piano transcription model for converting raw audio to MIDI in the browser. While it is somewhat flexible, it works best on solo piano recordings. On most browsers, transcription takes about half the duration of the input audio to run, but due to a WebKit bug, audio resampling makes it significantly slower on Safari.

⭐️Demo: Piano Scribe
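
A minimal sketch, assuming the all-in-one mm namespace from the ES5 bundle above, the hosted onsets_frames_uni checkpoint, and the transcribeFromAudioFile method:

const oaf = new mm.OnsetsAndFrames(
    'https://storage.googleapis.com/magentadata/js/checkpoints/transcription/onsets_frames_uni');

oaf.initialize()
  // `audioBlob` is a File/Blob containing a (preferably solo piano) recording.
  .then(() => oaf.transcribeFromAudioFile(audioBlob))
  .then((noteSequence) => {
    // The result is a NoteSequence holding the transcribed notes.
    console.log(`transcribed ${noteSequence.notes.length} notes`);
  });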

MusicRNN

MusicRNN implements Magenta's LSTM-based language models. These include MelodyRNN, DrumsRNN, ImprovRNN, and PerformanceRNN.

⭐️Demo: Neural Drum Machine
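
A minimal sketch, assuming the mm namespace, the hosted basic_rnn checkpoint, and a melody NoteSequence of your own (someMelody):

const rnn = new mm.MusicRNN(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');

rnn.initialize().then(async () => {
  // MusicRNN expects a quantized input (here, 4 steps per quarter note).
  const quantized = mm.sequences.quantizeNoteSequence(someMelody, 4);
  // Continue the melody for 20 steps at temperature 1.0.
  const continuation = await rnn.continueSequence(quantized, 20, 1.0);
  new mm.Player().start(continuation);
});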

MusicVAE

MusicVAE implements several configurations of Magenta's variational autoencoder model, MusicVAE, including melody and drum "loop" models, 4- and 16-bar "trio" models, chord-conditioned multi-track models, and drum performance "humanization" with GrooVAE.

⭐️Demo: Endless Trios
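
Sampling is shown in the browser example above; here is a sketch of interpolation, assuming the mm namespace and two of your own 2-bar melodies (melodyA, melodyB):

const mvae = new mm.MusicVAE(
    'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small');

mvae.initialize().then(async () => {
  // Produce 4 sequences that morph from melodyA to melodyB.
  const interps = await mvae.interpolate([melodyA, melodyB], 4);
  new mm.Player().start(interps[0]);
});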

MidiMe

MidiMe allows you to personalize a pre-trained MusicVAE model by quickly training a smaller model directly in the browser, with very little user data.

⭐️Demo: MidiMe
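
A sketch of the training flow, assuming the mm namespace, an already-initialized MusicVAE model (mvae), and an array of your own NoteSequences (data):

const midime = new mm.MidiMe({epochs: 100});

midime.initialize().then(async () => {
  // Encode your sequences into the pre-trained MusicVAE's latent space.
  const z = await mvae.encode(data);
  // Train the small MidiMe model on those latents, directly in the browser.
  await midime.train(z);
  // Sample a new latent vector and decode it back into a NoteSequence.
  const sample = await midime.sample(1);
  const noteSequence = (await mvae.decode(sample))[0];
});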

Piano Genie

Piano Genie is a VQ-VAE model that maps 8-button input to a full 88-key piano in real time.

⭐️Demo: Piano Genie
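
A sketch of real-time use, assuming the mm namespace, the hosted checkpoint that also appears in the ServiceWorker example below, and the next/resetState methods:

const genie = new mm.PianoGenie(
    'https://storage.googleapis.com/magentadata/js/checkpoints/piano_genie/model/epiano/stp_iq_auto_contour_dt_166006');

genie.initialize().then(() => {
  // Map a button press (0-7) to one of the 88 piano keys (0-87).
  const key = genie.next(3);
  // Reset the model's internal state between musical phrases.
  genie.resetState();
});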

GANSynth

GANSynth is a method for generating high-fidelity audio with Generative Adversarial Networks (GANs).

⭐️Demo: GANHarp by Counterpoint.
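
A sketch of generating a single note, assuming the mm namespace, the hosted acoustic_only checkpoint path, and the randomSample/specgramsToAudio methods:

const gan = new mm.GANSynth(
    'https://storage.googleapis.com/magentadata/js/checkpoints/gansynth/acoustic_only');

gan.initialize().then(async () => {
  // Sample a spectrogram for MIDI pitch 60, then convert it to raw audio.
  const specgrams = await gan.randomSample(60);
  const audio = await gan.specgramsToAudio(specgrams);
  specgrams.dispose();  // Free the underlying tensor memory.
});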

SPICE

SPICE is a wrapper around the SPICE model for extracting pitch from audio.
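
A sketch, assuming the mm namespace and a getAudioFeatures method that takes a decoded AudioBuffer; the output feeds directly into DDSP below:

const spice = new mm.SPICE();

spice.initialize().then(async () => {
  // `audioBuffer` is a decoded AudioBuffer of the input recording.
  const audioFeatures = await spice.getAudioFeatures(audioBuffer);
  // audioFeatures holds per-frame pitch (and related) data for DDSP.
});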

DDSP

DDSP is a method for resynthesizing audio as other instruments.

⭐️Demo: Tone Transfer by AIUX x Magenta.
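
A sketch of resynthesis, assuming the mm namespace, a hosted violin checkpoint path, and a synthesize method that consumes the SPICE audioFeatures from above:

const violin = new mm.DDSP(
    'https://storage.googleapis.com/magentadata/js/checkpoints/ddsp/violin');

violin.initialize().then(async () => {
  // Resynthesize the analyzed input as a violin.
  const audio = await violin.synthesize(audioFeatures);
  // `audio` contains raw samples you can play back via Web Audio.
});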

Model Checkpoints

Most @magenta/music models (with the exception of MidiMe) do not support training in the browser, since that would require a large amount of data and an extremely long time; instead, they use weights from models trained with the Python-based Magenta packages. We also make our own pre-trained checkpoints available.

Pre-trained hosted checkpoints

Several pre-trained checkpoints for all of our models are available and hosted on GCS. The full list is available in this table and can be accessed programmatically via a JSON index here.

Your own checkpoints

Dumping your weights

To use your own checkpoints with one of our models, you must first convert the weights to the appropriate format using the provided checkpoint_converter script.

This tool depends on tfjs-converter, which you must first install using pip install tensorflowjs. Once installed, you can run the script as follows:

../scripts/checkpoint_converter.py /path/to/model.ckpt /path/to/output_dir

There are additional flags available to reduce the size of the output by removing unused (training) variables or using weight quantization. Call ../scripts/checkpoint_converter.py -h to list the available options.

Specifying the Model Configuration

The model configuration should be placed in a JSON file named config.json in the same directory as your checkpoint. This configuration file contains all the information needed (besides the weights) to instantiate and run your model: the model type and data converter specification plus optional chord encoding, auxiliary inputs, and attention length. An example config.json file might look like:

{
  "type": "MusicRNN",
  "dataConverter": {
    "type": "MelodyConverter",
    "args": {
      "minPitch": 48,
      "maxPitch": 83
    }
  },
  "chordEncoder": "PitchChordEncoder"
}

This configuration corresponds to a chord-conditioned melody MusicRNN model.

SoundFonts

There are several SoundFonts that you can use with the mm.SoundFontPlayer for more realistic-sounding instruments:

| Instrument | URL | License |
| --- | --- | --- |
| Piano | salamander | Audio samples from Salamander Grand Piano |
| Multi | sgm_plus | Audio samples based on SGM with modifications by John Nebauer |
| Percussion | jazz_kit | Audio samples from Jazz Kit (EXS) by Lithalean |

You can explore what each of them sounds like on this demo page.
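
For example, a sketch of playing one of your own NoteSequences (someNoteSequence) with the hosted sgm_plus SoundFont:

const sfPlayer = new mm.SoundFontPlayer(
    'https://storage.googleapis.com/magentadata/js/soundfonts/sgm_plus');

// Download just the samples needed for this sequence, then play it.
sfPlayer.loadSamples(someNoteSequence)
  .then(() => sfPlayer.start(someNoteSequence));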

How Tos

Use with a WebWorker

A WebWorker is a script that runs in the background, separate from the main UI thread. This allows you to perform expensive computations (like model inference) without blocking user interaction (like animations or scrolling). All @magenta/music models should work in a WebWorker, except for GANSynth and Onsets and Frames, which need the browser's AudioContext to manipulate audio data. (You can work around this by separating the audio processing code from the actual inference code, but we don't currently have an example of this.)

Here is an example of using a MusicVAE model in a WebWorker. In your main app.js,

const worker = new Worker('worker.js');

// Ask the worker to generate a sample.
worker.postMessage({sequence: someNoteSequence});

// Worker returns the result.
worker.onmessage = (event) => {
  if (event.data.fyi) {
    console.log(event.data.fyi);
  } else {
    const sample = event.data.sample;
    // Do something with this sample
  }
};

In your worker, worker.js,

importScripts("https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@1.4.0/dist/tf.min.js");
importScripts("https://cdn.jsdelivr.net/npm/@magenta/music@^1.12.0/es6/core.js");
importScripts("https://cdn.jsdelivr.net/npm/@magenta/music@^1.12.0/es6/music_vae.js");

const mvae = new music_vae.MusicVAE('https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small');

// Main script asks for work.
self.onmessage = async (e) => {
  if (!mvae.isInitialized()) {
    await mvae.initialize();
    postMessage({fyi: 'model initialized'});
  }

  const output = await mvae.sample(1);
  // Send main script the result.
  postMessage({sample: output[0]});
};

Use with a ServiceWorker

A ServiceWorker is a script that your browser runs in the background, separate from a web page. In particular, ServiceWorkers let you provide offline interactions by controlling what your browser caches (like SoundFont files or model checkpoint chunks). For a full example, check out the Piano Genie PWA code, which lets you install Piano Genie as a PWA and use it entirely offline.

This is also extremely useful if you want to test a very large model checkpoint, but don't want to download it every time you refresh the page.

The main things to look out for are the manifest.json and the meta tags. Then, in your main script, load the service worker:

  // Redirect to HTTPS, since ServiceWorkers require a secure context.
  if (location.protocol == 'http:') location.protocol = 'https:';
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/sw.js')
      .then(reg => console.log('Service Worker registered', reg))
      .catch(err => console.error('Service Worker **not** registered', err));
  }
  else {
    console.warn('Service Worker not supported in this browser');
  }

In sw.js,

self.addEventListener('install', e => {
  e.waitUntil(
  (async function() {
    const cache = await caches.open("your-app-name-assets");

    const resources = [
      // Static files you want to cache.
      "index.html",
      "style.css",
      "script.js",
      "helpers.js",
      "manifest.json",
      // A built, minified bundle of dependencies.
      "magenta-1.7.0.js",
      // SoundFont manifest.
      'https://storage.googleapis.com/magentadata/js/soundfonts/sgm_plus/soundfont.json',
      // Model checkpoint.
      "https://storage.googleapis.com/magentadata/js/checkpoints/piano_genie/model/epiano/stp_iq_auto_contour_dt_166006/weights_manifest.json",
      "https://storage.googleapis.com/magentadata/js/soundfonts/sgm_plus/acoustic_grand_piano/instrument.json",
      // List here all the actual shards of your model.
      "https://storage.googleapis.com/magentadata/js/checkpoints/piano_genie/model/epiano/stp_iq_auto_contour_dt_166006/group1-shard1of1"
    ];
    // The actual SoundFont files you will use.
    for (let i = 21; i < 105; i++) {
      resources.push(`https://storage.googleapis.com/magentadata/js/soundfonts/sgm_plus/acoustic_grand_piano/p${i}_v79.mp3`)
    }

    // Cache all of these resources.
    await cache.addAll(resources);
  })()
  );
});

self.addEventListener('fetch', e => {
  // If the resource is cached, send it.
  e.respondWith(caches.match(e.request).then(r => r || fetch(e.request)));
});

Use with TypeScript

If you want to use @magenta/music as a dependency in a TypeScript project, here is a sample project that does that and uses webpack to build and transpile it.
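
A minimal sketch of the import style such a project would use (type declarations ship with the package):

import * as mm from '@magenta/music';

const model = new mm.MusicVAE('/path/to/checkpoint');
const player = new mm.Player();

model.initialize()
  .then(() => model.sample(1))
  .then((samples) => player.start(samples[0]));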
