MidiMe constructor.
(optional) Model configuration.
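For illustration, a minimal sketch of constructing the model in TypeScript; the config keys shown are assumptions, and any key you omit falls back to the model's defaults:

```ts
import * as mm from '@magenta/music';

// A minimal sketch: construct a MidiMe with a partial configuration.
// The 'epochs' and 'latent_size' keys are assumed here; omitted keys
// fall back to the model's defaults.
const model = new mm.MidiMe({
  epochs: 100,     // number of training epochs used by train()
  latent_size: 4,  // size of MidiMe's own latent space
});
```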
Decodes a batch of MidiMe latent vectors back into MusicVAE-sized latent vectors.
The batch of latent vectors, of shape [numSamples,
this.config['latent_size']].
A batch of latent vectors, each representing a NoteSequence. You can pass
these latent vectors to a MusicVAE's decode method to convert them to
NoteSequences.
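A hedged sketch of decoding, assuming `model` is a trained MidiMe and `zMidiMe` holds a batch of MidiMe latent vectors:

```ts
// `zMidiMe` has shape [numSamples, this.config['latent_size']].
const zMusicVAE = await model.decode(zMidiMe);
// `zMusicVAE` can now be passed to a MusicVAE's decode method to get
// NoteSequences back.
```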
Disposes of any untracked tensors to avoid GPU memory leaks.
Encodes a batch of MusicVAE latent vectors into MidiMe's smaller latent space.
The batch of latent vectors, of shape [numSamples,
this.config['input_size']]. This is the vector that you would get from
passing a NoteSequence to a MusicVAE's encode method.
A latent vector of size this.config['latent_size'].
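A hedged sketch of encoding, assuming `mvae` is an initialized MusicVAE, `model` an initialized MidiMe, and `sequences` an array of NoteSequences in the format the MusicVAE checkpoint expects:

```ts
// MusicVAE turns NoteSequences into latent vectors of shape
// [numSamples, this.config['input_size']] ...
const zMusicVAE = await mvae.encode(sequences);
// ... which MidiMe compresses into its own smaller latent space of shape
// [numSamples, this.config['latent_size']].
const zMidiMe = await model.encode(zMusicVAE);
```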
Instantiates the Encoder, Decoder and the main VAE.
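A short sketch, assuming `model` is a MidiMe constructed as above:

```ts
// Builds the Encoder, the Decoder and the combined VAE so the model can be
// trained and sampled from.
model.initialize();
```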
Reconstructs an input latent vector.
The input latent vector.
The reconstructed latent vector after running it through the model.
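Conceptually, reconstruction is an encode followed by a decode. The sketch below illustrates this with the documented encode and decode methods rather than the reconstruction method itself, assuming `model` is a trained MidiMe and `zIn` a batch of MusicVAE latent vectors:

```ts
// Encode the input into MidiMe's latent space, then decode it back.
const zSmall = await model.encode(zIn);   // [numSamples, latent_size]
const zOut = await model.decode(zSmall);  // [numSamples, input_size], approximates zIn
```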
Samples sequences from the model prior.
The number of samples to return.
A batch of latent vectors, each representing a NoteSequence. You can pass
these latent vectors to a MusicVAE's decode method to convert them to
NoteSequences.
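A hedged sketch of sampling, assuming `model` is a trained MidiMe:

```ts
// Draw 4 latent vectors from the model prior.
const z = await model.sample(4);
// Each vector is MusicVAE-sized; pass `z` to a MusicVAE's decode method to
// turn the samples into NoteSequences.
```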
Trains the VAE on the provided data. The number of epochs to train for
is taken from the model's configuration.
A callback function invoked at the end of every training epoch, receiving the training errors for that epoch.
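A hedged sketch of training, assuming `model` is an initialized MidiMe and `zMusicVAE` a tensor of MusicVAE latent vectors of shape [numSamples, this.config['input_size']]; the callback signature (epoch index plus a logs object with that epoch's losses) is an assumption:

```ts
await model.train(zMusicVAE, (epoch: number, logs: object) => {
  // Called at the end of every epoch with that epoch's training errors.
  console.log(`epoch ${epoch}`, logs);
});
```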
MidiMe model class.
A MidiMe is a hierarchical variational autoencoder that is trained on latent vectors generated by MusicVAE. It allows you to personalize your own MusicVAE model with just a little data, so that samples from MidiMe sound similar to the input data.
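A hedged end-to-end sketch of the personalization workflow described above: encode your own NoteSequences with a pre-trained MusicVAE, train MidiMe on the resulting latent vectors, then sample from MidiMe and decode the samples with MusicVAE. The checkpoint URL, config values and callback handling are illustrative assumptions, not fixed parts of the API.

```ts
import * as mm from '@magenta/music';
import * as tf from '@tensorflow/tfjs';

async function personalize(mySequences: mm.INoteSequence[]) {
  // 1. A pre-trained MusicVAE supplies the latent space MidiMe works in.
  const mvae = new mm.MusicVAE(
      'https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_2bar_small');
  await mvae.initialize();

  // 2. Encode the user's sequences into MusicVAE latent vectors.
  const z = await mvae.encode(mySequences);

  // 3. Train a small MidiMe VAE on those latent vectors.
  const midime = new mm.MidiMe({epochs: 100});
  midime.initialize();
  await midime.train(z, (epoch: number, logs: object) =>
      console.log(`epoch ${epoch}`, logs));

  // 4. Sample from MidiMe and decode back to NoteSequences with MusicVAE.
  const samples = await midime.sample(4) as tf.Tensor2D;
  const sequences = await mvae.decode(samples);

  // Free GPU memory when done.
  mvae.dispose();
  midime.dispose();
  return sequences;
}
```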