Converts between a monophonic, quantized NoteSequence containing a melody and the Tensor objects used by MusicVAE.
Melodies are represented as a sequence of categorical variables, each representing one of three possible events: a note-on at a specific pitch, a note-off, or no event (sustaining the previous note or resting).
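As a rough sketch, the label scheme below follows Magenta's conventional melody encoding (an assumption about the label ordering, not stated on this page): label 0 for "no event", label 1 for "note off", and the remaining labels for note-ons across the modeled pitch range.

```typescript
// Assumed label layout: 0 = no event, 1 = note off, 2.. = note-on labels.
const NOTE_OFF = 1;
const FIRST_PITCH = 2;

// One-hot depth for a modeled pitch range [minPitch, maxPitch].
function labelDepth(minPitch: number, maxPitch: number): number {
  return maxPitch - minPitch + 1 + FIRST_PITCH;
}

// Label assigned to a note-on at `pitch` within the modeled range.
function noteOnLabel(pitch: number, minPitch: number): number {
  return pitch - minPitch + FIRST_PITCH;
}

// With minPitch = 48 and maxPitch = 83, the one-hot depth is 38 and a
// note-on at middle C (pitch 60) maps to label 14; note-off stays label 1.
console.log(labelDepth(48, 83), noteOnLabel(60, 48), NOTE_OFF);
```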
The Tensor output by toTensor is a one-hot encoding of the sequence of labels extracted from the NoteSequence.
The expected Tensor in toNoteSequence is a one-hot encoding of melody sequence labels like those returned by toTensor.
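For concreteness, here is a minimal round-trip sketch. It assumes the converter is exposed as mm.data.MelodyConverter in the @magenta/music package and that its constructor accepts the numSteps, minPitch, and maxPitch arguments listed below; the note and pitch-range values are purely illustrative.

```typescript
import * as mm from '@magenta/music';

// A 32-step converter over a two-octave-plus range (illustrative values).
const converter = new mm.data.MelodyConverter(
    {numSteps: 32, minPitch: 48, maxPitch: 83});

// A small, already-quantized monophonic melody (4 steps per quarter note).
const melody = mm.NoteSequence.create({
  quantizationInfo: {stepsPerQuarter: 4},
  totalQuantizedSteps: 32,
  notes: [
    {pitch: 60, quantizedStartStep: 0, quantizedEndStep: 4},
    {pitch: 62, quantizedStartStep: 4, quantizedEndStep: 8},
    {pitch: 64, quantizedStartStep: 8, quantizedEndStep: 16},
  ],
});

// toTensor produces a one-hot [numSteps, depth] tensor of melody labels.
const oneHot = converter.toTensor(melody);

// toNoteSequence decodes a one-hot label tensor back into a NoteSequence,
// here simply round-tripping the tensor produced above.
converter.toNoteSequence(oneHot).then((ns) => {
  console.log(`decoded ${ns.notes.length} notes`);
  oneHot.dispose();
});
```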
Parameters:
numSteps: The length of each sequence.
minPitch: The minimum pitch to model. Notes below this value will cause an error to be thrown.
maxPitch: The maximum pitch to model. Notes above this value will cause an error to be thrown.
ignorePolyphony: (default: true) If false, an error is raised when notes start at the same step. If true, the highest-pitched note is used and the others are ignored.
numSegments: (Optional) The number of conductor segments, if applicable.
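The ignorePolyphony option can be illustrated with a sequence in which two notes start on the same step. This is a sketch under the same @magenta/music assumptions as above, not output reproduced from the library.

```typescript
import * as mm from '@magenta/music';

// Two notes begin on step 0, so the input is not strictly monophonic.
const chordal = mm.NoteSequence.create({
  quantizationInfo: {stepsPerQuarter: 4},
  totalQuantizedSteps: 16,
  notes: [
    {pitch: 60, quantizedStartStep: 0, quantizedEndStep: 4},
    {pitch: 64, quantizedStartStep: 0, quantizedEndStep: 4},
  ],
});

// Default ignorePolyphony (true): the highest pitch (64) is kept at step 0.
const lenient = new mm.data.MelodyConverter(
    {numSteps: 16, minPitch: 48, maxPitch: 83});
console.log(lenient.toTensor(chordal).shape);

// With ignorePolyphony: false, the same input throws instead of being coerced.
const strict = new mm.data.MelodyConverter(
    {numSteps: 16, minPitch: 48, maxPitch: 83, ignorePolyphony: false});
try {
  strict.toTensor(chordal);
} catch (e) {
  console.log('polyphony rejected:', (e as Error).message);
}
```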