Within a SoundFont, sounds can be mutually exclusive: for example, a ‘closed hihat’ sound will terminate an ‘open hihat’ sound ringing at the same time.
This behavior is modeled in SoundFont with ‘exclusive classes’: turning on a sound with an exclusive class other than 0 will kill all other sounds having that exclusive class within the same preset or channel.
Exclusive class defined with Polyphone
Naturally, Midi Player Tool Kit implements this standard behavior, but you can choose whether to activate it.
By script, set the property MPTK_KillByExclusiveClass to false to disable the kill by exclusive class. The default value is true; if you want to change it, set it once after the synthesizer is initialized. See here how.
With the inspector:
With the current version (2.841), these parameters are only visible with the MidiFilePlayer prefab. They will be extended to the other prefabs in the next version.
Use case: you want to generate music with an instrument (preset, patch, program: there are many synonyms!) other than Piano at the startup of your application.
By default, when the synthesizer is loaded, all channels are set to preset 0. Preset 0 is generally associated with Piano, especially with a General Midi SoundFont.
The method MPTK_ChannelPresetChange must be used to change the preset with a MidiStreamPlayer prefab.
But the method must be called just after the synthesizer is initialized. So, use the event OnEventSynthStarted to call EndLoadingSynth (or whatever name you want!) in the script.
This process can be applied to many synthesizer parameters, for example Kill By Exclusive Class, or Release Same Note (if the same note is hit twice on the same channel, the older voice process is advanced to the release stage).
Try this code in your script:
void Start()
{
    if (midiStreamPlayer != null)
    {
        // The call of these methods can also be defined in the prefab MidiStreamPlayer
        // from the Unity editor inspector. See "On Event Synth Started".
        // Below by script: EndLoadingSynth is defined to be called when the synthesizer is ready.
        if (!midiStreamPlayer.OnEventSynthStarted.HasEvent())
            midiStreamPlayer.OnEventSynthStarted.AddListener(EndLoadingSynth);
    }
    else
        // Your public variable midiStreamPlayer must be defined with the Unity editor inspector
        // or by script with the method FindObjectOfType.
        Debug.LogWarning("midiStreamPlayer is not defined. Check in Unity editor inspector of this gameComponent");
}

/// <summary>
/// This method is run when the synthesizer is ready.
/// It's a good place to set some synth parameters, such as the preset defined for each channel.
/// </summary>
/// <param name="name"></param>
public void EndLoadingSynth(string name)
{
    Debug.LogFormat($"Synth {name} is loaded");

    // Set music box (preset 10) on channel 0. Could be different for another SoundFont.
    midiStreamPlayer.MPTK_ChannelPresetChange(0, 10);
    Debug.LogFormat($"Preset {midiStreamPlayer.MPTK_ChannelPresetGetName(0)} defined on channel 0");

    // Set reed organ (preset 20) on channel 1. Could be different for another SoundFont.
    midiStreamPlayer.MPTK_ChannelPresetChange(1, 20);
    Debug.LogFormat($"Preset {midiStreamPlayer.MPTK_ChannelPresetGetName(1)} defined on channel 1");

    // Disable killing sounds with the same exclusive class.
    midiStreamPlayer.MPTK_KillByExclusiveClass = false;
}
The fundamental time unit of music is the beat. Beats can be slower or faster depending on the kind of music, and the tempo (the speed of the beats) can change even within a single piece. Tempo in standard music notation is typically given in beats per minute (BPM).
In music, a bar is a segment of time defined by a given number of beats of a given duration. The values that define a bar are called the Time Signature.
Notes come in different power-of-two lengths. A MIDI quarter note normally is one beat long. A half note is two beats, and a whole note is four beats (as it takes up a whole measure, if you’re in 4).
So:
an eighth note is half a quarter note, so there are two eighth notes per beat,
a sixteenth note is half an eighth note, so there are four sixteenth notes per beat,
and so on.
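This power-of-two arithmetic can be sketched in C# (an illustrative helper, not part of the MPTK API):

```csharp
using System;

public static class NoteMath
{
    // Length in beats of a 1/n note, assuming a quarter note (n = 4) is one beat.
    public static double BeatsForNote(int n)
    {
        return 4.0 / n;
    }
}
```

For example, BeatsForNote(1) is 4 beats (a whole note), BeatsForNote(8) is 0.5, i.e. two eighth notes per beat.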
Time Signature
A Time Signature is two numbers, one on top of the other. The numerator gives the number of beats in a bar, while the denominator gives the note value of a beat (i.e. which note value counts as one beat: 4 means a quarter note, 8 an eighth note).
4/4 would be four quarter-notes per bar (MIDI default),
4/2 would be four half-notes per bar (or 8 quarter notes),
4/8 would be four eighth-notes per bar (or 2 quarter notes), and
2/4 would be two quarter-notes per Bar.
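The bar arithmetic in the list above can be written out as a small illustrative snippet (not part of the MPTK API):

```csharp
using System;

public static class TimeSignatureMath
{
    // Number of quarter notes in one bar for a given time signature.
    // numerator = beats per bar; denominator = note value of one beat
    // (4 = quarter note, 8 = eighth note, ...).
    public static double QuarterNotesPerBar(int numerator, int denominator)
    {
        return numerator * 4.0 / denominator;
    }
}
```

For example, QuarterNotesPerBar(4, 2) gives 8 and QuarterNotesPerBar(4, 8) gives 2, matching the list above.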
The default MIDI tempo is 120 BPM, and the default Time Signature is 4/4.
However, the Set Tempo meta event can change the tempo, and the Time Signature meta event can change the time signature. As MIDI only deals in quarter notes, Set Tempo gives its value in microseconds per quarter note. If the time signature is 4/8, a quarter note is not a beat since the beat is described as an eighth note, so using the Set Tempo value on its own to calculate beats per minute is incorrect.
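Given a Set Tempo value and a time signature denominator, the true beats per minute can be computed as in this sketch (illustrative, not an MPTK API):

```csharp
using System;

public static class TempoMath
{
    // Set Tempo gives microseconds per quarter note. The real beats-per-minute
    // also depends on the time signature denominator, because the beat is only
    // a quarter note when the denominator is 4.
    public static double BeatsPerMinute(int microsecondsPerQuarter, int denominator)
    {
        double quarterNotesPerMinute = 60_000_000.0 / microsecondsPerQuarter;
        return quarterNotesPerMinute * denominator / 4.0;
    }
}
```

With the default tempo of 500,000 µs per quarter note, a 4/4 piece runs at 120 BPM, but a 4/8 piece runs at 240 beats (eighth notes) per minute.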
Musical timing is defined in fractions of a musical beat, so it makes sense to create a timebase that measures time as fractions of a beat.
A quarter note is always one fourth of a whole note, regardless of the tempo. Similarly, a sixteenth note is always the same fraction of a whole note. The rate at which the notes occur can change as the tempo changes, but the relative durations are always the same.
So an ideal timebase divides a musical beat into many small slices that occur at a rate determined by the current tempo. Each of these tiny fractions of a beat is called a tick, and the number of ticks per beat is independent of the tempo.
The Standard Midi File header chunk contains a 16-bit value that gives the number of ticks per quarter note. If it is not specified, the MIDI default is 48 ticks per quarter note. This value is a constant over the whole file.
Within the MIDI data stream are tempo meta events, which contain a 24-bit value giving the number of microseconds per quarter note. Divide the microseconds per quarter note by the ticks per quarter note and you get the number of microseconds per tick.
Events in a Standard Midi File are defined in terms of Delta Time. Delta Time determines when an event should be played relative to the track’s last event, in ticks. A delta time of 0 ticks means that it should play simultaneously with the last event. A track’s first event delta time defines the amount of time (number of ticks) to wait before playing this first event. Events unaffected by time are still preceded by a delta time, but should always use a value of 0 and come first in the stream of track events.
Sequencing Time
Delta times are stored as ticks, so what we need to know now is how many ticks make up a quarter-note.
Delta times are cumulative: each event's delta time must be added to the running time of the track. If the MIDI time division is 60 ticks per beat and the tempo is 500,000 microseconds per beat, then 1 tick = 500,000 / 60 = 8333.33 microseconds. The fractional microseconds must be properly accounted for or the MIDI playback will drift away from the correctly synchronized time.
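The two rules above (cumulative delta times, careful fractional accounting) can be sketched as follows, assuming a constant tempo (an illustrative sketch, not the MPTK implementation):

```csharp
using System;

public static class MidiTiming
{
    // Convert a track's per-event delta times (in ticks) into absolute tick positions.
    public static long[] ToAbsoluteTicks(int[] deltaTicks)
    {
        long[] absolute = new long[deltaTicks.Length];
        long now = 0;
        for (int i = 0; i < deltaTicks.Length; i++)
        {
            now += deltaTicks[i];   // a delta of 0 means "same time as the previous event"
            absolute[i] = now;
        }
        return absolute;
    }

    // Microseconds elapsed at an absolute tick position, computed from the whole
    // tick count so the fractional microseconds per tick never accumulate as
    // rounding error.
    public static long TicksToMicroseconds(long absoluteTicks, int usPerQuarter, int ticksPerQuarter)
    {
        return absoluteTicks * usPerQuarter / ticksPerQuarter;
    }
}
```

With 60 ticks per beat at 500,000 µs per beat, TicksToMicroseconds(60, 500000, 60) is exactly 500,000 µs, whereas adding a rounded 8,333 µs per tick sixty times gives 499,980 µs, and the error grows with every beat. A real sequencer must also restart the accumulation at each Set Tempo event.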
The Midi Sequencer and the Audio Synthesizer run in separate system threads, not in the Unity thread; consequently, playing music has no impact on your game or application timing.
You can modify the Unity audio parameters from the MPTK prefab, so the synth rate and buffer size can be adapted to your needs.
A lower buffer size lowers the latency.
A higher rate raises the CPU load and therefore lowers the timing accuracy.
Tip: avoid using Unity Update() and Unity coroutines to create rhythmic music; they are not stable enough. Rather, use a system thread or system events.
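As a minimal sketch of the tip above, a dedicated thread can schedule beats against a Stopwatch instead of Update() or a coroutine. This is plain .NET code, not an MPTK API; the class and method names are made up for the example:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

public static class BeatThread
{
    // Fire 'onBeat' on each beat from a dedicated thread. Each beat is timed
    // against a Stopwatch started at the beginning (absolute scheduling), so
    // small sleep inaccuracies do not accumulate into drift.
    public static void RunBeats(double bpm, int beatCount, Action<int> onBeat)
    {
        double msPerBeat = 60000.0 / bpm;
        var worker = new Thread(() =>
        {
            var clock = Stopwatch.StartNew();
            for (int beat = 0; beat < beatCount; beat++)
            {
                double due = beat * msPerBeat;   // absolute time of this beat
                while (clock.Elapsed.TotalMilliseconds < due)
                    Thread.Sleep(1);
                onBeat(beat);
            }
        });
        worker.Start();
        worker.Join();   // wait for all beats (only for this demo; a real app would not block)
    }
}
```

Inside onBeat you could, for instance, call MPTK_PlayEvent on a MidiStreamPlayer to trigger a note on every beat.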
Synth Parameters – Impact on Performance
The main synthesizer parameters which impact performance are:
Interpolation: the process of using two or more data samples to create a data sample between them.
Rate: the rate at which the Unity internal synth (DSP) plays the samples.
Buffer size: the length of the sample buffer, which is rebuilt at each cycle of the DSP.
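The latency contributed by the buffer follows directly from these two parameters, as in this illustrative helper (not part of the MPTK API):

```csharp
using System;

public static class DspMath
{
    // Latency in milliseconds of one DSP cycle: the synth must fill
    // 'bufferSize' samples before they can be heard at 'sampleRate' Hz.
    public static double BufferLatencyMs(int bufferSize, int sampleRate)
    {
        return 1000.0 * bufferSize / sampleRate;
    }
}
```

For example, a 1024-sample buffer at 48,000 Hz adds about 21.3 ms of latency; halving the buffer size halves that latency, at the cost of more frequent DSP cycles.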
The measurement below was done with the demo TestMidiStream. With this configuration, 25 voices play simultaneously:
Piano with one sample
New note every 0.2 second
Duration of 5 seconds
Effect Parameters – Impact on Performance
Effects are defined in the SoundFont for each instrument, so each of them can have a dedicated set of effects. See here. Conversely, Unity effects apply to the whole AudioSource, so the same effect is applied to all the instruments in the Midi.
Remark:
Effects are available only with the Pro version of MPTK.
By default, effects are disabled in MPTK prefab. You have to enable them from the dedicated inspector or by script with the MPTK API.
The measurement was done only for the SoundFont effects.
First test:
Reed organ with two samples
One continuous note
Second test:
Piano with one sample or reed organ with two samples
This video is a comparison of three Midi files played in V1 and V2. For the first one (Adagio) the difference is not so obvious, but for the two others it's clear!
New Midi synthesizer based on the excellent fluidsynth project. The code has been converted from C to C# and, of course, adapted to Unity.
New class MidiLoad: useful to load and process all the Midi events from a Midi file without playing any sound.
Added karaoke capabilities: Midi meta events can contain lyrics synchronized with the music. See the demo TestMidiFilePlayerScripting.cs and the screenshot below. Also some information on the format here and a site here.
New SoundFont format: loading time and size divided by 20, and more compliant with the design of generators, modulators, envelopes …