Synth is a basic synthesizer with a single oscillator.
PlayTone starts the note (the amplitude rises from 0), and StopTone releases it (the amplitude ramps back to 0, i.e. note off).
PlayStopTone is a combination of PlayTone and StopTone.
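A minimal sketch of starting and releasing a note. The `Music` namespace, the `Destination` node, and the `connect`/constructor shape are assumptions based on how this guide describes the API, not a published library:

```javascript
// Hypothetical API modeled on this guide's naming.
const synth = new Music.Synth();
synth.connect(Music.Destination); // route the synth to the speakers

synth.PlayTone("C4"); // note on: amplitude rises
// ...sometime later...
synth.StopTone();     // note off: amplitude ramps back to 0
```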
The first argument is the note, which can be given either as a frequency in hertz (like 440) or in “pitch-octave” notation (like "D#2").
The second argument is the duration that the note is held. This value can be given either in seconds or as a tempo-relative value.
The third (optional) argument of PlayStopTone is when, along the AudioContext time, the note should play. It can be used to schedule events in the future.
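Putting the three arguments together — a sketch assuming the construction shape above (the `Music` namespace and `connect` call are assumptions from this guide's naming):

```javascript
const synth = new Music.Synth();
synth.connect(Music.Destination);

// note as a frequency in hertz, held for 0.5 seconds
synth.PlayStopTone(440, 0.5);

// note in pitch-octave notation, held for an eighth note ("8n")
synth.PlayStopTone("D#2", "8n");
```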
Web Audio has advanced, sample-accurate scheduling capabilities. The AudioContext time is what the Web Audio API uses to schedule events; it starts at 0 when the page loads and counts up in seconds.
Music.now() gets the current time of the AudioContext.
Music abstracts away the AudioContext time. Instead of defining all values in seconds, any method which takes time as an argument can accept a number or a string. For example "4n" is a quarter-note, "8t" is an eighth-note triplet, and "1m" is one measure.
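Using `Music.now()` and tempo-relative durations together, events can be scheduled ahead along the AudioContext timeline. A sketch, assuming the API described above:

```javascript
const synth = new Music.Synth();
synth.connect(Music.Destination);

const now = Music.now(); // current AudioContext time, in seconds
// schedule a short arpeggio into the future
synth.PlayStopTone("C4", "8n", now);
synth.PlayStopTone("E4", "8n", now + 0.5);
synth.PlayStopTone("G4", "8n", now + 1);
```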
music.Transport is the main timekeeper. Unlike the AudioContext clock, it can be started, stopped, looped and adjusted on the fly. You can think of it like the arrangement view in a Digital Audio Workstation or channels in a Tracker.
Multiple events and parts can be arranged and synchronized along the Transport. music.Loop is a simple way to create a looped callback that can be scheduled to start and stop.
Since JavaScript callbacks are not precisely timed, the sample-accurate time of the event is passed into the callback function. Use this time value to schedule the events.
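A sketch of a looped callback synchronized to the Transport. The `music.Loop` signature and `start` method are assumptions modeled on the description above:

```javascript
const synth = new Music.Synth();
synth.connect(Music.Destination);

// play a note every quarter note while the Transport is running
const loop = new music.Loop((time) => {
  // use the sample-accurate `time` passed into the callback,
  // not Music.now(), to schedule the event
  synth.PlayStopTone("C2", "8n", time);
}, "4n");

loop.start(0);            // start the loop at the beginning of the Transport
music.Transport.start();  // nothing sounds until the Transport is started
```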
There are numerous synths to choose from including FM, AM and Noise.
All of these instruments are monophonic (single voice) which means that they can only play one note at a time.
To create a polyphonic synthesizer, use CreatePolySynth, which accepts a monophonic synth as its first parameter and automatically handles the note allocation so you can pass in multiple notes. The API is similar to the monophonic synths, except StopTone must be given a note or array of notes.
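A polyphony sketch. Passing the monophonic constructor to `CreatePolySynth`, and passing an array of notes, follow the description above but are otherwise assumptions:

```javascript
// CreatePolySynth manages voice allocation over the mono Synth
const poly = CreatePolySynth(Music.Synth);
poly.connect(Music.Destination);

const now = Music.now();
poly.PlayTone(["C4", "E4", "G4"], now);     // a chord: several voices at once
poly.StopTone(["C4", "E4", "G4"], now + 1); // StopTone needs the note(s) to release
```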
Sound generation is not limited to synthesized sounds. You can also load a sample and play that back in a number of ways. CreatePlayer is one way to load and play back an audio file.
music.SetOnLoaded invokes its callback when all audio files are loaded. It’s a helpful shorthand instead of waiting on each individual audio buffer’s onload event to resolve.
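Loading and playing back a sample might look like this. The file path and the `start` method name are illustrative assumptions:

```javascript
const player = CreatePlayer("samples/loop.mp3"); // path is illustrative
player.connect(Music.Destination);

music.SetOnLoaded(() => {
  // every pending buffer has finished loading; safe to start playback
  player.start();
});
```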
Multiple samples can also be combined into an instrument. If you have audio files organized by note, CreateSampler will pitch shift the samples to fill in gaps between notes. So for example, if you only have every 3rd note on a piano sampled, you could turn that into a full piano sample.
Unlike the other synths, CreateSampler is polyphonic, so it doesn’t need to be passed into CreatePolySynth.
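A sampler sketch with a sparse set of sampled notes. The note-to-URL map shape and file paths are assumptions based on the description above:

```javascript
// only a few notes are sampled; gaps are filled by pitch shifting
const sampler = CreateSampler({
  "C3": "samples/C3.mp3",
  "D#3": "samples/Ds3.mp3",
  "F#3": "samples/Fs3.mp3",
});
sampler.connect(Music.Destination);

music.SetOnLoaded(() => {
  // "E3" has no sample of its own, so the nearest sample is repitched
  sampler.PlayStopTone("E3", "2n");
});
```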
In the above examples, the sources were always connected directly to the Destination, but the output of the synth could also be routed through one (or more) effects before going to the speakers.
The connection routing is very flexible. Connections can run serially or in parallel.
Multiple nodes can be connected to the same input, enabling sources to share effects. music.Gain is a very useful utility node for creating complex routing.
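A routing sketch showing both serial and shared connections. `Music.Distortion` is a hypothetical effect name used purely for illustration; the `connect` chaining follows standard Web Audio conventions:

```javascript
// serial: synth -> effect -> Destination
const dist = new Music.Distortion(); // hypothetical effect node
const synthA = new Music.Synth();
synthA.connect(dist);
dist.connect(Music.Destination);

// shared: two sources feed one Gain, which feeds the same effect
const gain = new music.Gain();
const synthB = new Music.Synth();
const player = CreatePlayer("samples/loop.mp3");
synthB.connect(gain);
player.connect(gain);
gain.connect(dist); // both sources now pass through the distortion
```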
Like the underlying Web Audio API, Music is built with audio-rate signal control over nearly everything. This is a powerful feature which allows for sample-accurate synchronization and scheduling of parameters.
Signal properties have a few built-in methods for creating automation curves.
For example, the frequency parameter on Oscillator is a Signal so you can create a smooth ramp from one frequency to another.
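A sketch of ramping an oscillator's frequency Signal. The `rampTo` method name is an assumption for illustration; only the fact that `frequency` is a rampable Signal comes from the text above:

```javascript
const osc = new Music.Oscillator();
osc.connect(Music.Destination);
osc.start();

// `frequency` is a Signal: glide smoothly from A4 to A5 over 4 seconds
osc.frequency.rampTo("A5", 4); // method name assumed
```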