Use an AudioBufferSourceNode as the source of the sound sample in the Web Audio graph.
There is a special node in Web Audio for handling sound samples, called an AudioBufferSourceNode.
This node has several properties:
buffer: the decoded sound sample.
loop: a boolean indicating whether the sample should be played as an infinite loop: when the sample has played to its end, it is restarted from the beginning. Its default value is false. The looping behavior also depends on the next two properties.
loopStart: a double value indicating, in seconds, at what point in the buffer playback must restart when looping. Its default value is 0.
loopEnd: a double value indicating, in seconds, at what point in the buffer playback must stop (and possibly loop again). Its default value is 0, which is interpreted as the end of the buffer.
playbackRate: the speed factor at which the audio asset will be played. Since no pitch correction is applied on the output, this can be used to change the pitch of the sample.
detune: not relevant for this course.
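As an illustrative sketch (not taken from the course code), here is how these properties could be combined to loop a one-second section of a decoded sample at a higher pitch; ctx is assumed to be an existing AudioContext and buffer a decoded AudioBuffer:

```javascript
function playLoopedSection(ctx, buffer) {
  var source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;       // keep looping between loopStart and loopEnd
  source.loopStart = 0.5;   // restart point of the loop, in seconds
  source.loopEnd = 1.5;     // end of the looped section, in seconds
  source.playbackRate.value = 2; // twice as fast: one octave higher
  source.connect(ctx.destination);
  source.start();
  return source;
}
```

Remember that such a source node can only be started once; to play the looped section again later, rebuild it with a fresh createBufferSource() call.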
Before use, a sound sample must be loaded using Ajax, decoded, and set to the buffer property of an AudioBufferSourceNode.
In this example, as soon as the page is loaded, we send an Ajax request to a remote server in order to get the file shoot2.mp3. Once the file has been downloaded, we decode it, and only then do we enable the button (until that point the sample was not available, and thus could not be played). You can then click on the button to play the sound.
Notice in the code that each time we click on the button, we rebuild the audio graph.
HTML code extract:
<button id="playButton" disabled>Play sound</button>
JavaScript source code:
var ctx;
var soundURL =
'https://mainline.i3s.unice.fr/mooc/shoot2.mp3';
var decodedSound;
window.onload = function init() {
// The page has been loaded
// To make it work even on browsers like Safari, that still
// do not recognize the non prefixed version of AudioContext
var audioContext = window.AudioContext || window.webkitAudioContext;
ctx = new audioContext();
loadSoundUsingAjax(soundURL);
// By default the button is disabled; it becomes
// clickable only once the sound sample has been loaded
playButton.onclick = function(evt) {
playSound(decodedSound);
};
};
function loadSoundUsingAjax(url) {
var request = new XMLHttpRequest();
request.open('GET', url, true);
// Important: we're loading binary data
request.responseType = 'arraybuffer';
// Decode asynchronously
request.onload = function() {
console.log("Sound loaded");
// Let's decode it. This is also asynchronous
ctx.decodeAudioData(request.response,
function(buffer) { // success
console.log("Sound decoded");
decodedSound = buffer;
// we enable the button
playButton.disabled = false;
},
function(e) { // error
console.log("error");
}
); // end of decodeAudioData callback
}; // end of the onload callback
// Send the request. When the file has been loaded,
// the request.onload callback (above) will be called
request.send();
}
function playSound(buffer){
// builds the audio graph, then starts playing the source
var bufferSource = ctx.createBufferSource();
bufferSource.buffer = buffer;
bufferSource.connect(ctx.destination);
bufferSource.start(); // remember, you can start() a source only once!
}
When the page is loaded, we first call the loadSoundUsingAjax function to load and decode the sound sample, then we define a click listener for the play button. Loading and decoding the sound takes some time and is an asynchronous process: the call to loadSoundUsingAjax returns while the downloading and decoding are still in progress. We can define a click listener on the button anyway, as the button is disabled by default (see the HTML code). Only once the sample has been loaded and decoded is the button enabled, in the success callback passed to decodeAudioData.
The loadSoundUsingAjax function first creates an XMLHttpRequest using the "new version of Ajax, called XHR2" (described in detail during week 3). Notice the use of 'arraybuffer' as the responseType of the request: this was introduced by XHR2 and is necessary for binary file transfer. Then the request is sent with request.send().
Ajax is an asynchronous process: once the browser has received the requested file, the request.onload callback is called, and we can decode the file (an mp3, whose content must be decompressed in memory). This is done by calling ctx.decodeAudioData(file, successCallback, errorCallback). When the file has been decoded, the success callback is called: we store the decoded buffer in the variable decodedSound, and we enable the button.
Now, when someone clicks on the button, the playSound function is called. This function builds a simple audio graph: it creates an AudioBufferSourceNode, sets its buffer property to the decoded sample, connects this source to the speakers (ctx.destination), and starts playback. Source nodes can only be used once (a "fire and forget" philosophy), so to play the sound again, we have to rebuild a new source node and connect it to the destination. This seems strange when you are learning Web Audio, but don't worry: it is a very fast operation, even with hundreds of nodes.
The asynchronous nature of Ajax has always been problematic for beginners. For example, if our application uses multiple sound samples and we need to be sure that all of them are loaded and decoded, the code we presented in the earlier example will not work as is. We cannot call:
loadSoundSample(urlOfSound1);
loadSoundSample(urlOfSound2);
loadSoundSample(urlOfSound3);
etc...
... because we would never know exactly when all the sounds have finished being loaded and decoded. All these calls run their work in the background yet return immediately.
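In modern browsers there is also a promise-based way to sidestep this problem, using fetch together with the promise form of decodeAudioData and Promise.all. This is a sketch of that alternative (assuming browser support for both APIs), not the BufferLoader used in this course:

```javascript
function loadSample(ctx, url) {
  // fetch the file as binary data, then decode it
  return fetch(url)
    .then(function(response) { return response.arrayBuffer(); })
    .then(function(data) { return ctx.decodeAudioData(data); });
}

function loadAllSamples(ctx, urls) {
  // Promise.all resolves only when every sample has been loaded and
  // decoded, and keeps the decoded buffers in the same order as the URLs
  return Promise.all(urls.map(function(url) { return loadSample(ctx, url); }));
}

// Hypothetical usage:
// loadAllSamples(ctx, listOfSoundSamplesURLs).then(function(buffers) {
//   // all buffers are ready: enable the play buttons here
// });
```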
There are different approaches for dealing with this problem. During the HTML5 Coding Essentials and Best Practices course, we presented utility functions for loading multiple images. Here we use the same approach and have packaged the code into an object called the BufferLoader.
HTML code:
<button id="shot1Normal" disabled>Shot 1</button>
<button id="shot2Normal" disabled>Shot 2</button>
JavaScript code extract (does not contain the BufferLoader utility code):
var listOfSoundSamplesURLs = [
'https://mainline.i3s.unice.fr/mooc/shoot1.mp3',
'https://mainline.i3s.unice.fr/mooc/shoot2.mp3'
];
window.onload = function init() {
// To make it work even on browsers like Safari, that still
// do not recognize the non prefixed version of AudioContext
var audioContext = window.AudioContext || window.webkitAudioContext;
ctx = new audioContext();
loadAllSoundSamples();
};
function playSampleNormal(buffer){
// builds the audio graph and plays the sample
var bufferSource = ctx.createBufferSource();
bufferSource.buffer = buffer;
bufferSource.connect(ctx.destination);
bufferSource.start();
}
function onSamplesDecoded(buffers){
console.log("all samples loaded and decoded");
// enables the buttons
shot1Normal.disabled=false;
shot2Normal.disabled=false;
// creates the click listeners on the buttons
shot1Normal.onclick = function(evt) {
playSampleNormal(buffers[0]);
};
shot2Normal.onclick = function(evt) {
playSampleNormal(buffers[1]);
};
}
function loadAllSoundSamples() {
// onSamplesDecoded will be called when all samples
// have been loaded and decoded; the array of decoded
// samples will be its only parameter (see function above)
var bufferLoader = new BufferLoader(ctx, listOfSoundSamplesURLs, onSamplesDecoded);
// starts loading and decoding the files
bufferLoader.load();
}
After the call to loadAllSoundSamples(), once all the sound sample files have been loaded and decoded, the onSamplesDecoded(decodedSamples) callback is called. The array of decoded samples is the only parameter of the onSamplesDecoded function.
The BufferLoader utility object is created in the loadAllSoundSamples function and takes three parameters: 1) the audio context, 2) an array listing the URLs of the different audio files to be loaded and decoded, and 3) the callback function to be called once all the files have been loaded and decoded. This callback function receives one parameter: the array of decoded sound samples.
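The BufferLoader utility code itself is not shown in this extract, but a minimal sketch of such a utility could look like the following (a hypothetical implementation, not the actual course code): it loads every URL with XHR2, decodes each response, and calls the callback once the last buffer is ready, preserving the order of the URLs.

```javascript
function BufferLoader(context, urlList, onAllDecoded) {
  this.context = context;
  this.urlList = urlList;
  this.onAllDecoded = onAllDecoded;
  this.bufferList = new Array(urlList.length);
  this.loadCount = 0;
}

BufferLoader.prototype.loadBuffer = function(url, index) {
  var loader = this;
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer'; // binary transfer, as before
  request.onload = function() {
    loader.context.decodeAudioData(request.response,
      function(buffer) { // success
        // store at the original index so the order of URLs is preserved
        loader.bufferList[index] = buffer;
        if (++loader.loadCount === loader.urlList.length) {
          loader.onAllDecoded(loader.bufferList);
        }
      },
      function(e) { // error
        console.error('Error decoding ' + url, e);
      }
    );
  };
  request.send();
};

BufferLoader.prototype.load = function() {
  for (var i = 0; i < this.urlList.length; i++) {
    this.loadBuffer(this.urlList[i], i);
  }
};
```

The key idea is the loadCount counter: each successful decode increments it, and only when it reaches the number of URLs is the callback fired, so the caller never sees a partially filled array.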
This is a variant of the previous example (the picture was taken with the now discontinued Firefox Web Audio debugger; you should get similar results with the Audion extension for Chrome).
In this example, we added a function (borrowed and adapted from this article on HTML5Rocks):
makeSource(buffer)
Here is the source code of this function:
function makeSource(buffer) {
// build graph source -> gain -> compressor -> speakers
// We use a compressor at the end to tame the parts
// of the signal that would otherwise peak
// create the nodes
var source = ctx.createBufferSource();
var compressor = ctx.createDynamicsCompressor();
var gain = ctx.createGain();
// set their properties
// Not all shots will have the same volume
gain.gain.value = 0.2 + Math.random();
source.buffer = buffer;
// Build the graph
source.connect(gain);
gain.connect(compressor);
compressor.connect(ctx.destination);
return source;
}
And this is the function that plays the same sample several times in a row, possibly with random time intervals between the shots and random pitch variations:
function playSampleRepeated(buffer, rounds, interval, random, random2) {
if (typeof random == 'undefined') {
random = 0;
}
if (typeof random2 == 'undefined') {
random2 = 0;
}
var time = ctx.currentTime;
// Make multiple sources using the same buffer and play in quick succession.
for (var i = 0; i < rounds; i++) {
var source = makeSource(buffer);
source.playbackRate.value = 1 + Math.random() * random2;
source.start(time + i * interval + Math.random() * random);
}
}
In the for loop, we build multiple routes in the graph: the number of routes corresponds to the number of times we want the same buffer to be played. Note that the random2 parameter enables us to randomize the playbackRate of the source node, which corresponds to the pitch of the sound.
The call to source.start(time + i * interval + Math.random() * random) is where the sound is scheduled. Instead of calling source.start() with no argument (play immediately), we call source.start(when): this tells the Web Audio scheduler to play the sound at the given time, measured on the AudioContext clock (ctx.currentTime).
The makeSource function builds a graph from one decoded sample to the speakers. A gain node is added whose value is randomized in order to generate shot sounds with different volumes (between 0.2 and 1.2 in this example). A compressor node is added in order to limit the maximum intensity of the signal in case the gain makes it peak.
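To make the scheduling aspect concrete, here is a simplified sketch (not the course code) where the start times are computed from the AudioContext clock; makeSource stands in for any function returning a ready-to-play source node:

```javascript
function scheduleBurst(ctx, buffer, makeSource, rounds, interval) {
  // times passed to start() are absolute, on the AudioContext clock
  var now = ctx.currentTime;
  var startTimes = [];
  for (var i = 0; i < rounds; i++) {
    var source = makeSource(buffer);
    var when = now + i * interval; // i-th shot, `interval` seconds apart
    source.start(when);
    startTimes.push(when);
  }
  return startTimes;
}
```

Because start(when) takes an absolute time rather than a delay, reading ctx.currentTime once before the loop guarantees that all the shots are spaced evenly, regardless of how long the loop itself takes to run.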