In order to build a "volume meter" that moves up and down with the intensity of the music, we will compute the average intensity over the frequency bins and draw this value as a nice gradient-filled rectangle.
Here are the two functions we will call from the animation loop:
function drawVolumeMeter() {
  canvasContext.save();

  analyser.getByteFrequencyData(dataArray);
  var average = getAverageVolume(dataArray);

  // set the fill style to a nice gradient
  canvasContext.fillStyle = gradient;

  // draw the vertical meter
  canvasContext.fillRect(0, height - average, 25, height);

  canvasContext.restore();
}
function getAverageVolume(array) {
  var values = 0;
  var average;
  var length = array.length;

  // get all the frequency amplitudes
  for (var i = 0; i < length; i++) {
    values += array[i];
  }

  average = values / length;
  return average;
}
Note that we measure the intensity with the call to analyser.getByteFrequencyData(dataArray); once the frequency analysis data has been copied into dataArray, we call the getAverageVolume function to compute the average value, which we then draw as the volume meter.
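The drawVolumeMeter function assumes that the analyser node and the dataArray have already been created when the audio graph was built. As a reminder, a minimal sketch of that setup could look like this (the fftSize value is only an example):

// sketch: create the analyser and the array that will receive
// the frequency analysis data (done once, when building the audio graph)
analyser = audioContext.createAnalyser();
analyser.fftSize = 256;
bufferLength = analyser.frequencyBinCount; // fftSize / 2 bins
dataArray = new Uint8Array(bufferLength);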
This is how we create the gradient:
// create a vertical gradient of the height of the canvas
gradient = canvasContext.createLinearGradient(0, 0, 0, height);
gradient.addColorStop(1, '#000000');
gradient.addColorStop(0.75, '#ff0000');
gradient.addColorStop(0.25, '#ffff00');
gradient.addColorStop(0, '#ffffff');
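For context, this gradient needs the canvas 2D context and the canvas height, obtained in the one-time setup code. A minimal sketch of that setup could look like this (the canvas id and variable names are assumptions, not taken from the course source):

// one-time setup (sketch): get the canvas, its 2D context and its size
var canvas = document.getElementById('myCanvas'); // assumed id
var canvasContext = canvas.getContext('2d');
var width = canvas.width;
var height = canvas.height;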
And here is what the new animation loop looks like (for the sake of clarity, we have moved the code that draws the signal waveform to a separate function):
function visualize() {
  clearCanvas();

  drawVolumeMeter();
  drawWaveform();

  // call the visualize function again, at approximately 60 frames/s
  requestAnimationFrame(visualize);
}
Notice that we used the best practices seen in week 3 of the HTML5 Part 1 course: we saved and restored the context in every function that changes something in the canvas context (see the drawVolumeMeter and drawWaveform functions in the source code).
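As an illustration of this save/restore pattern, a drawWaveform function could look roughly like the sketch below. This is an approximation, not the exact course code; it uses the standard getByteTimeDomainData approach and the analyser, dataArray, bufferLength, width and height variables introduced earlier:

function drawWaveform() {
  canvasContext.save();

  // copy the time domain data (the waveform) into dataArray
  analyser.getByteTimeDomainData(dataArray);

  canvasContext.strokeStyle = 'lightblue';
  canvasContext.beginPath();

  var sliceWidth = width / bufferLength;
  var x = 0;
  for (var i = 0; i < bufferLength; i++) {
    // dataArray values are in [0, 255], 128 being the zero level
    var v = dataArray[i] / 255;
    var y = v * height;
    if (i === 0) {
      canvasContext.moveTo(x, y);
    } else {
      canvasContext.lineTo(x, y);
    }
    x += sliceWidth;
  }

  canvasContext.stroke();
  canvasContext.restore();
}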
This time, let's split the audio signal and create a separate analyser for each output channel. We keep the analyser node used to draw the waveform, as it works on the full stereo signal (and is connected to the destination so that we can still hear the audio).
We added a stereoPanner node right after the source and a left/right balance slider to control its pan property. Use this slider to see how the left and right volume meters react.
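A minimal sketch of such a slider handler could look like this (the slider id is an assumption; stereoPanner.pan is a standard AudioParam whose value goes from -1, full left, to +1, full right):

// sketch: react to the balance slider (assumed to be an
// <input type="range"> with id "balanceSlider", going from -1 to 1)
var balanceSlider = document.getElementById('balanceSlider');
balanceSlider.oninput = function(evt) {
  // pan is an AudioParam: -1 = full left, 0 = center, +1 = full right
  stereoPanner.pan.value = parseFloat(evt.target.value);
};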
In order to isolate the left and the right channels (to create individual volume meters), we used a new node: the ChannelSplitter node. From this node, we created two routes, each going to a separate analyser (see the two splitter.connect(...) calls in the example below).
See the ChannelSplitterNode's documentation. Notice that there is also a ChannelMergerNode for merging multiple routes into a single stereo signal.
Use the connect method with extra parameters to connect the different outputs of the channel splitter node (a short sketch follows this list):
connect(node, 0, 0) to connect the left output channel (output 0) to another node,
connect(node, 1, 0) to connect the right output channel (output 1) to another node.
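As a sketch (not part of the course example), this is how a splitter and a merger could be wired together, using the connect(destination, outputIndex, inputIndex) form; someStereoNode is a placeholder for any stereo node:

// sketch: split a stereo signal, then merge it back
var splitter = audioContext.createChannelSplitter(2);
var merger = audioContext.createChannelMerger(2);

someStereoNode.connect(splitter);
// left channel: output 0 of the splitter to input 0 of the merger
splitter.connect(merger, 0, 0);
// right channel: output 1 of the splitter to input 1 of the merger
splitter.connect(merger, 1, 1);

merger.connect(audioContext.destination);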
This is the audio graph we've built (picture taken with the now discontinued Firefox WebAudio debugger; you should get similar results with the Chrome WebAudio Inspector extension):
As you can see, there are two routes: the one on top sends the output signal to the speakers and uses an analyser node to animate the waveform, while the one at the bottom splits the signal and sends its left and right parts to separate analyser nodes, which draw the two volume meters. Just before the split, we added a stereoPanner so that the left/right balance can be adjusted with a slider.
Source code extract:
function buildAudioGraph() {
  var mediaElement = document.getElementById('player');
  var sourceNode = audioContext.createMediaElementSource(mediaElement);

  // connect the source node to a stereo panner
  stereoPanner = audioContext.createStereoPanner();
  sourceNode.connect(stereoPanner);

  // Create an analyser node for the waveform
  analyser = audioContext.createAnalyser();

  // Use an FFT size adapted to waveform drawing
  analyser.fftSize = 1024;
  bufferLength = analyser.frequencyBinCount;
  dataArray = new Uint8Array(bufferLength);

  // Connect the stereo panner to the analyser,
  stereoPanner.connect(analyser);
  // and the analyser to the destination
  analyser.connect(audioContext.destination);

  // End of route 1. We start another route from the stereoPanner node,
  // with two analysers for the stereo volume meters.
  // Here we use a small FFT size since we only need the
  // frequency analysis data to compute an average volume.
  analyserLeft = audioContext.createAnalyser();
  analyserLeft.fftSize = 256;
  bufferLengthLeft = analyserLeft.frequencyBinCount;
  dataArrayLeft = new Uint8Array(bufferLengthLeft);

  analyserRight = audioContext.createAnalyser();
  analyserRight.fftSize = 256;
  bufferLengthRight = analyserRight.frequencyBinCount;
  dataArrayRight = new Uint8Array(bufferLengthRight);

  // Split the signal
  splitter = audioContext.createChannelSplitter();

  // Connect the stereo panner to the splitter node
  stereoPanner.connect(splitter);

  // Connect each output of the splitter to an analyser:
  // output 0 (left) and output 1 (right)
  splitter.connect(analyserLeft, 0, 0);
  splitter.connect(analyserRight, 1, 0);

  // No need to connect these analysers to anything else: the sound
  // already reaches the destination through the route that goes
  // through the waveform analyser
}
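Note that buildAudioGraph assumes that audioContext already exists. A minimal sketch of how the context could be created and the graph built is shown below; this is an assumption about the surrounding code, not a verbatim extract. Depending on the browser's autoplay policy, the context may start suspended, so resuming it on a user gesture (such as pressing play) does not hurt:

// sketch: create the audio context and build the graph once
var audioContext = new (window.AudioContext || window.webkitAudioContext)();

window.onload = function() {
  buildAudioGraph();
  requestAnimationFrame(visualize);
};

// resume the context on a user gesture if the autoplay policy suspended it
document.getElementById('player').onplay = function() {
  audioContext.resume();
};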
And here is the new function for drawing the two volume meters:
function drawVolumeMeters() {
  canvasContext.save();

  // set the fill style to a nice gradient
  canvasContext.fillStyle = gradient;

  // left channel
  analyserLeft.getByteFrequencyData(dataArrayLeft);
  var averageLeft = getAverageVolume(dataArrayLeft);
  // draw the vertical meter for the left channel
  canvasContext.fillRect(0, height - averageLeft, 25, height);

  // right channel
  analyserRight.getByteFrequencyData(dataArrayRight);
  var averageRight = getAverageVolume(dataArrayRight);
  // draw the vertical meter for the right channel
  canvasContext.fillRect(26, height - averageRight, 25, height);

  canvasContext.restore();
}
The code is very similar to the previous version: instead of a single meter, we draw two rectangles side by side, one for each analyser node.
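In this stereo version, the animation loop would presumably call drawVolumeMeters instead of drawVolumeMeter, along these lines (a sketch based on the earlier visualize function):

function visualize() {
  clearCanvas();

  // the two stereo meters plus the waveform of the full signal
  drawVolumeMeters();
  drawWaveform();

  requestAnimationFrame(visualize);
}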
Note that the proposed examples are fine for making things "dance to the music", but they are rather inaccurate if you are looking for a real volume meter. The results may also change if you modify the fftSize in the analyser node properties. There are accurate implementations of volume meters in WebAudio (check this volume meter example), but they use nodes that are out of the scope of this course.
We can also propose another approximation that gives more stable results, using the getFloatTimeDomainData method, which returns the time-domain samples as floating-point values between -1 and 1. It is then enough to draw the actual level of the waveform. This is still not perfect, since the canvas is redrawn only 60 times per second (i.e. at 60 Hz), whereas the audio sampling rate is most often 44.1 kHz. But it is closer than the simplified method proposed in this section (which averages the amplitudes of the frequencies present in the signal), and it keeps the same levels whatever fftSize we use (whereas the volume can change slightly with the simplified method).
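A minimal sketch of this approach could look like the following (the function and variable names are assumptions; getFloatTimeDomainData fills a Float32Array with fftSize samples between -1 and 1):

// sketch: a more stable meter based on the time-domain signal
var floatDataArray = new Float32Array(analyser.fftSize);

function drawTimeDomainMeter() {
  canvasContext.save();

  analyser.getFloatTimeDomainData(floatDataArray);

  // take the peak absolute value of the samples as the level
  var peak = 0;
  for (var i = 0; i < floatDataArray.length; i++) {
    var v = Math.abs(floatDataArray[i]);
    if (v > peak) peak = v;
  }

  // peak is between 0 and 1: scale it to the canvas height
  var meterHeight = peak * height;
  canvasContext.fillStyle = gradient;
  canvasContext.fillRect(0, height - meterHeight, 25, meterHeight);

  canvasContext.restore();
}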
Here is a CodePen example made by a student, using the latter method. This example is also interesting because it offers several original visualizations: