Hello! Using Web Audio with streamed content is really easy.
I’m going to show you the simplest example we can do, directly on JSBin.
I first create an audio element, using the standard way.
Here we are, ok, [music] so this is just the standard audio element.
I will add the crossorigin="anonymous" attribute because when we stream audio content and control it with Web Audio, we need to follow the “same-origin policy” constraint.
That means that the HTML page and the JavaScript code that manipulate the content of the stream should normally be located on the same server as the audio file.
This is not the case here because the HTML I’m typing is hosted on jsbin.com.
By adding the crossorigin="anonymous" attribute, the browser will send different HTTP headers (CORS headers) to the mainline.i3s.unice.fr server.
And this server has been configured to accept external requests.
So this will make the example work.
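For reference, here is a minimal sketch of the element described so far (same MP3 URL as the full example below):

<audio src="https://mainline.i3s.unice.fr/mooc/guitarRiff1.mp3"
       controls crossorigin="anonymous"></audio>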
From JavaScript, if I want to get the audio stream, I must first wait until the page is loaded.
I’m writing a window.onload event listener, and the first thing I do is to get a handle on the audio player.
I need to add an id attribute here… Ok, like that!
Just as the canvas uses a graphics context, here, with Web Audio, we use an audio context.
So, I created the context.
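In code, these two steps look like this (a sketch, assuming the audio element has id="player", as in the full example below):

window.onload = () => {
  // get a handle on the audio element
  let player = document.getElementById("player");
  // create the audio context, the Web Audio counterpart of the canvas graphics context
  let ctx = new AudioContext();
};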
Now, I can create a special source node using context.createMediaElementSource, that takes as a single parameter a video or an audio element.
I’m using the player variable here, that corresponds to the audio element.
Now, I connect this source to the destination.
The destination is a special node that corresponds to the speakers.
Each node has a connect and a disconnect method.
I’m using the connect method here, and I’m using ctx.destination as the destination node.
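Continuing inside the same onload handler from the sketch above, the source and its connection are just two lines:

let source = ctx.createMediaElementSource(player);
source.connect(ctx.destination); // route the stream straight to the speakers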
If I play it [music], the stream is going directly to the speakers.
And if I comment this line, the stream is disconnected and nothing is output.
Once you get a handle on the audio stream using createMediaElementSource, the behavior of the audio element is changed: the whole audio signal is routed to your own audio graph, not to the default route that goes to the speakers.
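As a small illustration of this routing, the connect and disconnect methods let you toggle where the signal goes (reusing the source variable from the sketch above):

source.disconnect();             // the element still plays, but nothing reaches the speakers
source.connect(ctx.destination); // the sound flows to the speakers again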
And I can visualize this graph.
With JSBin, I need to be in standalone mode, like that.
I open the devtools, I check that Web Audio is activated, here, and now I can go to the Web Audio tab, reload the page, and I can see my graph here.
Ok, that was all for this very first lesson.
You learnt how to make the simplest audio graph possible, and how to visualize the graph.
In the next lesson, we will add different nodes in the middle, here, for processing the sound: controlling the stereo, building filters like an equalizer, and things like that.
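As a preview, here is a minimal sketch of inserting one such node “in the middle”, using a StereoPannerNode (the exact nodes covered in the next lessons may differ):

let panner = ctx.createStereoPanner();
panner.pan.value = -0.5;         // shift the sound toward the left speaker
source.connect(panner);          // source -> panner
panner.connect(ctx.destination); // panner -> speakers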
In the previous lesson, we encountered the MediaElementSource node that is used for routing the sound from a <video> or <audio> element stream. The above video shows how to build a simple example step by step, and how to set up Firefox for debugging Web Audio applications and visualizing the audio graph.
Typical use:
HTML code:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Web Audio simplest example</title>
  </head>
  <body>
    <audio id="player"
           src="https://mainline.i3s.unice.fr/mooc/guitarRiff1.mp3"
           controls crossorigin="anonymous"></audio>
  </body>
</html>
JS code:
let ctx;

window.onload = () => {
  // the page is loaded: create the audio context
  ctx = new AudioContext();

  // the most simple source: the audio element itself
  let player = document.getElementById("player");

  // browsers keep the AudioContext suspended until a user gesture,
  // so resume it when the user presses play
  player.onplay = (e) => {
    ctx.resume();
  };

  let source = ctx.createMediaElementSource(player);

  // route the source directly to the speakers
  source.connect(ctx.destination);
};
The MediaElementSource node is built using context.createMediaElementSource(elem), where elem is an <audio> or a <video> element.
Then we connect this source node to other nodes. If we connect it directly to context.destination, the sound goes to the speakers with no additional processing.
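For example, here is a minimal variation that routes the source through a GainNode to control the volume before it reaches the speakers (the 0.5 value is arbitrary):

let gainNode = ctx.createGain();
gainNode.gain.value = 0.5;         // play at half volume
source.connect(gainNode);          // source -> gain
gainNode.connect(ctx.destination); // gain -> speakers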
In the following lessons, we will see the different nodes that are useful with streamed audio and with the MediaElementSource node. Adding them in the audio graph will enable us to change the sound in many different ways.