Written by Terry Doner, Updated January 22 2021.
Many people have started to stream their services and found that the sound quality is disappointing. Why? And what do we do about it?
In this article we will discuss why your in-house sound and your stream sound so different, and various approaches to improving the stream.
Fundamentally, your online audience needs a different audio mix than you provide to your FoH Audience.
Your auditorium is a complex acoustic instrument. A person sitting in the seat is hearing sound from all directions: direct from some instruments, like drums and pianos; direct from the sound system speakers; and indirect from every reflective surface in the room.
Your best audio engineer is also hearing those same sources and is crafting a sound which includes all of them. The result is an environment that envelops the in-person participant.
But what does your online audience hear? The in-house engineer is really focused on the in-person experience. So the online audience will hear the house sound MINUS the direct sound from the instruments MINUS the reflected sound from the room. It will sound unbalanced and flat.
With all of those extras missing, there is one more consequence: any error made by a musician or vocalist will be very exposed on the online stream, because the other elements that would normally mask it are gone.
The end result is a sound that nobody is proud of, and may not be very appealing to new listeners.
1. The ability to create a mix that is tailored to the online audience
For example, instruments such as drums, which produce a lot of acoustic volume, are likely very low in the FoH mix because they can be heard directly. They need to be substantially louder in the broadcast mix because that is the only way they can be heard online.
2. The ability to apply frequency equalization that is tailored to the broadcast audience
The auditorium has its own 'tone', which influences the tonal choices made by the FoH engineer. The online audience members are all participating from their own distinct listening spaces, none of which will be similar to the main auditorium. The broadcast engineer needs the ability to shape the tone to best suit those spaces.
3. Many auditoriums are mono (even if they have speakers on two sides!)
The in-person audience has the benefit of the room in which they are present; walls and ceilings reflect the sound and create a listening space that surrounds them. The online participant is only hearing sound that emanates from the device that they are using: cell phone, laptop, TV, or maybe a surround sound system. In almost all of those cases the device will be stereo. Pumping dual mono through a stereo delivery device will still sound flat. This is in part why many people feel that listening to church services online is not at all like being there.
4. Many devices that are used by our online audiences have limited bass response
In my facility, I have 4 x 18" subwoofers driven by 2000 watt amplifiers. This can create a reasonable experience for the people in the auditorium. However, this is very different from what people at home experience on their phones or laptops. In order to approximate the same kind of experience, the broadcast engineer might boost the low-frequency content a bit. And they might add some variety of bass 'saturation', a technique that increases the presence of the bass instruments in the lower mid-range frequencies.
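To illustrate what that low-end boost plus 'saturation' is doing, here is a minimal Python sketch (numpy, scipy and soundfile). The file names, crossover frequency and blend amounts are illustrative assumptions only; in practice you would use the EQ and saturation tools on your console or DAW.

```python
# Sketch: low-end emphasis plus harmonic 'saturation' for small-speaker playback.
# File names, frequencies and amounts are illustrative, not a recommended recipe.
import numpy as np
import soundfile as sf
from scipy.signal import butter, lfilter

data, rate = sf.read("broadcast_mix.wav")       # mono or stereo float samples
if data.ndim == 1:
    data = data[:, None]                        # treat mono as one column

# Isolate content below ~120 Hz with a 2nd-order low-pass.
b, a = butter(2, 120 / (rate / 2), btype="low")
lows = lfilter(b, a, data, axis=0)

# Simple boost: add some of the low band back in.
boosted = data + 0.4 * lows

# 'Saturation': soft-clip the low band to generate upper harmonics, so the bass
# stays audible on phones and laptops, then blend a little of it back in.
saturated_lows = np.tanh(4.0 * lows)
result = boosted + 0.15 * saturated_lows

# Normalize to avoid clipping and write out.
result /= max(1.0, np.max(np.abs(result)))
sf.write("broadcast_mix_bass.wav", result, rate)
```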
5. The walls have voices
A participant in the auditorium can listen to the walls, even if they are not aware of it. An online participant can only hear what a microphone picks up. To compensate for the difference, the broadcast engineer will want to add some spatial content to their mix. This is typically done via some reverberant or echo components.
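As a rough illustration of what that reverb component contributes, here is a minimal Python sketch that adds a synthetic 'room' by convolving the dry mix with a short decaying-noise impulse response. The file names and decay time are assumptions, and the reverb built into your console or DAW will do a far better job.

```python
# Sketch: add a touch of room ambience via convolution with a synthetic
# impulse response (exponentially decaying noise standing in for reflections).
import numpy as np
import soundfile as sf

dry, rate = sf.read("broadcast_mix.wav")
if dry.ndim > 1:
    dry = dry.mean(axis=1)                     # fold to mono for this simple example

decay_seconds = 1.2                            # rough 'room size', purely illustrative
n = int(decay_seconds * rate)
t = np.arange(n) / rate
ir = np.random.randn(n) * np.exp(-6.0 * t)     # decaying noise ~ diffuse reflections
ir /= np.max(np.abs(ir))

wet = np.convolve(dry, ir)[: len(dry)]         # reverberant copy of the mix
wet /= np.max(np.abs(wet))

mix = 0.85 * dry + 0.15 * wet                  # mostly dry, with a little 'wall sound'
sf.write("broadcast_mix_reverb.wav", mix, rate)
```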
6. The musicians are naked
The online audience can only hear what is heard by a microphone. Any error made, a wrong note or bad timing, is typically very apparent. It cannot be masked by the room or other ambient sound (other people singing). This is why many broadcast mix solutions also include the capability to perform pitch correction.
The action you need to take is to create an audio mix that is tailored exactly for the online audience. There are several ways you can tackle this challenge, and to choose one we need to make a series of decisions:
One console or two?
Mix at FoH or in a separate room?
One Engineer or Two?
Not all combinations make sense; the chart at the right helps narrow down the choices to those that do. (For example, one console cannot be in two rooms.)
This is the place where many people start, and a lot can be done with just one capable console. How much can be achieved depends on the capabilities of the specific console. There are a couple of techniques that can be used within one console to get the best results.
One console does not mandate one room or one person. Many consoles offer the capability to run software on a computer or tablet to enable operation of the console from a different space. Some companies even offer a control surface that can be used remotely (Allen&Heath IP8).
The audio processing is still performed by the console itself, but control is independent of the main control surface.
The primary choice with one console is whether you create the mix using an aux mix (sometimes called a mixbus) or using a matrix, which is available on some consoles. Which is best depends a lot on your specific console's capabilities.
The two videos to the right demonstrate how to set these up on an X32. Other consoles offer similar options.
Using a Matrix
Using an Aux bus Configuration
Post-Fade: This is a setting that is available on many consoles. It essentially dictates whether the feed to the aux bus is taken before or after each channel's FoH fader. If you have one engineer trying to do both the FoH and broadcast mixes, you might prefer the post-fade approach. This means that as the FoH engineer responds to dynamic changes in-house, those changes will also affect the online mix. This is not a bad choice.
Pre-Fade: However, if you can afford the people resources, it is better to have a person completely dedicated to the online mix, and in that case, give them the responsibility of making their own dynamic decisions. By using the pre-fade configuration, the FoH engineer and the broadcast engineer won't be fighting over dynamics.
Here is a decent video that discusses pre-fade versus post-fade in general if you want more background. (It is demonstrated on an X32 but the concepts are universal.)
Note: Depending on the console, the pre/post fade choice may also be available for a matrix.
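If it helps to see the pre/post-fade difference as arithmetic, here is a small sketch. The numbers are illustrative linear gains, not any particular console's implementation.

```python
# Sketch: the arithmetic behind pre-fade vs post-fade aux sends.
# Values are illustrative linear gains, not dB.

def post_fade_send(sample, channel_fader, send_level):
    # Post-fade: FoH fader moves also change the broadcast feed.
    return sample * channel_fader * send_level

def pre_fade_send(sample, channel_fader, send_level):
    # Pre-fade: the broadcast feed ignores the FoH fader entirely.
    return sample * send_level

sample = 1.0
print(post_fade_send(sample, channel_fader=0.5, send_level=0.8))  # 0.4 - follows FoH
print(pre_fade_send(sample, channel_fader=0.5, send_level=0.8))   # 0.8 - independent
```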
When an input comes in on a single channel you are typically constrained to using the same gain, EQ and dynamics for all destinations (FoH, aux, or matrix). This may not be ideal. Some advanced consoles (e.g. DiGiCo) offer the ability to split the processing on a single channel. Alternatively, many digital consoles allow you to double patch a single input to multiple channels to gain more independence. Even on an analog console you can split the input or patch a direct out to another input.
If you are restricted in the number of channels available to you, you could apply this technique selectively.
Many FoH systems are mono, but for your stream you should consider producing a stereo mix. How exactly to do this without affecting your FoH mix depends on the console. Double patching may be helpful, even required, in this case.
When you do a stereo mix, you should also monitor your mix in mono to ensure it sounds good there and that you are not causing cancellations.
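If you record your stream mix, a quick numerical check is to sum the channels and compare levels. Here is a small Python sketch (numpy/soundfile); the file name and threshold are illustrative assumptions.

```python
# Sketch: a quick mono-compatibility check. If summing L+R to mono drops the
# level far more than expected, something in the stereo mix is cancelling.
import numpy as np
import soundfile as sf

stereo, rate = sf.read("stream_mix.wav")        # shape: (samples, 2)
left, right = stereo[:, 0], stereo[:, 1]
mono = 0.5 * (left + right)

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

drop = rms_db(stereo) - rms_db(mono)
print(f"Level drop when summed to mono: {drop:.1f} dB")
if drop > 6.0:   # well beyond the ~3 dB expected for uncorrelated left/right content
    print("Warning: possible phase cancellation between left and right.")
```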
Two consoles make the most sense when you have two operators and a separate room for the livestream engineer. The key capability to make this work is the ability to split your inputs to both consoles. You may not need to do this for all inputs. For example, you may have room mics that only need to go to the livestream, or drums that don't need to be in the FoH mix at all, so those mics only need to feed the livestream console.
You can use an analog splitter to create independent feeds to two consoles. Two key things about analog splitters: phantom power and transformer quality. I recommend that you only consider devices that pass phantom power from one console and isolate the other. Also, the quality of the transformers used can greatly affect sound quality, and is reflected in the price. Here are a few choices for how to do this:
Behringer 8-Channel Split ($129)
ART 8-Channel split ($280)
32-channel split ($3290)
If you have a digital stage box (e.g. Dante) you can split your signals via software (Dante Controller).
The picture to the right shows four inputs from a stagebox being routed to two different consoles. This is a simple 4 channel example, so it is readable on this page, but the same can be done for hundreds of channels.
Some of the other digital protocols offer similar capabilities.
Using a console that is similar to your FoH console makes it easier for your staff to move between positions. A different possibility that many opt for is to use a Digital Audio Workstation (DAW). The main attraction of the DAW approach is the ability to use plug-ins.
If you have no choice but to work in the main acoustic space then you will need a good pair of closed-back headphones. Closed-back designs give you better isolation. Some examples can be found here: Best Monitor Headphones for Live Mixing.
Some people have reported using noise-cancelling headphones. Some may work better than others, but consider that the 'noise' you want cancelled is similar to the program you are trying to evaluate. NC headphones will add their own colour to the sound.
It is always a good idea to listen to your mix afterwards on several different devices: phones, tablets, laptops, desktops and home theatre systems.
Regardless of which of the console approaches you adopt, the most important thing you can do is create the capability for the person doing the live stream mix to hear what they are creating. The best way they can do this is if they are in a separate acoustic space from your FoH auditorium. (And noise reduction headphones are not a solution).
A well designed broadcast room will be quiet, have controlled reflections off the walls, offer a choice of reference speakers, a video monitor so the engineer can see what is going on in the main room, and a means to communicate with the rest of the team.
Some co-locate the broadcast position with the video switcher; others prefer to have their own room so it is quieter.
The details of this process vary greatly based on your specific installation. Below are a few examples of how this can be accomplished. If this matches your installation closely, you could copy it. Otherwise adapt or combine these techniques to suit your requirements.
This is a very common mixing platform for churches. Rather than trying to explain in text, watch this video from "Allam House" and check out that channel.
This is how I am actually doing this for my own church.
I have a mono aux bus set up for the broadcast mix. This mix is sent out via a Dante card so I can monitor it from a separate room. The M7CL has control software that can run on a computer or tablet. I use that software to create the mix I want while listening to it via the Dante feed.
The console outputs feed directly to my video mixer.
This technique would work on many newer Yamaha consoles, and on other brands as well.
Another approach is to use more than one bus and create several aux mixes, say a vocal aux mix and an instrumental aux mix, and feed those into your streaming system. If you are using OBS or vMix, for example, you can then use them as mini DAWs to further refine your sound.
I don't need to say much - A&H wrote this excellent guide.
A DAW is a Digital Audio Workstation; a computer running software for the purpose of processing audio. There are many programs to choose from. I did a three-way comparison for my own facility and you can read about the details here. If you are using vMix for video, you can also do quite a bit with audio as well.
The basic idea is that you split your audio very early in the signal path and route the splits independently to your Front of House console and your DAW. There are several ways of doing this:
Use an audio splitter and run an analog snake to a digital audio interface for your computer.
Bring your audio into your FoH console and then send it out via "Direct Outs" provided by the console. This would also likely require some sort of digital audio interface for your computer.
If you have a digital snake or digital sources that support Dante, route the Dante signal to your DAW using Dante Controller. Your computer would need to run "Dante Virtual Soundcard" to be able to route the audio into your DAW. (Read more in my Dante 101 article.)
Leverage some other interface available to you to get the audio to your DAW; for example, your console may support a USB interface. This can work too (although USB cable length is limited).
On your computer you run software such as "Reaper". You are able to create a custom mix of all of the input channels, as well as apply channel or group processing to add reverb, saturation or pitch correction.
The resulting mixdown is then routed to your stream encoder or video mixer.
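As a rough illustration of what that mixdown amounts to, here is a small Python sketch that applies per-channel gain and pan and sums everything to a stereo master. The stem file names, gains and pan positions are made up for the example; a real DAW does all of this (and much more) for you.

```python
# Sketch: the core of a DAW mixdown - per-channel gain and pan, summed to stereo.
# Stems are assumed to be mono files of equal length and sample rate; all values
# below are purely illustrative.
import numpy as np
import soundfile as sf

channels = {                     # input file -> (gain, pan); pan: -1 = left, +1 = right
    "vocal.wav":  (1.0,  0.0),
    "guitar.wav": (0.7, -0.4),
    "keys.wav":   (0.7,  0.4),
    "kick.wav":   (0.9,  0.0),   # drums pushed up, as discussed in point 1 above
}

master_l = master_r = None
for path, (gain, pan) in channels.items():
    audio, rate = sf.read(path)
    l = audio * gain * np.sqrt((1 - pan) / 2)   # constant-power pan law
    r = audio * gain * np.sqrt((1 + pan) / 2)
    master_l = l if master_l is None else master_l + l
    master_r = r if master_r is None else master_r + r

mix = np.stack([master_l, master_r], axis=1)
mix /= max(1.0, np.max(np.abs(mix)))            # keep the master out of clipping
sf.write("stream_mixdown.wav", mix, rate)
```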
Broadcast Audio Mixing : https://www.youtube.com/playlist?list=PLR7hxbYNsHgzllCXcVYMYwUDaZaXzGD8t. The Producer of this video series also has their own guide to streaming audio which is a good read if you want a different perspective on the topic: https://attawayaudio.mykajabi.com/churchaudiostreamguide
Production Advice: https://productionadvice.co.uk/
How Loud does YouTube think you are? https://productionadvice.co.uk/stats-for-nerds/
The Attaway Audio channel on YouTube has some great content.
From FoH to Streaming. Beginning streaming audio concepts, with Michael Bangs. One key thing Michael talks about that I didn't mention is to also listen to your mix at different levels.