Assignment 3: Project Output
A Little Walkthrough
For this project, I decided to work on a short audiobook-style excerpt from The Little Prince. My main goal was to balance narration with background music (BGM) so that it feels whimsical and alive. This walkthrough blog covers everything I did: from recording to mixing to final touches.
Originally, my plan was completely different. I wanted to create a “Get Ready With Me” (GRWM) style audio piece called Mirror Check, where I would layer foley sounds from my skincare routine with guitar background music. But when my CUVAVE pedal/audio interface failed to work, I knew I needed a new plan.
Peep the artwork! (Featuring the boa constrictor swallowing an elephant)
˗ˏˋ ★ ˎˊ˗
Short Pre-Production Section
Narration
Own voice
Foley
Snake hiss & swallow (mouth SFX)
Wind whoosh (mouth SFX)
Book flipping (paperback, exaggerated turns)
Pen scribbles (ballpoint pen on paperback)
Airplane sound effect
Ambient layers
Room tone
White noise
Jungle ambience
Music
Gabriel Fauré – Pavane, Op. 50
Claude Debussy – Golliwogg’s Cakewalk
Hardware
MacBook Air M2
Onikuma Headphones
Maono AU-PM471TS Condenser Mic
Software Used
Audacity
Waveform Tracktion Free
Project Goals
Adapt Chapter 1 of The Little Prince into a short audiobook excerpt
Record narration in a DIY “blanket booth” to minimize room noise
Add simple foley (snake hiss, book flips, pen scribbles, wind whoosh) to bring scenes to life
Layer narration with classical tracks (Fauré Pavane and Debussy Golliwogg’s Cakewalk)
Balance narration as the clear foreground, with music and foley as supporting layers
Keep the final output at around 4 minutes
Problems I May Encounter
Noise issues: my environment (chickens, karaoke, tricycles) would likely bleed into recordings
My mic tends to exaggerate “s” sounds, which I knew would need fixing later
Making narration stay clear without making the background music disappear
Gear limitations: my setup is decent for basic recording but not tailored for audiobook-style production
Project Timeline
Signal Flow
My project uses mainly one input source: my mic for vocal/foley.
All audio will be mixed and edited inside Audacity and Waveform Tracktion. From there, the final sound is monitored using my headphones.
˗ˏˋ ★ ˎˊ˗
The Process
Recording of the Audiobook
I chose Chapter 1 because it's my favorite -- it contains the iconic boa constrictor illustration. I recorded the excerpt straight into Audacity with my Maono condenser mic. To avoid picking up too much room noise, I ended up building a DIY booth -- basically covering the mic with a blanket fort inside my room.
Weirdly, it worked. The fabric dampened the echo and outside noise (chickens, karaoke, motors), and the mic stand gave me enough stability to stay consistent. It was one of those “student hacks” where you realize that sometimes you just have to use what you have.
I first recorded a trial voiceover just to test pacing and identify which foley I needed to record. After adjusting my delivery, I recorded the final narration across three days.
Trial Voiceover, recorded in Audacity
My Effects Chain for Final Narration
Noise Reduction – grabbed a noise profile from a silent portion, then reduced the background hum.
Noise Gate – cut the remaining noise between lines.
Amplify – raised the overall level (mostly automatic for this).
Compressor – evened out the loud and soft parts so the track felt more stable.
Final Voiceover with Effects Chain
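Just to make the chain above concrete: here's a rough Python sketch of the math a noise gate and a compressor perform on raw samples. This is not what Audacity actually runs -- the thresholds and ratio below are made-up numbers purely for illustration.

```python
import math

def noise_gate(samples, threshold=0.02):
    """Mute any sample whose absolute level falls below the threshold."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

def compressor(samples, threshold=0.5, ratio=4.0):
    """Shrink the part of each sample above the threshold by the ratio."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(math.copysign(mag, s))
    return out

print(noise_gate([0.01, -0.015, 0.012]))               # → [0.0, 0.0, 0.0]
print([round(x, 3) for x in compressor([0.9, -0.8])])  # → [0.6, -0.575]
```

The gate explains why quiet hiss between lines disappears, and the compressor explains why the loud peaks get pulled closer to the soft parts.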
my DIY blanket recording booth
Recording of the Foley
Even though I abandoned the GRWM concept, I still wanted to keep foley as part of the project. For The Little Prince, the text itself gave me cues for sounds I could add:
Snake hiss & swallow - for the boa constrictor description. Mouth sound effect.
Wind whoosh - for the pilot’s flying passages. I recorded gentle whooshing as a mouth sound effect.
Book flipping - used an actual paperback. I exaggerated the page turns so they’d register clearly in the mic.
Drawing (pen scratching) - pressed a ballpoint pen against rough paper and recorded the scribbles. This was subtle but added texture.
Recorded SFX
First Draft of the Project
After those recordings, this was how the project first looked inside Audacity. I searched for ambient beds like room tone, white noise, and a long wind effect. My idea was to create a sound bed so the narration didn’t feel empty.
At this point, I was still just testing what the project could sound like -- with very rough, messy placement of narration, foley, and background. The only foley recorded so far was the pen scribbles.
This draft was important because it let me see how long my excerpt would run. It also helped me plan where to insert sound effects. It reminded me that narration alone can feel too bare. And lastly, it helped quiet the anxiety telling me I was going to fail this class because I had changed project plans.
Perhaps the best way to describe this stage is: sketching before painting.
Choosing the Music & Some Sound Effects
The next thing to do, after realizing I did want music for my audiobook, was to search for the pieces. I asked some classical-music-enthusiast friends for recommendations, and most suggested something from Saint-Saëns and Debussy. Originally, I thought of doing “cartoon-like” music, but after hours of searching and listening, I went with:
Gabriel Fauré – Pavane, Op. 50 for the first half, and
Claude Debussy – Golliwogg’s Cakewalk for the second half, which had a more “mischievous” feel to it.
There are also sound effects I couldn’t record myself: the jungle sounds and the airplane sound effect.
Mixing
Mixing was the part where things finally got a little manageable -- for the most part. My project was still fairly simple compared to a full production, but it had narration, foley, music, and ambient layers all competing for space. The first thing I focused on was making my voice clean and clear.
Once everything was laid out, I worked on how the other elements sat around the vocal. I used EQ to carve space so narration always stayed on top, then added fades so sound effects didn’t just cut in abruptly. Auto-ducking the music helped push it down whenever I was speaking, though sometimes I still had to adjust levels manually. At this stage, I learned that mixing isn’t about making everything equally loud, but deciding what should be foreground (narration) and what should be background (music, foley, ambience). Every adjustment was about guiding the listener’s focus without sacrificing vocal tone.
At first, I thought I could just lower the music volume to make my voice stand out. But when I played the mix on speakers, the background almost vanished and it sounded flat. We don't want that. What worked better was balancing with both volume automation and EQ: the music kept its body, but I scooped out a bit of the midrange so it didn’t clash with my vocal.
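A tiny sketch of that “scoop out the midrange” idea: a bell-shaped cut centered on the vocal’s midrange that fades out over an octave on either side. The center frequency, width, and cut depth below are illustrative numbers, not the actual EQ settings I used.

```python
import math

def mid_scoop_gain(freq, center=1000.0, width_oct=1.0, cut_db=-4.0):
    """Bell-shaped cut: full cut at the center frequency, fading to
    no cut at +/- width_oct octaves away. Returns gain in dB."""
    octaves = abs(math.log2(freq / center))
    if octaves >= width_oct:
        return 0.0
    return cut_db * (1 - octaves / width_oct)

for f in (250, 500, 1000, 2000, 4000):
    # 1000 Hz gets the full -4 dB cut; 500 and 2000 Hz sit at the
    # edge of the bell and are left untouched.
    print(f, "Hz:", round(mid_scoop_gain(f), 2), "dB")
```

The point of the shape: the music keeps its lows and highs (its “body”), and only the band that collides with the voice gets pulled down.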
Cleaned up the narration: trimmed out unnecessary sighs, loud breaths, and little mouth noises that slipped through, and added short fades in/out for smooth transitions.
I also made sure the music and ambience didn’t just stop suddenly — I gave them a fade out so the project would close gently instead of cutting off.
For this part, make sure your music track is above your narration track. And then my favorite part... Auto Duck.
If you feel like your narration is getting “drowned out” by the background music, add an EQ to both the narration and the background music. Mostly this was experimentation on my part.
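If you’re curious what Auto Duck is doing conceptually, here’s a crude pure-Python version: follow the narration’s loudness envelope, and drop the music’s gain whenever the narration is active. The threshold and duck amount are arbitrary numbers, and a real ducker adds smooth attack/release ramps instead of this hard switch.

```python
def envelope(samples, window=4):
    """Crude envelope follower: moving average of absolute level."""
    env = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        env.append(sum(abs(s) for s in chunk) / len(chunk))
    return env

def auto_duck(music, narration, threshold=0.1, duck_gain=0.3):
    """Lower the music wherever the narration envelope exceeds the threshold."""
    env = envelope(narration)
    return [m * (duck_gain if e > threshold else 1.0)
            for m, e in zip(music, env)]

music = [0.5] * 6
narration = [0.0, 0.0, 0.6, 0.6, 0.0, 0.0]
print([round(x, 2) for x in auto_duck(music, narration)])
# → [0.5, 0.5, 0.15, 0.15, 0.15, 0.15]
```

Notice the music stays ducked a little after the speech ends -- the moving-average envelope acts like a built-in “hold,” which is roughly why real ducking sounds smoother than a raw volume cut.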
How it Should Sound at this Stage
Mastering (From Friends’ Feedback)
I sent a rough version to a couple of friends for feedback. They liked the pacing and didn’t want me to mess with the tempo too much (said it didn't sound like me), but almost everyone said the same thing: the “s” sounds were painful. That’s when I went down the rabbit hole of de-essing. Audacity didn’t really give me the control I needed, so I had to export my vocal track as WAV and move to Waveform (Tracktion).
The method I followed was basically a DIY de-essing trick. I duplicated my vocal track and turned the duplicate into an “S-only” track using AUGraphicEQ, muting everything except around 6 kHz. I had to use both GraphicEQ and BandEQ, because either one alone still left leftover audio. Soloing the duplicate sounded like a hissy snake, which meant I had isolated the sibilance. After that, I added Phase Invert to the duplicate, which made the hiss cancel out against the original track. The balance was the hardest part: too high and it distorted, too low and it didn’t fix anything. Took me hours, not gonna lie.
Finally, once it sounded less ear-slicing, I didn’t want to leave two tracks running with phase tricks, so I “bounced” them into one. In Waveform, I learned it’s done by right-clicking and selecting Render, which basically prints the effect into a new track. This gave me a single clean voiceover file with the S’s under control. It wasn’t perfect compared to a real de-esser plugin, but I learned a lot about how sibilance works and how to listen critically. Next time I’ll try not to push the “s” sounds so hard while recording. Or maybe the initial compressor and effects I used brought out the sibilance in the first place. (Unfortunately, I can’t reverse it with Audacity’s saving system... sigh.)
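The phase-invert trick is easier to see with toy signals: if the duplicate really contains only the sibilance band, flipping its polarity and summing it with the original removes that band from the mix. In this sketch, a 220 Hz sine stands in for the voice and a 6 kHz sine for the “s” energy -- an idealized case, since a real EQ never isolates the band perfectly.

```python
import math

SR = 44100  # sample rate in Hz

def tone(freq, n):
    """n samples of a unit sine wave at freq Hz."""
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

n = 1024
voice = tone(220, n)       # stand-in for the vocal body
sibilance = tone(6000, n)  # stand-in for the harsh "s" energy
original = [v + s for v, s in zip(voice, sibilance)]

# The "S-only" duplicate: assume the EQ isolated the 6 kHz band
# perfectly, then phase-invert it (flip the polarity).
inverted = [-s for s in sibilance]

# Summing the two tracks cancels the 6 kHz component, leaving the voice.
mix = [o + i for o, i in zip(original, inverted)]
residual = max(abs(m - v) for m, v in zip(mix, voice))
print(residual < 1e-9)  # → True: the sibilance is gone
```

It also shows why the balance was so finicky: if the duplicate’s level is even slightly off, the subtraction leaves some hiss behind (too low) or starts carving into the voice itself (too high).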
Final Screenshot of the Project
Welp... It has been a journey.
˗ˏˋ ★ ˎˊ˗
The Product
Final Project