“Can you be clyde, nevermind, another cliche i can write”
– 2nd verse of "2 months in"
Growing up, I’ve always been a hopeless romantic. From surprising people with a bouquet of flowers to writing gooey, cringey poems in the locker rooms, I’m that girl. I’m the one my friends run to when they need help with being in love. That being said, I wrote a song about it.
“2 months in” is a song about the honeymoon stage, where lovers feel sparks and fireworks at the beginning of a relationship. I wrote it in May 2025, when I first got together with someone I had seen as a best friend for almost 2 years. It is about the big things and small gestures of being together: planning your first trip, the future you have with each other, apartment hunting, the sweet scent of cologne, and the things that just make sense for the two of you. I picked this song out of my thousand archived songs because it captures a fresh feeling that I want to share with others.
ꨄ︎ WORKING WITH WHAT I HAVE AS A BEGINNER
ꨄ︎ CHANGES I DID AFTER PRE-PRODUCTION
ꨄ︎ HOW DID I DO IT?
ꨄ︎ RECORDING (INSTRUMENTS, VOCALS)
ꨄ︎ MIXING
ꨄ︎ MASTERING
ꨄ︎ FINAL OUTPUT
I didn’t get to buy a new mic for this project, but I did get an FX pedal to use as an interface for direct input from my guitars. My setup: a Maono AU-PM471TS USB condenser mic for vocal recording, a pair of wired unbranded earphones for playback, a TANK-G effects pedal as my instrument interface, and my Asus Vivobook laptop. I stuck with my RJ Notre Dame bowlback guitar as my base or rhythm instrument and my electric guitar for tabs and additional effects, using one Fender TRS cable to connect the guitar to the interface.
After using Audiotool and learning more about its features, I decided to use Audacity as my digital audio workstation (DAW) for the recording and mixing of this project. Audiotool is a good browser-based DAW for mixing effect plug-ins, drum samples, and online virtual instruments, which makes it efficient for producers who don’t have access to physical instruments. Useful as that is, Audiotool isn’t a good workstation for recording tracks: the flow for adding new recording samples and recording vocal and instrument tracks is complicated. I struggled to connect my USB condenser microphone to the online DAW, and the playback was delayed no matter how many times I adjusted it. I looked for solutions, but it seems other people have the same problem and can’t find one either. At first, I thought of recording my vocals and instrument tracks in Audacity and then editing and mixing in Audiotool to keep its free samples and cool plug-in effects. However, I decided to fully switch to Audacity: handling the effects and plug-ins in Audiotool got more complicated, importing new tracks into the project file was a hassle, and the timeline became a mess.
Audacity is a popular DAW for recording, mixing, and editing audio. I’ve known about it for a while and have used it in past years when I needed to enhance audio tracks for school projects, mostly with noise reduction, the best-known editing tool for people who aren’t familiar with music mixing (like me). Only last year did I find out how to transform a compressed, bad-sounding audio track into something bearable to listen to. Like I said, I’ve had Audacity for a while now but never used it to its full potential, so after experimenting with the tools and learning more about the effects used for post-production, I started to record.
The first thing I did after opening a new file in Audacity was generate a rhythm track to use as a metronome. Setting it to 123 bpm with a ¼ time signature helped me stay on beat while recording instrument and vocal tracks. Although I’ve had my share of time singing with a metronome, I still struggle to sing on beat even with the bpm adjusted correctly; but the more my ears hear it every time I practice, the more I get used to it. I made the mistake of recording some tracks in stereo, then switched to single mono tracks, since I was only using a USB mic (recording in stereo just duplicates the mono signal into two identical channels, which is no use at all). Mono tracks also cut the file size, which helps since I’m saving on storage. For the recording spot, I moved to the living room instead of my sister's spot in our room, for fewer sound reflections and a still, quiet environment.
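The mono-versus-stereo point can be sketched in a few lines of Python. This is only an illustration of the idea using a made-up 4-sample take, not anything Audacity does internally:

```python
import numpy as np

# A take captured with a single USB mic but recorded in "stereo":
# both channels end up carrying exactly the same samples.
mono_take = np.array([0.1, -0.2, 0.3, 0.05], dtype=np.float32)
stereo_track = np.stack([mono_take, mono_take])  # shape (2, n): identical L and R

# The channels are duplicates, so collapsing back to mono loses nothing...
mono_again = stereo_track.mean(axis=0)

# ...and the file only has to store half as many samples.
print(stereo_track.size, "->", mono_again.size)  # 8 -> 4
```

Since both channels are bit-for-bit identical, the stereo file doubles the storage for zero audible difference.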
SWITCHING TO TANK-G AS AN AUDIO INTERFACE
I originally bought the TANK-G effects pedal to enhance my sound when performing on stage, but for this project it served as an interface to record directly from my guitars into the DAW. The TANK-G has one input port for the guitar and one output port for an amplifier or playback device. It also has an earphone jack, which I used to hear the blend of effects while playing. The main port I used for recording was the USB jack, which connects to my laptop and carries the guitar signal with the effects already applied. The pedal also has a companion app that connects over Bluetooth, letting me adjust the knobs virtually, which made it easier to blend (or ‘timpla’, in guitar-effects slang) the sound I wanted against the other tracks. After testing and troubleshooting, I finally started recording my instrument tracks, which took me a week. The TANK-G was second-hand, so we got it for cheaper than the original price.
MY OTHER INSTRUMENTS..
My acoustic guitar (the RJ Notre Dame bowlback) was the first instrument I recorded, since it served as the rhythm basis for producing the other tracks; in simpler words, every track that followed was recorded against this acoustic track to stay on the right parts. As I recorded, I adjusted the pedal's knobs to my liking: the acoustic and electric guitars were set to a clean sound, while the tabs (electric guitar) were set to high delay and reverb. I don't have much to say about recording the instruments aside from the struggle of timing and deciding which sound is better. And although many people say an acoustic guitar and an electric guitar sound the same, the two differ in richness and sharpness.
How my recordings travel (signal flow)
Recording my vocal tracks went much like recording my instrument tracks, except that instead of plugging the mic into an interface, I plugged it directly into my laptop's USB port and let the USB mic handle the A/D conversion. My Maono USB microphone was positioned in front of my laptop on its adjustable mic stand to keep it stable while recording. I set the gain knob to about 50% for a balanced level going into the microphone. While recording, my wired earphones were connected directly to the laptop and not to the microphone, because otherwise the DAW picked up the playback of the previously recorded tracks through the microphone. Even after checking and double-checking my signal flow and monitoring, and doing my research and troubleshooting, I was left with the same outcome; I also tried adjusting the built-in sound mixer and my default recording/playback devices, but to no avail. The only solution left was to plug the earphones into the laptop and record my vocals with input monitoring on so I could hear myself. My mouth was positioned about 5 inches from the mic to reduce bass and avoid distortion.
Enough about the struggle: recording the vocals was a blast, as I got to be all giggly and have fun while recording the whole song. I also improvised monologues in between. My favorite parts to record were the adlibs, like making a kiss sound without actually kissing the mic. I recorded a total of 5 tracks (1 main vocal track, 1 left-panned, 1 right-panned, 1 adlibs track, 1 speech track). The left- and right-panned copies of the main vocals produce spatial depth and create volume instead of a thin audio track, while the main vocal track stays panned at the center to keep the focus. I did the same with the other tracks, but instead of re-recording, I duplicated them (such as the adlib track) and panned one copy right and the other left, both set to 40%.
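Here is a small Python sketch of the duplicate-and-pan trick. The linear pan law below is my own assumption for illustration, not Audacity's exact pan curve:

```python
import numpy as np

def pan(mono, p):
    # Simple linear pan law (an assumption for this sketch):
    # p = -1.0 is hard left, 0.0 is center, +1.0 is hard right.
    gain_l = 1.0 - max(p, 0.0)  # panning right turns the left channel down
    gain_r = 1.0 + min(p, 0.0)  # panning left turns the right channel down
    return np.stack([mono * gain_l, mono * gain_r])  # rows: (left, right)

# Duplicate the ad-lib take and send one copy 40% left, the other 40% right.
adlib = np.array([0.2, 0.4, -0.3], dtype=np.float32)
left_copy = pan(adlib, -0.4)   # left channel full, right channel at 60%
right_copy = pan(adlib, +0.4)  # right channel full, left channel at 60%
```

Summing the two copies into the mix makes the part sit wider and thicker than a single centered track.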
First mix of "2 months in", redone a while later for a clearer representation of the mix
A screenshot of all of my tracks
When I posted my progress in the optional activity feedback DF, someone suggested that I make the dialogue parts, “So how long have you been dating?” for example, sound like a telephone line. I had already planned to do this, but hearing it from another person made me excited to edit it better. To do it, I added a Filter Curve EQ (from the EQ and Filters tab under Effects) and adjusted the frequencies for a distorted, narrow 'telephone' sound, boosting the midrange of the track and rolling off the rest.
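For anyone curious what that narrow 'telephone' band does to the signal, here's a rough Python sketch using simple one-pole filters. It's a crude stand-in for the Filter Curve EQ, and the band edges (about 300-3400 Hz, the classic telephone band) are my assumption, not the exact curve I drew:

```python
import numpy as np

def one_pole_lowpass(x, cutoff_hz, sr):
    # Basic one-pole low-pass filter: attenuates content above the cutoff.
    a = np.exp(-2 * np.pi * cutoff_hz / sr)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = (1 - a) * x[i] + a * y[i - 1]
    return y

def telephone_eq(x, sr):
    # Crude telephone band-pass: cut the lows (high-pass = signal minus
    # its low-passed copy), then cut the highs with a low-pass.
    lows_removed = x - one_pole_lowpass(x, 300, sr)
    return one_pole_lowpass(lows_removed, 3400, sr)

sr = 8000
t = np.arange(sr) / sr
rumble = np.sin(2 * np.pi * 60 * t)    # low-end content the EQ should cut
speech = np.sin(2 * np.pi * 1000 * t)  # midrange content the EQ should keep
out = telephone_eq(rumble + speech, sr)
```

After the EQ, the 60 Hz rumble is heavily attenuated while the 1 kHz "voice" mostly survives, which is exactly the thin, boxy quality we hear on a phone line.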
The effects I used after I recorded the instruments were:
ꨄ︎ Limiter
ꨄ︎ High-pass
ꨄ︎ TDR Nova (Plug-in)
ꨄ︎ Filter Curve
ꨄ︎ Loudness Normalization
ꨄ︎ Reverb
ꨄ︎ Noise reduction
This was a matter of playing with effects and testing what sounded right with the other tracks, whether a track came out muddy or too sharp, to find the right tone so they complemented one another. I applied the limiter as the last step to keep the volume balanced throughout the track; it also helped maintain the compression and loudness peaks of the track without distorting it. On the other hand, the first step was always cleaning the audio with noise reduction. I only used a strength of around 4 so the track kept the original quality I wanted when recording, since I already had a noise gate running during recording. After noise reduction, it all depends on what the track lacks. For example, if a track is too harsh, I add EQ, including the filter curve and a low-pass filter, to reduce high frequencies and soften it. For the 2nd and last chorus of the acoustic guitar, I recorded with my Maono mic instead of plugging directly into the interface, to get a fuller sound (barre chords couldn't be heard when plugged directly into the interface; the sound just cut out no matter how much I adjusted it).
Why install another Equalizer if I already have a Filter Curve EQ?
Although the filter curve EQ built into Audacity was doing its job balancing frequencies, TDR Nova made it easier to balance frequencies and compression through dynamic adjustments. TDR Nova has multiple dynamic EQ bands, while Audacity's built-in EQ only let me adjust the curve as a limited, static EQ.
Filter Curve EQ to balance frequencies
High-pass filter to adjust crispiness and reduce low frequencies (focusing on high instruments)
TDR NOVA EQ to process high-pass, low-pass filters, and compression
Reverb effect adjusted on the tabs for spatial depth and a softer sound
Limiter for reducing input levels and preventing loudness peaks
Loudness Normalization for overall loudness
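As a side note, the "prevent loudness peaks" job of a limiter boils down to a ceiling that samples can't cross. Below is a minimal hard-clamp sketch in Python; real limiters, including Audacity's soft limit, work more gently, with look-ahead and gain smoothing:

```python
import numpy as np

def hard_limit(x, threshold_db=-10.0):
    # Minimal hard limiter: clamp every sample to the threshold.
    # (A real soft limiter reduces gain smoothly instead of clipping.)
    ceiling = 10 ** (threshold_db / 20)  # -10 dBFS -> ~0.316 in linear scale
    return np.clip(x, -ceiling, ceiling)

mix = np.array([0.05, 0.5, -0.8, 0.2])
limited = hard_limit(mix)  # the 0.5 and -0.8 peaks get flattened to about +/-0.316
```

Quiet samples pass through untouched; only the peaks above the threshold get pulled down.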
TUNING & MIXING MY VOCALS
In the discussion forum entry where I shared my progress, my main problem was being off-key. It was such a big deal for me that I almost wanted to submit just the instrument tracks, since they already met the given requirements. I tried installing different free alternatives to Melodyne, but Audacity kept crashing every time I applied the effect to a vocal track. I finally budged and tried GSnap, a free pitch corrector that can apply anything from minimal to extreme tuning to off-key vocal tracks. All I had to do was set the pitch corrector to the right key and adjust the preset and knobs to my liking.
Adding reverb served as the 'final touch' when finalizing the vocal tracks, since the wetness and dryness of a track helps it blend with the rest of the mix. For my main vocals, I applied a normal reverb with minimal wet gain; my backup vocals were set to zero wet gain and -6 dry gain for a more echoey, softer mix. When vocal tracks overlap in the mix, I lower the volume of the secondary track to give space and focus to the main center track, by adding a limiter on a soft-limit preset with the limit at -10 dB. Editing the vocals was easier than the instrument tracks: I only used loudness normalization to raise the overall loudness, a limiter to reduce peaks and keep levels from exceeding the threshold, the filter curve EQ to shape frequencies (compressed, bright, or distorted, specifically for the monologues), and reverb.
GSnap for minimal autotuning and pitch correction of off-key notes
Filter Curve EQ to make that 'telephone effect'
Limiter for reducing input levels and preventing loudness peaks
The last step of making the demo is mastering, where I adjust the loudness of the overall track. After exporting the mix, I imported it into a new Audacity file and finally started (or rather, finished) mastering. I used compression to enhance and brighten the energy of the track; compression and the limiter helped balance the overall loudness and the levels of the sound signals. I also used TDR Nova to reduce harsh 's' sounds and sharpness. Although I had already applied noise reduction to tame the white noise in the background, some air noise can still be heard in the final track because of the effects distortion from my pedal during recording; I liked it that way and kept the noise as a build-up into the choruses. The last main thing I did for mastering was add a filter curve to reduce the harshness of the sound. I separated the curve into four frequency points: the first at 300-400 Hz to reduce a muddy mix, the second a cut at 1,500 Hz to remove stuffiness, the third at 4 kHz to reduce nasal harshness, and the fourth at 6 kHz to remove hissing and harsh 's' sounds.
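The loudness-adjustment part of mastering can be pictured with a simple RMS-based normalization. Note this is a simplification: Audacity's Loudness Normalization actually measures perceived loudness (LUFS), and the -14 dB target here is just an assumed example value:

```python
import numpy as np

def normalize_loudness(x, target_db=-14.0):
    # Scale the whole track so its RMS level hits the target
    # (a rough stand-in for perceived-loudness normalization).
    rms = np.sqrt(np.mean(x ** 2))
    target = 10 ** (target_db / 20)
    return x * (target / rms)

# A quiet master: a sine wave peaking at only 0.05.
quiet_master = 0.05 * np.sin(2 * np.pi * np.arange(44100) / 100)
louder = normalize_loudness(quiet_master)
```

The whole track is scaled by one constant factor, so the balance between tracks from the mixing stage is preserved; only the overall level changes.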
Filter Curve EQ
Recording the instrument tracks was the hardest part of this project, as it was my first time connecting a guitar to recording software through an interface. Although my resources and my recording station were limited, I still enjoyed the process, and writing about it made me more confident about facing the challenges of future recordings. I would say this project made me realize that sound gear is as important as having the skill to make music. With all that said, here's the final output of 2 months in (demo), made by me!