Referring back to my weekly timetable, my aims for this week are: to start experimenting with removing the music from my proposed advertisements, which will be very important as it will determine how I continue with this project; to contact possible collaborators and see if they will work with me on my project; and to start my research into the elements of music and advertisement, as this will help give me a sense of how I should tailor my progression so that the result is realised as a professional-sounding composition for advertisement.
When starting to research the elements of music and advertisement, I decided to begin with my proposed research, specifically an article by a forum user with the username 'yapa' (https://vi-control.net/community/threads/i-write-music-for-commercials-heres-what-they-pay-and-how-to-get-into-it.135703/). In this article they give an overview of many important parts of working as a composer for advertisements, such as:
In this thread, 'yapa' makes it clear that composers working in this field of the industry can expect a wide range of payments, from £2,000 to £50,000+ for those creating custom music for adverts. They state that this comes down to a wide variety of factors, some of which they list: the client (for example, whether they are part of a larger or smaller company); the terms you agree on; how, and how frequently, your work will be used; and your artistic brand. They also mention the payments for composing sonic logos (audible logos) and sound mnemonics (memorable sounds linked to the brand), which commonly range from £20,000 to £60,000, determined by the same factors mentioned above.
Another thing 'yapa' mentions is kill fees and demo fees: a kill fee being paid when your work doesn't get licensed for the advertisement, and a demo fee being a payment you receive regardless of whether you get the job. 'yapa' explains that a composer with enough experience and trust (between a publisher and client) might get demo or kill fees, though these are less common for those newer to the field of music and advertisement. They further explain that these range from a few hundred pounds to a few thousand, and that the fees are more common when working around major holidays, like Christmas.
In the same thread, 'yapa' recommends joining a publisher when composing music for adverts, and specifically one that specialises in this field, as they have plenty of connections, which will give you more opportunities for work. 'yapa' also mentions that a good way to find and collaborate with a publisher is to search for them on Google and narrow down a few of your personal favourites. Once you've done this, you should look into what projects they typically land and create some compositions in a similar style to present on first contact.
I found this very interesting, and when discussing it with my teacher Ian Rossiter, he said I should research it further. I therefore decided to look into the role of a music publisher and found this article: https://soundcharts.com/blog/how-the-music-publishing-works#what-is-music-publishing
There is a lot of information here about the relationship between the composer and the publisher, and the purpose of the publisher. To highlight and simplify the most relevant points: a music publisher's purpose is to promote and monetise musical compositions, as well as to ensure that songwriters receive their royalties, and to create opportunities for their work to be performed and reproduced. A songwriter's music involves two copyrights: the Composition copyright, covering whoever wrote the song's melody, harmony and/or lyrics; and the Sound Recording copyright (or the Master), covering whoever produced and recorded the song. To help explain: if you record a cover of a song, you will only own the Master recording copyright, not the Composition copyright. Whenever a song is created, two equal shares of royalties are created: a writer's share and a publisher's share. If you are credited as a writer, you will always own the writer's share; this ownership can't be assigned to a publisher, as it is sent directly to the songwriter by PRS (Performing Right Society). I am unsure of the validity of this, as the article is by an American writer, so the laws around this may be different in the UK; he introduces some parts as 'in the US', but the research gathered here does not feature anything explicitly said to apply only in the US.
'yapa' says that when creating custom music, you will usually be provided with a picture outlining the narrative of the advert (such as rising action, climax, falling action, etc.). (Image copied from the website.)
They also say that the typical durations tend to be 15s, 30s, 60s and 90s, and that sometimes the client will cut the timings, or it may be your responsibility to cut them.
Moving away from researching what it is like to work in music and advertisement, I found this article by Tom Ewing (https://system1group.com/blog/music-in-advertising-the-sweet-sounds-of-brand-growth), which looks at how to harness the power of music in advertising. These are the points I found that would be most useful for my project:
Ewing explains that music is a great tool for evoking emotion, and that using it to exaggerate an emotion in an advert, whether that is joy, sadness, anger, etc., will create an emotional connection, giving the advertisement more power and better brand impact. This is a very valid point and will prove particularly useful in my proposed second ad about snow leopards, as it is made to make the viewer understand the sadness and severity of the situation, so evoking melancholy through the use of minor chords and other means will be important in this composition.
In this article, Ewing also suggests you 'adhere to your brand's distinctive style', as this will help make your ad stand out. Not following competitor standards and industry conventions helps highlight your brand's uniqueness, leaving a lasting impression on the viewer. This idea will be very useful for both of my ads, but particularly my proposed ad 1, as the game advertised has a very distinctive style, so further emphasising this novelty through music will prove useful.
Ewing states that rather than focusing on a specific audience, you should aim for wide market penetration; resonating with a broad audience will contribute to the recognition of your brand and its long-term growth and success. I feel this will be the biggest challenge; however, I think I could demonstrate it best through my proposed ad 2, as sadness is a very common emotion to evoke amongst all audiences. For my proposed ad 1, though, not everyone will find the displayed game exciting and thrilling, so creating a piece that communicates to a larger audience will be more difficult.
For my primary research, I wanted to interview one of my teachers, Matt Wilkey, who understands aspects of music and advertisement. I chose an interview as it would allow me to ask any specific questions I have about music and advertisement, and would help me significantly going forward with this project.
This was my opening email to Matt, the purpose of which was to try and set up an interview this week. I also asked if he was comfortable being recorded for the interview, to ensure I didn't make him feel pressured when the date came around.
This was his response, giving me a date on which I could do the interview.
In my response, I said I was unable to do the interview on his proposed date, so I suggested another possible date in the following week.
This date worked for Matt, as visible in the screenshot, and we arranged an interview for Tuesday at 3:15.
This is the ripped version of my ad 1. We can see that the video quality has decreased due to it being ripped, but it is still clear and visible, so I don't consider this a problem.
For my first method of removing the music from this advertisement, I used the Stem Splitter within Logic. The Stem Splitter analyses the audio you want to split and separates it into its instrumental parts, such as vocals, guitars, drums, etc. By doing this, I managed to remove the vocals, drums and guitars; however, I wasn't able to completely remove the stems labelled 'Other' or 'Bass'. The issue with this method is that because this advertisement contains gun noises and laser noises, these get mixed up with the music. I tried to remedy this by further splitting the 'Other' and 'Bass' stems, but it removed too much content from the ad, so I decided to take a different approach.
This was the result of using the Stem Splitter on the ad's audio (the audio heard is the track labelled "bounce thing", shown in the image above).
For my next method of removing the music from the advertisement's audio, I decided to experiment with phase cancellation. I was able to find the song heard in the background of this advert (LIKE A PUNK - Joey Valence & Brae) by simply typing the hook line into the Spotify search bar and listening to the songs that came up.
This is a simplified illustration of how intentional phase cancellation works. I should state that it is not exactly accurate to how phase cancellation works in practice, but it is a good simplification that helps in understanding the concept.
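To illustrate the concept, here is a minimal Python/numpy sketch (an idealised example, not how Logic implements it: it assumes the two copies are perfectly aligned and at identical levels, which a real ad never gives you):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr                   # one second of samples

music = np.sin(2 * np.pi * 440 * t)      # stand-in for the background song
sfx = 0.3 * np.sin(2 * np.pi * 220 * t)  # stand-in for the ad's other audio
ad = music + sfx                         # the ad's full audio as downloaded

cancelled = ad + (-music)                # add the inverted (phase-flipped) song

print(np.max(np.abs(cancelled - sfx)))   # ~0.0: the song cancels, the rest survives
```

In practice, the song under the ad is an edit at a different level, which is why lining up and matching the waveforms (described below) is the hard part.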
After finding the song, I ripped the audio off YouTube using https://tuberipper.com/33/ (which I have used for all my audio and video ripping), which allowed me to download the sound and put it into Logic. As mentioned in my proposal, there are issues with this method of obtaining advertisements, specifically a drop in audio and video quality; however, this was the best way to obtain these videos and be able to edit them.
After putting this into Logic, I inverted the sound wave by double-clicking the audio waveform, going to the File section, clicking the Functions drop-down and selecting the Invert option.
Then, as can be seen in the image on the right, I lined up a section of the song (as the song underneath the ad was an edit) as best I could, using the fine detail of the waveform to get an accurate alignment.
This is a section of the ad using the Phase Cancellation method to remove the background music.
After assessing these methods and how well they worked at removing the music from the ad, I decided the following:
For the stem splitting method, I found that since the gun noises and laser noises got mixed in with the 'Other' and 'Bass' stems, removing the music this way wouldn't work unless I removed all the audio from the ad, so I decided not to go with this method, as removing the music would mean removing the content of the ad itself.
As for the phase cancellation method, this worked really well at reducing the volume of the song, and if I spent a large amount of time focusing purely on it, I believe I could remove the song almost entirely. But that is the issue: I would have to spend a lot of time trying to remove the song from the ad using this method. This has led me to decide not to use it, as it would consume too much time and undermine my time management, causing me to fall behind.
This is my opening email to my session drummer, James. In this email I told him the week in which I would like to record, so he knows in advance, and said that when I find out exact dates I will email him, again well in advance.
This is James' response. In this email he agrees to work with me; however, due to having a busy schedule himself, he can't say for sure which days he is unavailable.
In my response to his email, I expressed my excitement to work with him, asked him to keep me updated on which days he is busy, and said that I aim to have an idea of a date set by either this week or next.
This was the initial email to Marcy, asking if she would like to be the sound engineer for my project.
This was her response, in which she says she would like to engineer for me. She also mentions that her timetable may change in the coming weeks, so notice is very important.
I replied saying I would try to provide as much notice as I could, as I understand she has a busy schedule.
This is my opening email to Harry, asking if he would like to collaborate and consider mastering my work, and whether he has any questions regarding the mastering. I wasn't expecting a response from him this week, as can be seen from me emailing him rather late on the Friday, so I await a response in the following week.
yapa (no date) I write music for commercials. here’s what they pay and how to get into it... | VI-control. Available at: https://vi-control.net/community/threads/i-write-music-for-commercials-heres-what-they-pay-and-how-to-get-into-it.135703/ (Accessed: 25 March 2025).
Pastukhov, D. (2019) Market intelligence for the music industry, Soundcharts. Available at: https://soundcharts.com/blog/how-the-music-publishing-works (Accessed: 25 March 2025).
Ewing, T. (2024) Music in advertising: The sound of brand growth, System1 Group. Available at: https://system1group.com/blog/music-in-advertising-the-sweet-sounds-of-brand-growth (Accessed: 25 March 2025).
Joey Valence and Brae - Topic (2024) LIKE A PUNK, YouTube. Available at: https://www.youtube.com/watch?v=D_pafOY7aJI (Accessed: 25 March 2025).
Original advertisement -
IGN (2025) Borderlands 4 - Official Release Date Gameplay Trailer | State of Play 2025, YouTube. Available at: https://www.youtube.com/watch?v=oJS4Rjqs7As (Accessed: 25 March 2025).
For this week, my aims are to create a structure for the song for my first advertisement (the Borderlands 4 ad), outlining where each structural part (i.e. hooks, choruses, build-ups, etc.) will begin and end. This will also include a basic idea of what the song will sound like, creating melodies and drum patterns to help with the demo. This will be important for next week, as I aim to get into the studio and record these parts, so having a structure, hooks, melodies and a drum beat will be very helpful when it comes to recording: I will enter the studio well prepared, which will save time and stress when progressing further with this song. Furthermore, having a drum beat structured out is very important, as I will be sending James (the session drummer) this drum beat so that he can be prepared when we come to the studio. The other thing I aim to do this week is to interview one of my teachers, Matt Wilkey, about different aspects of working in music and advertisement, as this will help give me a better sense of which parts I should focus on during this project, how I should approach my compositions, and what it's like being part of this section of the industry.
As I had scheduled in week 1 of my blog, I interviewed my teacher Matt Wilkey on aspects of music and advertisement, including the professional and work elements as well as the practical elements.
This is the entire interview, across a sequence of 3 videos.
After this interview, I reviewed the answers I had received, summarising them and putting them down in the Notes app on my phone.
Looking back on this, I realised it was a lot of information, and considering the length of time I had for this project, I should highlight what I found to be the most important and relevant points and focus on these throughout the process and completion of my project.
As shown in the screenshots above, I selected these key points from my interview with Matt:
Look at the demographic and product being advertised; the age demographic can help target genres
For my first composition, for the Borderlands 4 trailer, the age rating of the game or of similar games could help me target a genre. The age rating for the last game in the series (Borderlands 3) was PEGI 18, so by assuming this game is at the same rating (an assumption based on the levels of violence displayed), we can look at the most commonly listened-to genres for an 18-year-old. https://simplebeen.com/popular-music-genres/ says that the top 3 most popular genres in 2025 are: Pop at number 1, Hip-Hop/Rap at number 2 and Rock at number 3. Because the sources I found were rather outdated, I unfortunately wasn't able to find a current list of popular genres for people around the age of 18. I decided to take an upbeat-rock direction with this advert, as I found that this fit the style of the advert when watching it with different songs over the top. I also found that adverts for similar games tended to take a rock approach, which leads into the next point Matt mentioned (shown below).
Fulfilling expectations can be helped by looking at other ads to see trends
I decided that the best way to find similar ads would be to stay in this same area of gaming and search for what people consider similar games to Borderlands 4. When searching, I found this website, https://www.pcgamesn.com/best-games-like-borderlands-pc, which listed similar games, so I decided to look at the advertisements for the top 3 games and see what kind of music was commonly used.
Listening to the music in each of these ads, I found that they weren't dissimilar from each other. Each ad features a strongly guitar- and drum-based song, and these songs also sit firmly in the rock/heavy-rock genre. This is useful to know, as I now understand that when composing I should focus on having a strong, upbeat drum part and a powerful guitar part.
Importance of music dipping during dialogue or important content
This will prove very useful in the following weeks when I approach my second composition, as that advert has dialogue over the top which gives it a lot of its context. So when it comes to my second composition, I will consider this and research and experiment with ways of doing it.
Understand the visuals and what they're trying to communicate
This will be very good to consider in the composition for my second advert as, in contrast to my first advert, the second ad I have chosen has more visual markers and implications of loneliness and sadness. Analysing how the advert achieves this will be useful for amplifying these elements when composing the music.
The importance of matching mood and tempo
As Matt explained during the interview, matching the mood of the advert helps to accentuate the emotions or message being communicated. This can be done in a few ways, one being through the use of tempo, which was also mentioned in the interview. The tempo (or speed) of a song helps convey feelings like excitement and liveliness (fast tempo), or calmness and even sadness (slow tempo), showing how tempo can accentuate the emotion or message behind an advertisement. Matt also mentioned the importance of knowing music theory, which can likewise help to convey different moods, the most common examples being happiness with major chords and sadness with minor chords.
I can review sync libraries' playlists on Spotify
When searching, I couldn't find any of these sync libraries on Spotify, so I have decided to try and find them when I am creating my next composition.
I should name my songs as descriptors
This point of Matt's will prove very useful when it comes to uploading my work to a sync agency, and for possible future work in this part of the industry. As stated in the interview, the purpose of naming your songs as descriptors is to make it more likely for your music to get listened to and considered: employers will have descriptors and ideas in mind for what they want, so naming your songs by descriptors helps possible employers find your music.
Use of TAXI
I found this very interesting when Matt brought it up. He mentioned that he thought it was expensive, so I decided to research how much the service costs, as it seems very useful and could be a good way to evaluate my work over this project. After researching (https://www.taxi.com/songwriting3), I found that an annual membership to TAXI costs around 300 US dollars, equating to roughly £224, not including the additional submission fee for each song submitted. Due to the price of this service, I decided not to use it; however, this is good information to know for the future if I decide to try and make a career out of this part of the industry.
Music for advertisements can be rather black or white: either it's accepted or it isn't
I found this point to be something I should be very aware of. At the end of this project I aim to upload my compositions to a sync library (as stated in my proposal), and with this point in mind, I should expect my response to be rather black or white. It also highlights that it may be even less likely that I can contact the agency for feedback on my compositions if they aren't accepted, so researching ways to evaluate how professional my work is may prove useful later in this project.
Picking a specialty will help to define a sound which may be desirable for employers
This is a very good point that Matt brought up, and it has made me reconsider many ideas I had for my second composition. Being able to use my speciality, guitar, will help me mainly in two ways:
One way is that it will help me create better music: since I am most comfortable with guitar, and understand music theory best in relation to guitar playing, I can create music which reflects the message of an advertisement better than if I were using an instrument I am less confident with. Also, since I am more familiar with writing music on guitar, I can come up with ideas quicker, which is ideal when working in the music and advertisement industry, as it would help me saturate the market with my work more quickly. This is very useful, as this part of the industry is very competitive.
The second way this helps me is that if I am known for having a speciality (guitar), employers who are looking for someone with that specific speciality will be more likely to use my work or contact me for work. This calls attention to the importance of practising that speciality frequently: if employers look for people with this speciality, you will want to stand out against others to give yourself a better chance of being employed.
Additionally, making music that is primarily guitar-based (particularly if it uses layers of interesting guitar ideas, similar to an orchestra) will help me and my music stand out in an area which is usually synth- or orchestra-based.
The first thing I did when structuring this song was outline which instruments I wanted to use, these being the ones shown to the side. I decided that having 2 different basses would add a lot of energy and power to the track, which is very useful for this advert as it is high energy. I also put down filler parts, by which I mean parts that aren't necessarily music but help to build tension, suspense or energy. A good example of these is 'risers', in which a sound increases in volume, and sometimes pitch, to help introduce the next part of a song.
The next thing I did was create the structure of the song. I did this using the marker feature in Logic, which allowed me to label when and where each part gets introduced (seen above the green boxes). I also created a structure overview using MIDI regions, which are the green boxes displayed in the image. These allowed me to see when each part of each track enters and helped me visualise the song's structure as a whole.
Next, I created drum parts within the drum MIDI regions. This is what the drums ended up sounding like.
In the audio heard above, the drums pause in the middle and then come back in, slowly rising in volume. I did this by automating this specific drum part: by pressing A on my keyboard, I could graph out exactly when I wanted the volume to rise, by how much, and how gradually. This is shown in the image above.
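Conceptually, volume automation is just a gain envelope multiplied against the audio. A rough Python/numpy sketch of the pause-then-rise shape (illustrative only, not Logic's internals):

```python
import numpy as np

sr = 44100
drums = 0.5 * np.sin(2 * np.pi * 110 * np.arange(2 * sr) / sr)  # stand-in drum audio

envelope = np.concatenate([
    np.zeros(sr // 2),               # the pause: volume automated down to silence
    np.linspace(0.0, 1.0, sr // 2),  # the gradual rise back in
    np.ones(sr),                     # back at full volume
])
automated = drums * envelope         # apply the automation as a gain curve
```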
For this build-up part I also equalised it, meaning I altered which frequencies of the sound I wanted to be more or less prominent. I cut out some of the high frequencies, making it sound almost underwater. I also cut some of the low frequencies to balance the sound out, as with the high frequencies removed it sounded too bassy.
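The same 'underwater' effect can be sketched as a pair of filters: a low-pass to remove the highs and a gentler high-pass to take out some of the lows. A hedged Python/scipy sketch, with cut-off frequencies chosen purely for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
audio = np.random.randn(sr)  # stand-in for the build-up drum audio

# Cut the high frequencies for the muffled, 'underwater' feel...
lowpass = butter(4, 1000, btype='lowpass', fs=sr, output='sos')
# ...then trim some lows so the result doesn't end up too bassy
highpass = butter(2, 200, btype='highpass', fs=sr, output='sos')

filtered = sosfilt(highpass, sosfilt(lowpass, audio))
```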
Next I moved on to recording some basic ideas for the guitar parts. I didn't record these in the studio, as that would be too time-consuming just for the structure of a song, so I recorded using the audio interface provided in the Mac suites.
Audio Interface
An audio interface is a device that plugs into your computer through USB and converts analogue signals, such as vocals and instruments, into digital signals, allowing you to record them.
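In miniature, this analogue-to-digital conversion means sampling the signal at a fixed rate and quantising each sample. A small Python sketch of the idea (simplified; real interfaces typically record at 24-bit rather than the 16-bit shown here):

```python
import numpy as np

sr = 44100                              # CD-quality sample rate
t = np.arange(sr) / sr
analogue = np.sin(2 * np.pi * 440 * t)  # idealised continuous signal

# 16-bit quantisation: each sample becomes one of 65,536 integer levels
digital = np.round(analogue * 32767).astype(np.int16)
```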
On these guitar tracks, I used this fuzz pedal, as I found it gave the level of energy I was looking for. This isn't the finalised guitar sound, but I found adding it helped me to better visualise what the result could end up like.
This is how the Guitar turned out.
This is the result of both the guitar and drum structure. I decided that the bass would follow a similar pattern to the guitar, so to save time I didn't program or record any bass for the structure. I also decided to complete any synth parts in the mixing stage, again to save time. As you may notice, the advert this is composed for is roughly a minute long, while this song is roughly two and a half minutes. This is because I plan to make edits of this song, as is standard when uploading to a sync agency (which is what I want to achieve by the end of this project). To help myself with this, I have made each part easy to loop and edit, which will help when making longer or shorter versions, and I have added a few changes to the classic hook in places, with the intention of creating edits which can still hold musical variety.
This week, I received a response back from Harry saying he would be up for mastering my music, as this is a good route for him to gain more experience in preparation for university.
I responded saying I would keep him updated on when I expect to be done with each mix, and that I will send each one over when it is done.
I also emailed James with a studio recording date. I decided that now would be the best time to inform him, as it gives him good notice of when the session is, but also gives me good time to reschedule if something comes up. I also sent him an email with my first composition, including both the full structure and a version with just the drums, as he mentioned in a conversation in person that he would prefer the songs be sent in audio format only, this being the best way for him to learn the song.
I also made sure to email Marcy to inform her of the date I have the studios booked, and again asked if this works for her, so I know whether I have to reschedule.
Shaikh, E. (2025) Top 10 most popular music genres of 2025 [updated list], SimpleBeen. Available at: https://simplebeen.com/popular-music-genres/ (Accessed: 01 April 2025).
Lees, G. (2023) The best games like borderlands 2024, PCGamesN. Available at: https://www.pcgamesn.com/best-games-like-borderlands-pc (Accessed: 01 April 2025).
Join taxi (no date) How Much To Join TAXI A&R Music Service. Available at: https://www.taxi.com/songwriting3 (Accessed: 02 April 2025).
IGN (2010) Bulletstorm: Gameplay Trailer, YouTube. Available at: https://www.youtube.com/watch?v=Ty1H29WMgkE (Accessed: 01 April 2025).
IGN (2021) Outriders - Official Launch Trailer, YouTube. Available at: https://www.youtube.com/watch?v=8iQnuJxfj-c (Accessed: 01 April 2025).
IGN (2019) The Outer Worlds - Official Launch Trailer, YouTube. Available at: https://www.youtube.com/watch?v=zNmjNA6dtEA (Accessed: 01 April 2025).
This week I aim to record my 1st composition, for the Borderlands 4 advertisement, in the studios. I will make sure to record the drums first, as this will help to keep the timing of the song and make it easier for the other instruments to be recorded. Also, because I planned myself an extra week in my proposal, I aim to structure out my second composition, for the 'adopt a snow leopard' advert, to remedy this and complete everything I set out to achieve by the end of this project. This will be very similar to the previous week, in which I structured my 1st composition; however, I will aim to include more of the techniques mentioned by Matt in my interview with him (go to Week 2 - Research). I also aim to make a shorter composition, both to demonstrate more variety in editing music for advertisements and because of the quantity of work I aim to complete this week.
From last week's research, interviewing Matt, I found a lot of points which would be particularly useful for my 2nd composition:
As mentioned last week, I found this a very interesting point and something I hadn't considered initially, so I thought that experimenting with this idea would be best for this composition. My speciality is mainly guitar, as I am very comfortable working with it and can effectively write songs on it; however, for my advert 2, the original background music has little to no guitar and is mainly a piano-based piece. This made me decide that utilising the guitar for this advert would help me become more proficient as a guitarist creating music for advertisements, and would also help me to be efficient when composing the structure of the song, which is useful given the previously mentioned quantity of work I aim to achieve this week.
After my interview with Matt, he sent me this video, which goes over the importance of reference tracks and how you can use them efficiently to create inspired music, including each step of creating and using your reference tracks. Here is a summary of each stage he goes through in the video:
Understanding what a reference track is, what its purpose is, and what you aim to get out of it
From my knowledge, I have a good understanding of what a reference track is: a song, series of songs, melody, part, etc. used as inspiration when creating your own work. The purpose of the reference track will be to help create a song that conveys a specific, desired emotion. Using this, I aim to create 1 composition for my second advertisement.
Search wider than the provided reference tracks
This point applies less to me, as I don't have provided reference tracks; however, I understand that the reason for this is to find a broader range of inspiration, so perhaps by finding my reference tracks from multiple sources, I can achieve the same thing. This will also help me to separate myself from the reference tracks by not accidentally copying too much or making my song too similar, an issue brought up in the video.
Instruments used and how they are used
What instruments are common in these reference tracks, and in what manner are they usually played (i.e. softly, plucked, bowed, etc.).
What are the stylistic features of the track
What tempos, chords, structures etc. are common amongst all of your reference tracks or their genres.
Create a basic template
Create a basic outline of these factors for your song, such as how many instruments (and which kinds) to use and how long the hook line is; maybe create a melody or chord progression. (This is how I would do a basic template; however, it will change from person to person.)
Write 3 tracks in the style of the reference
I have decided that for this project I won't write these 3 tracks: since I am composing this during the week I am also recording my 1st composition, it would be too time-consuming. However, I do see how it would benefit my work, going back to what Matt said during the interview about how some employers may like one part from one song and another part from a different song.
When searching again for these playlists on Spotify, I couldn't find them. As a means of still listening to this kind of playlist, I found that the FELT MUSIC sync agency provides playlists on their website. These were 3 results from searching the prompts 'Sad' and 'Acoustic':
Reflective sad track with beautiful acoustic guitars and strings.
https://www.feltpm.com/tracks?tracks=ear0134-26
Calm piano line with a nice acoustic guitar combined with drums.
https://www.feltpm.com/tracks?tracks=ear0124-87
A collection of sad, lonely, depressing and melancholic tracks for farewell scenes and sad moments. This collection is perfect for daytime TV and dramatic scenes
https://www.feltpm.com/tracks?tracks=ear0045-87
These are 3 of the songs I found worked well with the advert when listening to the two together. Referring back to the video suggested by Matt, I purposefully selected songs which featured guitar as a common instrument since, going back to what Matt said, I should utilise my speciality in guitar to help myself stand out. I can also hear that piano is used in each of the other songs too, typically played in simple chords. Again drawing on the information gathered from the video, I noticed that of these 3 songs, 2 feature picked guitar rather than strummed, referring back to the 3rd point in the video. Listening to the chords used, there is a mixture of both major and minor chords, which perhaps helps to create this sad but almost uplifting sound: the minor chords are associated with sadness and the major with happiness, creating a feeling of hopefulness.
https://open.spotify.com/playlist/74D7Bd72ppA6mjK6GHhxNy?si=24_9ioX3SqyaC2llBm72SA&pi=p3B9KtzlT6Kjv
This is the playlist I created for this project. I treated the song originally in the background of the advertisement as if it were a provided reference track (this being the song displayed at the top of the playlist, I Won't Let Go - Rascal Flatts), going back to the video Matt sent, covered in my research above. I then found songs with a similar energy, whilst also watching the advert to make sure they fit its energy and message. I tried to pick songs which featured more guitar, as this will be helpful when taking inspiration from them; and, as mentioned in the video above, it's good to have variety, as it will help me separate my work from the reference tracks, not copying them or making the song too similar.
From the visuals, as well as the dialogue heard and the message communicated through the advert, it is clear that this is a sad advert. The videos of snow leopards only ever show a single snow leopard, suggesting isolation and a decline in the numbers of this animal. This furthers the message, showing that they need the help of others, which the dialogue in the advert also tells you. Another thing communicated through this advert is a sense of hope, assisted by the message at the end saying to help stop the killing today, suggesting it can be stopped through the help of those watching.
As I previously mentioned, this will prove very useful for my second chosen advertisement, as the dialogue over the top holds most of the advert's content, so making sure this message comes across clearly and coherently is very important.
Here are the steps and some information to know when recording in a studio:
Set up your microphones
The first thing you do when beginning a studio recording session is select your microphones, whether that be a condenser microphone or a dynamic microphone (this is further explained below in the drum recording section of my blog). Next, you want to consider your microphone's position. This is best exemplified by micing up amps: setting your mic at the centre of the amp's speaker will pick up a lot of clarity from the amp, whereas placing your mic around the edges of the speaker will make the recorded sound more dampened. This highlights how even a slight change in mic position can affect the recorded sound.
Make sure everything sounds good before recording
By this, I mean making the thing you are recording sound its best. This particularly applies to instruments. For example, if a guitar amp's distortion doesn't sound very good when recorded, you may not be able to fix it in the mixing process, so ensuring everything sounds right at the recording stage is very important, as otherwise it may force you to re-record.
Connecting to the wall-ties and input management
On the wall in the live room (the room which contains all the music equipment, and in which you are recorded for the most part), there are wall-ties, into which you plug each of your XLR cables. It is very important to note down which microphone, and which source being recorded, is going into each input, as it can get very messy with many cables around, and it can be hard to remember which input corresponds to which mic. Noting this down on a piece of paper or in a phone's notes app helps to manage this and saves time going back and checking which cable and input belongs to which mic.
Patching
The signal runs from the mic, through the wall-ties, and into the live room XLR connectors in the control room (the room with the computer, where the sound is engineered). Each channel is then patched, using smaller XLRs (typically referred to as 'mic cables', these being very good at transmitting balanced audio), from the connectors to the Element 88 audio interface inputs (audio interface description in Week 2 - Guitar). This sends the signal from the live room into the control room and onto your Digital Audio Workstation (DAW for short, which is what Logic is).
Preparing your project for recording
The next thing you would do is prepare your project for recording; however, most of the time the audio engineer will set this up during the band's set-up time. This includes making new tracks for each of the inputs used and making sure that the condenser microphones have 'phantom power'. This is external power sent to condenser microphones, which require it, unlike dynamic mics (you provide it by locating the button that says '48v' on the track in question). This is also a good time to check each of the mics is working, by having someone tap or speak into them to see if they are picking up sound, and to ensure each mic was correctly labelled (going back to step 3), referring back to the note of each input.
Sound Check
In the sound check, the musician(s) will play so that the sound engineer can set the levels correctly. The engineer will raise the gain (boost the signal) so that each track peaks at around -10dB (when the input level reaches the point where it turns from green to yellow). This ensures that each track is recorded with a high level of detail. It's important to be careful here: if the input level is too high, the audio goes beyond the level the system can handle, the peaks passing this point get cut off, and the audio distorts. This is called 'clipping', and once it happens it isn't reversible in the mixing process, so if done unintentionally, the only way to fix it is to re-record, with the gain lowered so that it doesn't happen again.
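The level maths here can be sketched in a few lines of Python: peak level in dBFS (decibels relative to full scale, where 0 dBFS is the clipping point), plus a simple clipping check. The signal and target values are illustrative:

```python
import numpy as np

def peak_dbfs(audio):
    """Peak level relative to full scale; 0 dBFS is where clipping begins."""
    return 20 * np.log10(np.max(np.abs(audio)))

take = 0.3 * np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)  # stand-in take
print(round(peak_dbfs(take), 1))    # about -10.5 dBFS: close to the target level
print(np.any(np.abs(take) >= 1.0))  # False: nothing has hit full scale (no clipping)
```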
Recording
Once the sound check is done and all the levels are correctly set, you are ready to record.
As you can see in the videos shown above, James and I used wallets to tighten the sound of the snare and compared what worked best. In the first video the snare sounds very ringy and rattly, so we tried using the wallets to make it rattle less, and in turn ring less, which creates a tighter, snappier sound. We found that 1 wallet wasn't enough, as the snare still sounded too rattly, so we placed another on top to tighten it further, creating the desired sound of a punchier snare.
We also experimented with putting cushioning in the kick drum. This achieves the same idea as the previous video: the more items placed inside the kick, the more the vibrations get absorbed, creating a tighter, punchier sound. For both of the experiments above I decided to use the tighter sound, as the drums are fast-paced and it helps give clarity to each hit.
This is how we mic'd up the drum kit. We placed 2 Rode M5 condenser mics above the drum kit to capture the overheads. We used these because, being condenser mics, they are more sensitive to sound, which allows more detail to be captured in the recording; this is ideal for cymbals, as they have a detailed sound.
We used an AKG C1000 for the hi-hat, this also being a condenser mic, allowing us to get the detail from the hi-hat. Another good thing about this microphone is that it's very directional when picking up sound, meaning it records more sound from one direction than from others, allowing us to point it toward the hi-hat and pick up less from the other drum parts (the image on the right displays a diagram of the directions sound can be picked up from, and how more sound is rejected from the directions shown as empty).
We used an AKG D112 for the kick drum. This is a dynamic mic; dynamic mics capture less detail than condensers, but they are very good in live settings, as they tend to pick up less sound from surrounding instruments. This mic is especially good at capturing detail at lower frequencies due to its large diaphragm (the membrane inside the microphone that vibrates with the sound waves), allowing it to pick up lower frequencies more easily.
For the snare drum, we used a Shure SM57 dynamic microphone, as due to the snare's placement, a dynamic microphone is most effective. This is because, as mentioned before, it picks up less surrounding sound, unlike a condenser, which is more sensitive and would likely also pick up the detail from the hi-hat.
See https://sites.google.com/d/1VHTQT0Rty8DTEiTHpGFwt1gM7OY3Kk23/p/1J3pNN5fpYiyf_Ss-suJT52lbOomUltin/edit for more information on these microphones or go to Project 1 - Task 2 on my website.
Microphone polar patterns are a visualisation of where a microphone can pick up sound from.
Omni-directional - this polar pattern picks up sound from all around the mic (360 degrees). This is particularly good for capturing room sound, as it captures sound from every angle.
Cardioid - cardioid mics are directional, rejecting sound from behind the mic, which makes them very useful for targeting a specific source; recording vocals in a live setting, for example.
Hyper-cardioid - hyper-cardioid mics are even more directional than cardioids, rejecting more sound from the sides, although they do pick up some slight sound from behind. This could be useful for recording the hi-hat of a kit, for example, as it can more precisely target the sound of the hat amongst the other parts of the kit whilst picking up only minimal sound from behind.
Bi-directional (figure of 8) - bi-directional microphones pick up audio from two opposite sides, which is good for recording 2 vocalists, or instruments like guitar, onto 1 track.
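All of these patterns belong to one family of first-order curves. A hedged Python sketch of the standard textbook formula, sensitivity = |α + (1 − α)·cos θ|, where θ is the angle from the front of the mic (the α values are the conventional ones, not measurements of any specific mic mentioned above):

```python
import numpy as np

def sensitivity(theta, alpha):
    """First-order polar pattern: 0 degrees is directly in front of the mic."""
    return np.abs(alpha + (1 - alpha) * np.cos(theta))

theta = np.radians(np.arange(360))
omni     = sensitivity(theta, 1.0)   # equal pickup in every direction
cardioid = sensitivity(theta, 0.5)   # full at the front, zero at the rear
hyper    = sensitivity(theta, 0.25)  # tighter front, small rear lobe
figure8  = sensitivity(theta, 0.0)   # front and back, nulls at the sides
```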
The person in this video places two microphones on the floor, far away from the drums, with the condenser microphone set so it picks up sound well from every direction (referred to as Omni in the video). The purpose of this, as he mentions, is to miss the early reflections (the earliest points at which the sound bounces around the room), to get a better sense of ambience and to make the sound punchier. It had been a while since I had watched this video, and in retrospect I should have rewatched it before attempting this myself. In the picture above on the left, you can see me trying the method. I used an AT3043 condenser microphone, which picks up sound directionally rather than from all angles as suggested in the video, so it didn't give the desired effect. However, I found it did capture the room's ambience nicely when listening back.
Further on the ambient mics: I used an AKG C3000 condenser microphone, as it was the last condenser we had available to us and is fairly versatile in what it is good at recording. However, I placed this inside the top of the piano (as seen in the picture above on the right) to experiment with what sound this gave. It ended up making no noticeable difference compared to placing it in the corner of the room instead.
When recording the guitar for this track, I decided to use the AKG D112, the same mic I used to record the kick drum. Usually I would use a condenser microphone for the guitar, as it captures more detail; however, since the main guitar riff I was playing sits on the lower strings, I decided to use this dynamic microphone because, as mentioned before, it's very good at capturing bassier/lower sounds. I thought this would benefit the sound by helping the guitar to be more powerful, which was what I was aiming for, looking back at my research on other advertisements. It is also worth mentioning that I placed the microphone directly above the speaker so as to capture the best quality of sound.
For my settings on the amp, I decided not to tamper with the EQ too much, as I would be doing that in the mix, but I added more bass and mids (around 250 to 4000 Hertz), as this is where I thought the guitar would sit. I also added distortion, to make the guitar more driving; I thought adding this pre-production would help create a more 'raw' sound, which I felt would make it more powerful. Lastly, I added a little reverb, again to help with this 'raw' sound, but not too much, since if I decided I didn't want it, I would have to re-record. The same applied to the distortion; however, I was more certain I wanted to use that.
This week I started structuring and planning my composition 2. I started the same way I structured my first composition: by creating regions to show where I would place each of the instruments, and by deciding which instruments I would use. As can be seen in the image above, I have only planned out what the piano will be. This is because of time constraints: I thought it best to plan on the piano because I am confident in my ability to write a guitar part quickly, and this would allow me to focus more on the recording of my first composition. This helped me to realise what Matt and I discussed in the interview: having and knowing your speciality helps you create music more efficiently. Referring back to my reference tracks, I used a mixture of major and minor chords in my composition to create a sad but hopeful song, with the minor chords resolving into a major, almost reflecting a sense of resolution to the situation presented in the advert. I also used a slow tempo, helping to further these emotions. The reason for emphasising the possible emotions projected by the advert through the music goes back to Ewing's point in week 1, in which he says to 'Tap into Emotion', as it is a powerful tool.
I thought that doubling the piano with a synth would create a nice mellow and airy sound, so I tried this and really liked the result.
This is a screenshot of the synthesiser I used. The section on the left labelled 'Oscillator' creates sound waves of different varieties, this synthesiser (Retro Synth) only being able to do 2 at a time. You can blend these together using the 'Mix' fader. In this section, I used a saw wave (shape 1) and a triangle wave (shape 2), with more volume on the triangle wave. I then detuned the triangle wave slightly using the 'Cents' dial. After this, I used the filter section to cut out some of the high-end frequencies and boosted the frequencies at the cut-off (often referred to as adding 'resonance').
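A rough Python/scipy sketch of the oscillator section as described (the frequencies and mix values are stand-ins, and Retro Synth's actual engine will differ). The 'Cents' detune follows the standard relation f_detuned = f × 2^(cents/1200):

```python
import numpy as np
from scipy.signal import sawtooth

sr, f = 44100, 220.0
t = np.arange(sr) / sr

cents = 7                                             # slight detune on oscillator 2
f_detuned = f * 2 ** (cents / 1200)

saw = sawtooth(2 * np.pi * f * t)                     # shape 1: saw wave
tri = sawtooth(2 * np.pi * f_detuned * t, width=0.5)  # shape 2: triangle wave

mix = 0.4 * saw + 0.6 * tri                           # 'Mix' fader favouring the triangle
```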
I also turned the 'LFO' dial up to around half way, allowing the LFO to apply. An LFO is a Low Frequency Oscillator; when applied, it modulates your synth throughout its duration. To provide an example, you could create a tremolo effect (quick, short successions of the volume decreasing and increasing again) by putting an LFO on your synth's volume: as the LFO's wave shape goes down, your volume goes down, and as the wave goes up, your volume rises again. This is not limited to volume; on other synths you can modulate almost anything, such as pitch, filtering, etc. These shapes are seen in the LFO section above the diagram. For this LFO, I used a saw wave (slow rise, sudden drop) and modulated the volume with it, creating a buzzier sound on my synth.
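A sketch of that tremolo example in Python/numpy, with the LFO amount at half way as described (the rate and depth are illustrative):

```python
import numpy as np
from scipy.signal import sawtooth

sr = 44100
t = np.arange(2 * sr) / sr
voice = np.sin(2 * np.pi * 220 * t)      # the synth voice being modulated

lfo = sawtooth(2 * np.pi * 6.0 * t)      # 6 Hz saw LFO: slow rise, sudden drop
amount = 0.5                             # the 'LFO' dial at around half way
gain = 1 - amount * (1 - (lfo + 1) / 2)  # map the LFO into a 0.5-1.0 volume curve

tremolo = voice * gain                   # volume rises and drops with the LFO's shape
```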
In this example, I gradually apply the LFO so that the difference its application makes can be heard. This gradual increase in the use of the LFO is seen in the automation in the image on the left.
Next to this section are the Filter and Amp envelopes. In terms of the amp section, the envelope allows you to decide how long it takes for your sound to reach its peak volume (Attack); how long it takes to drop from its peak volume (Decay); the volume after the decay and before the release of the sound (Sustain), this also being the continuous volume if you hold a note down; and the length of time taken for the sound to fade out after the note has been released (Release). The same idea can be applied to the filter, except there it affects how the sound is filtered rather than how its volume changes.
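The four stages can be sketched as a simple envelope generator in Python/numpy (the times and levels are arbitrary examples):

```python
import numpy as np

def adsr(attack, decay, sustain, release, held, sr=44100):
    """ADSR amplitude envelope: times in seconds, sustain as a 0-1 level."""
    return np.concatenate([
        np.linspace(0, 1, int(attack * sr)),         # Attack: rise to peak volume
        np.linspace(1, sustain, int(decay * sr)),    # Decay: fall to the sustain level
        np.full(int(held * sr), sustain),            # Sustain: held while the note is down
        np.linspace(sustain, 0, int(release * sr)),  # Release: fade out after note-off
    ])

envelope = adsr(attack=0.02, decay=0.3, sustain=0.6, release=0.8, held=1.0)
```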
In the section labelled 'AMP', I moved the volume dial down (this controls the volume of the entire synth). I also moved the 'Sine Level' dial to its max. This adds, and adjusts the volume of, a sine wave which mixes with the overall sound of the synth, giving it a warmer and smoother sound.
And finally, I used the 'EFFECT' section to add an effect to my sound. I decided to go with chorus, as it made the sound fuller. This website, https://www.izotope.com/en/learn/understanding-chorus-flangers-and-phasers-in-audio-production.html, describes chorus as a simulation of the subtle changes in pitch and timing that occur when multiple musicians play or sing together. I applied it by moving the mix dial, which changes how much of the original and effected signal you hear (say I have the dial at 50%: that means an equal balance between the original signal and the effect signal; at 75%, for example, I would have 25% original signal). The rate dial affects the speed at which the effect applies itself and how quickly the sound changes. I decided to keep the rate low, as I didn't want the effect to be overpowering.
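The mix dial's behaviour is a simple dry/wet blend of the original and effected signals. A small Python sketch of that maths, with a crude delayed copy standing in for the effect (a real chorus modulates the delay time rather than keeping it fixed):

```python
import numpy as np

def dry_wet(dry, wet, mix):
    """mix=0.5 gives an equal balance; mix=0.75 gives 25% dry, 75% effect."""
    return (1 - mix) * dry + mix * wet

sr = 44100
dry = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
wet = np.roll(dry, 150)          # stand-in 'chorused' copy: a short fixed delay
out = dry_wet(dry, wet, mix=0.5)
```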
These are the basic waves you will usually find on a synthesiser's oscillator and what they look like, and below is how each one sounds.
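For reference, the four basic shapes can be generated in a few lines of Python/scipy, which is a handy way to hear them outside of a synth:

```python
import numpy as np
from scipy.signal import square, sawtooth

sr, f = 44100, 220.0
t = np.arange(sr) / sr

sine     = np.sin(2 * np.pi * f * t)               # pure tone, no harmonics
square_w = square(2 * np.pi * f * t)               # odd harmonics: hollow, buzzy
saw      = sawtooth(2 * np.pi * f * t)             # all harmonics: bright, brassy
triangle = sawtooth(2 * np.pi * f * t, width=0.5)  # soft, flute-like
```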
There is no evidence of communication with my collaborators this week. However, I did set up another session with Marcy after recording this week, in order to record the bass guitar. I also made sure to be polite and considerate during these studio sessions, as this is a very important part of collaboration. This included making sure I wasn't too demanding about what I wanted done in the studio sessions, thanking my collaborators for helping me with my work, and upholding basic manners (such as saying please). This is all important, as it allows you to keep good connections with your collaborators, which may prove useful for future opportunities.
Desert Road (no date) Felt music. Available at: https://www.feltpm.com/tracks?tracks=ear0134-26 (Accessed: 22 April 2025).
Sad Cold Days (no date) Felt music. Available at: https://www.feltpm.com/tracks?tracks=ear0124-87 (Accessed: 22 April 2025).
Remembering You (no date) Felt music. Available at: https://www.feltpm.com/tracks?tracks=ear0045-87 (Accessed: 22 April 2025).
AD 2 INSPO (2025) Spotify. Available at: https://open.spotify.com/playlist/74D7Bd72ppA6mjK6GHhxNy?si=24_9ioX3SqyaC2llBm72SA (Accessed: 22 April 2025).
Compton-McPherson, W. (2024) TASK 2 - Preparation for Performances, Google Drive: Sign-in. Available at: https://sites.google.com/d/1VHTQT0Rty8DTEiTHpGFwt1gM7OY3Kk23/p/1J3pNN5fpYiyf_Ss-suJT52lbOomUltin/edit (Accessed: 22 April 2025).
Messitte, N. (2021) Understanding chorus, flangers, and phasers in Audio Production, iZotope. Available at: https://www.izotope.com/en/learn/understanding-chorus-flangers-and-phasers-in-audio-production.html (Accessed: 25 April 2025).
Andrew Hind Music (2025) How to Use Reference tracks Like a Pro, YouTube. Available at: https://www.youtube.com/watch?v=U1LyYciXtXw (Accessed: 22 April 2025).
amplifiedwax (no date) Recording, mixing, mastering studio on Instagram: 'here is a trick to save money on gear when recording drums...', Instagram. Available at: https://www.instagram.com/reel/DB4Q0C4SF39/?igsh=NXF6ZmtiNGYxOTY0 (Accessed: 25 April 2025).
This week, I aim to mix my track 1, as well as edit and sync it to the advert. I also aim to create a series of edits of this song that I can upload to a sync agency, as having multiple edits is typically standard. For the edits, I aim to do the full length, the edit for the advert, a 60-second edit (not dissimilar to the advert edit, as it is of a similar length, which will help save time) and a 30-second edit. As there is a lot of work for me to do, I have decided to prioritise the practical work over the research to help with my time management (this means that, unlike other weeks, there won't be a 'Research' section; any research done will be included in the 'Practical' section). I have also decided to prioritise the full-length edit and the edit for the advert, as these are the most useful and important to my project. I also aim to record bass this week, as I didn't have enough time in the previous week. I am going to prioritise this and do it in my first lessons on Tuesday, to catch up as soon as possible.
After the last session working with Marcy, I asked her if she would be able to record bass in the first lessons on Tuesday, to which she agreed. We decided to record the bass in the live room, plugging into the Element 88 audio interface. To do this we used a DI box, which meant we could plug straight into the audio interface and record from the control room. This made it easier to communicate and to do multiple takes, helping to complete the process swiftly.
A DI (Direct Input) box takes an unbalanced signal, from a jack cable for example, and converts it into a balanced signal, creating a cleaner sound (DI box shown in the far-left image).
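Balanced audio works on the same polarity idea as my phase cancellation experiment in week 1: the cable carries the signal and an inverted copy, interference lands on both conductors roughly equally, and subtracting them at the other end cancels the noise. A simplified Python/numpy sketch of the idea:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
bass = np.sin(2 * np.pi * 110 * t)      # the bass guitar's signal
noise = 0.1 * np.random.randn(sr)       # interference picked up along the cable

hot = bass + noise                      # conductor 1: the signal
cold = -bass + noise                    # conductor 2: the inverted copy

received = (hot - cold) / 2             # the receiving end subtracts the pair
print(np.max(np.abs(received - bass)))  # ~0.0: the noise has cancelled out
```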
The first thing I did when approaching the track was try to fix any imperfections. The first imperfection I noticed was that the recording of the intro guitar part was out of time. I found that the best solution would be to re-record it using the audio interfaces provided in the Mac suites. Unfortunately, this came with a fairly big issue: the guitar at the start of the composition would sound significantly different from the rest of the guitar parts, one having been recorded by micing up a guitar amp, giving a more natural sound (for example, by capturing more room sound), and the other being directly recorded, giving a more raw, pure and untainted sound. From experimenting, I found that the best way to make this issue less apparent was to make it feel intentional, giving the re-recorded guitar track fewer high frequencies so that the other guitar sounds larger when it is introduced (this EQ is shown below in the Equalisation section).
Another imperfection I noticed was that the main guitar part wavered in and out of time too much, which is an issue as it may sound off-beat, or even cause problems when trying to loop (play on repeat) sections for my edits. To remedy this, I found the most in-time take by listening with a metronome in the background, and looped it for the parts that contain the riff (the majority of the song), making the part more consistent in time. The main issue with this technique is the risk of making the song sound unnatural, due to a specific part being repeated absolutely identically; however, I did not find this to be a big issue, as the natural, free sound of the drums helped to mask it.
The next thing I did was organise the drum tracks. I did this by muting the rest of the tracks on the project using the small 'M' button seen on each channel. I then set the volume of each track using a method taught to me that I call the 'Priority Method'. First, you reduce all the tracks to no volume (shown as -∞ (negative infinity) decibels). Next, you decide which track you want to be the loudest and which the quietest, listing each track in order of importance. You then raise the volume of your most important track to your desired peak volume (I like to raise it to -3.0 decibels, as adding more sound will increase the overall loudness), then raise your second most important track to a volume below the first, and so on until you have adjusted the volume of all the tracks you wish to. This helps to create a level sound where each important part of a song is accentuated and not masked by other, less important parts.
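To illustrate the arithmetic of this method, here is a minimal Python sketch of the idea (this is not anything Logic does for you; the track names, the -3 dB top level and the 2 dB step are placeholder choices):

    PRIORITY = ["kick", "snare", "overheads", "ambs", "toms"]  # loudest first

    def priority_levels(tracks, top_db=-3.0, step_db=2.0):
        # Start every fader at -infinity dB, then work down the ranking.
        levels = {t: float("-inf") for t in tracks}
        for rank, track in enumerate(tracks):
            levels[track] = top_db - rank * step_db
        return levels

    print(priority_levels(PRIORITY))
    # {'kick': -3.0, 'snare': -5.0, 'overheads': -7.0, 'ambs': -9.0, 'toms': -11.0}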
Also seen in this image, I have used Auxiliary (Aux) tracks to allow multiple similar tracks to be controlled under one Aux (shown as OVERHEADS, AMBS and DRUMS). These are called 'Summing Tracks', and they let me edit those tracks collectively. For example, if I wanted my overheads, left and right, to have exactly the same reverb, so they sound as if they are in the same room, I would sum these tracks together, creating a new Aux which I could name 'Overheads', and place reverb on this Summing Track, applying it to both overheads. It also allows me to mute both at the same time by muting the summing track, or adjust the volume of both equally using its volume fader, making my tracks very easy to manage. I also have all of these tracks under a Drums summing track (including the Overheads and Ambs summing tracks), which allows me to edit all of the drum parts collectively if I decide to, and is great for keeping my drums organised.
When mixing my track, I used EQ (Equalisation) on every track, so, to keep this section from getting too long, I have decided to pick the tracks which best highlight my understanding and show the most change. For each of these, I used another method taught to me, called 'The Sweep Method'. This uses a bell curve (an EQ filter which boosts or cuts a specific frequency range, named for its bell-like shape (see image on the right) when shown on a frequency graph (frequency graphs shown in the images below)): you significantly boost its volume and drag it across the centre line (shown in the images below), drastically boosting the volume of specific frequencies as you drag it along. This helps to distinguish good and bad frequencies, allowing you to easily hear which frequencies you want to cut (reduce the volume of) or boost (increase the volume of).
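As a rough illustration of what the plug-in is doing during a sweep, here is a minimal Python sketch using the standard 'Audio EQ Cookbook' bell (peaking) filter; the sample rate, the +15 dB boost and the sweep frequencies are illustrative assumptions, not my actual settings:

    import numpy as np
    from scipy.signal import lfilter

    def peaking_biquad(f0, fs, gain_db, q):
        # RBJ 'Audio EQ Cookbook' bell (peaking) filter coefficients.
        a = 10 ** (gain_db / 40.0)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
        den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
        return b / den[0], den / den[0]

    fs = 44100
    x = np.random.randn(fs)  # stand-in for the track being auditioned
    # Sweep a narrow +15 dB bell across the spectrum and judge each band by ear.
    for f0 in [100, 300, 1000, 3000, 9000]:
        b, a_coefs = peaking_biquad(f0, fs, gain_db=15.0, q=4.0)
        boosted = lfilter(b, a_coefs, x)  # the exaggerated band you listen to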
It is also important to add that these EQs did change over the course of the mix; however, the changes were so slight that I deemed it unnecessary to include them, to help manage my time.
This was the video I watched to learn the 'Sweep' technique. I watched this video again as a means of brushing up on this technique and to solidify my knowledge.
I used the resources on the webpage to help me sharpen my understanding of EQ: https://sites.google.com/view/btecmusictechnology/unit-13-mixing-and-mastering/mixing-techniques/eq-techniques
From this I learnt that the purpose of EQ filters is to add, remove or reduce frequencies from a given sound, helping to contribute to a better sound. Here are the different types you will find (with a small sketch of them after the list and image below):
High Pass Filter
Cuts frequencies below a specific point (also referred to as a Low Cut)
Low Pass Filter
Cuts frequencies above a specific point (opposite of a High Pass Filter) (also referred to as a High Cut)
Bell Curve Filter
Boosts or cuts a specific frequency range of which you can adjust
Shelf Filter
Used to boost or cut high-end or low-end frequencies at a constant level, depending on whether it's a High or Low Shelf
Notch Filter
Used to precisely remove very specific frequencies
This image shows each of these filters very well.
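For concreteness, here is a minimal sketch of how these filter shapes can be designed in code (the bell curve is the peaking_biquad from the sweep sketch above; all frequencies, Q values and gains are placeholder choices):

    import numpy as np
    from scipy.signal import butter, iirnotch

    fs = 44100  # assumed sample rate

    def low_shelf(f0, fs, gain_db, s=1.0):
        # RBJ low-shelf coefficients: boosts or cuts everything below f0
        # by roughly gain_db, at a constant level.
        a = 10 ** (gain_db / 40.0)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / 2 * np.sqrt((a + 1 / a) * (1 / s - 1) + 2)
        cosw = np.cos(w0)
        k = 2 * np.sqrt(a) * alpha
        b = np.array([a * ((a + 1) - (a - 1) * cosw + k),
                      2 * a * ((a - 1) - (a + 1) * cosw),
                      a * ((a + 1) - (a - 1) * cosw - k)])
        den = np.array([(a + 1) + (a - 1) * cosw + k,
                        -2 * ((a - 1) + (a + 1) * cosw),
                        (a + 1) + (a - 1) * cosw - k])
        return b / den[0], den / den[0]

    hp = butter(2, 100, btype="highpass", fs=fs)     # high pass / 'low cut' at 100 Hz
    lp = butter(2, 10_000, btype="lowpass", fs=fs)   # low pass / 'high cut' at 10 kHz
    nt = iirnotch(400, Q=30.0, fs=fs)                # notch: surgical cut around 400 Hz
    ls = low_shelf(100, fs, gain_db=-6.0)            # low shelf: constant -6 dB below 100 Hz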
For my kick, I used a slight low-cut reduction at around 20 Hz (Hertz, the unit of measure for frequency) to remove any rumble, as this would give it an undesirable and muddy sound. I used a bell curve to reduce the frequencies in the 30-200 Hz range and another, wider bell curve to reduce frequencies at around 100-800 Hz, removing the rumble from the kick and making it sound cleaner and punchier. To make the kick more punchy, I used a bell-curve boost at around 900 Hz, adding more pop to the sound and making the kick stand out more. I then used another bell at around 3000 Hz (3 kHz) to reduce any clicking sounds the kick had, helped by a shelf at around 5 kHz to further reduce this and any unwanted higher frequencies, such as other parts of the drum kit picked up by the kick mic, like the cymbals, which have a high frequency.
For the snare, I used a low cut at around 100 Hz to remove any rumble picked up through the mic from the kick drum. I also used the sweep technique and found some unwanted rattly sounds at around 175 Hz, but since this is the frequency where most of the snare's punch sits, I decided to only slightly reduce it, so as to keep a lot of the power in my snare. I then boosted the frequencies around 200-500 Hz to keep more of this power, using a bell filter to dramatically increase the frequencies so I could better hear whether they sounded good and would add the power I wanted. I also added a boost at around 800-2000 Hz to make the snare sound snappier and add to this power. Finally, I used a high cut at around 13 kHz to remove some of the sound picked up from the hi-hat. The reason it is important to remove as much of the sound from the other parts of the kit as you can is that these sounds combine with the other parts of the kit and can make other parts louder, which makes it harder to get good volume levels and can even mask some parts of the kit so they are barely recognisable in the song as a whole.
The right overhead is a good example of this idea of cutting frequencies to avoid volume build-up from certain parts of the kit. As can be seen in the image, I used a very large high cut, starting at around 600 Hz and removing most of the frequencies from this track. This was done to help the toms (mainly on the right side of the kit) stand out in the mix and not get covered up by the cymbals, which were closer to the mics. There is an issue with this: since there are cymbals on the right side, the cut removed a lot of their volume. However, the left overhead microphone picked up the cymbals on the other side rather easily, and they could be brought out by boosting the higher frequencies.
In this image, we see the use of a slightly wide notch in the floor room track at around 400 Hz. The reason for this is that this mic picked up a lot of click from the kick, which didn't sound very good, so, using the sweep method, I located the frequency it was coming from and cut it from the track, creating a better sound and helping to decrease the click in the kick sound as a whole, referring back to how the layering of parts increases their volume in the overall output.
This was the EQ for the guitar part I had to rerecord, mentioned previously. As we can see, I used a high cut at around 3000 Hz, which emphasised the main guitar part as it came in: the sudden addition of more frequencies made the guitar sound wider and more powerful.
For the bass, I used a high cut at around 1500 Hz. This is because the bass doesn't cover these high frequencies, but sounds like someone's hand slapping the strings do, so a high cut removes these sounds, resulting in a cleaner bass. I also applied a low-shelf reduction at around 100 Hz, as I wanted my bass to sound tighter, and reducing some of the lower frequencies helped to remove some of the bassy rumble.
Bass 2 is the bass part played during the change section of the song, in which I wanted there to be only bass and drums. To fill the empty space in this section, due to there being no guitar, I used a low shelf at around 500 Hz to emphasise the bassy rumble of the track and make it fill more space in the sound. I also used a bell-curve boost at around 800 Hz to still give the bass some clarity, keeping the initial pluck of the bass audible so it doesn't get lost in the track.
My final example of equalisation is the EQ for a summing stack of lead guitar parts. I wanted these to sit more in the background, and since this riff was played higher up the fretboard of the guitar, it covers more of the higher frequencies. I decided the best way to push it back was a high cut at around 4 kHz, as it made it sound more muffled, as if it was further away (the reason being that lower frequencies travel further than higher frequencies). However, I still wanted to add some clarity to the guitar, so I did this through the addition of resonance. To add resonance, you boost the point at which the frequency range ends, adding more detail within the range. So in my track, I added resonance at 4 kHz, which is roughly where the frequency range of this track ends.
When equalising, you are boosting and reducing the volume of frequencies, meaning it can affect the volume of the track as a whole, so it is important to readjust volume levels after EQing to ensure that everything is levelled correctly and that parts aren't getting lost within the song as a whole.
This diagram, found at https://doctormix.com/blog/parallel-compression-explaned/, visualises parallel processing very well.
The next thing I did was move on to compression and reverb for my drums. I started by creating buses for my compressions and reverbs (seen as COMP, COMP 2, SMALL V and BIG VERB). As can be seen in the image, the buses and the sums look very similar, as they are both separate aux tracks.
The difference between these is that a sum controls a collection of sounds, whereas a bus controls the route the sound takes. This leads us into parallel processing, in which the signal from the original track is doubled, playing both from the original track and the new bus track. The new bus allows you to add effects to the sound without affecting the original track.
You can then blend this with the original track using the volume fader on the bus track, or by increasing the signal sent from the original track to the bus, controlled with the gain knobs. These gain knobs are found under the sends section on your original track, next to the bus you have sent it to. To help locate them, see the 'kick' track: it has a few blue boxes on it labelled B 52 through to B 55, B standing for bus. These buses are where my signal is getting copied and sent to, and the same numbers can be seen in the input section above my bus tracks.
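The routing itself is simple enough to sketch: the dry track keeps playing, a gain-scaled copy goes through the effect on the bus, and the two are summed. The tanh 'effect' here is just a stand-in for whatever sits on the bus, and the gain values are placeholders:

    import numpy as np

    def send_to_bus(dry, effect, send_gain_db=-6.0, bus_fader_db=0.0):
        # The send knob on the original track scales the copy sent to the bus...
        send = dry * 10 ** (send_gain_db / 20.0)
        # ...the bus applies its effect, and its fader sets the wet level...
        wet = effect(send) * 10 ** (bus_fader_db / 20.0)
        # ...and both signals play together (parallel processing).
        return dry + wet

    drums = np.random.randn(44100)  # stand-in for the drum sum
    blended = send_to_bus(drums, effect=lambda s: np.tanh(4 * s))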
This was the image that originally helped me to better understand compression and how each part contributes to it, back when I started on the BTEC production course. I used it again to check that my understanding of compression and its settings was correct. I also used each parameter of the compressor while listening to the composition, to give myself an audible refresher on what each part does. (https://sites.google.com/view/btecmusictechnology/unit-13-mixing-and-mastering/mixing-techniques/compression-techniques?authuser=0#h.95pjknr6hoea)
A compressor essentially squashes the audio wave so that its peaks are reduced, helping to get a more consistent volume and fewer large spikes (if your track has large spikes in volume). Below I describe the most important parts of a compressor and the parts I have used throughout this, drawing both on the research and experimentation I did to remind myself of this tool and on my own prior knowledge from using compressors (a small sketch pulling these parameters together follows the descriptions).
The threshold knob sets the volume at which you want the compressor to activate (the threshold). Let's say there is an audio signal which reaches up to -6 dB in volume. If we set the threshold to -5 dB, the compressor won't activate, as no volume will pass this point. However, if we set the threshold to -10 dB, any time the volume surpasses that, the compressor will activate.
Ratio determines how much the compressor will reduce the volume once it passes the threshold, measured as a ratio. For example, if the ratio is set to 4:1, then for every 4 dB the volume exceeds the threshold by, only 1 dB comes out above the threshold (so a peak 8 dB over the threshold is reduced to 2 dB over).
The make-up knob allows you to increase the gain of your track: when you have compressed your track it will lose some of its volume, making it quieter, so you can use this knob to add back any lost volume. There is an option to have the gain compensated automatically, using the 'Auto Gain' button; however, I tend not to use this, as doing it manually gives me more control over my compression and the volume added back through the make-up gain.
Knee is how aggressively or smoothly the compressor activates when the volume reaches the threshold, 0 being smooth and 1 being aggressive.
Attack determines the time it takes for the compressor to fully engage once the volume reaches the threshold. For example, if I set the attack to 30 ms and the volume reaches the threshold, it will take that long for the compressor to squash the audio wave and reduce the volume of the peaks (the loudest parts of the track), leaving a moment (30 ms) where the audio is unaffected by the compressor.
The release is how long it takes for the compressor to deactivate, essentially the opposite of the attack. If the compressor is triggered again before the previous gain reduction has fully released, the release timer resets, effectively making the release even longer.
Output gain is similar to make-up gain; however, make-up gain primarily compensates for the volume lost to compression, whereas output gain is used to control the overall volume.
The mix knob allows you to blend the original signal with the compressed signal.
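Pulling those parameters together, here is a minimal hard-knee compressor sketch in Python (it ignores the knee and is nothing like a real FET circuit; the default values are arbitrary, not my plug-in settings):

    import numpy as np

    def compress(x, fs, threshold_db=-40.0, ratio=4.0, attack_ms=10.0,
                 release_ms=100.0, makeup_db=0.0, mix=1.0):
        level_db = 20 * np.log10(np.abs(x) + 1e-9)
        over = np.maximum(level_db - threshold_db, 0.0)
        # Ratio: of every `ratio` dB over the threshold, only 1 dB comes out.
        target_db = -over * (1 - 1 / ratio)
        # Attack/release: smooth how quickly the gain reduction engages and lets go.
        a_att = np.exp(-1.0 / (fs * attack_ms / 1000))
        a_rel = np.exp(-1.0 / (fs * release_ms / 1000))
        gain_db = np.zeros_like(x)
        prev = 0.0
        for n, tgt in enumerate(target_db):
            coeff = a_att if tgt < prev else a_rel  # falling gain = attacking
            prev = coeff * prev + (1 - coeff) * tgt
            gain_db[n] = prev
        wet = x * 10 ** ((gain_db + makeup_db) / 20.0)  # make-up restores lost level
        return mix * wet + (1 - mix) * x                # mix blends dry and wet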
These are the two compressors I used on my bus tracks, both FET compressors, as they are aggressive by nature. For my first compressor, I used a low threshold of around -40 dB to make the compression very severe. I also used a 30:1 ratio, again making this compression very intense. Continuing with the intensity, I turned the knee knob all the way up to make the compression as aggressive as possible, as well as setting my attack to zero, making the compressor activate instantaneously. I then set my release to 5 ms, which I later changed to 10 ms as I thought it released too suddenly. The reason I made this so aggressive and sudden is to remove the initial hit of the drums, leaving the tail of each drum hit.
For my second compressor, I used roughly the same threshold of -40 dB, for the same reason as before. I used a lower ratio of 10:1 to make it less aggressive than the first compressor. I also used an attack of 10 ms (the same as the release on the prior compressor) and a release of 100 ms. The idea was for the first compressor to reduce the initial hit of the drums, leaving the tail of the sound, and for the second to reduce the tail of the drum hits, leaving the initial hit. This allows me to control how aggressive and snappy I want each of my drum parts. Then, as can be seen in the second image, I adjusted the amount of signal sent to these compression bus tracks.
As well as the compression added to the kick by the compressors above, I decided I wanted my kick to be less aggressive, so I used a compressor with a very low threshold of around -45 dB, making the compressor activate on every hit of the kick. I used a low ratio of 2:1 so as not to completely reduce the kick's volume. I didn't adjust the knee, which is set to 0.7 by default; in reflection, I should perhaps have lowered it to make the compression smoother. I used a very quick attack of 1 ms and a release of over a second. This made the dynamics of the kick very consistent, not varying too much, allowing me to better control its aggression.
Finally, for the bass I again used a very low threshold of -40 dB so the compressor would activate consistently, as well as a knee of 1 to make it activate abruptly. I then used an attack of 10 ms and a release of 50 ms, with a ratio of 5:1, making the initial pluck of the bass come through the strongest and the tail quieter, adding to the snappier, tighter bass I was also aiming to achieve with the EQ (see the Equalisation section of this week).
Above you can see how the compressor has 'squashed' the audio file (audio comparison on the left).
As can be noticed when looking through the compression section, I didn't compress my guitar. This is because, when listening back to my guitar and viewing the waveform, I noticed that it was already rather compressed. This led me to leave the compression off, which also helps the guitar sound more natural and dynamic.
Reverberation (or reverb) is the continuation of sound after the original signal has stopped, caused by the sound reflecting off the surfaces of a given space. It is very useful for making a song more uniform and natural-sounding, as you can make everything sound as if it was played or recorded in the same space. In Logic there are a lot of reverb emulators, and many ways to alter the sound of a reverb. These are the main parameters I specifically used (a small sketch follows the list below):
Decay is the time it takes for the reflections of the sound to fully fade away.
The Dry fader controls the volume of the original signal and the Wet fader controls the volume of the reverberated signal, allowing you to blend them together and control how loud the reflections are (meaning you can make the reverb really prominent or barely noticeable).
Pre-delay determines the time it takes for the reverb to start after the initial audio has begun. This is very useful, as it can stop the original sound getting lost and washed out in the reverb, helping to give the sound more clarity.
Attack determines how gradually the reverb introduces itself once the pre-delay time has passed. This goes from 0%, meaning the reverb comes in immediately, to 100%, where it comes in smoothly and more gradually.
The Output EQ allows you to target which frequencies are reverberated. For example, if I was recording something with a lot of high frequencies that became really unpleasant when reverberated, I could EQ the reverb so it doesn't affect the higher frequencies, or affects them less.
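A very crude way to picture these controls in code: a burst of decaying noise stands in for the room's impulse response, pre-delay shifts the wet signal later, and the wet/dry faders blend the two. This is nowhere near Logic's emulations (attack isn't modelled at all), and every value is a placeholder:

    import numpy as np

    def simple_reverb(x, fs, decay_s=1.4, predelay_ms=8.0,
                      wet_db=-17.0, dry_db=0.0):
        t = np.arange(int(fs * decay_s)) / fs
        # Decaying noise as a fake impulse response; roughly -60 dB by decay_s.
        ir = np.random.randn(len(t)) * np.exp(-6.91 * t / decay_s)
        wet = np.convolve(x, ir)[: len(x)]
        pre = np.zeros(int(fs * predelay_ms / 1000))  # silence before the reverb starts
        wet = np.concatenate([pre, wet])[: len(x)]
        return x * 10 ** (dry_db / 20) + wet * 10 ** (wet_db / 20)

    out = simple_reverb(np.random.randn(44100), 44100)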
For the small reverb, I used an emulation of a room with a decay time of 1.4 seconds, which I reduced to make the reverb less noticeable. This is typically described as 'adding space' to a sound: it doesn't make the audio sound reverberated, but adds the saturation of the room so it sounds as if it's in a given space. I set an attack of 57.9 ms to make it build somewhat gradually, smoothing out the sound of the reverb, and a pre-delay of 8 ms to let the initial hit of each drum remain unaffected by the reverb, without the reverb coming in so suddenly that it sounds unnatural. I also used a wet level of -17 dB and muted the dry, because I would be blending this in anyway, having put the reverb on a bus. The other settings I left at their defaults and didn't use.
For the big reverb, I used the emulation of a concrete room, which had a natural decay time of 4.3 seconds. I reduced this down to 2.8 seconds. The reason I didn't select an emulation with a lower decay time is that I liked the sound of the reflections the room gave off. This ties into the material the room is made of: concrete walls are smooth and easy for sound to reflect off, making the reflections very clear.
The material of the room can contribute a lot to the sound of your reverb. Holes and uneven surfaces in the walls scatter the reflections and make them come out less intensely (this is called 'diffusion'), and the softer the wall material, or the material on the wall, the more of the reflections' volume is absorbed. This is why a flat, hard surface such as concrete gives a clearer reflection.
I used an attack of 179 ms to make the reverb build more gradually; with this reverb having very clear reflections, I thought having it come in too suddenly would be very harsh. I also used a pre-delay of 8 ms for the same reason as the last reverb: to keep the initial hit of the drums clear. Again, I only used the wet fader, as I was using this on a bus to blend it with the dry signal. The rest of the settings I left at default and didn't use.
When finishing the previously mentioned parts of mixing the drums, I noticed that my snare sounded quite dull and didn't stand out much in the mix. I decided to ask James, my session drummer, for help, as I thought he would have a good idea of what would make the snare punchier. I let him take over and show me some techniques. The first thing he did was apply a distortion called ChromaGlow, which he said he uses as a really good way of saturating drum tracks. He then applied a second distortion, to add more drive and power to the snare. Lastly, he used an exciter, which he said adds clarity at the selected frequency, in this case 1900 Hz. There is a lot of snap from the snare at this frequency, so making it clearer added a lot more punch to my snare and created a fuller sound. I found this very useful and decided to keep his changes in my mix.
As I mentioned previously in my blog (when structuring my composition in week 2), I wanted to add filler parts into my song to add more interest and power. One that I mentioned was a riser, which is what I created in the image on the left. The purpose of a riser is to build suspense and expectation for the next part of the song, creating more excitement in your track.
I created this by first putting my mix all the way to shape 1, for which I had used a white-noise wave. Next, I filtered out a lot of the high frequencies, as the sound was very sharp, but I added a lot of resonance, which helped it cut through and be more prominent in the mix. I then applied some chorus to this synth, making it sound almost alien-like, which I thought would suit the post-apocalyptic, sci-fi style of the advertisement. I played into this further by adding a subtle triangle-wave LFO, making the sound modulate up and down and sound slightly wobbly. I then added a filter envelope with a long attack so that the sound faded in, and did the same with the amp envelope (changed since the screenshot), creating the rising effect.
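The same riser recipe can be sketched outside of Retro Synth, minus the chorus: filtered white noise, a slow fade-in for the long amp attack, and a triangle LFO wobbling the level. The duration, cutoff and LFO settings here are made-up values:

    import numpy as np
    from scipy.signal import butter, lfilter

    fs, dur = 44100, 4.0
    n = int(fs * dur)
    t = np.arange(n) / fs
    noise = np.random.randn(n)                       # white-noise oscillator
    b, a = butter(2, 2000, btype="lowpass", fs=fs)   # tame the sharp highs
    x = lfilter(b, a, noise)
    amp_env = np.linspace(0.0, 1.0, n) ** 2          # long attack: fades in over 4 s
    tri = 2 * np.abs(2 * ((5 * t) % 1) - 1) - 1      # 5 Hz triangle LFO
    riser = x * amp_env * (1 + 0.1 * tri)            # slight wobble on the level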
As I also mentioned in my previous week, I wanted to create a synth to double my bass and add more power to it. I did this using a different synthesiser called 'Alchemy', as it has more options which I could use to create a good bass synth. It works very similarly to Retro Synth.
The section labelled 'Sources' is your oscillator, allowing you to use any of the waves shown back in week 3 (sine, triangle, square and saw). In this oscillator, under each wave, you have a volume knob, giving you more control when blending waves together compared with Retro Synth, where increasing the volume of one wave decreases the other. Another benefit is the option to use two more waves than Retro Synth offers. A further difference is that rather than having ADSR (Attack, Decay, Sustain and Release), it also has a Hold (AHDSR), which, if you hold down the note, keeps the sound at its peak volume for a set time before it decays.
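The difference is easy to see as an envelope generator; the extra 'hold' stage simply pins the level at the peak before the decay begins (all the stage times below are invented for illustration):

    import numpy as np

    def ahdsr(fs, attack, hold, decay, sustain, release, note_len):
        a = np.linspace(0, 1, int(fs * attack), endpoint=False)       # rise to peak
        h = np.ones(int(fs * hold))                                   # held at peak
        d = np.linspace(1, sustain, int(fs * decay), endpoint=False)  # fall to sustain
        s_len = max(int(fs * note_len) - len(a) - len(h) - len(d), 0)
        s = np.full(s_len, sustain)                                   # while note is held
        r = np.linspace(sustain, 0, int(fs * release))                # fade after release
        return np.concatenate([a, h, d, s, r])

    env = ahdsr(44100, attack=0.005, hold=0.02, decay=0.08,
                sustain=0.3, release=0.1, note_len=0.5)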
I still decided to use only two waves for this synth: a triangle wave and a sine wave, with more volume on the sine wave to make the synth smoother, using the triangle to give it slightly more definition and harshness. For the filter, I cut off the high end to remove any buzzy high frequencies added by the triangle wave, adding a bit of resonance so the sine wave didn't make the synth too smooth. To help this, I also used a fast attack and a steep decay, creating a 'pluck' sound to emphasise the start of each note and almost emulate the pluck of a string on a bass guitar. I also added a short hold so that, even if I held a note too long, it would still create this pluck. As can be seen in the image, the release looks really long; however, this is just because Alchemy's graph (displaying the envelope, or AHDSR) zooms in automatically. In reality the release is relatively short, but not so short that the sound cuts off suddenly and loses the emulation I was aiming for.
Another good addition is a 'sub boom'. This is used to emphasise the beginning of a new or important section, and sits mainly in the sub-frequency range (0-20 Hz). The human hearing range starts at 20 Hz, meaning this is heard less but felt more (like how you can feel the bass in the ground at a gig).
I created this synth using a sine wave and a very slight amount of a saw wave, which in retrospect didn't really add anything to the sound, as it was so masked by the sine wave. I then cut off a lot of the high frequencies and added a lot of drive to make the boom more intense. I used a very short attack to make the synth more sudden and, again, more intense. I then set a long release to make it fade away, as I found a short release made the power of the boom too short and sudden. I didn't change the rest of the envelope sections, as I wasn't playing the note long enough for them to be heard.
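As a sketch of the same idea: a sine wave whose pitch falls toward the sub range, with drive, an almost instant attack and a long fade-out. The start frequency, drive amount and envelope times are guesses, not the actual patch:

    import numpy as np

    fs, dur = 44100, 1.5
    t = np.arange(int(fs * dur)) / fs
    freq = 50 * np.exp(-3 * t)                # pitch drops from 50 Hz toward the subs
    phase = 2 * np.pi * np.cumsum(freq) / fs
    boom = np.tanh(3 * np.sin(phase))         # drive makes the boom more intense
    env = np.minimum(t / 0.005, 1) * np.exp(-t / 0.6)  # 5 ms attack, long release
    sub_boom = boom * env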
I wanted the change part to stand out more in the song: with no guitar in this part, I felt it lost a lot of its power. I decided to experiment with some ideas, the first being a tremolo on the drums. This tremolo panned the sound from left to right very suddenly, so I added smoothness to it, making it transition from one side to the other more gently. I set the tremolo to a rate of 1/8d (dotted eighth notes). Straight eighth notes would pan to a different side eight times per bar, but the 'd' stands for dotted, making each note 1.5x its length (so the panning completes roughly five and a third cycles per bar instead of eight). This made it sway in and out of time and created interest in this part. I also combined this with a high cut on its EQ, to further add to this interest.
(Note - This is best heard through headphones)
The final thing I did to my track was add the automation section I had planned back in week 2, this being the part where the drums fade back in during the build-up. I wanted the drums to increase quickly at the start and more slowly when approaching the hook again, so I used two points: one to make the volume increase suddenly, and another to raise it more slowly back to the original volume, as can be seen in the image above. The reason I did this part last is that when you automate volume (or anything else), it stays at the automated value. For example, if I later decided I wanted the drums louder, I couldn't simply use the volume fader on the track, because as soon as the song plays again it reverts to the volume set by the automation, in this case -0.7 dB. Doing this last meant I knew I was happy with my levels and wouldn't have to go back into the automation to adjust the volume, making it easier to manage my time on this project.
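The two-point shape described above amounts to linear ramps between breakpoints; here is a sketch with invented times and levels (the -0.7 dB resting level is the only figure taken from the text):

    import numpy as np

    # Breakpoints: a quick jump up, then a slower climb back to -0.7 dB.
    points_t = [0.0, 1.0, 5.0]        # seconds (hypothetical)
    points_db = [-20.0, -6.0, -0.7]   # fader level at each point

    fs = 44100
    t = np.arange(int(fs * points_t[-1])) / fs
    curve_db = np.interp(t, points_t, points_db)  # linear ramps between breakpoints
    gain = 10 ** (curve_db / 20.0)                # multiply the drum audio by this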
From this week, again, I have no evidence of my communication with my collaborators. However, I did ask James for help with mixing the snare. This proved really useful, as he had a lot of insight into what could and would work to help me. It has highlighted something I should do more often: asking for help from people with knowledge or expertise in a certain topic, as they have a lot of useful insight into what works for them and what could work for me. I also set up another studio session with Marcy after the bass recording session, as I found last week that she was able to give me a quick response, and it lets her know in good time so she can schedule it in.
(note - I am very quiet in this video so a high volume and headphones may be required)
Music Guy Mixing (2024) The 6 EQ filters - when to use each one, Music Guy Mixing. Available at: https://www.musicguymixing.com/eq-filters/ (Accessed: 02 May 2025).
DOCTOR MIX (2024) Parallel Compression Explained. Available at: https://doctormix.com/blog/parallel-compression-explaned/ (Accessed: 03 May 2025).
Mastering.com (2016) How To Use EQ Boosts To Find The Nasty Stuff | musicianonamission.com - Mix School #4, YouTube. Available at: https://www.youtube.com/watch?v=PJmOQiXmspc (Accessed: 05 May 2025).
Wilkey, M. (no date) EQ Techniques, BTEC Music Technology. Available at: https://sites.google.com/view/btecmusictechnology/unit-13-mixing-and-mastering/mixing-techniques/eq-techniques (Accessed: 05 May 2025).
Wilkey, M. (no date b) BTEC Music Technology - Compression Techniques. Available at: https://sites.google.com/view/btecmusictechnology/unit-13-mixing-and-mastering/mixing-techniques/compression-techniques?authuser=0 (Accessed: 05 May 2025).
In this week, I aim to edit my first composition to the 'Borderlands 4' advert. I decided that I would do this by editing the master, rather than using the project. While using the project to create the edit would be easier, I thought it would be more realistic to this part of the industry to edit the master (the difference being that instead of editing each individual part of the song, you have to edit it all collectively). I also aim to record parts for my Composition 2, including the dialogue heard over the advert.
When researching whether it is standard to receive a full project with stems when editing music for advertisement, I found a mix of answers. For example, https://www.linkedin.com/posts/dafingaz_the-importance-of-stems-in-sync-licensing-activity-7232408708461334528-SBvy#:~:text=Typically%2C%20no%20more%20than%2012,appealing%20to%20music%20supervisors%20/%20editors?&text=Great%20advice%20%F0%9F%99%8F%F0%9F%8F%BD%20I,%F0%9F%91%8C%F0%9F%8F%BC%20I%20like%20that.&text=At%20this%20point%20STEM%20prep,STEMs%20is%20SO%20MUCH%20easier. states that no more than 12 stems is enough for editors, whereas when you look at the libraries on https://www.feltmusic.com/, there are no visible stems. I wondered if this was only because I hadn't purchased anything, and therefore wasn't able to view any stems. I decided to ask my teacher Ian about this, as he has had music uploaded to Felt before. He said he hadn't sent stems when submitting work. This has led me to the personal conclusion that it isn't standard to submit a full project with stems; rather, it is optional and dependent on the agency you work with. I decided to continue editing with the master.
The first thing I did this week was record the acoustic guitar for my second composition. I put the microphone in the control room so I could record the guitar quickly and efficiently, using a similar method to how I recorded the bass: putting the XLR cable straight into the Element 88's input. However, I didn't need a DI box for this, as the signal didn't need balancing. I used the AKG C3000 condenser microphone, as it covers the human hearing range of 20-20,000 Hz, allowing a lot of detail to be picked up from the guitar. I also made sure to place the microphone over the sound-hole of the acoustic guitar, as this is where most of the guitar's volume comes from.
Also, as I wasn't able to keep the dialogue from the original advertisement, I decided it would be important to rerecord it; otherwise, the advert would lose the majority of its important content. I watched the video through and copied the dialogue into a notes app. Then I had Marcy speak the dialogue, again using the AKG C3000, as it was already set up and readily available, and because its sensitivity and frequency range make it good for recording vocals, allowing the dialogue to be recorded in high detail.
I prioritised this and the guitar over the piano in the song because, reflecting on my previous week, recording the bass took time away from doing my edits, so I prioritised the guitar (to reflect my speciality) and the dialogue (important content of the advert). This was also because I knew I could use MIDI piano and it would still sound good.
MIDI (Musical Instrument Digital Interface) - a way for devices that make or control sound to communicate information with each other (https://cecm.indiana.edu/361/midi.html#:~:text=MIDI%20is%20an%20acronym%20that,each%20other%2C%20using%20MIDI%20messages.)
The first thing I did when syncing the music to the video was cut the master up into its separate sections (the ones listed in the structure in week 2), using the scissors tool in Logic to cut up the audio file, allowing me to easily drag and drop sections where I wanted them.
Next, I dragged each section into the correct place, making sure it synced with the points of the ad where I wanted to emphasise the excitement of purchasing the game. I also added PlayStation's sonic logo (see week 1 research), as this is an important part of their brand in this advert. Sync points are parts of the video and the music that occur simultaneously, emphasising a critical point of the video or adding interest to the video as a whole. For example, if something blows up in a film, the music may suddenly cut out or jump up in energy to emphasise this; however, it can be less noticeable than this.
The opening scene of the movie 'Baby Driver' is a really good example of sync points, as nearly every movement the characters make is synced with the song playing. You can see this with the doors closing together on the beat and the trunk of the car opening and closing on the beat (0:28).
This is a good example of sync points in my own syncing of the composition to the advert. In this ad, there is a section where the main character falls into water, so I wanted a water-splashing sound effect. I found the sound I wanted and dragged it into Logic's Quick Sampler, where I could choose which section of the audio I wanted, how long it should be, and how quickly it fades in and out (all seen in the graphed section in the image on the left: the selected section sits between the markers at the bottom of the graph, its length being the space between them, and the fades shown by the white slopes). I then placed this at the point where the character falls into the water.
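Those three Quick Sampler controls (start point, length, fades) reduce to a simple slice-and-fade operation; everything in this sketch, from the fade times to the stand-in audio, is illustrative:

    import numpy as np

    def trim_sample(audio, fs, start_s, length_s, fade_in_s=0.01, fade_out_s=0.05):
        clip = audio[int(fs * start_s): int(fs * (start_s + length_s))].copy()
        fi, fo = int(fs * fade_in_s), int(fs * fade_out_s)
        clip[:fi] *= np.linspace(0, 1, fi)    # fade in (left white slope)
        clip[-fo:] *= np.linspace(1, 0, fo)   # fade out (right white slope)
        return clip

    splash = np.random.randn(44100 * 3)  # stand-in for the water sound effect
    hit = trim_sample(splash, 44100, start_s=0.5, length_s=1.0)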
This is Remix FX. https://support.apple.com/en-gb/guide/logicpro/lgcpe9906785/mac explains that Remix FX includes a variety of buttons, sliders and XY pads which can be used in real time to control effects typically found in electronic and dance music. Some of these effects are explained in the section below. I decided to experiment with it to see if it would help make the edits more seamless.
In the edit, I used a lot of automation (making the DAW automatically apply changes to your track) to smooth the joins between edits and cuts. On both tracks, you can see I used volume automation for changes in sections; in particular, I used it so the volume of the track fades out gradually, making it sound smoother and more cohesive.
I also automated the Remix FX for the same purpose, to make the edit more cohesive and smooth, but also to add more interest to the edit and to the sync points. On the bottom 'EMPTY' track (this track only contains the end of a section, so I could apply effects to it separately if I wanted to), we can see 'Filter Cutoff', representing the X axis of the filter section of Remix FX, and 'Filter Resonance', representing the Y axis. Instead of clicking and dragging each point on the automation line (as with volume), I used a technique called 'latching', in which I play the audio and apply effects and automation in real time. This meant I could have Remix FX open and use its buttons, sliders and XY pads live rather than editing the automation line: instead of setting the X and Y of the filter pad by dragging points, I used the XY pad itself to automate the filter in real time, filtering the higher frequencies out over time. I used this consistently when automating the Remix FX, and it helped me experiment with ideas and implement them with ease.
Remix FX has a 'Tape Stop' button which replicates the sound of a song stopping on a tape machine. Again, I used latching to automate this onto my bottom 'EMPTY' track, pressing the button on Remix FX to activate the effect (seen as the square button). When experimenting with this, I found it helped create interesting and seamless transitions, particularly between parts I had edited together: because the song was naturally recorded and played, it is near impossible for it to be perfectly on tempo, so some edited-together parts sounded a bit 'jumpy'. Below is a good example of what this effect sounds like.
I also applied this same effect on my top 'EMPTY' track, so that both the bottom and top 'EMPTY' tracks had the effect applied at a similar time.
I also latched a 'Repeater' effect, seen on the right side of the Remix FX display. This activates as soon as you press down on the pad and repeats a specific section until you release it, with the length of the repeated section determined by the rate you select (for example, 1/4 would repeat the sound four times in a bar, and the captured section would be longer than at, say, 1/8). The three images above show the activation point, the mix (how much of the original sound you hear compared to the repeated sound, going from 0 to 1) and the rate. I used this for the section in the Borderlands 4 ad where the screen flickers, as this is a good sync point to add, inspired by the original advertisement (seen below).
Again, I used the same XY filter pad and latched it, allowing me to filter sweep (progressively apply the filter to remove more and more frequencies) the highs, specifically at sync points like: falling into the water (progressively removing the highs to make it sound more underwater), emerging out of the water (gradually adding the highs back in), with the tape stop (removing highs to add more emphasis to the tape stop) and when the 'Borderlands 4' title fades into view (the highs gradually coming back in along with the volume, adding more emphasis and drama as it slowly appears).
As I forgot to last week, the first thing I did at the start of this week, before recording the bass, was email Harry with the finished mix for him to master.
I received the finished master the next day. This was excellent, as I was worried I might not receive it until a later date due to not giving him very good notice. This truly reflected Harry's reliability.
I responded with positive comments on the master, as well as specific details I liked about it, to show I had really taken in and appreciated the changes he had made. I ended the email by saying I would definitely message him again when I had another master for him to do, hopefully showing my appreciation of his work.
(note - I am very quiet in this video so loud volume and headphones will most likely be required)
Caution - Lower volume is suggested
Manderson, M. (2024) The importance of stems in sync licensing, LinkedIn. Available at: https://www.linkedin.com/posts/dafingaz_the-importance-of-stems-in-sync-licensing-activity-7232408708461334528-SBvy#:~:text=Typically%2C%20no%20more%20than%2012,appealing%20to%20music%20supervisors%20/%20editors?&text=Great%20advice%20%F0%9F%99%8F%F0%9F%8F%BD%20I,%F0%9F%91%8C%F0%9F%8F%BC%20I%20like%20that.&text=At%20this%20point%20STEM%20prep,STEMs%20is%20SO%20MUCH%20easier (Accessed: 06 May 2025).
Felt music (no date) FELT MUSIC. Available at: https://www.feltmusic.com/ (Accessed: 06 May 2025).
Gibson, J. (no date) The MIDI Standard: Introduction to MIDI and Computer Music, Center for Electronic and Computer Music, Jacobs School of Music, Indiana University. Available at: https://cecm.indiana.edu/361/midi.html (Accessed: 08 May 2025).
Remix FX in Logic Pro for Mac (no date) Apple Support. Available at: https://support.apple.com/en-gb/guide/logicpro/lgcpe9906785/mac (Accessed: 08 May 2025).
Rotten Tomatoes Coming Soon (2017) Baby Driver Opening Scene (2017) | Movieclips Coming Soon, YouTube. Available at: https://www.youtube.com/watch?v=7ARFyrM6gVs (Accessed: 09 May 2025).
Gaumarcos85 (2021) Tape Stop Sound Effect, YouTube. Available at: https://www.youtube.com/watch?v=fs0o04Lf8tU (Accessed: 09 May 2025).
IGN (2025) Borderlands 4 - Official Release Date Gameplay Trailer | State of Play 2025, YouTube. Available at: https://www.youtube.com/watch?v=oJS4Rjqs7As (Accessed: 25 March 2025).
This week, I aim to mix my second composition, as well as send it to the mastering engineer so I have it back by the end of the week to sync to my advertisement. I also aim to make my mixing process quicker: reflecting on week 4 made me realise I should use fewer mixing techniques while still aiming for a good result. I am also now aware that my editing to advertisements shouldn't take very long, so, knowing this, I can spend more time mixing.
I decided that for this week's research, I would look further into approaching sync agencies, in the hope of getting some information on what I need to do, how I want to come across, whether I should send my songs in my opening email, etc.
https://www.linkedin.com/posts/dafingaz_lets-stop-overcomplicating-sync-licensing-activity-7242555543339290625-9MYf In this link, Marcus Manderson provides a simplified explanation of some tips for sync licensing. He states to focus on quality over quantity, as this will help with recognition. He also emphasises the importance of 'starting small', mentioning it three times. Unfortunately, he doesn't elaborate, but I assume it relates to smaller projects, though I am still unsure how it fits in with syncing. Another thing he said which resonated with me is to 'target those who need what I offer'. I can use this by trying to upload to specific sync libraries, researching which ones are closest to my genres and styles. This has led me to think I should try to have my 2nd composition uploaded to Felt's library (https://www.feltmusic.com/work), as this is where I gathered some of my reference tracks. However, I am unsure whether they would take my 1st composition, as their rockier songs (https://www.feltpm.com/tracks?q=Rock) are more processed, whereas my composition is rather raw, so it may be worth looking into other sync agencies with a wider variety.
I also found this video which provides useful tips on how to pitch songs to sync libraries.
His first tip is to 'understand who you are pitching to', relating to each sync agency's unique needs, so researching them and looking at what they want is important. I think the best way for me to do this is to look at the info sections on websites and go through libraries to see what they like, relating back to Marcus' point on targeting those who need what I offer.
Tip 2 refers to presentation, meaning that if I am reaching out through email, it needs to be neat, concise and of a professional standard. In the video he also mentions using a catchy subject line to help get noticed, similar to using descriptors for songs (mentioned by Matt in week 2 research), so perhaps looking at an agency's needs and then naming the song to suit them is a good method of grabbing attention. He also says to have a five-sentence email in the body, with a brief description of yourself, contact info and, most importantly, a question asking if they are taking submissions.
In his 3rd tip he mentions the use of Disco to help manage your music; however, this is a subscription-based platform, so I will not be using it, though it is something I may look into in the future if I decide to continue in this section of the industry.
He also mentions it is important to show value to whoever I am pitching to. This could mean looking at what they are working on at the moment and explaining how your work could benefit their current project. However, he does specify that this does not mean just sending them unrequested tracks, as this can come off as rude and unprofessional.
Next, he speaks about building connections with the agents contacted; this means following up, possibly conversing with them on topics outside of music, to help create a good relationship.
Finally, he mentions a point which he says is really important: provide options. This could mean instrumental versions of the songs, stems, or clean versions, as it gives them a lot to work with. Due to time constraints I won't be able to do this, but it is helpful to know if I go forward with this as work.
The first thing I did prior to mixing my song was put MIDI piano into my track. I did this using the MIDI keyboard provided in the Mac suites to play the piano part along with the guitar. While I could have made the piano part snap exactly onto the beat (this being called quantising), I decided to play it along with the guitars: since the guitars could not be recorded perfectly to the beat (this being near impossible), playing along made the guitar and piano parts more in sync and cohesive. I then created a separate synth track and copied the MIDI information onto it, seen in the green regions at the top of the screenshot above. This would be my synth double.
For my synth double, I used Retro Synth. I used a triangle wave to make it sound smooth and only slightly harsh, helped by the filter section, in which I cut out a lot of the high and middle frequencies, removing much of the harshness. I used a small amount of resonance to give it a little more clarity. I also detuned one of the waves to make the synth sound a bit more like an analogue synth (not a digital plug-in but a separate machine) and give the sound a more warped texture. I applied chorus to help create this warped sound, and an LFO to make the volume of the synth modulate slightly up and down. For the amp envelope, I used a slightly longer attack and a longer release to make the sound softer and less sudden, as the piano would provide the attack; and for the filter envelope, I made the highs gradually filter out to add more interest to the sound.
Next, I created the bass synth, using the same wave as the piano double but without any detuning on the triangle wave, and adding a slight bit of saw wave for more harshness and clarity. I then cut off a lot of the highs to keep the bass low, and again added a small amount of resonance for clarity. I also used a sine wave, as it gave a smoother, boomier sound to the bass. For my filter envelope, I used a gradual but reasonably quick attack, to give the bass sound some more bounce. I then used a harsh attack on the amp envelope and had it decay very sharply, making the sound fade off suddenly after the initial hit and giving a slightly plucky sound.
For the ambiance of the track, I wanted a very background, airy sound. I used a triangle wave and a white-noise wave, cutting all frequencies but the lows and lower mids to remove any harshness from the white noise. I then added a sine wave to the overall sound to help round it out and make it smoother. I also used chorus to warp the sound and add some interesting texture to the ambiance. Finally, I used a long attack and a long release, which really helped to make the sound airy and background, as there was no sudden rise or fall in volume.
When EQing my piano, I removed any unpleasant muddiness or rumble by slightly cutting the low-end frequencies at around 50 Hz. I also removed the very highest frequencies at around 20 kHz to make the sound slightly less harsh. When listening to the MIDI piano, I could also hear some clicking from the piano's keys being pressed, so I used the sweep method to find the frequency where this mainly sat and used a bell curve at around 150 Hz to reduce its volume.
Next I EQ'd the piano double, using a low cut to remove frequencies below around 100 Hz and a bell curve to slightly reduce more low end at around 125 Hz, removing some muddiness. I then used the sweep technique and boosted the middle frequencies at around 800 Hz, as this was where the cleanest and most pleasant frequencies of the sound were, and I cut the frequencies from around 3500 Hz upwards, as this range was mainly unused, with only a slight amount of the sound coming from it. In doing this I made the sound warmer and less bright.
For the bass synth, again, I removed the high end, cutting from around 1000 Hz. I also used the sweep technique to find any unpleasant or pleasant frequencies, ending up reducing the volume at around 120 Hz and boosting it at around 200 Hz. I also cut a small amount of low end at 20 Hz, to remove the sub frequencies which were adding unwanted muddiness and rumble to the track.
For the ambiance, I cut frequencies at both the high and low ends, leaving a range of 200-1000 Hz; I wanted to cover this range, as it felt empty when listening to the song, and filling it rounded the song out more. Again, I used the sweep technique to find any undesirable or good frequencies, and added some resonance at the cut-off point of the lows, at around 200 Hz.
Finally, I used EQ on all of my guitars by placing it on the guitars' summing track, allowing them all to be affected equally and avoiding making them sound dissimilar by EQing each one individually. I cut off the low-end frequencies at around 300 Hz, as the guitars had a lot of rumble, and also to make them sound brighter. I cut off the harsh high frequencies at around 13.5 kHz, and again used the sweep technique to find a good frequency to boost for more clarity, this boost being at around 2000 Hz with a bell-curve filter.
As mentioned in my week 4 evaluation, my mix took me a very long time, and as a result I couldn't edit my composition to the advertisement that week. So, to simplify the mixing process and still end up with a good result, I decided to use buses to control the reverb and compression on every track, saving the time of doing it all individually. As we can see in the image on the right, I highlighted all the tracks I wanted to send through these buses, clicked the send section and hovered over the bus section, which opened up a big list of buses. These are not all on your track yet; they are only created when you click on them. Doing this, I created three buses which all of my tracks would go through: a small reverb, a large reverb and a compression bus.
For my small reverb, I made very minimal changes to the default preset loaded when the reverb is first opened on a track. I used a room reverb, as rooms naturally have a shorter reflection tail (a shorter time for the sound to stop). I used a decay of 0.61 seconds, left the attack at 0 and kept the pre-delay at 8 ms, so the start of each sound stays clear. This is very light; however, the main purpose of this reverb is to saturate the tracks and 'give them a space', making them sound as if they were played in the same room, to make the song and the mix more cohesive. Again, as in week 4, I used a wet of 100% and a dry of 0%, as I would be blending this in using the volume faders on my buses and the gain on the tracks sending audio through them.
Seen in the drop-down, I used the EQ section of this reverb, cutting the highs and the lows so that the reverb doesn't affect them. The reason is that when you reverberate something low it can sound really rumbly and muddy, and reverberated highs can become really sharp and too bright, so leaving these unaffected avoids this and also allows more clarity in the lows and highs of the song.
For my large reverb, I used a concert hall emulation, as this is naturally a much stronger reverb. I set the decay to 0.90 seconds to make the sound last longer and the reverb to be larger than the small reverb. I used a pre-delay of 4 ms so the reverb comes in quicker, while keeping the attack at 0%. I set the dry of the track to 0% for the same reason mentioned above; however, I set the wet to 50% to make this reverb quieter, acting as a safety measure so I don't accidentally add too much reverb and wash out the original audio.
I also applied a low and high cut in the EQ section of this reverb, for the same reason as described above.
This is a sample of the composition comparing the sound with no effects against the sound with the large and small reverbs added.
For my compression bus, I used the Classic VCA compressor, as it is very good at 'gluing' multiple tracks together, making the sound tighter and more cohesive. I used a threshold of -40 dB, to make sure everything is picked up by the compressor, and a ratio of 8:1, which reduces a lot of the peaks and makes the song flatter. I thought this would be ideal for the song, as it wouldn't have any sudden jumps in volume that would take away from its peaceful, sad nature. Finally, I adjusted the output gain to better match the input gain, making sure no decibels (volume) were lost.
This is how this sample of my composition sounded with the addition of compression. I have also added a sample of how it sounded with both the reverbs and the compression applied.
I did use another compressor, however, on the intro guitar part (audio 1). This was because (as mentioned in my week 5 evaluation) I had the gain levels set very high to pick up my quiet guitar playing. This meant there were a lot of loud slides when I switched between chords, which sounded unpleasant, so I used a compressor to try to reduce the sudden peaks they caused. I used the Opto compressor, as it has a knee dial, allowing me to aggressively reduce the volume of these peaks. I set my threshold to -7 dB to capture just the slides and not the rest of the guitar. I then used a very strong ratio of 30:1 to remove as much of the peaks as I could. I also used a knee of 1 and an attack of 0 ms so the compressor would activate as soon as these peaks came through, and I set a release of 20 ms so the compressor deactivates more slowly and doesn't cause distortion from sudden compression. As can be seen on the graph by the white arches along the top, the compressor didn't reduce the peaks very much at all, so I decided to use another method to help with this.
The other method I turned to was to automate the volume so it dips down when the peaks come through. I made sure to do this last, to ensure I was happy with the volume I had set. This ended up working really well and reduced the slides' volume so they blended in with the guitar track better.
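As a way of picturing what volume automation does under the hood, here is a minimal Python sketch (my own illustration, not Logic's implementation) of a gain envelope interpolated between breakpoints and multiplied into the audio; the breakpoint times and levels are made up to mimic dipping a slide.

```python
# Volume automation as a breakpoint gain envelope.
import numpy as np

sr = 44100
audio = np.ones(sr)   # placeholder for one second of the guitar track

# (time in s, gain) breakpoints: unity, a quick dip around the slide, back up
points_t = [0.00, 0.45, 0.50, 0.55, 1.00]
points_g = [1.00, 1.00, 0.40, 1.00, 1.00]
env = np.interp(np.arange(len(audio)) / sr, points_t, points_g)
automated = audio * env
```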
For the strummed chords, I wanted to add more effect to emphasise the start of a new section of the composition. I decided to use a mono tremolo for this, as opposed to a stereo tremolo. A good way to illustrate the difference between mono and stereo is to imagine you are wearing a pair of headphones: if the track is stereo, you can hear it from different directions (perhaps only in the left or right side of your headphones), whereas if the track is mono, it comes through both sides equally. The reason for using a mono tremolo is that I didn't want the tremolo to move from one side to the other (like the tremolo used in my 1st composition, see week 4 practical) but simply to rise and fall in volume, as the track was already panned to the right.
For the tremolo, I used a rate of 1/8t, meaning 8th-note triplets. In practice this means each beat is divided into three equal pulses rather than two, so in a 4/4 bar the tremolo rises and falls twelve times instead of eight. I also used a depth of 58%, so the volume roughly halves on each pulse of the tremolo, and I smoothed out the transition between the tremolo deactivating and activating to make its volume reductions softer and less aggressive, fitting the timbre (quality of the sound) of the composition as a whole.
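The rate and depth maths can be sketched directly. Below is a minimal Python tremolo: a smoothed (sine-shaped) LFO at an eighth-note-triplet rate modulating the amplitude of the whole track. The depth matches the 58% described above, but the 90 BPM tempo and the toy tone are my own assumptions.

```python
# Mono tremolo: a sine LFO at an 1/8-triplet rate modulates amplitude.
import numpy as np

sr, bpm = 44100, 90.0
rate_hz = (bpm / 60.0) * 3.0            # 3 triplet pulses per beat
depth = 0.58                            # volume roughly halves per pulse

t = np.arange(sr * 2) / sr              # two seconds of audio
signal = np.sin(2 * np.pi * 220 * t)    # toy sustained tone

# LFO swings between 1.0 and 1.0 - depth; the sine shape keeps the
# rises and falls in volume soft rather than abrupt
lfo = 1.0 - depth * 0.5 * (1.0 - np.cos(2 * np.pi * rate_hz * t))
tremolo = signal * lfo
```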
Last week, I didn't receive my master in time to finish everything I set out to do, so this week I will continue with the same aims in order to complete that work. This acts as, and will be displayed as, a continuation of week 6.
The first thing I did when starting to sync my second composition to the Snow Leopard advertisement was to drag all of my audio files and the video file into a new Logic project. Once I had done this, I put all of the dialogue recorded for the advert into a summing track so that I could edit it collectively, making sure the dialogue doesn't sound dissimilar from part to part and helping with contingency.
The next thing I did was EQ both the dialogue and the master. I did this to make the dialogue's vocals sound better, but also to create space for the dialogue to sit in. I achieved this by cutting the higher frequencies at around 10,500 Hz on the master and boosting the high frequencies at around 16,500 Hz on the dialogue sum. This means the top of the frequency spectrum is dominated by the dialogue, making the vocals more easily heard.
For EQing the dialogue, I cut the lower frequencies from around 90 Hz and below to add clarity to the vocals and to remove/reduce plosives in the dialogue. It is worth mentioning that plosives (e.g. Ps and Bs), fricatives (e.g. Fs and Vs) and sibilance (e.g. Ss) won't always reside in the same frequency range, as this differs from person to person. I also used the sweep technique to find any unpleasant frequencies, and made a reduction at around 1,000 Hz.
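To make this frequency 'carving' concrete, here is a minimal Python sketch using simple Butterworth filters in place of Logic's Channel EQ. The corner frequencies match the text above; the filter orders, the amount of high boost, and the noise stand-ins for the real stems are my own assumptions.

```python
# Carving space: high cut on the music, low cut and "air" boost on dialogue.
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
rng = np.random.default_rng(1)
music = rng.standard_normal(sr)       # stand-ins for the real stems
dialogue = rng.standard_normal(sr)

hicut = butter(4, 10500, btype="lowpass", fs=sr, output="sos")
music = sosfilt(hicut, music)         # master: highs cut around 10.5 kHz

locut = butter(2, 90, btype="highpass", fs=sr, output="sos")
dialogue = sosfilt(locut, dialogue)   # dialogue: rumble/plosives reduced

air = butter(2, 16500, btype="highpass", fs=sr, output="sos")
dialogue = dialogue + 0.5 * sosfilt(air, dialogue)  # ~+3.5 dB of "air"
```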
I decided to use a compressor on the dialogue, since there wasn't a pop shield on the microphone and there were a lot of loud plosives and fricatives. I was unsure whether this would work, as when recording the acoustic guitar the prior week, the slides weren't easily or effectively removed by the compressor. Because of this, I decided I would also automate these plosives and fricatives, just as I automated the guitar slides, as well as compressing them.
For this compressor, I used a threshold of -18 dB, as the dialogue was quiet and this was the point that captured these plosives and fricatives. I then used a ratio of 12:1, to reduce their volume generously without removing them entirely, and a very fast attack and release to catch these peaks quickly. Looking back, the release time is far too short to reduce all of the peak signals. I then used the output gain to match the input gain, making sure no volume was lost from this compression.
I also used a compressor on the master track. The purpose of this wasn't to squash any peaks or create a more consistent volume, as this is done in both the mix and the master; the reason I used it was to 'side-chain' the music to the dialogue. Side-chaining is when a separate input controls the activation of the compressor; this input can be selected in the top right-hand corner of the compressor. It is particularly useful when trying to make dialogue on top of music stand out, as (like I did) you can make the music duck out so that the vocals or dialogue can be more present. It is also commonly used with bass and kick drums: since they cover similar frequency ranges, you can have the bass duck out briefly so the kick can be heard better.
For this, I set a threshold of -37 dB so that a lot of the master track would be affected when the dialogue starts. I set a ratio of 1.2:1 to apply the compression lightly and not make the music duck out drastically, keeping the effect natural and unnoticeable. I set a fast attack of 0 ms to make sure the master drops as soon as the dialogue starts, and a release of around 40 ms, so that between each spoken sentence the music can come back in, but not so suddenly as to make the effect noticeable. I left the output gain at 0, as I wanted the volume to drop a little to allow the dialogue to come through more clearly.
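Below is a minimal Python sketch of the side-chain ducking idea: the dialogue's level envelope drives gain reduction on the music bus. The threshold (-37 dB), ratio (1.2:1) and ~40 ms release mirror the settings above, but the envelope follower is a simplified stand-in for the compressor's real detector, and the constant music and speech burst are toy signals.

```python
# Side-chain ducking: dialogue level controls the music's gain.
import numpy as np

sr = 44100

def envelope_db(x, release_ms=40.0):
    """Peak envelope with instant attack and a ~40 ms release, in dB."""
    r = np.exp(-1.0 / (sr * release_ms / 1000))
    env = np.zeros(len(x))
    for i in range(1, len(x)):
        level = abs(x[i])
        env[i] = level if level > env[i - 1] else r * env[i - 1] + (1 - r) * level
    return 20 * np.log10(env + 1e-9)

music = 0.3 * np.ones(sr)
dialogue = np.zeros(sr)
dialogue[sr // 4 : sr // 2] = 0.5              # a burst of speech

key_db = envelope_db(dialogue)                 # side-chain input
over = np.maximum(key_db - (-37.0), 0.0)       # threshold -37 dB
duck_db = -over * (1.0 - 1.0 / 1.2)            # gentle 1.2:1 ratio
ducked_music = music * 10 ** (duck_db / 20)
```

The gentle ratio is what keeps the dip to only a few decibels even when the key signal is well over the threshold, which matches how subtle the effect sounds in the examples.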
This effect is hard to notice, but I have provided examples on the left with it applied and without it being used.
When asking my teacher Jordan for input on my work, he suggested I use what is called a 'DeEsser'. I wasn't sure what this was, so I went to https://support.apple.com/en-gb/guide/logicpro/lgcef1bec850/mac to look up how to use it. I found the way this page explained it hard to follow, so I decided to watch a YouTube video on what the tool does and how to use it.
I found this video and decided to watch it as it was the most recently released video about Logic's DeEsser. In the video, he states that the DeEsser provides 'Frequency sensitive dynamic control', which he describes simply as a way to effectively reduce the volume of 'a frequency specific area'. He says that DeEssers are good on vocal tracks for reducing sibilance, which is why Jordan suggested I use one. He also mentions that this sibilance tends to appear when adding air to a vocal track; adding air is another way of saying boosting very high frequencies, as this tends to make the vocals sound airy. I found this relevant to my situation, as I had done this to my dialogue track, as mentioned above. He notes that this boost of high frequencies tends to add more volume to the sibilance of the vocals, so a DeEsser is a good tool here, as it controls the volume of the sibilance without affecting the EQ. He then proceeds to talk through the parameters of this tool:
Relative and Absolute Mode
When Absolute mode is selected, the group of targeted frequencies is represented as a blue line rising from the bottom of the 'Detection' area, allowing us to see when this frequency range is quieter (less height to the blue line) or louder (more height). When Relative mode is selected, the same blue line instead shows, in our case, when this range has no sibilance issue (the line sitting below the threshold line), with anything in the yellow marked as a sibilance problem (the line going above the threshold).
Threshold
The threshold lets us select the volume at which attenuation of the sibilance is introduced. The more you lower the threshold, the more of the signal is treated as a sibilance issue; the more you raise it, the less.
Frequency
The Frequency dial allows us to select which frequencies we want the DeEsser to focus on.
Max Reduction
This lets us set how much the volume should decrease when a big sibilance issue is detected. The amount of volume reduced can be seen in the 'Reduction' area.
Filter
The Filter section lets us choose which kind of EQ filter to use, whether a bell curve (which reduces the volume at the set frequency) or a high cut (which reduces the volume of the set frequency and everything above it). These filter shapes can be seen in Week 4 - Practical - EQ. They do not reduce only the exact frequency selected, but also slightly reduce the frequencies around it; the amount can be selected with...
Range
The range allows us to choose how specific we want to be with our filtering: wide, which affects more of the surrounding frequencies, or split, which affects fewer, allowing both broad and precise filtering.
Filter Solo
When activated, the attenuated frequencies are soloed, allowing us to hear exactly what is being removed or reduced (the problem frequencies).
This helped me to better understand how the DeEsser works and how I can use it to my advantage. Before using it properly, I decided to try each of these parameters myself, to see how each one affects the sound and to work out the best approach to start using this tool.
After experimenting with these parameters, I decided to use Relative mode, as it let me see the amount of problem frequencies more clearly. I started by selecting the frequency where there was the most sibilance, using a wide range and a bell curve filter to better detect where it resides, and using the filter solo button to check that this was correct. This was the start of the process that worked best for me when experimenting, as it let me hear the results of the DeEsser more clearly as I went on to set the other parameters. Next, I set the threshold to pick up what I judged to be the correct amount of problematic frequencies. I then turned the max reduction all the way up and slowly decreased it until it sounded natural, settling on 5.2 dB of reduction; I found this useful to do, as it let me hear the point at which the sound became natural again, this idea of keeping things natural being brought up in the video above. Finally, I readjusted the threshold until I thought the correct amount of problematic frequencies was being detected (-5.4 dB). I found this best to adjust last, since with everything else set you can hear the full result and find exactly where the problems sit. Because this was used very lightly, it may be hard to hear, but the effect of this tool is best heard on the S of 'Rise' in the sentence provided to the left.
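To tie the parameters above together, here is a minimal Python sketch of the de-essing idea: a band-passed copy of the vocal acts as the detector, and when that band exceeds the threshold it is turned down, capped at a maximum reduction. The 5.2 dB max reduction and -5.4 dB threshold mirror my settings, but the 5-9 kHz sibilance band, the block size, the filter order and the noise stand-in for the dialogue are all assumptions, and Logic's DeEsser 2 will differ in its detail.

```python
# Simplified de-esser: detect a sibilance band, attenuate it dynamically.
import numpy as np
from scipy.signal import butter, sosfilt

sr = 44100
rng = np.random.default_rng(2)
vocal = rng.standard_normal(sr)                  # stand-in for the dialogue

band = butter(2, [5000, 9000], btype="bandpass", fs=sr, output="sos")
sib = sosfilt(band, vocal)                       # isolated sibilance band

block = 512
out = vocal.copy()
for i in range(0, len(vocal) - block, block):
    level_db = 20 * np.log10(np.sqrt(np.mean(sib[i:i+block] ** 2)) + 1e-9)
    over = max(level_db - (-5.4), 0.0)           # threshold
    cut_db = min(over, 5.2)                      # max reduction cap
    # subtract the attenuated portion of the band from the full signal
    out[i:i+block] -= sib[i:i+block] * (1 - 10 ** (-cut_db / 20))
```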
The final thing I did when syncing my composition was automate the volume. I decided to have the music fade in at the start and out at the end, as I felt this added to the soft sound of the track and made it more dramatic, reflecting the seriousness of the advertisement. I also reduced the volume of the music slightly to give the dialogue more space and let it be heard more clearly. Finally, I applied sudden volume reductions to the plosives and fricatives of the dialogue, as I previously mentioned I would.
In this week, I made sure to message Harry as soon as the mix was done and sent it over. I also expressed my excitement to see what he would do with this song, as well as commenting again on the previous master to show my appreciation for the work he is doing for me. However, I did not give him notice of when the mix might be arriving.
At the start of week 7 I received an email back from Harry with the finished master. In it he said that the song was mixed really well. I found this to be both a success and a surprise: I had aimed for a good result, but attempted to get it by mixing more quickly, which makes me question the correlation between the time spent on a mix and its quality. I will review this in my evaluation.
Finally, I said my thank-yous to Harry for mastering my compositions. Reviewing my research from this week, I thought I could apply some of it to this email. I particularly focused on the aspect of making connections, so as well as thanking him, I struck a slightly more personal note and wished him luck at university, something he had mentioned in his opening email. This was inspired by the section of the video used for my research that talks about building connections, specifically how having conversations outside of music can help create a relationship. I thought the mention of university was a good choice, as it builds a bridge between work and personal life, making it easier to contact Harry in future since I have a topic to bring up if I approach him for more work.
This was the opening email I sent to https://www.feltmusic.com/. The first thing I did was think of an eye-catching subject line, as suggested in the video: 'If Bob Dylan and Billy Joel had a baby in a cathedral'. I used this to describe my second composition, as this was the one I thought they would be most likely to accept, having gathered inspiration from their library. I specifically used Bob Dylan's name because the song features some acoustic, finger-picked guitar, but more than that because of his current cultural relevance, the Bob Dylan film having recently been released. I thought this would help the email stand out.
The reason I went for Felt when selecting my sync library was twofold: 1 - I had specifically used reference tracks from this library, making the 2nd composition inspired by it and more likely to be selected; 2 - my teacher Ian has frequently uploaded to their library, providing me with both a good point to mention in my email submission and a chance to get some useful information from Ian, as he has worked with this specific library and can give me specific insight into this agency. I made sure to ask Ian for feedback throughout the creation of this email as a form of quality control for the submission.
One thing I didn't do, which the video recommended, was wait for them to ask for my tracks before sending them over. This was because the submission section on Felt's site asked for a short bio and some links to work, so I decided to follow what the info section told me to do.
In the video, they suggest showing your worth in the submission email. I didn't know the best way to do this, so I said I had reviewed their libraries and had written some songs which I thought would work for them, hopefully showing that I am able to provide them with more work, which I know they value from reviewing their site.
Finally, I provided them with descriptors of my work, going back to week 2's interview in which Matt states that giving your songs descriptors helps to make your music stand out in libraries. I also named my second composition based on what I thought would fit alongside the names of the reference tracks gathered from Felt, giving me 'Hopeful for a Better Day'. As for my first composition, I wanted to emphasise the rawness and in-your-face energy the song has, so I went with 'Eat it Raw', which I felt described the song well.
Something worth adding that I forgot to mention in this evaluation is that I decided not to score to image when syncing the song to the advertisement. This was because, as previously mentioned in my evaluations, the advertisements I selected didn't have a story or dynamic arc to them, so I decided the technique didn't apply to my work. However, it is something I will practise in the future, as I can see it being a useful skill for accentuating parts of an advertisement and emphasising the story it tells, should I go down this route for work.
Manderson, M. (2024) Let’s stop overcomplicating sync licensing for beginners [LinkedIn post]. Available at: https://www.linkedin.com/posts/dafingaz_lets-stop-overcomplicating-sync-licensing-activity-7242555543339290625-9MYf (Accessed: 13 May 2025).
Work (no date) FELT MUSIC. Available at: https://www.feltmusic.com/work (Accessed: 13 May 2025).
FELT MUSIC (no date) Tracks. Available at: https://www.feltpm.com/tracks?q=Rock (Accessed: 13 May 2025).
xJ-Will (2023) How to Pitch Your Music for Sync Licensing: 5 Tips and Tricks, YouTube. Available at: https://www.youtube.com/watch?v=SEpblg7xRJc (Accessed: 13 May 2025).
DeEsser 2 in Logic Pro for Mac (no date) Apple Support. Available at: https://support.apple.com/en-gb/guide/logicpro/lgcef1bec850/mac (Accessed: 22 May 2025).
Jono Buchanan Music (2024) Logic Pro X: How to use Logic’s De-Esser, YouTube. Available at: https://www.youtube.com/watch?v=gFz32f0ftuI (Accessed: 22 May 2025).
Original advertisement -
lone dragon queen (2018) Adopt a snow leopard! *read description*, YouTube. Available at: https://www.youtube.com/watch?v=5C4JiQBI0do (Accessed: 25 May 2025).