The number of cycles per second of anything that oscillates is called the "frequency", measured in Hertz (abbreviated Hz). The electricity of an AC wall outlet is said to have a frequency of 60 Hz because it cycles from negative to positive 60 times each second.
Sound is an oscillating wave, but it spans a broad range of frequencies. A low frequency sound (say, 50 Hz) might sound like a low rumble, while a high frequency sound (say, 12,000 Hz) might sound more like a "sizzle". A person with normal hearing can hear all the way up to about 20,000 Hz.
Speech also has a range of frequencies, but it is mostly limited to the range between about 100 Hz and about 8,000 Hz. The frequencies that make up vowel sounds are typically lower, while the consonant sounds (at least the parts that help us hear which consonant was spoken) tend to be higher frequency sounds.
People with even moderately good hearing up to about 3,000 Hz can understand speech fairly well. Wired telephones typically do not transmit sound above 3,500 Hz.
Wavelength is the distance between successive waves. A lot of things travel in waves: water has waves, radio travels in waves, and even light has wave-like properties.
Sound is actually "compression" waves in a medium (like air), rather than up-and-down waves like those on the ocean. When something makes a sound, the air is compressed or rarefied in waves that travel out from the source in all directions. When those compressed or rarefied areas of air hit your eardrum, it vibrates in sympathy with those compression waves and allows you to hear.
The higher the frequency, the shorter the distance between each successive compression (or rarefaction) in the incoming sound wave. This distance is called the "wavelength".
Sound travels at about 750 miles per hour, so the compression waves between 100 Hz and 20,000 Hz have wavelengths that range from around eleven feet (for the 100 Hz sound) down to a fraction of an inch (for the 20,000 Hz sound).
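To check this arithmetic I put together a quick Python sketch (the 343 m/s figure assumes air at about 20 °C, per the speed-of-sound notes below; the function name is my own):

```python
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in metres: speed of sound divided by frequency."""
    return SPEED_OF_SOUND_M_S / frequency_hz

for f in (100, 1_000, 20_000):
    print(f"{f:>6} Hz -> {wavelength_m(f):.4f} m")
# 100 Hz    -> 3.4300 m (around eleven feet)
# 20,000 Hz -> 0.0172 m (well under an inch)
```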
Amplitude is the fluctuation or displacement of a wave from its mean value; it is usually expressed as a peak value, though it can also be given as an instantaneous value. With sound waves, amplitude is the extent to which air particles are displaced, and this sound amplitude is experienced as the loudness of the sound.
From the "Encyclopedia Britannica": For a transverse wave, such as the wave on a plucked string, amplitude is measured by the maximum displacement of any point on the string from its position when the string is at rest. For a longitudinal wave, such as a sound wave, amplitude is measured by the maximum displacement of a particle from its position of equilibrium. When the amplitude of a wave steadily decreases because its energy is "being lost" (converted to heat), it is said to be damped. Sound waves in air are longitudinal, pressure waves.
The lower the density of a medium, the faster sound travels; the more compressible the medium, the slower sound travels. The speed of sound in air is approximately 331.5 m/s at 0 °C, or around 1,200 km per hour. At normal room temperature (20 °C) it is approximately 343 m/s, and at 25 °C it is about 346 m/s. The speed of sound in air can be approximated by the formula:
speed of sound (m/s) = 331.5 + 0.60 × T (°C)
For example, the formula gives 331.5 + 0.60 × 25 = 346.5 m/s at 25 °C, matching the figure above. The speed of sound in air increases by about 0.60 m/s for each degree Celsius rise in air temperature. The speed of sound is faster at higher temperatures because the molecules move faster and collide more often. (At 100 °C the measured value is about 386 m/s; the linear formula slightly overestimates it, since it is only an approximation valid near normal temperatures.)
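The formula drops straight into a tiny Python function, which I checked against the temperatures quoted above (the function name is my own):

```python
def speed_of_sound_m_s(temp_c: float) -> float:
    """Approximate speed of sound in air: 331.5 + 0.60 * T, valid near normal temperatures."""
    return 331.5 + 0.60 * temp_c

for t in (0, 20, 25):
    print(f"{t:>2} degrees C -> {speed_of_sound_m_s(t):.1f} m/s")
# 0 -> 331.5 m/s, 20 -> 343.5 m/s, 25 -> 346.5 m/s (close to the figures above)
```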
An important characteristic of a sound wave is its phase. Phase specifies the location of a point within a wave cycle of a repetitive waveform. For most purposes, the phase differences between sound waves are important, rather than the actual phases of the signals. When two sound waves combine, the difference between their phases is important in determining the resulting waveform: two sound waves that are in phase add to produce a sound wave of greater amplitude.
The phase difference between two sound waves of the same frequency moving past a fixed location is given by the time difference between the same positions within the wave cycles of the two sounds (the peaks or positive-going zero crossings, for example), expressed as a fraction of one wave cycle.
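As a quick illustration of in-phase addition versus cancellation, here is a sketch of my own that sums two equal sine waves with a chosen phase difference:

```python
import math

def combined_sample(t: float, freq_hz: float, phase_diff_deg: float) -> float:
    """Sum of two equal-amplitude sine waves, the second offset in phase."""
    w = 2 * math.pi * freq_hz
    return math.sin(w * t) + math.sin(w * t + math.radians(phase_diff_deg))

t = 0.00025  # a quarter of a 1 kHz cycle, where sin() reaches its peak of 1.0
print(combined_sample(t, 1000, 0))    # ~2.0: in phase, amplitudes add
print(combined_sample(t, 1000, 180))  # ~0.0: half a cycle apart, full cancellation
```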
What do I now know?
I learnt the definitions of frequency, wavelength, amplitude, velocity of sound and phase. We were asked how we were expecting to learn; the overwhelming majority answered 'by doing', which turned out to be the correct answer. We discussed what is going to happen over the duration of the course, which included:
We will be marked holistically on our proficiency at each task, the process of improving, and our skills as a person.
How do I know this?
The teacher went through the slides of the PowerPoint and also engaged us by asking questions and creating scenarios.
What will I do with what I know?
I will implement the steps for each class presented in the Pre-, During- and Post-Class activities and also research further into the definitions.
We had to watch this video before class and answer the embedded questions.
It has to do with the harmonic content of the note, which can be broken down into the fundamental frequency, which determines the pitch, and the harmonics, which are sine waves at whole-number multiples of the fundamental.
The fundamental frequency of the concert A2 note is 110 Hz (a short sketch of this series follows the list below):
1 x 110 Hz = 110 Hz (A2) fundamental frequency
2 x 110 Hz = 220 Hz (A3) second harmonic (octave)
3 x 110 Hz = 330 Hz (E4) third harmonic
4 x 110 Hz = 440 Hz (A4) fourth harmonic (octave)
5 x 110 Hz = 550 Hz (C#5) fifth harmonic
6 x 110 Hz = 660 Hz (E5) sixth harmonic
7 x 110 Hz = 770 Hz (unrelated) seventh harmonic
8 x 110 Hz = 880 Hz (A5) eighth harmonic (octave)
9 x 110 Hz = 990 Hz (B5) ninth harmonic
10 x 110 Hz = 1100 Hz (C#6) tenth harmonic
11 x 110 Hz = 1210 Hz (unrelated) eleventh harmonic
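Here is the sketch mentioned above: a few lines of Python reproducing the series by multiplying the 110 Hz fundamental by whole numbers (the note names are copied from the list above, not computed):

```python
fundamental_hz = 110  # A2

note_names = ["A2 (fundamental)", "A3 (octave)", "E4", "A4 (octave)", "C#5",
              "E5", "(unrelated)", "A5 (octave)", "B5", "C#6", "(unrelated)"]

for n, name in enumerate(note_names, start=1):
    print(f"{n:>2} x {fundamental_hz} Hz = {n * fundamental_hz:>4} Hz  {name}")
```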
Recap:
A doubling in frequency is called an octave, which is perceived by our ears as the same note at a higher pitch.
The frequency determines the pitch of a sound.
Harmonics are whole-number multiples of the fundamental frequency.
A doubling of frequency (e.g. 440 Hz to 880 Hz) is perceived as an octave.
The fifth harmonic of 220 Hz is 5 x 220 Hz = 1100 Hz.
What do I now know?
I have grasped a basic understanding of Phase, Harmonics, Timbre, Acoustics, Envelope, Pitch and the Chromatic Scale.
We learnt about the difference between mono and stereo with regard to phase, and how inverting the polarity of one of a pair of mics can cancel sound.
We also discovered harmonics through two Pro Tools listening sessions: in the first we recorded the sum of the harmonics and looked at the waveform created; in the second we listened to the timbre of different instruments.
My group partner has also told me, for our group assignment, that they are a visual learner and enjoy random games.
How do I know this?
We learnt about Phase, Harmonics and Timbre through a Pro Tools listening session.
Also by conversing with my assessment partner about how they learn.
What will I do with what I know?
I will be designing a physical flash card game with words whose definitions my partner then has to tell me. If this does not work, I will explore further what outlets I can use to create an interactive game.
How can understanding the human ear help you become a better audio engineer?
Understanding the basics of the hearing mechanism and how it works is the gateway into the basics of audio. If you can understand the anatomy of the ear, its limitations, its strengths and weaknesses, and how it affects our perception of sound, you will go a long way toward understanding the basic audio principles that follow.
Sound level is measured in decibels (dB) and frequency in Hertz (Hz).
Our Ear:
External or outer ear, consisting of:
Pinna or auricle.
This is the outside part of the ear.
External auditory canal or tube.
This is the tube that connects the outer ear to the inside or middle ear.
Tympanic membrane (also called the eardrum).
The tympanic membrane divides the external ear from the middle ear.
Middle ear (tympanic cavity)
Ossicles:
Three small bones that are connected and transmit the sound waves to the inner ear.
The bones are called:
Malleus
Incus
Stapes
Eustachian tube:
A canal that links the middle ear with the back of the nose.
The Eustachian tube helps to equalise the pressure in the middle ear.
Equalised pressure is needed for the proper transfer of sound waves.
The Eustachian tube is lined with mucous membrane, just like the inside of the nose and throat.
Inner ear, consisting of:
Cochlea (contains the nerves for hearing)
Vestibule (contains receptors for balance)
Semicircular canals (contain receptors for balance)
According to the Fletcher-Munson equal-loudness curves, in order to hear 100 Hz at the same loudness as 1 kHz I would have to play the 100 Hz tone at 65 dB and the 1 kHz tone at 50 dB.
What do I now know?
How do I know this?
What will I do with what I know?
The brain uses three primary methods for localising sound:
Because your ears are separated by roughly 12 cm, sound can arrive at one ear before the other.
This is known as the Inter-Aural Time Difference or ITD, the term describing the difference in a sound's arrival time between the left ear and the right ear ('inter-aural' meaning 'between the ears').
The maximum ITD is observed when a sound is located at 90 degrees in the lateral plane, in line with the left or the right ear.
The maximum delay, set by the path around the circumference of the human head, is roughly 0.66 ms.
The frequency that has a period equal to this delay between the ears is about 1500 Hz.
Frequencies below 1500 Hz have a period greater than the maximum delay between the ears, resulting in a phase difference that the brain can use unambiguously to localise sound (a short sketch follows the quote below).
For phase shifts above 180 degrees:
'there will be an unresolvable ambiguity in the direction because there are two possible angles - one to the left and one to the right that could cause such a phase shift'
(Howard & Angus, 2009)
A sound source off to the right-hand side creates a greater amplitude in the right ear than in the left; this is caused by the acoustic shadowing of the head.
Sound waves can only bend (diffract) around an object of comparable or smaller size than their wavelength, so high frequencies can't bend around the head, which causes an amplitude reduction of those high frequencies in the far ear for a sound source off to one side.
This is known as the Inter-Aural Amplitude Difference (IAD); it is used for sounds above 2000 Hz.
Sounds in between 1.5 kHz and 2 kHz use a combination of IAD and ITD.
The final way of localising sound is to move your head, which causes a change in the amplitude, phase and spectral components of the sound, allowing us to pinpoint its source.
The examples we used for these three localisation techniques exposed us to the use of 3D sound and sound spaces, and to how we use localisation to create these effects.
If a sound arrives at one microphone 0.5 ms later than at another, how exactly will the sound change?
Certain frequencies will be cut, because if a signal is delayed by half a cycle (180 degrees) there is cancellation.
A half cycle equal to 0.5 ms means a full cycle of 1 ms, and 1000 full cycles in 1 second is 1000 Hz, so 1000 Hz is the first frequency to be cancelled.
All odd multiples of this frequency (3000 Hz, 5000 Hz, and so on) will also be cancelled.
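The same arithmetic as a quick sketch: for a 0.5 ms delay, the first null sits where half a cycle fits exactly into the delay, and every odd multiple of it is also cancelled:

```python
delay_s = 0.0005  # 0.5 ms delay between the two microphones

first_null_hz = 1 / (2 * delay_s)  # half a cycle of 1000 Hz lasts exactly 0.5 ms
nulls = [first_null_hz * k for k in (1, 3, 5, 7)]
print(nulls)  # [1000.0, 3000.0, 5000.0, 7000.0] -- the cancelled frequencies
```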
Psychoacoustics is the study of how humans perceive sound.
According to Fastl & Zwicker (2006), psychoacoustics is "the science of the hearing system as a receiver of acoustical information", and "psychoacousticians look to quantify the correlation between acoustical stimuli and hearing sensations."
Each ear perceives the sound slightly differently, but the brain combines the two signals into a single perceived sound.
What do I now know?
How do I know this?
What will I do with what I know?