24. Hearing and balance
Hearing range describes the range of frequencies that can be heard by humans or other animals, though it can also refer to the range of amplitude levels. The human range is roughly 20 to 20,000 Hz, though there is considerable variation among individuals and a substantial loss of sensitivity is expected with age.
Several animals can hear frequencies well beyond the human hearing range. A sound with a frequency above the upper limit of human hearing is called ultrasound, whereas a sound with a frequency below the lower limit of human hearing is called infrasound. Some dolphins and bats, for example, can hear ultrasound at frequencies up to 100,000 Hz. Elephants can hear infrasound at 14–16 Hz, while some whales can hear frequencies as low as 7 Hz (in water).
Figure 1. The hearing ranges of some vertebrates. Notable omissions include fishes that can hear above 180 kHz and frogs that can hear > 30 kHz. Blue = fishes, green = frogs, gray = birds, orange = terrestrial mammals, brown = marine mammal, pink = human.
Human hearing is not outstanding when its frequency range is compared to those of other vertebrates. The broadest auditory ranges and high sensitivity are most commonly found among nocturnal predators and prey. The extremes of the frequency range are found among animals that inhabit underground or noisy environments and among some marine mammals.
As a group, mammals have the broadest hearing range among vertebrates. They are also the only vertebrates with three ossicles in the middle ear; all other groups transmit vibrations from the eardrum to the inner ear using a single ossicle (the stapes, also called columella). The wider frequency range of mammals has traditionally been attributed to this more elaborate middle ear structure. That view has been challenged by the discovery of species of frogs in China and Borneo that can hear frequencies > 30 kHz using single-ossicle middle ears. Fishes like shad and some other clupeids are sensitive to frequencies > 180 kHz, yet they do not employ a middle ear with ossicles at all. These fishes are the main prey of dolphins, which emit echolocation calls within 80–120 kHz while foraging, and the high-frequency sensitivity of clupeid fishes may have evolved in response to predation pressure by echolocating marine mammals. Most other fishes have their hearing sensitivity restricted to frequencies < 4 kHz. Their thresholds are highly variable among groups, and the most sensitive species are usually in the superorder Ostariophysi (~8,000 species), which use the swim bladder and Weberian ossicles to enhance their hearing.
The hearing sensitivity of an organism can be characterized by an audiogram, a graph that shows the minimum detectable sound level at various frequencies throughout an organism's hearing range.
Figure 2. Audiogram of normal hearing in humans.
Hearing thresholds can be found with behavioural or physiological tests. In a behavioural test, tones are presented at various frequencies (pitch) and intensities (loudness). When a human subject hears the sound, he or she indicates it by raising a hand or pressing a button, and the lowest intensity the subject can hear is recorded. The same test can be done in animals by first training them to respond with a specific behavior (such as pressing a button) when they hear a sound. In human newborns, or in animals that cannot be trained to press a button, the audiogram can be determined through physiological methods, such as monitoring brainwave activity or the very faint distortion sounds (otoacoustic emissions) that the ears produce in response to certain combinations of sound.
A great deal of variation in the frequency range of hearing can be found within each group of vertebrates. Animals are selected to respond to those sounds that they encounter in their environment and that have biological relevance to them. Analogous adaptations can be found within the various vertebrate groups to cope with special conditions encountered in specific habitats. The environment may affect the transmission or reflection of sound in ways that affect the availability of that type of sound to the animals.
Water conducts sound much faster and farther than air. This makes it easier to detect sound from a distant source underwater, but more difficult to localize the direction of the source. The distance between the ears is covered faster, so the brain has to be sensitive to smaller differences in both amplitude and time of arrival between the ears to use binaural cues to locate the source. Another consequence of sound being transmitted farther is that anthropogenic (human-made) noise sources have a larger footprint. Until recently, few studies had been done on the effects of underwater noise on aquatic wildlife. Considering the magnitude of the diversity of aquatic life, environments, and human-introduced sources of noise, an extensive dataset is necessary to guide the proper management of underwater noise.
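The shrinking of binaural timing cues under water can be illustrated with a quick calculation. This is a sketch: the sound speeds and the 0.18 m ear separation below are illustrative round numbers, not values from the text.

```python
# Sketch: maximum interaural time difference (ITD) in air vs. water,
# for a sound arriving directly from one side of the head.
SPEED_AIR = 343.0     # m/s, dry air at ~20 degrees C (illustrative)
SPEED_WATER = 1481.0  # m/s, fresh water at ~20 degrees C (illustrative)

def max_itd_us(ear_separation_m, speed_m_s):
    """Largest possible ITD, in microseconds: the travel time across the head."""
    return ear_separation_m / speed_m_s * 1e6

separation = 0.18  # m, a rough human-scale value (assumed)
itd_air = max_itd_us(separation, SPEED_AIR)
itd_water = max_itd_us(separation, SPEED_WATER)
print(f"max ITD in air:   {itd_air:.0f} us")
print(f"max ITD in water: {itd_water:.0f} us")
```

With these numbers the timing cue in water is more than four times smaller than in air, which is why an underwater brain must resolve much finer interaural differences to localize a source.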
Water is a high-impedance medium (its molecules are more difficult to move than those in air). Since about 70% of the bodies of vertebrates are composed of water, underwater sound can cross the tissues and reach the inner ear unless blocked or absorbed by specialized tissues. A thin membrane like an eardrum is unnecessary under water, and having an air-filled cavity behind it (middle ear) is problematic for divers, because the external water can produce immense pressures during a deep dive. Several types of adaptations of the auditory systems are observed in vertebrates that recolonized the underwater environment after having been terrestrial.
The ear canals in seals, sea lions, and walruses are similar to those of land mammals and seem to function the same way. In whales and dolphins, studies suggest that sound is channeled to the ear by tissues in the area of the lower jaw. In odontocetes (toothed whales) the ears are placed relatively far from each other in the head. Both the channeling of sound and the separation of the ears should assist in increasing the separation of the signals in the two ears and facilitate the localization of the sources of sounds. Odontocetes use echolocation in foraging, therefore sound source localization is key to their survival.
Figure 3. Pacific white-sided dolphins (Lagenorhynchus obliquidens) are odontocetes with elaborate adaptations for underwater hearing.
Many vertebrates inhabit tunnel systems under the ground. High frequencies are severely attenuated when transferring between the air in the tunnels and the soil (seismic vibrations), and also when crossing the variety of materials encountered in most soils. The environment is therefore dominated by low-frequency sounds.
Several groups of mammals that live underground have independently evolved enlarged middle ear ossicles (malleus or incus) as an adaptation to detect seismic vibrations. The inertia of these massive ossicles, sitting in an air-filled middle ear, makes them lag behind the skull when the head is shaken by seismic vibrations. This movement of the ossicles relative to the skull stimulates the inner ear, which decodes the seismic vibrations as sound.
Figure 4. Giant golden moles (Chrysospalax trevelyani; taxidermy) are blind subterranean mammals that live in South Africa and use enlarged middle ear ossicles to detect seismic vibrations.
Golden moles are an extreme of seismic sensitivity. They are blind and live in African deserts, burrowed in the sand. They navigate among the sand mounds eating arthropods, which they find mostly near vegetation. When foraging at night, they also walk on the surface of the sand but occasionally stop and dip the head into the sand. Golden moles are attracted to the seismic vibration patterns produced by sand mounds containing vegetation and insects. Their mallei are so large that the lateral surface of the skull bulges to accommodate the middle ear ossicle.
Frogs also tend to be very sensitive to seismic vibrations. They make use of their forelegs to transmit the signals through the shoulders to the ear via the opercular system (see previous chapter). Large mammals like elephants and whales also use low-frequency signals to communicate over large distances on the ground or in the sea.
Sloping regions tend to be dominated by fast-flowing streams, which form noisy environments. The sound of water running over rocks characteristically has most of its energy at a few dozen Hz and gradually declines at higher frequencies. Several groups of frogs have species adapted to life in such streams. Some of them have changed to being diurnal and use visual signals for communication. These mostly include foot flagging, a behavior in which the animal suddenly raises and extends a limb to exhibit a vividly colored palm or sole.
Other species continue using sound. They communicate at high frequencies, from a few thousand Hz to ultrasound only, thereby escaping the masking that the stream noise produces at low frequencies. These species tend to have thin eardrums and light middle ear ossicles, because reduced inertia is necessary for the rapid vibrations produced at high frequencies. As extremes of high-frequency communication in frogs, the concave-eared torrent frog (Odorrana tormota) and the hole-in-the-head frog (Huia cavitympanum) produce and respond to calls with fundamental frequencies at 5–30 kHz. Their eardrums are transparent and recessed, allowing the stapes to be shorter and lighter.
The acoustic reflex (also known as the stapedius reflex, middle-ear-muscles (MEM) reflex, attenuation reflex, or auditory reflex) is an involuntary muscle contraction that occurs in the middle ear in response to high-intensity sound or during vocalization.
When presented with a high-intensity sound stimulus, the stapedius and tensor tympani muscles of the middle ear contract. The stapedius stiffens the ossicular chain by pulling the stapes out, away from the oval window of the cochlea. The tensor tympani muscle stiffens the ossicular chain by loading the tympanic membrane when it pulls the malleus in, toward the inner ear. The reflex decreases the transmission of vibrational energy to the cochlea, protecting the hair cells in the organ of Corti from excessive displacements.
In humans, the acoustic reflex reduces the intensity of the triggering sound by 15–20 dB at the inner ear. This provides significant but not complete protection. It takes 100–150 ms for the muscles to contract fully, and the strength of the contraction is reduced to about 50% after a few seconds. This makes the reflex less effective against sounds with a sharp onset, like hammering, and sounds of continuous long duration, like an extended beep or a jet plane taking off.
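A 15–20 dB reduction can be expressed as the fraction of sound intensity still reaching the cochlea, using the standard decibel definition. This is a sketch of the arithmetic, not a measurement from the text:

```python
def transmitted_fraction(attenuation_db):
    """Fraction of sound intensity still transmitted after an attenuation
    given in dB (intensity ratio = 10 ** (-dB / 10))."""
    return 10 ** (-attenuation_db / 10)

for db in (15, 20):
    frac = transmitted_fraction(db)
    print(f"{db} dB attenuation -> {frac * 100:.1f}% of intensity transmitted")
```

A 20 dB attenuation thus blocks about 99% of the incoming intensity, which is substantial but, as the text notes, not complete protection.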
In animals that produce loud vocalizations such as advertisement or echolocation calls, the acoustic reflex can be triggered by the vocalization command immediately before the onset of sound. This offers improved protection to the ears and prevents overstimulation, which would reduce the hearing sensitivity after phonation.
The high frequency limit in the hearing of bats varies from 1 kHz to 200 kHz. This wide variation is mostly due to the evolution of echolocation, which drove certain species to vocalize and hear at ultrasonic frequencies.
Figure 5. Echolocating bats emit calls at frequencies > 20 kHz. They can locate flying insects by detecting the echo of their ultrasonic calls.
An echolocating bat produces a very loud, short sound and listens to the echoes that bounce back. These bats hunt flying insects, which return a faint echo of the bat's call. The type of insect and its size can be determined from the quality of the echo and the time it takes for the echo to return. Low-frequency sound has a large wavelength and does not reflect well off small objects like the insects that bats target. Echolocating bats therefore produce ultrasonic calls to detect small objects around them. Some bats produce calls with a constant frequency, whereas others produce frequency sweeps.
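The wavelength argument can be made concrete with the relation λ = c / f. The speed of sound and the frequencies chosen below are illustrative values, not figures from the text:

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s at ~20 degrees C (illustrative)

def wavelength_mm(frequency_hz):
    """Wavelength in millimetres: lambda = c / f."""
    return SPEED_OF_SOUND_AIR / frequency_hz * 1000

for f in (1_000, 20_000, 60_000):
    print(f"{f / 1000:g} kHz -> wavelength {wavelength_mm(f):.1f} mm")
```

At 1 kHz the wavelength (~340 mm) dwarfs a moth-sized target, while at 60 kHz it shrinks to a few millimetres, comparable to the insects bats hunt, so the echo becomes usable.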
But high-frequency calls are energetically expensive to produce and die off in air very quickly. Bats tend to fly emitting echolocation calls at a certain rate and then progressively increase the rate of calling as they approach an object of interest.
In the same way that a bright flash of light can blind you for a few seconds, exposure to an intense sound tends to reduce your sensitivity in the following seconds. This is a problem for bats, because they need to produce a very intense echolocation call and be able to detect a very faint echo immediately after the call. This problem can be circumvented in several ways:
The acoustic reflex turns on immediately before the call and turns off immediately after it. The middle ears are therefore highly damped during the call but left completely unimpeded to transmit the echo.
The fact that the bat is flying generates a Doppler shift. Due to the Doppler effect, the call returns as an echo with a slightly higher frequency than that at which it was emitted. In bats that produce calls with constant frequency, the call and the echo are detected at slightly different positions of the cochlea (tonotopy), and therefore the hair cells that detect the echo are less likely to be overstimulated by the call.
The audiogram of constant-frequency bats tends to show a sharp change from low to high sensitivity near the frequency of the call. The bat adjusts the frequency of its call to the speed at which it is flying, to maintain the frequency of the echo within the high sensitivity peak of the ears and that of the call in the low sensitivity range.
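The shift that the bat compensates for can be sketched with the double Doppler formula for a moving emitter whose call reflects off a stationary target back to the moving receiver. The call frequency and flight speed below are illustrative numbers, not values from the text:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (illustrative)

def echo_frequency(call_hz, bat_speed_m_s):
    """Echo frequency heard by a bat flying straight at a stationary target.
    The Doppler shift applies twice (moving emitter, then moving receiver):
    f_echo = f_call * (c + v) / (c - v)."""
    c = SPEED_OF_SOUND
    return call_hz * (c + bat_speed_m_s) / (c - bat_speed_m_s)

# Assumed example: a constant-frequency call near 61 kHz at a flight speed of 5 m/s.
echo = echo_frequency(61_000, 5.0)
print(f"echo returns at {echo / 1000:.2f} kHz (a shift of {echo - 61_000:.0f} Hz)")
```

With these assumed numbers the echo comes back almost 2 kHz above the call, so a constant-frequency bat would lower its call by a similar amount to keep the echo inside its high-sensitivity peak.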
Echolocation is not unique to bats or to air as a medium. Several species of cave-nesting swiftlets and the unrelated oilbirds employ less elaborate forms of echolocation than those of bats. Dolphins and other toothed whales also exhibit very elaborate echolocation systems for foraging in waters where vision is hindered by high turbidity or by lack of light due to depth.
While the human hearing range is conveniently approximated as 20 Hz – 20 kHz, precise measurements have not produced these round numbers. Under ideal laboratory conditions, humans can hear sound as low as 12 Hz and as high as 28 kHz, though the threshold increases sharply at 15 kHz in adults. Humans are most sensitive to frequencies between 2,000 and 5,000 Hz. Individual hearing range varies according to the general condition of a human's ears and nervous system. The range shrinks during life, with the upper frequency limit being reduced. Typically, humans can discriminate between two sounds if their frequencies differ by 0.3% or more. For example, 500.0 and 501.5 Hz are noticeably different. At a given frequency, it is possible to discern differences of about 1 dB in sound amplitude, and a change of 3 dB is easily noticed.
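The 0.3% discrimination figure behaves like a Weber fraction, so the just-noticeable frequency difference scales with the tone. The 0.3% value comes from the text; the set of frequencies below is an arbitrary choice for illustration:

```python
def just_noticeable_frequency(f_hz, weber_fraction=0.003):
    """Smallest frequency change a typical listener can detect (~0.3% of f)."""
    return f_hz * weber_fraction

for f in (500, 2_000, 8_000):
    jnd = just_noticeable_frequency(f)
    print(f"at {f} Hz, a change of about {jnd:.1f} Hz is detectable")
```

At 500 Hz this reproduces the 1.5 Hz step of the example in the text (500.0 vs. 501.5 Hz), while at 8 kHz a change of roughly 24 Hz is needed before it is noticed.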
Figure 6. Audiogram showing a typical result for an adult with slight hearing loss. In clinical usage, audiograms usually have the frequency axis on the top of the chart, and the threshold sound level is shown in relation to normal hearing (zero dB is set to the threshold of normal hearing). This makes the result in dB express the patient's hearing loss directly.
The human ear is not equally sensitive to all frequencies. In addition, our perception of both amplitude and frequency is not linear. For example, the difference between the amplitudes of sounds at 0.1 and 0.2 Pa appears greater than the difference between sounds at 0.3 and 0.4 Pa. Scales were therefore developed to quantify our perception of the physical qualities of sound: pitch quantifies our perception of sound frequency, whereas loudness quantifies our perception of sound amplitude.
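The nonlinearity in the pressure example above can be checked with the standard sound-pressure-level formula. The 20 µPa reference is the conventional value for air; it is an assumption here, not a figure stated in the text:

```python
import math

P_REF = 20e-6  # Pa, conventional reference pressure in air (assumed)

def spl_db(pressure_pa):
    """Sound pressure level in dB: 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

# Equal pressure steps of 0.1 Pa are unequal on the dB scale:
step1 = spl_db(0.2) - spl_db(0.1)  # step from 0.1 to 0.2 Pa
step2 = spl_db(0.4) - spl_db(0.3)  # step from 0.3 to 0.4 Pa
print(f"0.1 -> 0.2 Pa: +{step1:.1f} dB;  0.3 -> 0.4 Pa: +{step2:.1f} dB")
```

The first step is about 6 dB while the second is only about 2.5 dB, matching the perceptual observation that the 0.1 to 0.2 Pa difference sounds larger.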
Loudness varies with frequency, due to the sensitivity pattern described in the audiogram. A unit called the phon is used to express loudness numerically. Phons differ from decibels: the phon is a unit of loudness perception, whereas the decibel is a unit of physical intensity. The loudness scale was determined by having large numbers of people compare the loudness of sounds at different frequencies and sound intensity levels. At 1,000 Hz, phons were set to be numerically equal to decibels. The relationship of loudness to intensity level and frequency at 0 phon has the shape of the audiogram. Equal-loudness curves can be separated by more or fewer decibels of sound intensity, depending on the frequency and intensity of the sound.
Figure 7. Relationship between loudness (phons) and intensity level (dB) in humans with normal hearing. The curved lines are equal-loudness curves. All sounds on a given curve are perceived as equally loud.
The ability to locate sound in our environments is an important part of hearing. The auditory system localizes sound by using cues that can be monaural (formed within one ear) or binaural (formed by comparison between the two ears).
Monaural cues are formed by the ways in which the complex shapes of the pinna reflect the incoming sound waves. The frequencies contained in the sound are modified by different amounts by the pinna, forming a pattern. This pattern depends on the direction from which the sound arrives at the pinna. Our brain learns the patterns produced by our pinnae and uses them to help identify the direction of the source. Monaural cues can be obtained with a single ear, and they help to locate sound sources both along the horizontal plane (azimuth) and the vertical plane (elevation). In either case, the generation of a recognizable pattern depends on amplitude comparisons across frequencies. Noise and clicks have energy at many frequencies and therefore provide more reliable monaural cues than tones and whistles that have one or a few frequencies.
Binaural cues emerge from differences in the patterns of vibration of the eardrum between the two ears. If a sound arrives from the right side of your body, it is more intense at your right ear than at your left ear because your head casts an acoustic shadow over your left ear. In addition, this sound arrives at the right ear before the left ear. Certain brain circuits monitor these differences to infer where along a horizontal axis a sound originates.
The reliability of binaural cues exceeds by far that of monaural cues for the localization of sound sources. In addition, they function well both with noisy and tonal sounds. Binaural cues only provide information for localization along the horizontal plane (azimuth), though, because changes in elevation do not alter the time of arrival or amplitude differences between the ears. Monaural cues are therefore the primary source of information used to determine the elevation of sound sources (vertical direction).
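For the azimuthal timing cue described above, a standard textbook approximation (not taken from this chapter) is Woodworth's spherical-head model, ITD = (r/c)(θ + sin θ), where θ is the azimuth measured from straight ahead. The head radius below is an assumed round value:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air (illustrative)
HEAD_RADIUS = 0.0875    # m, a commonly used adult approximation (assumed)

def itd_us(azimuth_deg):
    """Interaural time difference in microseconds under Woodworth's
    spherical-head model: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (theta + math.sin(theta)) * 1e6

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD {itd_us(az):.0f} us")
```

The ITD grows from zero straight ahead to several hundred microseconds at the side, but note that it carries no elevation information: tilting the source up or down at a fixed azimuth leaves the model's prediction unchanged, which is why monaural cues take over for elevation.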
Figure 8. Localization of the sound source can involve the use of both monaural and binaural cues.
Binaural cues can be made useful for the determination of elevation, however, by tilting the head laterally. This is commonly observed in dogs expressing curious behavior, although its association with more precise elevation assessment is unclear. The problem of localizing the elevation of a source is magnified in foraging owls, which search for prey at night while flying. With the head facing the ground, prey trying to escape by running along the longitudinal axis of the owl's body would produce sounds that arrive identically at the owl's two ears.
This problem is circumvented by the morphology of the owl's ears. While the feathers of the head make it look symmetric, the skull underneath is asymmetric: the opening of one ear sits high on the head and the other low. With this configuration, sounds arriving from different elevations do generate binaural cues. This improves the localization ability of the birds and allows them to catch mice even in complete darkness.
The frequency range of hearing in vertebrates extends from 7 Hz to beyond 180 kHz, although no species captures the entire range. Broad ranges and high-frequency hearing are most common in mammals, but some frogs and fishes can also hear ultrasound. An audiogram describes how the threshold of hearing varies with frequency. Water, air, and soil differ greatly in sound transmission, and extensive adaptations of the hearing systems occur in evolutionary transitions between aquatic, terrestrial, and subterranean environments. Low-frequency and seismic sensitivity are favored underground. High-frequency signaling and echolocation are favored when vision is impaired, such as in nocturnal or deep-water predators. The acoustic reflex dampens the ear and protects it from excessively intense sounds. The source of a sound can be localized using cues from a single ear and differences in time of arrival and intensity between the two ears.
Hearing range, frequency range, ultrasound, infrasound, audiogram, Weberian ossicles, echolocation, pitch, loudness, otoacoustic emissions, impedance matching, seismic vibrations, malleus, seismic sensitivity, opercular system, acoustic reflex, stapedius muscle, tensor tympani muscle, Doppler shift, Doppler effect, sound localization, azimuth, elevation, monaural, binaural.
Figure 1 by Cmglee - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=35890958
Figure 2 by Daxx4434 derivative work: Cradle - Human_Hearing_Graph.jpg, Public Domain, https://commons.wikimedia.org/w/index.php?curid=8390193
Figure 3 by Captain Budd Christman, NOAA Corps - http://www.photolib.noaa.gov/htmls/anim0802.htm., Public Domain, https://commons.wikimedia.org/w/index.php?curid=277157
Figure 4 by Emőke Dénes - kindly granted by the author, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=15895649
Figure 5 by OpenStax University Physics - https://cnx.org/contents/1Q9uMg_aZIP Download:https://cnx.org/exports/d50f6e32-0fda-46ef-a362-9bd36ca7c97d@6.4.zip/university-physics-volume-1-6.4.zip, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=64308017
Figure 6 by Welleschik - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1631553
Figure 7 by Openstax College - University Physics vol. 1. CC BY 4.0, https://cnx.org/resources/a75d442602e5dfddd96b0171984b9b480b080d08
Figure 8 by Wikimedia. CC BY 4.0, https://cnx.org/resources/2e1395a91d05b1a038df4dc7a50ca43f8ae42f73/CNX_Psych_05_04_MonInt.jpg
Figure 9a by Steven Katovich from United States USDA Forest Service - This image is Image Number 1388020 at Forestry Images, a source for forest health, natural resources and silviculture images operated by The Bugwood Network at the University of Georgia and the USDA Forest Service., CC BY 3.0 us, https://commons.wikimedia.org/w/index.php?curid=20037678
Figure 9b by Unknown - Proceedings of the Zoological Society of London (vol. 1871, page 740), Public Domain, https://commons.wikimedia.org/w/index.php?curid=15523706