group as a whole, and the note sounds coherent to us. Audiologists must get rid of the harmonics and partials to be sure a person is truly hearing at a particular frequency. They do that by creating either pure tones (one frequency) or narrow-band tones, which do exactly what they say: excite only a narrow band of the basilar membrane.
It’s not just sounds like leaves and airplanes that vary in frequency. The words we say, even the different vowels and consonants in each word, consist of waves of different frequencies, generally between 500 and 3,000 Hz. The “t” sounds in “tugboat,” for instance, contain more high-frequency energy than the “b” and the “g,” for which most of the energy is concentrated in lower frequencies. In a normal-hearing ear, those frequencies correspond to particular points along the basilar membrane, from low to high, in a system that works like a piano keyboard. When low-frequency sounds excite hair cells at the far end of the membrane, the brain gets the message to recognize not just a jazz riff played in the deep tones of a stand-up bass but also the sound “mm.” The sound of the letter “f,” on the other hand, has the same effect as the top notes on the piano: it stimulates a spot at the end of the membrane nearest the oval window, where the high frequencies are found.
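This place-to-pitch mapping is often modeled with the Greenwood function, which converts a position along the basilar membrane into the frequency that excites it most. A minimal Python sketch using the commonly cited constants for the human cochlea (A = 165.4, a = 2.1, k = 0.88); the function name and the convention of measuring position from the apex are my own illustrative choices, not something the text specifies:

```python
def greenwood_frequency(x: float) -> float:
    """Approximate characteristic frequency (Hz) at a point on the
    human basilar membrane, per the Greenwood function.

    x is the fractional distance from the apex (the far end, where the
    low frequencies live): x = 0.0 is the apex, x = 1.0 is the base
    nearest the oval window, where the high frequencies are found.
    """
    A, a, k = 165.4, 2.1, 0.88  # commonly cited constants for humans
    return A * (10 ** (a * x) - k)

# The apex responds to the lowest audible pitches...
print(round(greenwood_frequency(0.0)))  # ≈ 20 Hz
# ...and the base, near the oval window, to the highest.
print(round(greenwood_frequency(1.0)))  # ≈ 20677 Hz
```

The exponential form is what makes the membrane behave like a piano keyboard: equal steps of distance correspond to equal musical intervals, not equal numbers of hertz.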
A portion of the top half of the audiogram is known as the speech banana. It’s an inverted arc, roughly what you’d get if you placed a banana on its back on the sixty-decibel line, and is typically shown as a shaded crescent. To be able to hear normal speech, a person needs to be able to hear at the frequencies and decibels covered by the banana. An average person engaged in conversation, about four feet away, will have an overall level of sixty decibels. The level falls by six decibels for every doubling of the distance and rises by six decibels for every halving of it, which is why it’s harder to hear someone who is farther away.
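The six-decibels-per-doubling rule is the inverse-square law written in decibels: level changes by 20·log10(2), about 6 dB, each time the distance doubles or halves. A small sketch in Python; the function name and the feet-based units are my own choices for illustration:

```python
import math

def level_at_distance(level_db: float, ref_distance: float, distance: float) -> float:
    """Sound level under the inverse-square law: the level drops about
    6 dB (20 * log10(2)) for each doubling of distance from the source,
    and gains the same amount for each halving."""
    return level_db - 20 * math.log10(distance / ref_distance)

# Conversational speech: about 60 dB at 4 feet.
print(round(level_at_distance(60, 4, 8)))   # doubled distance: ~54 dB
print(round(level_at_distance(60, 4, 2)))   # halved distance: ~66 dB
```

At sixteen feet (two doublings) the same voice is down around 48 dB, already grazing the bottom of the speech banana for someone with a moderate loss.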
By the end of that day, we knew Alex had an underlying hearing loss, but there was still the complication of the fluid. Over the next two weeks, in quick succession, Alex had tubes put in surgically by our ear, nose, and throat specialist, Dr. Jay Dolitsky, to clear remaining fluid. (“It was like jelly,” he told us.) Alex had a bone conduction test, which I now understood measures what you hear through your bones. Every time you hum or click your teeth, you hear the resulting sound almost entirely through your bones. When you speak or sing, you hear yourself in two ways: through air conduction and bone conduction. The recorded sound of your voice sounds unnatural to you because only airborne sound is picked up by the microphone and you are used to hearing both. Finally, Alex had an auditory brain stem response (ABR) test, under sedation, which allowed Jessica to measure his brain’s responses to a range of frequencies and intensities and pinpoint his level of loss.
When it was all over, we knew that, in medical terms, Alex had moderate to profound sensorineural hearing loss in both ears. That probably meant that his hair cells were damaged or nonexistent and not sending enough information to the auditory nerve. In someone who is profoundly deaf, who can hear only sounds louder than ninety decibels, almost no sound gets through. In a moderate (40 to 70 dB) or severe (70 to 90 dB) hearing loss, that all-important basilar membrane still functions but not nearly as well. Like a blurry focus on a camera, it can no longer tune frequencies as sharply. The line on Alex’s audiogram started out fairly flat in the middle of the chart and then sloped down from left to right like the sand dropping out from under your feet as you wade into the ocean. He could hear at fifty decibels (a flowing stream) in the lower frequencies, but his hearing was worse in the high frequencies, dropping down to ninety decibels.
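The categories given here (moderate 40 to 70 dB, severe 70 to 90 dB, profound above 90 dB) amount to a simple threshold lookup. A sketch of that lookup; the function name is mine, and treating the boundary values of 70 and 90 dB as belonging to the lower category is an assumption the text doesn't settle:

```python
def classify_hearing_loss(threshold_db: float) -> str:
    """Map the quietest sound a person can hear (their threshold, in dB)
    to the loss categories used on the audiogram:
    moderate (40-70 dB), severe (70-90 dB), profound (>90 dB)."""
    if threshold_db > 90:
        return "profound"
    if threshold_db > 70:
        return "severe"
    if threshold_db >= 40:
        return "moderate"
    return "less than moderate"  # milder categories aren't named in the text

# Alex's audiogram: about 50 dB in the low frequencies,
# sliding down to 90 dB in the highs.
print(classify_hearing_loss(50))  # moderate
print(classify_hearing_loss(90))  # severe
```

A sloping audiogram like Alex’s is why a single label rarely fits: each frequency gets its own threshold, and each threshold its own category.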