The Department of Communication Sciences and Disorders hosted guest lecturer Douglas Beck, executive director of academic sciences for Oticon, one of the world's leading hearing aid manufacturers. Beck is among the most prolific authors in audiology, having addressed a wide variety of professional topics including the profession of audiology, pediatric audiology, cognition, hearing aids, audition, counseling and more.
During his lecture, Beck focused on the clinical management of a patient's inability to understand speech in noisy environments. According to Beck, this is the most frequent complaint he hears from people with hearing loss.
In a traditional hearing evaluation, significant effort and attention are devoted to creating a very quiet environment in which to measure a patient's ability to hear the softest of sounds. But a patient's speech understanding in quiet does not predict speech understanding in noise, because the two are different domains of auditory function. According to Aaron Johnson, assistant professor and audiology director of clinical education for Samford University, it is not unusual for someone with an auditory processing disorder to show no signs of a hearing impairment, and traditional tests may show normal hearing. "The disorder can be difficult to identify because sound quality may only be poor during instances where there is background noise, more than one person speaking or when a person is speaking quickly," he added.
To measure a patient's ability to understand speech in noise, the audiologist should recreate conditions representative of the noisy environments in which the patient reports difficulty. Beck described a variety of tests that measure speech-in-noise ability in a practical and meaningful way.
“Generally, these tests make use of either single words or whole sentences representative of the typical sounds produced in the English language,” said Johnson. “The speech is typically presented from a calibrated speaker at moderately loud levels in front of the patient while a competing background ‘speech babble’ noise is presented from a calibrated speaker behind the patient. This arrangement is effective in recreating the conditions where the patient normally struggles to communicate. Typically, the loudness of the ‘speech babble’ signal behind the patient is adjusted in relation to the loudness of the speech signal in front of the patient. At each level, the patient is instructed to repeat the speech signal that is presented in front of them.”
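The arrangement Johnson describes amounts to controlling the signal-to-noise ratio (SNR): the speech level in front of the patient stays fixed while the "speech babble" level behind them is adjusted step by step. A minimal sketch of that level arithmetic follows; the 65 dB SPL speech level and the 5 dB babble steps are illustrative assumptions, not values from the lecture or any specific clinical test.

```python
# Sketch of the level arithmetic behind a speech-in-noise test: speech is
# presented at a fixed level from a front speaker while competing "speech
# babble" from a rear speaker is raised in steps. The SNR (speech level
# minus babble level) gets progressively harder. All dB values are assumed.

SPEECH_LEVEL_DB = 65  # assumed fixed speech presentation level, dB SPL

def snr_for_babble_levels(babble_levels_db):
    """Return the SNR for each babble-level condition (speech minus babble)."""
    return [SPEECH_LEVEL_DB - b for b in babble_levels_db]

# Babble rises from easy (much quieter than speech) to hard (equally loud)
babble_steps = [45, 50, 55, 60, 65]
print(snr_for_babble_levels(babble_steps))  # [20, 15, 10, 5, 0]
```

At each SNR the patient repeats back what was heard, so performance can be charted against listening difficulty rather than against quiet-room thresholds.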
Beck also outlined various tools that can reduce background noise and improve a patient's ability to understand speech in its presence. For example, directional microphone systems combine multiple microphones so that, used together, they pick up sounds from the front while reducing sounds that come from behind. Beck also explained remote microphones, accessories that can be paired with a patient's hearing aids to provide wireless, direct input of a desired sound source into the hearing aids over distances of 30 to 50 feet, restoring a patient's ability to enjoy communication even in complex sound environments.
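One common way such multi-microphone systems attenuate sound from behind is a first-order differential design: the rear microphone's signal is delayed by the acoustic travel time between the microphones and subtracted from the front microphone's signal, which nulls a wave arriving from the rear. The toy below illustrates only that principle; the one-sample delay and the signals are assumptions, not details of any Oticon product discussed in the lecture.

```python
# Toy first-order differential microphone: out[n] = front[n] - rear[n - delay].
# A sound arriving from BEHIND reaches the rear mic first and the front mic
# `delay` samples later, so the delayed subtraction cancels it; sounds from
# the front survive. The delay and waveform here are illustrative.

DELAY = 1  # assumed inter-microphone travel time, in samples

def differential_output(front, rear, delay=DELAY):
    """Subtract a delayed copy of the rear mic from the front mic."""
    return [f - (rear[n - delay] if n >= delay else 0.0)
            for n, f in enumerate(front)]

src = [0.0, 1.0, -1.0, 1.0, -1.0, 0.0]          # waveform from behind
rear_mic = src                                    # rear mic hears it first
front_mic = [0.0] * DELAY + src[:-DELAY]          # front mic hears it later
print(differential_output(front_mic, rear_mic))   # all zeros: rear sound nulled
```

The same subtraction leaves a front-arriving sound largely intact, since then it is the front microphone that leads in time and the delayed rear copy no longer lines up.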
“This lecture will particularly aid our audiology students as they prepare for their careers,” said Johnson. “As the students develop their knowledge and skills in the area of managing speech in noise for their patients, they will be equipped to guide their patients to levels of communication that traditional hearing aids and other amplification devices cannot deliver.”