Relationships Between Cognitive Abilities and Benefit From Visual and Contextual Cues in a Speech-In-Noise Recognition Task in Individuals With Normal Hearing and Hearing Loss

Research output: Contribution to conference › Conference abstract for conference › Communication

  • Andreea Micula
Background: Visual and contextual cues facilitate speech recognition in suboptimal listening conditions (e.g., background noise, hearing loss). Moreover, successful speech recognition in challenging listening conditions is linked to cognitive abilities such as working memory capacity and fluid intelligence. However, it is unclear which cognitive abilities facilitate the effective use of visual and contextual cues in individuals with normal hearing and hearing loss. The first aim was to investigate whether individuals with hearing loss rely on visual and contextual cues to a greater extent than individuals with normal hearing in a speech-in-noise recognition task. The second aim was to investigate whether working memory capacity and fluid intelligence are associated with the effective use of visual and contextual cues in these groups.
Methods: A group of participants with normal hearing (NH) and a group of hearing aid users (HA) were included (n = 169 per group). The Samuelsson and Rönnberg task was administered to measure speech recognition in speech-shaped noise. The task consists of an equal number of sentences presented in the auditory and audiovisual modalities, both without and with contextual cues (a visually presented word preceding the sentence, e.g., “Restaurant”, “Train”). The signal-to-noise ratio was individually set to 1 dB below the level yielding 50% correct speech recognition in the Hearing-In-Noise Test. All participants received linear amplification: the HA group based on individual audiometric thresholds, the NH group a flat 20 dB gain. The Reading Span test was used to measure working memory capacity and the Raven test to measure fluid intelligence. The data were analyzed using linear mixed-effects modelling.
Results: Both groups exhibited significantly higher speech recognition performance when visual and contextual cues were available. A two-way interaction showed that the HA group performed significantly worse than the NH group in the auditory modality but on par with the NH group in the audiovisual modality. A three-way interaction suggested that the group difference in the auditory modality was moderated by the Raven test score. Additionally, a significant positive relationship was found between the Raven test score and speech recognition performance only for the HA group in the audiovisual modality. There was no significant relationship between the Reading Span test score and performance.
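For readers unfamiliar with this type of analysis, the sketch below shows how a comparable mixed-effects model could be specified on simulated data in Python (statsmodels). It is only an illustration: the variable names (recognition, modality, context, group, raven_z, participant), the toy sample size, and the exact model formula are assumptions, not the authors' actual data or code.

    # Minimal sketch of a linear mixed-effects analysis on simulated data.
    # All names and effect sizes below are illustrative assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    rows = []
    n_per_group = 40  # toy sample size (the study used n = 169 per group)
    for g in ["NH", "HA"]:
        for p in range(n_per_group):
            pid = f"{g}_{p}"
            raven_z = rng.normal()             # standardized fluid intelligence score
            subj_intercept = rng.normal(0, 5)  # random participant effect
            for modality in ["auditory", "audiovisual"]:
                for context in ["no_cue", "cue"]:
                    # Simulated recognition score with benefits of visual and
                    # contextual cues and a deficit for HA listeners in the
                    # auditory-only condition.
                    score = (60
                             + (10 if modality == "audiovisual" else 0)
                             + (8 if context == "cue" else 0)
                             - (6 if g == "HA" and modality == "auditory" else 0)
                             + subj_intercept + rng.normal(0, 5))
                    rows.append(dict(participant=pid, group=g, modality=modality,
                                     context=context, raven_z=raven_z,
                                     recognition=score))
    df = pd.DataFrame(rows)

    # Fixed effects: modality x context x group, plus Raven score interacting
    # with group and modality; random intercept per participant, as in a
    # typical repeated-measures design.
    model = smf.mixedlm(
        "recognition ~ modality * context * group + raven_z * group * modality",
        data=df,
        groups=df["participant"],
    )
    print(model.fit().summary())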
Conclusions: Both NH and HA participants benefited from contextual cues, regardless of cognitive abilities. The HA group relied on visual cues to a greater extent than the NH group, reaching a speech-in-noise recognition performance level similar to that of the NH group in the audiovisual modality despite worse performance in the auditory modality. Importantly, the effective use of visual cues was associated with higher fluid intelligence in the HA group.
Original language: English
Publication date: 2023
Publication status: Published - 2023
Externally published: Yes
Event: 46th Annual ARO MidWinter Meeting
Duration: 10 Feb 2023 → …

Conference

Conference: 46th Annual ARO MidWinter Meeting
Period: 10/02/2023 → …
