Thursday, June 6, 2013
Multiple studies have shown that objects and events can often be detected by more than one sensory system. In addition, interactions between sensory systems (sometimes referred to as recruitment) have been found to increase the accuracy and completeness of one's perception. Common wisdom holds that some people actually hear speech better when looking directly at the speaker's face. Indeed, there are clear benefits from visual cues (seeing the speaker's face) when trying to communicate with another person, especially in a noisy environment. Studies by Jacobs et al. on visual-auditory interactions have demonstrated perceptual advantages of combining information from these two modalities: in some instances, adding visual cues yields up to a 60% improvement in word recognition in a noisy environment compared with presenting the audio information alone. This work is being applied to develop and evaluate a new signal processing approach in which audio and visual information are fused to improve speech intelligibility in noisy environments for Veterans with dual-sensory (hearing and vision) loss, as well as those with hearing loss alone.
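The post does not describe the fusion method itself, but the general idea of combining audio and visual speech cues can be illustrated with a "late fusion" sketch: each modality scores candidate words independently, and the scores are combined with a reliability weight that shifts toward the visual channel as noise increases. Everything below (the function, the weights, the example probabilities) is a hypothetical illustration, not taken from the study.

```python
def fuse_scores(audio_probs, visual_probs, audio_weight):
    """Combine per-word probabilities from two modalities.

    audio_weight in [0, 1] reflects confidence in the audio channel
    (lower in noisier environments). Uses a weighted geometric mean,
    a common late-fusion rule, then renormalizes to probabilities.
    """
    assert audio_probs.keys() == visual_probs.keys()
    fused = {
        word: (audio_probs[word] ** audio_weight)
              * (visual_probs[word] ** (1.0 - audio_weight))
        for word in audio_probs
    }
    total = sum(fused.values())
    return {word: p / total for word, p in fused.items()}

# Illustrative numbers: noisy audio confuses "bat" and "pat",
# while lip reading clearly rules out "mat" and favors "bat".
audio = {"bat": 0.40, "pat": 0.45, "mat": 0.15}
visual = {"bat": 0.50, "pat": 0.45, "mat": 0.05}

# In heavy noise we down-weight the audio channel.
noisy = fuse_scores(audio, visual, audio_weight=0.3)
best = max(noisy, key=noisy.get)  # the visual cue tips the decision
```

The weighting is the key design point: in quiet conditions the audio evidence dominates, while in noise the visual cue can rescue a word the audio channel alone would get wrong, which is one way to think about the word-recognition gains described above.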