Multisensory perception

How do we perceive objects and events through multiple sources of sensory information?

Our senses provide multiple signals that can be used for perception, but the information these signals carry is far from perfect: signals are not always sufficient to uniquely determine their environmental causes, and they are affected by noise arising from transduction and neural processing. Perceptual estimates can therefore only be "guesses" about the state of the world. To maximize the chances of making a good guess, the brain needs to use the available information in the best way possible, whether that information is sensed at the moment or retrieved from memory (e.g., Kuschel, Di Luca, & Klatzky, 2010; Hartcher-O'Brien, Di Luca, & Ernst, 2014). During normal interaction with the environment, the brain learns which signals are likely to co-occur statistically. Integration is then more likely to occur with congruent cues and to break down when there are large spatial, temporal, or structural discrepancies between the signals (Battaglia, Di Luca, Ernst, et al., 2010). The contingency between signals may change over time, and adaptation occurs when a new contingency is prolonged (Ernst & Di Luca, 2011; Machulla, Di Luca, & Ernst, 2012).

To complicate things, perception is not independent of action: the sensed information often depends on the property being judged (Di Luca, Domini, & Caudek, 2010) or on the movements performed (Di Luca, 2011). As a result, our actions (and several other environmental causes) make the sensory stimuli vary continuously. Such changes in information over time have very interesting consequences for the modelling of computational mechanisms of perception (Di Luca, Knörlein, Ernst, & Harders, 2011).
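The idea of combining noisy signals "in the best way possible" is commonly formalized as maximum-likelihood cue integration, in which each cue is weighted by its reliability (inverse variance). The sketch below illustrates that standard model with hypothetical numbers; it is not a description of any specific implementation from the studies cited above.

```python
import numpy as np

def integrate_cues(estimates, variances):
    """Combine independent, Gaussian-noise cue estimates by
    reliability-weighted averaging (maximum-likelihood integration)."""
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    combined = float(np.dot(weights, estimates))
    # The integrated estimate is more reliable than any single cue:
    combined_variance = 1.0 / reliabilities.sum()
    return combined, combined_variance

# Hypothetical example: a visual and a haptic size estimate (in mm),
# with the haptic cue four times noisier than the visual one.
size, var = integrate_cues([10.0, 12.0], [1.0, 4.0])
# size -> 10.4 (pulled toward the more reliable visual cue)
# var  -> 0.8  (lower than either single-cue variance)
```

Note that the combined variance (0.8) is smaller than the variance of either cue alone, which is the signature benefit of integration; conversely, when cues are discrepant, down-weighting or breaking integration avoids combining signals with different causes.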

Senior Lecturer

Dr Di Luca is a Senior Lecturer at the University of Birmingham (UK) in the Centre for Computational Neuroscience and Cognitive Robotics.