Multisensory integration


Consider a familiar phenomenon. In the “ventriloquist effect,” you hear a voice and you see the ventriloquist dummy’s mouth moving. You thereby hear the voice coming from the dummy. In another, laboratory-created phenomenon, the “McGurk effect,” you view a film of a man saying /ga/, /ga/, /ga/ repeatedly, synchronized with an audio track in which he says /ba/, /ba/, /ba/. When you look at his mouth, you hear him say /da/, /da/, /da/. But when you close your eyes, you hear what’s on the audio track: /ba/, /ba/, /ba/. In both of these cases, and many others, data from one sense modality are reconciled with data from another.


In one respect, data reconciliation is a familiar process. Suppose, for example, that it is raining heavily, but you see a man out in the rain who appears to be completely dry. You think: perhaps he is under cover, behind glass. Here you have arrived at a hypothesis that reconciles data that do not prima facie fit together. The cases above are in one way similar to this. In the ventriloquist effect, audition alone would locate the source of the voice at the human speaker, while the visual experience suggests that the dummy is the speaker. The cross-modal effect is an experience as of the dummy speaking. In the McGurk case, the facial movements you see are inconsistent with the phonemes on the audio track, and so vision and audition together modify the record, making you hear something closer to what vision predicts. But these cases are in another way importantly different from the hypothetical dry-man case. In the latter, data reconciliation is performed by the mind, by something like inference or judgement. In the former cases, the data are reconciled by the senses themselves, and the result is a distinctively altered sensory appearance.


Traditional models can easily accommodate cognitive data reconciliation (the dry-man case), but not data reconciliation by the senses (the ventriloquist, McGurk, and other cases, e.g. the motion-bounce illusion). For example, a simple feed-forward or hierarchical model of sensory processing renders the above phenomena incomprehensible: if auditory and visual processing streams are always separate, then there is no available explanation for the influence of one modality on experience in another (visual on auditory in the McGurk case; auditory on visual in the motion-bounce illusion). So some model of the integration of sensory processing is needed, one that explains data reconciliation at the level of the senses—that is, one that explains cross-modally effected differences in how we sense (not merely judge) the world. A central task of this research network is to develop models of multisensory integration.
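By way of illustration only (a minimal sketch drawn from the cue-combination literature, not a model the network endorses): suppose each modality delivers a noisy estimate of a single quantity, say spatial location, with vision reporting $\hat{s}_V$ (variance $\sigma_V^2$) and audition reporting $\hat{s}_A$ (variance $\sigma_A^2$). Reliability-weighted (maximum-likelihood) integration combines the two as

\[
\hat{s} = w_V\,\hat{s}_V + w_A\,\hat{s}_A,
\qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2},
\qquad
w_A = 1 - w_V .
\]

Since vision typically localizes sources with far lower variance than audition ($\sigma_V^2 \ll \sigma_A^2$), the fused estimate $\hat{s}$ lies almost wholly at the visually specified location, which is one way of formalizing why the voice seems to come from the dummy. The point of the sketch is just that such reconciliation can be carried out within sensory processing itself, prior to inference or judgement.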





GOALS


The investigation of the senses is among the most developed areas of experimental psychology and cognitive neuroscience. And as conceived by philosophers, the senses play an essential role in human cognition and behaviour, providing rational grounds for knowledge and action. Traditionally, these disciplines have understood the senses as separate channels for environmental information. Recently, however, it has become clear that the senses together form an integrated knowledge-gathering system. Our starting point is to determine the nature of this integration.


There is currently no agreed-upon framework to account for the quantity and variety of new results concerning multisensory integration. But in all contributing disciplines, researchers agree that such a framework is needed to understand the formal, informational, phenomenal, and functional character of the senses and their integrative processes. Our network of philosophers and sensory scientists aims to generate new, genuinely comprehensive theoretical models of the senses. These models will encompass all the senses: what is common to them, how they interact with one another, how information derived from different senses is integrated, and how this information is used by perceiving subjects to acquire sensory knowledge. This will be a new paradigm for research on the senses.

SOME SUBJECTS TO BE INVESTIGATED

Touch, Taste, and Smell


Touch and taste, traditionally considered single senses, pose challenges for how we understand sense modalities. Touch has receptors that are sensitive to quite different kinds of stimulation: pressure, stretching of the muscles, pain, and temperature. The qualities detected by each of these kinds of receptor stay somewhat separate when the subject is passive. But when a subject actively explores an object by touch—poking, probing, palpating, and stroking it—touch (or haptic perception, as it is called when it is active) integrates the information gathered by the different receptors. Passive touch is felt as an event in the perceiver’s body; active touch delivers awareness of the character of the things touched—things outside the subject’s body.


Similarly, what we ordinarily call taste is the product of many different receptors: those on the tongue, which are sensitive to a small range of qualities (sweet, sour, bitter, . . . ); those in the nose, which react differently depending on whether something is sniffed (orthonasal olfaction) or sends volatile chemicals to the nose from the mouth (retronasal olfaction); those that affect the trigeminal nerve (the coolness of mint and the warmth of chillies); the sounds produced by chewing; and so on. Flavour (as distinct from taste) is the product of all of these, and is fully sensed only when a subject actively explores the sensations produced by food or drink that is swished, chewed, swallowed, etc. Flavour is also temporally extended: food and drink leave a sensation in the mouth that may differ from how they taste when first sampled.


Some questions for investigation include: How does multi-receptor integration work in these cases? Is it similar to multisensory integration? What is the role of active sampling in touch and taste? Is active sampling always necessary for multisensory integration?

Individuating the senses


Since the ancient Greeks, the senses have been divided into separate modalities. Aristotle distinguished five: vision, audition, olfaction, touch, and taste; and common sense today echoes this division. Philosophers since Aristotle have identified a variety of criteria by which distinctions in modality may be drawn: ordinary language, the kinds of properties represented in perception, phenomenal character, type of stimulus, neuro-anatomical structure. No matter which criterion one prefers, the multimodal phenomena above cast doubt on a clean demarcation of the perceptual modalities. And scientific investigation of “exotic” sensory capacities—for example, echolocation in bats and infrared sensation in pit vipers—suggests that not all senses map readily onto the traditional five.


Scepticism regarding the individuation of the senses may be coupled with another challenge. On the assumption that sensory data-streams do communicate with each other, resulting in multimodal experiences of the kind described above, one might ask: of what theoretical importance is the distinction between sense modalities? Perhaps there is a single data-pool, rather than distinct data-streams. There are, of course, distinct sensory surfaces; but if they all feed into integrated data-processing, what is the importance of dividing the sensory process by modality? This is a second, and importantly related, strand of our research: how and why should we identify and individuate the senses?


Contact: thesenses.utm@utoronto.ca
