Crossmodal Paradox

21 Dec 2018

Crossmodal correspondences, Crossmodal Congruency, Perception, Sensorimotor response, Cognitive priming

This page discusses my research on Crossmodal Correspondences: perceptual associations between sensory modalities. I am interested in how these correspondences can be exploited to support user experience, efficiency, and safety. Can crossmodal perception be enhanced by cognitive priming? How do crossmodal contradictions affect the integration of information? You can read the preprint of the paper here.

Research Background

In the field of human-automation interaction, the question of how to improve operational efficiency and safety, especially in time-sensitive and/or safety-critical settings, has long been a central topic of discussion.

Many system designs, such as monitoring systems in factories and power plants, or semi-automated driving, require careful consideration of how multi-sensory information is displayed.

One strategy for improving a system's usability is to present multisensory information in a way that is consistent with everyday perceptual regularities. One such multisensory regularity is known as Crossmodal Correspondences (CCs).

Wassily Kandinsky, Delicate Tension, No. 85, 1923, watercolor and ink on paper.

and from Zhu Da’s spiritual reflection:

Zhu Da, Hua Niao Series, 1626-1705, Calligraphy and ink on paper.

Research Questions and Aim

The phenomenon of CCs has been extensively investigated in cognitive science. Using well-controlled experimental paradigms (conventionally the speeded classification paradigm), researchers have discovered more and more associations, not only between the visual and auditory modalities, but also between the visual and haptic, as well as the auditory and haptic, modalities.
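To make the speeded classification paradigm concrete, here is a minimal, purely illustrative sketch of the trial logic in Python. The stimulus pairing (high pitch with small size), the response-time values, and all function names are hypothetical assumptions for illustration; this is not the experiment code used in the paper, which measures real participants' reaction times rather than simulating them.

```python
# Illustrative sketch of a speeded-classification trial structure.
# The pitch-size pairing and RT values below are hypothetical, not from the paper.
import random
from statistics import mean

# Hypothetical crossmodal pairing: high pitch <-> small size, low pitch <-> large size.
CONGRUENT = {("high", "small"), ("low", "large")}

def make_trials(n, seed=0):
    """Generate n trials, each pairing an auditory pitch with a visual size."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        pitch = rng.choice(["high", "low"])
        size = rng.choice(["small", "large"])
        trials.append({"pitch": pitch, "size": size,
                       "congruent": (pitch, size) in CONGRUENT})
    return trials

def simulate_rt(trial, rng):
    """Toy response-time model: congruent pairs yield faster responses (in ms)."""
    base = 450 if trial["congruent"] else 510  # illustrative values only
    return base + rng.gauss(0, 30)

def congruency_effect(trials, seed=1):
    """Mean RT difference (incongruent minus congruent), in ms."""
    rng = random.Random(seed)
    rts = {True: [], False: []}
    for t in trials:
        rts[t["congruent"]].append(simulate_rt(t, rng))
    return mean(rts[False]) - mean(rts[True])

trials = make_trials(200)
effect = congruency_effect(trials)
print(f"simulated congruency effect: {effect:.1f} ms")
```

A positive congruency effect (incongruent responses slower than congruent ones) is the signature the paradigm looks for; in a real study the reaction times come from participants, and the effect is tested statistically.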

However, efforts to implement CCs in human-computer interaction (HCI) face certain limitations, for the following reasons:

In this project, we aim to tackle the following two questions:

Method, Results and Implications for Design

If you are interested in the discoveries and contributions of this project, you can read the published paper here (preprint).

Title: Exploring crossmodal perceptual enhancement and integration in a sequence reproducing task with cognitive priming

DOI: 10.1007/s12193-020-00326-y