Launched in January 2016 and renewed in December 2019 for a second funding period (2020-2023), the Transregional Collaborative Research Centre on "Crossmodal Learning" (CML) is an interdisciplinary cooperation between the established fields of artificial intelligence, psychology and neuroscience, aimed at developing crossmodal learning into a discipline in its own right. Our goal is to make the centre the primary research vehicle at the focal point of this new discipline. Building on an extensive groundwork of collaborative research between Germany and China, the centre is jointly funded by the DFG (Deutsche Forschungsgemeinschaft) and the NSFC (Natural Science Foundation of China) as an international collaboration between the University of Hamburg, the University Medical Center Hamburg-Eppendorf (UKE), three top universities in China (Tsinghua University, Beijing Normal University and Peking University) and the Institute of Psychology of the Chinese Academy of Sciences, all located in Beijing, China.
The long-term goal of our research is to develop a framework describing the neural, cognitive and computational mechanisms of crossmodal learning. This framework will allow us to pursue the following primary sub-goals of the research programme: (1) to enrich our current understanding of the multisensory processes underlying the human mind and brain, (2) to create detailed formal models that describe crossmodal learning in both humans and machines, and (3) to build artificial systems for tasks requiring a crossmodal conception of the world.
The term crossmodal learning refers to the adaptive, synergistic integration of complex perceptions from multiple sensory modalities, such that the learning that occurs within any individual sensory modality can be enhanced with information from one or more other modalities. Crossmodal learning is crucial for human understanding of the world, and examples are ubiquitous: learning to grasp and manipulate objects, learning to read and write, and learning to understand language. In all these examples, visual, auditory, somatosensory or other modalities have to be integrated.
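The benefit of integrating modalities can be illustrated with a deliberately simple numerical sketch (not one of the centre's models): two noisy "sensory" channels observe the same latent quantity, and fusing them yields a better estimate than either channel alone. The variable names and the equal-noise averaging rule here are illustrative assumptions, chosen to mirror the classic cue-integration setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# A latent quantity (e.g. an object property) observed through two
# noisy modalities, here labelled "visual" and "auditory" for illustration.
latent = rng.normal(size=1000)
visual = latent + rng.normal(scale=1.0, size=1000)
auditory = latent + rng.normal(scale=1.0, size=1000)

# Unimodal estimate: rely on a single modality.
err_visual = np.mean((visual - latent) ** 2)

# Crossmodal estimate: integrate both modalities. With equal, independent
# noise, simple averaging is the optimal linear fusion rule.
fused = (visual + auditory) / 2
err_fused = np.mean((fused - latent) ** 2)

print(err_visual, err_fused)  # fusion roughly halves the squared error
```

In this toy setting the fused estimate's error variance drops from 1.0 to about 0.5, a minimal quantitative analogue of one modality being "enhanced with information from" another.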
For details, please read on: