Thematic Area B: Efficient crossmodal generalization and prediction
Whereas the projects in Area A focus on crossmodal learning and integration as dynamic processes, the projects in Area B investigate how crossmodal learning and integration impact generalization and prediction. Multimodal stimuli generally provide more information than their component unimodal stimuli, yet that extra information is useful only insofar as the stimuli can be integrated.
The integrated percept allows better prediction and enhanced generalization. Projects in Area B investigate the processes by which crossmodal information can enhance generalization and prediction beyond that possible with unimodal information alone. The projects in this Area address research questions such as how biological brains create and exploit crossmodal memories that store multimodal information (B1, 3–4); how to resolve conflicts between unimodal components of crossmodal predictions (B2, 4–5); how humans incorporate signals from one modality to improve generalization and prediction in another (B1–5); and how such insights can be transferred both to statistical models to improve their predictions (B2) and to robots to improve their responses in complex, multisensory environments (B5). A better grasp of these issues will be instrumental in understanding the dynamics of crossmodal learning (Area A) as well as the application of crossmodal learning to human-machine interaction (Area C).
The goal of project B1 (Engel, Hu) is to investigate and model the neural dynamics underlying crossmodal prediction of (auditory and visual) sensory events in the human brain (Arnal & Giraud, 2012). The project will use magnetoencephalography (MEG) for neurophysiological studies and will build neuro-computational models to explain the observed data. The investigation will focus in particular on assessing the extent to which crossmodal prediction is related to oscillatory neural activity in the brain and to large-scale dynamic coupling between brain regions (Engel et al., 2013). This project is a key component of the theory integration initiative (II-T), and the model it produces will contribute to II-M.
The goal of project B2 (Gläscher, Zhu) is to use novel methods of Bayesian analysis to investigate how the integration of crossmodal stimuli in human subjects interacts with learning, semantics and social context. In particular, regularized Bayesian inference (RegBayes; Zhu et al., 2014) provides an elegant way to incorporate modulatory influences into Bayesian models. Using this technique, the project will model behavioural and fMRI data and describe not only how learning, semantics and social context can modulate crossmodal integration, but also how this integration can facilitate or impede learning and decision making.
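The proposal does not fix the details of these Bayesian models, but the basic benefit of crossmodal integration for prediction can be sketched with the textbook reliability-weighted fusion of two Gaussian cue estimates. The function name and numbers below are purely illustrative, not part of any project's specification:

```python
def fuse_gaussian_cues(mean_v, var_v, mean_a, var_a):
    """Reliability-weighted (maximum-likelihood) fusion of two Gaussian
    cue estimates, e.g. a visual and an auditory location estimate.
    Each cue is weighted by its inverse variance (its reliability)."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    mean_c = w_v * mean_v + (1.0 - w_v) * mean_a
    var_c = 1.0 / (1.0 / var_v + 1.0 / var_a)
    return mean_c, var_c

# Example: vision (var 1.0) is more reliable than audition (var 4.0).
mean_c, var_c = fuse_gaussian_cues(mean_v=0.0, var_v=1.0, mean_a=4.0, var_a=4.0)
print(mean_c)  # 0.8 -- pulled toward the more reliable (visual) cue
print(var_c)   # 0.8 -- lower than either unimodal variance
```

The fused variance is always smaller than either unimodal variance, which is the formal sense in which the integrated percept supports better prediction than either modality alone.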
Project B3 (Fu, Gao, Rose) will focus on the distinction between implicit and explicit crossmodal learning. Implicit learning occurs when people incidentally acquire knowledge of the structure of stimuli without awareness of the content of that knowledge (Dienes, 2008; Fu et al., 2008). The primary goal of the project is to investigate how humans learn to predict incoming stimuli from crossmodal cues during implicit learning. To achieve this goal, the project will use (a) behavioural paradigms, (b) high-resolution ERP, ERO and fMRI recordings, and (c) fMRI- and EEG-based BCI. To the theory integration initiative (II-T), project B3 will contribute a single formal perspective, both behavioural and neuroscientific, on crossmodal category and sequence learning; to II-M it will contribute a cognitive model based on ACT-R (Anderson, 2005, 2007) or an SRN (Cleeremans & Dienes, 2008).

When humans make predictions based on the integration of multiple sensory modalities, conflicts often arise among the unimodal components of the prediction. Resolving these conflicts can require additional learning, executive control, or both.
Project B4 (Nolte, Liu) will investigate the time course and neural substrates of crossmodal learning and conflict processing, using behavioural paradigms together with EEG, MEG and fMRI recordings. The goal is to identify spatiotemporal patterns of neuronal activity that reflect crossmodal learning and conflict processing. The project will contribute primarily to the theory integration initiative (II-T), chiefly through its insights into executive control during crossmodal conflict resolution. These insights will also inform the decision-making processes of the learning agents and robots in II-M and II-R, thereby providing opportunities for collaboration with the projects that enlist these processes for neural-inspired decision-making and control, namely A5 (Wermter, Liu) and C4 (Weber, Wermter, Liu), and with B5 (J. Zhang, Sun) and C6 (Steinicke, Fang, Chen) on visual integration and conflict processing for action selection.
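As a hedged sketch of the computational problem (not the model proposed in B2 or B4), a simple causal-inference rule illustrates one way a conflict between unimodal estimates can be resolved: integrate the cues when their discrepancy is consistent with a common cause, and otherwise fall back on the more reliable modality. All function names and parameter values below are hypothetical:

```python
import math

def normal_pdf(x, var):
    """Density of a zero-mean Gaussian with variance var at x."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def resolve_conflict(x_v, var_v, x_a, var_a, var_prior=100.0, p_common=0.5):
    """Toy causal-inference rule for two conflicting unimodal estimates.
    Under a common cause the cue discrepancy reflects only sensory noise;
    under separate causes it also reflects the spread of the two sources."""
    d = x_v - x_a
    like_common = normal_pdf(d, var_v + var_a)
    like_separate = normal_pdf(d, var_v + var_a + 2 * var_prior)
    post_common = (p_common * like_common) / (
        p_common * like_common + (1 - p_common) * like_separate)
    if post_common > 0.5:
        # Integrate: reliability-weighted average of the two cues.
        w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
        return w_v * x_v + (1 - w_v) * x_a, "integrate"
    # Segregate: trust the more reliable modality alone.
    return (x_v if var_v <= var_a else x_a), "segregate"

print(resolve_conflict(0.0, 1.0, 1.0, 4.0))   # small conflict -> integrate
print(resolve_conflict(0.0, 1.0, 30.0, 4.0))  # large conflict -> segregate
```

The threshold behaviour of this rule is the point of contact with B4: whether conflicting cues are merged or kept separate is itself a decision that may engage additional learning or executive control.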
Project B5 (J. Zhang, Sun) is particularly critical to the robotics integration initiative, as it will supply the robotics platform for the demonstrator (II-R). Its goal is to explore methods for integrating crossmodal information during robot fine-motor operations such as grasping, dexterous manipulation and human-robot collaboration. Because many such operations require the integration of multiple sources of sensory information (such as visual, positional, tactile, haptic, slip and force information), the project requires methods for crossmodal learning that can be implemented effectively and efficiently on a robot. B5 will also be instrumental to project A4 (C. Zhang, J. Zhang) by providing a common robotics platform; in return, project A4 will provide multimodal classifiers for object detection in the manipulation experiments, and the results of those experiments will be fed back to A4 to help improve the classifiers. In addition, B5 will collaborate with C6 (Steinicke, Fang, Chen) to develop a system for human guidance of robot activities so as to accelerate robot learning.
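One simple way such per-modality classifiers could be combined on the robot is late fusion of their class scores, weighted by the estimated reliability of each modality. This is a minimal sketch under that assumption; the modalities, weights and scores are invented for illustration and do not describe A4's or B5's actual classifiers:

```python
def late_fusion(scores_by_modality, weights):
    """Reliability-weighted late fusion of per-modality class scores.
    scores_by_modality: dict mapping modality -> {class: probability};
    weights: dict mapping modality -> reliability weight (summing to 1).
    Returns the winning class label and the fused score dictionary."""
    classes = next(iter(scores_by_modality.values())).keys()
    fused = {}
    for c in classes:
        fused[c] = sum(weights[m] * scores_by_modality[m][c]
                       for m in scores_by_modality)
    return max(fused, key=fused.get), fused

# Vision alone is ambiguous between two objects; touch disambiguates.
scores = {
    "vision":  {"mug": 0.5, "bowl": 0.5},
    "tactile": {"mug": 0.9, "bowl": 0.1},
}
label, fused = late_fusion(scores, weights={"vision": 0.6, "tactile": 0.4})
print(label)  # mug
```

Late fusion keeps each modality's classifier independent, which matters on a robot: a failing or occluded sensor can simply be down-weighted without retraining the others.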