Webinars
This webpage lists the CML "Lectures on Crossmodal Learning", organized as part of the CML teaching and education activities in phase 2 of the project. For an overview of our webinars from phase 1, please click here.
All Principal Investigators of the project contribute lectures on the fundamentals of their research on crossmodal learning and integration. Combining insights from psychology, neuroscience, deep learning, and robotics, the lectures taken together provide a comprehensive overview of methods, software, and recent research results from the CML project.
All lectures are planned for 60 minutes, followed by 30 minutes of moderated discussion. Given the diversity of topics and of the audience, the talks will introduce and explain any special methods and techniques, so that students and researchers from different fields will be able to follow. You are invited to participate in the discussion; don't hesitate to prepare and post your questions in advance. During the discussion phase, the moderator will select from the pending questions.
The lectures will be recorded and broadcast live, and will also be made available for offline viewing after the event.
Upcoming Talks
-
Title: Modalities in Conflict: Spatial Interaction in the Metaverse
Speaker: Prof. Dr. Frank Steinicke, Informatics Dept., University of Hamburg
Date: February 02, 2023
Time: 09:00-10:30 CET (Hamburg), 16:00-17:30 Beijing time
Abstract: The fusion of extended reality (XR) and artificial intelligence (AI) will revolutionize human-computer interaction. XR/AI technologies and methods will enable scenarios with seamless transitions, interactions, and transformations between real and virtual objects along the reality-virtuality continuum that are indistinguishable from corresponding real-world interactions. Yet, today's immersive technology is still decades away from the ultimate display. However, imperfections of the human perceptual, cognitive, and motor system when it comes to resolving conflicts in multimodal cues can be exploited to bend reality in such a way that compelling immersive experiences can be achieved. In this talk, we will review some XR illusions which bring us closer to the ultimate blended reality.
Biography: Frank Steinicke is professor of Human-Computer Interaction at the Department of Informatics at the Universität Hamburg. His research is driven by understanding human perceptual, cognitive, and motor abilities and limitations in order to reform interaction as well as experience in computer-mediated realities.
He studied Mathematics with a minor in Computer Science at the University of Münster, from which he received his Ph.D. in 2006 and the Venia Legendi in 2010, both in Computer Science. He has published about 300 peer-reviewed scientific publications and has served as program chair for several XR- and HCI-related conferences. He chairs the steering committee of the ACM SUI Symposium and is a member of the steering committee of the GI SIG VR/AR. Furthermore, he is a member of the editorial boards of IEEE Transactions on Visualization and Computer Graphics (TVCG) as well as the Frontiers section on Virtual Reality and Human Behaviour.
-
Title: Building mental models of others during social decision-making
Speaker: Prof. Dr. Jan Gläscher, Institute of Systems Neuroscience, Universitätsklinikum Hamburg-Eppendorf
Date: February 03, 2023
Time: 09:00-10:30 CET (Hamburg), 16:00-17:30 Beijing time
Zoom Meeting Information
Meeting ID: 644 4966 3613; Passcode: CML_HH_BJ
Zoom (Univ. of Hamburg only): https://uni-hamburg.zoom.us/s/64449663613
Phone one-tap: (Germany) +496971049922,,64449663613#
Telephone: (Germany) +49 69 7104 9922 or +49 69 3807 9883; Meeting ID: 644 4966 3613; Passcode: 913523826
Schedule up to March 2023
Our webinar series will run with two lectures per week, one each on Thursday and Friday. There will be two two-week breaks, one for the Christmas holidays (Dec 2022-Jan 2023) and one for the Chinese New Year holidays (Jan 2023).
We also selected a timeslot that we hope fits both the European and the Asian audience, namely 09:00-10:30 CET (e.g., Hamburg), which corresponds to 16:00-17:30 Beijing time. The preliminary schedule for our webinar series is as follows:
- 03.11.2022: Patrick Bruns (A1), The ventriloquist illusion as a tool to study crossmodal learning.
- 04.11.2022: Ke Zhao and Xiaolan Fu (A1), Sense of Agency in multimodal systems.
- 10.11.2022: Xingshan Li (C7), CRM: A computational model of Chinese reading.
- 11.11.2022: Qingqing Qu (C7), A cross-modality perspective on Chinese language processing.
- 17.11.2022: Michael Rose (B3), Implicit and explicit aspects of human learning and memory.
- 18.11.2022: Qiufang Fu (B3), Implicit learning and unconscious knowledge.
- 25.11.2022: Xun Liu (A5, B4), Domain general/specific mechanisms of cognitive control.
- 01.12.2022: Gui Xue (A3), Cross-modal sequence representations and predictions.
- 02.12.2022: Guido Nolte (B4, C1), Methods to analyze functional coupling between neural populations from noninvasive electrophysiological recording.
- 08.12.2022: Bo Hong (C1), A peek into the core structure and manifold of language in the human brain.
- 09.12.2022: Dan Zhang (B1, C1), Taking an inter-brain perspective for understanding emotion and speech.
- 15.12.2022: Claus Hilgetag (A2), On a connectomic basis of human cognitive abilities.
- 16.12.2022: Jisong Guan (A2), Exploring memory traces and their organization in the mouse neocortex.
- Christmas holidays
- 06.01.2023: Kunlin Wei (C8), Sense of agency in virtual reality and its neural substrate: a brief report of our ongoing research
- 12.01.2023: Chris Biemann (C7), Introduction to Natural Language Processing - with a focus on Crossmodal Learning.
- 13.01.2023: Lihan Chen (C8), Ternus motion as a litmus test for perceptual integration
- Chinese New Year
- 02.02.2023: Frank Steinicke (C8), Modalities in Conflict: Spatial Interaction in the Metaverse
- 03.02.2023: Jan Gläscher (C9), Building mental models of others during social decision-making
- 09.02.2023: Cornelius Weber (C4), Crossmodal language learning
- 10.02.2023: Focko Higgen / Fanny Quandt (A3), Neurocomputational Representation
- 16.02.2023: Timo Gerkmann (A6), Combining Machine Learning and Domain Knowledge for Speech Signal Processing.
- 17.02.2023: Xiaolin Hu (A6), Deep learning for audio and video
- 23.02.2023: Simone Frintrop (A6), Deep learning for audio and video
- 24.02.2023: Yizhou Wang (A4), Robust robot behaviour
- 02.03.2023: Stefan Wermter (A5, C4), Joint attention
- 03.03.2023: Changshui Zhang (A4), Robust robot behaviour
- 09.03.2023: Jun Zhu (B2), Crossmodal Inference
- 10.03.2023: Zhiyuan Liu (C4), Crossmodal language learning
- 16.03.2023: Jianwei Zhang (A4, B5), Robust robot behaviour
- 17.03.2023: Fuchun Sun (B5), Dexterous Manipulation Skills