Projects - Area Z
Three Integration Initiatives: II-T, II-M, II-R
Due to the highly interdisciplinary nature of our research, our planned centre will place a strong emphasis on collaboration across disciplines. Every project is a proposal for collaborative research involving at least two PIs with different, complementary backgrounds. This emphasis on integration is woven deeply into both our overall research strategy and the focus of the research itself: each project combines multiple modalities, multiple computational or neurocognitive disciplines, and multiple approaches across a range of perspectives contributed by international research groups from both Hamburg and Beijing.
Integration is therefore an essential part of the centre. To focus, evaluate, and demonstrate the progress of the centre’s research, the proposal includes three overarching Integration Initiatives (IIs). While each project has its own specific goals, the Integration Initiatives channel and organize the collaborative efforts of the participants, advance the state of the art, monitor progress towards our six objectives (see above), and produce the robotics demonstrator. The three initiatives are:
- II-T: Theory,
- II-M: Models,
- II-R: Robotics,
where the third of these will produce a robot demonstrator that illustrates many of the scientific advances of the centre on a common robotics platform.
Integration Initiative II-T: A theoretical framework for crossmodal learning
The purpose of II-T is threefold: to organize integrative activities and events within the centre that help develop a shared, interdisciplinary view on crossmodal learning by identifying common concepts and interdisciplinary links between neuroscientific studies and work on artificial systems; to coordinate the dissemination efforts of the centre; and to aid the construction of a shared theoretical framework from the results of the centre. This will lead to a framework for comparing and reconciling the nature and explanatory potential of different theoretical approaches. Particular emphasis will be placed on comparing the different theoretical approaches to crossmodal learning pursued by individual projects from different disciplines.
A key role of II-T is to provide a backbone for the training activities in the Integrated Research Training Group of the TRR; II-T is therefore integrated with the central project Z2. Moreover, all projects will contribute to II-T by participating in the training activities and events and by contributing to the theoretical framework that will reflect the results of the discussion process. During the first funding period, a broad set of activities was devoted to discussing theories and concepts of crossmodal learning. Several workshops and webinars focused on concepts and theoretical models; these began in Hamburg in June 2016 and continued at the summer schools in Beijing that same year, in Hamburg in 2017, and in Beijing in 2018. In addition, we have organized several satellite symposia at key meetings of the different communities working on natural and artificial systems (e.g. IMRF and IROS), integrating activities of II-T with II-M and II-R.
In the second funding phase, II-T activities will focus on the next level of theory integration. Work will progress from surveying and relating concepts at a general level to specific questions that help integrate activities across TRR169 projects. In particular, we will focus on conceptual gaps between TRR169 subprojects, on how collaboration across projects can fill these gaps, and on how theories can bridge the different disciplines involved. We will jointly discuss concepts and embed ideas in the collaborations across TRR169 projects. Results will be integrated and summarized in theory workshops, summer schools, and satellite symposia at key meetings and large conferences.
Integration Initiative II-M: Computational models of crossmodal learning
In the second funding phase, II-M has two primary functions: First, it will facilitate the integration of neurocomputational models from all projects over all three areas of the centre. Second, II-M will further develop a model architecture tailored for a demonstrator platform and task. These two complementary efforts on integration and architecture will realize the centre’s strategic goal for the second phase: to develop, integrate, and evaluate neurocognitive models for crossmodal robot enhancement. II-M will also provide support for facilitating and implementing joint human-robot experimental setups. It will closely collaborate with the other two integration initiatives: the neurocomputational models will be informed by the theoretical framework, and practical experience with the models will give valuable feedback for refining the underlying theory. In addition, II-M will support the embodiment of the developed neurocognitive models in robotic platforms in tight collaboration with II-R.
In the first funding period, II-M was tasked with the development of an evaluation platform and task; the integration of the software models; and the evaluation of the integrated models in the testing environment. For example, II-M developed NICO (Neuro Inspired Companion robotic platform), a child-sized humanoid robot endowed with human-like sensorimotor abilities and an approachable design for social communication. This platform filled a gap in the state of the art, as robotic platforms are usually designed for either sensorimotor tasks or social interactions, but rarely both. The development of the platform was accompanied by the development of the laboratory environment for social HRI. For the integration of neurocognitive models, II-M developed an API for NICO that encompasses easily accessible basic robotic functionality and control of social cue mechanisms, as well as interfaces to common robot control and neurocomputational frameworks. This involved diverse crossmodal learning models for sensorimotor and HRI tasks. These models have been evaluated both with behavioural and performance measures and with an array of established HRI methods such as questionnaires and structured interviews.
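The idea of such an API layer can be illustrated with a minimal sketch: a thin facade that exposes motion primitives and social-cue control behind simple calls, decoupled from the hardware backend. All class and method names below are illustrative stand-ins, not the actual NICO API, and the hardware layer is simulated so the sketch runs without a robot.

```python
# Hedged sketch of a high-level robot API in the spirit of the NICO interface
# described above. Names are hypothetical; the backend is a pure simulation.

class SimulatedRobot:
    """Stand-in for the hardware layer (no real robot required)."""
    def __init__(self):
        self.joint_angles = {}
        self.log = []

    def set_joint(self, name, angle):
        self.joint_angles[name] = angle
        self.log.append(f"joint {name} -> {angle}")


class RobotAPI:
    """High-level facade: motion primitives plus social-cue mechanisms."""
    def __init__(self, backend):
        self.backend = backend

    def move_head(self, pan, tilt):
        # One high-level call maps to several low-level joint commands.
        self.backend.set_joint("head_pan", pan)
        self.backend.set_joint("head_tilt", tilt)

    def express_emotion(self, emotion):
        # Social cue, e.g. a facial display on the physical platform.
        self.backend.log.append(f"emotion: {emotion}")


robot = RobotAPI(SimulatedRobot())
robot.move_head(pan=15, tilt=-5)
robot.express_emotion("happy")
print(robot.backend.log)
```

Because the facade only depends on the backend interface, the same high-level calls could target a simulator, a common robot-control framework, or the physical platform.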
Development of an evaluation platform, task, and laboratory environment. Social communication via natural language, gestures, emotion expression, and joint attention is key to successful and engaging human-robot collaboration, and therefore a vital building block for the centre's overarching goal. II-M will thus provide a robotic platform and laboratory environment for learning crossmodally grounded, social communication, and will continue the successful development of NICO. Training the diverse neurocomputational models embodied in the platform requires an equally unique laboratory setup that allows for controlled and repeatable learning scenarios, which can run with minimal supervision over the course of many days while offering realistic input from real robot sensors. To this end, we will further develop the HRI laboratory projection environment. Most neurocognitive models are trained and evaluated on a pre-recorded dataset. In the centre, we instead adapt methodologies from deep reinforcement learning and use active exploration of the environment to enhance the learning process in a way that takes full advantage of the robotic embodiment. Combining active, crossmodal exploration of the environment with social interaction abilities will create a developmentally inspired learning scenario in which the learning of the embodied models is facilitated not only by interaction with the environment but also by active requests from human teachers.
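The contrast between learning from a fixed dataset and learning by active exploration can be made concrete with a minimal sketch: an epsilon-greedy agent repeatedly queries a simulated environment and improves its value estimates online from the rewards it receives. The environment, reward values, and hyperparameters below are purely illustrative, not taken from any TRR169 model.

```python
# Hedged sketch of active exploration in the reinforcement-learning sense:
# the agent gathers its own training data by acting, instead of consuming a
# pre-recorded dataset. All numbers here are illustrative.
import random

random.seed(0)

TRUE_REWARDS = [0.2, 0.8, 0.5]   # hidden quality of each of three actions
estimates = [0.0, 0.0, 0.0]      # the agent's learned value estimates
counts = [0, 0, 0]
EPSILON = 0.1                    # exploration rate

def pull(action):
    """Simulated environment: noisy reward for the chosen action."""
    return TRUE_REWARDS[action] + random.gauss(0, 0.1)

for step in range(2000):
    if random.random() < EPSILON:                    # explore at random
        action = random.randrange(3)
    else:                                            # exploit current knowledge
        action = max(range(3), key=lambda a: estimates[a])
    reward = pull(action)
    counts[action] += 1
    # incremental mean: pull the estimate towards the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(range(3), key=lambda a: estimates[a])
print("learned best action:", best)
```

The same loop structure carries over to the embodied setting: the "actions" become exploratory robot behaviours, and the "rewards" come from real sensor feedback or from a human teacher's response.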
Evaluation of models. The social HRI system described above will be evaluated on its ability to take part in multi-person social interaction using natural language and crossmodal social cues, to acquire language and communication abilities from interaction with its environment and human teachers, and to respond to possibly conflicting auditory and visual cues with biologically plausible head and eye movements. This acquisition of abilities will be evaluated with established methods of neurocomputing, i.e., learning and the resulting behaviour will be analysed and evaluated against never-before-seen test data. Beyond these measures, the unique interdisciplinary composition of the centre allows for a behavioural comparison of the computational models to data collected from human participants who perform in the same experimental setup as the robotic embodied model. Using imaging methodology from neuroscience, we will be able to extend the behavioural comparison to a comparative representational analysis. For the integrated model architecture, the resulting robotic behaviour and user experience will also be analysed using established instruments from HRI research.
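The held-out evaluation protocol mentioned above can be sketched in a few lines: a model is fitted on training data and then scored only on test data it has never seen. The tiny nearest-centroid classifier and synthetic two-dimensional data below are stand-ins for the centre's neurocomputational models and recorded sensor data.

```python
# Hedged sketch of evaluation against never-before-seen test data.
# Model and data are illustrative placeholders.
import random

random.seed(1)

def sample(center, n):
    """Synthetic 2-D points scattered around a class centre."""
    return [(center[0] + random.gauss(0, 0.3),
             center[1] + random.gauss(0, 0.3)) for _ in range(n)]

# two classes; train and test sets are drawn independently
train = [(p, 0) for p in sample((0, 0), 50)] + [(p, 1) for p in sample((2, 2), 50)]
test  = [(p, 0) for p in sample((0, 0), 20)] + [(p, 1) for p in sample((2, 2), 20)]

def fit(data):
    """Nearest-centroid 'model': remember the mean of each class."""
    centroids = {}
    for label in (0, 1):
        pts = [p for p, y in data if y == label]
        centroids[label] = (sum(x for x, _ in pts) / len(pts),
                            sum(y for _, y in pts) / len(pts))
    return centroids

def predict(centroids, p):
    """Assign the class whose centroid is closest to point p."""
    return min(centroids, key=lambda c: (p[0] - centroids[c][0]) ** 2
                                        + (p[1] - centroids[c][1]) ** 2)

model = fit(train)                                   # fitted on training data only
accuracy = sum(predict(model, p) == y for p, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The essential point is the strict separation: the test set plays no role in fitting, so the reported accuracy measures generalization rather than memorization.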
Integration Initiative II-R: Crossmodal Human-Robot collaboration
In the second funding phase, II-R also has two primary functions: First, it will facilitate the integration of physical collaboration scenarios on robotic demonstrators and provide a laboratory with an advanced tracking system for joint human-robot experimental setups for collaboration. Second, II-R will further develop the architecture and software for these robotic platforms and the laboratory. These two parallel efforts on integration and architecture will realize the centre’s remaining strategic goals for the second phase: to develop, integrate, and evaluate physical human-robot collaboration tasks. Furthermore, II-R will facilitate applications for human enhancement and support in collaboration with laboratories from neuroscience and psychology.
It should be noted that the goals of the demonstrators to be developed by II-R are not application-oriented but purely scientific: to provide different platforms for illustrating the scientific achievements of the centre on real-world robots; in particular, the demonstrators are not intended to solve a specific unresolved problem in robotics. The integration scenario from the first phase will be extended: we will continue the work on collaborative, interactive human-robot scenarios and expand it with the support and development of applications for rehabilitation scenarios using the physical robotic platforms and the tracking laboratory. Essential to the goals of the centre, the robots will not merely integrate crossmodal information but will also learn and improve over time. Furthermore, we will implement scenarios together with projects from the TRR, with a focus on the integration and adaptation of auditory, visual, and tactile modalities.