CML Colloquium
This page lists the talks of the joint CML colloquium in Hamburg and Beijing.
Winter Semester 2017/2018
Prof. Dr. Kevan Martin, Institute of Neuroinformatics, Zurich
What are all those wires for? The enigma of cortical circuits
ZMNH PhD Seminar
Dr. Julia Schiemann, Centre for Integrative Physiology, University of Edinburgh
Neuromodulatory control of simple & skilled motor movements
ZMNH-Seminar
Frank Keller, Professor in the School of Informatics at the University of Edinburgh
Jointly Representing Images and Text: Dependency Graphs, Word Senses, and Multimodal Embeddings
In this presentation, I will argue that we can make progress in language/vision tasks if we represent images in structured ways, rather than just labeling objects, actions, or attributes. In particular, deploying structured representations from natural language processing is fruitful: I will discuss how visual dependency representations (VDRs), which borrow ideas from dependency parsing, can be used to capture how the objects in a scene interact with each other. VDRs are useful for tasks such as image retrieval and image description. Secondly, I will argue that much more fine-grained representations of actions are needed for most language/vision tasks. Again, ideas from NLP can be leveraged: I will introduce algorithms that use multimodal embeddings to perform verb sense disambiguation in a visual context.
Frank Keller is professor of computational cognitive science in the School of Informatics at the University of Edinburgh. His background includes an undergraduate degree from Stuttgart University, a PhD from Edinburgh, and postdoctoral and visiting positions at Saarland University and MIT. His research focuses on how people solve complex tasks such as understanding language or processing visual information. His work combines experimental techniques with computational modeling to investigate reading, sentence comprehension, and language generation, both in isolation and in a visual context. Prof. Keller serves on the management committee of the European Network on Vision and Language, is a member of the governing board of the European Association for Computational Linguistics, and recently completed an ERC grant in the area of vision and language.
Prof. Dr. Claudia Bagni, Dept. of Fundamental Neurosciences, University of Lausanne
The molecular basis of brain wiring and social behaviour
A feature shared by a large number of brain disorders, including developmental and neurodegenerative diseases, is the malfunctioning of synapses and synaptic connections, hence the term “synaptopathies” for these disorders. Our lab has long been interested in understanding how synaptic protein synthesis and the associated actin remodelling that reshapes synapses are regulated, with particular emphasis on how this occurs during development. A number of neurodevelopmental disorders, such as autism spectrum disorders and intellectual disability, are synaptopathies, and there is increasing evidence that these disorders show dysregulated protein synthesis.
Summer Semester 2017
- Monday, 17 July 2017, 17:15, Campus “Informatikum/Stellingen”, Room D-125
Prof. Dr. Andreas Holzinger, Professor at Medical University Graz
Machine Learning and Knowledge Extraction: the challenge lies in small data sets
The goal of machine learning is to learn from data, to extract and discover knowledge, and to help make decisions under uncertainty. In automatic machine learning (aML), great advances have been made, for example in speech recognition, recommender systems, and autonomous vehicles. Automatic approaches benefit greatly from "big data" with many training samples. Sometimes, however, we are confronted with a small number of complex data sets, where aML suffers from insufficient training samples. The application of such aML approaches in complex domains such as health informatics seems elusive in the near future; a good example are Gaussian processes, where aML (e.g. standard kernel machines) struggles on function-extrapolation problems that are trivial for human learners. In such situations, interactive machine learning (iML) can be beneficial: a human-in-the-loop helps to solve computationally hard problems, e.g. subspace clustering, protein folding, or k-anonymization of health data, where the knowledge and experience of human experts can reduce an exponential search space through heuristic selection of samples. What would otherwise be an NP-hard problem is thus greatly reduced in complexity through the input and assistance of a human agent involved directly in the learning phase. Tackling such challenges needs a concerted effort that fosters integrative ML research between experts from diverse disciplines, from data science to visualization, and requires both disciplinary excellence and a cross-disciplinary skill set with international collaboration.
- Tuesday, 4 July 2017, 11:00 s.t. (sharp), Room F-132, Department of Informatics
Frederic Alexandre, Research Director, Mnemosyne team, Inria Bordeaux
Predicting values and controlling actions: an overview of bio-inspired models of mnemonic synergies
Mnemosyne is a computational neuroscience team in Bordeaux, hosted in the NeuroCampus together with neuroscientists and clinicians. Our scientific positioning concerns mnemonic synergy (hence Mnemosyne): the study of interactions between different kinds of learning and their organization in the brain as a system of memories. I will explain this positioning in more detail and give some examples of current research in the team, including modeling Pavlovian conditioning by considering the different kinds of afferents to the amygdala, and modeling operant conditioning through interactions between the prefrontal cortex and the basal ganglia.
- Thursday, 22 June 2017, 14:00, ZMNH seminar room E.82, Falkenried 94
Prof. Dr. Siegrid Löwel
The dynamic architecture of the adult visual cortex, or how can I keep my brain young?
- Monday, 19 June 2017, 17:15, Informatikum Room B-201, Vogt-Kölln-Str. 30
Prof. Dr. Chris Biemann
Adaptive Language Technologies
Abstract: Automatic natural language understanding enables natural communication with computers and computer-assisted access to the content of large document collections. While classical approaches to artificial intelligence anticipate all possible situations and interactions in the form of a fully specified dialogue model or ontology, they are hard to adapt to new domains and do not cope well with language change. In this talk, I will motivate an adaptive, purely data-driven approach to natural language processing. Illustrated by recent research prototypes, three stages of data-driven adaptation will be presented: feature/resource induction, induction of processing components, and continuous data-driven learning. Finally, I will discuss current research and future directions regarding the integration of symbolic and statistical knowledge, the interpretability of language processing components, as well as advanced forms of information access.
- Wednesday, 07 June 2017, 14:15, Hörsaal Mollerstrasse 10
Prof. Dr. Ian Apperly
Why are there gaps between mindreading competence and performance?
Abstract: We know that adults have the competence to mindread – to represent the beliefs, desires and intentions of others – and there is evidence that at least some mindreading is performed with a significant degree of automaticity. Why, then, do we sometimes appear not to mindread successfully, and why do some people seem better at this than others?
I will argue that much mindreading occurs spontaneously rather than automatically. Spontaneous mindreading does not require explicit prompting, but is conditional on motivation and on the availability of sufficient cognitive resources. I will also argue that whether mental states are inferred automatically, spontaneously, or under instruction, there is no guarantee that this information will be integrated to guide ongoing behaviour in social interaction or communication. Such integration also requires motivation and cognitive resources. The need for motivation and cognitive resources opens the door to predictable patterns of variable performance in mindreading, both within and between individuals. Finally, I will argue that some mindreading requires uncertain, “abductive” inferences, which are likely to be highly dependent on familiarity with the situation in which the inference is made, and therefore variable within and between individuals in different contexts and cultures.
- Monday, 29 May 2017, 17:15, Informatikum, B-201
Prof. Dr. Uwe Barthel
Visually Browsing Millions of Images using Image Graphs
Abstract: In the past, an efficient and satisfactory image search was only possible by using a combination of keywords and low-level visual image features. Recently, Convolutional Neural Networks (CNNs) have enabled automatic understanding of images, resulting in a multitude of new applications and improved visual image search systems. This talk provides an overview of the different methods for image search, explains the principle of CNNs, and shows what future image search systems could look like. We present a new approach to visually explore very large sets of untagged images. High-quality image descriptors are generated using transformed activations of a convolutional neural network. These features are used to model image similarities, from which a hierarchical image graph is built. We show how such a graph can be constructed efficiently. The best user experience for navigating this graph is achieved by projecting sub-graphs onto a regular 2D image map, which allows users to explore the image graph much like a navigation service.
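The basic construction behind such a system – modelling image similarity from descriptor vectors and linking each image to its nearest neighbours – can be illustrated with a minimal sketch. This is not the speaker's implementation (which builds a hierarchical graph over millions of images); it is a toy flat k-nearest-neighbour graph, with randomly generated stand-in feature vectors where real CNN activations would be used:

```python
import numpy as np

def build_image_graph(features, k=4):
    """Build a k-nearest-neighbour similarity graph over image descriptors.

    `features` is an (n_images, dim) array, e.g. CNN activations; edges
    connect each image to its k most similar ones by cosine similarity.
    Returns {image_index: [(neighbour_index, similarity), ...]}.
    """
    # L2-normalise so that the dot product equals cosine similarity
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)      # exclude self-edges
    graph = {}
    for i in range(len(f)):
        nn = np.argsort(sim[i])[::-1][:k]   # k most similar images
        graph[i] = [(int(j), float(sim[i, j])) for j in nn]
    return graph

# Stand-in for real descriptors: 20 random 8-dimensional feature vectors
feats = np.random.default_rng(1).random((20, 8))
graph = build_image_graph(feats, k=4)
```

Navigating the graph then amounts to moving from an image to one of its stored neighbours; the talk's 2D map projection would lay out such a neighbourhood on a regular grid.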
- Monday, 22 May 2017, 17:15, Informatikum, B-201
Prof. Dr. Timo Gerkmann
Signal Processing for Speech Enhancement
Speech is arguably the most natural and important means of human communication. In recent years, speech has also become increasingly feasible and important for human-computer interaction. As many speech communication devices, such as smartphones and hearing aids, are portable or even wearable, they are frequently used in very noisy environments such as crowded restaurants, busy streets, or cafeterias. However, the noise and reverberation in such environments may make speech communication difficult or even impossible.
In the Signal Processing (SP) group, we focus on audio signal processing and in particular aim at making speech communication work more robustly in noisy and reverberant environments. To achieve this, we combine prior knowledge about the signal (e.g. speech), the environment (e.g. the room), and the sink (e.g. the human ear) with rigorous mathematical optimization procedures. In this talk, we will introduce the general concepts used for signal enhancement and highlight our recent contributions.
- Thursday, 28 April 2017, 14:00, Informatikum, F-334
Disputation Junhu He
Robotic In-hand Manipulation with Push and Support Method
In-hand manipulation is one of the distinctive skills of anthropomorphic hands: a process in which fingers push the object to generate the desired manipulation. Although much research has been done on this topic, it remains a challenge in robotics. This research focuses on manipulating an 'unknown' object with an anthropomorphic robotic hand. In-hand manipulation is cast as a process in which a push finger pushes an unknown object so that it rolls on an elastic surface (the support fingers). The object and the elastic surface are treated as one black-box system whose input is action commands and whose output is the observed visual-haptic feedback. Based on this concept, push and support models are proposed, including a fixed support model, a spring support model, and a hybrid support model. With these models, a process called haptic exploration is introduced, in which the robot slightly pushes the object in different directions and estimates the interaction state from haptic feedback. To verify the feasibility of the proposed method, in-hand manipulation experiments have been conducted successfully on a real anthropomorphic hand platform. Furthermore, Reinforcement Learning (RL) has been adopted to learn proper push commands by interacting with an in-hand manipulation simulator, which is constructed with Radial Basis Function Networks (RBFNs) and trained on real manipulation data. Finally, learning experiments have been conducted with different rewards: visual-only rewards (unimodal) and visual-haptic rewards (multimodal). The experimental results demonstrate that our learning method is feasible; moreover, the use of multimodal rewards speeds up the learning process compared to unimodal rewards.
- Wednesday, 27 April 2017, 14:00, Informatikum, F-132
Habilitation Colloquium Dr. Gerd Bruder
Perceptually-Inspired User Interfaces for Computer-Mediated Realities
Current developments in sensor hardware and display technologies have perhaps forever changed the way we interact with digital information, communicate with each other, and spend our leisure time. I will give an overview of my previous research, which was driven by understanding human perceptual, cognitive, and motor abilities and limitations in order to display virtual objects in our surroundings so that they appear like real objects. I will present evidence of perceptual differences between computer-mediated realities and the real world and discuss creative approaches to alleviate or exploit these differences in the scope of natural user interfaces. In this talk, I will also discuss some interesting diversions along the way, such as redirected driving or computer mediation of how humans perceive time and space.
- Monday, 24 April 2017, 12:00, Informatikum, D-220
Disputation Doreen Jirak
Inspection of Echo State Networks for Dynamic Gestures
We investigate Echo State Networks (ESNs), which implement a new training paradigm for recurrent neural networks. We first demonstrate their gesture classification performance on two feature sets with very distinct complexity. Second, we introduce recurrence analysis for the qualitative and quantitative description of the gesture input and the system dynamics of an ESN, and show that this methodology complements classic stability measures. Finally, we address the reservoir itself and propose an algorithm for pruning connectivity in a one-shot learning scenario.
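The training paradigm this work builds on – a fixed, randomly connected recurrent "reservoir" whose only trained part is a linear readout – can be sketched in a few lines. This is an illustrative toy, not the candidate's implementation; a next-step signal-prediction task stands in for gesture classification, and all sizes and hyperparameters are arbitrary:

```python
import numpy as np

def make_reservoir(n_in, n_res, spectral_radius=0.9, density=0.1, seed=0):
    """Create the fixed input and recurrent weights of an Echo State Network.
    These weights are never trained; only the readout is."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    w = rng.uniform(-0.5, 0.5, (n_res, n_res))
    w *= rng.random((n_res, n_res)) < density          # sparse connectivity
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))  # echo-state scaling
    return w_in, w

def run_reservoir(w_in, w, inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    states, x = [], np.zeros(w.shape[0])
    for u in inputs:
        x = np.tanh(w_in @ np.atleast_1d(u) + w @ x)   # leaky rate 1, tanh units
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Ridge regression for the linear readout: the only trained component."""
    s = states
    return np.linalg.solve(s.T @ s + ridge * np.eye(s.shape[1]), s.T @ targets)

# Toy task: predict the next sample of a sine wave from reservoir states
u = np.sin(0.2 * np.arange(400))
w_in, w = make_reservoir(n_in=1, n_res=100, seed=1)
states, targets = run_reservoir(w_in, w, u[:-1]), u[1:]
w_out = train_readout(states[50:300], targets[50:300])  # discard washout
predictions = states[300:] @ w_out
```

The thesis's recurrence analysis and pruning operate on exactly these objects: the state trajectory collected by `run_reservoir` and the sparse recurrent matrix `w`.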
Winter Semester 2016/2017
- Tuesday, 13 December 2016, Seminar room W34, UKE, Martinistraße 52
Bryan Strange, Universidad Politécnica de Madrid
New techniques for modulating memory in humans
- Tuesday, 15 September 2016, Institute of Psychology, Chinese Academy of Sciences
- Thursday, 17 November 2016, Department of Computer Science and Technology, Tsinghua University
Cornelius Weber, University of Hamburg
Topographic Maps in the Brain and its Models
Abstract: Topographic maps are ubiquitous in the brain, for example in the visual, lower auditory, somatosensory, and somatomotor cortices. Kohonen's self-organizing maps are a popular class of artificial neural networks in which topography arises because neighbouring neurons respond and adapt to the same input stimuli. Another principle that leads to topographic ordering is that the brain tries to minimize the total volume of neural connections. Such wiring-length minimization has been proposed to lead to cortical folding on the large scale; on a smaller scale, minimization of horizontal connections within a neural layer can explain topographic order and other fine structure observed in primate primary visual cortex (V1). This has implications for horizontal neural connections and recurrent computations within cortical areas. Finally, examples of neural topographic map models used for robot navigation will be presented.
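The self-organizing-map principle mentioned in the abstract – neighbouring neurons adapting to the same stimuli, which produces topographic order – can be sketched as a toy Kohonen map (an illustrative implementation, not code from the talk; grid size and decay schedules are arbitrary choices):

```python
import numpy as np

def train_som(data, grid_w=6, grid_h=6, epochs=10, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Kohonen self-organizing map.

    Each grid unit holds a weight vector; for every stimulus, the
    best-matching unit (BMU) and its grid neighbours move toward the
    stimulus, which is what yields topographic ordering."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_h, grid_w, data.shape[1]))
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    coords = np.stack([ys, xs], axis=-1).astype(float)  # grid positions
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1 - t)                 # decaying learning rate
            sigma = sigma0 * (1 - t) + 0.5     # shrinking neighbourhood radius
            # best-matching unit: grid cell with the closest weight vector
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood on the grid around the BMU
            g = np.exp(-np.sum((coords - np.array(bmu, float)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

# Train on uniform 2-D points; afterwards, adjacent grid units hold
# similar weight vectors - the topography the abstract describes.
data = np.random.default_rng(1).random((300, 2))
som = train_som(data)
```

After training, plotting each unit's weight vector at its grid position would show the characteristic unfolded mesh covering the input space.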
- Monday 14 November 2016, 17:15, Raum B-201, Informatikum, Vogt-Kölln-Str. 30
Simone Frintrop, University of Hamburg
Cognitive Computer Vision for Mobile Systems
Abstract: The amount of digital images in our daily life has grown exponentially during the last years: cameras are low-cost sensors that are present everywhere, and billions of images are shared daily on social media. Industrial interest in methods for digital image and video processing is also increasing strongly. As a consequence, the need for algorithms that automatically improve, analyze, and interpret images keeps rising. Fortunately, the research field of computer vision has also advanced strongly during the last decade, and many things that were not feasible a few years ago are suddenly achievable. However, when it comes to seemingly simple everyday questions such as "how many objects are on this table?", current systems reach their limits, and the human visual system still clearly outperforms machines. In my research group, we focus on biologically inspired methods for computer vision. That means we develop algorithms that follow mechanisms of human vision, starting from psychophysical and neurobiological findings. Topics of our research include the detection of saliency in images and the discovery of objects. We focus on methods for mobile systems, such as wearable cameras or autonomous service robots.
Summer Semester 2016
- Wednesday 28 September 2016, 10:00, Raum B-201, Informatikum, Vogt-Kölln-Str. 30
Emilia I. Barakova, Eindhoven University of Technology
Learning Robots for Social Therapies
Abstract: Recent developments in robotics promise an attractive solution to emerging societal problems such as a growing ageing population, rising health costs, and shortages in medical and special-needs care. A crucial gap that still needs to be filled before healthcare robots are integrated into real-world settings is that they must ultimately be accepted by humans in the human social sphere. An essential factor in robot acceptance is the ability of robots to react appropriately to human social signals and to have grounded social and emotional behavior. Learning is a basic mechanism for grounding social behavior on multiple levels. A robot's constitutive, interactive, and societal autonomy requires various learning mechanisms and coping strategies, such as: (1) recognition and joint manipulation of objects using models of hand interplay and dominance, (2) using predictive mechanisms in interactive behaviors, (3) developing social strategies for long-term interaction based on game-theoretical approaches and methods from evolutionary computing, and (4) using social computing for personalization and the development of training scenarios. Finally, the trade-offs of implementing learning behaviors versus alternative approaches will be discussed.
Bio: Emilia I. Barakova received the Master's degree in electronics and automation from the Technical University of Sofia, Sofia, Bulgaria, and the Ph.D. degree in mathematics and physics from Groningen University, The Netherlands, in 1999. She has a background in artificial intelligence (Groningen University), behavioral robotics (GMD-Japan research laboratory), brain-inspired robotics (RIKEN Brain Science Institute, Japan), and social signal processing, social robotics, and user-centered interaction design (Eindhoven University of Technology). She is currently with the Department of Industrial Design, Eindhoven University of Technology, Eindhoven, The Netherlands. Her recent research is on modeling social and emotional behavior for applications in social robotics and robots for the social training of autistic children. Barakova is an editor of the journal Personal and Ubiquitous Computing and of the Journal of Integrative Neuroscience.
- Tuesday, 20 September 2016, 17:00-18:00, Seminar Room W34, UKE Campus
Prof. Oliver Wolf from the Ruhr University Bochum
How stress influences our memory
- Monday, 19 September 2016, 17:00-18:00, Seminar Room 310/311, N55, UKE Campus
Prof. Dr. Thomas Oertner, Institute for Synaptic Physiology, ZMNH, UKE, Hamburg, Germany
How activity patterns shape hippocampal connectivity
- Monday, 12 September 2016, 17:00 – 18:30, Seminar Room 310/311, N55, UKE Campus
Prof. Dr. Christian Ruff, University of Zurich, Laboratory for Social and Neural Systems Research (SNS-Lab), Department of Economics, Blümlisalpstrasse, CH-8006 Zurich
Neural evidence accumulation in perceptual vs value-based decision making
- Monday, 18 July 2016, 17:00 – 18:30, Seminar Room 310/311, N55, UKE Campus
Prof. Dr. Peter König, Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
Eye movements as a central part of cognitive processes
- Thursday, 14 July 2016, 14:00 – 15:30, ZMNH Seminar Room E.82, Falkenried 94
Dr. Martin Fuhrmann, Deutsches Zentrum für Neurodegenerative Erkrankungen, Bonn
Cellular and synaptic correlates of learning and memory
www.uke.de/FOR2419
- Wednesday, May 4, 2016, 16:15, ESA WEST room 221
Professor Dr. David Lewkowicz, Northeastern University, Boston, https://www.researchgate.net/profile/David_Lewkowicz
The Development of Multisensory Perception in Infancy & the Role of Experience
Our world is specified by a plethora of multisensory perceptual inputs. In principle, this could give rise to fragmented perceptual experiences; in fact, we usually perceive our world as a unitary and meaningful place. For example, we perceive our interlocutors as people communicating specific messages to us rather than as sources of disconnected auditory and visual sensations. Our ability to have such coherent perceptual experiences is due to the fact that multisensory inputs are typically spatiotemporally correlated and crossmodally equivalent, and to the fact that our brains have evolved mechanisms to perceive multisensory coherence. In this talk, I will show that the ability to perceive multisensory coherence develops gradually early in life and that it depends critically on early experience.
The talk will be followed by a reception.
- Tuesday, May 03, 2016, 15:00, Informatikum F-334
Prof. Bo XU, Director of the Institute of Automation in Beijing, Chinese Academy of Sciences (CASIA)
Brain-inspired intelligence and future applications in robotics
Learning from the human brain to improve machine intelligence is definitely one of the most interesting and promising research areas. Within the framework of the CAS Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), a number of brain-inspired computing mechanisms at multiple scales that are currently being explored will be presented. Two functional models will be emphasized in particular. The first is a multi-region structured sensing/decision model that lets a UAV learn autonomous flight through simple interaction; the second is a RAM (reasoning, attention, and memory) model that captures the meaning of spoken dialogue through attention and reasoning. Finally, the deep fusion of these models with robotics will be discussed.
- Monday, May 02, 2016, 17:15 - 18:00, Informatikum, Room D-125
Prof. Dr. Robert Lowe, University of Gothenburg/Sweden, https://www.researchgate.net/profile/Robert_Lowe3
Embodied Affective Decision Making in Robots
The importance of the role of affect (e.g. drives, motivations, emotions) in decision making has been increasingly recognized by researchers in the fields of neuroscience and psychology in recent years. It has also been of contemporary interest to roboticists with a focus on issues concerning embodiment. In this talk, I will present work carried out over several projects that focuses on affective mechanisms used to guide decision making in robots. The talk consists of two parts covering past and recent research in the area of embodied affective decision making in robots. In the first part, drawing on examples of my own and my PhD students' work, I will show how affective mechanisms can be exploited in evolutionary robotics and human-robot interaction to produce adaptive behavior and decision making, i.e. behavior that is not the direct product of learning. In the second part, I will discuss recent work on tactile interaction between humans and robots. The ability to reliably convey and interpret emotional signals through touch (as a form of embodied affective interaction) provides an important source of information for appropriate social decision making. Recent results from a human-robot tactile interaction study will be presented, showing how emotions can be expressed along a number of different dimensions amenable to tactile sensing.