News & Events

27.08.2012

Workshop: Machine Learning for Interactive Systems: Bridging the Gap among Language, Motor Control, and Vision



Interactive systems such as multimodal interfaces or robots must perceive, act, and interact in the environment in which they are embedded. Naturally, perception, action, and interaction are mutually related and affect each other. This is particularly the case in many hands-free and eyes-free mobile applications of interactive systems. Machine learning offers the attractive capability of making interactive systems more adaptive to the user and the environment. For each of perception, action, and interaction, there are many applications using machine learning techniques. However, holistic approaches that tackle these fields in a unified way are still rare. The question of how to integrate language, motor control, and vision in machine learning interfaces in an efficient and effective way has been a long-standing problem and is the main topic of the workshop.

This workshop aims to bring together, under a unified perspective, people interested in natural language processing, motor control, and computer vision. The invitation is particularly directed at people designing, building, and evaluating Machine Learning Interactive Systems (MLIS) that interact with their environment and, in particular, with the people within it. Example research questions to address include: (a) How do MLIS integrate multimodal perceptions for action and interaction? (b) How do MLIS exhibit adaptive interactive behaviour given their perceptions? (c) How do MLIS integrate verbal and non-verbal behaviour for effective interactions?

Date: August 27-28, 2012

Location: Montpellier, France

Homepage: MLIS 2012