

Visuomotor Learning for Interaction

Prof. Justus Piater, Ph.D., Max-Planck-Institut Tübingen, Dept. of Empirical Inference

Closed perception-action loops allow autonomous agents to evaluate the effects of their own actions, and thus to improve their actions and/or perception through exploratory learning. I will present two distinct lines of research from my lab that implement this idea in different ways. The first aims to learn reactive policies by linking visual percepts directly to actions within a reinforcement-learning framework. A central challenge here is the size of both the perceptual and the action spaces. In the context of vision-based navigation, we address both with our RLVC and RLJC algorithms, which iteratively subdivide the two spaces in a task-dependent manner. The second problem concerns learning to grasp familiar objects by guided exploration. We have developed probabilistic visual object representations that can be augmented with "grasp densities": grasp success likelihoods over the space of object-relative gripper poses. Both the object representations and the grasp densities can in principle be learned fully autonomously, and they allow suitable grasp parameters to be inferred under task constraints.
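The abstract does not spell out how RLVC/RLJC subdivide their spaces, so the following is only a toy sketch of the general idea of task-dependent subdivision of a perceptual space: classes of observations are split on individual visual features whenever a class turns out to be perceptually aliased. The class representation, the aliasing test, and the split criterion here are all invented for illustration and are not the published algorithms.

```python
# Illustrative sketch (not RLVC/RLJC): iteratively refine a discrete
# perceptual space by splitting aliased classes on binary visual features.
from dataclasses import dataclass, field

@dataclass
class PerceptualClass:
    # Binary feature tests (feature_index, value) an observation must
    # satisfy to be routed to this class.
    tests: tuple = ()
    # Observations (binary feature vectors) seen during exploration.
    members: list = field(default_factory=list)

def classify(classes, observation):
    """Route an observation to the first class whose tests it satisfies."""
    for c in classes:
        if all(observation[f] == v for f, v in c.tests):
            return c
    return classes[-1]  # fallback: last (least constrained) class

def aliased(c, rewards):
    """Toy splitting criterion: a class is perceptually aliased if its
    member observations received conflicting rewards under the task."""
    seen = {rewards[tuple(o)] for o in c.members}
    return len(seen) > 1

def split(c, feature):
    """Subdivide a class on one binary visual feature, yielding two
    refined classes; this is the task-dependent refinement step."""
    return (PerceptualClass(c.tests + ((feature, 0),)),
            PerceptualClass(c.tests + ((feature, 1),)))
```

In this toy loop, refinement would repeat until no class remains aliased, so the discretization grows only where the task demands finer perceptual distinctions.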
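The notion of a "grasp density" can likewise be sketched as a kernel density estimate over object-relative gripper poses, fit from grasps that succeeded during exploration. The 3-DOF pose (x, y, theta), the Gaussian kernel, and the constraint predicate below are simplifications chosen for illustration, not the representation used in the talk.

```python
# Illustrative sketch of a grasp density: an unnormalized kernel density
# over object-relative gripper poses, estimated from successful grasps.
import math

def gaussian_kernel(d2, bandwidth):
    # Unnormalized Gaussian kernel on a squared distance.
    return math.exp(-d2 / (2.0 * bandwidth ** 2))

def grasp_density(pose, successful_grasps, bandwidth=0.1):
    """Estimated success likelihood of a gripper pose (x, y, theta),
    from previously successful object-relative grasp poses."""
    x, y, th = pose
    total = 0.0
    for gx, gy, gth in successful_grasps:
        # Squared distance with a wrapped angular component.
        dth = math.atan2(math.sin(th - gth), math.cos(th - gth))
        total += gaussian_kernel((x - gx) ** 2 + (y - gy) ** 2 + dth ** 2,
                                 bandwidth)
    return total / len(successful_grasps)

def best_grasp(candidates, successful_grasps, feasible=lambda p: True):
    """Pick the candidate pose maximizing the density, subject to a task
    constraint (e.g., reachability) expressed as a predicate."""
    allowed = [p for p in candidates if feasible(p)]
    return max(allowed, key=lambda p: grasp_density(p, successful_grasps))
```

The constraint predicate is where task context enters: restricting the feasible poses and then maximizing the density is one simple way to read "inference of suitable grasp parameters under task constraints".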

Date: 20.04.2009

Time: 17:00 h