IntellAct

IntellAct addresses the problem of understanding and exploiting the meaning (semantics) of manipulations, in terms of objects, actions and their consequences, in order to reproduce human actions with machines. This is required in particular for human–robot interaction, where the robot has to understand a human action and then transfer it to its own embodiment. IntellAct will enable this transfer not by copying the movements of the human, but by transferring the human action on a semantic level. IntellAct will demonstrate the ability to understand scene and action semantics and to execute actions with a robot in two domains: first, in a laboratory environment (exemplified by a lab on the International Space Station (ISS)), and second, in an assembly process in an industrial context.

IntellAct consists of three building blocks:

  1. Learning: Abstract, semantic descriptions of manipulations are extracted from video sequences showing a human demonstrating the manipulations;
  2. Monitoring: Observed manipulations are evaluated against the learned semantic models;
  3. Execution: Based on the learned semantic models, equivalent manipulations are executed by a robot.

The analysis of low-level observation data for semantic content (Learning) and the synthesis of concrete behaviour (Execution) constitute the major scientific challenge of IntellAct. Building on this semantic interpretation and description, enhanced with low-level trajectory data for grounding, IntellAct addresses two major application areas: first, the monitoring of human manipulations for correctness (e.g., for training or in high-risk scenarios), and second, the efficient teaching of cognitive robots to perform manipulations in a wide variety of applications.
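The monitoring idea can be illustrated with a toy example in the spirit of the Semantic Event Chains used among the project's methods: a manipulation is reduced to a sequence of touching/not-touching relations between object pairs, and an observed sequence is compared column by column against a learned model. This is a minimal illustrative sketch, not the project's actual implementation; the object pairs, encoding, and similarity measure are all assumptions.

```python
# Illustrative sketch: monitoring a manipulation by comparing its sequence
# of object-pair relations against a learned model sequence. Each "column"
# lists the touching (T) / not-touching (N) relation for every object pair
# at one key frame of the manipulation.

from itertools import zip_longest

def column_similarity(col_a, col_b):
    """Fraction of object-pair relations that agree between two columns."""
    matches = sum(1 for a, b in zip(col_a, col_b) if a == b)
    return matches / max(len(col_a), len(col_b))

def chain_similarity(model_chain, observed_chain):
    """Average column-wise similarity; a missing column scores zero."""
    pairs = list(zip_longest(model_chain, observed_chain, fillvalue=()))
    return sum(column_similarity(a, b) for a, b in pairs) / len(pairs)

# Learned model of a pick-and-place action.
# Object pairs: (hand, object), (object, table), (object, support)
model = [
    ("N", "T", "N"),  # approach: object rests on the table
    ("T", "T", "N"),  # grasp: hand touches object
    ("T", "N", "N"),  # lift: object leaves the table
    ("T", "N", "T"),  # place: object touches the support
    ("N", "N", "T"),  # release
]

# Observation in which the final release never happens.
observed = [
    ("N", "T", "N"),
    ("T", "T", "N"),
    ("T", "N", "N"),
    ("T", "N", "T"),
]

score = chain_similarity(model, observed)
print(f"similarity = {score:.2f}")  # high, but below 1.0: incomplete action
```

A monitoring system would flag the observed manipulation as incorrect or incomplete whenever this similarity falls below a learned threshold.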

To achieve these goals, IntellAct brings together recent methods for:

  1. parsing scenes into spatio-temporal graphs and so-called “Semantic Event Chains”,
  2. probabilistic models of objects and their manipulation,
  3. probabilistic rule learning, and
  4. dynamic movement primitives for trainable and flexible descriptions of robotic motor behaviour.

Its implementation employs a concurrent-engineering approach that includes virtual-reality-enhanced simulation as well as physical robots. The project culminates in the demonstration of a robot understanding, monitoring and reproducing human action.

Partners:

• University of Southern Denmark (Coordinator), Odense, Denmark
• Georg-August-Universität Göttingen, Germany
• University of Innsbruck, Innsbruck, Austria
• RWTH Aachen University, Aachen, Germany
• Jožef Stefan Institute, Ljubljana, Slovenia
• Agencia Estatal Consejo Superior de Investigaciones Científicas, Spain

Web page [link]