ASAP-HRC

Autonomy for Shared Action and Perception in Human-Robot Collaboration

Project info

  • Consortium: AUCTUS@Inria, Interactions@CeRCA, RoBioSS@Pprime
  • Funding: ANR-21-CE10-0001 & AAPR-2022-2021-16819910
  • Duration: 2021 - 2026

This collaborative project started in 2021 between the AUCTUS team at Inria, the RoBioSS team at the Pprime Institute (CNRS), and the Interactions team at the CeRCA laboratory (CNRS). It is co-funded by the ASAP-HRC young-researcher ANR grant and the Perception-HRI Nouvelle-Aquitaine Regional Aid. It aims to rethink Autonomy for Shared Action and Perception in Human-Robot Collaboration through transverse studies in robotics and cognitive science.

Three scientific axes are studied to develop a human-centered and generic shared-autonomy framework:

  • Shared-autonomy control for improving Human-Robot collaboration in haptic teleoperation.
    Shared controllers are developed to combine the human motions and the robot's assistive behavior into a joint action toward the task goal. The human inputs are first analyzed to infer the operator's intent in simple tasks (such as the target object in pick-and-place) and to plan the robot's assistive behavior. An authority level, which sets the influence of each agent on the action, is computed from the task state, the human activity, or the proximity to the target. A shared Model Predictive Control (MPC) approach is then developed to compute the desired robot motion over a time horizon, blending the human and planned trajectories according to the authority distribution. The model predictive controller generates a unified action while respecting robot and human limits as well as environmental constraints (a minimal sketch follows this list).

  • Multisensory perception and integration mechanisms in haptic teleoperation.
    Exchanging perceptive information (shared perception) between the human and robot agents is required for them to communicate and coordinate in interaction scenarios. We focus on analyzing and modeling how human agents perceive, combine, and assimilate visual and haptic information into a probabilistic belief. Individual psychometric models will be identified by relating the actions taken to the experimental feedback conditions, which vary the quantity, combination, fidelity, and target of the multisensory signals. These perceptive models will then be used to determine the optimal visuo-haptic feedback mixture to provide through the human-robot interface (a textbook cue-combination sketch follows this list).

  • Legibility, predictability, and trust in the robotic agent in Human-Robot Interaction.
    Trust in a robot and in its assistive behavior greatly improves the performance of the human-robot dyad. The level of trust depends on multiple antecedents, such as the robot's performance, environmental factors, or the operator's locus of control. We aim to identify quantifiable markers of trust in the physical exchanges between the human and robot agents (e.g., discontinuous movements or significant human forces), so that the robot behavior can be adapted to the user's level of trust and the collaboration improved. We are particularly interested in the legibility and predictability of robot motions, depending on the context of interaction with the human (observing the robot's actions or interacting through a haptic interface). We will study how Bayesian inference models of legibility/predictability transfer to different tasks and situations of human-robot interaction (a minimal goal-inference sketch follows this list).
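
For the first axis, the sketch below illustrates the blending idea on a kinematic point robot in Python/NumPy: a proximity-based authority level mixes the operator's commanded trajectory with the planner's assistive one, and a receding-horizon step tracks the blended reference under a velocity limit. The function names (`authority`, `mpc_step`), the authority law, and the single-step tracking are illustrative assumptions, not the project's controller; a full shared MPC would solve a constrained optimization over the whole horizon.

```python
import numpy as np

def authority(dist_to_target, d_near=0.05, d_far=0.40):
    """Proximity-based authority level: 0 = human leads, 1 = robot leads.
    The thresholds d_near/d_far are illustrative tuning parameters."""
    a = (d_far - dist_to_target) / (d_far - d_near)
    return float(np.clip(a, 0.0, 1.0))

def blend(human_traj, planned_traj, alpha):
    """Mix the operator's and the planner's reference trajectories
    over the horizon with the current authority level."""
    return (1.0 - alpha) * human_traj + alpha * planned_traj

def mpc_step(x, ref, dt=0.05, v_max=0.5):
    """One receding-horizon step for a kinematic point robot. A real
    shared MPC would optimize over the whole horizon under constraints;
    clipping the velocity here stands in for the robot's limits."""
    v = (ref[1] - x) / dt          # track the next reference point
    v = np.clip(v, -v_max, v_max)  # enforce velocity limits
    return x + dt * v

# Toy rollout: the operator drifts slightly off-axis while the planner
# heads straight for the inferred target; authority shifts to the robot
# as the end effector gets close.
target = np.array([0.30, 0.00])
x = np.array([0.00, 0.00])
H = 10  # horizon length
for _ in range(60):
    human_traj = np.linspace(x, x + np.array([0.05, 0.02]), H)
    planned_traj = np.linspace(x, target, H)
    alpha = authority(np.linalg.norm(target - x))
    x = mpc_step(x, blend(human_traj, planned_traj, alpha))
print("final position:", x)
```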
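
For the second axis, a standard reference point is maximum-likelihood cue combination, in which visual and haptic estimates are fused with inverse-variance weights and the fused estimate is more reliable than either cue alone. The sketch below implements that textbook rule; the project's psychometric models, identified from experimental data, may of course differ.

```python
import numpy as np

def ml_integration(mu_v, sigma_v, mu_h, sigma_h):
    """Maximum-likelihood visuo-haptic cue combination: each cue is
    weighted by its reliability (inverse variance)."""
    w_v = sigma_h**2 / (sigma_v**2 + sigma_h**2)   # visual weight
    mu = w_v * mu_v + (1.0 - w_v) * mu_h           # fused estimate
    sigma = np.sqrt((sigma_v**2 * sigma_h**2) / (sigma_v**2 + sigma_h**2))
    return mu, sigma

# A precise visual cue (sigma 0.5) dominates a noisy haptic one (sigma 2.0).
mu, sigma = ml_integration(mu_v=10.0, sigma_v=0.5, mu_h=11.0, sigma_h=2.0)
print(f"fused estimate: {mu:.2f} +/- {sigma:.2f}")  # ~10.06 +/- 0.49
```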
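
For the third axis, legibility is commonly formalized as Bayesian goal inference over observed motion (in the style of Dragan and Srinivasa): a goal is likely if the trajectory so far looks efficient for reaching it. The sketch below is a minimal version of that idea; the straight-line costs, the `beta` rationality gain, and the uniform goal prior are simplifying assumptions for illustration.

```python
import numpy as np

def goal_posterior(start, current, goals, beta=5.0):
    """Infer which goal the motion is heading to: score each goal by
    the inefficiency of the observed path with respect to it.
    Straight-line distances stand in for the true cost-to-go."""
    start, current = np.asarray(start, float), np.asarray(current, float)
    scores = []
    for g in goals:
        g = np.asarray(g, float)
        direct = np.linalg.norm(g - start)        # optimal cost start -> goal
        so_far = np.linalg.norm(current - start)  # cost already spent
        to_go = np.linalg.norm(g - current)       # remaining cost
        scores.append(np.exp(-beta * (so_far + to_go - direct)))
    scores = np.array(scores)
    return scores / scores.sum()                  # assumes a uniform goal prior

# A motion bending upward becomes legible for the upper goal early on.
goals = [(1.0, 0.0), (1.0, 1.0)]
print(goal_posterior(start=(0.0, 0.0), current=(0.5, 0.6), goals=goals))
# -> roughly [0.06, 0.94]: the upper goal already dominates
```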

Publications

  • Elio Jabbour, Margot Vulliez, Jean-Pierre Gazeau, Vincent Padois, Célestin Préault, “Haptic shared control in human-robot collaboration”, Poster at JJCR 2023 (Journée des Jeunes Chercheurs en Robotique), Oct 2023, Moliets et Maâ, France
