Task AgnostiC Offline Reinforcement Learning (TACO-RL)

Everyday tasks that are long-horizon and comprise a sequence of multiple implicit subtasks still pose a major challenge in offline robot control. While a number of prior methods aimed to address this setting with variants of imitation and offline reinforcement learning, the learned behavior is typically narrow and often struggles to reach configurable long-horizon goals. As both paradigms have complementary strengths and weaknesses, we propose a novel hierarchical approach that combines the strengths of both methods to learn task-agnostic long-horizon policies from high-dimensional camera observations. Concretely, we combine a low-level policy that learns latent skills via imitation learning with a high-level policy learned with offline reinforcement learning for skill-chaining the latent behavior priors. Experiments in various simulated and real robot control tasks show that our formulation enables producing previously unseen combinations of skills to reach temporally extended goals by "stitching" together latent skills through goal chaining, with an order-of-magnitude improvement in performance over state-of-the-art baselines. We even learn a single multi-task visuomotor policy for 25 distinct manipulation tasks in the real world that outperforms both imitation learning and offline reinforcement learning techniques.

Video

Technical Approach

TACO-RL is a self-supervised, general-purpose model learned from an offline dataset of robot interactions; it generalizes to a wide variety of long-horizon manipulation tasks.

  1. Low-level policy: Recognizes and organizes a repertoire of behaviors from an unlabeled, undirected dataset in a latent plan space.
  2. High-level policy: Hindsight relabeling of sampled windows of experience into reward-augmented latent plan transitions. Learned with offline RL, this allows the high-level policy to stitch plans together to achieve complex long-horizon tasks (see the first sketch after this list).
  3. Inference: The hierarchical model performs goal-conditioned rollouts in robot manipulation tasks (see the second sketch after this list).
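
The high-level relabeling step can be pictured with a short sketch. This is not the released implementation; the plan encoder, the window length, and the sparse-reward scheme below are simplifying assumptions used only to illustrate how undirected windows of experience become reward-augmented latent-plan transitions for offline RL.

    import numpy as np

    def plan_encoder(window_obs):
        # Placeholder for the learned plan encoder: maps an observation window
        # to a latent plan vector. Averaging features is only a stand-in.
        return window_obs.mean(axis=0)

    def relabel_window(trajectory, start, horizon):
        # Hindsight relabeling: the last observation of the window becomes the
        # goal, and the window is compressed into one latent-plan transition.
        end = min(start + horizon, len(trajectory) - 1)
        window = trajectory[start:end + 1]
        state, goal = window[0], window[-1]
        latent_plan = plan_encoder(window)        # the high-level "action"
        reward, done = 1.0, True                  # goal reached by construction
        return dict(state=state, goal=goal, action=latent_plan,
                    reward=reward, done=done)

    # Build a toy replay buffer of latent-plan transitions from undirected play data.
    rng = np.random.default_rng(0)
    play_data = rng.normal(size=(500, 32))        # 500 steps of 32-dim observations
    buffer = [relabel_window(play_data, s, horizon=16)
              for s in rng.integers(0, 480, size=64)]
    print(len(buffer), buffer[0]["action"].shape)

An offline RL algorithm (e.g., CQL) can then be trained on such transitions, with the latent plan playing the role of the high-level action.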
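
At test time, inference can be pictured as the following minimal rollout loop. The class and method names, the re-planning interval, and the toy environment are assumptions for illustration, not the actual interfaces: the high-level policy selects a latent plan toward the goal image, and the low-level policy decodes motor actions from it.

    import numpy as np

    class HighLevelPolicy:
        # Stand-in for the offline-RL high-level policy: selects a latent plan
        # given the current observation and the goal image.
        def __init__(self, plan_dim=16):
            self.plan_dim = plan_dim

        def select_plan(self, obs, goal):
            return np.tanh(goal[:self.plan_dim] - obs[:self.plan_dim])

    class LowLevelPolicy:
        # Stand-in for the imitation-learned low-level policy: decodes a motor
        # action from the current observation and the active latent plan.
        def act(self, obs, latent_plan):
            return np.clip(latent_plan[:7], -1.0, 1.0)   # e.g. a 7-DoF action

    class ToyEnv:
        # Toy environment so the sketch runs end to end.
        def __init__(self, obs_dim=32):
            self.obs = np.zeros(obs_dim)

        def step(self, action):
            self.obs = self.obs + 0.01 * np.pad(action, (0, len(self.obs) - len(action)))
            return self.obs.copy(), False                # (observation, done)

    def rollout(env, goal, high, low, max_steps=200, replan_every=16):
        obs, latent_plan = env.obs.copy(), None
        for t in range(max_steps):
            if t % replan_every == 0:                    # periodic re-planning
                latent_plan = high.select_plan(obs, goal)
            obs, done = env.step(low.act(obs, latent_plan))
            if done:
                break
        return obs

    goal_image_features = np.ones(32)                    # goal given as an image embedding
    final_obs = rollout(ToyEnv(), goal_image_features, HighLevelPolicy(), LowLevelPolicy())
    print(final_obs[:3])

The design intuition is that the high-level policy only decides which latent skill to pursue next, while the low-level policy handles per-timestep motor control, which is what allows skills to be chained toward distant goals.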
Network Architecture

Experiments

We compare our model against the following state-of-the-art baselines:

  1. Play-LMP: Imitation learning baseline; imitates how to reach a goal image through latent behaviors.
  2. CQL+HER: Offline reinforcement learning baseline; uses the same goal sampling approach as ours, but the actor has to make isolated decisions at every timestep.

Single Task

We evaluate our approach on single tasks in which the goal image does not show the end effector performing the action.

Ours

Play-LMP

CQL+HER

2 Sequential Tasks

We also conduct experiments in which the goal image indicates that the robot must perform two tasks sequentially. The agent must deduce both tasks from a single image.


Ours

Play-LMP

CQL+HER

5 Sequential Tasks

To test our model's ability to chain tasks together, we instruct our robot to perform 5 tasks sequentially using challenging intermediate goal images.

Ours

Play-LMP

CQL+HER

Publications

Latent Plans for Task Agnostic Offline Reinforcement Learning
Erick Rosete-Beas, Oier Mees, Gabriel Kalweit, Joschka Boedecker, Wolfram Burgard
Proceedings of the 6th Conference on Robot Learning (CoRL), 2022