GemBench Challenge

GemBench comprises 16 training tasks with 31 variations, covering seven action primitives. The testing set includes 44 tasks with 92 variations, organized into four progressively more challenging levels that systematically evaluate generalization capabilities: novel placements, novel rigid objects, novel articulated objects, and long-horizon tasks.

GemBench

Colosseum Challenge

Colosseum evaluates models' generalization across various scene perturbations. It encompasses 14 perturbation factors applied to 20 distinct RLBench tasks, categorized into three tiers (simple, intermediate, and complex) according to the number of waypoints involved (task horizon). Collectively, Colosseum presents 20,371 unique task perturbation instances.

Colosseum

Real Robot Challenge

We will deploy your models on a real robot platform, as shown in the image below, to assess their real-world generalization capabilities. The testing tasks are similar to those in GemBench's four generalization levels. We will provide a small set of real robot data for fine-tuning. Submissions follow the same container-based format as the GemBench track. You are welcome to participate only in the real robot track if preferred; it is an independent track and will be awarded separately.

Real robot setup

Evaluation

Simulator-based Evaluation

To participate in the GemBench and Colosseum challenges, please register your team using this registration form.

Note that the challenges are evaluated independently. Please review the guidelines for your chosen challenge(s).

Real Robot Evaluation

Please register your team using this registration form. The submission guidelines are the same as for the GemBench challenge.

Dates

  • GemBench and Colosseum submission deadline: May 23 23:59 CET (extended from May 12)
  • GemBench and Colosseum report deadline: May 30 23:59 CET (extended from May 19)
  • Real robot challenge deadline: June 1 23:59 CET