GemBench comprises 16 training tasks with 31 variations, covering seven action primitives. The testing set includes 44 tasks with 92 variations, organized into four progressively more challenging levels that systematically evaluate generalization: novel placements, novel rigid objects, novel articulated objects, and long-horizon tasks.
Colosseum evaluates models' generalization across various scene perturbations. It applies 14 perturbation factors to 20 distinct RLBench tasks, which are categorized into three tiers (simple, intermediate, and complex) according to the number of waypoints involved (task horizon). Collectively, Colosseum presents 20,371 unique task perturbation instances.
We will deploy your models on a real robot platform, as shown in the image below, to assess their real-world generalization capabilities. The testing tasks are similar to those in GemBench's four generalization levels. We will provide a small set of real-robot data for fine-tuning. Submissions follow the same container-based format as the GemBench track. You are welcome to participate only in the real robot track if preferred; it is an independent track and will be awarded separately.