This is the code for the paper
MetaMorph: Learning Universal Controllers with Transformers
Agrim Gupta, Linxi Fan, Surya Ganguli, Fei-Fei Li
Multiple domains like vision, natural language, and audio are witnessing tremendous progress by leveraging Transformers for large-scale pre-training followed by task-specific fine-tuning. In contrast, in robotics we primarily train a single robot for a single task. Modular robot systems now allow for the flexible combination of general-purpose building blocks into task-optimized morphologies. However, given the exponentially large number of possible robot morphologies, training a controller for each new design is impractical. In this work, we propose MetaMorph, a Transformer-based approach to learn a universal controller over a modular robot design space. MetaMorph is based on the insight that robot morphology is simply another modality on which we can condition the output of a Transformer. Through extensive experiments we demonstrate that large-scale pre-training on a variety of robot morphologies results in policies with combinatorial generalization capabilities, including zero-shot generalization to unseen robot morphologies. We further demonstrate that our pre-trained policy can be used for sample-efficient transfer to completely new robot morphologies and tasks.
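To make the core idea concrete, here is a minimal, self-contained PyTorch sketch of a morphology-conditioned Transformer policy. It is not the model implemented in this repository; the per-limb observation and morphology feature sizes, the single-actuator action head, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch (NOT the repository's actual model): treat each limb of a
# morphology as one token, embed its proprioceptive observation together with
# its fixed morphology descriptor, and let a Transformer encoder produce one
# action per limb. Shapes and feature sizes are illustrative assumptions.
import torch
import torch.nn as nn


class MorphologyConditionedPolicy(nn.Module):
    def __init__(self, obs_dim=32, morph_dim=16, d_model=128, n_heads=4, n_layers=3):
        super().__init__()
        # Per-limb observation and per-limb morphology descriptor are embedded
        # and summed, so morphology acts as an extra conditioning modality.
        self.obs_embed = nn.Linear(obs_dim, d_model)
        self.morph_embed = nn.Linear(morph_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.action_head = nn.Linear(d_model, 1)  # one actuator target per limb token

    def forward(self, obs, morph, pad_mask):
        # obs:      (batch, max_limbs, obs_dim)    per-limb proprioception
        # morph:    (batch, max_limbs, morph_dim)  per-limb morphology features
        # pad_mask: (batch, max_limbs) bool, True where the limb slot is padding
        tokens = self.obs_embed(obs) + self.morph_embed(morph)
        hidden = self.encoder(tokens, src_key_padding_mask=pad_mask)
        return self.action_head(hidden).squeeze(-1)  # (batch, max_limbs)


if __name__ == "__main__":
    policy = MorphologyConditionedPolicy()
    obs = torch.randn(2, 10, 32)
    morph = torch.randn(2, 10, 16)
    pad = torch.zeros(2, 10, dtype=torch.bool)
    pad[0, 7:] = True  # first robot only has 7 limbs; the rest are padding
    print(policy(obs, morph, pad).shape)  # torch.Size([2, 10])
```

Because padding masks hide unused limb slots, the same network can be applied to morphologies with different numbers of limbs, which is what makes joint pre-training across a design space possible.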
The code consists of two main components:
- MetaMorph: Code for joint pre-training of different robots.
- Environments and evaluation tasks: Three pre-training environments and two evaluation environments.
We also provide the Unimal-100 benchmark, which consists of 100 training morphologies, 1600 morphologies with dynamics variations, 800 morphologies with kinematics variations, and 100 test morphologies.
# Install gdown
pip install gdown
# Download data
gdown 1LyKYTCevnqWrDle1LTBMlBF58RmCjSzM
# Unzip
unzip unimals_100.zip
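After unzipping, you can sanity-check the download with a short script like the one below. The exact subdirectory layout of `unimals_100/` assumed here is a guess; adjust the paths and file pattern to whatever you see on disk.

```python
# Quick sanity check of the downloaded benchmark. The layout assumed here
# (MuJoCo XML files grouped under per-split folders inside unimals_100/) is an
# assumption; adjust the paths to match the actual archive contents.
from pathlib import Path

root = Path("unimals_100")
for split_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    n_xml = len(list(split_dir.rglob("*.xml")))
    print(f"{split_dir.name}: {n_xml} MuJoCo XML files")
```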
We provide a Dockerfile for easy installation and development. If you prefer to work without Docker, please take a look at the Dockerfile and ensure that your local system has all the necessary dependencies installed.
# Build docker container. Ensure that MuJoCo license is present: docker/mjkey.txt
./scripts/build_docker.sh
# Joint pre-training. Please change the MOUNT_DIR location inside run_docker_gpu.sh.
# Finally, ensure that ENV.WALKER_DIR points to the benchmark files and is
# accessible from inside Docker.
./scripts/run_docker_gpu.sh python tools/train_ppo.py --cfg ./configs/ft.yaml
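The `--cfg` flag and dotted keys such as `ENV.WALKER_DIR` suggest a yacs-style configuration. The toy snippet below only illustrates how such a dotted-key override is typically applied; it is not the repository's actual config schema, and the paths are placeholders.

```python
# Illustrative only: a yacs-style config with a dotted-key override, mirroring
# how a key like ENV.WALKER_DIR is typically pointed at the benchmark files.
# This is NOT the repository's config schema; keys and defaults are placeholders.
from yacs.config import CfgNode as CN

cfg = CN()
cfg.ENV = CN()
cfg.ENV.WALKER_DIR = "./unimals_100/train"  # placeholder default

# Command-line style override, e.g. forwarded from extra argv tokens.
cfg.merge_from_list(["ENV.WALKER_DIR", "/mount/dir/inside/docker/unimals_100/train"])
print(cfg.ENV.WALKER_DIR)
```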
The default parameters assume that you are running the code on a machine with at least 1 GPU.
If you find this code useful, please consider citing:
@inproceedings{
gupta2022metamorph,
title={MetaMorph: Learning Universal Controllers with Transformers},
author={Agrim Gupta and Linxi Fan and Surya Ganguli and Li Fei-Fei},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=Opmqtk_GvYL}
}
This codebase would not have been possible without the following amazing open-source codebases: