RoboMP2: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models

ICML 2024

1Harbin Institute of Technology (Shenzhen); 2Great Bay University
Corresponding Author
Left: detection results for the yellow block under a complex spatial reference using different methods. Right: different plans for different environments, even given the same instruction.

Abstract

Multimodal Large Language Models (MLLMs) have shown impressive reasoning abilities and general intelligence in various domains. This inspires researchers to train end-to-end MLLMs or to use large models to generate policies with human-selected prompts for embodied agents. However, these methods exhibit limited generalization on unseen tasks or scenarios, and overlook the multimodal environment information that is critical for robots to make decisions. In this paper, we introduce a novel Robotic Multimodal Perception-Planning (RoboMP2) framework for robotic manipulation which consists of a Goal-Conditioned Multimodal Perceptor (GCMP) and a Retrieval-Augmented Multimodal Planner (RAMP). Specifically, GCMP captures environment states by employing a tailored MLLM for embodied agents with the abilities of semantic reasoning and localization. RAMP utilizes a coarse-to-fine retrieval method to find the k most-relevant policies as in-context demonstrations to enhance the planner. Extensive experiments demonstrate the superiority of RoboMP2 on both the VIMA benchmark and real-world tasks, with around a 10% improvement over the baselines.
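As a rough illustration of the retrieval step in RAMP, the sketch below shows a generic coarse-to-fine selection of the k most-relevant stored policies to use as in-context demonstrations. The embedding model, re-ranking function, and policy format are placeholders for illustration, not the implementation used in the paper.

```python
import numpy as np

def coarse_to_fine_retrieve(query_emb, policy_embs, policies,
                            rerank_fn, coarse_m=50, k=5):
    """Illustrative coarse-to-fine retrieval of in-context demonstrations.

    query_emb   : (d,) embedding of the current instruction/observation.
    policy_embs : (N, d) embeddings of the stored policies.
    policies    : list of N policy records (instruction text + code).
    rerank_fn   : fine-grained scorer, e.g. a cross-encoder (assumed).
    """
    # Coarse stage: cheap cosine similarity over all stored policies.
    sims = policy_embs @ query_emb / (
        np.linalg.norm(policy_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    coarse_idx = np.argsort(-sims)[:coarse_m]

    # Fine stage: re-score the shortlist with a more expensive model.
    fine_scores = np.asarray([rerank_fn(policies[i]) for i in coarse_idx])
    order = np.argsort(-fine_scores)[:k]

    # The k most-relevant policies become in-context demonstrations
    # prepended to the planner's prompt.
    return [policies[coarse_idx[i]] for i in order]
```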

Overview of our proposed RoboMP2



The three parts in grey, blue, and green represent the input data, planning, and perception, respectively. The highlighted modules, namely the fusion module and LoRA, are trainable.
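For reference, a minimal sketch of how the LoRA portion of the trainable modules could be attached with the HuggingFace peft library is shown below. The backbone name, rank, and target modules are assumptions for illustration, not the paper's exact setup.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed MLLM language backbone; the actual model in the paper may differ.
base = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5")

lora_cfg = LoraConfig(
    r=16,                                  # low-rank dimension (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
# Only the LoRA adapters (plus any fusion module) receive gradients.
model.print_trainable_parameters()
```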

Demo


Simulation

First put the brick block into the red swirl pan then put the object that was previously at its east into the same object.
Put all objects with the same profile as the blue and purple polka dot frame into it.
Put the yellow and green stripe round into the pink pallet then the yellow paisley pallet. Finally restore it into its original container.
Sweep any yellow and blue stripe blocks into the green and blue stripe three-sided rectangle without exceeding the polka dot line.

Real-World

Put the yellow block into the green plate then the pink bowl. Finally, restore it to its original position.
Put any vegetables into the pink bowl and any fruits into the green plate.
Hand over the object which can repair the TV.
Put any yellow blocks into the green plate.

Qualitative Results



Experiments


Results on VIMABench tasks and real-world tasks



Ablation Studies


Conclusion

In this paper, we have proposed a novel Robotic Multimodal Perception-Planning framework (RoboMP2) that consists of the Goal-Conditioned Multimodal Perceptor (GCMP) and the Retrieval-Augmented Multimodal Planner (RAMP). GCMP is introduced to capture multimodal environment information by incorporating a tailored MLLM. RAMP employs a coarse-to-fine retrieval-augmented approach to adaptively select the k most-relevant policies as in-context demonstrations to enhance generalization. Extensive experiments demonstrate that RoboMP2 outperforms the baselines by a large margin on both VIMABench and real-world tasks.

BibTeX

@inproceedings{lv2024robomp2,
    title     = {RoboMP$^2$: A Robotic Multimodal Perception-Planning Framework with Multimodal Large Language Models},
    author    = {Qi Lv and Hao Li and Xiang Deng and Rui Shao and Michael Yu Wang and Liqiang Nie},
    booktitle = {International Conference on Machine Learning},
    year      = {2024}
}