RoboInter-VLM: Vision-Language Model for RoboInter Manipulation Suite
This is the flagship model of the RoboInter-VLM series, based on Qwen2.5-VL-7B-Instruct. It delivers the strongest performance among the Qwen2.5-VL variants and is the recommended default checkpoint for general use.
Developed as part of the RoboInter project, the model is fine-tuned on the RoboInter-VQA dataset for intermediate-representation understanding and generation in robotic manipulation.
All Available Checkpoints
| Checkpoint | Base Model | Architecture | Parameters | Description | Link |
|---|---|---|---|---|---|
| RoboInter-VLM (this repo) | Qwen2.5-VL-7B-Instruct | Qwen2.5-VL | ~7B | Flagship model, recommended for best performance | https://huggingface.co/InternRobotics/RoboInter-VLM |
| RoboInter-VLM_qwenvl25_3b | Qwen2.5-VL-3B-Instruct | Qwen2.5-VL | ~3B | Lightweight model, suitable for efficient deployment | https://huggingface.co/InternRobotics/RoboInter-VLM_qwenvl25_3b |
| RoboInter-VLM_llavaov_7B | LLaVA-OneVision-Qwen2-7B | LLaVA-OneVision | ~7B | LLaVA-OneVision backbone with SigLIP vision encoder | https://huggingface.co/InternRobotics/RoboInter-VLM_llavaov_7B |
All checkpoints are stored in safetensors format with bfloat16 precision.
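The two Qwen2.5-VL checkpoints load through the same code path; only the repository ID differs (the LLaVA-OneVision checkpoint requires the custom classes in its own codebase, see below). A minimal sketch, assuming the Hub IDs from the table above:
import torch
from transformers import Qwen2_5_VLForConditionalGeneration
# Either Qwen2.5-VL checkpoint; the weights are bfloat16 safetensors.
repo_id = "InternRobotics/RoboInter-VLM_qwenvl25_3b"  # or "InternRobotics/RoboInter-VLM"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)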
Supported Tasks
These models are jointly trained on general VQA and three categories of our curated VQA tasks:
- Generation: Predicting intermediate representations such as trajectory waypoints, gripper bounding boxes, contact points/boxes, object bounding boxes (current & final), etc.
- Understanding: Multiple-choice visual reasoning about contact states, grasp poses, object grounding, trajectory selection, movement directions, etc.
- Task Planning: High-level task planning including next-step prediction, action primitive recognition, success determination, etc.
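As a rough illustration of these three categories, the hypothetical entries below sketch what such VQA pairs could look like. The field names, prompt wording, coordinates, and answer formats are placeholders, not the actual RoboInter-VQA schema; see the dataset repository for the real format.
# Hypothetical sketches of the three task categories; field names and answer
# formats are placeholders, not the actual RoboInter-VQA schema.
example_tasks = [
    {   # Generation: predict an intermediate representation
        "category": "generation",
        "question": "Predict the 2D trajectory waypoints for moving the mug onto the plate.",
        "answer": "[(412, 305), (380, 270), (341, 228), (310, 201)]",
    },
    {   # Understanding: multiple-choice visual reasoning
        "category": "understanding",
        "question": "Which option describes the gripper's contact state? (A) open, not in contact (B) closed on the mug handle (C) closed, empty",
        "answer": "B",
    },
    {   # Task planning: high-level next-step prediction
        "category": "task_planning",
        "question": "The mug has been grasped. What is the next action primitive?",
        "answer": "move the mug above the plate",
    },
]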
Usage
Quick Start (This Model)
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

model_path = "InternRobotics/RoboInter-VLM"
# Load the bfloat16 checkpoint and dispatch it across available devices.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)
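The continuation below sketches a single-image query using the standard Qwen2.5-VL chat-template flow (with process_vision_info from the qwen_vl_utils package). The image path and the prompt text are placeholders; the exact prompts used for intermediate-representation tasks are defined in the RoboInterVLM-QwenVL codebase.
from qwen_vl_utils import process_vision_info

# Placeholder image and question, for illustration only.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/scene.jpg"},
            {"type": "text", "text": "Predict the bounding box of the target object."},
        ],
    }
]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])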
For detailed usage and inference examples, please refer to the RoboInterVLM-QwenVL codebase.
LLaVA-OneVision Checkpoint
For loading and inference with the LLaVA-OneVision checkpoint, please refer to the RoboInterVLM-LLaVAOV codebase, as it requires custom model classes.
Training & Evaluation
For full training and evaluation pipelines, please refer to:
- Qwen2.5-VL models: RoboInterVLM-QwenVL
- LLaVA-OneVision model: RoboInterVLM-LLaVAOV
- VQA Dataset: RoboInter-VQA
Related Resources
- Project: RoboInter
- Annotation Data: RoboInter-Data
- VQA Dataset: RoboInter-VQA
Citation
If you find RoboInter useful in your research, please consider citing:
@article{li2026robointer,
title={RoboInter: A Holistic Intermediate Representation Suite Towards Robotic Manipulation},
author={Li, Hao and Wang, Ziqin and Ding, Zi-han and Yang, Shuai and Chen, Yilun and Tian, Yang and Hu, Xiaolin and Wang, Tai and Lin, Dahua and Zhao, Feng and Liu, Si and Pang, Jiangmiao},
journal={arXiv preprint arXiv:2602.09973},
year={2025}
}
License
Please refer to the original licenses of RoboInter, Qwen2.5-VL, and LLaVA-OneVision.