# ZwZ-8B

📄 Paper | 🌐 Project | 🤗 Collection
## Model Summary

ZwZ-8B is a fine-grained multimodal perception model built upon Qwen3-VL-8B. It is trained using Region-to-Image Distillation (R2I) combined with reinforcement learning, enabling superior fine-grained visual understanding in a single forward pass; no inference-time zooming or tool calling is required.
ZwZ-8B achieves state-of-the-art performance on fine-grained perception benchmarks among open-source models of comparable size, while also demonstrating strong out-of-distribution generalization on visual reasoning, GUI agent, and AIGC detection tasks.
## Key Features
- ⚡ Single-Pass Efficiency: Achieves fine-grained perception in one forward pass, eliminating inference-time tool-calling overhead
- 🎯 Superior Accuracy: State-of-the-art on fine-grained perception benchmarks among open-source models of comparable size
- 📈 Broad Improvements: Improves not only fine-grained perception but also out-of-distribution generalization on visual reasoning, GUI agent, and AIGC detection tasks
## How It Works
Traditional "Thinking-with-Images" methods zoom into regions of interest during inference, incurring high latency from repeated tool calls and visual re-encoding. ZwZ transforms zooming from an inference-time tool into a training-time primitive:
- Zoom in on micro-cropped regions and let strong teacher models (Qwen3-VL-235B, GLM-4.5V) generate high-quality VQA data
- Distill this region-grounded supervision back to the full image with explicit bounding-box overlays (see the sketch after this list)
- Reinforce via RL training to enable single-glance fine-grained perception without tool use
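The sketch below illustrates the Region-to-Image idea in miniature: crop a region for the teacher, then paint its bounding box onto the full image for the student. The function and variable names are hypothetical; the actual pipeline, teacher prompting, and region selection are described in the paper.

```python
# Minimal sketch of R2I data synthesis (hypothetical; not the authors' code).
from PIL import Image, ImageDraw

def make_r2i_pair(image_path: str, box: tuple[int, int, int, int]):
    """box = (left, top, right, bottom) in pixel coordinates."""
    full = Image.open(image_path).convert("RGB")

    # 1) Zoom: the micro-crop is what the teacher model answers questions about.
    crop = full.crop(box)

    # 2) Distill back: the student trains on the *full* image with an explicit
    #    bounding-box overlay marking the region the QA pair is grounded in.
    overlaid = full.copy()
    ImageDraw.Draw(overlaid).rectangle(box, outline="red", width=4)

    return crop, overlaid  # teacher input, student input
```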
## Quickstart

### Installation

```bash
pip install transformers accelerate torch
```
### Inference

```python
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor

# Default: load the model on the available device(s)
model = Qwen3VLForConditionalGeneration.from_pretrained(
    "inclusionAI/ZwZ-8B", dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("inclusionAI/ZwZ-8B")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Inference: generate the output, then strip the prompt tokens from each sequence
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
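Because zooming happens at training time, fine-grained questions go through the same single call above; only `messages` changes. The image path and question below are placeholders:

```python
# Fine-grained query: same single forward pass, no zoom tools or extra calls.
# The local path and the question are illustrative placeholders.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "/path/to/your_image.jpg"},
            {"type": "text", "text": "What is printed on the small label in the lower-left corner?"},
        ],
    }
]
```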
## Training Data
ZwZ-8B is trained on inclusionAI/ZwZ-RL-VQA, a 74K-sample Region-to-Image distilled VQA dataset synthesized from diverse image pools (SA-1B, LAION, MetaCLIP, Visual Genome, CC12M, STPLS3D).
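A quick way to inspect the data, assuming the dataset is hosted in a standard Hugging Face format (the split name below is an assumption):

```python
from datasets import load_dataset

# Load the Region-to-Image distilled VQA data (split name assumed).
ds = load_dataset("inclusionAI/ZwZ-RL-VQA", split="train")
print(ds[0])  # one region-grounded question/answer sample
```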
## Citation
```bibtex
@article{wei2026zooming,
  title={Zooming without Zooming: Region-to-Image Distillation for Fine-Grained Multimodal Perception},
  author={Wei, Lai and He, Liangbo and Lan, Jun and Dong, Lingzhong and Cai, Yutong and Li, Siyuan and Zhu, Huijia and Wang, Weiqiang and Kong, Linghe and Wang, Yue and Zhang, Zhuosheng and Huang, Weiran},
  journal={arXiv preprint arXiv:2602.11858},
  year={2026}
}
```
## License

This model is released under the Apache 2.0 License.