Do VLMs Need Vision Transformers? Evaluating State Space Models as Vision Encoders
Abstract
State space models are competitive vision backbones for vision-language models, matching or exceeding transformer-based encoders at substantially smaller scale, and proposed stabilization strategies further improve robustness for both backbone families.
Large vision-language models (VLMs) often use a frozen vision backbone whose image features are mapped into a large language model through a lightweight connector. While transformer-based encoders are the standard visual backbone, we ask whether state space model (SSM) vision backbones can be a strong alternative. We systematically evaluate SSM vision backbones for VLMs in a controlled setting. Under matched ImageNet-1K initialization, the SSM backbone achieves the strongest overall performance across both VQA and grounding/localization. We further adapt both SSM and ViT-family backbones with detection or segmentation training and find that dense-task tuning generally improves performance across families; after this adaptation, the SSM backbone remains competitive while operating at a substantially smaller model scale. We also observe that (i) higher ImageNet accuracy or larger backbones do not reliably translate into better VLM performance, and (ii) some visual backbones are unstable in localization. Based on these findings, we propose stabilization strategies that improve robustness for both backbone families, and we highlight SSM backbones as a strong alternative to transformer-based vision encoders in VLMs.
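To make the setup concrete, below is a minimal PyTorch sketch of the frozen-backbone-plus-connector pattern the abstract describes. The `Connector` class, the two-layer MLP, and all dimensions are illustrative assumptions, not the paper's implementation; the backbone can be any SSM or ViT encoder that returns a sequence of patch features.

```python
# Minimal sketch of a VLM with a frozen vision backbone and a lightweight
# connector. All names, dimensions, and the two-layer MLP connector are
# illustrative assumptions; the paper does not specify its exact connector.
import torch
import torch.nn as nn


class Connector(nn.Module):
    """Lightweight MLP mapping vision features into the LLM embedding space."""

    def __init__(self, vision_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)


class VLM(nn.Module):
    def __init__(self, vision_backbone: nn.Module, llm: nn.Module,
                 vision_dim: int, llm_dim: int):
        super().__init__()
        self.vision = vision_backbone        # SSM or ViT encoder (assumed to
        self.vision.requires_grad_(False)    # return (B, num_patches, dim));
        self.connector = Connector(vision_dim, llm_dim)  # frozen per the setup
        self.llm = llm

    def forward(self, images: torch.Tensor, text_embeds: torch.Tensor):
        with torch.no_grad():
            feats = self.vision(images)      # (B, num_patches, vision_dim)
        vis_tokens = self.connector(feats)   # (B, num_patches, llm_dim)
        # Prepend visual tokens to the text embeddings and run the LLM.
        # Passing embeddings directly assumes an HF-style decoder that
        # accepts `inputs_embeds`; adapt the call for other LLM interfaces.
        inputs = torch.cat([vis_tokens, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)


# Example wiring (placeholder modules and dimensions, shapes only):
# vlm = VLM(vision_backbone=my_ssm_encoder, llm=my_llm,
#           vision_dim=768, llm_dim=2048)
```

Freezing the encoder leaves only the connector (and optionally the LLM) trainable, which is what makes the backbone families directly comparable in the controlled setting described above.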
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- VisionPangu: A Compact and Fine-Grained Multimodal Assistant with 1.7B Parameters (2026)
- ViT-5: Vision Transformers for The Mid-2020s (2026)
- VersaViT: Enhancing MLLM Vision Backbones via Task-Guided Optimization (2026)
- iGVLM: Dynamic Instruction-Guided Vision Encoding for Question-Aware Multimodal Understanding (2026)
- Decoupling Vision and Language: Codebook Anchored Visual Adaptation (2026)
- ConsensusDrop: Fusing Visual and Cross-Modal Saliency for Efficient Vision Language Models (2026)
- Stateful Cross-layer Vision Modulation (2026)