ProGuard-3B

ProGuard is a proactive multimodal safeguard model. It is designed to identify and reason about unknown risks across both text and visual modalities, moving beyond rigid predefined classification systems.

This repository hosts the official open-source release of ProGuard. For deployment instructions, please refer to this link.
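As a rough illustration of how a multimodal safety query might be issued, the sketch below assumes ProGuard-3B follows the standard Hugging Face `transformers` chat-template interface for vision-language models. The `build_messages` helper and the `moderate` function are hypothetical names, and the exact prompt format expected by ProGuard is an assumption; consult the official deployment instructions for the supported interface.

```python
# Hypothetical usage sketch -- the prompt format and processor interface
# are assumptions, not the documented ProGuard API.
from typing import Optional

MODEL_ID = "yushaohan/ProGuard-3B"


def build_messages(text: str, image_path: Optional[str] = None) -> list:
    """Assemble a chat-style message mixing optional image and text content."""
    content = []
    if image_path is not None:
        content.append({"type": "image", "image": image_path})
    content.append({"type": "text", "text": text})
    return [{"role": "user", "content": content}]


def moderate(text: str, image_path: Optional[str] = None) -> str:
    """Run one safety query. Heavy: downloads the model weights on first call."""
    # Assumed interface: a generic vision-to-sequence auto class.
    from transformers import AutoModelForVision2Seq, AutoProcessor

    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = AutoModelForVision2Seq.from_pretrained(MODEL_ID, torch_dtype="auto")

    messages = build_messages(text, image_path)
    prompt = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    images = None
    if image_path is not None:
        from PIL import Image
        images = [Image.open(image_path)]

    inputs = processor(text=prompt, images=images, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=256)
    return processor.decode(output[0], skip_special_tokens=True)
```

In this sketch the model's generated text would carry ProGuard's safety reasoning; how that output is structured (labels, rationale, or both) is determined by the model's training and is not specified here.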

Citation

If you find this model helpful, please cite our research:

@article{yu2025proguard,
  title={ProGuard: Towards Proactive Multimodal Safeguard},
  author={Yu, Shaohan and Li, Lijun and Si, Chenyang and Sheng, Lu and Shao, Jing},
  journal={arXiv preprint arXiv:2512.23573},
  year={2025},
  url={https://yushaohan.github.io/ProGuard/}
}

@article{zhang2026deepsight,
  title={DeepSight: An All-in-One LM Safety Toolkit},
  author={Zhang, Bo and Guo, Jiaxuan and Li, Lijun and Liu, Dongrui and Chen, Sujin and Chen, Guanxu and Zheng, Zhijie and Lin, Qihao and Yan, Lewen and Qian, Chen and others},
  journal={arXiv preprint arXiv:2602.12092},
  year={2026}
}