The Dragon Hatchling: The Missing Link between the Transformer and Models of the Brain • arXiv:2509.26507 • Published Sep 30, 2025
Sample-efficient Integration of New Modalities into Large Language Models • arXiv:2509.04606 • Published Sep 4, 2025
Bootstrapping World Models from Dynamics Models in Multimodal Foundation Models • arXiv:2506.06006 • Published Jun 6, 2025
Inference-Time Hyper-Scaling with KV Cache Compression • arXiv:2506.05345 • Published Jun 5, 2025
Composable Sparse Fine-Tuning for Cross-Lingual Transfer • arXiv:2110.07560 • Published Oct 14, 2021
MMLongBench: Benchmarking Long-Context Vision-Language Models Effectively and Thoroughly • arXiv:2505.10610 • Published May 15, 2025
The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs • arXiv:2504.17768 • Published Apr 24, 2025