2Mamba2Furious: Linear in Complexity, Competitive in Accuracy • arXiv:2602.17363 • Published 6 days ago • 7 upvotes
Nanbeige4.1-3B: A Small General Model that Reasons, Aligns, and Acts • arXiv:2602.13367 • Published 12 days ago • 29 upvotes
On Surprising Effectiveness of Masking Updates in Adaptive Optimizers • arXiv:2602.15322 • Published 8 days ago • 9 upvotes
SpargeAttention2: Trainable Sparse Attention via Hybrid Top-k+Top-p Masking and Distillation Fine-Tuning • arXiv:2602.13515 • Published 11 days ago • 42 upvotes
Beyond Log Likelihood: Probability-Based Objectives for Supervised Fine-Tuning across the Model Capability Continuum • arXiv:2510.00526 • Published Oct 1, 2025 • 10 upvotes
Dynamic Long Context Reasoning over Compressed Memory via End-to-End Reinforcement Learning • arXiv:2602.08382 • Published 16 days ago • 10 upvotes
When to Memorize and When to Stop: Gated Recurrent Memory for Long-Context Reasoning • arXiv:2602.10560 • Published 14 days ago • 28 upvotes
Composition-RL: Compose Your Verifiable Prompts for Reinforcement Learning of Large Language Models • arXiv:2602.12036 • Published 13 days ago • 96 upvotes
Good SFT Optimizes for SFT, Better SFT Prepares for Reinforcement Learning • arXiv:2602.01058 • Published 24 days ago • 41 upvotes
Hybrid Linear Attention Done Right: Efficient Distillation and Effective Architectures for Extremely Long Contexts • arXiv:2601.22156 • Published 26 days ago • 13 upvotes
iFSQ: Improving FSQ for Image Generation with 1 Line of Code • arXiv:2601.17124 • Published Jan 23 • 33 upvotes
ConceptMoE: Adaptive Token-to-Concept Compression for Implicit Compute Allocation • arXiv:2601.21420 • Published 27 days ago • 42 upvotes
Scaling Embeddings Outperforms Scaling Experts in Language Models • arXiv:2601.21204 • Published 27 days ago • 100 upvotes
Lost in the Prompt Order: Revealing the Limitations of Causal Attention in Language Models • arXiv:2601.14152 • Published Jan 20 • 5 upvotes
The Flexibility Trap: Why Arbitrary Order Limits Reasoning Potential in Diffusion Language Models • arXiv:2601.15165 • Published Jan 21 • 72 upvotes
Elastic Attention: Test-time Adaptive Sparsity Ratios for Efficient Transformers • arXiv:2601.17367 • Published Jan 24 • 33 upvotes