inference-optimization's Collections
  • NVIDIA-Nemotron-3-Nano-30B-A3B Quantized Models
  • Qwen3-Next-80B-A3B Quantized Models
  • Mixed Precision Models
  • KV Cache Quantization

Qwen3-Next-80B-A3B Quantized Models

Updated about 1 hour ago

FP8-dynamic, FP8-block, NVFP4, INT4, and INT8 versions of the Qwen3-Next-80B-A3B-Instruct and Qwen3-Next-80B-A3B-Thinking models.
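As a minimal sketch, one of these quantized checkpoints can be served with vLLM, which picks up the quantization scheme from the config shipped in the repository. The model ID below is the FP8-dynamic Instruct checkpoint from the list in this collection; the tensor-parallel size, context length, and sampling settings are illustrative assumptions, not recommendations from the model authors.

```python
# Hedged sketch: serving an FP8-dynamic checkpoint from this collection with vLLM.
# vLLM reads the quantization config embedded in the checkpoint; the resource
# settings below are placeholders and should be sized to the available GPUs.
from vllm import LLM, SamplingParams

llm = LLM(
    model="inference-optimization/Qwen3-Next-80B-A3B-Instruct-FP8-dynamic",
    tensor_parallel_size=4,  # assumption: adjust to your GPU count
    max_model_len=8192,      # assumption: lower this to fit memory
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(
    ["Explain FP8 dynamic quantization in one paragraph."], params
)
print(outputs[0].outputs[0].text)
```

The same pattern applies to the FP8-block, NVFP4, and w4a16 repositories listed below, swapping only the model ID.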


  • inference-optimization/Qwen3-Next-80B-A3B-Instruct-quantized.w4a16

    Updated 10 days ago • 46

  • inference-optimization/Qwen3-Next-80B-A3B-Instruct-FP8-block

    Text Generation • 80B • Updated 20 minutes ago • 76

  • inference-optimization/Qwen3-Next-80B-A3B-Instruct-FP8-dynamic

    Text Generation • 80B • Updated 11 minutes ago • 80

  • inference-optimization/Qwen3-Next-80B-A3B-Instruct-NVFP4

    Text Generation • Updated 1 minute ago • 31

  • inference-optimization/Qwen3-Next-80B-A3B-Thinking-NVFP4

    Text Generation • Updated less than a minute ago • 22

  • inference-optimization/Qwen3-Next-80B-A3B-Thinking-FP8-block

    Text Generation • 80B • Updated 14 minutes ago • 40

  • inference-optimization/Qwen3-Next-80B-A3B-Thinking-FP8-dynamic

    Text Generation • 80B • Updated 10 minutes ago • 72

  • inference-optimization/Qwen3-Next-80B-A3B-Thinking-quantized.w4a16

    Updated 10 days ago • 47