# Qwen3-Coder-Next-GGUF

This model was converted to GGUF format from Qwen/Qwen3-Coder-Next using GGUF Forge. The base model has 80B parameters and uses the `qwen3next` architecture.

## Quants

The following quants are available: Q2_K, Q3_K_S, Q3_K_M, Q3_K_L, Q4_0, Q4_K_S, Q4_K_M, Q5_0, Q5_K_S, Q5_K_M, Q6_K, Q8_0.
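
If you only need one quant, you can fetch a single file rather than cloning the whole repo. Here is a minimal sketch using `huggingface_hub`; the exact filename is an assumption, so match it against the repo's file listing:

```python
# Minimal sketch: download a single quant with huggingface_hub.
# The filename below is hypothetical -- check the repo's file list
# for the actual naming of each quant.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Akicou/Qwen3-Coder-Next-GGUF",
    filename="Qwen3-Coder-Next-Q4_K_M.gguf",  # hypothetical filename
)
print(path)  # local path to the cached GGUF file
```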

## Ollama Support

Full Ollama support is provided: any sharded GGUF output is merged into a single file after quantization, so each quant in this repo can be imported into Ollama directly.
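
Since the published files are already merged, you can point Ollama straight at them. If you are instead working from raw sharded output, the merge and import steps look roughly like the sketch below; the shard names and model name are hypothetical:

```python
# Minimal sketch: merge a sharded GGUF with llama.cpp's
# llama-gguf-split tool, then import the result into Ollama.
import subprocess
from pathlib import Path

first_shard = "Qwen3-Coder-Next-Q4_K_M-00001-of-00003.gguf"  # hypothetical shard name
merged = "Qwen3-Coder-Next-Q4_K_M.gguf"

# llama-gguf-split --merge takes the first shard and the output path.
subprocess.run(["llama-gguf-split", "--merge", first_shard, merged], check=True)

# Point an Ollama Modelfile at the merged single-file GGUF.
Path("Modelfile").write_text(f"FROM ./{merged}\n")
subprocess.run(["ollama", "create", "qwen3-coder-next", "-f", "Modelfile"], check=True)
```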

## Conversion Stats

| Metric | Value |
| --- | --- |
| Job ID | `110acb9f-02d0-4c49-9f12-73ac6e47e8f1` |
| GGUF Forge Version | v5.8 |
| Total Time | 4.4h |
| Avg Time per Quant | 27.2min |

### Step Breakdown

- Download: 21.2min
- FP16 Conversion: 20.6min
- Quantization: 3.7h
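
These steps account for the full runtime: 21.2min + 20.6min + 3.7h (≈222min) comes to roughly 264min, matching the 4.4h total.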

πŸš€ Convert Your Own Models

Want to convert more models to GGUF?

👉 [gguforge.com](https://gguforge.com): a free hosted GGUF conversion service. Log in with Hugging Face and request conversions instantly!

## Links

- 🌐 Free Hosted Service: [gguforge.com](https://gguforge.com)
- 🛠️ Self-host GGUF Forge: GitHub
- 📦 llama.cpp (quantization engine): GitHub (a local inference sketch follows this list)
- 💬 Community & Support: Discord

*Converted automatically by GGUF Forge v5.8*
