inference-optimization/Qwen3-Next-80B-A3B-Instruct-quantized.w4a16
FP8-dynamic, FP8-block, NVFP4, INT4, and INT8 quantized versions of the Qwen3-Next-80B-A3B-Instruct and Qwen3-Next-80B-A3B-Thinking models