This model, DeProgrammer/Jan-v3-4B-base-instruct-MNN-Q8, was converted to the MNN format from janhq/Jan-v3-4B-base-instruct using llmexport.py from MNN version 3.4.0 with `--quant_bit 8` (8-bit quantization) and otherwise default settings.
Inference can be run via MNN, e.g., with MNN Chat on Android.
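A conversion along these lines can be reproduced with MNN's llmexport.py. This is a minimal sketch, not the exact command used for this card: the local model path and the `--export mnn` flag shape are assumptions based on the tool's typical usage, and flag names may differ across MNN releases.

```shell
# Sketch of an 8-bit MNN export, assuming MNN 3.4.0's llmexport.py.
# --path points at a local copy of the source Hugging Face model;
# --quant_bit 8 matches the quantization stated above, all else default.
python llmexport.py \
    --path ./Jan-v3-4B-base-instruct \
    --export mnn \
    --quant_bit 8
```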
Model tree for DeProgrammer/Jan-v3-4B-base-instruct-MNN-Q8
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Finetuned: janhq/Jan-v3-4B-base-instruct