Joe (Joe57005)
AI & ML interests: None yet
Recent Activity:
- updated the collection "Models to try" 5 days ago
- updated the collection "Models to try" 14 days ago
- updated the collection "Models to try" 15 days ago
Organizations: None yet
For MOE 1.5B
Models to try
-
bunnycore/Gemma2-2b-function-calling-lora
Updated • 1
-
NickyNicky/gemma-2b-it_oasst2_all_chatML_function_calling_Agent_v1
Text Generation • 3B • Updated • 3 • 1
-
hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF
Text Generation • 1B • Updated • 393k • 39
-
gorilla-llm/gorilla-openfunctions-v2
Text Generation • Updated • 971 • 244
For finetune
Good for home automation
Large-context LLMs that work well with Home Assistant via a llama.cpp server running on a CPU with 16 GB of RAM (a runnable sketch follows this list).
-
inclusionAI/Ling-mini-2.0
Text Generation • 16B • Updated • 8.92k • 184
-
Orion-zhen/Qwen3-30B-A3B-Instruct-2507-IQK-GGUF
31B • Updated • 48 • 1
-
Intel/Qwen3-30B-A3B-Instruct-2507-gguf-q2ks-mixed-AutoRound
31B • Updated • 109 • 26
-
Tiiny/SmallThinker-21BA3B-Instruct
Text Generation • 22B • Updated • 87 • 111
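As a hedged starting point, the sketch below loads one such quantized GGUF with llama-cpp-python for CPU-only chat completion; the file path, context size, and thread count are assumptions for illustration, not values taken from this collection.

```python
# Minimal sketch: CPU-only inference over a quantized GGUF with llama-cpp-python.
# Assumptions: the package is installed and a GGUF file from one of the repos
# above has already been downloaded; path, n_ctx, and n_threads are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen3-30b-a3b-instruct-q2_k_s.gguf",  # hypothetical local file
    n_ctx=8192,    # large context window; larger values cost more of the 16 GB of RAM
    n_threads=8,   # CPU-only inference
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Turn off the kitchen lights."}]
)
print(reply["choices"][0]["message"]["content"])
```

For Home Assistant itself, the same package can expose an OpenAI-compatible endpoint (something like `python -m llama_cpp.server --model <file> --n_ctx 8192`) for the conversation integration to point at; treat the flags as a sketch to adapt.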
LLM Tools
-
GGUF Editor (Running)
🏢 86 • Edit GGUF metadata for Hugging Face models
-
mergekit-gui (Runtime error • Featured)
🔀 290 • Merge AI models using a YAML configuration file (see the sketch after this list)
-
GGUF My Repo (Running on A10G)
🦙 1.81k • Create quantized models from Hugging Face repos
-
SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs
Paper • 2512.04746 • Published • 13
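For mergekit-gui, the YAML it consumes looks roughly like the sketch below (a slerp merge of two same-architecture models); every name and number here is a placeholder rather than a recommendation from this page.

```yaml
# Rough sketch of a mergekit config. All values are placeholders: substitute
# real repo ids and the layer count of the chosen architecture.
slices:
  - sources:
      - model: org/model-a        # hypothetical repo id
        layer_range: [0, 32]
      - model: org/model-b        # hypothetical repo id
        layer_range: [0, 32]
merge_method: slerp
base_model: org/model-a           # slerp interpolates away from this base
parameters:
  t: 0.5                          # 0 keeps model-a, 1 keeps model-b
dtype: bfloat16
```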