Request for MLX and GGUF versions
#2 by Narutoouz · opened
Awesome model, but please add support for the MLX and GGUF formats.
Yes, we need this in GGUF format.
gguf when?
Guys, to convert this to GGUF, support for the architecture must be implemented in llama.cpp first, since that is the tool used for the conversion. Stop asking and watch the llama.cpp releases on GitHub.
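For anyone wanting to try the conversion themselves once llama.cpp supports the architecture, a minimal sketch of the usual workflow (the model path and the `q8_0` quantization type are illustrative assumptions, not requirements):

```shell
# Sketch: converting a Hugging Face checkpoint to GGUF with llama.cpp's
# conversion script, assuming the model architecture is already supported.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# /path/to/model is a placeholder for the locally downloaded HF model dir;
# --outtype q8_0 is one common quantization choice among several.
python convert_hf_to_gguf.py /path/to/model \
    --outfile model-q8_0.gguf \
    --outtype q8_0
```

If the architecture is not yet supported, the script will fail with an unrecognized-architecture error, which is exactly why llama.cpp support has to land first.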
Nobody will use your model if you don't publish a GGUF.
99% of the time, the model creators don't publish GGUF files themselves. That is done by third parties like myself, Unsloth, etc. Whether you use the model or not, who cares? It's free!
Has llama.cpp added support for this? Love this model, btw; it's pretty good for its size.
@macandchiz Appreciate your efforts!