which toolcalling format for opencode?

#2
by dfsafdsf - opened

I ran your model on llama.cpp with the --jinja option, and also tried --chat-template chatml. opencode receives

```
<function=bash>
<parameter=command>ls -la
<parameter=description>List directory contents
```

and it is not parsed as a tool call.
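For anyone hitting the same thing before a native parser is available: the raw markup above can be converted into a structured tool call by a small client-side helper. A minimal sketch, assuming the `<function=NAME>` / `<parameter=NAME>value` layout shown above (the helper name and regexes are my own, not part of opencode or llama.cpp):

```python
import re

def parse_xml_tool_call(text):
    """Parse the model's XML-ish tool-call markup into a {name, arguments}
    dict. Hypothetical helper, not an official opencode/llama.cpp API."""
    m = re.search(r"<function=(\w+)>", text)
    if not m:
        return None  # no tool call in this chunk
    args = {}
    # each parameter is on its own line: <parameter=NAME>value
    for pm in re.finditer(r"<parameter=(\w+)>([^\n<]*)", text):
        args[pm.group(1)] = pm.group(2).strip()
    return {"name": m.group(1), "arguments": args}

raw = (
    "<function=bash>\n"
    "<parameter=command>ls -la\n"
    "<parameter=description>List directory contents\n"
)
print(parse_xml_tool_call(raw))
```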

Multilingual-Multimodal-NLP org

Hi, you can try setting --tool-call-parser qwen3_xml. This should work for the tool call parsing format used by this model.

Example:

```shell
vllm serve /path/to/your/model \
    --port 8080 \
    --tensor-parallel-size 1 \
    --data-parallel-size 8 \
    --served-model-name InCoder-32B \
    --disable-log-requests \
    --max-model-len 131072 \
    --gpu-memory-utilization 0.9 \
    --trust-remote-code \
    --enable-auto-tool-choice \
    --tool-call-parser qwen3_xml
```
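With the server running like that, clients talk to vLLM's OpenAI-compatible `/v1/chat/completions` endpoint and declare tools in the standard OpenAI schema; the `qwen3_xml` parser then returns structured `tool_calls` instead of raw `<function=...>` text. A sketch of the request body, using the served model name and port from the command above (the `bash` tool definition here is just an illustration):

```python
import json

# Hypothetical request body for vLLM's OpenAI-compatible
# /v1/chat/completions endpoint at http://localhost:8080.
payload = {
    "model": "InCoder-32B",  # matches --served-model-name above
    "messages": [
        {"role": "user", "content": "List the files in the current directory."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "bash",
                "description": "Run a shell command",
                "parameters": {
                    "type": "object",
                    "properties": {"command": {"type": "string"}},
                    "required": ["command"],
                },
            },
        }
    ],
    "tool_choice": "auto",
}
body = json.dumps(payload)
# POST `body` to the endpoint; with --enable-auto-tool-choice and
# --tool-call-parser qwen3_xml, the response message carries a
# structured tool_calls array rather than the raw markup.
```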

@zwpride I need this on llama.cpp, though.
