Paper: [Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time](https://arxiv.org/abs/2203.05482)
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

This model was merged using the Linear merge method, which takes an element-wise weighted average of the input checkpoints' parameters.

The following models were included in the merge:

* [allenai/Olmo-3.1-7B-RL-Zero-Math](https://huggingface.co/allenai/Olmo-3.1-7B-RL-Zero-Math)
* [allenai/Olmo-3.1-7B-RL-Zero-Code](https://huggingface.co/allenai/Olmo-3.1-7B-RL-Zero-Code)
The following YAML configuration was used to produce this model:
```yaml
# Linear merge of OLMo-3.1 Math and Code RL models
# Output = 0.5 * Math + 0.5 * Code
#
# Usage:
#   modal run modal_merge.py --config examples/olmo3.1-linear-merge.yaml --hf-repo pmahdavi/Olmo-3.1-7B-Math-Code
models:
  - model: allenai/Olmo-3.1-7B-RL-Zero-Math
    parameters:
      weight: 0.5
  - model: allenai/Olmo-3.1-7B-RL-Zero-Code
    parameters:
      weight: 0.5
merge_method: linear
dtype: bfloat16
```
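
For reference, the sketch below shows in plain PyTorch what this linear merge computes: an element-wise average of the two checkpoints' parameter tensors, as in the model soups paper. It is a minimal illustration rather than mergekit's actual implementation, and it assumes both repos share the same architecture and tensor names; the output directory name is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM

# Load both fine-tuned checkpoints in bfloat16, matching `dtype` in the config above.
math_model = AutoModelForCausalLM.from_pretrained(
    "allenai/Olmo-3.1-7B-RL-Zero-Math", torch_dtype=torch.bfloat16
)
code_model = AutoModelForCausalLM.from_pretrained(
    "allenai/Olmo-3.1-7B-RL-Zero-Code", torch_dtype=torch.bfloat16
)

math_state = math_model.state_dict()
code_state = code_model.state_dict()

# Output = 0.5 * Math + 0.5 * Code, applied to every parameter tensor.
merged_state = {
    name: 0.5 * math_state[name] + 0.5 * code_state[name]
    for name in math_state
}

# Reuse one model object as the container for the averaged weights, then save.
math_model.load_state_dict(merged_state)
math_model.save_pretrained("Olmo-3.1-7B-Math-Code")  # illustrative output path
```

The merged checkpoint loads like any other causal LM, e.g. `AutoModelForCausalLM.from_pretrained("Olmo-3.1-7B-Math-Code")`, so inference cost is identical to either input model.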