Paper: *Model Stock: All we need is just a few fine-tuned models* (arXiv:2403.19522)
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with output/stop_it_nerd as the base.
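Model Stock merges several fine-tuned checkpoints by interpolating between their uniform average and the pretrained (base) weights, with an interpolation ratio derived from the angle between the fine-tuned "task vectors". The sketch below is an illustrative NumPy re-derivation of that idea, not mergekit's actual implementation; the function name and the pairwise-cosine angle estimate are assumptions for the sake of the example.

```python
import numpy as np

def model_stock_merge(finetuned, pretrained):
    """Merge N fine-tuned weight vectors with their pretrained anchor.

    Illustrative sketch of the Model Stock ratio (arXiv:2403.19522):
    treats one layer's weights as a flat vector and estimates the angle
    between task vectors by averaging pairwise cosine similarities.
    """
    n = len(finetuned)
    deltas = [w - pretrained for w in finetuned]  # task vectors
    cos_sum, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            a, b = deltas[i], deltas[j]
            cos_sum += float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
            pairs += 1
    cos_theta = cos_sum / pairs
    # Closed-form ratio from the paper: t = N cos(theta) / (1 + (N-1) cos(theta))
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = sum(finetuned) / n  # uniform average of the fine-tuned weights
    # Interpolate from the pretrained anchor toward the average, scaled by t
    return t * w_avg + (1 - t) * pretrained
```

When the fine-tuned deltas all point the same way (cos θ → 1) the result approaches the plain average; when they are mutually orthogonal (cos θ → 0) it collapses back to the base weights.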
The following models were included in the merge (the `base+adapter` notation means the LoRA adapter is applied to the base model before merging):

* output/stop_it_nerd + Azazelle/Llama-3-8B-Abomination-LORA
* output/stop_it_nerd + Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B
* output/stop_it_nerd + ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
* output/stop_it_nerd + Azazelle/Llama-3-LongStory-LORA
* output/stop_it_nerd + Azazelle/ANJIR-ADAPTER-128
* output/stop_it_nerd + Azazelle/Llama3_RP_ORPO_LoRA
* output/stop_it_nerd + Azazelle/RP_Format_QuoteAsterisk_Llama3
* output/stop_it_nerd + Azazelle/Theory_of_Mind_Llama3
* output/stop_it_nerd + Azazelle/Aura_Llama3
* output/stop_it_nerd + Azazelle/Luna_Llama3
* output/stop_it_nerd + Azazelle/BlueMoon_Llama3
* output/stop_it_nerd + Azazelle/Smarts_Llama3
* output/stop_it_nerd + Azazelle/llama3-8b-hikikomori-v0.4
* output/stop_it_nerd + Azazelle/Nimue-8B
* output/stop_it_nerd + Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B
The following YAML configuration was used to produce this model:
```yaml
base_model: output/stop_it_nerd
dtype: bfloat16
merge_method: model_stock
slices:
- sources:
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-8B-Abomination-LORA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B
  - layer_range: [0, 32]
    model: output/stop_it_nerd+ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-LongStory-LORA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/ANJIR-ADAPTER-128
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama3_RP_ORPO_LoRA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/RP_Format_QuoteAsterisk_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Theory_of_Mind_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Aura_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Luna_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/BlueMoon_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Smarts_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/llama3-8b-hikikomori-v0.4
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Nimue-8B
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B
  - layer_range: [0, 32]
    model: output/stop_it_nerd
```
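Assuming mergekit is installed, a configuration like the one above is rendered into a merged model with mergekit's `mergekit-yaml` entry point (the file and output names below are placeholders):

```shell
# pip install mergekit   (git-lfs is also needed to pull the component models)
# Save the YAML above as model_stock.yaml, then:
mergekit-yaml model_stock.yaml ./merged-model --cuda
```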