grimjim/Equatorium-v2-12B

This is a merge of pre-trained language models created using mergekit.

I wasn't satisfied with the prompt attention in Equatorium-v1-12B. To repair it, I went back to the original Instruct model for the first 10 and final 4 layers, with the goal of improving/restoring overall attention and downstream coherence. The mergekit implementation made this possible by allowing layerwise specification of model weights. In testing (temp=1, minP=0.02), the result seems sufficiently coherent and varied.
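
As a rough sketch of those sampling settings in code (assuming a recent transformers release with min_p support; the prompt and token budget here are placeholders):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "grimjim/Equatorium-v2-12B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings used in the informal testing described above.
outputs = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.0,
    min_p=0.02,
    max_new_tokens=256,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))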

Merge Details

Merge Method

This model was merged using the Task Arithmetic merge method, with grimjim/mistralai-Mistral-Nemo-Base-2407 as the base.
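
As a rough illustration of what task arithmetic does (this is not the mergekit code itself): each contributing model is reduced to a task vector, i.e. its delta from the base, and a weighted sum of those deltas is added back to the base. A toy sketch on a single tensor, assuming normalize: true divides by the sum of weights:

import torch

def task_arithmetic(base, tuned, weights, normalize=True):
    # Task vector per model: (fine-tuned - base), scaled by its weight.
    deltas = [w * (t - base) for w, t in zip(weights, tuned)]
    total = sum(deltas)
    if normalize:
        total = total / max(sum(weights), 1e-8)  # mirrors `normalize: true`
    return base + total

base = torch.zeros(4)
merged = task_arithmetic(base, [torch.ones(4), 2 * torch.ones(4)], [0.75, 0.25])
print(merged)  # weighted combination of task vectors added back onto the base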

Models Merged

The following models were included in the merge:

- grimjim/mistralai-Mistral-Nemo-Instruct-2407
- grimjim/AbMagnolia-v1-12B
- grimjim/Magnolia-v3-12B

Configuration

The following YAML configuration was used to produce this model:

base_model: grimjim/mistralai-Mistral-Nemo-Base-2407
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: true
models:
  - model: grimjim/mistralai-Mistral-Nemo-Base-2407
  - layer_range: [0, 10]
    model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
    parameters:
      weight: 1.00
  - layer_range: [10, 36]
    model: grimjim/AbMagnolia-v1-12B
    parameters:
      weight: 0.75
  - layer_range: [8, 36]
    model: grimjim/Magnolia-v3-12B
    parameters:
      weight: 0.25
  - layer_range: [36, 40]
    model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
    parameters:
      weight: 1.00
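
To reproduce the merge, this config can be fed to mergekit, most simply via the mergekit-yaml CLI. A rough sketch of the equivalent Python API call follows; the config filename is hypothetical, and the exact import paths and option names reflect recent mergekit releases, so check the mergekit README if they have drifted:

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "equatorium-v2.yml" is a hypothetical filename holding the YAML above.
with open("equatorium-v2.yml") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    out_path="./Equatorium-v2-12B",
    options=MergeOptions(
        cuda=False,           # set True if a GPU is available
        copy_tokenizer=True,
        lazy_unpickle=True,
        low_cpu_memory=False,
    ),
)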