unispeech-sat-base

This model was trained from scratch on the minds14 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4374
  • WER: 0.2311
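
WER (word error rate) counts word-level substitutions, insertions, and deletions against the reference transcript, divided by the number of reference words. A minimal sketch of how the metric is commonly computed with the evaluate library (the example strings are made up, not drawn from this evaluation set):

```python
# pip install evaluate jiwer
import evaluate

wer = evaluate.load("wer")

# Illustrative strings only, not taken from the minds14 evaluation set.
predictions = ["i would like to check my balance", "turn on the lights"]
references = ["i would like to check my balance", "turn on the light"]

# WER = (substitutions + insertions + deletions) / reference word count
print(wer.compute(predictions=predictions, references=references))
```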

Model description

More information needed

Intended uses & limitations

More information needed
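
The card does not yet state intended uses, but the reported WER indicates an automatic speech recognition fine-tune. A minimal transcription sketch, assuming the checkpoint is a CTC model with a Wav2Vec2-style processor (the checkpoint path and the dummy waveform are placeholders, not taken from the card):

```python
import numpy as np
import torch
from transformers import AutoModelForCTC, AutoProcessor

# Hypothetical local path; the card does not give the repository id.
checkpoint = "path/to/unispeech-sat-base-minds14"
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForCTC.from_pretrained(checkpoint)

# Placeholder input: one second of silence at 16 kHz instead of a real recording.
waveform = np.zeros(16_000, dtype=np.float32)
inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: pick the best token per frame, then collapse repeats and blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```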

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 2
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 4
  • optimizer: Adafactor (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • training_steps: 3000
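
These values map directly onto transformers.TrainingArguments; a minimal sketch reconstructing the configuration above (the output directory name is an assumption, not given in the card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="unispeech-sat-base-minds14",  # assumed name
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # 2 x 2 = total train batch size of 4
    optim="adafactor",              # Adafactor with no extra arguments
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=3000,                 # train for a fixed number of steps, not epochs
)
```

Adafactor keeps optimizer memory low, which pairs naturally with the small per-device batch size used here.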

Training results

Training Loss   Epoch     Step   Validation Loss   WER
0.3806          4.4267    500    0.2892            0.1793
0.3067          8.8533    1000   0.4070            0.2058
0.3009          13.2756   1500   0.4186            0.2150
0.2842          17.7022   2000   0.6049            0.2434
0.2608          22.1244   2500   0.4818            0.2335
0.2639          26.5511   3000   0.4374            0.2311
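
Validation WER values like those above are typically produced by a compute_metrics callback passed to the Trainer. A minimal sketch, assuming a CTC model with a Wav2Vec2-style processor (the checkpoint path is a placeholder):

```python
# pip install evaluate jiwer
import numpy as np
import evaluate
from transformers import Wav2Vec2Processor

# Hypothetical path; the actual repository id is not given in the card.
processor = Wav2Vec2Processor.from_pretrained("path/to/unispeech-sat-base-minds14")
wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    """Decode CTC predictions and score them against the reference transcripts."""
    pred_ids = np.argmax(pred.predictions, axis=-1)
    # The Trainer masks padded label positions with -100; restore the pad id before decoding.
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    return {"wer": wer_metric.compute(predictions=pred_str, references=label_str)}
```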

Framework versions

  • Transformers 4.51.0
  • PyTorch 2.8.0+cu129
  • Datasets 3.6.0
  • Tokenizers 0.21.4
Model size

  • 94.4M parameters (F32, Safetensors)