
CoEdIT Base (ONNX)

ONNX export of jbochi/coedit-base (250M parameters) with encoder-decoder architecture and KV cache support.

CoEdIT is a T5-based model fine-tuned on the grammarly/coedit dataset for text editing tasks including grammar correction, simplification, coherence, and paraphrasing. This base variant is fine-tuned from google/flan-t5-base.
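CoEdIT models are steered by a natural-language instruction prefixed to the input text. A minimal sketch of prompt construction, assuming instruction wordings in the style of the grammarly/coedit dataset (the exact prefix strings below are illustrative, not an exhaustive or authoritative list):

```python
# Illustrative CoEdIT-style task instructions (wording is an assumption,
# modeled on the grammarly/coedit dataset's instruction format).
TASK_PREFIXES = {
    "gec": "Fix grammatical errors in this sentence:",
    "simplify": "Simplify this sentence:",
    "coherence": "Make this text more coherent:",
    "paraphrase": "Paraphrase this sentence:",
}

def build_prompt(task: str, text: str) -> str:
    """Prefix the edit instruction to the raw input text."""
    return f"{TASK_PREFIXES[task]} {text}"

print(build_prompt("gec", "She don't likes swimming."))
# Fix grammatical errors in this sentence: She don't likes swimming.
```

The inference4j wrapper below handles this prefixing internally; the sketch only shows what the model sees at the tokenizer boundary.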

Converted for use with inference4j, an inference-only AI library for Java.

Original Source

Usage with inference4j

try (var corrector = CoeditGrammarCorrector.coeditBase().build()) {
    System.out.println(corrector.correct("She don't likes swimming."));
    // She doesn't like swimming.
}

Model Details

Architecture: T5 encoder-decoder (250M parameters)
Base model: google/flan-t5-base
Training data: grammarly/coedit
Task: Grammar correction, text editing
Tokenizer: SentencePiece (32,128-token vocabulary)
Original framework: PyTorch (transformers)
Export method: Hugging Face Optimum (encoder-decoder with KV cache)
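The export listed above can be reproduced with Optimum's CLI. A sketch, assuming `optimum` with its ONNX exporter extras is installed; the output directory name is arbitrary:

```shell
# Export jbochi/coedit-base to ONNX with KV-cache ("with-past") decoder variants.
optimum-cli export onnx \
  --model jbochi/coedit-base \
  --task text2text-generation-with-past \
  coedit-base-onnx/
```

The `-with-past` task suffix tells Optimum to also export decoder graphs that accept cached key/value states, which is what enables fast autoregressive generation at inference time.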

License

This model is licensed under the Apache License 2.0. Original model by jbochi, trained on the Grammarly CoEdIT dataset.
