DeBERTa-v3-Small for Natural Questions Classification
This model is a fine-tuned version of microsoft/deberta-v3-small on the Natural Questions dataset. It classifies question-context pairs into three categories: No Answer, Has Answer, or Yes/No, achieving 85.42% accuracy and a macro F1 score of 82.34% on the validation set.
Model Details
Model Description
This is a DeBERTa-v3-Small model fine-tuned for question-answering classification. Given a question and context, it predicts whether:
- 🔴 No Answer (Label 0): The context doesn't contain an answer
- 🟢 Has Answer (Label 1): The context contains a specific answer
- 🔵 Yes/No (Label 2): The question requires a YES/NO response
The model was trained on the Natural Questions dataset as part of the TensorFlow 2.0 Question Answering Kaggle competition.
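For reference, the label mapping described above can be written as a small dictionary; the question-context pairings in the comments are illustrative only, not taken from the dataset.

```python
# Label scheme for the three classes described above (illustrative sketch).
ID2LABEL = {0: "No Answer", 1: "Has Answer", 2: "Yes/No"}
LABEL2ID = {label: idx for idx, label in ID2LABEL.items()}

# Illustrative pairings (not dataset examples):
#   "Who wrote Hamlet?"    + a paragraph about Shakespeare      -> 1 (Has Answer)
#   "Who wrote Hamlet?"    + a paragraph about the Eiffel Tower -> 0 (No Answer)
#   "Is Hamlet a tragedy?" + a paragraph describing the play    -> 2 (Yes/No)
```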
- Developed by: [Your Name]
- Funded by [optional]: Self-funded / Academic Project
- Shared by [optional]: [Your Organization/University]
- Model type: Transformer-based Sequence Classification (DeBERTa-v3)
- Language(s) (NLP): English (en)
- License: MIT
- Finetuned from model: microsoft/deberta-v3-small
Model Sources
- Repository: GitHub
- Paper: DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training
- Demo: Gradio Space
Uses
Direct Use
The model can be used directly for:
- Question Answering System Pre-filtering: Filter out unanswerable questions before expensive processing (a sketch follows this list)
- Search Result Classification: Determine if search results contain relevant answers
- Customer Support Routing: Route questions based on answer availability
- Educational Assessment: Evaluate if reading passages can answer questions
- Information Retrieval: Assess document relevance for QA tasks
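As a concrete illustration of the pre-filtering use case, the sketch below gates an expensive QA stage on this classifier's verdict. It assumes the checkpoint name used later in this card; `should_run_qa` and the 0.5 threshold are hypothetical, not part of the released code.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL = "mohamedsa1/deberta-v3-nq-classification"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

def should_run_qa(question: str, context: str, no_answer_threshold: float = 0.5) -> bool:
    """Hypothetical pre-filter: skip the downstream QA stage when 'No Answer' dominates."""
    text = f"Question: {question} Context: {context}"
    inputs = tokenizer(text, return_tensors="pt", max_length=256, truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    return probs[0].item() < no_answer_threshold  # label 0 = No Answer
```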
Downstream Use
The model serves as a foundation for:
- Multi-stage QA Pipelines: First stage before extractive/generative QA models (see the sketch after this list)
- Hybrid QA Systems: Combine with span extraction for end-to-end QA
- Dialog Systems: Determine if chatbot has sufficient context
- Domain Adaptation: Fine-tune on domain-specific datasets
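A minimal sketch of the multi-stage idea, with this classifier as the first stage and deepset/roberta-base-squad2 as an example span extractor. The extractor choice and the label strings checked below are assumptions that depend on the deployed configs.

```python
from transformers import pipeline

# Stage 1: answerability classifier (this model); Stage 2: any extractive QA model.
clf = pipeline("text-classification", model="mohamedsa1/deberta-v3-nq-classification")
extractor = pipeline("question-answering", model="deepset/roberta-base-squad2")

def answer(question: str, context: str):
    verdict = clf(f"Question: {question} Context: {context}")[0]
    # The returned label string depends on the config's id2label mapping.
    if verdict["label"] in ("No Answer", "LABEL_0"):
        return {"answer": None, "reason": "context judged unanswerable", "score": verdict["score"]}
    return extractor(question=question, context=context)
```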
Out-of-Scope Use
❌ Not suitable for:
- Extractive answer span prediction (only classifies, doesn't extract)
- Generative question answering
- Non-English languages
- Very long documents (>256 tokens without truncation)
- Medical/legal decision-making
- Fact verification
Bias, Risks, and Limitations
Limitations:
- Context limited to 256 tokens (a chunking workaround is sketched after this list)
- Wikipedia-biased training data
- Trained on 10,000 examples (subset of full dataset)
- May struggle with complex reasoning questions
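One possible workaround for the 256-token limit, assuming the checkpoint name below: split long documents into overlapping word-level chunks, classify each chunk, and keep the most confident answerable verdict. This is a sketch, not a feature of the released model.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL = "mohamedsa1/deberta-v3-nq-classification"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()
LABELS = ["No Answer", "Has Answer", "Yes/No"]

def classify_long(question: str, document: str, stride: int = 128) -> str:
    """Classify a long document by scoring overlapping chunks independently."""
    words = document.split()
    best_label, best_prob = 0, 0.0  # default to "No Answer"
    for start in range(0, max(len(words), 1), stride):
        chunk = " ".join(words[start:start + 2 * stride])
        text = f"Question: {question} Context: {chunk}"
        inputs = tokenizer(text, return_tensors="pt", max_length=256, truncation=True)
        with torch.no_grad():
            probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
        label = int(probs.argmax())
        # Prefer any answerable chunk over "No Answer" verdicts.
        if label != 0 and probs[label].item() > best_prob:
            best_label, best_prob = label, probs[label].item()
    return LABELS[best_label]
```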
Biases:
- Better on factual "what/when/where" questions
- Inherits biases from Wikipedia and base model
- Performance varies across domains
Risks:
- May be overconfident on ambiguous inputs
- False negatives on complex phrasings
- Vulnerable to adversarial examples
Recommendations
Users should:
- ✅ Implement human review for critical applications
- ✅ Monitor performance across different domains
- ✅ Calibrate confidence thresholds for your use case (see the sketch below)
- ✅ Test on representative samples
- ✅ Use as one component in multi-model systems
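A minimal sketch of the threshold-calibration recommendation: abstain and route to human review when the top softmax probability is low. The 0.80 threshold is illustrative; tune it on your own validation data.

```python
import torch

LABELS = ["No Answer", "Has Answer", "Yes/No"]

def classify_with_abstention(logits: torch.Tensor, threshold: float = 0.80):
    """Return (label, confidence), or ('abstain', confidence) below the threshold."""
    probs = torch.softmax(logits, dim=-1)[0]
    label_id = int(probs.argmax())
    confidence = probs[label_id].item()
    if confidence < threshold:
        return "abstain", confidence  # hand off to human review
    return LABELS[label_id], confidence
```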
How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import DebertaV2Tokenizer, DebertaV2ForSequenceClassification
import torch

# Load model and tokenizer
model_name = "mohamedsa1/deberta-v3-nq-classification"
tokenizer = DebertaV2Tokenizer.from_pretrained(model_name)
model = DebertaV2ForSequenceClassification.from_pretrained(model_name)
model.eval()

# Prepare input: concatenate question and context in the format used during training
question = "What is the capital of France?"
context = "Paris is the capital and most populous city of France."
text = f"Question: {question} Context: {context}"

# Inference
inputs = tokenizer(text, return_tensors="pt", max_length=256, truncation=True, padding=True)
with torch.no_grad():
    outputs = model(**inputs)
probs = torch.nn.functional.softmax(outputs.logits, dim=-1)[0]
prediction = torch.argmax(probs).item()

# Results
labels = ["No Answer", "Has Answer", "Yes/No"]
print(f"Prediction: {labels[prediction]}")
print(f"Confidence: {probs[prediction].item():.2%}")
```
Evaluation results
All metrics are self-reported on the Natural Questions (Simplified) validation set:
- Accuracy: 85.42
- Macro F1: 82.34
- Macro Precision: 84.21
- Macro Recall: 83.67