Mitigating a language model's overconfidence in NLI predictions, using MultiNLI hypotheses with randomized word order, PAWS (paraphrase), and Winogrande (anaphora).
Updated May 28, 2024 - Jupyter Notebook
Fine-tuning a RoBERTa sentiment-analysis model on tweets about the Coachella 2015 music festival lineup.