CommonLit Student Summaries Evaluation: Kaggle Competition Solution

🥇 Achievement: Placed 858th out of 2,106 competitors, landing in the top 41%. With a score of 0.489, I finished just 0.038 behind the winning solution's 0.451. The competition saw a significant shake-up between the public and private leaderboards, a reminder of how unpredictable real-world machine learning challenges can be.

🚀 Introduction

Summary writing is an essential skill, pivotal for enhancing reading comprehension, especially among second language learners and students with learning disabilities. The goal of this Kaggle competition, hosted by CommonLit, was to develop a model that can effectively evaluate the quality of summaries written by students from grades 3-12.

This repository hosts my solution, which scored within 0.038 of the winning entry. Throughout the competition I explored different strategies and, in hindsight, learned a great deal from the choices I made.

🎯 Competition Objective

To build a model that evaluates:

  • How well a student represents the main idea and details of a source text.
  • The clarity, precision, and fluency of the language used in the summary.

These evaluations will aid teachers in assessing summary quality and help educational platforms deliver instant feedback.
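
For context, submissions were scored on two targets per summary (content and wording) using MCRMSE, the mean columnwise root mean squared error. The snippet below is a minimal sketch of that metric; the function name, array shapes, and toy values are illustrative assumptions, not taken from the competition's scoring code.

```python
# Minimal sketch of MCRMSE (mean columnwise RMSE): RMSE is computed per
# target column (content, wording) and then averaged across columns.
# Column order and the toy data below are illustrative only.
import numpy as np

def mcrmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean columnwise RMSE over targets of shape (n_samples, n_targets)."""
    rmse_per_column = np.sqrt(np.mean((y_true - y_pred) ** 2, axis=0))
    return float(np.mean(rmse_per_column))

# Toy example with two targets per summary: [content, wording]
y_true = np.array([[0.5, -0.2], [1.3, 0.7], [-0.4, 0.1]])
y_pred = np.array([[0.6, -0.1], [1.0, 0.9], [-0.2, 0.3]])
print(f"MCRMSE: {mcrmse(y_true, y_pred):.3f}")
```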

💼 Context

While automated evaluation has made significant strides for student writing such as argumentative or narrative essays, summarization remains a complex area. The key challenge is that models must take into account both the student's summary and the original, longer source text.

🏢 About CommonLit

CommonLit, the competition host, is a nonprofit education tech organization striving to empower students with critical reading, writing, communication, and problem-solving skills. They are joined in their mission by The Learning Agency Lab, Vanderbilt University, and Georgia State University.

📦 Repository Contents

  • solution.ipynb: The Jupyter notebook containing my best public-leaderboard solution (see on Kaggle).
  • notebooks/: Other notable notebooks, including the unselected notebook that achieved my best score on the private leaderboard (check here).
  • README.md: This README file.

💡 Insights & Learnings

The competition taught me a lot:

  • Leaderboard Shake-up: The massive shake-up between public and private leaderboards was an invaluable lesson on the importance of robust model validation.
  • Decision-making: Second-guessing my final submission choice was a significant learning moment; the notebook I left unselected would have placed me higher on the leaderboard. Trusting the process is as crucial as the technicalities.

🙌 Acknowledgments

Huge thanks to CommonLit for hosting this enlightening competition. The journey was full of lessons and challenges that pushed me closer to my goal of getting better at machine learning. I hope my solution provides valuable insights to fellow Kagglers and NLP enthusiasts.

🖖 Future Endeavors

With this competition under my belt, I'm optimistic and excited about participating in more such challenges. As always, the aim remains to learn, grow, and contribute to the larger machine learning community.

📬 Contact

For any questions or discussions, feel free to reach out to me directly on Kaggle/Oluwatobi Betiku or LinkedIn/oluwatobi-betiku.


Happy Coding & Kaggle-ing! ✨
