---
license: cc-by-4.0
task_categories:
  - text-classification
language:
  - ar
---

# 📌 AR-AES: Arabic Automated Essay Scoring Dataset

The AR-AES dataset is the first publicly available resource designed to support research in Automated Essay Scoring (AES) for the Arabic language. It includes 2,046 manually graded essay responses collected from undergraduate students at Umm Al-Qura University in Makkah, Saudi Arabia, across a range of academic disciplines and essay types.

Each essay has been independently annotated by two human graders using structured rubrics, enabling the study of inter-rater reliability and the development of fair and interpretable AES systems. Rich metadata is included for each response, covering variables such as course name, student gender, exam type (online or traditional), typical answers ("gold answers") for each essay prompt in both Arabic and English, and detailed criterion-based scoring.
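Because each essay carries two independent scores, inter-rater agreement can be measured directly. The sketch below computes quadratic weighted kappa, a standard agreement metric for ordinal essay scores, between two raters' score lists. The column names `Final_Score_A` and `Final_Score_B` come from the file description below; the score range and sample values here are purely illustrative.

```python
def quadratic_weighted_kappa(scores_a, scores_b, min_score, max_score):
    """Quadratic weighted kappa between two raters' integer scores."""
    n = max_score - min_score + 1
    # Observed confusion matrix between the two raters.
    observed = [[0] * n for _ in range(n)]
    for a, b in zip(scores_a, scores_b):
        observed[a - min_score][b - min_score] += 1
    hist_a = [sum(row) for row in observed]
    hist_b = [sum(observed[i][j] for i in range(n)) for j in range(n)]
    total = len(scores_a)
    numerator = denominator = 0.0
    for i in range(n):
        for j in range(n):
            weight = (i - j) ** 2 / (n - 1) ** 2  # quadratic disagreement penalty
            expected = hist_a[i] * hist_b[j] / total  # chance agreement
            numerator += weight * observed[i][j]
            denominator += weight * expected
    return 1.0 - numerator / denominator

# Illustrative scores only; in practice these would be the Final_Score_A
# and Final_Score_B columns read from the "Essays and Marks" file.
rater_a = [3, 4, 5, 5, 2, 4]
rater_b = [3, 4, 4, 5, 2, 3]
print(round(quadratic_weighted_kappa(rater_a, rater_b, 0, 5), 3))
```

A value of 1.0 indicates perfect agreement; values near 0 indicate agreement no better than chance.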


## 📄 First File: AR-AES Dataset - Essays and Marks

This file contains all essay responses with their associated human-assigned scores, along with metadata about the evaluation context and student groups.

Column explanations:

- `Question_id`: Question identifier, from 1 to 12.
- `Course_Name`: The course associated with the essay.
- `Group_id`: Identifier for student groupings within each course.
- `Gender`: Gender of the student (male and female students were taught separately).
- `Exam_Type`: Whether the exam was traditional or online.
- `Essay_id`: Unique identifier for each essay.
- `Essay`: Full text of the essay.
- `Rubric_a1`–`Rubric_a4`: Scores from the first evaluator (the course instructor), based on question-specific criteria.
- `Final_Score_A`: Sum of the rubric scores from the first evaluator.
- `Rubric_b1`–`Rubric_b4`: Scores from the second, independent evaluator.
- `Final_Score_B`: Sum of the rubric scores from the second evaluator.
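Assuming the "Essays and Marks" file has been exported to CSV with the column headers above (the hosted file may be distributed in a spreadsheet format, so the delimiter and export step are assumptions), a minimal stdlib sketch for reading it looks like this. The sample row is illustrative, not taken from the dataset:

```python
import csv
import io

# Two-line in-memory sample in the documented column layout; real usage
# would pass an open file handle for the exported CSV instead.
sample = io.StringIO(
    "Question_id,Course_Name,Group_id,Gender,Exam_Type,Essay_id,Essay,"
    "Rubric_a1,Rubric_a2,Rubric_a3,Rubric_a4,Final_Score_A,"
    "Rubric_b1,Rubric_b2,Rubric_b3,Rubric_b4,Final_Score_B\n"
    "1,Sample Course,1,Male,online,1001,Sample essay text,2,1,2,1,6,2,1,1,1,5\n"
)

rows = list(csv.DictReader(sample))
for row in rows:
    # Each final score is the sum of that evaluator's four rubric scores.
    print(row["Essay_id"], row["Final_Score_A"], row["Final_Score_B"])
```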

## 📄 Second File: AR-AES Dataset - Question List & Rubric (Arabic)
Arabic list of essay questions and their corresponding evaluation criteria.

## 📄 Third File: AR-AES Dataset - Question List & Rubric (English)
English translation of the essay questions and rubric.

## 📄 Fourth File: AR-AES Dataset - Typical Answers
A reference set of model answers in Arabic and English for each question.


## Licensing & Citation

This dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).
You are free to use, share, and adapt the dataset, including for commercial purposes, as long as you give appropriate credit to the authors.

If you use or reference this dataset in your work, please cite:

```bibtex
@article{ghazawi2024automated,
  title={Automated essay scoring in Arabic: a dataset and analysis of a BERT-based system},
  author={Ghazawi, Rayed and Simpson, Edwin},
  journal={arXiv preprint arXiv:2407.11212},
  year={2024}
}
```