Automated Scoring

Without feedback, misconceptions build. Students unknowingly make the same mistakes again and again and can quickly fall behind. Ongoing assessment allows students and teachers to adapt, modify, and innovate within the learning process.

Today’s students are digital natives who expect instant results. Immediate scores for homework, quizzes, and tests give students the guidance they need to adjust their own learning paths. However, grading classroom exams and writing assessments is a time-consuming and often onerous process for teachers. Automated scoring is one way to save teachers time and provide immediate feedback to learners.

Reports

Improving Student Writing through Automated Formative Assessment: Practices and Results

Writing practice is a key component of building mature language skills. However, because hand scoring of writing is time consuming, it is often not possible to provide the rapid, individualized feedback that helps students maximize their writing and language skills. This paper describes the development, use, and results from an implementation of a grade school-level formative writing environment which provides accurate, instant automated feedback to student writers of English essays.

Download: "Improving Student Writing through Automated Formative Assessment: Practices and Results"

Pearson’s Automated Scoring of Writing, Speaking, and Mathematics

The new assessment systems designed to measure critical thinking and other 21st-century skills will include far more performance-based items and tasks than most of today’s assessments. This 2011 document describes several examples of current item types that Pearson has designed and fielded successfully with automated scoring. The item examples are presented along with operational reliability and accuracy figures, as well as information on the nature and development of the automated scoring systems used by Pearson.

Download: "Pearson’s Automated Scoring of Writing, Speaking, and Mathematics"

Improving Performance of Automated Scoring through Detection of Outliers and Understanding Model Instabilities

Because the automated scoring model is trained on a representative sample of essays for a prompt, each feature has an expected range of acceptable values based on its distribution in the training set. However, if the value of a particular feature falls outside the training range, the assumptions of the scoring model may no longer hold, causing instabilities in the model.
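The paper's own detection method is not reproduced here, but the idea can be sketched in a few lines: record the observed range of each feature in the training set, then flag any new essay whose feature values fall outside those ranges. The feature names and values below are purely illustrative.

```python
def feature_ranges(training_features):
    """Compute the observed (min, max) range of each feature
    across the training essays."""
    n_features = len(training_features[0])
    ranges = []
    for j in range(n_features):
        column = [essay[j] for essay in training_features]
        ranges.append((min(column), max(column)))
    return ranges

def flag_outliers(essay_features, ranges):
    """Return the indices of features whose values fall outside the
    training range -- a signal the score may be unreliable."""
    return [j for j, (lo, hi) in enumerate(ranges)
            if not (lo <= essay_features[j] <= hi)]

# Hypothetical features: [essay length in words, vocabulary diversity]
train = [[250, 0.42], [310, 0.51], [180, 0.38]]
ranges = feature_ranges(train)
print(flag_outliers([275, 0.45], ranges))  # [] -> within training range
print(flag_outliers([900, 0.45], ranges))  # [0] -> length outside range
```

An essay with any flagged feature would typically be routed to a human scorer rather than scored automatically.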

Download: "Improving Performance of Automated Scoring through Detection of Outliers and Understanding Model Instabilities"

Detection of Gaming in Automated Scoring of Essays with the IEA

Most students will make a “good-faith” effort on an assessment, but it always remains possible that some students will try to game the system in an attempt to misrepresent their skills. Gaming of essays can take many forms. The challenge is to detect these strategies while minimizing false alarms. This paper describes a general framework used to detect gaming within essays by the Intelligent Essay Assessor™.
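The IEA's framework itself is not reproduced here; as an illustration only, one common gaming strategy, padding an essay by repeating the same sentences, can be caught with a simple repetition check, with the threshold kept high to limit false alarms. The threshold value below is an assumption chosen for the example.

```python
from collections import Counter

def repetition_ratio(essay_text):
    """Fraction of sentences that duplicate an earlier sentence --
    a crude signal of copy-paste padding."""
    sentences = [s.strip().lower() for s in essay_text.split(".") if s.strip()]
    if not sentences:
        return 0.0
    counts = Counter(sentences)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(sentences)

def looks_gamed(essay_text, threshold=0.3):
    """Flag an essay only when repetition clearly dominates,
    trading some misses for fewer false alarms."""
    return repetition_ratio(essay_text) > threshold

print(looks_gamed("Good essay. Varied ideas. Clear argument."))  # False
print(looks_gamed("I agree. I agree. I agree. I agree."))        # True
```

A production system would combine many such detectors (off-topic text, unusual vocabulary, model-confusing inputs) rather than rely on any single heuristic.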

Download: "Detection of Gaming in Automated Scoring of Essays with the IEA"