MI has focused on incorporating the latest advances in natural language processing, semantic and syntactic analysis, and classification methods to produce a state-of-the-art automated scoring engine. Various statistics have been proposed to measure inter-rater agreement.
The intent was to demonstrate that AES can be as reliable as human raters, or more so. This last practice, in particular, gave the machines an unfair advantage by allowing them to round up for these datasets.
It is reported as three figures, each a percent of the total number of essays scored: exact agreement (the two raters gave the essay the same score), adjacent agreement (the raters differed by at most one point; this includes exact agreement), and extreme disagreement (the raters differed by more than two points).
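The three figures above can be computed directly from two raters' score lists. The sketch below is an illustration, not part of any particular scoring engine; the function name and inputs are assumptions.

```python
def agreement_figures(scores_a, scores_b):
    """Return (exact, adjacent, extreme) agreement figures as percentages.

    scores_a, scores_b: equal-length sequences of integer essay scores,
    one entry per essay, from two independent raters (hypothetical data).
    """
    n = len(scores_a)
    # Exact agreement: both raters gave the essay the same score.
    exact = sum(a == b for a, b in zip(scores_a, scores_b))
    # Adjacent agreement: scores differ by at most one point
    # (this count includes the exact-agreement cases).
    adjacent = sum(abs(a - b) <= 1 for a, b in zip(scores_a, scores_b))
    # Extreme disagreement: scores differ by more than two points.
    extreme = sum(abs(a - b) > 2 for a, b in zip(scores_a, scores_b))
    return tuple(100 * count / n for count in (exact, adjacent, extreme))

# Example with made-up scores for four essays:
# raters match exactly on two, are adjacent on three, and
# disagree by more than two points on one.
print(agreement_figures([3, 4, 5, 2], [3, 5, 1, 2]))
```

Note that adjacent agreement is at least as large as exact agreement by construction, which is why it is reported as a separate, cumulative figure.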
If raters do not consistently agree within one point, their training may be at fault. Ellis Batten Page and his associates at MI have been an active force in AI scoring, also known as automated essay scoring.
Give students practice in giving and receiving peer feedback using the peer editing tool.