Validating holistic scoring for writing assessment

Researchers have established that raters judge texts differently based on their backgrounds and biases external to the texts (Ball, 1997; Pula & Huot, 1993).

Yet less is known about raters’ reading practices while scoring those essays. This small-scale study uses eye-tracking technology and reflective protocols to examine the reading behavior of TESOL teachers who evaluated university-level L1 and L2 writing. Results from the eye-tracking component indicate that the teachers read the rhetorical, organizational, and grammatical features of an L1 text more deliberately, while skimming through and then returning to rhetorical features of an L2 text and initially skipping over many L2 grammatical structures. Raters with ESL training were chosen for this study because of their professional familiarity with features of L2 writing (Eckstein, Casper, Chan, & Blackwell, 2018) and their sensitivity to L2 writers’ needs. Results identify specific areas of attentional focus and contribute to research on differences in the reading and rating of L1 and L2 writing.


It should come as no surprise that L1 writing can differ widely from L2 writing [1], even when produced by similarly skilled writers.
