National Profile on Alternate Assessments Based on Alternate Achievement Standards

NCSER 2009-3014
August 2009

E. Scoring and Reporting

NCLB requires states to produce "interpretive, descriptive, and diagnostic reports" on individual students' achievement measured against academic achievement standards to help parents, teachers, and principals address the academic needs of students (20 U.S.C. 6311 § 1111(b)(3)(C)(xii); 34 C.F.R. § 200.8). Scoring criteria for students with the most significant cognitive disabilities may include elements typically found in general assessments, such as accuracy, and elements selected specifically for this population, such as independence, progress, and generalization across multiple settings.

How many scorers scored the alternate assessment? (E1)

This item asked about the number of scorers used to determine an individual's score on the alternate assessment. Response categories were mutually exclusive and are presented graphically in figure E1 and for individual states in table E1 in appendix B, NSAA Data Tables. (A brief sketch of the percentage arithmetic behind these figures follows the list below.)

  • One scorer – Fifty-one percent of states (26 states) reported that one scorer scored the alternate assessment, reflecting a majority of the states and the highest frequency reported.
  • Two scorers – Thirty-nine percent of states (20 states) reported that two scorers scored the alternate assessment.
  • Three or more scorers – Ten percent of states (5 states) reported that three or more scorers scored the alternate assessment.
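
The percentages throughout this section appear to be computed over 51 reporting jurisdictions; the counts reported for item E1 (26, 20, and 5 states) are consistent with that denominator. The minimal Python sketch below reproduces the rounding under that assumption; the denominator of 51 (presumably the 50 states plus the District of Columbia) is an inference, not a figure stated in this section.

    # A minimal sketch of the percentage arithmetic used throughout this
    # section. TOTAL_JURISDICTIONS = 51 is an assumption inferred from
    # the reported counts and percentages.
    TOTAL_JURISDICTIONS = 51

    e1_counts = {
        "one scorer": 26,            # reported as 51 percent
        "two scorers": 20,           # reported as 39 percent
        "three or more scorers": 5,  # reported as 10 percent
    }

    for category, count in e1_counts.items():
        pct = round(100 * count / TOTAL_JURISDICTIONS)
        print(f"{category}: {count} states -> {pct}%")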


How were scoring conflicts resolved? (E2)

This item asked how the state resolved scoring conflicts if they arose. The following mutually exclusive response categories emerged during coding and are presented graphically in figure E2 and for individual states in table E2 in appendix B, NSAA Data Tables. (A sketch illustrating these resolution strategies follows the list below.)

  • A third person adjudicated – This response category was coded when a third person helped scorers come to agreement or ruled in favor of one or the other in disputes between two scorers. Twenty-two percent of states (11 states) reported that scoring conflicts were resolved by a third person who adjudicated disputes or negotiated an agreement.
  • A third rater scored the alternate assessment – This response category was coded when a third score replaced the original scores or was combined with the first two scores to produce a new score. Twenty-seven percent of states (14 states) reported that scoring conflicts were resolved by a third rater who scored the alternate assessment.
  • One person scored, or scores were combined – This response category was coded when the state used only one scorer or different scores were simply averaged or combined. Forty-nine percent of states (25 states) reported that there was only one scorer or the scores were averaged or combined, reflecting the highest frequency reported.
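
To make the three resolution strategies concrete, the following hypothetical Python sketch implements each one for a single disputed item; the function name, parameters, and numeric score scale are illustrative assumptions, not any state's documented procedure.

    # A hypothetical sketch of the three conflict-resolution strategies
    # described above; names and score scale are illustrative only.
    from typing import Optional

    def resolve_scores(score_a: int, score_b: int,
                       third_score: Optional[int] = None,
                       strategy: str = "combine") -> float:
        """Return a single resolved score for one disputed item."""
        if score_a == score_b:
            return float(score_a)  # no conflict to resolve
        if strategy == "adjudicate":
            # A third person's ruling stands in place of the disputed scores.
            if third_score is None:
                raise ValueError("adjudication requires a third person's ruling")
            return float(third_score)
        if strategy == "third_rater":
            # A third rating is combined with the first two for a new score.
            if third_score is None:
                raise ValueError("this strategy requires a third rating")
            return (score_a + score_b + third_score) / 3
        # "combine": the two original scores are simply averaged.
        return (score_a + score_b) / 2

    print(resolve_scores(2, 4, third_score=3, strategy="third_rater"))  # 3.0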


What elements of student performance were used in scoring? (E3)

This multiple-choice item asked about the state's scoring criteria at the student level. Multiple responses were possible and are presented graphically in figure E3 and for individual states in table E3 in appendix B, NSAA Data Tables. (A sketch combining these elements into a hypothetical rubric follows the list below.)

  • Accuracy of student response – This response category was coded when the correctness of a response or the production of student work that reflected the intended response of the assessment item or activity was a component of scoring. Eighty-eight percent of states (45 states) reported that the accuracy of student response was a component of the scoring criteria for the alternate assessment, reflecting a majority of the states and the highest frequency reported.
  • Ability to generalize across settings – This response category was coded when the student's ability to perform a task in multiple settings or under differing conditions was a component of scoring. Forty-five percent of states (23 states) reported that they included the student's ability to generalize across settings as a component of scoring for the alternate assessment.
  • Amount of independence – This response category was coded when the degree of independence of the student's response (or lack of prompting or scaffolding of a response) was a component of scoring. Seventy-six percent of states (39 states) reported that the amount of student independence was a component of scoring, reflecting a majority of the states.
  • Amount of progress – This response category was coded when the degree of change over time in the performance of a task was a component of scoring. Twenty-five percent of states (13 states) reported that the amount of progress a student made was a component of scoring.
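
A hypothetical composite rubric can make these four student-level elements concrete. In the sketch below, the four dimensions come from the survey categories above, but the 0-4 point scale, the equal weighting, and all names are illustrative assumptions; states combined these elements in different ways.

    # A hypothetical student-level rubric; the 0-4 scale and equal
    # weighting are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class StudentRating:
        accuracy: int        # correctness of the student's response, 0-4
        generalization: int  # performance across multiple settings, 0-4
        independence: int    # degree of unprompted, unscaffolded response, 0-4
        progress: int        # change in task performance over time, 0-4

        def composite(self) -> float:
            """Equal-weight average across the four dimensions."""
            parts = (self.accuracy, self.generalization,
                     self.independence, self.progress)
            return sum(parts) / len(parts)

    rating = StudentRating(accuracy=3, generalization=2,
                           independence=4, progress=3)
    print(rating.composite())  # 3.0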


What environmental elements were used in scoring? (E4)

This multiple-choice item asked about the state's scoring criteria at the system level, that is, the environmental elements that were included in the determination of student scores on the alternate assessment. Multiple responses were possible and are presented graphically in figure E4 and for individual states in table E4 in appendix B, NSAA Data Tables.

  • Instruction in multiple settings – This response category was coded when the state reported that the extent of instruction conducted in multiple settings was a component of scoring. Twenty-seven percent of states (14 states) reported that instruction in multiple settings was a component of the scoring criteria for the alternate assessment.
  • Opportunities to plan, monitor, and evaluate work – This response category was coded when the state reported that students' engagement in planning their work, keeping records on their work or progress, and evaluating their own performance was a component of scoring. Fourteen percent of states (7 states) included student opportunities to plan, monitor, and evaluate their work as a component of scoring.
  • Work with nondisabled peers – This response category was coded when the state reported that the degree to which the student was placed in settings with nondisabled peers was a component of scoring. Eighteen percent of states (9 states) reported that the student's work with nondisabled peers was a component of scoring.
  • Appropriate human and technological supports – This response category was coded when the state reported that the types of aides or assistive technology used during the assessment were a component of scoring. Thirty-three percent of states (17 states) reported that they included an evaluation of appropriate human and technological supports as a component of scoring.
  • None of the above – Fifty-seven percent of states (29 states) reported that none of the above system-level criteria were used in scoring, reflecting the highest frequency reported.


What types of training were provided for assessment administrators? (E5)

This item asked about the types of training provided to individuals on administering the alternate assessment. Multiple responses were possible and are presented graphically in figure E5 and for individual states in table E5 in appendix B, NSAA Data Tables.

  • Non-face-to-face training – This response category was coded when test administrators were given an administration manual that they used for independent training and/or were given administration training support such as videos, PowerPoint presentations, or written guidance online. Ninety-six percent of states (49 states) reported that an administration manual, guidance, or web-based information was provided for individuals who administered assessments, reflecting a majority of the states and the highest frequency reported.
  • Face-to-face training/events/tutorials – This response category was coded when in-person training was offered by the district or the state on the administration of the alternate assessment. Ninety-four percent of states (48 states) reported using face-to-face training, events, or tutorials for assessment administrators, reflecting a majority of the states.
  • Training was mandatory and/or certification was required – This response category was coded when administrators of assessments were required to pass a test and/or participate in a tutorial in order to be certified to administer the alternate assessment. Fifty-three percent of states (27 states) reported that assessment administrator training was mandatory, reflecting a majority of the states.


What types of training were provided for assessment scorers? (E6)

This item asked about the types of training provided to individuals on scoring the alternate assessment. Multiple responses were possible and are presented graphically in figure E6 and for individual states in table E6 in appendix B, NSAA Data Tables.

  • Non-face-to-face training – This response category was coded when scorers were given a scoring manual that they used in independent training and/or were given scoring training support such as videos, PowerPoint presentations, or written guidance online. Seventy-four percent of states (38 states) reported that scoring manuals, written guidance, or web-based information was provided to scorers, reflecting a majority of the states.
  • Face-to-face training – This response category was coded when in-person training was offered by the district or the state on the scoring of the alternate assessment. Eighty-eight percent of states (45 states) reported that face-to-face training for scorers was provided by the district or the state, reflecting a majority of the states and the highest frequency reported.
  • Training was mandatory and/or certification was required – This response category was coded when scoring training was mandatory and scorers were required to pass a scoring test or verify that they had received training or participated in a tutorial in order to be certified to score the alternate assessment. Seventy-three percent of states (37 states) reported that training on scoring was mandatory, reflecting a majority of the states.


Who received individual student reports? (E7)

This multiple-choice item asked whether individual student reports or other reports were provided to parents and/or schools and teachers. The information is presented graphically in figure E7 and for individual states in table E7 in appendix B, NSAA Data Tables.

  • Parents – This response category was coded when individual student reports were provided to parents. Ninety-eight percent of states (50 states) reported that they provided parents of students who took the alternate assessment with individual student reports, reflecting a majority of the states and the highest frequency reported.
  • Schools and teachers – This response category was coded when the state provided schools and teachers with any reports other than what was publicly reported. These additional reports may include greater detail in student-level performance data than is included in public reporting. They may also provide data at the benchmark/indicator levels or group students in units helpful for school-level data summary. Ninety percent of states (46 states) reported that they provided schools and teachers of students who took the alternate assessment with individual student reports, reflecting a majority of the states.


How were individual student results on the alternate assessment expressed? (E8)

This item summarized, at the aggregate level, the results included in individual students' reports. This was a multiple-choice item, and multiple responses were possible. The information is presented graphically in figure E8 below and for individual states in table E8 in appendix B, NSAA Data Tables. (A sketch of a cut-score mapping from scores to achievement standards follows the list below.)

  • State's achievement standards – Eighty-eight percent of states (45 states) expressed student results in terms of the state's achievement standards, reflecting a majority of the states and, together with scores, the highest frequency reported.
  • Scores – Eighty-eight percent of states (45 states) expressed results using scores (including raw scores and scale scores), reflecting a majority of the states and, together with the state's achievement standards, the highest frequency reported.
  • Percentiles – Twenty-five percent of states (13 states) expressed results using percentiles.
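
One common way to express a score in terms of a state's achievement standards is a cut-score mapping from the scale score to an achievement level. The sketch below is hypothetical; the level names and cut points are placeholders, since each state defines its own.

    # A minimal, hypothetical cut-score mapping from a scale score to an
    # achievement level; level names and cut points are placeholders.
    CUT_SCORES = [  # (minimum scale score, achievement level)
        (240, "Advanced"),
        (220, "Proficient"),
        (200, "Basic"),
    ]

    def achievement_level(scale_score: int) -> str:
        for cut, level in CUT_SCORES:
            if scale_score >= cut:
                return level
        return "Below Basic"

    print(achievement_level(226))  # Proficient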


For whom was interpretive guidance on the alternate assessment developed? (E9)

This item asked whether interpretive guidance was created for schools, teachers, parents, and/or students to support a clear understanding and analysis of student performance. This was a multiple-choice item, and multiple responses were possible. The information is presented graphically in figure E9 below and for individual states in table E9 in appendix B, NSAA Data Tables.

  • School-level administrators – Seventy-five percent of states (38 states) reported that they had developed interpretive guidance for school-level staff, reflecting a majority of the states.
  • Teachers – Seventy-eight percent of states (40 states) reported that they had developed interpretive guidance for the teachers of the students who took the alternate assessment, reflecting a majority of the states.
  • Parents – Ninety percent of states (46 states) reported that they had developed interpretive guidance for the parents of the students who took the alternate assessment, reflecting a majority of the states and the highest frequency reported.
  • Students – Eight percent of states (4 states) reported that they had developed interpretive guidance for the students who took the alternate assessment.


Information included in reports given to parents (E10)

This item asked about the types of information provided to parents about the alternate assessment. Information ranged from student performance level to explanations of descriptors and test items. This was an open-ended item, and multiple responses were possible. The information is presented graphically in figure E10 and for individual states in table E10 in appendix B, NSAA Data Tables.

  • Performance/achievement level – Ninety-two percent of states (47 states) provided evidence that they included performance/achievement level status in the individual student reports for parents of students who took the alternate assessment, reflecting a majority of the states and, together with scores, the highest frequency reported.
  • Scores – Ninety-two percent of states (47 states) provided evidence that they included scores (including raw scores, scale scores, percentiles) in the individual student reports for parents of students who took the alternate assessment, reflecting a majority of the states and, together with performance/achievement level, the highest frequency reported.
  • Standard/strand breakouts – This response category included information that was more specific than content area performance, such as the subdomain level of each content area. Fifty-three percent of states (27 states) provided evidence that they included standard/strand breakouts in the individual student reports for parents of students who took the alternate assessment, reflecting a majority of the states.
  • Indicator/benchmark breakouts – This response category included information that was more specific than standard/strand performance, such as the level of performance indicators or individual items. Twenty percent of states (10 states) provided evidence that they included indicator/benchmark breakouts in the individual student reports for parents of students who took the alternate assessment.
  • Performance/achievement level descriptors – This response category included descriptors that indicated what it means to perform at a particular performance/achievement level. Sixty-three percent of states (32 states) provided evidence that they included performance/achievement level descriptors in the individual student reports for parents of students who took the alternate assessment, reflecting a majority of the states.
  • Sample test items – Six percent of states (3 states) provided evidence that they included sample test items in the individual student reports for parents of students who took the alternate assessment.
