National Profile on Alternate Assessments Based on Alternate Achievement Standards

NCSER 2009-3014
August 2009

D. Eligibility and Administration

The regulations for alternate achievement standards require states to establish guidelines for individualized education program (IEP) teams to use in identifying children with the "most significant cognitive disabilities" who will be assessed based on alternate achievement standards. The regulations do not prescribe a federal definition of "the most significant cognitive disabilities," nor do they set federal eligibility guidelines. The regulations also require that the state ensure that parents are informed that their child's achievement will be measured based on alternate achievement standards (34 C.F.R. § 200.1(f)).

States have considerable flexibility in designing their alternate assessments, provided the statutory and regulatory requirements are met. The general alternate assessment approaches the states used in 2006–07 were reported in the Overview section, but states sometimes used these approaches in combination, and each approach could be implemented in varying ways.

What were the guidelines for IEP teams to apply in determining when a child's significant cognitive disability justified alternate assessment? (D1)

This item asked about the eligibility criteria the state established to determine when the alternate assessment was appropriate for a student. This was an open-ended item, and the following response categories emerged during coding. Multiple responses were possible and are presented graphically in figure D1 and for individual states in table D1 in appendix B, NSAA Data Tables; a worked check of the percentage calculation follows the list.

  • Had a severe cognitive disability (e.g., significant impairment of cognitive abilities, operates at a lower cognitive level) – Ninety-two percent of states (47 states) included this criterion to determine when an alternate assessment was appropriate, reflecting a majority of the states and, along with "required modified instruction," the highest frequency reported.
  • Required modified instruction (e.g., student required differentiated, intensive, and individualized instruction) – Ninety-two percent of states (47 states) included this criterion to determine when an alternate assessment was appropriate, reflecting a majority of the states and, along with "had a severe cognitive disability," the highest frequency reported.
  • Required extensive support for skill generalization (e.g., needed support to transfer skills to other settings, support to generalize learning to home/work/school/multiple settings) – Eighty-six percent of states (44 states) included this criterion to determine when an alternate assessment was appropriate, reflecting a majority of the states.
  • Required modified curriculum (e.g., student was unable to access general curriculum, general curriculum must be modified or presented at a lower cognitive level) – Ninety percent of states (46 states) included this criterion to determine when an alternate assessment was appropriate, reflecting a majority of the states.
  • Not based on disability category (decisions should not be based solely on disability category or other similar qualities) – Sixty-five percent of states (33 states) included this criterion to determine when an alternate assessment was appropriate, reflecting a majority of the states.
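
The percentages reported throughout this section are consistent with a denominator of 51 profiled entities, which would correspond to the 50 states plus the District of Columbia; that denominator is an inference from the reported figures rather than a statement made in this section. A minimal worked check, using the first criterion above:

\[
\frac{47}{51} \approx 0.92 \quad\Rightarrow\quad \text{92 percent of states (47 states)}
\]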

What procedures were in place for informing parents when their child would be assessed using an alternate assessment? (D2)

This item asked about the steps taken to inform parents (or guardians) that the student would be assessed using an alternate assessment, including the consequences of participation in this type of assessment—for example, the implications for graduation and the type of diploma a student would earn. This was an open-ended item, and the following response categories emerged during coding. Multiple responses were possible and are presented graphically in figure D2 below and for individual states in table D2 in appendix B, NSAA Data Tables.

  • Parent signature was required – This response category was coded when the state reported that the signature of a parent or guardian was required specifically for permission for an alternate assessment to be conducted or that a signature on an IEP (that contained reference to an alternate assessment to be conducted) was required. Thirty-seven percent of states (19 states) reported that parent signatures were required for students to participate in the alternate assessment.
  • Parents were provided written materials about the alternate assessment – This response category was coded when the state reported that parents received written materials about the alternate assessment. Fifty-one percent of states (26 states) reported that parents were provided written materials about the alternate assessment, reflecting a majority of the states.
  • Nonspecific information provided – Seventy-five percent of states (38 states) reported that parents were informed about the alternate assessment but did not specify the procedure, reflecting a majority of the states and the highest frequency reported.

How was assessment content selected? (D3)

This item asked about the amount of input the state had in determining assessment content. The following mutually exclusive response categories emerged during coding and are presented graphically in figure D3 below and for individual states in table D3 in appendix B, NSAA Data Tables.

  • All components – State determined academic content areas and strands, standards, benchmarks, and performance indicators/tasks – This response category was coded when the state determined the content areas, standards, benchmarks, or indicators assessed and no local input was allowed. Forty-seven percent of states (24 states) reported that the state determined the academic content areas and strands, standards, benchmarks, and performance indicators or tasks for the alternate assessment, reflecting the highest frequency reported.
  • Most components – State determined academic content areas, strands, and standards, and the IEP team determined performance indicators/tasks – This response category was coded when the IEP team or teacher decided which tasks or academic indicators made up a student's assessment or when teachers could choose from a task bank or develop their own. Thirty-nine percent of states (20 states) reported that the state determined academic content areas, strands, and standards and the IEP team or teacher selected the indicators or tasks on which a student was assessed.
  • Some components – State determined only the academic content areas – This response category was coded when the IEP team or teacher decided which strands, standards, benchmarks, and tasks or indicators were assessed within academic content areas determined by the state. Fourteen percent of states (7 states) reported that the state determined only the academic content areas and that IEP teams or teachers decided which strands, standards, benchmarks, and tasks or indicators were assessed.

How was the administration process monitored and verified? (D4)

This item asked about how the administration process for the alternate assessment was monitored and verified and who was primarily responsible for the verification process. The following response choices emerged during coding, and multiple responses were possible. Response choices are presented graphically in figure D4 and for individual states in table D4 in appendix B, NSAA Data Tables.

  • An observer/monitor was present – This response category was coded when a monitor was present for all administrations. Twelve percent of states (6 states) verified the administration process with another individual who observed or monitored the alternate assessment.
  • A local or school-level reviewer confirmed proper administration of the assessment – This response category was coded when the assessment was reviewed by school-level staff who did not administer the actual assessment or assemble the portfolio. No monitor was present for the administration, but someone local confirmed that the assessment was administered properly. Fifty-nine percent of states (30 states) reported that local or school-level staff reviewed and confirmed proper administration of the assessment, reflecting a majority of the states and the highest frequency reported.
  • No independent verification process – Thirty-nine percent of states (20 states) reported that there was no independent verification process for the administration of the alternate assessment.

What procedures were followed in gathering performance evidence? (D5)

This item asked about the flexibility that existed in gathering performance evidence: whether the state specified the required types of performance evidence (such as standardized tasks, test items, or rating scales), provided guidance and instructions, or left these decisions to the teacher/IEP team. The following response choices emerged during coding, and multiple responses were possible. Response choices are presented graphically in figure D5 and for individual states in table D5 in appendix B, NSAA Data Tables.

  • State required standardized tasks/test items/rating scales – This response category was coded when the state required evidence in the form of student responses on standardized tasks or test items or teachers were required to provide ratings of student performance. Work samples were not collected or submitted as evidence for scoring, and the scoring was based on performance tasks or teacher ratings of student skills. Forty-three percent of states (22 states) reported that the state required evidence in the form of student responses on standardized tasks or assessment items. Additionally, teachers may have been required to provide ratings.
  • State provided instructions – This response category was coded when the state provided instructions on the types and amounts of evidence or data required from each student (certain types or formats of performance, such as video, documented student work, data sheets, or captioned photographs). Sixty-one percent of states (31 states) reported that the state provided instructions on the types and amounts of evidence or data required from students, reflecting a majority of the states and the highest frequency reported.
  • Teacher/IEP team decided – This response category was coded when the teacher or IEP team determined the nature of evidence required for scoring, without state guidance, including instances where a local educator decided what could be scored for each indicator. This response category also was coded when the state used checklists with IEP-aligned tasks. Twenty-two percent of states (11 states) reported that the teacher or IEP team decided the nature of evidence required for scoring, without state guidance.

Describe the role of student work (videos, photographs, worksheets/products) in the alternate assessment. (D6)

This item asked about the extent to which the state alternate assessment involved collecting samples of student work (or other evidence of class work performed by students). Specifically, it examined what evidence of student work was considered for scoring the alternate assessment. This was an open-ended item, and the following mutually exclusive response categories emerged during coding and are presented graphically in figure D6 and for individual states in table D6 in appendix B, NSAA Data Tables.

  • Student work samples only – This category was coded when the state reported that portfolios and other collections of student work or bodies of evidence were included in the alternate assessment and submitted for scoring. The alternate assessment consisted entirely of collections of work samples or evidence of work produced by students (e.g., videos, photographs, worksheets, work products). Forty-five percent of states (23 states) reported that the alternate assessment required the collection and submission of full samples of student work, including pieces of student work, videos, captioned photographs, data sheets, and student self-evaluation sheets, reflecting the highest frequency reported.
  • Combination of work samples and other evidence – This category was coded when the state reported that the alternate assessment included a combination of student work samples and other assessment evidence, such as scores on on-demand tasks, checklists, or rating scales. Twenty-four percent of states (12 states) reported that the alternate assessment required the submission of a combination of student work, performance tasks, and/or a checklist or rating scale.
  • No student work samples – This category was coded when the state reported that the alternate assessment used checklists and rating scales or other scoring mechanisms, such as scores on performance tasks, but did not require that evidence of student work be submitted to the state along with scores. Thirty-one percent of states (16 states) reported that the alternate assessment did not require the collection or submission of student work, but instead required students to respond to performance tasks or multiple choice items, or teachers to submit checklists or rating scales.

Did the assessment of student work (tasks or products) take place as part of the day-to-day instructional activities or were students asked to perform tasks "on demand"? (D7)

This item asked whether the alternate assessment was embedded in daily classroom instruction or was an "on-demand" assessment. An on-demand assessment was one that was administered at an explicitly defined place and time and was separate from instruction, meaning that student performance and products were not derived from the teacher's instructional plan and the classroom routine. On-demand assessments were typically standardized, given in the same format to all test takers, and scheduled in advance such that they supplanted instructional time. The mutually exclusive response categories that follow emerged during coding and are presented graphically in figure D7 and for individual states in table D7 in appendix B, NSAA Data Tables.

  • Part of day-to-day student instruction – This category was coded when the state reported that the alternate assessment involved a variety of activities that took place as part of daily instructional activities, including checklists that assessed student work in progress, assessments that gathered student work for portfolios, performance tasks embedded in instruction, and other assessments designed specifically to be part of the student's daily instructional or learning routine. Fifty-three percent of states (27 states) reported that the alternate assessment took place as part of day-to-day instructional activities, reflecting a majority of the states and the highest frequency reported.
  • Separately from student's daily work – This category was coded when the state reported that the assessment supplanted instructional time and included multiple-choice assessments and standardized performance tasks/events. Twenty-four percent of states (12 states) reported that the alternate assessment did not take place as part of day-to-day instructional activities.
  • A combination of day-to-day and on-demand approaches – This category was coded when the state reported that the alternate assessment combined both approaches: activities during instructional time and on-demand activities. Twenty-four percent of states (12 states) reported that the assessment combined student work collected during day-to-day instructional activities and on-demand student performance of a task or tasks.
  • Based on teacher recollection of student performance – This category was coded when the state reported that the alternate assessment was a checklist or rubric that teachers completed based on their expectations or recollections of student performance, without requiring supporting evidence of student work. Two percent of states (1 state) reported that the alternate assessment consisted of a checklist, which the classroom teacher completed based on his or her recollections of student performance over the school year.

Describe the role of teacher judgment in the alternate assessment. (D8)

This item asked about how the student's classroom teacher and/or IEP team members were involved in determining (1) what content was assessed, (2) what student work products would be scored, (3) when and how the student would be assessed, and (4) who evaluated the student's performance or scored the state's alternate assessment. This was an open-ended item, and the following response categories emerged during coding. Multiple responses were possible and are presented graphically in figure D8 and for individual states in table D8 in appendix B, NSAA Data Tables.

  • Teacher decided content to be assessed – This response category was coded when the state reported that the teacher determined some or all of the assessment content, including selecting the standards to be assessed or the indicators to be used, determining the level of complexity of assessed tasks, and/or defining specific tasks. Fifty-five percent of states (28 states) reported that the classroom teacher determined some or all of the content assessed on the alternate assessment, reflecting a majority of the states.
  • Teacher selected materials – This response category was coded when the state reported that the teacher was responsible for portfolio construction or assembly; selection of specific evidence, captioned pictures, anecdotal records, or videotape; or writing a student learning profile for the scorers. Sixty-nine percent of states (35 states) reported that the classroom teacher selected materials to be submitted for scoring and/or constructed individual student portfolios for the alternate assessment, reflecting a majority of the states.
  • Teacher made decisions about administering the assessment – This response category was coded when the state reported that the teacher made decisions about some (but not necessarily all) of the following factors in the administration of the alternate assessment: timing and duration of the assessment, level of support or scaffolding to be provided, and/or when to administer the test within a testing window. Seventy-five percent of states (38 states) reported that the classroom teacher made decisions concerning the administration of the alternate assessment, reflecting a majority of the states and the highest frequency reported.
  • Teacher interpreted/recorded student responses or scores – This response category was coded when the state reported that the teacher used a scoring rubric or checklist to determine the student scores and recorded student responses or scores on a report, including when the teacher was the final determiner of the student's assessment score. Sixty-seven percent of states (34 states) reported that the classroom teacher rated, interpreted, and/or recorded student responses during the alternate assessment, reflecting a majority of the states.
