
IES Grant

Title: National Research and Development Center on Assessment and Accountability for Special Education
Center: NCSER Year: 2011
Principal Investigator: Tindal, Gerald Awardee: University of Oregon
Program: Special Education Research and Development Centers      [Program Details]
Award Period: 07/01/2011 - 06/30/2016 Award Amount: $11,677,134.00
Goal: R&D Center Award Number: R324C110004

Study Purpose: Increased demand for educational accountability focused on improving student academic performance has raised many questions about the most accurate methods for capturing individual student progress, particularly for students with disabilities. In addition, although substantial research exists on the characteristics of students with disabilities and on assessing their abilities and skills for purposes of classification and intervention, far less is known about the natural developmental progression of achievement for students with disabilities. The primary objective of this project is to identify academic growth trajectories of students with disabilities and to develop and test practical, relevant methods of accurately measuring their academic growth for use in accountability systems. The ultimate objective of the Center is to develop assessment methods that schools can use to (1) accurately assess the academic progress of students with disabilities and (2) improve the quality of education provided to students with disabilities, leading to improved student outcomes.

Focused Program of Research: The Center's primary research will involve three simultaneous strands of research across Years 1-5: 1) Cornerstone Study; 2) Multi-State Extension Study; and 3) Interim Assessments Study. The team will also complete supplemental studies, including 4) Multiple Measures Validation Study, and 5) Alternate Assessments Study.

Cornerstone Study (Years 1-2 & 5)
The research team will use North Carolina statewide achievement data from approximately 90,000 elementary students without disabilities and 13,000 students with disabilities across 1,100 schools, and approximately the same numbers of middle school students with and without disabilities across 550 schools. The team will compare strategies for characterizing students' achievement growth: multilevel modeling of student growth and proficiency (2- and 3-level models); calculating grade-level growth; estimating achievement gaps between students with disabilities and their peers using an effect-size approach; and measuring growth of students with disabilities using a student growth percentiles approach. The team will also test the generalizability of the findings across time, cohorts, and editions of the North Carolina test by replicating key analyses from the first year, and will analyze the degree to which the reliability and validity of inferences about school effectiveness are affected by missing data or attrition.
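As an illustration of the effect-size approach named above, an achievement gap can be expressed as a standardized mean difference between the scores of students with disabilities and those of their peers. This is a minimal sketch using the pooled-standard-deviation form (Cohen's d); the Center's actual estimator is not specified in this summary.

```python
import numpy as np

def achievement_gap_effect_size(swd_scores, peer_scores):
    """Standardized mean difference between students with disabilities
    (SWD) and their peers, using the pooled standard deviation.
    Positive values indicate that peers outscore SWD on average."""
    swd = np.asarray(swd_scores, dtype=float)
    peer = np.asarray(peer_scores, dtype=float)
    n1, n2 = len(swd), len(peer)
    pooled_sd = np.sqrt(((n1 - 1) * swd.var(ddof=1) +
                         (n2 - 1) * peer.var(ddof=1)) / (n1 + n2 - 2))
    return (peer.mean() - swd.mean()) / pooled_sd

# Example: peers average two score points higher, pooled SD of 1
# -> gap of 2 standard deviation units.
gap = achievement_gap_effect_size([1, 2, 3], [3, 4, 5])
```

Expressing the gap in standard deviation units is what makes it comparable across grades, cohorts, and test editions, which is central to the replication analyses described above.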

Multi-State Extension Study (Years 2 & 5)
The team will repeat the overall analysis strategy of the Cornerstone Study to examine student growth and the validity and applicability of different options for measuring student growth and school effects, using datasets from Arizona, Oregon, and Pennsylvania, states that vary in demographic characteristics, educational systems, content standards, assessments, and procedures for identifying and serving students with disabilities. Data collection will continue during 2011-2015, and the data will be analyzed during Years 4 and 5 using each state's growth model and promising models identified in earlier analyses.

Interim Assessments Study (Years 1-2)
The Status, Residual Gain Score, and multilevel growth models will be used to examine achievement growth within and across schools using interim assessments administered by the four partner states. The growth results will cover grades 1-8, be based on three or four measures of achievement per year, and be compared to growth results based on the statewide summative assessments. The team will also estimate out-of-school changes in student achievement during the summer and examine whether these changes differ for students with disabilities compared to their peers without disabilities. Finally, the team will examine how well growth on the state summative assessments can be predicted from interim assessments and other school context and characteristics variables.
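A residual gain score of the kind named above is conventionally computed by regressing a later score on an earlier one and taking each student's residual as growth beyond what the earlier score predicts. The sketch below follows that standard definition; the Center's exact model specification is not given in this summary.

```python
import numpy as np

def residual_gain_scores(pretest, posttest):
    """Regress posttest scores on pretest scores (simple OLS) and
    return each student's residual: achievement gain beyond what the
    pretest linearly predicts. Residuals average to zero, so a
    positive value means above-expected growth for that student."""
    x = np.asarray(pretest, dtype=float)
    y = np.asarray(posttest, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line
    return y - (intercept + slope * x)

# Example: if posttest is exactly a linear function of pretest,
# every residual gain score is zero.
gains = residual_gain_scores([1, 2, 3], [2, 4, 6])
```

Unlike a raw gain score (posttest minus pretest), the residual form removes the average dependence of later scores on earlier ones, which matters when comparing students who start at very different levels.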

Multiple Measures Validation Study (Years 2-4)
The purpose of this study is to explore the use of other measures that supplement or complement statewide assessments. The team will collect opportunity-to-learn data and interim assessments (e.g., curriculum-based measures) three to four times per year in approximately 200 classrooms in Oregon and Pennsylvania.

Alternate Assessments Study (Years 3-5)
This strand will document status and growth for students with the most significant cognitive disabilities who participate in alternate assessments based on alternate achievement standards. Datasets from Arizona, North Carolina, Oregon, and Pennsylvania will be used to investigate the different assessment approaches used by the four states. Three models will be considered: NCLB Status, residual gain scores, and transition matrices. In addition, the team will examine multilevel growth models.
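A transition matrix of the kind listed above cross-tabulates students' achievement levels in one year against their levels in the next, so each row shows where students starting at a given level end up. This is a minimal sketch; the states' actual achievement-level definitions are not given in this summary.

```python
import numpy as np

def transition_matrix(year1_levels, year2_levels, n_levels):
    """Build a row-normalized transition matrix: entry [i, j] is the
    proportion of students at achievement level i in year 1 who are
    at level j in year 2. Levels are coded 0..n_levels-1."""
    counts = np.zeros((n_levels, n_levels))
    for a, b in zip(year1_levels, year2_levels):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Avoid division by zero for levels with no year-1 students.
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Example with two levels: of the students starting at level 0,
# half stay and half move up; the student at level 1 stays.
P = transition_matrix([0, 0, 1], [0, 1, 1], n_levels=2)
```

Because it tracks movement between categories rather than score differences, this model suits alternate assessments, whose scales may not support subtracting scores across years.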

Key Personnel: University of Oregon: Gerald Tindal, Joseph Stevens, Gina Biancarosa, Paul Yovanoff, Keith Zvoch; Arizona State University: Stephen Elliott, Joanna Gorin, Roy Levy, Alexander Kurz; North Carolina State University: Anne Schulte

IES Program Contact: Dr. Jacquelyn Buckley
Telephone: (202) 219-2130


Book chapter

Elliott, S.N. (2017). Achievement Tests. Reference Module in Neuroscience and Biobehavioral Psychology. Elsevier ScienceDirect. doi:10.1016/B978-0-12-809324-5.05457-2

Elliott, S.N., and Kurz, A. (2013). MyiLOGS: Technology for Optimizing Teachers' Instructional Progress Management. In R.K. Atkinson (Ed.), Learning Environments: Technologies, Challenges and Impact Assessment (pp. 1–20). Hauppauge, NY: Nova Science Publishers.

Elliott, S.N., Kettler, R.J., Zigmond, N., and Kurz, A. (2013). Modified Alternate Assessment Participation Screening Consortium: Lessons Learned. In Thurlow, M., Lazarus, S., and Bechard, S. (Eds.), Lessons Learned in Federally Funded Projects That Can Improve the Instruction and Assessment of Low Performing Students With Disabilities (pp. 87–124). Minneapolis: NCEO.

Elliott, S.N., Rodriquez, M.C., Roach, A.T., Beddow, P.A., Kettler, R.J., and Kurz, A. (2013). Consortium for Modified Alternate Assessment Development and Implementation: Lessons Learned. In Thurlow, M., Lazarus, S., and Bechard, S. (Eds.), Lessons Learned in Federally Funded Projects That Can Improve the Instruction and Assessment of Low Performing Students With Disabilities (pp. 167–204). Minneapolis: NCEO.

McKevitt, B.C., Elliott, S.N., and Kettler, R.J. (2013). Testing Accommodations for Children With Disabilities. In C.R. Reynolds (Ed.), The Oxford Handbook of Psychological Assessment of Children and Adolescents (pp. 722–734). New York: Oxford University Press, Inc.

Opfer, J.E., and Siegler, R.S. (2012). Development of Quantitative Thinking. In K.J. Holyoak, and R.G. Morrison (Eds.), Oxford Handbook of Thinking and Reasoning (pp. 585–605). New York: Oxford University Press.

Tindal, G., Alonzo, J., Saez, L., and Nese, J.F.T. (in press). Assessment of Students With Learning Disabilities: Using Students' Performance and Progress to Inform Instruction. In K. Ercikan, and J.W. Pellegrino (Eds.), Validation of Score Meaning in the Next Generation of Assessments.

Zvoch, K., and Stevens, J.J. (2015). The Graphic Representation of Findings From the National Center on Assessment and Accountability for Special Education. In M. McCrudden, G. Schraw, and C. Buckendahl (Eds.), Use of Visual Displays in Research and Testing: Coding, Interpreting, and Reporting Data (pp. 237–264). Charlotte, NC: Information Age Publishing.

Book chapter, edition specified

Elliott, S.N., and Kettler, R.J. (2015). Item and Test Design Considerations for Students With Special Needs. In S. Lane, T. Haladyna, and M. Raymond (Eds.), Handbook of Test Development (2nd ed., pp. 374–391). New York: Routledge, Taylor and Francis.

Tindal, G., and Alonzo, J. (2016). Technology-Based Assessment and Problem Analysis. In S.R. Jimerson, M.K. Burns, and A.M. VanDerHeyden (Eds.), Handbook of Response to Intervention: The Science and Practice of Multi-Tiered Systems of Support (2nd ed., pp. 473–492). New York: Springer Science, Inc. doi:10.1007/978-1-4899-7568-3_28

Journal article, monograph, or newsletter

Anderson, D., Irvin, P., Alonzo, J., and Tindal, G. (2014). Gauging Item Alignment Through Online Systems While Controlling for Rater Effects. Educational Measurement: Issues and Practice. doi:10.1111/emip.12038

Elliott, S.N., Kurz, A., Tindal, G., and Yel, N. (2016). Influence of Opportunity to Learn Indices and Education Status on Students' Mathematics Achievement Growth. Remedial and Special Education, 38(3): 145–158. doi:10.1177/0741932516663000

Elliott, S.N. (in press). Measuring Opportunity to Learn and Achievement Growth for Students With Disabilities: Future Research Needed. Remedial and Special Education.

Elliott, S.N., Roach, R.T., and Kurz, A. (2014). Evaluating and Advancing the Effective Teaching of Special Educators With a Dynamic Instructional Practices Portfolio. Assessment for Effective Intervention, 39(2): 83–98. doi:10.1177/1534508413511491

Kamata, A., Nese, J.F.T., Patarapichayatham, C., and Lai, C.F. (2013). Modeling Nonlinear Growth With Three Data Points: Illustration With Benchmarking Data. Assessment for Effective Intervention, 38(2): 105–116.

Kurz, A., Elliott, S.N., and Roach, A.T. (2015). Addressing the Missing Instructional Data Problem: Using a Teacher Log to Document Tier 1 Instruction. Remedial and Special Education, 36(6): 361–373. doi:10.1177/0741932514567365

Nese, J.F.T., Stevens, J.J., Schulte, A.C., Tindal, G., and Elliott, S.N. (2016). Modeling the Time-Varying Nature of Student Exceptionality Classification on Achievement Growth. Journal of Special Education, 51(1): 38–49. doi:10.1177/0022466916668164

Nese, J.F.T., Biancarosa, G., Anderson, D., Lai, C.F., Alonzo, J., and Tindal, G. (2012). Within-Year Oral Reading Fluency With CBM: A Comparison of Models. Reading and Writing: An Interdisciplinary Journal, 25(4): 887–915. doi:10.1007/s11145-011-9304-0

Nese, J.F.T., Biancarosa, G., Cummings, K., Kennedy, P., Alonzo, J., and Tindal, G. (2013). In Search of Average Growth: Describing Within-Year Oral Reading Fluency Growth for Grades 1–8. Journal of School Psychology, 51(5): 625–642. doi:10.1016/j.jsp.2013.05.006

Nese, J.F.T., Tindal, G., Stevens, J.J., and Elliott, S.N. (2015). The Influence of Multiple Administrations of a State Achievement Test on Passing Rates for Student Groups. Education Policy Analysis Archives, 23(70): 1–24.

Roach, A.T., Kurz, A., and Elliott, S.N. (2015). Facilitating Opportunity to Learn for Students With Disabilities With Instructional Feedback Data. Preventing School Failure, 59(3): 168–178. doi:10.1080/1045988X.2014.901288

Saven, J.L., Anderson, D., Nese, J.F.T., Farley, D., and Tindal, G. (2015). Patterns of Statewide Test Participation for Students With Significant Cognitive Disabilities. Journal of Special Education, 49(4): 209–220. doi:10.1177/0022466915582213

Schulte, A.C., and Stevens, J.J. (2015). Once, Sometimes, or Always in Special Education: Mathematics Growth and Achievement Gaps. Exceptional Children, 81(3): 370–387. doi:10.1177/0014402914563695

Schulte, A.C., Stevens, J.J., Elliott, S.N., Tindal, G., and Nese, J.F.T. (2016). Achievement Gaps for Students With Disabilities: Stable, Widening, or Narrowing on a State-Wide Reading Comprehension Test? Journal of Educational Psychology, 108(7): 925–942. doi:10.1037/edu0000107

Stevens, J.J., and Schulte, A.C. (2017). The Interaction of Learning Disability Status and Student Demographic Characteristics on Mathematics Growth. Journal of Learning Disabilities. doi:10.1177/0022219415618496

Stevens, J.J., Schulte, A.C., Elliott, S.N., Nese, J.F.T., and Tindal, G. (2015). Growth and Gaps in Mathematics Achievement of Students With and Without Disabilities on a Statewide Achievement Test. Journal of School Psychology, 53(1): 45–62. doi:10.1016/j.jsp.2014.11.001

Tindal, G., Nese, J.F.T., Farley, D., Saven, J.L., and Elliott, S.N. (2016). Documenting Reading Achievement and Growth for Students Taking Alternate Assessments. Exceptional Children, 82(3): 321–336. doi:10.1177/0014402915585492

Tindal, G., Nese, J.F.T., Stevens, J., and Alonzo, J. (2016). Growth on Oral Reading Fluency Measures as a Function of Special Education and Measurement Sufficiency. Remedial and Special Education, 37(1): 28–40. doi:10.1177/0741932515590234