Information on IES-Funded Research
Grant Closed

Evaluating the Impact of the Choice of Test Score Scale on the Measurement of Individual Student Growth

NCER
Program: Statistical and Research Methodology in Education
Program topic(s): Core
Award amount: $273,844
Principal investigator: Andrew Ho
Awardee: University of Iowa
Year: 2007
Award period: 2 years 11 months (09/01/2007 - 08/31/2010)
Project type: Methodological Innovation
Award number: R305U070008

Purpose

State and federal educational policies are focusing increasingly on school accountability for individual student growth. As statistical models and policy approaches proliferate, little attention has been paid to the serious dependency of growth statistics on the choice of the test score scale. A different score scale, defined by a nonlinear transformation of the original score scale, can theoretically alter growth statistics, reverse aggregate trends, and distort interpretations from so-called value-added models. However, the practical consequences of scale choice have not been well described, leaving growth-based education policies reliant on statistics with undocumented scale dependency. The researchers developed a framework for evaluating the impact of scale choice on large-scale, policy-relevant growth statistics. This conceptual and mathematical framework considers families of gentle transformations of vertically scaled, longitudinal data that allow the scale dependency of target statistics to be quantified.
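To make the scale-dependency concern concrete, the following is a minimal numerical sketch. The scores and the exponential rescaling are invented for illustration; they are not project data or the project's actual transformations. It shows how a monotone, nonlinear rescaling can reverse an aggregate growth comparison:

import numpy as np

# Hypothetical vertically scaled scores for two groups at two time points.
# These numbers are invented for illustration; they are not project data.
low_group  = (200.0, 230.0)    # gains 30 points on the original scale
high_group = (400.0, 424.0)    # gains 24 points on the original scale

def rescale(x, c=0.005):
    # A monotone, nonlinear transformation of the score scale: it preserves
    # every student's rank but stretches the upper end of the scale.
    return np.exp(c * np.asarray(x))

for name, (t1, t2) in [("low-scoring", low_group), ("high-scoring", high_group)]:
    print(f"{name} group: raw gain = {t2 - t1:.0f}, "
          f"transformed gain = {float(rescale(t2) - rescale(t1)):.2f}")

# The low-scoring group gains more on the original scale (30 vs. 24), but
# the high-scoring group gains more after the transformation (0.94 vs. 0.44):
# the aggregate comparison reverses with the choice of scale.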

Project Activities

The researchers developed a framework for evaluating the impact of scale choice on large-scale, policy-relevant growth statistics, and they determined whether the framework has "scale-neutral" properties that would let it be applied to vertically scaled data from any state or local testing program. Theoretical developments were anchored by applications using a statewide 5-year longitudinal dataset from Iowa; at the time, Iowa was one of eight states whose growth model had been approved under the Growth Model Pilot Program. The researchers investigated families of plausible transformations whose well-established properties allow for a standardized characterization of the dependency of growth statistics on the choice of vertical scale. Transformations from these families emphasize different regions of the score scale in a systematic and predictable fashion while keeping the degree of emphasis modest. Under this framework, any dataset may be rescaled according to these plausible transformations, and descriptive statistics and growth model parameter estimates may be recalculated from the transformed data to establish the limits of pliability.
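The rescale-and-recompute loop at the heart of this approach can be sketched as follows. Everything here is an assumption made for illustration: the data are simulated, and the one-parameter exponential family stands in for whatever family of gentle transformations the framework actually specifies.

import numpy as np

rng = np.random.default_rng(0)

# Simulated two-year longitudinal scores standing in for real vertically
# scaled data (1,000 students); a real application would use state data.
year1 = rng.normal(250, 40, size=1000)
year2 = year1 + rng.normal(15, 20, size=1000)

def gentle_transform(x, b):
    # A hypothetical one-parameter family of monotone transformations:
    # b = 0 gives the (standardized) original scale; b > 0 stretches the
    # top of the scale, b < 0 the bottom. Small |b| keeps the degree of
    # emphasis modest.
    z = (x - 250.0) / 40.0
    return z if b == 0 else np.expm1(b * z) / b

# Recompute the target statistic (here, mean gain in SD units) under each
# member of the family; the spread of results bounds its pliability.
gains = [np.mean(gentle_transform(year2, b) - gentle_transform(year1, b))
         for b in np.linspace(-0.3, 0.3, 13)]
print(f"mean gain ranges from {min(gains):.3f} to {max(gains):.3f} "
      "across the transformation family")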

Structured Abstract

Research design and methods

A primary research task in this project was the development of families of reference distributions representing plausible transformations. Three approaches to developing reference families were considered; in one, the established score scale was divided into regions and different relative linear weights were assigned to these regions. Three broad categories of advanced issues were considered to fully explicate the framework: multiple time points, discrete data structures, and measurement error. The full treatment of the framework investigated methods of addressing measurement error in observed score distributions. The researchers evaluated the impact of scale choice on both widely reported statistics and decisions of consequence, such as adequate yearly progress (AYP), using a longitudinal database of student scores that fed into AYP determinations for Iowa schools. Large-scale data from other state programs were gathered, and cross-state comparisons of pliability (stability across scale choice) were analyzed to illustrate the full potential of this framework: cross-test, cross-program, and cross-state comparison of the pliability of widely reported, high-stakes growth statistics.
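As a concrete, entirely hypothetical illustration of the region-weighting approach, the sketch below divides an assumed 100-500 score scale at two invented cut points and applies a different relative slope to each region; the cut points and weights are not the project's.

import numpy as np

def region_weighted_scale(x, cuts, weights, lo=100.0, hi=500.0):
    # Piecewise-linear rescaling: divide the scale [lo, hi] at `cuts` and
    # give each region its own relative slope (`weights`, one per region).
    # Weights > 1 stretch a region, weights < 1 compress it; the pieces
    # join continuously, so the map stays monotone.
    knots_x = np.concatenate(([lo], cuts, [hi]))
    knots_y = np.concatenate(([lo], lo + np.cumsum(np.diff(knots_x) * weights)))
    return np.interp(x, knots_x, knots_y)

# Example: three regions, with extra emphasis on the low end of the scale.
scores = np.array([150.0, 250.0, 350.0, 450.0])
print(region_weighted_scale(scores, cuts=[200.0, 400.0], weights=[1.3, 1.0, 0.8]))
# -> [165. 280. 380. 470.]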

Data analytic strategy

The researchers subjected state student achievement data to families of possible scale transformations and produced statistical and evaluative reports about the scale dependency of all major achievement growth statistics. The results can inform researchers and policymakers about the stability of both school-level and state-level measures of student achievement growth for states participating in the No Child Left Behind Growth Model Pilot Program.
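A scale-dependency report of this kind might look like the following sketch. Again, the data are simulated and the transformation family is the same hypothetical one used above; only the shape of the output is meant to be suggestive.

import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for state achievement data: two years of vertically
# scaled scores for 2,000 students split into two groups.
n = 2000
group = rng.integers(0, 2, size=n)
year1 = rng.normal(250 + 10 * group, 40)
year2 = year1 + rng.normal(15 - 3 * group, 20)

def transform(x, b):
    # Same hypothetical one-parameter family as in the earlier sketch.
    z = (x - 250.0) / 40.0
    return z if b == 0 else np.expm1(b * z) / b

# For each widely reported growth statistic, report its minimum and maximum
# across the family: the pliability of that statistic under scale choice.
stats = {
    "mean gain": lambda y1, y2: np.mean(y2 - y1),
    "group gap in gains": lambda y1, y2: np.mean((y2 - y1)[group == 1])
                                       - np.mean((y2 - y1)[group == 0]),
    "year-over-year correlation": lambda y1, y2: np.corrcoef(y1, y2)[0, 1],
}
for name, stat in stats.items():
    vals = [stat(transform(year1, b), transform(year2, b))
            for b in np.linspace(-0.3, 0.3, 13)]
    print(f"{name}: ranges from {min(vals):+.3f} to {max(vals):+.3f}")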

Key outcomes

This research developed a theoretical framework, implemented the framework on Iowa testing data, and, where appropriate data were available, analyzed the scale dependency of growth statistics (including gains, gaps, and "value added") for the states selected to implement growth models under the Growth Model Pilot Program.

People and institutions involved

IES program contact(s)

Allen Ruby

Associate Commissioner for Policy and Systems
NCER

Project contributors

Stephen Dunbar

Co-principal investigator

Products and publications

Publications:

Journal article, monograph, or newsletter

Furgol, K.E., Ho, A.D., and Zimmerman, D.L. (2010). Estimating Trends From Censored Assessment Data Under No Child Left Behind. Educational and Psychological Measurement, 70(5): 760-776.

Ho, A.D., Lewis, D.M., and Farris, J.L.M. (2009). The Dependence of Growth-Model Results on Proficiency Cut Scores. Educational Measurement: Issues and Practice, 28(4): 15-26.

Additional project information

This project was submitted to and funded as an Unsolicited application in FY 2007.

Questions about this project?

To answer additional questions about this project or provide feedback, please contact the program officer.

Tags

Data and Assessments; Policies and Standards

