Title: Addressing Practical Problems in Achievement Gap Estimation: Nonparametric Methods for Censored Data
Principal Investigator: Reardon, Sean
Awardee: Stanford University
Program: Statistical and Research Methodology in Education
Award Period: 2 years
Award Amount: $697,878
Type: Methodological Innovation
Award Number: R305D110018
Co-Principal Investigator: Andrew Ho (Harvard University)
The project will develop and assess a set of methods for estimating achievement gaps and their standard errors in order to address two practical problems affecting the use of achievement gap statistics: (1) the dependence of traditional gap statistics—mean differences and effect sizes—upon scaling decisions, and (2) the reporting of achievement data in categorical terms (e.g., below basic, basic, proficient, and advanced) rather than as test score results and their distributional statistics (e.g., means and standard deviations).
Achievement gaps are used to measure differences in group achievement as well as inequities in educational advantages and opportunities to learn. However, estimated achievement gaps may be sensitive to the choice of test metric and the choice of an achievement gap statistic. In addition, estimating achievement gaps may not be possible in those states and districts that only report proficiency levels. Without access to basic distributional statistics for test scores, research linking changes in state, district, or school policies and practices to levels and changes in achievement gaps may be compromised.
The project will build on a recently developed, nonparametric approach (sometimes called a "metric-free" approach) to measuring achievement gaps. This approach is based on the transformation-invariant properties of nonparametric statistics and offers a range of theoretical benefits over traditional approaches, but it requires information about the shape of the test score distributions in both groups (not merely their means and standard deviations). Moreover, methods are required for estimating standard errors of these metric-free gap measures.
The project will develop and test a set of methods for estimating these metric-free achievement gap measures from widely available categorical proficiency data. Both simulated and real datasets will be used to identify the methods that minimize bias and maximize efficiency across a range of idealized and real-world scenarios. This work will include a strategy for computing standard errors for these nonparametric achievement gap estimates. Simulated data and data from national and state assessments will then be used to assess the different nonparametric methods developed, to determine whether they produce unbiased estimates of achievement gaps and accurate standard errors. The overall result will be a set of practical guidelines for measuring achievement gaps based on categorical proficiency data, along with free software that enables researchers and stakeholders to estimate these gaps.
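As a rough illustration of the kind of metric-free estimate described above: if each group's cumulative proportion scoring below every proficiency cut score is reported, the probability that a randomly chosen member of one group outscores a member of the other can be approximated by the area under the paired cumulative proportions (a PP curve), and the gap expressed as V = √2·Φ⁻¹(AUC). Because this quantity depends only on cumulative proportions, it is unchanged by any monotone rescaling of the test metric. The sketch below is a minimal, hypothetical implementation using trapezoidal interpolation of the PP points; the function name and example proportions are illustrative, not taken from the project.

```python
from statistics import NormalDist

def v_gap(p_a, p_b):
    """Hypothetical sketch of a metric-free gap estimate from coarsened data.

    p_a, p_b: cumulative proportions of groups a and b scoring below each
    proficiency cut score (same cut scores for both groups, in order).
    Returns V = sqrt(2) * Phi^{-1}(AUC), where AUC approximates
    P(score of a random member of a > score of a random member of b)
    via the trapezoidal area under the PP points (0,0)...(1,1).
    """
    xs = [0.0] + list(p_a) + [1.0]
    ys = [0.0] + list(p_b) + [1.0]
    # Trapezoidal area under the piecewise-linear PP curve
    auc = sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
              for i in range(len(xs) - 1))
    return 2 ** 0.5 * NormalDist().inv_cdf(auc)

# Group b has larger proportions below each cut, so group a is higher-scoring
# and V is positive; swapping the arguments flips the sign.
print(v_gap([0.2, 0.5, 0.8], [0.4, 0.7, 0.9]))
```

If the two groups' test scores were exactly normal, V would coincide with the standardized mean difference, which is one reason a probit transform of the AUC is a natural scale for this statistic.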
Related IES Projects: The Effects of Racial School Segregation on the Black-White Achievement Gap (R205A070377)
Journal article, monograph, or newsletter
Fahle, E.M., and Reardon, S.F. (2018). How Much do Test Scores Vary Among School Districts? New Estimates Using Population Data, 2009–2015. Educational Researcher, 0013189X18759524.
Ho, A.D., and Reardon, S.F. (2012). Estimating Achievement Gaps From Test Scores Reported in Ordinal "Proficiency" Categories. Journal of Educational and Behavioral Statistics, 37(4), 489–517.
Ho, A.D., and Yu, C.C. (2015). Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects. Educational and Psychological Measurement, 75(3), 365–388.
Reardon, S.F. (2016). School Segregation and Racial Academic Achievement Gaps. RSF: The Russell Sage Foundation Journal of the Social Sciences, 2(5), 34–57.
Reardon, S.F., and Ho, A.D. (2015). Practical Issues in Estimating Achievement Gaps From Coarsened Data. Journal of Educational and Behavioral Statistics, 40(2), 158–189.
Reardon, S.F., Kalogrides, D., Fahle, E.M., Podolsky, A., and Zárate, R.C. (2018). The Relationship Between Test Item Format and Gender Achievement Gaps on Math and ELA Tests in Fourth and Eighth Grades. Educational Researcher, 0013189X18762105.
Reardon, S.F., Shear, B.R., Castellano, K.E., and Ho, A.D. (2017). Using Heteroskedastic Ordered Probit Models to Recover Moments of Continuous Test Score Distributions From Coarsened Data. Journal of Educational and Behavioral Statistics, 42(1), 3–45.
Yee, D., and Ho, A. (2015). Discreteness Causes Bias in Percentage-Based Comparisons: A Case Study From Educational Testing. The American Statistician, 69(3), 174–181.