National Center for Teacher Effectiveness: Validating Measures of Effective Math Teaching
Topic: Teacher Effectiveness
Purpose: The National Center for Teacher Effectiveness: Validating Measures of Effective Math Teaching will identify practices and characteristics that distinguish between more and less effective teachers and will use this information to develop a suite of empirically validated and practical instruments that can be used by school districts to select, deploy, and retain more effective teachers. While focusing on math instruction in grades 4 and 5, the Center will have four primary goals:
A secondary goal is to work cooperatively with the Institute to formulate and carry out supplementary research studies that are responsive to the needs of education practitioners and policy makers. Topics include determining why teacher effects fade out and examining whether participation in teacher evaluation systems improves teacher effectiveness.
The Center will host a national conference of researchers and practitioners interested in teacher evaluation and teacher effectiveness.
Key Personnel: Thomas J. Kane and Heather Hill, Harvard Graduate School of Education;
Douglas O. Staiger, John French Professor of Economics, Dartmouth College
Journal article, monograph, or newsletter
Blazar, D. (2017). Validating Teacher Effects on Students' Attitudes and Behaviors: Evidence From Random Assignment of Teachers to Students. Education Finance and Policy, (Just Accepted), 1–52.
Blazar, D. (2015). Effective Teaching in Elementary Mathematics: Identifying Classroom Practices That Support Student Achievement. Economics of Education Review, 48, 16–29.
Blazar, D., and Kraft, M.A. (2017). Teacher and Teaching Effects on Students' Attitudes and Behaviors. Educational Evaluation and Policy Analysis, 39(1), 146–170.
Blazar, D., and Pollard, C. (2017). Does Test Preparation Mean Low-Quality Instruction? Educational Researcher, 46(8), 420–433.
Blazar, D., Braslow, D., Charalambous, C.Y., and Hill, H.C. (2017). Attending to General and Mathematics-Specific Dimensions of Teaching: Exploring Factors Across Two Observation Instruments. Educational Assessment, 22(2), 71–94.
Blazar, D., Litke, E., and Barmore, J. (2016). What Does It Mean to Be Ranked a "High" or "Low" Value-Added Teacher? Observing Differences in Instructional Quality Across Districts. American Educational Research Journal, 53(2), 324–359.
Herlihy, C., Karger, E., Pollard, C., Hill, H.C., Kraft, M.A., Williams, M., and Howard, S. (2014). State and Local Efforts to Investigate the Validity and Reliability of Scores From Teacher Evaluation Systems. Teachers College Record, 116(1), 1–28.
Hill, H.C., and Grossman, P. (2013). Learning From Teacher Observations: Challenges and Opportunities Posed by New Teacher Evaluation Systems. Harvard Educational Review, 83(2), 371–384.
Hill, H.C., Beisiegel, M., and Jacob, R. (2013). Professional Development Research: Consensus, Crossroads, and Challenges. Educational Researcher, 42(9), 476–487.
Hill, H.C., Blazar, D., and Lynch, K. (2015). Resources For Teaching: Examining Personal and Institutional Predictors of High-Quality Instruction. AERA Open, 1(4), 2332858415617703.
Hill, H.C., Charalambous, C.Y., and Chin, M. J. (2018). Teacher Characteristics and Student Learning in Mathematics: A Comprehensive Assessment. Educational Policy, 0895904818755468.
Hill, H.C., Charalambous, C.Y., and Kraft, M. (2012). When Rater Reliability Is Not Enough: Teacher Observation Systems and a Case for the Generalizability Study. Educational Researcher, 41(2), 56–64.
Hill, H.C., Charalambous, C.Y., McGinn, D., Blazar, D., Beisiegel, M., Humez, A., Kraft, M., Litke, E., and Lynch, K. (2012). Validating Arguments for Observational Instruments: Attending to Multiple Sources of Variation. Educational Assessment, 17(2), 88–106.
Jackson, C.K., Rockoff, J.E., and Staiger, D.O. (2014). Teacher Effects and Teacher-Related Policies. Annual Review of Economics, 6(1), 801–825.
Kelcey, B., McGinn, D., and Hill, H. (2014). Approximate Measurement Invariance in Cross-Classified Rater-Mediated Assessments. Frontiers in Psychology, 5, 1469.
Kraft, M.A., and Papay, J.P. (2014). Can Professional Environments in Schools Promote Teacher Development? Explaining Heterogeneity in Returns to Teaching Experience. Educational Evaluation and Policy Analysis, 36(4), 476–500.
Staiger, D.O., and Rockoff, J.E. (2010). Searching for Effective Teachers With Imperfect Information. Journal of Economic Perspectives, 24(3), 97–117.
Taylor, E.S., and Tyler, J.H. (2012). The Effect of Evaluation on Teacher Performance. The American Economic Review, 102(7), 3628–3651.
Working paper
Bacher-Hicks, A., Chin, M.J., Kane, T.J., and Staiger, D.O. (2017). An Evaluation of Bias in Three Measures of Teacher Quality: Value-Added, Classroom Observations, and Student Surveys (NBER 23478). Cambridge, MA: National Bureau of Economic Research Working Paper.
Cascio, E.U., and Staiger, D.O. (2012). Knowledge, Tests, and Fadeout in Educational Interventions (NBER 18038). Cambridge, MA: National Bureau of Economic Research Working Paper.
Taylor, E.S., and Tyler, J.H. (2011). The Effect of Evaluation on Teacher Performance: Evidence From Longitudinal Student Achievement Data of Mid-Career Teachers (NBER 16877). Cambridge, MA: National Bureau of Economic Research Working Paper.