Project Activities
The research team will work with middle school teachers, administrators, and other stakeholders to first develop standards for the assessments and then to develop materials that teachers can use to generate their own assessments that attend to these standards. The development process will be iterative. To determine whether teacher-generated assessments are as valid as those generated by external agents, the researchers will compare the teachers' assessments with their own and test both sets for validity and consistency.
Structured Abstract
Setting
The research will take place in schools in the northern region of Florida that serve urban and rural populations.
Sample
Seventh-grade science teachers and their students will be the research subjects.
Intervention
The research team will develop an assessment strategy that includes three components: (a) a series of performance assessments of problem solving and other cognitively complex competencies that measure selected state-level benchmarks; (b) performance assessment "specifications" that link teachers' assessments to those administered statewide and guide teachers' development of comparable classroom assessments; and (c) information about using these performance assessments to generate both summative and formative data. The strategy also includes training that helps teachers create cognitively complex assessments and use them to guide learning through effective formative feedback.
Research design and methods
Year one is devoted to developing and pilot-testing every element of the proposed assessment strategy with middle school students and teachers from two of the participating schools. Benchmarks from Florida's Next Generation Sunshine State Standards that address higher cognitive skills will be identified, and performance specifications and corresponding assessments for a subset of those benchmarks will be developed. Teacher training materials addressing the use of specifications in assessment development and the delivery of effective formative feedback will also be developed and pilot-tested. The proposed research occurs in the context of middle school science instruction; however, the intent is to establish procedures that are useful at other grade levels and in other subject areas. Teachers, administrators, and other stakeholders will be involved throughout the project. Teachers will be trained to generate questions and administer tests, and their feedback will aid in revising the training materials and assessments. Other stakeholders will help identify benchmarks and assessment specifications, and their feedback will also inform revisions.
Years two and three are designated for full development, administration, and analysis, during which the systems and procedures developed in year one will be implemented and the collected data analyzed. State review and external advisory teams will be involved at every significant juncture and their feedback incorporated.
Control condition
The results from performance assessments developed and administered by teachers will be compared with results from assessments developed and administered by the researchers.
Key measures
An evidence-centered design approach and other analyses of scores will be used to establish the comparability of assessments that were developed independently by teachers and the research team. Classroom observations and interviews of teachers and other educators will establish whether best-practice approaches to formative feedback can be employed and are perceived to be practical.
Data analytic strategy
Analyses will establish whether performance on the assessments generalizes and whether the procedures are scalable to the state level. Evidence-centered design will be used to establish the competencies being measured, and analysis of variance techniques will be used to assess generalizability. Descriptive statistics and the Bookmark method will be used to compare score patterns on performance assessments administered to samples of students.
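As an illustrative sketch of the kind of generalizability analysis described above, variance components in a fully crossed person-by-task design can be estimated from two-way ANOVA mean squares, and a generalizability coefficient computed from them. The design, sample sizes, and simulated scores below are hypothetical, not the project's actual data or procedures:

```python
import numpy as np

# Hypothetical G-study sketch: scores[p, t] is person p's score on task t
# in a fully crossed person x task design. Data are simulated for illustration.
rng = np.random.default_rng(0)
n_p, n_t = 30, 5
person_true = rng.normal(0.0, 1.0, size=(n_p, 1))          # true person effects
scores = 3.0 + person_true + rng.normal(0.0, 0.5, size=(n_p, n_t))

grand = scores.mean()
person_means = scores.mean(axis=1)
task_means = scores.mean(axis=0)

# Sums of squares for a two-way ANOVA without replication.
ss_p = n_t * ((person_means - grand) ** 2).sum()
ss_t = n_p * ((task_means - grand) ** 2).sum()
ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_t

ms_p = ss_p / (n_p - 1)
ms_t = ss_t / (n_t - 1)
ms_res = ss_res / ((n_p - 1) * (n_t - 1))

# Random-effects variance-component estimates (negatives truncated to zero).
var_res = ms_res
var_p = max((ms_p - ms_res) / n_t, 0.0)
var_t = max((ms_t - ms_res) / n_p, 0.0)

# Generalizability coefficient for relative decisions over n_t tasks.
g_coef = var_p / (var_p + var_res / n_t)
print(f"person var = {var_p:.3f}, task var = {var_t:.3f}, residual = {var_res:.3f}")
print(f"generalizability coefficient = {g_coef:.3f}")
```

A coefficient near 1 would indicate that rankings of students generalize well across tasks; the same variance components can also project how the coefficient changes as the number of tasks per student varies.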
Products and publications
Products include an online data management and training system containing training and teaching resources, including materials that help teachers design and administer their own assessments. The team will also develop a scalability framework that addresses implementation of the performance-based assessment program at the state level, and the researchers will produce scholarly reports of findings.
Publications:
Journal article, monograph, or newsletter
Oosterhof, A. (2011). Upgrading high-stakes assessment. Better Evidence-based Education, 3(3), 20-21.
Sherdan, D., Anderson, A., Rouby, A., LaMee, A., Gilmer, P. J., & Oosterhof, A. (2014). Including often-missed knowledge and skills in science assessments. Science Scope, 38(1), 56-62.
Yang, Y., Oosterhof, A., & Xia, Y. (2015). Reliability of scores on the summative performance assessments. Journal of Educational Research, 108(6), 465-479.