Title: Virtual Performance Assessments for Measuring Student Achievement in Science
Principal Investigator: Dede, Christopher
Awardee: President and Fellows of Harvard College, Graduate School of Education
Program: Education Technology
Award Period: 3 years
Award Amount: $1,164,167
Purpose: Science inquiry process skills are difficult to assess with multiple choice or constructed-response paper-and-pencil tests. This project will develop three single-user immersive three-dimensional (3-D) environments to assess middle school students' science inquiry skills. The investigators will align these assessments to National Science Education Standards (NSES) and will develop the assessments to serve as a standardized component of an accountability program.
Project: The investigators will develop three virtual assessment environments, which will contain a number of independent tasks. The investigators will modify an existing commercial framework for immersive game development to provide the appropriate authoring platform for the assessments. After conceptual and technological development is complete, the project will conduct validity and reliability studies.
Product: The final product will include three immersive virtual performance assessments that test students' science inquiry skills. The assessments will run on computers currently in schools and will require little preparation for users and no additional paper-based materials.
Purpose: This project will develop and study the feasibility of using virtual performance assessments to assess sixth- and seventh-grade students' science inquiry process learning in a standardized testing setting. The investigators will develop three assessments that all measure the same construct: scientific inquiry in the context of Life Science.
Setting: Research will take place in Massachusetts and southeastern Wisconsin in urban, suburban, and rural areas. The investigators will select schools to be representative of the full spectrum of U.S. classroom contexts in terms of demographic characteristics such as race/ethnicity, native language, and socioeconomic status (SES).
Population: Participants in this study will include a diverse group of approximately 1,000 sixth- and seventh-grade middle school students selected from 45 schools.
Intervention: Students being assessed will work individually; the software will guide them through the assessment tasks. In the 3-D space, students will take on the identity of an avatar, a virtual persona that can move around and interact with the environment. Students will be asked to complete a science inquiry problem. The investigators will capture students' actions throughout the tasks, allowing performance to be assessed continuously, in contrast to traditional methods that capture only snapshots of student learning. An automated and standardized scoring tool will score each task, thus eliminating potential human rater-reliability issues. The investigators will design these virtual assessments to be both cost-effective for schools and easy to administer.
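The continuous-scoring approach described above can be sketched in code. The following is an illustrative outline only: the event names, log schema, and rubric criteria are invented for this sketch, and the project's actual scoring tool is not specified in this abstract.

```python
from dataclasses import dataclass

@dataclass
class LogEvent:
    """One captured student action in the virtual environment (hypothetical schema)."""
    timestamp: float   # seconds since the task began
    action: str        # e.g., "collect_sample", "run_test", "state_hypothesis"
    detail: str        # free-form payload, e.g., which object was examined

def score_inquiry_task(events: list[LogEvent]) -> int:
    """Apply a simple, standardized rubric to the full event stream.

    Unlike a snapshot test item, every rubric check sees the whole
    sequence of actions, so the inquiry process (not just a final
    answer) is scored. The three criteria below are invented.
    """
    actions = [e.action for e in events]
    score = 0
    if "state_hypothesis" in actions:
        score += 1                                   # formulated a hypothesis
    if actions.count("collect_sample") >= 2:
        score += 1                                   # gathered multiple data points
    if ("state_hypothesis" in actions and "run_test" in actions
            and actions.index("state_hypothesis") < actions.index("run_test")):
        score += 1                                   # tested after hypothesizing
    return score

events = [
    LogEvent(3.2, "collect_sample", "pond water"),
    LogEvent(9.7, "state_hypothesis", "algae caused the fish kill"),
    LogEvent(14.1, "collect_sample", "upstream water"),
    LogEvent(20.5, "run_test", "dissolved oxygen"),
]
print(score_inquiry_task(events))  # → 3
```

Because every rubric check runs over the whole event stream with fixed logic, two students who take the same actions always receive the same score, which is the rater-reliability advantage the abstract describes.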
Research Design and Methods: The investigators will develop the assessments using an Evidence Centered Design framework. In this process, the researchers will focus on how students represent knowledge, what skills are to be assessed, what behaviors and performances elicit the knowledge and skills being assessed, and what situations foster those behaviors and evidence. After developing a prototype in Year 1, the investigators will conduct construct validity studies to ensure that the assessments are measuring what they are intended to measure and that inferences made from the assessments are accurate. The team will accomplish this task by conducting alignment studies, comparing the virtual assessments to the National Science Education Standards (NSES). The researchers will also conduct cognitive studies that provide evidence of students' declarative knowledge (knowledge of science content) and procedural knowledge (knowing how to conduct scientific activities). In Year 2, the investigators will conduct a series of pilot tests with sets of approximately 10 teachers and 200 students. Once the assessments are fully functioning in Year 3, the team will conduct a series of generalizability studies to establish the reliability and validity of the measures, using approximately 20 teachers and 400 students.
Key Measures: During the development and piloting of the assessments, the investigators will develop scales based on prior, similar studies to evaluate the alignment of the assessments with the target NSES standards. Also, the investigators will create protocols for use in think-alouds and interviews to evaluate whether aptitude in domains irrelevant to the target constructs impacts student performance. During the final phase of the project, in addition to collecting item-level performance of individual students on the three assessments, the investigators will administer pre-surveys to participating students and teachers to assist in the identification of relevant facets in the generalizability study. Finally, throughout the project, the investigators will record all student interactions with the immersive environment in an event log for data mining and further analysis.
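An event log of the kind described is often stored as one timestamped record per interaction, which keeps it simple to mine afterward. The sketch below assumes a JSON-lines layout with hypothetical field names; the project's actual log format is not given in this abstract.

```python
import io
import json

def log_interaction(stream, student_id: str, task: str, action: str, t: float) -> None:
    """Append one student interaction as a JSON line (hypothetical field names)."""
    record = {"student": student_id, "task": task, "action": action, "time": t}
    stream.write(json.dumps(record) + "\n")

def actions_per_student(stream) -> dict[str, int]:
    """A minimal data-mining pass: count logged actions per student."""
    counts: dict[str, int] = {}
    for line in stream:
        record = json.loads(line)
        counts[record["student"]] = counts.get(record["student"], 0) + 1
    return counts

# In-memory stand-in for a log file.
buf = io.StringIO()
log_interaction(buf, "s001", "pond_ecology", "collect_sample", 3.2)
log_interaction(buf, "s001", "pond_ecology", "run_test", 20.5)
log_interaction(buf, "s002", "pond_ecology", "collect_sample", 4.0)
buf.seek(0)
print(actions_per_student(buf))  # → {'s001': 2, 's002': 1}
```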
Data Analytic Strategy: The project will employ qualitative and quantitative methods to determine whether the virtual assessments are reliable and measure valid constructs. The investigators will conduct construct validity research involving both alignment and cognitive studies (think-alouds, observations, interviews). The project will conduct a series of generalizability studies to determine the reliability and validity of the assessments. The generalizability studies will allow researchers to measure multiple sources of variation in a single analysis in order to evaluate whether performance on the assessments could reasonably be equated.
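In a generalizability study for a crossed persons × tasks design, score variance is partitioned into person, task, and person-by-task components estimated from the two-way ANOVA mean squares, and the relative generalizability coefficient summarizes how reliably the assessments rank students. A minimal sketch of those standard estimators, using made-up scores rather than project data:

```python
import numpy as np

def g_coefficient(scores: np.ndarray) -> float:
    """Relative generalizability coefficient for a persons x tasks design.

    scores: 2-D array, rows = persons, columns = tasks.
    Variance components come from the two-way random-effects
    ANOVA mean squares (one observation per cell).
    """
    n_p, n_t = scores.shape
    grand = scores.mean()
    ms_p = n_t * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
    ms_t = n_p * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_t - 1)
    ss_total = ((scores - grand) ** 2).sum()
    ss_res = ss_total - (n_p - 1) * ms_p - (n_t - 1) * ms_t
    ms_res = ss_res / ((n_p - 1) * (n_t - 1))
    var_p = max((ms_p - ms_res) / n_t, 0.0)   # person variance component
    var_pt = ms_res                           # person x task interaction + error
    return var_p / (var_p + var_pt / n_t)     # relative G coefficient

# Made-up scores: 4 students on 3 tasks. Students differ far more than
# tasks do, so the coefficient should be close to 1.
scores = np.array([[1, 2, 1],
                   [5, 6, 5],
                   [9, 8, 9],
                   [3, 3, 4]], dtype=float)
print(round(g_coefficient(scores), 3))  # → 0.986
```

A coefficient near 1 indicates that differences among students dominate task-to-task inconsistency, supporting the claim that scores from different tasks measure the same construct.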
Publications from this project:
Clarke, J., and Dede, C. (2010). Assessment, Technology, and Change. Journal of Research on Technology in Education, 42(3): 309–328.