IES Grant

Title: Center for Research on Evaluation, Standards, and Student Testing (CRESST)
Center: NCER
Year: 2005
Principal Investigator: Baker, Eva
Awardee: University of California, Los Angeles
Program: Education Research and Development Centers
Award Period: 5 years
Award Amount: $9,968,718
Type: Multiple Goals
Award Number: R305A050004
Description:

Co-Principal Investigator: Herman, Joan

Topic: Assessment, Standards, and Accountability

Purpose: The National Center for Research on Evaluation, Standards, and Student Testing (CRESST) has been at the forefront of efforts to improve the quality of education and learning in America. Located within UCLA's Graduate School of Education & Information Studies, CRESST contributes to the development of scientifically based evaluation and testing techniques; encourages the development, validation, and use of sound data for improved accountability and decision making; and explores technological applications to improve assessment and evaluation practice.

Projects

Assessment of POWERSOURCE
Building on its own prior research, the Center is developing POWERSOURCE, an assessment intervention emphasizing fundamental mathematical principles essential to student success in first-year algebra. The assessments are designed to help students organize their mathematical knowledge into big ideas, or schemas. The intervention also includes professional development to enable teachers to understand the POWERSOURCE assessments and their underlying mathematical principles, and to use them to improve teaching and learning.

In a randomized study involving 80 schools, Center researchers are examining the impact of POWERSOURCE on student learning. Within schools, all sixth, seventh, and eighth grade mathematics teachers are participating. A cohort of sixth grade students is being followed from the beginning of sixth grade through the end of their eighth grade year. The impact of POWERSOURCE on student learning will be compared with that of schools' standard instruction and test preparation conditions. Participating schools are in Pennsylvania and Hawaii.

Assessment of English Language Learning Students: Review of Literature
The No Child Left Behind Act of 2001 has had a significant impact on states' English language learner assessment policies. In particular, the legislation mandates that English language learning students be (a) subject to annual assessments of English language development and (b) included in annual state assessments and yearly progress performance targets. Many states rushing to meet these federally mandated requirements have found that they lack the expertise, time, or resources to systematically document or address many fundamental issues related to the assessment of English language learning students.

To provide guidelines for the appropriate development and use of English language learning assessments, the Center is undertaking a review of the relevant research literature. This literature review: (a) examines issues related to the overall validity and reliability of the instruments; (b) reports the latest research findings, focusing on the construct validity of the assessments; and (c) reviews issues in the assessment of content knowledge.

Providing Support to States to Improve the Assessment of English Language Learners: Test Validity, Designation, and Test Accommodation Use
Although experts have repeatedly called for attention to the validity of assessments designed for English language learning students, relatively little rigorous research has examined the validity of existing instruments. This project aims to: (a) help educators understand and improve the performance of English language learners by investigating the validity of their assessments, and (b) provide states with much-needed guidance to improve the validity of their assessments for English language learning students.

In partnership with three to four collaborating states, the Center is analyzing available data and existing documents in order to: (a) understand the nature and validity of states' current English language learner assessment practices, (b) prioritize needs, and (c) provide preliminary recommendations for improving the validity of their assessments and decision making. In addition, the Center is conducting targeted, experimental studies in two states in order to provide scientifically based evidence on which to base decisions about English language learner assessment practices and policies. By the end of the project, the Center will develop, revise, and disseminate specific guidelines that states may use to guide their English language learner assessment policy and practice decisions. In addition, the Center will make available a usable, generalized methodology and tools for examining critical assessment validity issues as they emerge.

Additional Key Personnel: Linn, Robert

Center Website: http://www.cse.ucla.edu/

IES Program Contact: Dr. Allen Ruby
Email: Allen.Ruby@ed.gov
Telephone: (202) 245-8145

Products and Publications

Book chapter

Baker, E.L. (2009). The Influence of Learning Research on the Design and Use of Assessment. In K.A. Ericsson (Ed.), Development of Professional Expertise: Toward Measurement of Expert Performance and Design of Optimal Learning Environments (pp. 333–355). New York: Cambridge University Press.

Baker, E.L., Chung, G.K.W.K., and Delacruz, G.C. (2008). Design and Validation of Technology-Based Performance Assessments. In J.M. Spector, M.D. Merrill, J.J.G. van Merriënboer, and M.P. Driscoll (Eds.), Handbook of Research on Educational Communications and Technology (3rd ed., pp. 595–604). Mahwah, NJ: Erlbaum.

Baker, E.L., Niemi, D., and Chung, G.K.W.K. (2008). Simulations and the Transfer of Problem Solving Knowledge and Skills. In E.L. Baker, J. Dickieson, W. Wulfeck, and H.F. O'Neil (Eds.), Assessment of Problem Solving Using Simulations (pp. 1–17). Mahwah, NJ: Erlbaum.

Chung, G.K.W.K., Baker, E.L., Delacruz, G.C., Bewley, W.L., Elmore, J., and Seely, B. (2008). A Computational Approach to Authoring Problem-Solving Assessments. In E.L. Baker, J. Dickieson, W. Wulfeck, and H.F. O'Neil (Eds.), Assessment of Problem Solving Using Simulations (pp. 289–307). Mahwah, NJ: Erlbaum.

Chung, G.K.W.K., O'Neil, H.F., Bewley, W.L., and Baker, E.L. (2008). Computer-Based Assessments to Support Distance Learning. In E. Klieme, J. Hartig, and A. Jurecka (Eds.), Assessment of Competencies in Educational Contexts (pp. 253–276). Göttingen, Germany: Hogrefe and Huber.

Journal article, monograph, or newsletter

Heritage, M., Kim, J., Vendlinski, T., and Herman, J. (2009). From Evidence to Action: A Seamless Process in Formative Assessment? Educational Measurement, 28(3): 24–31.

Wolf, M.K., Farnsworth, T., and Herman, J.L. (2008). Validity Issues in Assessing English Language Learners' Language Proficiency. Educational Assessment, 13(2): 80–107.

Nongovernment report, issue brief, or practice guide

Herman, J.L. (2007). Accountability and Assessment: Is Public Interest in K-12 Education Being Served? Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing.

Ruiz-Primo, M.A., Li, M., Tsai, S., and Schneider, J. (2008). Testing One Premise of Scientific Inquiry in Science Classrooms: A Study That Examines Students' Scientific Explanations. Los Angeles: University of California.

Vendlinski, T.P., Baker, E.L., and Niemi, D. (2008). Templates and Objects in Authoring Problem-Solving Assessments. Los Angeles: University of California.

Vendlinski, T.P., Howard, K.E., Hemberg, B.C., Vinyard, L., Martel, A., Kyriacou, E., Casper, J., Chai, Y., Phelan, J.C., and Baker, E.L. (2008). Using Data and Big Ideas: Teaching Distribution as an Instance of Repeated Addition. Los Angeles: University of California.

Wolf, M.K., Herman, J.L., Kim, J., Abedi, J., Leon, S., Griffin, N., Bachman, P.L., Chang, S.M., Farnsworth, T., Jung, H., Nollner, J., and Shin, H.W. (2008). Providing Validity Evidence to Improve the Assessment of English Language Learners. Los Angeles: University of California, CRESST.

Wolf, M.K., Kao, J., Griffin, N., Herman, J.L., Bachman, P.L., Chang, S.M., and Farnsworth, T. (2008). Issues in Assessing English Language Learners: English Language Proficiency Measures and Accommodation Uses—Practice Review. Los Angeles: University of California, CRESST.

Wolf, M.K., Kao, J., Herman, J.L., Bachman, L.F., Bailey, A.L., Bachman, P.L., Farnsworth, T., and Chang, S.M. (2008). Issues in Assessing English Language Learners: English Language Proficiency Measures and Accommodation Uses—Literature Review. Los Angeles: University of California, CRESST.

Proceeding

Encarnacao, A., Espinosa, P.D., Au, L., Chung, G.K.W.K., Johnson, L., and Kaiser, W.J. (2008). Individualized, Interactive Instruction (3I): An Online Formative Assessment and Instructional Tool. In Proceedings of the Annual Meeting of the American Society for Engineering Education (Session AC 2007–1524) (pp. 1–17). Honolulu, HI: American Society for Engineering Education.

Working paper

Phelan, J., Kang, T., Niemi, D.N., Vendlinski, T., and Choi, K. (2009). Some Aspects of the Technical Quality of Formative Assessments in Middle School Mathematics (CRESST 750). Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing.

