Title: Bridging the Gap: Applying Algebra Cognition Research to Develop and Validate Diagnostic Classroom Algebra Testlet
Principal Investigator: Russell, Michael
Awardee: Boston College
Program: Cognition and Student Learning
Award Period: 3 years
Award Amount: $1,042,561
Co-Principal Investigator: Joan Lucariello
Currently used tests of mathematics achievement provide information about whether students solve a problem correctly, but they provide little or no information about how students solve problems, or about what kinds of misconceptions might have led them to a wrong answer. Consequently, these tests offer teachers little guidance about how to focus their teaching to help students overcome their misunderstandings. The purpose of this project is to develop a set of short computer-based algebra tests, or testlets, that provide information about students' understanding of, or misconceptions about, specific algebraic concepts, which teachers can then use to guide their instruction of those students.
In the first phase of this project, the researchers are developing a series of short testlets designed to identify students' abilities to solve particular types of algebra problems and to determine whether students hold specific algebraic misconceptions that previous research has identified as fairly common. Phase 2 studies will examine whether the testlets actually detect the misconceptions they are designed to identify. Multiple methods will be used in a series of studies involving eighth and ninth grade algebra students, including studies comparing testlet performance with teachers' predictions and with students' classroom work, and pretest-posttest studies examining the effects of targeted feedback or instruction on testlet performance. Among these will be a series of two-group studies (one for each targeted algebraic misconception) in which 20 students identified with the misconception on the basis of testlet performance will be randomly assigned to one of two groups (Group A or B). Students in Group A will work with an individual tutor who will provide instruction over a short period of time; students in Group B will not receive any instruction during this time. After the instructional intervention for students in Group A is complete, all students in Groups A and B will retake the testlet. Students in Group B will then work individually with a tutor, and all students will be tested a third time. Changes in students' testlet performance will then be compared between the two groups.
The final phase of the project will examine whether teachers can use the testlets to identify misconceptions of their eighth and ninth grade students and respond appropriately. Researchers will instruct teachers in how to use the testlets, after which pretest-posttest gains in testlet performance will be compared across four groups of students: 1) those who do not demonstrate a specific misconception, 2) those who have the misconception and receive no targeted instruction, 3) those who have the misconception and receive inappropriately targeted instruction (i.e., targeted at the wrong misconception), and 4) those who have the misconception and receive appropriately targeted instruction. The prediction is that students in the fourth group will show greater pretest-to-posttest improvement in testlet performance than will students in the other three groups.
The overall goal of this project is the development of a diagnostic tool (i.e., a set of computer-based algebra short tests) that can provide teachers with information that they can use to guide instruction for individual students and groups of students who share similar problems with algebraic concepts, thus making classroom assessment a more instructionally useful component of teaching and learning.
Related IES Projects: The Diagnostic Geometry Assessment Project (R305A080231)
Journal article, monograph, or newsletter
Higgins, J., Patterson, M.B., Bozman, M., and Katz, M. (2010). Examining the Feasibility and Effect of Transitioning GED Tests to Computer. Journal of Technology, Learning, and Assessment, 10(2): 1–33.
Lucariello, J., Tine, M.T., and Ganley, C.M. (2014). A Formative Assessment of Students' Algebraic Variable Misconceptions. The Journal of Mathematical Behavior, 33: 30–41.
Masters, J. (2010). Automated Scoring of an Interactive Geometry Item: A Proof-of-Concept. Journal of Technology, Learning, and Assessment, 8(7): 1–39.
Russell, M., Higgins, J., and Hoffmann, T. (2009). Meeting the Needs of all Students: A Universal Design Approach to Computer-Based Testing. Innovate: Journal of Online Education, 5(4): 1–6.
Russell, M., Hoffmann, T., and Higgins, J. (2009). NimbleTools: A Universally Designed Test Delivery System. Teaching Exceptional Children, 42(2): 6–12.
Russell, M., O'Dwyer, L.M., and Miranda, H. (2009). Diagnosing Students' Misconceptions in Algebra: Results From an Experimental Pilot Study. Behavior Research Methods, 41(2): 414–424.
Nongovernment report, issue brief, or practice guide
Innovation Lab (formerly Nimble Assessment Systems, Inc.), and the National Center on Educational Outcomes (2008). FCAT Computer Accommodations Pilot Study Final Report. Dover, NH: Innovation Lab Publications.
Measured Progress Innovation Lab and Educational Testing Service (2012). Smarter Balanced Assessment Consortium: Technology-Enhanced Items Written Guidelines. Dover, NH: Innovation Lab Publications.
Russell, M. (2011). Digital Test Delivery: Empowering Accessible Test Design to Increase Test Validity for all Students. Washington, DC: Bill and Melinda Gates Foundation.
Russell, M., and Mattson, D. (2011). APIP Project Response: Assessment Technology Standards Request for Information. Dover, NH: Measured Progress.
Russell, M., Mattson, D., Higgins, J., Hoffmann, T., Bebell, D., and Alcaya, C. (2011). A Primer to the Accessible Portable Item Profile (APIP) Standards. Roseville, MN: Minnesota Department of Education.