
IES Grant

Title: Developing Enhanced Assessment Tools for Capturing Students' Procedural Skills and Conceptual Understanding in Math
Center: NCSER Year: 2015
Principal Investigator: Bottge, Brian Awardee: University of Kentucky
Program: Cognition and Student Learning in Special Education
Award Period: 4 years (7/1/2015 – 6/30/2019) Award Amount: $1,599,999
Type: Measurement Award Number: R324A150035
Description:

Co-Principal Investigator: Cohen, Allan

Purpose: The goals of this grant were to develop, test, and refine a set of assessment tools for measuring the conceptual understanding and procedural math skills of middle school students with math difficulties (MD). Traditional measurement methods for assessing problem solving do not effectively capture what students with MD know and are able to do in mathematics. Despite their best efforts on word problem-solving items, many of these students remain unsure of what the items are asking. Traditional test formats also do little to improve teachers' understanding of the kinds of errors students make on fractions computation problems. This project developed mathematics tests designed to help students with MD gain a deeper understanding of problem contexts and the questions posed, and to uncover the conceptual misunderstandings these students hold when attempting to compute with fractions.

Project Activities: During the first year, the research team developed an Interactive Computer-Based Test, a Formative Fractions Assessment, and a Diagnostic Fractions Computation Test, complete with test administration materials and scoring protocols for teacher use. Over the next 2 years, researchers conducted multiple studies to determine the extent to which the new measures assessed student understanding relative to paper-and-pencil measures. Additional activities over the course of the project included developing a scoring guide for the oral test and refining and extending psychometric models to aid in interpreting test scores and identifying students' misconceptions of math concepts.

Key Outcomes: The main findings of this project, as reported by the principal investigator, are as follows:

  • Researchers designed oral assessment methods and found that the oral assessments tapped more sophisticated problem-solving skills of students with math disabilities than the paper-and-pencil tests did.
  • In one condition, teachers used technology-assisted prompts to assess student performance and remediate errors (Fractions at Work, Technology-Assisted Prompts [FAW-R]). In the comparison condition (Fractions at Work, Basic Intervention [FAW-B]), teachers gave students the same items for assessing progress but used their own methods of reteaching. Most student performance scores increased from pretest to posttest, which suggests that the FAW methods were effective.
  • Although there was wide variation in performance between students who profited from instruction and those who did not, there were no statistically significant differences between the two experimental conditions.
  • Additional analyses suggest that, pooling students across intervention conditions, scores showed significant gains on all three mathematics outcome measures, with effect sizes ranging from small to large. This supports the use of the basic instructional curriculum that the researchers had used in prior studies.
  • There were no significant differences between the FAW-B and FAW-R groups. Although teachers in the FAW-B group did not have access to the technology-based enhancements in FAW-R, they used their own, informal ways of monitoring each student's progress. The advantage of small class size in the resource rooms afforded teachers opportunities to assess their students' thinking and make appropriate adjustments to their instruction.
  • Formative and summative assessments can help teachers provide more effective instruction for low-performing students in math. Teachers demonstrated the ability to use the misconceptions identified by the developed assessments to remediate their students' computation skills.

Structured Abstract

Setting: The research took place in middle schools in Kentucky.

Sample: A total of 73 students with disabilities in grades 6 to 11 participated in the research. Eight teachers of special education resource classes also participated.

Research Design and Methods: In the first year, the research team collected usability and feasibility data on the measures using classroom observations, online teacher logs, and informal interviews with teachers and students. Subsequent studies used quasi-experimental and randomized experimental designs to compare the performance of students on the computerized, interactive tests versus paper-and-pencil versions. Using these data, the research team created navigation maps to understand how students moved through the assessment items in the software during test taking. They also conducted simulation studies to assess the quality of the new psychometric methods for interpreting test scores.

Measures: The research team produced fully developed versions of the Interactive Computer-Based Test (ICBT), a Formative Fractions Assessment (FFA), and a Diagnostic Fractions Computation Test (DFCT). The ICBT assesses students' problem-solving skills and includes interactive item and information clusters, tracking features, and information for teachers. Both the FFA and DFCT are computer-administered measures of students' fraction understanding with interactive features and detailed feedback provided to teachers on student understanding and student errors.

Control Condition: Due to the nature of the research design, there was no control condition.

Key Measures: In addition to the tests developed as part of this project, the research team administered the Iowa Test of Basic Skills to assess fractions computation skills of students with disabilities.

Data Analytic Strategy: The project relied predominantly on item response theory (IRT) modeling, with Markov chain Monte Carlo methods used for estimation. Differential item functioning (DIF) analyses were conducted to help detect item bias.
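For context, one common IRT formulation is the two-parameter logistic (2PL) model; the abstract does not specify which IRT models the team used, so the following is only an illustrative sketch. Under the 2PL model, the probability that student $j$ with latent ability $\theta_j$ answers item $i$ correctly is

$$P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\!\left[-a_i(\theta_j - b_i)\right]},$$

where $a_i$ is the item's discrimination and $b_i$ its difficulty. In this framing, DIF analyses test whether item parameters differ across student groups (for example, across the FAW-B and FAW-R conditions) after matching on $\theta$, which is one way of detecting item bias.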

Related IES Projects: Evaluating the Efficacy of Enhanced Anchored Instruction for Middle School Students with Learning Disabilities in Math (R324A090179)

Products

ERIC Citations: Citations for this award are available in ERIC.

Publicly available data: For sharing of data and the measures, contact Brian Bottge at bbott2@uky.edu.

Additional online resources and information: http://websedge.com/videos/cec_tv/#/

Select Publications

Journal articles

Suh, Y., Cho, S.-J., & Bottge, B. A. (2018). A multilevel longitudinal nested logit model for measuring changes in correct response and error types. Applied Psychological Measurement, 42, 73–88.

Lin, Q., Xing, K., & Park, Y. S. (2020). Measuring skill growth and evaluating change: Unconditional and conditional approaches to latent growth cognitive diagnostic models. Frontiers in Psychology, 11, Article 2205.

Bottge, B. A., Ma, X., Gassaway, L. J., Jones, M., & Gravil, M. (2021). Effects of formative assessment strategies on the fractions computation skills of students with disabilities. Remedial and Special Education, 42(5), 279–289.

