Project Activities
During the first year, the research team developed an Interactive Computer-Based Test, a Formative Fractions Assessment, and a Diagnostic Fractions Computation Test, complete with test administration materials and scoring protocols for teacher use. During the next two years, researchers conducted multiple studies to determine the extent to which the new measures assessed student understanding relative to paper-and-pencil measures. Additional activities over the course of the project included developing a scoring guide for use with the oral test and refining and extending psychometric models to aid in interpreting test scores and identifying student misconceptions of math concepts.
Structured Abstract
Setting
The research took place in middle schools in Kentucky.
Sample
A total of 73 students with disabilities in grades 6 to 11 participated in the research. Eight teachers of resource special education classes also participated.
Research design and methods
In the first year, the research team collected usability and feasibility data on the measures using classroom observations, online teacher logs, and informal interviews with teachers and students. Subsequent studies used quasi-experimental and randomized experimental designs to compare the performance of students taking computerized, interactive tests with that of students taking paper-and-pencil versions. Using these data, the research team created navigation maps to understand how students moved through the assessment items in the software during test taking. They also conducted simulation studies to assess the quality of the new psychometric methods for interpreting test scores.
Control condition
Due to the nature of the research design, there was no control condition.
Key measures
In addition to the tests developed as part of this project, the research team administered the Iowa Test of Basic Skills to assess fractions computation skills of students with disabilities.
Data analytic strategy
The project relied predominantly on item response theory (IRT) modeling, with Markov chain Monte Carlo methods used for estimation. Differential item functioning (DIF) analyses were conducted to help detect item bias.
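To illustrate the general form of such analyses (this is only a minimal sketch of a standard two-parameter logistic IRT model with a uniform DIF term, not the project's specific models, which include more elaborate multilevel and nested-logit extensions), the probability that student i answers item j correctly can be written as:

P(X_{ij} = 1 \mid \theta_i) = \frac{\exp\left[a_j\left(\theta_i - b_j - \gamma_j G_i\right)\right]}{1 + \exp\left[a_j\left(\theta_i - b_j - \gamma_j G_i\right)\right]}

where \theta_i is the ability of student i, a_j and b_j are the discrimination and difficulty of item j, G_i indicates group membership (e.g., focal versus reference group), and a nonzero \gamma_j signals uniform DIF, meaning the item is systematically harder or easier for one group at the same ability level.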
Key outcomes
The main findings of this project, as reported by the principal investigator, are as follows:
- Researchers designed oral assessment methods and found that the oral assessments tapped more sophisticated problem-solving skills of students with math disabilities than the paper-and-pencil tests did.
- In one condition, teachers used technology-assisted prompts to assess student performance and remediate errors (Fractions at Work, Technology-Assisted Prompts [FAW-R]). In the comparison condition (Fractions at Work, Basic Intervention [FAW-B]), teachers gave students the same items for assessing progress but used their own methods of reteaching. Most student performance scores increased from pretest to posttest, which suggests that the FAW methods were effective.
- Although there was a wide variation in the performance among students who profited from instruction and those who did not, there were no statistically significant differences between the two experimental conditions.
- Additional analyses suggest that scores from all students combined (regardless of intervention method) showed significant gains on all three mathematics outcome measures, with effect sizes ranging from small to large. This supports the use of the basic instructional program that researchers had used in prior studies.
- There were no significant differences between the FAW-B and FAW-R groups. Although teachers in the FAW-B group did not have access to the technology-based enhancements in FAW-R, they used their own, informal ways of monitoring each student's progress. The advantage of small class size in the resource rooms afforded teachers opportunities to assess their students' thinking and make appropriate adjustments to their instruction.
- Formative and summative assessments can help teachers provide more effective instruction for low-performing students in math. Teachers demonstrated the ability to use the misconceptions identified by the developed assessments to remediate their students' computation skills.
People and institutions involved
IES program contact(s)
Project contributors
Products and publications
ERIC Citations: Find available citations in ERIC for this award.
Publicly available data: For access to the data and the measures, contact Brian Bottge at bbott2@uky.edu.
Additional online resources and information: http://websedge.com/videos/cec_tv/#/
Journal articles
Suh, Y., Cho, S.-J., & Bottge, B. A. (2018). A multilevel longitudinal nested logit model for measuring changes in correct response and error types. Applied Psychological Measurement, 42, 73-88.
Lin, Q., Xing, K., & Park, Y. S. (2020). Measuring skill growth and evaluating change: Unconditional and conditional approaches to latent growth cognitive diagnostic models. Frontiers in Psychology, 11, Article 2205.
Bottge, B. A., Ma, X., Gassaway, L. J., Jones, M., & Gravil, M. (2021). Effects of formative assessment strategies on the fractions computation skills of students with disabilities. Remedial and Special Education, 42(5), 279-289.
Related projects
Supplemental information
Measures: The research team produced fully developed versions of the Interactive Computer-Based Test (ICBT), the Formative Fractions Assessment (FFA), and the Diagnostic Fractions Computation Test (DFCT). The ICBT assesses students' problem-solving skills and includes interactive item and information clusters, tracking features, and information for teachers. Both the FFA and DFCT are computer-administered measures of students' fraction understanding that include interactive features and provide teachers with detailed feedback on student understanding and errors.
Questions about this project?
For additional questions about this project or to provide feedback, please contact the program officer.