
Classroom response systems facilitate student accountability, readiness, and learning.
Jones, S. J., Crandall, J., Vogler, J. S., & Robinson, D. H. (2013). Journal of Educational Computing Research, 49(2), 155-171.
- Examining 88 students, grade PS
Practice Guide
Review Details
Reviewed: July 2020
- Practice Guide (findings for Classroom Response System (CRS))
- Randomized Controlled Trial
- Meets WWC standards without reservations because it is a randomized controlled trial with low attrition.
This review may not reflect the full body of research evidence for this intervention.
Evidence Tier rating based solely on this study. This intervention may achieve a higher tier when combined with the full body of evidence.
Findings
Outcome measure | Comparison | Period | Sample | Intervention mean | Comparison mean | Significant? | Improvement index | Evidence tier
---|---|---|---|---|---|---|---|---
Unit Exam: Experiment 3 | Classroom Response System (CRS) vs. Business as usual | 0 days | Full sample: Experiments 2 and 3 | 24.41 | 21.39 | No | -- |
Unit Exam: Experiment 2 | Classroom Response System (CRS) vs. Business as usual | 0 days | Full sample: Experiments 2 and 3 | 22.16 | 19.61 | No | -- |
Unit Exam: Experiment 1 | Classroom Response System (CRS) vs. Business as usual | 0 days | Full sample: Experiment 1 (iClicker CRS) | 24.42 | 23.96 | No | -- |
Sample Characteristics
Characteristics of study sample as reported by study author.
- Female: 83%
- Male: 17%
Region: South
Study Details
Setting
The study was conducted at a large, south-central public university; students were undergraduates enrolled in an educational psychology course. Experiment 1 took place during the Fall 2009 semester, while Experiments 2 and 3 took place during the Spring 2010 semester.
Study sample
For Experiment 1, 80 percent of the randomized sample was female. For Experiments 2 and 3, 87 percent of the randomized sample was female. The study does not provide further information on student demographics or characteristics of the analytic samples.
Intervention Group
[Classroom Response Systems] For Experiment 1, the iClicker CRS was used, which consisted of two 75-minute lectures that included a total of eight multiple-choice questions. For Experiment 2, iClicker CRS was used along with a Mobile Ongoing Course Assessment (MOCA) response system. For Experiment 3, the MOCA response system was used to allow students to answer questions outside of class time.
Comparison Group
[Business as usual] For Experiment 1, students in the comparison condition saw the same multiple-choice questions during Unit 4 lectures as students in the intervention condition but did not have access to a CRS to answer the questions and could not earn bonus points for Unit 4. For Experiment 2, students in the comparison condition were told, prior to Unit 2, that they would not be answering in-class questions until Unit 3; when they arrived for the first Unit 2 lecture, however, they were told they could use the CRS to participate in the in-class questions but would not receive points until the following unit. For Experiment 3, students in the comparison condition could view, using the MOCA CRS, the same 10 pre-lecture questions before each of the two Unit 4 lectures as students in the intervention condition, but only students in the intervention condition received points for correct responses.
Support for implementation
The iClicker CRS and MOCA response systems are tools/devices clearly identified as being used in the intervention group. There is no information about support for implementation for course instructors.
An indicator of the effect of the intervention, the improvement index can be interpreted as the expected change in percentile rank for an average comparison group student if that student had received the intervention.
For more, please see the WWC Glossary entry for improvement index.
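Under the WWC convention, the improvement index is derived from the standardized effect size: it is the percentile rank (in the intervention distribution) of an average comparison-group student, minus 50. A minimal sketch of that conversion, assuming a normally distributed outcome (the function name is illustrative, not part of any WWC tool):

```python
from statistics import NormalDist

def improvement_index(effect_size: float) -> float:
    """Expected change in percentile rank for an average comparison-group
    student had that student received the intervention, computed as
    Phi(g) * 100 - 50, where g is the standardized effect size."""
    return NormalDist().cdf(effect_size) * 100 - 50

# An effect size of 0 implies no expected change in percentile rank:
improvement_index(0.0)   # -> 0.0
# A modest positive effect size shifts the expected percentile upward:
improvement_index(0.25)  # roughly +10 percentile points
```

The index ranges from -50 to +50, with positive values favoring the intervention group.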
An outcome is the knowledge, skills, and attitudes that are attained as a result of an activity. An outcome measure is an instrument, device, or method that provides data on the outcome.
A finding that is included in the effectiveness rating. Excluded findings may include subgroups and subscales.
The sample on which the analysis was conducted.
The group to which the intervention group is compared, which may include a different intervention, business as usual, or no services.
The timing of the post-intervention outcome measure.
The number of students included in the analysis.
The mean score of students in the intervention group.
The mean score of students in the comparison group.
The WWC considers a finding to be statistically significant if the likelihood that the finding is due to chance alone, rather than a real difference, is less than five percent.
The WWC reviews studies for WWC products, Department of Education grant competitions, and IES performance measures.
The name and version of the document used to guide the review of the study.
The version of the WWC design standards used to guide the review of the study.
The result of the WWC assessment of the study. The rating is based on the strength of evidence of the effectiveness of the intervention. Studies are given a rating of Meets WWC Design Standards without Reservations, Meets WWC Design Standards with Reservations, or Does Not Meet WWC Design Standards.
A related publication that was reviewed alongside the main study of interest.
Study findings for this report.
Based on the direction, magnitude, and statistical significance of the findings within a domain, the WWC characterizes the findings from a study as one of the following: statistically significant positive effects, substantively important positive effects, indeterminate effects, substantively important negative effects, and statistically significant negative effects. For more, please see the WWC Handbook.
The WWC may review studies for multiple purposes, including different reports and re-reviews using updated standards. Each WWC review of this study is listed in the dropdown. Details on any review may be accessed by making a selection from the dropdown list.
Tier 1 Strong indicates strong evidence of effectiveness, Tier 2 Moderate indicates moderate evidence of effectiveness, and Tier 3 Promising indicates promising evidence of effectiveness, as defined in the non-regulatory guidance for ESSA and the regulations for ED discretionary grants (EDGAR Part 77).