Education research is a data-intensive enterprise. Each year, IES awards grants to researchers who propose to conduct secondary data analyses and meta-analyses; develop, implement, and evaluate interventions; and develop measurement tools. Almost all of these projects involve data collection.
In recent years, several IES-funded researchers have explored more efficient approaches to collecting data from study participants, as well as ways to reduce the potential for data entry errors. Some of these approaches were featured during a panel session at a recent IES Principal Investigators' meeting in Washington, DC. Organized by IES's National Center for Education Research (NCER) and the National Center for Special Education Research (NCSER), the panel featured researchers who are using hardware (e.g., iPods, iPads, and tablet computers), computer applications, and computer software to improve their data collection processes. One researcher, for example, is using iPod-based diaries to collect information about teacher stress for a social behavioral intervention program. Another is using an iPad app to conduct classroom observations, and a third is using a software program to collect electronic data via iPads and tablet computers. In addition, one researcher is using a virtual classroom to study public speaking and attention in students with higher functioning autism. While three of these four researchers developed tools specifically for their own research projects, one research team from Vanderbilt University is using a software program that any researcher can purchase and use to collect data with both standardized measures and researcher-developed assessment tools.
Deanna Meador and her colleagues at Vanderbilt University (two NCER PIs, Dale Farran/R305A090533 and Mark Lipsey/R305A090079) are using the FileMaker Pro software program, iPads, and tablet computers to collect child- and classroom-level data in two currently funded NCER research projects—an Efficacy evaluation study and a Measurement study. The direct child assessment battery includes several Woodcock-Johnson III subscales (Applied Problems, Picture Vocabulary, Spelling, and Quantitative Concepts) and self-regulation measures (Head-Toes-Knees-Shoulders; Dimensional Change Card Sort task; Copy Designs task; Backward Digit Span; and Peg Tapping). The classroom observation measures include the Child Observation Protocol, Narrative Record, and intervention-specific fidelity measures. In projects like these, researchers typically administer multiple assessments to each study participant and then need several months to enter and clean the data. In these two projects, the transition from pencil-and-paper measures to electronic data collection resulted in decreased costs, improved efficiency, and fewer data entry errors.
Meador told the panel that the cost savings came from reduced copying fees for assessment materials, lower data entry costs, and less office storage space for hardcopy files. She calculated that the investment in hardware and software needed for electronic data collection was approximately $3,000 less than the cost of a single round of paper data collection. Assessor errors also decreased, because the electronic format included pre-programmed responses and built-in checks. In addition, assessors sent assessment data directly to a file-sharing site for processing, cutting the lag between data collection and data analysis from several months to about one week. As a result, researchers can analyze their data much sooner and disseminate study findings to other researchers and practitioners more quickly.