Search Results: (16-30 of 31 records)
Pub Number | Title | Date |
---|---|---|
NCSER 2015002 | The Role of Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research
The field of education is increasingly committed to adopting evidence-based practices. Although randomized experimental designs provide strong evidence of the causal effects of interventions, they are not always feasible. For example, depending upon the research question, it may be difficult for researchers to find the number of children necessary for such research designs (e.g., to answer questions about impacts for children with low-incidence disabilities). A type of experimental design that is well suited for such low-incidence populations is the single-case design (SCD). These designs involve observations of a single case (e.g., a child or a classroom) over time in the absence and presence of an experimenter-controlled treatment manipulation to determine whether the outcome is systematically related to the treatment. Research using SCD is often omitted from reviews of whether evidence-based practices work because there has not been a common metric to gauge effects as there is in group design research. To address this issue, the National Center for Education Research (NCER) and National Center for Special Education Research (NCSER) commissioned a paper by leading experts in methodology and SCD. Authors William Shadish, Larry Hedges, Robert Horner, and Samuel Odom contend that the best way to ensure that SCD research is accessible and informs policy decisions is to use good standardized effect size measures—indices that put results on a scale with the same meaning across studies—for statistical analyses. Included in this paper are the authors' recommendations for how SCD researchers can calculate and report standardized between-case effect sizes, how various audiences (including policymakers) can use these effect sizes to interpret findings, and how the effect sizes can be used across studies to summarize the evidence base for education practices. |
1/7/2016 |
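The core idea of a standardized effect size—putting a result on a scale with the same meaning across studies—can be sketched with a naive phase contrast for a single AB case. This is only an illustration under simplifying assumptions: the between-case estimators Shadish and colleagues recommend additionally model autocorrelation and between-case variance, and the function name and toy data below are invented for the example.

```python
from statistics import mean, stdev

def standardized_effect_size(baseline, treatment):
    """Naive standardized mean difference for one AB phase pair.

    Dividing the raw phase difference by a pooled standard deviation
    puts the result on a unitless scale, so outcomes measured on
    different instruments become comparable across studies. (The
    between-case estimators the paper recommends are more elaborate;
    this is only the common-scale idea.)
    """
    n_a, n_b = len(baseline), len(treatment)
    pooled_var = (
        (n_a - 1) * stdev(baseline) ** 2 + (n_b - 1) * stdev(treatment) ** 2
    ) / (n_a + n_b - 2)
    return (mean(treatment) - mean(baseline)) / pooled_var ** 0.5

# One case observed repeatedly without (A) and with (B) the intervention:
es = standardized_effect_size([3, 4, 3, 5], [7, 8, 6, 9])  # ≈ 3.3
```

Because the result is unitless, a 3.3 here means the same thing whether the outcome was a reading score or a count of disruptive behaviors—which is exactly what makes cross-study summaries possible.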
REL 2014052 | Forming a Team to Ensure High-Quality Measurement in Education Studies
This brief provides tips for forming a team of staff and consultants with the needed expertise to make key measurement decisions that will ensure high-quality data for answering the study’s research questions. The brief outlines the main responsibilities of measurement team members. It also describes typical measurement tasks and discusses how the measurement team members can work together to complete the measurement tasks successfully. |
9/16/2014 |
REL 2014064 | Reporting What Readers Need to Know about Education Research Measures: A Guide
This brief provides five checklists to help researchers provide complete information describing (1) their study's measures; (2) data collection training and quality; (3) the study's reference population, study sample, and measurement timing; (4) evidence of the reliability and construct validity of the measures; and (5) missing data and descriptive statistics. The brief includes an example of parts of a report's methods and results section illustrating how the checklists can be used to check the completeness of reporting. |
9/9/2014 |
REL 2014014 | Developing a Coherent Research Agenda: Lessons from the REL Northeast & Islands Research Agenda Workshops
This report describes the approach that REL Northeast and Islands (REL-NEI) used to guide its eight research alliances toward collaboratively identifying a shared research agenda. A key feature of their approach was a two-workshop series, during which alliance members created a set of research questions on a shared topic of education policy and/or practice. This report explains how REL-NEI conceptualized and organized the workshops, planned the logistics, overcame geographic distance among alliance members, developed and used materials (including modifications for different audiences and for a virtual platform), and created a formal research agenda after the workshops. The report includes links to access the materials used for the workshops, including facilitator and participant guides and slide decks. |
7/10/2014 |
NCES 2013046 | U.S. TIMSS and PIRLS 2011 Technical Report and User's Guide
The U.S. TIMSS and PIRLS 2011 Technical Report and User's Guide provides an overview of the design and implementation in the United States of the Trends in International Mathematics and Science Study (TIMSS) 2011 and the Progress in International Reading Literacy Study (PIRLS) 2011, along with information designed to facilitate access to the U.S. TIMSS and PIRLS 2011 data. |
11/26/2013 |
NCES 2013190 | The Adult Education Training and Education Survey (ATES) Pilot Study
This report describes the process and findings of a national pilot test of survey items that were developed to assess the prevalence and key characteristics of occupational certifications and licenses and subbaccalaureate educational certificates. The pilot test was conducted as a computer-assisted telephone interview (CATI) survey, administered from September 2010 to January 2011. |
4/9/2013 |
NCEE 20124025 | Replicating Experimental Impact Estimates Using a Regression Discontinuity Approach
This NCEE Technical Methods Paper compares the estimated impacts of an educational intervention using experimental and regression discontinuity (RD) study designs. The analysis used data from two large-scale randomized controlled trials—the Education Technology Evaluation and the Teach for America Study—to provide evidence on the performance of RD estimators in two specific contexts. More generally, the report presents and implements a method for examining the performance of RD estimators that could be used in other contexts. The study found that the differences between the RD and experimental impact estimates were meaningful in size, though not statistically significant. The study also found that manipulation of the assignment variable in RD designs can substantially influence RD impact estimates, particularly if manipulation is related to the outcome and occurs close to the assignment variable's cutoff value. |
4/25/2012 |
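The sharp-RD logic the paper examines—compare outcomes for units just above and just below an assignment-variable cutoff—can be sketched as two local linear fits whose predictions are differenced at the cutoff. This is a bare-bones illustration, not the paper's estimator: a real RD analysis also needs bandwidth selection, kernel weighting, and inference, and the function names and toy data here are invented.

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - b * mx, b  # intercept, slope

def rd_estimate(assignment, outcome, cutoff):
    """Sharp RD sketch: fit a line on each side of the cutoff and take
    the difference of the two fitted values at the cutoff itself."""
    left = [(x, y) for x, y in zip(assignment, outcome) if x < cutoff]
    right = [(x, y) for x, y in zip(assignment, outcome) if x >= cutoff]
    a_l, b_l = linear_fit([x for x, _ in left], [y for _, y in left])
    a_r, b_r = linear_fit([x for x, _ in right], [y for _, y in right])
    return (a_r + b_r * cutoff) - (a_l + b_l * cutoff)

# Toy data: outcome rises with the assignment score, plus a jump of 5 at x = 50
xs = list(range(40, 61))
ys = [0.5 * x + (5 if x >= 50 else 0) for x in xs]
effect = rd_estimate(xs, ys, 50)  # → 5.0 (noiseless data, so exact)
```

The paper's manipulation finding also falls out of this picture: if units just below the cutoff shift themselves above it (and the shift is related to the outcome), the two fitted values at the cutoff no longer estimate the same counterfactual, biasing the difference.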
NCES 2011463 | The NAEP Primer
The purpose of the NAEP Primer is to guide educational researchers through the intricacies of the NAEP database and make its technologies more user-friendly. The NAEP Primer makes use of its publicly accessible NAEP mini-sample that is included on the CD. The mini-sample contains real data from the 2005 mathematics assessment that have been approved for public use. Only public schools are included in this subsample, which contains selected variables for about 10 percent of the schools and students in this assessment. All students who participated in NAEP in the selected public schools are included. This subsample is not sufficient to make state comparisons. In addition, to ensure confidentiality, no state, school, or student identifiers are included. Note that the NAEP Primer consists of two publications: NCES 2011463 (the Primer document) and NCES 2011464 (the mini-sample). |
8/4/2011 |
NCES 2011464 | NAEP Primer Mini-Sample
The purpose of the NAEP Primer is to guide educational researchers through the intricacies of the NAEP database and make its technologies more user-friendly. The NAEP Primer makes use of its publicly accessible NAEP mini-sample that is included on the CD. The mini-sample contains real data from the 2005 mathematics assessment that have been approved for public use. Only public schools are included in this subsample, which contains selected variables for about 10 percent of the schools and students in this assessment. All students who participated in NAEP in the selected public schools are included. This subsample is not sufficient to make state comparisons. In addition, to ensure confidentiality, no state, school, or student identifiers are included. Note that the NAEP Primer consists of two publications: NCES 2011463 (the Primer document) and NCES 2011464 (the mini-sample). |
8/4/2011 |
NCES 2011049 | Third International Mathematics and Science Study 1999 Video Study Technical Report, Volume 2: Science
This second volume of the Third International Mathematics and Science Study (TIMSS) 1999 Video Study Technical Report focuses on every aspect of the planning, implementation, processing, analysis, and reporting of the science components of the TIMSS 1999 Video Study. Chapter 2 provides a full description of the sampling approach implemented in each country. Chapter 3 details how the data were collected, processed, and managed. Chapter 4 describes the questionnaires collected from the teachers in the videotaped lessons, including how they were developed and coded. Chapter 5 provides details about the codes applied to the video data by a team of international coders as well as several specialist groups. Chapter 6 describes procedures for coding the content and the classroom discourse of the video data by specialists. Lastly, in chapter 7, information is provided regarding the weights and variance estimates used in the data analyses. There are also numerous appendices to this report, including the questionnaires and manuals used for data collection, transcription, and coding. |
7/27/2011 |
NCES 2011607 | National Institute of Statistical Sciences Configuration and Data Integration Technical Panel: Final Report
NCES asked the National Institute of Statistical Sciences (NISS) to convene a technical panel of survey and policy experts to examine potential strategies for configuration and data integration among successive national longitudinal education surveys. In particular, the technical panel was asked to address two related issues: (1) how NCES could configure the timing of its longitudinal studies (e.g., Early Childhood Longitudinal Study [ECLS], Education Longitudinal Study [ELS], and High School Longitudinal Study [HSLS]) in a maximally efficient and informative manner, with the main, but not sole, focus at the primary and secondary levels; and (2) what NCES could do to support data integration for statistical and policy analyses that cross breakpoints between longitudinal studies. The NISS technical panel delivered its report to NCES in 2009. The principal recommendations and findings in the report are: 1. The technical panel recommended that NCES configure K-12 studies as a series of three studies: (i) a K-5 study, followed immediately by (ii) a 6-8 study, followed immediately by (iii) a 9-12 study. One round of such studies, ignoring postsecondary follow-up to the 9-12 study, requires 13 years to complete. 2. The technical panel also recommended that, budget permitting, NCES initiate a new round of K-12 studies every 10 years. This can be done in a way that minimizes the number of years in which multiple major assessments occur. The panel found that there is no universal strategy by which NCES can institutionalize data integration across studies. One strategy was examined in detail: continuation of students from one study to the next. Based on experiments conducted by NISS, the technical panel found that: 3. The case for continuation on the basis that it supports cross-study statistical inference is weak. Use of high-quality retrospective data that are either currently available or are likely to be available in the future can accomplish nearly as much at lower cost. 4. Continuation is problematic in at least two other senses: first, principled methods for constructing weights may not exist; and second, no matter how much NCES might advise to the contrary, researchers are likely to attempt what is likely to be invalid or uninformative inference on the basis of continuation cases alone. 5. The technical panel urged that, as an alternative means of addressing specific issues that cross studies, NCES consider the expense and benefit of small studies that target specific components of students' trajectories. |
3/28/2011 |
NCSER 20103006 | Statistical Power Analysis in Education Research
This paper provides a guide to calculating statistical power for the complex multilevel designs that are used in most field studies in education research. For multilevel evaluation studies in the field of education, it is important to account for the impact of clustering on the standard errors of estimates of treatment effects. Using ideas from survey research, the paper explains how sample design induces random variation in the quantities observed in a randomized experiment, and how this random variation relates to statistical power. The manner in which statistical power depends upon the values of intraclass correlations, sample sizes at the various levels, the standardized average treatment effect (effect size), the multiple correlation between covariates and the outcome at different levels, and the heterogeneity of treatment effects across sampling units is illustrated. Both hierarchical and randomized block designs are considered. The paper demonstrates that statistical power in complex designs involving clustered sampling can be computed simply from standard power tables using the idea of operational effect sizes: effect sizes multiplied by a design effect that depends on features of the complex experimental design. These concepts are applied to provide methods for computing power for each of the research designs most frequently used in education research. |
4/27/2010 |
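The abstract's "operational effect size" idea—adjust the effect size for a design effect that reflects clustering, then use standard power machinery—can be sketched with a normal approximation for a two-arm cluster-randomized design. This is a rough sketch under assumptions (normal approximation rather than noncentral t, equal cluster sizes, no covariates), and one of several equivalent parametrizations; it is not the paper's exact formulas.

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def cluster_power(delta, rho, clusters_per_arm, n_per_cluster, z_alpha=1.96):
    """Approximate power of a two-arm cluster-randomized trial.

    Clustering inflates the variance of each arm's mean by the design
    effect 1 + (n - 1) * rho, where rho is the intraclass correlation.
    Dividing the effect size by the square root of the design effect
    gives an 'operational' effect size that can be fed into ordinary
    two-sample power calculations (here, a normal approximation for a
    two-sided test at alpha = .05).
    """
    design_effect = 1 + (n_per_cluster - 1) * rho
    operational_delta = delta / sqrt(design_effect)
    lam = operational_delta * sqrt(clusters_per_arm * n_per_cluster / 2)
    return normal_cdf(lam - z_alpha)

# Example: effect size 0.25, ICC 0.20, 20 clusters of 25 students per arm
p_clustered = cluster_power(0.25, 0.20, 20, 25)  # clustering cuts power sharply
p_no_icc = cluster_power(0.25, 0.0, 20, 25)      # what ignoring clustering suggests
```

With these (hypothetical) inputs, accounting for clustering drops power from near certainty to well under one half—the phenomenon that makes design effects essential in multilevel power calculations.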
NCES 2010009 | Early Childhood Longitudinal Study, Birth Cohort (ECLS-B) Preschool--Kindergarten 2007 Psychometric Report
This methodology report documents the design, development, and psychometric characteristics of the assessment instruments used in the preschool and kindergarten waves of the ECLS-B. The assessment instruments measure children's cognitive development in early reading and mathematics, socioemotional functioning, fine and gross motor skills, and physical development (height, weight, middle upper arm circumference, and head circumference). The report also includes information about indirect assessments of the children through questions asked of parents, early care and education providers, and teachers. |
4/16/2010 |
NCES 2009012 | TIMSS 2007 U.S. Technical Report and User Guide
The U.S. TIMSS 2007 Technical Report and User Guide provides an overview of the design and implementation of the Trends in International Mathematics and Science Study (TIMSS) 2007 in the United States, along with information designed to facilitate access to the U.S. TIMSS 2007 data. |
9/23/2009 |
NCES 195426 | High School and Beyond Fourth Follow-Up Methodology Report. Technical Report.
This report describes and evaluates the methods, procedures, techniques, and activities that produced the fourth (1992) follow-up of the High School and Beyond (HS&B) study. HS&B began in 1980 as the successor to the National Longitudinal Study of the High School Class of 1972. The original collection techniques of HS&B were replaced by computer-assisted telephone interviews, and other electronic techniques replaced the original methods. HS&B data are more user-friendly and less resource-dependent as a result of these changes. There were two components to the fourth follow-up: (1) the respondent survey, a computer-assisted telephone interview (CATI) of 14,825 members of the 1980 sophomore cohort, and (2) a transcript study based on the 9,064 sophomore cohort members who reported postsecondary attendance. The response rate for the respondent survey was 85.3%. The response rate for the transcript study varied from 50.4% at private, for-profit institutions to 95.1% at public, four-year institutions. Technical innovations in this survey round included verification and correction of previously collected data through the CATI instrument, online coding applications, and statistical quality control. Survey data and information about the methodology are presented in 49 tables. An appendix contains the transcript request packages. |
4/12/1995 |
Page 2 of 3