
Publications & Products

Search Results: (1-15 of 30 records)

 Pub Number  Title  Date
NCES 2021011 Technical Report and User Guide for the 2018 Program for International Student Assessment (PISA): Data Files and Database with U.S.-Specific Variables
This technical report and user guide is designed to provide researchers with an overview of the design and implementation of PISA 2018, as well as with information on how to access the PISA 2018 data. This information is meant to supplement OECD publications by describing those aspects of PISA 2018 that are unique to the United States.
7/8/2021
NCES 2021029 2012–2016 Program for International Student Assessment Young Adult Follow-up Study (PISA YAFS): How reading and mathematics performance at age 15 relate to literacy and numeracy skills and education, workforce, and life outcomes at age 19
This Research and Development report provides data on the literacy and numeracy performance of U.S. young adults at age 19, as well as examines the relationship between that performance and their earlier reading and mathematics proficiency in PISA 2012 at age 15. It also explores how other aspects of their lives at age 19—such as their engagement in postsecondary education, participation in the workforce, attitudes, and vocational interests—are related to their proficiency at age 15.
6/15/2021
REL 2021075 Evaluating the Implementation of Networked Improvement Communities in Education: An Applied Research Methods Report
The purpose of this study was to develop a framework that can be used to evaluate the implementation of networked improvement communities (NICs) in public prekindergarten (PK)–12 education and to apply this framework to the formative evaluation of the Minnesota Alternative Learning Center Networked Improvement Community (Minnesota ALC NIC), a partnership between Regional Educational Laboratory Midwest, the Minnesota Department of Education, and five alternative learning centers (ALCs) in Minnesota. The partnership formed with the goal of improving high school graduation rates among students in ALCs. The evaluation team developed and used research tools aligned with the evaluation framework to gather data from 37 school staff in the five ALCs participating in the Minnesota ALC NIC. Data sources included attendance logs, postmeeting surveys (administered following three NIC sessions), a post–Plan-Do-Study-Act survey, continuous improvement artifacts, and event summaries. The evaluation team used descriptive analyses for quantitative and qualitative data, including frequency tables to summarize survey data and coding of artifacts to indicate completion of continuous improvement milestones. Engagement in the Minnesota ALC NIC was strong, as measured by attendance data and post–Plan-Do-Study-Act surveys, but the level of engagement varied by continuous improvement milestones. Based on postmeeting surveys, NIC members typically viewed the NIC as relevant and useful, particularly because of the opportunities to work within teams and develop relationships with staff from other schools. The percentage of meeting attendees agreeing that the NIC increased their knowledge and skills increased over time. Using artifacts from the NIC, the evaluation team determined that most of the teams completed most continuous improvement milestones. Whereas the post–Plan-Do-Study-Act survey completed by NIC members indicated that sharing among different NIC teams was relatively infrequent, contemporaneous meeting notes recorded specific instances of networking among teams. This report illustrates how the evaluation framework and its aligned set of research tools were applied to evaluate the Minnesota ALC NIC. With slight adaptations, these tools can be used to evaluate the implementation of a range of NICs in public PK–12 education settings. The study has several limitations, including low response rates to postmeeting surveys, reliance on retrospective measures of participation in continuous improvement activities, and the availability of extant data for only a single Plan-Do-Study-Act cycle. The report includes suggestions for overcoming these limitations when applying the NIC evaluation framework to other NICs in public PK–12 education settings.
3/8/2021
REL 2021057 Tool for Assessing the Health of Research-Practice Partnerships
Education research-practice partnerships (RPPs) offer structures and processes for bridging research and practice and ultimately driving improvements in K-12 outcomes. To date, there is limited literature on how to assess the effectiveness of RPPs. Aligned with the most commonly cited framework for assessing RPPs, Assessing Research-Practice Partnerships: Five Dimensions of Effectiveness, this two-part tool offers guidance on how researchers and practitioners may prioritize the five dimensions of RPP effectiveness and their related indicators. The tool also provides an interview protocol for RPP evaluators to use as an instrument for assessing the extent to which the RPP demonstrates evidence of the prioritized dimensions and their indicators of effectiveness.
2/2/2021
IES 2020001REV Cost Analysis: A Starter Kit
This starter kit is designed for grant applicants who are new to cost analysis. The kit will help applicants plan a cost analysis, setting the foundation for more complex economic analyses.
6/1/2020
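For readers new to the topic, the sketch below illustrates the basic arithmetic behind an ingredients-method cost analysis: tally resource quantities and prices, divide the total by the number of students served, and relate the result to an impact estimate. All quantities, prices, and the impact estimate are hypothetical illustrations, not values from the starter kit.

```python
# A minimal sketch of an ingredients-method cost analysis with invented numbers.
ingredients = [
    # (description, quantity, unit price in dollars)
    ("teacher time (hours)", 120, 45.0),
    ("coach time (hours)", 40, 60.0),
    ("curriculum materials (per student)", 200, 12.5),
]

students_served = 200
effect_size = 0.15  # hypothetical impact estimate, in standard deviations

total_cost = sum(quantity * price for _, quantity, price in ingredients)
cost_per_student = total_cost / students_served
# Cost-effectiveness ratio: dollars per student per standard deviation of impact.
ce_ratio = cost_per_student / effect_size

print(f"Total cost: ${total_cost:,.2f}")
print(f"Cost per student: ${cost_per_student:,.2f}")
print(f"Cost-effectiveness ratio: ${ce_ratio:,.2f} per SD of impact")
```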
NCSER 2020001 An Introduction to Adaptive Interventions and SMART Designs in Education
Educators must often adapt interventions over time because what works for one student may not work for another and what works now for one student may not work in the future for the same student. Adaptive interventions provide education practitioners with a prespecified, systematic, and replicable way of doing this through a sequence of decision rules for whether, how, and when to modify interventions. The sequential, multiple assignment, randomized trial (SMART) is one type of multistage, experimental design that can help education researchers build high-quality adaptive interventions.

Despite the critical role adaptive interventions can play in various domains of education, research about adaptive interventions and the use of SMART designs to develop effective adaptive interventions in education is in its infancy. To help the field move forward in this area, the National Center for Special Education Research (NCSER) and the National Center for Education Evaluation and Regional Assistance (NCEE) commissioned a paper by leading experts in adaptive interventions and SMART designs. This paper aims to provide information on building and evaluating high-quality adaptive interventions and review the components of SMART designs, discuss the key features of the SMART, and introduce common research questions for which SMARTs may be appropriate.
11/25/2019
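To make the idea of a prespecified, replicable decision rule concrete, the minimal sketch below encodes a hypothetical two-stage adaptive intervention of the kind a SMART might evaluate. The intervention names, the response criterion, and the tailoring variable are illustrative assumptions, not examples drawn from the commissioned paper.

```python
# A minimal sketch of a two-stage adaptive intervention decision rule.
def stage_two_decision(first_stage_intervention: str, weekly_quiz_score: float) -> str:
    """Return the second-stage intervention for one student.

    Decision rule: students who respond to the first-stage intervention
    (weekly quiz score of at least 70, a hypothetical criterion) continue it;
    non-responders are stepped up, with the step-up depending on what they
    received first.
    """
    if weekly_quiz_score >= 70:
        return f"continue {first_stage_intervention}"
    if first_stage_intervention == "small-group tutoring":
        return "add one-on-one tutoring"
    return "switch to small-group tutoring"

# Example usage: three students with different first-stage assignments and scores.
for intervention, score in [("small-group tutoring", 82),
                            ("small-group tutoring", 55),
                            ("computer-based practice", 61)]:
    print(intervention, score, "->", stage_two_decision(intervention, score))
```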
NCES 2019113 U.S. PIRLS and ePIRLS 2016 Technical Report and User's Guide
The U.S. PIRLS and ePIRLS 2016 Technical Report and User's Guide provides an overview of the design and implementation in the United States of the Progress in International Reading Literacy Study (PIRLS) and ePIRLS 2016, along with information designed to facilitate access to the U.S. PIRLS and ePIRLS 2016 data.
8/27/2019
NCES 2018020 U.S. TIMSS 2015 and TIMSS Advanced 1995 & 2015 Technical Report and User's Guide
The U.S. TIMSS 2015 and TIMSS Advanced 1995 & 2015 Technical Report and User's Guide provides an overview of the design and implementation in the United States of the Trends in International Mathematics and Science Study (TIMSS) 2015 and TIMSS Advanced 1995 & 2015, along with information designed to facilitate access to the U.S. TIMSS 2015 and TIMSS Advanced 1995 & 2015 data.
11/1/2018
NCES 2018121 Administering a Single-Phase, All-Adults Mail Survey: A Methodological Evaluation of the 2013 NATES Pilot Study
This report describes the methodological outcomes from an address-based sampling (ABS) mail survey, the 2013 pilot test of the National Adult Training and Education Survey. The study tested the feasibility of (1) using single-stage sampling, rather than two-stage sampling (with a screener to identify adults within households), and (2) mailing out three individual survey instruments per household versus a composite booklet with three combined instruments.
3/30/2018
NCES 2017095 Technical Report and User Guide for the 2015 Program for International Student Assessment (PISA)
This technical report and user guide is designed to provide researchers with an overview of the design and implementation of PISA 2015 in the United States, as well as information on how to access the PISA 2015 data. The report includes information about sampling requirements and sampling in the United States; participation rates at the school and student level; how schools and students were recruited; instrument development; field operations used for collecting data; and details concerning various aspects of data management, including data processing, scaling, and weighting. In addition, the report describes the data available from both international and U.S. sources, special issues in analyzing the PISA 2015 data, and how to merge the data files.
12/19/2017
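As a small illustration of the file-merging and weighting topics the guide covers, the sketch below joins a student-level table to a school-level table and computes a weighted mean with pandas. The data are invented; the identifier and weight column names (CNTSCHID, W_FSTUWT) and the SCORE and SC_ENROLLMENT columns are assumptions used only for illustration and should be checked against the guide's codebooks.

```python
import pandas as pd

# Tiny in-memory stand-ins for the student and school files.
students = pd.DataFrame({
    "CNTSTUID": [1, 2, 3, 4],          # assumed student identifier
    "CNTSCHID": [10, 10, 20, 20],      # assumed school identifier
    "W_FSTUWT": [1.2, 0.8, 1.0, 1.5],  # assumed final student weight
    "SCORE": [480, 510, 530, 495],     # hypothetical outcome column
})
schools = pd.DataFrame({
    "CNTSCHID": [10, 20],
    "SC_ENROLLMENT": [450, 900],       # hypothetical school-level variable
})

# Merge school-level variables onto each student record by school identifier.
merged = students.merge(schools, on="CNTSCHID", how="left")

# Weighted mean of the hypothetical score using the final student weight.
weights = merged["W_FSTUWT"]
print((merged["SCORE"] * weights).sum() / weights.sum())
```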
NCEE 20184002 Asymdystopia: The threat of small biases in evaluations of education interventions that need to be powered to detect small impacts
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may create a new challenge for researchers: the need to guard against smaller inaccuracies (or "biases"). The purpose of this report is twofold. First, the report examines the potential for small biases to increase the risk of making false inferences as studies are powered to detect smaller impacts, a phenomenon the report calls asymdystopia. The report examines this potential for both randomized controlled trials (RCTs) and studies using regression discontinuity designs (RDDs). Second, the report recommends strategies researchers can use to avoid or mitigate these biases. For RCTs, the report recommends that evaluators either substantially limit attrition rates or offer a strong justification for why attrition is unlikely to be related to study outcomes. For RDDs, new statistical methods can protect against bias from incorrect regression models, but these methods often require larger sample sizes in order to detect small effects.
10/3/2017
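The report's central concern can be illustrated with a rough power calculation: as the minimum detectable effect size (MDES) shrinks, the required sample grows in proportion to 1/MDES², while a fixed small bias becomes an ever larger share of the target impact. The 2.8 multiplier (roughly 80 percent power with a 5 percent two-sided test) and the 0.02 standard deviation bias below are illustrative assumptions, not figures from the report.

```python
import math

def required_n(mdes: float, p_treated: float = 0.5, multiplier: float = 2.8) -> int:
    """Approximate total sample size for an individually randomized trial
    powered to detect an impact of `mdes` standard deviations."""
    return math.ceil(multiplier ** 2 / (p_treated * (1 - p_treated) * mdes ** 2))

bias = 0.02  # hypothetical small bias, e.g., from differential attrition
for mdes in (0.20, 0.10, 0.05):
    n = required_n(mdes)
    print(f"MDES={mdes:.2f} SD: n≈{n:>6,}  bias is {bias / mdes:.0%} of the target impact")
```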
REL 2016119 Stated Briefly: How methodology decisions affect the variability of schools identified as beating the odds
This "Stated Briefly" report is a companion piece that summarizes the results of another report of the same name. Schools that show better academic performance than would be expected given characteristics of the school and student populations are often described as "beating the odds" (BTO). State and local education agencies often attempt to identify such schools as a means of identifying strategies or practices that might be contributing to the schools' relative success. Key decisions on how to identify BTO schools may affect whether schools make the BTO list and thereby the identification of practices used to beat the odds. The purpose of this study was to examine how a list of BTO schools might change depending on the methodological choices and selection of indicators used in the BTO identification process. This study considered whether choices of methodologies and type of indicators affect the schools that are identified as BTO. The three indicators were (1) type of performance measure used to compare schools, (2) the types of school characteristics used as controls in selecting BTO schools, and (3) the school sample configuration used to pool schools across grade levels. The study applied statistical models involving the different methodologies and indicators and documented how the lists schools identified as BTO changed based on the models. Public school and student data from one midwest state from 2007-08 through 2010-11 academic years were used to generate BTO school lists. By performing pairwise comparisons among BTO school lists and computing agreement rates among models, the project team was able to gauge the variation in BTO identification results. Results indicate that even when similar specifications were applied across statistical methods, different sets of BTO schools were identified. In addition, for each statistical method used, the lists of BTO schools identified varied with the choice of indicators. Fewer than half of the schools were identified as BTO in more than one year. The results demonstrate that different technical decisions can lead to different identification results.
4/6/2016
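One common way to operationalize "beating the odds" is to regress school performance on school characteristics and flag schools with the largest positive residuals; the study's sensitivity question can then be mimicked by comparing the agreement between lists produced under different model specifications. The sketch below uses simulated data and a single regression approach purely for illustration; the actual study compared several statistical methods and indicator sets.

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools = 500

# Simulated school characteristics and performance (invented relationships).
pct_frl = rng.uniform(0, 100, n_schools)          # percent free/reduced-price lunch
enrollment = rng.uniform(100, 2000, n_schools)
performance = 80 - 0.3 * pct_frl + rng.normal(0, 5, n_schools)

def bto_list(X: np.ndarray, y: np.ndarray, top_share: float = 0.10) -> set[int]:
    """Fit an OLS model and return the schools in the top residual decile."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    cutoff = np.quantile(residuals, 1 - top_share)
    return set(np.flatnonzero(residuals >= cutoff))

# Two model specifications: one control variable versus two.
list_a = bto_list(pct_frl.reshape(-1, 1), performance)
list_b = bto_list(np.column_stack([pct_frl, enrollment]), performance)

agreement = len(list_a & list_b) / len(list_a | list_b)
print(f"Schools flagged by both models: {len(list_a & list_b)}")
print(f"Agreement (Jaccard) between the two BTO lists: {agreement:.2f}")
```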
NCSER 2015002 The Role of Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research
The field of education is increasingly committed to adopting evidence-based practices. Although randomized experimental designs provide strong evidence of the causal effects of interventions, they are not always feasible. For example, depending upon the research question, it may be difficult for researchers to find the number of children necessary for such research designs (e.g., to answer questions about impacts for children with low-incidence disabilities). A type of experimental design that is well suited for such low-incidence populations is the single-case design (SCD). These designs involve observations of a single case (e.g., a child or a classroom) over time in the absence and presence of an experimenter-controlled treatment manipulation to determine whether the outcome is systematically related to the treatment.

Research using SCD is often omitted from reviews of whether evidence-based practices work because there has not been a common metric to gauge effects as there is in group design research. To address this issue, the National Center for Education Research (NCER) and National Center for Special Education Research (NCSER) commissioned a paper by leading experts in methodology and SCD. Authors William Shadish, Larry Hedges, Robert Horner, and Samuel Odom contend that the best way to ensure that SCD research is accessible and informs policy decisions is to use good standardized effect size measures—indices that put results on a scale with the same meaning across studies—for statistical analyses. Included in this paper are the authors' recommendations for how SCD researchers can calculate and report on standardized between-case effect sizes, how these effect sizes can be used by various audiences (including policymakers) to interpret findings, and how they can be used across studies to summarize the evidence base for education practices.
1/7/2016
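The intuition behind a between-case standardized effect size is sketched below: average the baseline-to-treatment change across cases and scale it by a standard deviation that reflects both within-case and between-case variability, putting the result on a scale comparable to group-design effect sizes. This is a deliberately simplified illustration with invented data, not the small-sample-corrected estimator the authors recommend.

```python
import statistics

# Outcome observations per case: (baseline phase, treatment phase); invented data.
cases = {
    "case 1": ([3, 4, 2, 3], [7, 8, 6, 7]),
    "case 2": ([5, 4, 5, 6], [8, 9, 9, 8]),
    "case 3": ([2, 3, 3, 2], [5, 6, 5, 7]),
}

# Baseline-to-treatment change, averaged across cases.
mean_change = statistics.mean(
    statistics.mean(trt) - statistics.mean(base) for base, trt in cases.values()
)

# Within-case variability: average of the within-phase variances.
within_var = statistics.mean(
    (statistics.variance(base) + statistics.variance(trt)) / 2
    for base, trt in cases.values()
)

# Between-case variability: variance of the case-level baseline means.
between_var = statistics.variance(
    [statistics.mean(base) for base, _ in cases.values()]
)

# Scale the average change by a standard deviation combining both sources of
# variability, mimicking the logic of a between-case standardized mean difference.
effect_size = mean_change / (within_var + between_var) ** 0.5
print(f"Simplified between-case standardized effect size: {effect_size:.2f}")
```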
REL 2015077 Comparing Methodologies for Developing an Early Warning System: Classification and Regression Tree Model Versus Logistic Regression
The purpose of this report was to explicate the use of logistic regression and classification and regression tree (CART) analysis in the development of early warning systems. It was motivated by state education leaders' interest in maintaining high classification accuracy while simultaneously improving practitioner understanding of the rules by which students are identified as at-risk or not at-risk readers. Logistic regression and CART were compared using data on a sample of grades 1 and 2 Florida public school students who participated in both interim assessments and an end-of-year summative assessment during the 2012/13 academic year. Grade-level analyses were conducted, and comparisons between methods were based on traditional measures of diagnostic accuracy, including sensitivity (i.e., proportion of true positives), specificity (proportion of true negatives), positive and negative predictive power, and overall correct classification. Results indicate that CART is comparable to logistic regression, with the results of both methods yielding negative predictive power greater than the recommended standard of .90. Details of each method are provided to assist analysts interested in developing early warning systems using one of the methods.
2/25/2015
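The comparison the report describes can be prototyped in a few lines: fit a logistic regression and a CART-style classification tree to predict at-risk status, then compute the diagnostic accuracy measures the report uses (sensitivity, specificity, positive and negative predictive power, and overall correct classification). The data below are simulated stand-ins for interim and summative assessment results, and the model settings are arbitrary illustrations, not the report's specifications.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 2000
interim = rng.normal(size=(n, 2))                     # simulated interim assessment scores
logit = -1.5 + 2.5 * interim[:, 0] + 1.5 * interim[:, 1]
at_risk = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = not proficient on the summative test

X_train, X_test, y_train, y_test = train_test_split(
    interim, at_risk, test_size=0.3, random_state=0
)

def diagnostics(model) -> dict[str, float]:
    """Fit the model and return the diagnostic accuracy measures on the test set."""
    model.fit(X_train, y_train)
    tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "positive predictive power": tp / (tp + fp),
        "negative predictive power": tn / (tn + fn),
        "overall correct classification": (tp + tn) / (tp + tn + fp + fn),
    }

for name, model in [("logistic regression", LogisticRegression()),
                    ("CART", DecisionTreeClassifier(max_depth=3, random_state=0))]:
    print(name, {k: round(v, 3) for k, v in diagnostics(model).items()})
```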
REL 2015071 How Methodology Decisions Affect the Variability of Schools Identified as Beating the Odds
Schools that show better academic performance than would be expected given characteristics of the school and student populations are often described as "beating the odds" (BTO). State and local education agencies often attempt to identify such schools as a means of identifying strategies or practices that might be contributing to the schools' relative success. Key decisions on how to identify BTO schools may affect whether schools make the BTO list and thereby the identification of practices used to beat the odds. The purpose of this study was to examine how a list of BTO schools might change depending on the methodological choices and selection of indicators used in the BTO identification process. This study considered whether choices of methodologies and types of indicators affect the schools that are identified as BTO. The three indicators were (1) the type of performance measure used to compare schools, (2) the types of school characteristics used as controls in selecting BTO schools, and (3) the school sample configuration used to pool schools across grade levels. The study applied statistical models involving the different methodologies and indicators and documented how the lists of schools identified as BTO changed based on the models. Public school and student data from one Midwestern state from the 2007-08 through 2010-11 academic years were used to generate BTO school lists. By performing pairwise comparisons among BTO school lists and computing agreement rates among models, the project team was able to gauge the variation in BTO identification results. Results indicate that even when similar specifications were applied across statistical methods, different sets of BTO schools were identified. In addition, for each statistical method used, the lists of BTO schools identified varied with the choice of indicators. Fewer than half of the schools were identified as BTO in more than one year. The results demonstrate that different technical decisions can lead to different identification results.
2/24/2015