Search Results: (1-15 of 41 records)
|NCSER 2020001||An Introduction to Adaptive Interventions and SMART Designs in Education
Educators must often adapt interventions over time because what works for one student may not work for another, and what works now may not work later for the same student. Adaptive interventions provide education practitioners with a prespecified, systematic, and replicable way of doing this through a sequence of decision rules for whether, how, and when to modify interventions. The sequential, multiple assignment, randomized trial (SMART) is one type of multistage experimental design that can help education researchers build high-quality adaptive interventions.
Despite the critical role adaptive interventions can play in various domains of education, research about adaptive interventions and the use of SMART designs to develop effective adaptive interventions in education is in its infancy. To help the field move forward in this area, the National Center for Special Education Research (NCSER) and the National Center for Education Evaluation and Regional Assistance (NCEE) commissioned a paper by leading experts in adaptive interventions and SMART designs. This paper aims to provide information on building and evaluating high-quality adaptive interventions, review the components of SMART designs, discuss the key features of the SMART, and introduce common research questions for which SMARTs may be appropriate.
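As a rough illustration (not drawn from the paper), the sketch below simulates the two-stage structure a SMART might take: students are first randomized between two hypothetical interventions, responders continue their assigned intervention, and non-responders are rerandomized between intensifying it and switching to a third option. All intervention labels and the response rule are invented for illustration.

```python
import random

random.seed(1)

def run_smart(n_students=8):
    """Toy two-stage SMART: randomize, check response, rerandomize non-responders."""
    records = []
    for sid in range(n_students):
        stage1 = random.choice(["Intervention A", "Intervention B"])  # first randomization
        responded = random.random() < 0.5                             # placeholder response rule
        if responded:
            stage2 = f"continue {stage1}"                             # responders stay the course
        else:
            stage2 = random.choice([f"intensify {stage1}",            # second randomization
                                    "switch to Intervention C"])
        records.append((sid, stage1, responded, stage2))
    return records

for record in run_smart():
    print(record)
```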
|NPEC 2018023||The History and Origins of Survey Items for the Integrated Postsecondary Education Data System
This report updates the 2011–12 Integrated Postsecondary Education Data System (IPEDS) survey components report—The History and Origins of Survey Items for the Integrated Postsecondary Education Data System—in order to reflect the 2016–17 data collection. The report was developed to document the sources of current IPEDS data items as background information for interested parties and to provide guidance when NCES, technical review panels, and others are considering potential changes to the IPEDS data collection.
|NCEE 20184002||Asymdystopia: The threat of small biases in evaluations of education interventions that need to be powered to detect small impacts
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may create a new challenge for researchers: the need to guard against smaller inaccuracies (or "biases"). The purpose of this report is twofold. First, the report examines the potential for small biases to increase the risk of making false inferences as studies are powered to detect smaller impacts, a phenomenon the report calls asymdystopia. The report examines this potential for both randomized controlled trials (RCTs) and studies using regression discontinuity designs (RDDs). Second, the report recommends strategies researchers can use to avoid or mitigate these biases. For RCTs, the report recommends that evaluators either substantially limit attrition rates or offer a strong justification for why attrition is unlikely to be related to study outcomes. For RDDs, new statistical methods can protect against bias from incorrect regression models, but these methods often require larger sample sizes in order to detect small effects.
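A back-of-the-envelope sketch of this concern, using assumed numbers rather than anything from the report itself: in a two-arm trial with standardized outcomes, the standard error implied by an 80%-power target shrinks in proportion to the minimum detectable effect (MDE), so a fixed small bias, here an arbitrary 0.02 standard deviations, pushes the test statistic further from zero and inflates the false-positive rate as the MDE falls.

```python
from scipy.stats import norm

ALPHA, POWER = 0.05, 0.80
z_alpha = norm.ppf(1 - ALPHA / 2)   # about 1.96
z_power = norm.ppf(POWER)           # about 0.84

def false_positive_rate(mde, bias):
    """Chance of a nominally significant finding when the true impact is zero
    but the estimate carries a constant bias, in a two-arm trial powered at
    80% to detect `mde` (all quantities in standard-deviation units)."""
    se = mde / (z_alpha + z_power)        # standard error implied by the power target
    shift = bias / se                     # how far the bias pushes the test statistic
    return norm.sf(z_alpha - shift) + norm.cdf(-z_alpha - shift)

for mde in (0.20, 0.10, 0.05):
    rate = false_positive_rate(mde, bias=0.02)
    print(f"MDE = {mde:.2f} -> false-positive rate with a 0.02 SD bias: {rate:.3f}")
```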
|NCEE 20174026||Comparing Impact Findings from Design-Based and Model-Based Methods: An Empirical Investigation
This report compares empirical results from different approaches to analyzing data from randomized controlled trials (RCTs). It focuses on how impact estimates compare between recently developed design-based methods and traditional model-based methods. Design-based methods use the potential outcomes framework and known features of study designs to connect statistical methods to the building blocks of causal inference. They differ from model-based methods that have commonly been used in education research, including hierarchical linear model (HLM) methods and robust cluster standard error (RCSE) methods for clustered designs. This study re-analyzes nine past education RCTs using both design- and model-based methods. The study finds that model-based and design-based methods yield very similar impact estimates and levels of statistical significance, especially when the underlying analytic assumptions (e.g., weights used to aggregate clusters and blocks) are aligned.
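The following sketch, on simulated data rather than any of the nine re-analyzed trials, contrasts a design-based-style estimator (block-size-weighted within-block mean differences) with a model-based OLS regression that includes block fixed effects. The two typically land close together; residual differences reflect the weighting assumptions noted above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate a blocked student-level RCT (blocks could be schools or sites).
frames = []
for block in range(10):
    n = int(rng.integers(20, 60))
    true_effect = 0.15 + rng.normal(0, 0.05)                      # block-specific impact
    treat = rng.permutation([1] * (n // 2) + [0] * (n - n // 2))  # within-block randomization
    y = rng.normal(0, 1, n) + true_effect * treat + rng.normal(0, 0.3)
    frames.append(pd.DataFrame({"block": block, "treat": treat, "y": y}))
df = pd.concat(frames, ignore_index=True)

# Design-based flavor: within-block mean differences, weighted by block size.
cell_means = df.groupby(["block", "treat"])["y"].mean().unstack()
block_sizes = df.groupby("block").size()
design_based = np.average(cell_means[1] - cell_means[0], weights=block_sizes)

# Model-based flavor: OLS with block fixed effects.
model_based = smf.ols("y ~ treat + C(block)", data=df).fit().params["treat"]

print(f"design-based estimate: {design_based:.3f}")
print(f"model-based estimate:  {model_based:.3f}")
```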
|NCES 2017092||A Quarter Century of Changes in the Elementary and Secondary Teaching Force: From 1987 to 2012
This report looks at changes in several key characteristics of the teaching force between the 1987-88 and 2011-12 school years, including the number of teachers, the level of teaching experience, and the racial/ethnic diversity of the teaching force. The report focuses on how these demographic changes varied across different types of teachers and schools.
The report also presents key findings about changes in the teacher workforce over this 25-year period.
This report utilizes data from the Schools and Staffing Survey (SASS), a large-scale sample survey of elementary and secondary teachers and schools in the United States. SASS has been conducted seven times—in school years 1987-88, 1990-91, 1993-94, 1999-2000, 2003-04, 2007-08, and 2011-12.
|NCER 20162003||Synthesis of IES-Funded Research on Mathematics: 2002–2013
This synthesis reviews published papers on IES-supported research from projects awarded between 2002 and 2013. The authors identified 28 specific contributions that IES-funded research made to support mathematics learning and teaching from kindergarten through secondary school. The publication organizes the contributions by topic and grade level, and each section describes the contributions IES-funded researchers are making in these areas and discusses the projects behind the contributions.
|NCER 20142000||Partially Nested Randomized Controlled Trials in Education Research: A Guide to Design and Analysis
In some tests of educational interventions, individual students are randomized directly to the treatment or control group, and both intervention and control protocols are administered in an individual setting. Such an experiment is an Individual-Level Randomized Controlled Trial (I-RCT). In other tests, clusters of students (e.g., classrooms) are randomized. This sort of experiment is called a Cluster Randomized Controlled Trial (C-RCT). However, in some designs, students in the treatment group are clustered like those in a C-RCT, but students in the control group are unclustered, like students in an I-RCT. This design is called a Partially Nested Randomized Controlled Trial (PN-RCT). It is partially nested because students in the treatment group are nested in some higher level unit, such as a tutoring group or class, but students in the control group are not nested as part of the experimental design.
This paper, commissioned by the National Center for Education Research, provides readers with an introduction to PN-RCTs and ways to design and analyze the results from them. This paper was written primarily for applied education researchers with introductory knowledge of quantitative impact evaluation methods. However, those with more advanced knowledge will also benefit from some of the technical examples and appendices.
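As a simple, hypothetical sketch of the partial-nesting problem (not the paper's recommended estimator), the code below simulates treatment students nested in tutoring groups alongside unclustered control students, then applies one common workaround: assign each control student a singleton cluster and use cluster-robust standard errors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Treatment arm: 20 tutoring groups of 5 students, with a shared group effect.
treat_groups = np.repeat(np.arange(20), 5)
y_treat = 0.25 + rng.normal(0, 0.4, 20)[treat_groups] + rng.normal(0, 1, 100)

# Control arm: 100 unclustered students.
y_ctrl = rng.normal(0, 1, 100)

df = pd.DataFrame({
    "y": np.concatenate([y_treat, y_ctrl]),
    "treat": np.concatenate([np.ones(100), np.zeros(100)]),
    # Singleton "clusters" for control students so every observation has a cluster id.
    "cluster": np.concatenate([treat_groups, 100 + np.arange(100)]),
})

fit = smf.ols("y ~ treat", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cluster"]})
print(f"impact estimate: {fit.params['treat']:.3f} "
      f"(cluster-robust SE: {fit.bse['treat']:.3f})")
```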
|NCSER 20143000||Improving Reading Outcomes for Students with or at Risk for Reading Disabilities: A Synthesis of the Contributions from the Institute of Education Sciences Research Centers
The report describes what has been learned about improving reading outcomes for children with or at risk for reading disabilities through research funded by the Institute's National Center for Education Research and National Center for Special Education Research and published in peer-reviewed outlets through December 2011. The synthesis describes contributions to the knowledge base produced by IES-funded research across four focal areas.
|NCSER 20133001||Synthesis of IES Research on Early Intervention and Early Childhood Education
The report describes what has been learned from research grants on early intervention and early childhood education funded by the Institute's National Center for Education Research and National Center for Special Education Research, and published in peer-reviewed outlets through June 2010. This synthesis describes contributions to the knowledge base produced by IES-funded research across four focal areas:
* Early childhood classroom environments and general instructional practices;
* Educational practices designed to impact children's academic and social outcomes;
* Measuring young children's skills and learning; and
* Professional development for early educators.
Research supported by IES has made significant contributions to the evidence base in these areas. The authors also raise important questions for education research in the future, including:
* What are the crucial features of high-quality early childhood education?
* Which instructional practices are most effective for which children, and under what circumstances?
* How do we effectively and efficiently support teachers in improving their instruction?
|NPEC 2012835||Defining and Reporting Subbaccalaureate Certificates in IPEDS
Subbaccalaureate certificates, postsecondary awards conferred as the result of successful completion of a formal program of study below the baccalaureate level, have become more prominent in higher education over the last decade. Institutions of all sectors offer subbaccalaureate certificates, which can range in length from a few months to more than 2 years. Subbaccalaureate certificates provide individuals with a means for gaining specific skills and knowledge that can be readily transferred to the workforce. As part of its mission to promote the quality, comparability, and utility of postsecondary data, the National Postsecondary Education Cooperative (NPEC) convened a working group to examine subbaccalaureate certificates and how they are reported in the U.S. Department of Education's Integrated Postsecondary Education Data System (IPEDS).
|NPEC 2012831||Information Required to Be Disclosed Under the Higher Education Act of 1965: Suggestions for Dissemination – A Supplemental Report
In 2009, the National Postsecondary Education Cooperative (NPEC) issued a report that provided suggestions on how postsecondary institutions could meet disclosure requirements under the Higher Education Act of 1965 (HEA), as amended by the Higher Education Opportunity Act (HEOA) of 2008. NPEC commissioned this paper to determine whether institutions were implementing the suggestions in that 2009 report and to document how they have presented the required disclosures. Additionally, this paper identifies other resources and tools that institutions could use to present disclosure requirements in a consumer-friendly manner.
|NCSER 20123000REV||Secondary School Programs and Performance of Students With Disabilities: A Special Topic Report of Findings From the National Longitudinal Transition Study-2 (NLTS2)
This report uses data from the National Longitudinal Transition Study-2 (NLTS2) to provide a national picture of what courses students with disabilities took in high school, in what settings, and with what success in terms of credits and grades earned.
This report has been revised to reflect the updated NLTS2 dataset released in 2013.
|NPEC 2012833||The History and Origins of Survey Items for the Integrated Postsecondary Education Data System
This project was conducted to determine the origin of items in the 2011-12 Integrated Postsecondary Education Data System (IPEDS) survey components. The report was developed to document the sources of current IPEDS data items as background information for interested parties and to provide guidance when NCES, technical review panels, and others are considering potential changes to the IPEDS data collection.
|NPEC 2012834||Suggestions for Improvements to the Collection and Dissemination of Federal Financial Aid Data
Several offices within the U.S. Department of Education collect and disseminate data about student financial aid. However, limitations of these data sources may make it difficult for consumers, policymakers, and researchers to gain a complete picture of the sources, types, and amounts of aid going to students at institutions of higher education and the relationship between aid and policy goals such as access and success. This report presents the findings and recommendations of the National Postsecondary Education Cooperative (NPEC) Working Group on Financial Aid Data, which sought to identify potential improvements to the collection and dissemination of federal financial aid data.
|NCEE 20124015||Whether and How to Use State Tests to Measure Student Achievement in a Multi-State Randomized Experiment: An Empirical Assessment Based on Four Recent Evaluations
An important question for educational evaluators is how best to measure academic achievement, the outcome of primary interest in many studies. In large-scale evaluations, student achievement has typically been measured by administering a common standardized test to all students in the study (a “study-administered test”). In the era of No Child Left Behind (NCLB), however, state assessments have become an increasingly viable source of information on student achievement. Using state test scores can yield substantial cost savings for the study and can eliminate the burden of additional testing on students and teaching staff. On the other hand, state tests can also pose certain difficulties: their content may not be well aligned with the outcomes targeted by the intervention, and variation in the content and scale of the tests can complicate pooling scores across states and grades.
This NCEE Reference Report, Whether and How to Use State Tests to Measure Student Achievement in a Multi-State Randomized Experiment: An Empirical Assessment Based on Four Recent Evaluations, examines the sensitivity of impact findings to (1) the type of assessment used to measure achievement (state tests or a study-administered test); and (2) analytical decisions about how to pool state test data across states and grades. These questions are examined using data from four recent IES-funded experimental studies that measured student achievement using both state tests and a study-administered test. Each study spans multiple states, and two of the studies span several grade levels.
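A small, invented example of one pooling decision of the kind the report examines: converting state test scores to z-scores within each state-by-grade cell so that scores on different state scales can be combined in a single impact analysis. The data, column names, and simple difference-in-means here are purely illustrative; in practice the reference mean and standard deviation might instead come from statewide norms or the control group.

```python
import pandas as pd

# Invented student records from two states and two grades (different score scales).
df = pd.DataFrame({
    "state": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "grade": [4, 4, 5, 5, 4, 4, 5, 5],
    "treat": [1, 0, 1, 0, 1, 0, 1, 0],
    "score": [310, 295, 520, 505, 1210, 1180, 1340, 1305],
})

# Standardize within each state-by-grade cell so scores are comparable.
cells = df.groupby(["state", "grade"])["score"]
df["z"] = (df["score"] - cells.transform("mean")) / cells.transform("std")

# Pooled impact estimate on the standardized metric (toy difference in means).
impact = df.loc[df.treat == 1, "z"].mean() - df.loc[df.treat == 0, "z"].mean()
print(f"pooled standardized impact: {impact:.2f}")
```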