Search Results: (16-30 of 104 records)
|Sidestepping the Box: Designing a Supplemental Poverty Indicator for School Neighborhoods
School and neighborhood poverty indicators are a familiar feature in educational research, but the scope and specificity of available indicators are limited. As a result, researchers frequently rely on data from proxy neighborhood geographies out of operational necessity rather than analytic choice. This study examines common constraints of neighborhood data used for educational research and proposes school-centered neighborhood poverty estimates based on data from the U.S. Census Bureau’s American Community Survey (ACS) and estimation techniques borrowed from spatial statistics. The study tested the feasibility of producing the proposed indicator by developing neighborhood poverty estimates for 1,793 Ohio elementary schools. Initial results suggest that the proposed indicator may provide a useful supplement to existing school-level poverty indicators and offer additional clarity about economic conditions in the neighborhoods where schools are located.
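The abstract does not say which spatial estimation technique underlies its school-centered estimates. As one hedged illustration of the family of methods it points to, inverse-distance weighting (IDW) pools tract-level poverty rates around a school's location; the function name, coordinates, and rates below are all hypothetical.

```python
from math import hypot

def idw_poverty_estimate(school_xy, tracts, power=2):
    """Inverse-distance-weighted poverty estimate centered on a school.

    `tracts` is a list of (x, y, poverty_rate) tract centroids.
    Hypothetical sketch: the study says only that it borrows estimation
    techniques from spatial statistics, not that it uses IDW.
    """
    num = den = 0.0
    for x, y, rate in tracts:
        d = hypot(x - school_xy[0], y - school_xy[1])
        if d == 0:             # school sits on a tract centroid: use it directly
            return rate
        w = 1.0 / d ** power   # nearer tracts receive more weight
        num += w * rate
        den += w
    return num / den

# Three made-up tract centroids around a school at (1, 1):
tracts = [(0, 1, 0.30), (2, 0, 0.10), (1, 2, 0.20)]
print(round(idw_poverty_estimate((1, 1), tracts), 3))  # 0.22
```

The nearby tracts (distance 1) dominate the estimate, while the more distant tract contributes half as much weight, which is the basic intuition behind any school-centered smoothing of surrounding Census geographies.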
|The Investing in Innovation Fund: Summary of 67 Evaluations
The Investing in Innovation (i3) Fund is a tiered-evidence program that aligns the amount of funding awarded to grantees with the strength of the prior evidence supporting the proposed intervention. One of the goals of i3 is to build strong evidence for effective interventions at increasing scale. The i3 program requires grantees to conduct an independent impact evaluation. This report, from the National Center for Education Evaluation and Regional Assistance (NCEE), assesses the quality of the 67 i3 grant evaluations completed by May 2017 and summarizes their findings. The report found that 49 of the first 67 completed i3 grant evaluations were implemented in a manner consistent with What Works Clearinghouse (WWC) evidence standards and that 12 of the evaluations found a positive impact on at least one student academic outcome.
|School Attendance Boundary Survey (SABS) File Documentation: 2015-2016
The School Attendance Boundaries Survey (SABS) was an experimental survey conducted by the National Center for Education Statistics (NCES) with assistance from the U.S. Census Bureau to collect school attendance boundaries for regular schools in the 50 states and the District of Columbia. Attendance boundaries, sometimes known as school catchment areas, define the geographic extent served by a local school for the purpose of student assignments. School district administrators create attendance areas to help organize and plan district-wide services, and districts may adjust individual school boundaries to help balance the physical capacity of local schools with changes in the local school-age population. This document summarizes the final cycle of the experimental boundary collection. The 2015-16 SABS collection was intended to update boundaries collected during the 2013-14 cycle and to supplement boundaries from additional districts not included in the previous collection.
|Regional Educational Laboratory researcher-practitioner partnerships: Documenting the research alliance experience
This report provides a detailed account of the Regional Educational Laboratory (REL) Program's experience establishing and supporting research-practice partnerships (called "research alliances") during its 2012–17 contract cycle. The report adds to the growing literature base on researcher-practitioner partnerships by sharing how the RELs reported creating, engaging, and maintaining multiple partnerships, with the purpose of informing future collaborative efforts for researchers and practitioners and for those who wish to support research-practice partnerships. It addresses how REL research alliances fit within the broader context of research-practice partnerships; what characteristics existed among REL research alliances and how they evolved over time; and what challenges RELs reported experiencing while establishing and supporting research alliances, along with the strategies RELs employed to address those challenges. Finally, the paper discusses the implications of the REL research alliance experience for other networks of research-practice partnerships.
|The Forum Guide to Collecting and Using Attendance Data
The Forum Guide to Collecting and Using Attendance Data is designed to help state and local education agency staff improve their attendance data practices – the collection, reporting, and use of attendance data to improve student and school outcomes. The guide offers best practice suggestions and features real-life examples of how attendance data have been used by education agencies. The guide includes a set of voluntary attendance codes that can be used to compare attendance data across schools, districts, and states. The guide also features tip sheets for a wide range of education agency staff who work with attendance data.
|Program for International Student Assessment (PISA) 2015 United States Restricted-use Data File
This CD-ROM contains PISA 2015 restricted-use data for the United States. The CD-ROM includes the data file, a codebook, instructions on how to merge with the U.S. PISA 2015 public-use dataset (NCES 2017-120), and a cross-walk to assist in merging with other public datasets, such as the Common Core of Data (CCD) and Private School Survey (PSS). As these data files can be used to identify respondent schools, a restricted-use license must be obtained before access to the data is granted.
|Asymdystopia: The threat of small biases in evaluations of education interventions that need to be powered to detect small impacts
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts may create a new challenge for researchers: the need to guard against smaller inaccuracies (or "biases"). The purpose of this report is twofold. First, the report examines the potential for small biases to increase the risk of making false inferences as studies are powered to detect smaller impacts, a phenomenon the report calls asymdystopia. The report examines this potential for both randomized controlled trials (RCTs) and studies using regression discontinuity designs (RDDs). Second, the report recommends strategies researchers can use to avoid or mitigate these biases. For RCTs, the report recommends that evaluators either substantially limit attrition rates or offer a strong justification for why attrition is unlikely to be related to study outcomes. For RDDs, new statistical methods can protect against bias from incorrect regression models, but these methods often require larger sample sizes in order to detect small effects.
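To see why powering a study for smaller impacts is so demanding, the standard two-arm sample-size approximation can be sketched; this is a textbook formula, not one given in the report, and the function name and default values are illustrative.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-arm RCT to detect a
    standardized mean difference `delta` with a two-sided z-test.

    Illustrative textbook approximation, not a formula from the report.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the target power
    return ceil(2 * (z_alpha + z_beta) ** 2 / delta ** 2)

print(n_per_arm(0.20))  # 393 per arm for Cohen's "small" 0.20 effect
print(n_per_arm(0.10))  # 1570 per arm for an effect half that size
```

Because required sample size grows with the inverse square of the target effect, halving the minimum detectable impact roughly quadruples the sample, which is the scale at which the report argues even small biases can drive false inferences.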
|Workshop on Survey Methods in Education Research: Facilitator's guide and resources
Regional Educational Laboratory Midwest has developed a tool that state and local education agencies can use to organize and conduct training for staff members who design and conduct surveys. Surveys are often used by education agencies to collect data to assess needs, inform policy decisions, evaluate programs, or respond to legislative mandates. The workshop presentation materials draw on evidence-based research from the field of survey research methodology and offer guidance on designing and administering high-quality surveys. The materials provide practical advice and examples drawn from experience in developing surveys for local, state, and national education applications. The workshop includes eight modules that describe the steps of survey design and administration—from planning to data collection—and cover the following topics: planning for a survey, exploring existing item sources, writing items, pretesting survey items, sampling, data collection methods, response rates, and focus groups. The facilitator's guidebook includes the goals for each module, considerations for adapting the materials for various purposes, an annotated agenda, and participant handouts (slide decks with accompanying notes, activities, and handouts). Individuals and groups who are developing surveys can use these materials to facilitate workshops, guide a survey project, or ensure that they are adhering to best practices for designing and conducting surveys. Although this guide is intended to help survey researchers in state and local education agencies organize and conduct training for their staff, the materials can also be used as a stand-alone resource for anyone wishing to learn the basics of survey design and administration in education settings.
|Puerto Rico school characteristics and student graduation: Implications for research and policy
The purpose of the study is to examine the relationship between Puerto Rico’s high school characteristics and student graduation rates. The study examines graduation rates for all public high schools for students who started grade 10 in 2010/11 (in Puerto Rico high school begins in grade 10) and were expected to graduate at the end of the 2012/13 school year, the most recent year for which graduation data were available. Using data provided by the Puerto Rico Department of Education as well as publicly available data, this study first examined the correlations between graduation rates and two types of variables: student composition characteristics, which are not amenable to change or intervention but help to describe graduation trends in Puerto Rico (for example, the percentage of students living in poverty); and school characteristics, which are amenable to change or intervention by officials (for example, the ratio of students per teacher). Regression analyses were used to estimate the conditional association between various characteristics and on-time graduation in Puerto Rico high schools after controlling for other factors. The percentage of students proficient in Spanish language arts was associated with higher graduation rates, after controlling for other school characteristics, both overall and by subgroup (males, females, students below poverty, and special education students). After controlling for other characteristics, the percentage of students proficient in mathematics was not associated with graduation rates. Lower student-to-teacher ratios were associated with higher graduation rates for males, students living in poverty, and special education students, after controlling for other school characteristics. The percentage of highly qualified teachers was associated with lower graduation rates overall and for all subgroups except females, after controlling for other school characteristics.
Correlations between each school characteristic and graduation rates are also presented in the report. The findings from this study provide a starting point for stakeholders in Puerto Rico who are interested in addressing the low rates of graduation in their high schools and communities through the use of data-driven decision-making.
|Establishing and sustaining networked improvement communities: Lessons from Michigan and Minnesota
The purpose of this report is to share lessons learned by Regional Educational Laboratory (REL) Midwest researchers as they worked with educators in Michigan and Minnesota to establish and sustain two networked improvement communities (NICs). A NIC is a type of collaborative research partnership that uses principles of improvement science within networks to learn from variation across contexts. At the request of the Michigan Department of Education, REL Midwest worked with educators at the school, district, intermediate school district, and state levels to establish the Michigan Focus NIC, with the goal of reducing disparities in student achievement within schools. At the request of the Minnesota Department of Education, REL Midwest worked with educators at the state and regional levels to establish the Minnesota Statewide System of Support NIC. This NIC aimed to improve the supports that the Minnesota Department of Education provides to its six Regional Centers of Excellence, which implement school improvement strategies in the schools in the state with the lowest performance and largest achievement gaps. Although there is practical guidance for how NICs should structure their work, few published accounts describe the process of forming a NIC. Through its experience working with educators to form two NICs, REL Midwest learned the importance of building a cohesive team with members representing different types of expertise; reducing uncertainty by clarifying what participation would entail; building engagement by aligning work with ongoing efforts; using activities grounded in daily practice to narrow the problem of practice to one that is high leverage and actionable; and embedding capacity building into NICs to develop additional expertise for using continuous improvement research to address problems of practice. This report offers guidance to researchers and educators as they work to establish and sustain NICs.
The lessons learned come from efforts to establish NICs in two specific contexts and therefore may not be generalizable to other contexts.
|Descriptive analysis in education: A guide for researchers
Whether the goal is to identify and describe trends and variation in populations, create new measures of key phenomena, or describe samples in studies aimed at identifying causal effects, description plays a critical role in the scientific process in general and education research in particular. Descriptive analysis identifies patterns in data to answer questions about who, what, where, when, and to what extent. This guide describes how to more effectively approach, conduct, and communicate quantitative descriptive analysis. The primary audience for this guide includes members of the research community who conduct and publish both descriptive and causal studies, although it could also be useful for policymakers and practitioners who are consumers of research findings. The guide contains chapters that discuss the important role descriptive analysis plays; how to approach descriptive analysis; how to conduct descriptive analysis; and how to communicate descriptive analysis findings.
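As a minimal sketch of the kind of quantitative description the guide covers, a handful of summary statistics can answer the "how many" and "to what extent" questions; the test-score values below are invented for illustration.

```python
import statistics

# Hypothetical sample of student test scores (not data from the guide).
scores = [72, 85, 90, 66, 78, 85, 91, 70]

summary = {
    "n": len(scores),                        # how many observations
    "mean": statistics.mean(scores),         # central tendency
    "median": statistics.median(scores),     # robust center, less outlier-driven
    "stdev": statistics.stdev(scores),       # variation ("to what extent")
    "range": (min(scores), max(scores)),     # spread of observed values
}
print(summary)
```

Reporting the median and spread alongside the mean is one of the simplest ways a descriptive analysis communicates both the typical case and the variation around it.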
|Summary of research on online and blended learning programs that offer differentiated learning options
This report presents a summary of empirical studies of K-12 online and blended instructional approaches that offer differentiated learning options. In these approaches, instruction is provided in whole or in part online. The report includes studies that examine student achievement outcomes and summarizes the methodologies, measures, and findings of those studies. Of the 162 studies reviewed, 17 met all inclusion criteria and are summarized in this report. Most of the studies examined blended instructional approaches, and all approaches provided some means to differentiate the content, difficulty level, and/or pacing of the online content. Among the blended instructional approaches, 45 percent were designed to support differentiation of the in-class component of instruction. The majority of studies examining these approaches compared student performance on common standardized achievement measures between students receiving the instructional approach and those in comparison classrooms or schools. Among the most rigorous studies, statistically significant positive effects were found for four blended instructional approaches.
|NCES-Barron's Admissions Competitiveness Index Data Files: 1972, 1982, 1992, 2004, 2008, 2014
The NCES−Barron’s Admissions Competitiveness Index Data Files: 1972, 1982, 1992, 2004, 2008, 2014 (NCES 2015-332) contain the Barron’s college admissions competitiveness selectivity ratings for 1972, 1982, 1992, 2004, 2008, and 2014, along with the NCES Higher Education General Information Survey (HEGIS) FICE ID codes, the Integrated Postsecondary Education Data System (IPEDS) UNITID codes, and the Office of Postsecondary Education OPEID codes of each postsecondary institution included. Also included are the city and state of each institution in the Barron’s lists. The years selected correspond to the years that students in the longitudinal studies (NLS-72, HS&B, NELS:88, ELS-2000, HSLS:09, and BPS) initially attended the 4-year postsecondary institutions. Each of the six NCES−Barron’s index files is available as a separate worksheet in an Excel workbook file in Excel 1997–2003 compatible format.
|Stated Briefly: Benchmarking education management information systems across the Federated States of Micronesia
The chief state school officers of the Federated States of Micronesia (FSM) have called for the improvement of the education management information system (EMIS) in each of the four FSM states (Chuuk, Kosrae, Pohnpei, and Yap). To assist the FSM, Regional Educational Laboratory Pacific conducted separate assessments of the quality of the current EMIS in Chuuk, Kosrae, Pohnpei, and Yap. This report integrates the findings from all four states, thereby providing an opportunity for comparison on each aspect of quality. As part of a focus group interview, knowledgeable data specialists in each of the four states of the FSM responded to 46 questions covering significant areas of their EMIS. The interview protocol, adapted by Regional Educational Laboratory Pacific from the World Bank's System Assessment and Benchmarking for Education Results assessment tool, provides a means for rating aspects of an EMIS using four benchmarking levels: latent (the process or action required to improve the aspect of quality is not in place), emerging (the process or action is in progress of implementation), established (the process or action is in place and meets standards), and mature (the process or action is an example of best practice). Overall, data specialists in all four states rated their systems as either emerging or established.
|What does it mean when a study finds no effects?
This short brief for education decisionmakers discusses three main factors that may contribute to a finding of no effects: failure of theory, failure of implementation, and failure of research design. It provides readers with questions to ask themselves to better understand 'no effects' findings, and describes other contextual factors to consider when deciding what to do next.
Page 2 of 7