Search Results: (16-30 of 96 records)
|REL 2017266||Puerto Rico school characteristics and student graduation: Implications for research and policy
The purpose of the study is to examine the relationship between Puerto Rico’s high school characteristics and student graduation rates. The study examines graduation rates for all public high schools for students who started grade 10 in 2010/11 (in Puerto Rico high school begins in grade 10) and were expected to graduate at the end of the 2012/13 school year, the most recent year for which graduation data were available. Using data provided by the Puerto Rico Department of Education as well as publicly available data, this study first examined the correlational relationships between graduation rates and two types of variables: student composition characteristics, which are not amenable to change or intervention but help to improve the description of graduation trends in Puerto Rico (for example, the percentage of students who are living in poverty); and school characteristics, which are amenable to change or intervention by officials (for example, the ratio of students per teacher). Regression analyses were used to estimate the conditional association between various characteristics and on-time graduation in Puerto Rico high schools after controlling for other factors. The percentage of students proficient in Spanish language arts was associated with higher graduation rates, after controlling for other school characteristics both overall and by subgroup (males, females, students living in poverty, and special education students). After controlling for other characteristics, the percentage of students proficient in mathematics was not associated with graduation rates. Lower student-to-teacher ratios were associated with higher graduation rates for males, students living in poverty, and special education students, after controlling for other school characteristics. The percentage of highly qualified teachers was associated with lower graduation rates overall and for all subgroups except females, after controlling for other school characteristics.
Correlations between each school characteristic and graduation rates are also presented in the report. The findings from this study provide a starting point for stakeholders in Puerto Rico who are interested in addressing the low rates of graduation in their high schools and communities through the use of data-driven decision-making.
|REL 2017264||Establishing and sustaining networked improvement communities: Lessons from Michigan and Minnesota
The purpose of this report is to share lessons learned by Regional Educational Laboratory (REL) Midwest researchers as they worked with educators in Michigan and Minnesota to establish and sustain two networked improvement communities (NICs). A NIC is a type of collaborative research partnership that uses principles of improvement science within networks to learn from variation across contexts. At the request of the Michigan Department of Education, REL Midwest worked with educators at the school, district, intermediate school district, and state levels to establish the Michigan Focus NIC, with the goal of reducing disparities in student achievement within schools. At the request of the Minnesota Department of Education, REL Midwest worked with educators at the state and regional levels to establish the Minnesota Statewide System of Support NIC. This NIC aimed to improve the supports that the Minnesota Department of Education provides to its six Regional Centers of Excellence, which implement school improvement strategies in the schools in the state with the lowest performance and largest achievement gaps. Although there is practical guidance for how NICs should structure their work, few published accounts describe the process of forming a NIC. Through its experience working with educators to form two NICs, REL Midwest learned that it is important to: build a cohesive team with members representing different types of expertise; reduce uncertainty by clarifying what participation would entail; build engagement by aligning work with ongoing efforts; use activities that are grounded in daily practice to narrow the problem of practice to one that is high leverage and actionable; and embed capacity building into NICs to build additional expertise for using continuous improvement research to address problems of practice. This report offers guidance to researchers and educators as they work to establish and sustain NICs. 
The lessons learned come from efforts to establish NICs in two specific contexts and therefore may not be generalizable to other contexts.
|NCEE 20174023||Descriptive analysis in education: A guide for researchers
Whether the goal is to identify and describe trends and variation in populations, create new measures of key phenomena, or describe samples in studies aimed at identifying causal effects, description plays a critical role in the scientific process in general and education research in particular. Descriptive analysis identifies patterns in data to answer questions about who, what, where, when, and to what extent. This guide describes how to more effectively approach, conduct, and communicate quantitative descriptive analysis. The primary audience for this guide includes members of the research community who conduct and publish both descriptive and causal studies, although it could also be useful for policymakers and practitioners who are consumers of research findings. The guide contains chapters that discuss the important role descriptive analysis plays; how to approach descriptive analysis; how to conduct descriptive analysis; and how to communicate descriptive analysis findings.
|REL 2017228||Summary of research on online and blended learning programs that offer differentiated learning options
This report presents a summary of empirical studies of K-12 online and blended instructional approaches that offer differentiated learning options. In these approaches, instruction is provided in whole or in part online. This report includes studies that examine student achievement outcomes and summarizes the methodology, measures, and findings of those studies. Of the 162 studies that were reviewed, 17 met all inclusion criteria and are summarized in this report. The majority of the studies examined blended instructional approaches, while all approaches provided some means to differentiate the content, difficulty level, and/or pacing of the online content. Among the blended instructional approaches, 45 percent were designed to support differentiation of the in-class component of instruction. The majority of studies examining these approaches compared student performance on common standardized achievement measures between students receiving the instructional approach and those in comparison classrooms or schools. Among the most rigorous studies, statistically significant positive effects were found for four blended instructional approaches.
|NCES 2016332||NCES-Barron's Admissions Competitiveness Index Data Files: 1972, 1982, 1992, 2004, 2008, 2014
The NCES−Barron’s Admissions Competitiveness Index Data Files: 1972, 1982, 1992, 2004, 2008, 2014 (NCES 2015-332) contain the Barron’s college admissions competitiveness selectivity ratings for 1972, 1982, 1992, 2004, 2008, and 2014, along with the NCES Higher Education Information System (HEGIS) FICE ID and Integrated Postsecondary Education Data System (IPEDS) UNITID codes and the Office of Postsecondary Education OPEID codes of each postsecondary institution included. Also included are the city and state of each institution included in the Barron’s lists. The years selected correspond to the years that students in the longitudinal studies (NLS-72, HS&B, NELS:88, ELS-2000, HSLS:09, and BPS) initially attended the 4-year postsecondary institutions. Each of the six NCES−Barron’s index files is available in a separate worksheet in an Excel workbook file that is in Excel 1997–2003 compatible format.
|REL 2017209||Stated Briefly: Benchmarking education management information systems across the Federated States of Micronesia
The chief state school officers of the Federated States of Micronesia (FSM) have called for the improvement of the education management information system (EMIS) in each of the four FSM states (Chuuk, Kosrae, Pohnpei, and Yap). To assist the FSM, Regional Educational Laboratory Pacific conducted separate assessments of the quality of the current EMIS in Chuuk, Kosrae, Pohnpei, and Yap. This report integrates the findings from all four states, thereby providing an opportunity for comparison on each aspect of quality. As part of a focus group interview, knowledgeable data specialists in each of the four states of the FSM responded to 46 questions covering significant areas of their EMIS. The interview protocol, adapted by Regional Educational Laboratory Pacific from the World Bank's System Assessment and Benchmarking for Education Results assessment tool, provides a means for rating aspects of an EMIS using four benchmarking levels: latent (the process or action required to improve the aspect of quality is not in place), emerging (the process or action is in the process of being implemented), established (the process or action is in place and meets standards), and mature (the process or action is an example of best practice). Overall, data specialists in all four states rated their systems as either emerging or established.
|REL 2017265||What does it mean when a study finds no effects?
This short brief for education decisionmakers discusses three main factors that may contribute to a finding of no effects: failure of theory, failure of implementation, and failure of research design. It provides readers with questions to ask themselves to better understand 'no effects' findings, and describes other contextual factors to consider when deciding what to do next.
|REL 2016178||Summary of 20 years of research on the effectiveness of adolescent literacy programs and practices
This literature review identified peer-reviewed studies of reading comprehension instructional practices conducted and published between 1994 and 2014 and summarizes the instructional practices that demonstrated positive or potentially positive effects in scientifically rigorous studies employing experimental designs. Each study was rated by the review team using the What Works Clearinghouse standards. The review of the literature resulted in the identification of 7,144 studies. Of these studies, only 111 met the eligibility criteria for review. Thirty-three of these studies were determined by the study team to have met What Works Clearinghouse standards. The 33 studies represented 29 different interventions or classroom practices. Twelve of these studies demonstrated positive or potentially positive effects. These 12 studies are described and the commonalities among the studies are summarized.
|NCER 20162003||Synthesis of IES-Funded Research on Mathematics: 2002–2013
This synthesis reviews published papers on IES-supported research from projects awarded between 2002 and 2013. The authors identified 28 specific contributions that IES-funded research made to support mathematics learning and teaching from kindergarten through secondary school. The publication organizes the contributions by topic and grade level and each section describes the contributions IES-funded researchers are making in these areas and discusses the projects behind the contributions.
|REL 2016138||Summary of research on the association between state interventions in chronically low-performing schools and student achievement
This report presents a summary of research on the associations between state interventions in chronically low-performing schools and student achievement. The majority of the research focused on one type of state intervention: working with a turnaround partner. In this type of intervention, states assign an individual or team to work with a school to identify strengths and weaknesses, develop a school improvement plan, and provide technical assistance as the school implements the plan. In some cases, additional funding is also provided to support implementation of the school improvement efforts. Most of the studies were descriptive, which limits conclusions about the effectiveness of the interventions. Results of studies of turnaround partner interventions were mixed, and suggested that student achievement was more likely to improve when particular factors were in place in schools such as strong leadership, use of data to guide instruction, and a positive school culture characterized by trust and increased expectations for students. Although researchers sought to include research on a variety of state intervention types, few studies were identified that examined other types of interventions such as school closure, charter conversion, and school redesign.
|NCSER 2015002||The Role of Effect Size in Conducting, Interpreting, and Summarizing Single-Case Research
The field of education is increasingly committed to adopting evidence-based practices. Although randomized experimental designs provide strong evidence of the causal effects of interventions, they are not always feasible. For example, depending upon the research question, it may be difficult for researchers to find the number of children necessary for such research designs (e.g., to answer questions about impacts for children with low-incidence disabilities). A type of experimental design that is well suited for such low-incidence populations is the single-case design (SCD). These designs involve observations of a single case (e.g., a child or a classroom) over time in the absence and presence of an experimenter-controlled treatment manipulation to determine whether the outcome is systematically related to the treatment.
Research using SCD is often omitted from reviews of whether evidence-based practices work because there has not been a common metric to gauge effects as there is in group design research. To address this issue, the National Center for Education Research (NCER) and National Center for Special Education Research (NCSER) commissioned a paper by leading experts in methodology and SCD. Authors William Shadish, Larry Hedges, Robert Horner, and Samuel Odom contend that the best way to ensure that SCD research is accessible and informs policy decisions is to use good standardized effect size measures—indices that put results on a scale with the same meaning across studies—for statistical analyses. Included in this paper are the authors' recommendations for how SCD researchers can calculate and report standardized between-case effect sizes, how various audiences (including policymakers) can use these effect sizes to interpret findings, and how these effect sizes can be used across studies to summarize the evidence base for education practices.
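The core idea of a standardized effect size can be sketched in a few lines. Note that the between-case estimator recommended by Shadish, Hedges, Horner, and Odom involves further corrections (for example, for autocorrelation across repeated observations and for small samples), so this is only an illustration of the general principle with hypothetical data, not the authors' actual method:

```python
import statistics

def standardized_mean_difference(baseline, treatment):
    """Standardized mean difference between two observation phases.

    Illustrates the general idea of a standardized effect size:
    the raw mean difference is divided by a pooled standard deviation,
    putting results on a scale that is comparable across studies
    regardless of the original outcome's units.
    """
    n_b, n_t = len(baseline), len(treatment)
    mean_b, mean_t = statistics.mean(baseline), statistics.mean(treatment)
    var_b = statistics.variance(baseline)   # sample variance (n - 1 denominator)
    var_t = statistics.variance(treatment)
    pooled_sd = (((n_b - 1) * var_b + (n_t - 1) * var_t) / (n_b + n_t - 2)) ** 0.5
    return (mean_t - mean_b) / pooled_sd

# Hypothetical single-case data: one child's scores during the baseline
# phase and during the intervention phase
baseline = [4, 5, 3, 4, 4]
treatment = [8, 9, 7, 8, 9]
d = standardized_mean_difference(baseline, treatment)  # positive d: higher scores under treatment
```

Because the result is unit-free, a d from a reading-fluency study and a d from a behavior-frequency study can be placed on the same scale, which is what makes cross-study summaries of the evidence base possible.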
|NCER 20162000||A Compendium of Math and Science Research Funded by NCER and NCSER: 2002–2013
Between 2002 and 2013, the Institute of Education Sciences (Institute) funded over 300 projects focused on math and science through the National Center for Education Research (NCER) and the National Center for Special Education Research (NCSER). Together, researchers funded by NCER and NCSER have developed or tested more than 215 instructional interventions (e.g., packaged curricula, intervention frameworks, and instructional approaches), 75 professional development programs, 165 educational technologies, and 65 assessments in math and science. NCER commissioned the development of this compendium with the intent to present information in a structured, accessible, and usable manner. This compendium organizes information on the NCER and NCSER projects into two main sections: Mathematics and Science. Within each section, projects are sorted into chapters based on content area, grade level, and intended outcome. The compendium also includes multiple appendices and an index to help readers locate specific types of information (e.g., projects that focus on English language learners, specific interventions).
|NCES 2015118||Documentation for the School Attendance Boundary Survey (SABS): School Year 2013-2014
The School Attendance Boundary Survey (SABS) data file contains school attendance boundaries for regular schools with grades kindergarten through grade 12 in the 50 states and the District of Columbia for the 2013-2014 school year. Prior to this survey, a national fabric of attendance boundaries was not freely available to the public. The geography of school attendance boundaries provides new context for researchers who were previously limited to state and district level geography.
|NCEE 20154013||A Guide to Using State Longitudinal Data for Applied Research
State longitudinal data systems (SLDSs) promise a rich source of data for education research. SLDSs contain statewide student data that can be linked over time and to additional data sources for education management, reporting, improvement, and research, and ultimately for informing education policy and practice.
Authored by Karen Levesque, Robert Fitzgerald, and Joy Pfeiffer of RTI International, this guide is intended for researchers who are familiar with research methods but who are new to using SLDS data, are considering conducting SLDS research in a new state environment, or are expanding into new topic areas that can be explored using SLDS data. The guide also may be useful for state staff as background for interacting with researchers and may help state staff and researchers communicate across their two cultures. It highlights the opportunities and constraints that researchers may encounter in using state longitudinal data systems and offers approaches to addressing some common problems.
|REL 2015061||Stated Briefly: What Does the Research Say About Increased Learning Time and Student Outcomes?
REL Appalachia conducted a systematic review of the research evidence on the effects of increased learning time. After screening more than 7,000 studies, REL Appalachia identified 30 that met the most rigorous standards for research. A review of those 30 studies found that increased learning time does not always produce positive results. However, some forms of instruction tailored to the needs of specific types of students were found to improve their outcomes. Findings suggest that the impacts of these programs depend on the settings, implementation features, and types of students targeted. This “Stated Briefly” report is a companion piece that summarizes the results of another report entitled The effects of increased learning time on students’ academic and nonacademic outcomes, released on July 9, 2014.