IES Blog

Institute of Education Sciences

How to Help Low-performing Schools Improve

By Thomas Wei, Evaluation Team Leader

NOTE: Since 2009, the Department of Education has invested more than $6 billion in School Improvement Grants (SIG). SIG provided funds to the nation’s persistently lowest-achieving schools to implement one of four improvement models. Each model prescribed a set of practices, for example: replacing the principal, replacing at least 50 percent of teachers, increasing learning time, instituting data-driven instruction, and using “value-added” teacher evaluations.

Other than outcomes, how similar are our nation’s low-performing schools? The answers to this question could have important implications for how best to improve these, and other, schools. If schools share similar contexts, it may be more sensible to prescribe similar improvement practices than if they have very different contexts.

This is one of the central questions the National Center for Education Evaluation and Regional Assistance is exploring through its Study of School Turnaround. The first report (released in May 2014) described the experiences of 25 case study schools in 2010-2011, which was their first year implementing federal School Improvement Grants (SIG).

The report found that even though the 25 SIG schools all struggled with a history of low performance, they were actually quite different in their community and fiscal contexts, their reform histories, and the root causes of their performance problems. Some schools were situated in what the study termed “traumatic” contexts, with high crime, incarceration, abuse, and severe urban poverty. Other schools were situated in comparatively “benign” contexts with high poverty but limited crime, homes in good repair, and little family instability. All schools reported facing challenges with funding and resources, but some felt it was a major barrier to improvement while others felt it was merely a nuisance. Some schools felt their problems were driven by student behavior, others by poor instruction or teacher quality, and still others by the school’s external context such as crime or poverty.

Given how diverse low-performing schools appear to be, it is worth asking whether they need an equally diverse slate of strategies to improve. Indeed, the report found that the 25 case study schools varied in their improvement actions even with the prescriptive nature of the SIG models.

It is important to note that this study cannot draw any causal conclusions and that it is based on a small number of schools that do not necessarily reflect the experiences of all low-performing schools. Still, policymakers may wish to keep this finding in mind as they consider how to structure future school improvement efforts.

The first report also found that all but one of the 25 case study schools felt they made improvements in at least some areas after the first year of implementing SIG. Among the issues studied in the second report, released April 14, 2016, is whether these schools were able to build on their improvements in the second and third year of the grant. Read a blog post on the second report.

UPDATED APRIL 18 to reflect release of second report.

Responding to the Needs of the Field

By Chris Boccanfuso, Education Research Analyst, NCEE

One of the most commonly asked questions about the Regional Educational Laboratory, or REL, program is how we choose the applied research and evaluation studies, analytic technical assistance, and dissemination activities we provide for free to stakeholders every year. The answer is simple – we don’t!

Instead, the REL staff at the Institute of Education Sciences (IES) and our contractors who run the nation’s ten regional labs listen to the voices of teachers, administrators, policymakers, and students to identify and address high-leverage problems of practice and build the capacity of stakeholders. In other words, these groups determine where the RELs should use their resources to make the largest, most lasting impact possible on practice, policy and, ultimately, student achievement.

How do the RELs do this? Through a variety of activities we collectively refer to as needs sensing. The following are a few examples of how the RELs engage in needs sensing:

Research Alliances: Research Alliances are a type of researcher–practitioner partnership where a group of education stakeholders convene around a specific topic of concern to work collaboratively to investigate a problem and build capacity to address it. Alliances can be made up of many types of stakeholders, such as teachers, administrators, researchers and members of community organizations. Alliances can vary in size and scope and address a variety of topics. For instance, alliances have formed to address topics as broad as dropout prevention and as specific as Hispanic students’ STEM performance. The vast majority of the RELs’ work is driven by these research alliances.

While the RELs’ 79 research alliances are incredibly diverse, one thing every alliance has in common is that it collectively develops a research agenda. These agendas can change, as the alliance continually weighs the questions and needs of various groups against the types of services and the resources available to address those needs. Not every need has to be addressed through a multi-year research study. Sometimes, it can be addressed through a workshop, a literature review, or a “Bridge Event,” where national experts on a given topic work with practitioners to provide the information that alliance members need, when they need it. Sometimes, a need is state- or district-specific, is related to the impact of a specific program, or covers a topic where the existing research literature is thin. In these cases, a research study may be most appropriate.

Governing Boards: Another way that RELs determine their work is through their Governing Boards. By law, each REL is required to have a Governing Board that consists of the Chief State School Officers (or their designees) for each state, territory, or freely associated state in the region. The Board also includes carefully selected members who equitably represent each state, as well as a broad array of regional interests, such as educating rural and economically disadvantaged populations.
 
Governing Boards typically include a mix of people with experience in research, policy and teaching practice. Each Governing Board meets two to three times per year to discuss, direct, advise, and approve each REL project that occurs in that region. The intent is to ensure that the work being done by the REL is timely, high-leverage, equitably distributed across the region, and not redundant with existing efforts.                          

“Ask a REL”: A third way in which the RELs engage in needs sensing is through the Ask a REL service. Ask a REL is a publicly available reference desk service that functions much like a technical reference library. It provides requestors with references, referrals, and brief responses, in the form of citations, to research-based education questions. RELs can examine trends in the topics of Ask a REL requests to verify needs identified through other methods, as well as to spot new topics that may warrant additional attention.

RELs use many additional ways to explore the needs of their region, including scans of regional education news sites, reviews of recently published research, and Stakeholder Feedback Surveys that are filled out by alliance members and attendees at REL events.

It is a thorough and ongoing process that the RELs use to address authentic, high-leverage problems of practice in a variety of ways. In the coming months, we will share stories of the many projects that were informed by this needs sensing process. Stay tuned!

 

The Institute of Education Sciences at AERA

The American Educational Research Association (AERA) will hold its annual meeting April 8 through April 12 in Washington, D.C.—the largest educational research gathering in the nation. This will be a special meeting for AERA, as it celebrates 100 years of advocating for the development and use of research in education. The program includes hundreds of sessions, including opportunities to learn about cutting-edge education research and to broaden and deepen the field.

About 30 sessions will feature staff from the Institute of Education Sciences (IES) discussing IES-funded research, evaluation, and statistics, as well as training and funding opportunities.

On Saturday, April 9, at 10:35 a.m., attendees will have a chance to meet the Institute’s leadership and hear about the areas of work that IES will be focusing on in the coming year. Speakers include Ruth Curran Neild, IES’ delegated director, and the leaders of the four centers in IES: Thomas Brock, commissioner of the National Center for Education Research (NCER); Peggy Carr, acting commissioner of the National Center for Education Statistics (NCES); Joy Lesnick, acting commissioner of the National Center for Education Evaluation and Regional Assistance (NCEE); and Joan McLaughlin, commissioner of the National Center for Special Education Research (NCSER).

On Monday, April 11, at 9:45 a.m., attendees can speak to one of several IES staffers who will be available at the Research Funding Opportunities—Meet Your Program Officers session. Program officers from NCER, NCSER, and NCEE will be on hand to answer questions about programs and grant funding opportunities. Several IES representatives will also be on hand Monday afternoon, at 4:15 p.m. for the Federally Funded Data Resources: Opportunities for Research session to discuss the myriad datasets and resources that are available to researchers.

NCES staff will lead sessions and present on a variety of topics, from The Role of School Finance in the Pursuit of Equity (Saturday, 12:25 p.m.) to Understanding Federal Education Policies and Data about English Learners (Sunday, April 10, 8:15 a.m.) and what we can learn from the results of PIAAC, a survey of adult skills (also Sunday, 8:15 a.m.). Dr. Carr will be a part of several sessions, including one on Sunday morning (10:35 a.m.) about future directions for NCES longitudinal studies and another on Monday morning (10 a.m.) entitled Issues and Challenges in the Fair and Valid Assessment of Diverse Populations in the 21st Century.

On Monday, at 11:45 a.m., you can also learn about an IES-supported tool, called RCT-YES, that is designed to reduce barriers to rigorous impact studies by simplifying estimation and reporting of study results (Dr. Lesnick will be among those presenting). And a team from the IES research centers (NCER/NCSER) will present Sunday morning (10:35 a.m.) on communication strategies for disseminating education research (which includes this blog!).

IES staff will also participate in a number of other roundtables and poster sessions. For instance, on Tuesday, April 12, at 8:15 a.m., grab a cup of coffee and attend the structured poster session with the Institute’s 10 Regional Educational Laboratories (RELs). This session will focus on building partnerships to improve data use in education.  REL work will also be featured at several other AERA sessions.  

Did you know that the National Library of Education (NLE) is a component of IES? On Friday and Monday afternoon, attendees will have a unique opportunity to go on a site visit to the library. You’ll learn about the library’s current and historical resources – including its collection of more than 20,000 textbooks dating from the mid-19th century. The Library offers information, statistical, and referral services to the Department of Education and other government agencies and institutions, and to the public.

If you are going to AERA, follow us on Twitter to learn more about our sessions and our work.  And if you are tweeting during one of our sessions, please include @IESResearch in your tweet. 

By Dana Tofig, Communications Director, IES

C-SAIL: Studying the Impact of College- and Career-Readiness Standards

The nationwide effort to implement college- and career-ready standards is designed to better prepare students for success after high school, whether that means attending a postsecondary institution, entering the workforce, or some combination of the two. But there is little understanding of how these standards have been implemented across the country or the full impact they are having on student outcomes.

To fill that void, the Institute of Education Sciences (IES) funded a new five-year research center, the Center on Standards, Alignment, Instruction, and Learning (C-SAIL). The center is studying the implementation of college- and career-ready standards and assessing how the standards are related to student outcomes. The center is also developing and testing an intervention that supports standards-aligned instruction.

Andy Porter, of the University of Pennsylvania’s Graduate School of Education, is the director of C-SAIL and recently spoke with James Benson, the IES project officer for the center. Here is an edited version of that conversation.

You have been studying education standards for over 30 years. What motivated you to assemble a team of researchers and state partners to study college- and career-readiness standards?

Standards-based reform is in a new and promising place, with standards that might be rigorous enough to close achievement gaps that advocates have been fighting to narrow for the last 30 years. And with so many states implementing new standards, researchers have an unprecedented opportunity to learn how standards-based reform is best done. We hypothesize that standards-based reform has had only modest effects thus far largely because those reforms stalled at the classroom door, so a focus of the Center will be how implementation is achieved and supported among teachers.

What are the main projects within the Center, and what are a few of the key questions that they are currently addressing?

We have four main projects. The first, an implementation study, asks, “How are state, district, and school-level educators making sense of the new standards, and what kinds of guidance and support are available to them?” We’re comparing and contrasting implementation approaches in four states—Kentucky, Massachusetts, Ohio, and Texas. In addition to reviewing state policy documents, we’re surveying approximately 280 district administrators, 1,120 principals, and 6,720 teachers across the same four states, giving special attention to the experiences of English language learners and students with disabilities.

The second project is a longitudinal study that asks, “How are college- and career-readiness standards impacting student outcomes across all 50 states?” and “How are English language learners and students with disabilities affected by the new standards?” We’re analyzing data from the National Assessment of Educational Progress (NAEP) and other sources to estimate the effects of college- and career-readiness standards on student achievement, high school completion, and college enrollment. Specifically, we’re examining whether implementing challenging state academic standards led to larger improvements in student outcomes in states with lower prior standards than in states with higher prior standards.

The third project is the Feedback on Alignment and Support for Teachers (FAST) intervention study, in which we are building an original intervention designed to assist teachers in providing instruction aligned to their state’s standards. FAST features real-time, online, personalized feedback for teachers, an off-site coach to assist teachers in understanding and applying aligned materials, and school-level collaborative academic study teams in each school.

The fourth project is a measurement study to determine the extent to which instruction aligns with college- and career-readiness standards. C-SAIL is developing new tools to assess alignment between teachers' instruction and state standards in English language arts and math.

How do you envision working with your partner states in the next few years? How do you plan to communicate with states beyond those partnering with the Center?

We’ve already collaborated with our partner states–Kentucky, Massachusetts, Ohio, and Texas–on our research agenda, and the chief state school officer from each state, plus a designee of their choice, sits on our advisory board. Additionally, we’re currently working with our partner states on our implementation study, and we plan to make our first findings on effective implementation strategies, expected this summer, immediately available to them.

All states, however, will be able to follow our research progress and access our findings in myriad ways, including through our website. Our Fact Center features downloadable information sheets, and the C-SAIL blog offers insights from our researchers and network of experts. We also invite practitioners, policymakers, parents, and teachers to stay up to date on C-SAIL activities by subscribing to our newsletter, following us on Twitter, or liking us on Facebook.

Looking five years into the future, when the Center is finishing its work, what do you hope to understand about college- and career-readiness standards that we do not know now?

Through our implementation study, we will have documented how states are implementing new, challenging state academic standards; how the standards affect teacher instruction; what supports are most valuable for states, districts, and schools; and how the new standards impact English language learners and students with disabilities.

Through our longitudinal study, we will have combined 50-state NAEP data with high school graduation rates and college enrollment data to understand how new standards impact student learning and college- and career-readiness.

Through our FAST intervention, we will have created and made available new tools that let teachers monitor in real time how well aligned the content of their enacted curriculum is to their state’s college- and career-readiness standards in ELA and math.

Finally, we will have led policymakers, practitioners, and researchers in a national discussion of our findings and their implications for realizing the full effects of standards-based reform.

 

Statistical concepts in brief: How and why does NCES use sample surveys?

By Lauren Musu-Gillette

EDITOR’S NOTE: This is the first in a series of blog posts about statistical concepts that NCES uses as a part of its work. 

The National Center for Education Statistics (NCES) collects survey statistics in two main ways—universe surveys and sample surveys.

Some NCES statistics, such as the number of students enrolled in public schools or postsecondary institutions, come from administrative data collections. These data represent a nearly exact count of a population because information is collected from all potential respondents (e.g., all public schools in the U.S.). These types of data collections are also known as universe surveys because they involve the collection of data covering all known units in a population. The Common Core of Data (CCD), the Private School Universe Survey (PSS), and the Integrated Postsecondary Education Data System (IPEDS) are the key universe surveys collected by NCES.

While universe surveys provide a wealth of important data on education, data collections of this magnitude are not realistic for every potential variable or outcome of interest to education stakeholders. That is why, in some cases, we use sample surveys, which select smaller subgroups that are representative of a broader population of interest. Using sample surveys can reduce the time and expense that would be associated with collecting data from all members of a particular population of interest. 


Example of selecting a sample from a population of interest

The example above shows a simplified version of how a representative sample could be drawn from a population. The population shown here has 60 people, with 2/3 males and 1/3 females. The smaller sample of 6 individuals is drawn from this larger population, but remains representative with 2/3 males and 1/3 females included in the sample.
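To make the arithmetic concrete, here is a minimal sketch in Python (not NCES code) of proportional stratified sampling, the technique the example illustrates: draw at the same rate within each group so the sample keeps the population’s proportions. The population, labels, and sampling fraction are hypothetical, chosen to match the 60-person example above.

```python
import random

# Toy population mirroring the example: 60 people, 2/3 male (40)
# and 1/3 female (20). The labels are purely illustrative.
population = ([{"id": i, "sex": "male"} for i in range(40)]
              + [{"id": i, "sex": "female"} for i in range(40, 60)])

def stratified_sample(pop, strata_key, fraction, seed=0):
    """Draw a simple random sample within each stratum so the
    sample preserves the population's proportions."""
    rng = random.Random(seed)
    strata = {}
    for person in pop:
        strata.setdefault(person[strata_key], []).append(person)
    sample = []
    for group in strata.values():
        k = round(len(group) * fraction)   # same rate in every stratum
        sample.extend(rng.sample(group, k))
    return sample

sample = stratified_sample(population, "sex", fraction=6 / 60)
print(sorted(p["sex"] for p in sample))
# -> 4 "male" and 2 "female": the same 2/3 to 1/3 split as the population
```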


For instance, the National Postsecondary Student Aid Study (NPSAS), Baccalaureate and Beyond (B&B), and the Beginning Postsecondary Students Longitudinal Study (BPS) select institutions from the entire universe of institutions contained in the Integrated Postsecondary Education Data System (IPEDS) database. Then, a sample of students within those institutions is selected for inclusion in the study.

Schools and students are selected so that they are representative of the entire population of postsecondary institutions and students. Some types of institutions can be sampled at rates higher than their share of the population (that is, oversampled) to ensure additional precision for survey estimates of those groups. Through scientific design of the sample of institutions and appropriate weighting of the sample respondents, data from these surveys are nationally representative without requiring that all schools or all students be included in the data collection.
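A small worked example may help show why the weighting step matters. The sketch below (Python, not NCES code, with entirely hypothetical numbers) oversamples one stratum and then applies design weights, the inverse of each unit’s selection probability, to recover an estimate that reflects the population rather than the lopsided sample.

```python
# Two hypothetical strata: A is oversampled (20%) for extra
# precision; B is sampled at about 1.1%.
strata = [
    # (name, sampled_n, population_N, sample_mean_outcome)
    ("A", 200, 1_000, 52.0),
    ("B", 100, 9_000, 48.0),
]

# Unweighted sample mean: biased toward the oversampled stratum A.
naive = (sum(n * mean for _, n, _, mean in strata)
         / sum(n for _, n, _, _ in strata))
print(round(naive, 2))   # 50.67

# Design-weighted mean: each respondent gets weight N/n, the inverse
# of its selection probability, so stratum A no longer dominates.
weighted_sum = sum((N / n) * n * mean for _, n, N, mean in strata)
total_weight = sum((N / n) * n for _, n, N, _ in strata)
print(round(weighted_sum / total_weight, 2))   # 48.4, the population mean
```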

Many of the NCES surveys are sample surveys. For example, NCES longitudinal surveys include nationally representative data for cohorts of students in the elementary grades (Early Childhood Longitudinal Study), the middle grades (Middle Grades Longitudinal Study), high school (High School Longitudinal Study), and college (Beginning Postsecondary Students Longitudinal Study). The National Household Education Survey gathers information on parental involvement in education, early childhood programs, and other topics using household residences rather than schools as the population. The National Postsecondary Student Aid Study gathers descriptive information on a nationally representative sample of college students and their participation in student aid programs. Additionally, characteristics of teachers and principals and the schools in which they teach are obtained through the Schools and Staffing Survey and the National Teacher and Principal Survey.

By taking samples of the population of interest, NCES is able to study trends on a national level without needing to collect data from every student or every school. However, the structure and the size of the sample can affect the accuracy of the results for some population groups. This means that statistical testing is necessary to make inferences about differences between groups in the population. Stay tuned for future blogs about how this testing is done, and how NCES provides the data necessary for researchers or the public to do testing of their own.