IES Blog

Institute of Education Sciences

Responding to the Needs of the Field

By Chris Boccanfuso, Education Research Analyst, NCEE

One of the most commonly asked questions about the Regional Educational Laboratory, or REL, program is how we choose the applied research and evaluation studies, analytic technical assistance, and dissemination activities we provide for free to stakeholders every year. The answer is simple – we don’t!

Instead, the REL staff at the Institute of Education Sciences (IES) and our contractors who run the nation’s ten regional labs listen to the voices of teachers, administrators, policymakers, and students to identify and address high-leverage problems of practice and build the capacity of stakeholders. In other words, these groups determine where the RELs should use their resources to make the largest, most lasting impact possible on practice, policy and, ultimately, student achievement.

How do the RELs do this? Through a variety of activities we collectively refer to as needs sensing. The following are a few examples of how the RELs engage in needs sensing:

Research Alliances: Research Alliances are a type of researcher–practitioner partnership where a group of education stakeholders convene around a specific topic of concern to work collaboratively to investigate a problem and build capacity to address it. Alliances can be made up of many types of stakeholders, such as teachers, administrators, researchers and members of community organizations. Alliances can vary in size and scope and address a variety of topics. For instance, alliances have formed to address topics as broad as dropout prevention and as specific as Hispanic students’ STEM performance. The vast majority of the RELs’ work is driven by these research alliances.

While the RELs’ 79 research alliances are incredibly diverse, one thing each alliance has in common is that they collectively develop a research agenda. These agendas can change, as the alliance continually weighs the questions and needs of various groups against the types of services and the resources available to address these needs. Not every need has to be addressed through a multi-year research study. Sometimes, it can be addressed through a workshop, a literature review, or a “Bridge Event,” where national experts on a given topic work with practitioners to provide the information that alliance members need, when they need it. Sometimes, a need is state- or district-specific, is related to the impact of a specific program, or covers a topic where the existing research literature is thin. In these cases, a research study may be most appropriate.

Governing Boards: Another way that RELs determine their work is through their Governing Boards. By law, each REL is required to have a Governing Board that includes the Chief State School Officer (or a designee) for each state, territory, or freely associated state in the region. The Board also includes carefully selected members who equitably represent each state, as well as a broad array of regional interests, such as educating rural and economically disadvantaged populations.
 
Governing Boards typically include a mix of people with experience in research, policy and teaching practice. Each Governing Board meets two to three times per year to discuss, direct, advise, and approve each REL project that occurs in that region. The intent is to ensure that the work being done by the REL is timely, high-leverage, equitably distributed across the region, and not redundant with existing efforts.                          

“Ask a REL”: A third way in which the RELs engage in needs sensing is through the Ask a REL service. Ask a REL is a publicly available reference desk service that functions much like a technical reference library. It provides requestors with references, referrals, and brief responses in the form of citations on research-based education questions. RELs are able to examine trends in the topics of Ask a REL requests to verify needs determined through other methods, as well as identify new topics that may warrant additional attention.   

RELs use many additional ways to explore the needs of their region, including scans of regional education news sites, reviews of recently published research, and Stakeholder Feedback Surveys that are filled out by alliance members and attendees at REL events.

RELs engage in this thorough, ongoing process to address authentic, high-leverage problems of practice in a variety of ways. In the coming months, we will share stories of the many projects that were informed by this needs sensing process. Stay tuned!

 

The Institute of Education Sciences at AERA

The American Educational Research Association (AERA) will hold its annual meeting April 8 through April 12 in Washington, D.C.—the largest educational research gathering in the nation. This will be a special meeting for AERA, as it is celebrating 100 years of advocating for the development and use of research in education. The program includes hundreds of sessions, including opportunities to learn about cutting-edge education research and to broaden and deepen the field.

About 30 sessions will feature staff from the Institute of Education Sciences (IES) discussing IES-funded research, evaluation, and statistics, as well as training and funding opportunities.

On Saturday, April 9, at 10:35 a.m., attendees will have a chance to meet the Institute’s leadership and hear about the areas of work that IES will be focusing on in the coming year. Speakers include Ruth Curran Neild, IES’ delegated director, and the leaders of the four centers in IES: Thomas Brock, commissioner of the National Center for Education Research (NCER); Peggy Carr, acting commissioner of the National Center for Education Statistics (NCES); Joy Lesnick, acting commissioner of the National Center for Education Evaluation and Regional Assistance (NCEE); and Joan McLaughlin, commissioner of the National Center for Special Education Research (NCSER).

On Monday, April 11, at 9:45 a.m., attendees can speak to one of several IES staffers who will be available at the Research Funding Opportunities—Meet Your Program Officers session. Program officers from NCER, NCSER, and NCEE will be on hand to answer questions about programs and grant funding opportunities. Several IES representatives will also be on hand Monday afternoon, at 4:15 p.m. for the Federally Funded Data Resources: Opportunities for Research session to discuss the myriad datasets and resources that are available to researchers.

NCES staff will lead sessions and present on a variety of topics, from The Role of School Finance in the Pursuit of Equity (Saturday, 12:25 p.m.) to Understanding Federal Education Policies and Data about English Learners (Sunday, April 10, 8:15 a.m.) and what we can learn from the results of PIAAC, a survey of adult skills (also Sunday, 8:15 a.m.). Dr. Carr will be a part of several sessions, including one on Sunday morning (10:35 a.m.) about future directions for NCES longitudinal studies and another on Monday morning (10 a.m.) entitled Issues and Challenges in the Fair and Valid Assessment of Diverse Populations in the 21st Century.

On Monday, at 11:45 a.m., you can also learn about an IES-supported tool, called RCT-YES, that is designed to reduce barriers to rigorous impact studies by simplifying estimation and reporting of study results (Dr. Lesnick will be among those presenting). And a team from the IES research centers (NCER/NCSER) will present Sunday morning (10:35 a.m.) on communication strategies for disseminating education research (which includes this blog!).

IES staff will also participate in a number of other roundtables and poster sessions. For instance, on Tuesday, April 12, at 8:15 a.m., grab a cup of coffee and attend the structured poster session with the Institute’s 10 Regional Educational Laboratories (RELs). This session will focus on building partnerships to improve data use in education.  REL work will also be featured at several other AERA sessions.  

Did you know that the National Library of Education (NLE) is a component of IES? On Friday and Monday afternoon, attendees will have a unique opportunity to go on a site visit to the library. You’ll learn about the library’s current and historical resources – including its collection of more than 20,000 textbooks dating from the mid-19th century. The Library offers information, statistical, and referral services to the Department of Education and other government agencies and institutions, and to the public.

If you are going to AERA, follow us on Twitter to learn more about our sessions and our work.  And if you are tweeting during one of our sessions, please include @IESResearch in your tweet. 

By Dana Tofig, Communications Director, IES

C-SAIL: Studying the Impact of College- and Career-Readiness Standards

The nationwide effort to implement college- and career-ready standards is designed to better prepare students for success after high school, whether that means attending a postsecondary institution, entering the work force, or some combination of both. But there is little understanding about how these standards have been implemented across the country or the full impact they are having on student outcomes.  

To fill that void, the Institute of Education Sciences (IES) funded a new five-year research center, the Center on Standards, Alignment, Instruction, and Learning (C-SAIL). The center is studying the implementation of college- and career-ready standards and assessing how the standards are related to student outcomes. The center is also developing and testing an intervention that supports standards-aligned instruction.

Andy Porter, of the University of Pennsylvania’s Graduate School of Education, is the director of C-SAIL and recently spoke with James Benson, the IES project officer for the center. Here is an edited version of that conversation.

You have been studying education standards for over 30 years. What motivated you to assemble a team of researchers and state partners to study college- and career-readiness standards?

Standards-based reform is in a new and promising place, with standards that might be rigorous enough to close achievement gaps that advocates have been fighting to narrow for the last 30 years. And with so many states implementing new standards, researchers have an unprecedented opportunity to learn how standards-based reform is best done. We hypothesize that standards-based reform has produced only modest effects thus far largely because those reforms stalled at the classroom door, so a focus of the Center will be how implementation is achieved and supported among teachers.

What are the main projects within the Center, and what are a few of the key questions that they are currently addressing?

We have four main projects. The first, an implementation study, asks, “How are state, district, and school-level educators making sense of the new standards, and what kinds of guidance and support is available to them?” We’re comparing and contrasting implementation approaches in four states—Kentucky, Massachusetts, Ohio and Texas. In addition to reviewing state policy documents, we’re surveying approximately 280 district administrators, 1,120 principals, and 6,720 teachers across (the same) four states, giving special attention to the experiences of English language learners and students with disabilities.

The second project is a longitudinal study that asks, “How are college- and career-readiness standards impacting student outcomes across all 50 states?” and “How are English language learners and students with disabilities affected by the new standards?” We’re analyzing data from the National Assessment of Educational Progress (NAEP) and other sources to estimate the effects of college- and career-readiness standards on student achievement, high school completion, and college enrollment. Specifically, we’re examining whether implementing challenging state academic standards led to larger improvements in student outcomes in states with lower prior standards than in states with higher prior standards.

The third project is the Feedback on Alignment and Support for Teachers (FAST) intervention study, in which we are building an original intervention designed to assist teachers in providing instruction aligned to their state’s standards. FAST features real-time, online, personalized feedback for teachers, an off-site coach to assist teachers in understanding and applying aligned materials, and school-level collaborative academic study teams in each school.

The fourth project is a measurement study to determine the extent to which instruction aligns with college- and career-readiness standards. C-SAIL is developing new tools to assess alignment between teachers' instruction and state standards in English language arts and math.

How do you envision working with your partner states in the next few years? How do you plan to communicate with states beyond those partnering with the Center?

We’ve already collaborated with our partner states–Kentucky, Massachusetts, Ohio, and Texas–on our research agenda, and the chief state school officer from each state, plus a designee of their choice, sits on our advisory board. Additionally, we’re currently working with our partner states on our implementation study and plan to make our first findings this summer on effective implementation strategies immediately available to them.

All states, however, will be able to follow our research progress and access our findings in myriad ways, including through our website. Our Fact Center features downloadable information sheets and the C-SAIL blog offers insights from our researchers and network of experts. We also invite practitioners, policymakers, parents and teachers to stay up-to-date on C-SAIL activities by subscribing to our newsletter, following us on Twitter, or liking us on Facebook.

Looking five years into the future, when the Center is finishing its work, what do you hope to understand about college- and career-readiness standards that we do not know now?

Through our implementation study, we will have documented how states are implementing new, challenging state academic standards; how the standards affect teacher instruction; what supports are most valuable for states, districts, and schools; and, how the new standards impact English language learners and students with disabilities.

Through our longitudinal study, we will have combined 50-state NAEP data with high school graduation rates and college enrollment data in order to understand how new standards impact student learning and college- and career-readiness.

Through our FAST Intervention, we will have created and made available new tools for teachers to monitor in real-time how well-aligned the content of their enacted curriculum is to their states’ college- and career-readiness standards in ELA and math.

Finally, we will have led policymakers, practitioners, and researchers in a national discussion of our findings and their implications for realizing the full effects of standards-based reform.

 

Statistical concepts in brief: How and why does NCES use sample surveys?

By Lauren Musu-Gillette

EDITOR’S NOTE: This is the first in a series of blog posts about statistical concepts that NCES uses as a part of its work. 

The National Center for Education Statistics (NCES) collects survey statistics in two main ways—universe surveys and sample surveys.

Some NCES statistics, such as the number of students enrolled in public schools or postsecondary institutions, come from administrative data collections. These data represent a nearly exact count of a population because information is collected from all potential respondents (e.g., all public schools in the U.S.). These types of data collections are also known as universe surveys because they involve the collection of data covering all known units in a population. The Common Core of Data (CCD), the Private School Universe Survey (PSS), and the Integrated Postsecondary Education Data System (IPEDS) are the key universe surveys collected by NCES.

While universe surveys provide a wealth of important data on education, data collections of this magnitude are not realistic for every potential variable or outcome of interest to education stakeholders. That is why, in some cases, we use sample surveys, which select smaller subgroups that are representative of a broader population of interest. Using sample surveys can reduce the time and expense that would be associated with collecting data from all members of a particular population of interest. 


Example of selecting a sample from a population of interest

The example above shows a simplified version of how a representative sample could be drawn from a population. The population shown here has 60 people, with 2/3 males and 1/3 females. The smaller sample of 6 individuals is drawn from this larger population, but remains representative with 2/3 males and 1/3 females included in the sample.
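The idea in the example above can be sketched in a few lines of code. This is a minimal, illustrative sketch using made-up data — it is not NCES code, and real survey designs involve far more elaborate stratification, clustering, and rounding rules:

```python
import random

def stratified_sample(population, key, n):
    """Draw a sample of size n that preserves the share of each
    group (stratum) found in the population, as in the example above."""
    # Group the population into strata by the given key (e.g., sex).
    strata = {}
    for person in population:
        strata.setdefault(person[key], []).append(person)

    sample = []
    for members in strata.values():
        # Each stratum contributes members in proportion to its
        # share of the population, rounded to a whole number.
        k = round(n * len(members) / len(population))
        sample.extend(random.sample(members, k))
    return sample

# A toy population of 60 people: 40 males (2/3) and 20 females (1/3).
population = ([{"sex": "M"} for _ in range(40)]
              + [{"sex": "F"} for _ in range(20)])

# A sample of 6 keeps the 2/3 male, 1/3 female split: 4 males, 2 females.
sample = stratified_sample(population, "sex", n=6)
```

Because each stratum is sampled in proportion to its population share, statistics computed on the sample (such as the share of females) mirror the population without surveying all 60 people.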


For instance, the National Postsecondary Student Aid Study (NPSAS), Baccalaureate and Beyond (B&B), and the Beginning Postsecondary Students Longitudinal Study (BPS) select institutions from the entire universe of institutions contained in the Integrated Postsecondary Education Data System (IPEDS) database. Then, some students within those schools are selected for inclusion in the study.

Schools and students are selected so that they are representative of the entire population of postsecondary institutions and students. Some types of institutions or schools can be sampled at higher rates than their representation in the population to ensure additional precision for survey estimates of that population. Through scientific design of the sample of institutions and appropriate weighting of the sample respondents, data from these surveys are nationally representative without requiring that all schools or all students be included in the data collection.
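To see how weighting restores representativeness after oversampling, consider a rough sketch with entirely hypothetical numbers (the values, weights, and group labels below are invented for illustration and do not come from any NCES survey):

```python
def weighted_mean(values, weights):
    """Estimate a population mean from a sample in which some groups
    were oversampled. Each respondent's sampling weight is the number
    of population members that respondent represents."""
    total_weight = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_weight

# Hypothetical example: students at small institutions are oversampled
# for precision, so each of them represents fewer population members
# (weight 50) than each student at a large institution (weight 200).
values  = [10, 12, 11, 20, 22]     # some outcome measured per student
weights = [200, 200, 200, 50, 50]  # sampling weights

estimate = weighted_mean(values, weights)
```

Here the unweighted mean of the five responses is 15, but the weighted estimate is about 12.4, because weighting scales the oversampled small-institution students back down to their true share of the population.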

Many NCES surveys are sample surveys. For example, NCES longitudinal surveys include nationally representative data for cohorts of students in the elementary grades (Early Childhood Longitudinal Study), the middle grades (Middle Grades Longitudinal Study), high school (High School Longitudinal Study), and college (Beginning Postsecondary Students). The National Household Education Survey gathers information on parental involvement in education, early childhood programs, and other topics using household residences rather than schools as the population. The National Postsecondary Student Aid Study gathers descriptive information on college students and their participation in student aid programs. Additionally, characteristics of teachers and principals and the schools in which they teach are obtained through the Schools and Staffing Survey and the National Teacher and Principal Survey.

By taking samples of the population of interest, NCES is able to study trends on a national level without needing to collect data from every student or every school. However, the structure and the size of the sample can affect the accuracy of the results for some population groups. This means that statistical testing is necessary to make inferences about differences between groups in the population. Stay tuned for future blogs about how this testing is done, and how NCES provides the data necessary for researchers or the public to do testing of their own.

Should ESSA Evidence Definitions and What Works Study Ratings be the Same? No, and Here's Why!

By Joy Lesnick, Acting Commissioner, NCEE

The Every Student Succeeds Act (ESSA), the new federal education law, requires education leaders to take research evidence into account when choosing interventions or approaches. ESSA  defines three “tiers” of evidence—strong, moderate, and promising—based on the type and quality of study that was done and its findings.  

Are the ESSA definitions the same as those of Institute of Education Sciences’ What Works Clearinghouse (WWC)?  Not exactly.  ESSA definitions and WWC standards are more like cousins than twins.

Like ESSA, the WWC has three ratings for individual studies – meets standards without reservations, meets standards with reservations, and does not meet standards. The WWC uses a second set of terms to summarize the results of all studies conducted on a particular intervention. The distinction between one study and many studies is important, as I will explain below.

You may be wondering: now that ESSA is the law of the land, should the WWC revise its standards and ratings to reflect the tiers and terminology described in ESSA?  Wouldn’t the benefit of making things nice and tidy between the two sets of definitions outweigh any drawbacks?

The short answer is no.

The most basic reason is that the WWC’s standards come from a decision-making process that is based in science and vetted through scholarly peer review, all protected by the independent, non-partisan status of the Institute of Education Sciences (IES). This fact is central to the credibility of the WWC’s work.  We like to think of the WWC standards as an anchor representing the best knowledge in the field for determining whether a study has been designed and executed well, and how much confidence we should have in its findings.

WWC Standards Reflect the Most Current Scientific Knowledge – and are Always Evolving

WWC standards were developed by a national panel of research experts. After nearly two years of meetings, these experts came to a consensus about what a research study must demonstrate to give us confidence that an intervention caused the observed changes in student outcomes.

Since the first WWC standards were developed over a decade ago, there have been many methodological and conceptual advances in education research. The good news is that the WWC is designed to keep up with these changes in science. As science has evolved, the WWC standards have evolved, too.

One example is the WWC’s standards for reviewing regression discontinuity (RD) design studies.  The first version of RD standards was developed by a panel of experts in 2012.  Since then, the science about RD studies has made so much progress that the WWC recently convened another panel of experts to update the RD standards. The new RD standards are now on the WWC website to solicit scholarly comment.  

When it Comes to Evidence, More is Better

The evidence tiers in ESSA set a minimum bar, based on one study, to encourage states, districts, and schools to incorporate evidence in their decision making. This is a very important step in the right direction.  But a one-study minimum bar is not as comprehensive as the WWC’s approach.

In science, the collective body of knowledge on a topic is always better than the result of a single study or observation. This is why the primary function of the WWC is to conduct systematic reviews of all of the studies on a program, policy, practice, or approach (the results of which are published in Intervention Reports).

The results of individual studies are important clues toward learning what works. But multiple studies, in different contexts, with different groups of teachers and students, in different states, and with different real-world implementation challenges tell us much more about how well a program, policy, practice or approach works. And that, really, is what we’re trying to find out.

An Improved WWC Search Tool and Ongoing Support for States and Districts

One area where WWC will make changes is in how users find studies that have certain characteristics described in ESSA’s evidence tiers. For the past 16 months, the WWC team has been hard at work behind the scenes to develop, code, and user-test a dramatically improved Find What Works tool. We expect to release this tool, along with other changes to the WWC website, in fall 2016. (More on that in another post!)

These changes should further increase the utility of the WWC website, which already gets more than 300,000 hits each month and offers products that are downloaded hundreds of thousands of times each year.

We know that providing information on a website about evidence from rigorous research is just a first step.  States and districts may need additional, customized support to incorporate evidence into their decision-making processes in ways that are much deeper than a cursory check-box approach.

To meet that need, other IES programs are ready to help. For example, IES supports 10 Regional Educational Laboratories (RELs) that provide states and districts with technical support for using, interpreting, and applying research. At least two researchers at every REL are certified as WWC reviewers (meaning they have in-depth knowledge of the WWC standards and how the standards are applied), and every REL has existing relationships with states and districts across the nation and outlying regions. Because the RELs are charged with meeting the needs of their regions, every chief state school officer (or designee) sits on a REL Governing Board, which determines the annual priorities of the REL in that area.

As states prioritize their needs and identify ways to incorporate evidence in their decisions according to the new law, the WWC database of reviewed studies will provide the information they need, and the RELs will be ready to help them use that information in meaningful ways.