NCEE Blog

National Center for Education Evaluation and Regional Assistance

Exploring the Growing Impact of Career Pathways

Career pathways programs for workforce development are spreading across the country at both the secondary and postsecondary levels. Based on a synthesis of studies examining career pathways programs that integrate postsecondary career-technical education (CTE), the What Works Clearinghouse (WWC)’s Designing and Delivering Career Pathways at Community Colleges practice guide presents five recommendations for implementing evidence-based practices:

  1. Intentionally design and structure career pathways to enable students to further their education, secure a job, and advance in employment.
  2. Deliver contextualized or integrated basic skills instruction to accelerate students’ entry into and successful completion of career pathways.
  3. Offer flexible instructional delivery schedules and models to improve credit accumulation and completion of non-degree credentials along career pathways.
  4. Provide coordinated comprehensive student supports to improve credit accumulation and completion of non-degree credentials along career pathways.
  5. Develop and continuously leverage partnerships to prepare students and advance their labor market success.

Led by the WWC’s postsecondary contractor, Abt Associates, this practice guide was created by an expert panel of researchers and practitioners to provide examples of career pathways strategies and components, along with guidance for implementing them; to advise on strategies for overcoming potential obstacles; and to summarize evidence from rigorous research studies that met WWC standards.

As a long-time researcher of postsecondary CTE and many other important aspects of community college education, I welcome the opportunity to reflect on these five recommendations. I hope that my blog will help readers understand how this new practice guide fits into a larger landscape of research focusing on programs, policies, and practices aligned with the career pathways framework. Far from new, the notion of career pathways goes back several decades; thus, it is not surprising that we see an evolution in research to measure students’ education and employment outcomes. And still, there is a need for more rigorous studies of career pathways.

The Abt team located about 16,000 studies that were potentially relevant to the practice guide. Those studies used a wide variety of methods, data (quantitative and qualitative), and analysis procedures. However, only 61 of them were eligible for review against the WWC standards, and only 21 of those met the standards. Interestingly, most of those 21 studies focused on non-degree postsecondary credentials rather than on college degrees, with policies and programs associated with workforce development and adult education well represented. Thus, lessons from the practice guide speak more directly to career pathways programs that culminate in credentials below the associate degree level than to those leading to the associate or baccalaureate degree.

This dearth of rigorous career pathways research is problematic, as educational institutions of all types, including community colleges, seek to deliver positive, equitable outcomes to students during and beyond the COVID-19 pandemic.

Focus on Career Pathways

After we examined the evidence from the studies that met the WWC standards, it was clear that the evidence converged around career pathways programs following requirements in the Strengthening Career and Technical Education for the 21st Century Act and the Workforce Innovation and Opportunity Act (WIOA). In alignment with the WIOA definition of career pathways, the set of studies in the practice guide examines a “combination of rigorous and high-quality education, training, and other services” that aligns with the skill needs of industries in the region or state and accelerates participants’ educational and career advancement, to the extent practicable.

As defined by WIOA, career pathways support learners in pursuing their education and career goals, lead to at least one postsecondary credential, and provide entry or advancement in a particular occupation or occupational cluster. Because a growing number of community colleges employ a career pathways approach, as advocated by the federal legislation, it made sense to focus the practice guide on rigorous results and evidence-based recommendations that may help to move career pathway design and implementation forward.

The Five Recommendations

Recommendation 1: Intentionally design and structure career pathways to enable students to further their education, secure a job, and advance in employment. Our panel advocated for the intentional design and structure of career pathways for good reason. Whereas all educational institutions enroll students in courses and programs, career pathways prioritize the student’s entire educational experience, from access and entry, to completion and credentialing, and on to employment and career advancement. This purposeful approach to supporting student attainment is theorized to lead to positive student outcomes.

Applying the meta-analysis process required by the WWC, we used the 21 studies to determine whether career pathways were achieving this crucial goal. We found that nine of the studies showed overall statistically significant, positive results on industry-recognized credential attainment. Of the 12 studies supporting this recommendation, most measured non-degree credentials; only two measured degree attainment—an important point to recognize, because these are the studies that have been conducted thus far.

This very small number of rigorous studies measuring degree attainment leaves open the question of whether career pathways increase postsecondary degree attainment—specifically the predominant credential in the community college context, the associate degree—and calls for greater investment in research on student completion of associate degrees (as well as baccalaureate degrees, a growing phenomenon in the United States).

Recommendation 2: Deliver contextualized or integrated basic skills instruction to accelerate students’ entry into and successful completion of career pathways. Studies that met WWC standards showed a positive impact of career pathways on college credit accumulation and industry-recognized credential attainment. Only one study measured postsecondary degree attainment relative to contextualized and basic skills instruction, and it reported statistically significant and negative results. However, descriptive and correlational studies suggest that contextualized and basic skills instruction contribute to positive educational outcomes for students enrolled in Adult Basic Education as well as in postsecondary CTE and workforce training.

The fact that results of rigorous research complement descriptive studies, some of which provide rich details on program implementation, is useful for scaling up community college career pathways. Having said this, we still need to know more about how contextualized basic skills instruction—and other applied instructional interventions—affect the outcomes of students, especially those from racially minoritized groups, those with low incomes, and those who are the first generation in their families to attend college, all of whom are purported to be well served by the career pathways approach.

Recommendation 3: Offer flexible instructional delivery schedules and models to improve credit accumulation and completion of non-degree credentials along career pathways. Studies supporting this recommendation focused on five education outcomes: industry-recognized credential attainment, academic performance, technical skill proficiency, credit accumulation, and postsecondary degree attainment. As seen with the previous two recommendations, results on industry-recognized credential attainment were statistically significant and positive. Results on academic performance, technical skill proficiency, and credit accumulation were indeterminate, meaning findings could be positive or negative but were not statistically significant.

What is important to reiterate here is that nearly all the studies that met the WWC standards focused on non-degree credentials, providing limited information about results on the education outcome of postsecondary degree attainment. To be clear, our panel is not saying career pathways should focus exclusively on non-degree credentials; rather, results on postsecondary degree attainment are simply not definitive. Even so, it is important to know now, as the country deals with the pandemic, that the findings linking flexible scheduling to non-degree credential attainment are positive.

Community colleges nationwide are rethinking instructional delivery to better meet students’ dire health, family, and employment needs. Rigorous research on career pathways interventions, such as flexible delivery, is needed, particularly studies involving diverse student populations. In times of economic and social struggle, it is essential that community college career pathways produce the equitable outcomes they purport to provide.

Recommendation 4: Provide coordinated comprehensive student supports to improve credit accumulation and completion of non-degree credentials along career pathways. The rigorous studies meeting WWC standards and measuring outcomes relative to comprehensive student supports focused on the education outcome domain only. As with the previous recommendation on flexible scheduling, findings on industry-recognized credential attainment were statistically significant and positive. Unlike flexible scheduling, however, findings on credit accumulation were also statistically significant and positive, reinforcing findings generated by other studies showing that holistic supports improve student outcomes. For example, a meta-analysis of studies of the Trade Adjustment Assistance Community College and Career Training grants that used rigorous evaluation designs reported favorable results for holistic supports in counseling and advising, case management, and various other support services and educational outcomes.

Consistent with the recommendations in this practice guide, a growing body of evidence favors integrating comprehensive student supports with career pathways. These supports are intended to meet the needs of the diverse population of students who attend community colleges; so, they should demonstrate equitable results on educational outcomes. More rigorous research is needed to measure whether and how career pathways provide access, opportunity, and outcomes for racially minoritized, low-income, and other underserved student groups. These studies should ascertain the impact of student supports on both education and employment outcomes, recognizing that students seek a high-quality credential and a good job that offers economic security and career mobility.

Recommendation 5: Develop and continuously leverage partnerships to prepare students and advance their labor market success. This recommendation specifically emphasizes labor market success, based on studies that examine labor market outcomes only. Supporting this recommendation were findings from studies of four labor market outcomes: short-term employment, short-term earnings, medium-term employment, and medium-term earnings. (The studies did not include long-term findings.)

Overall, statistically significant and positive outcomes were found in the meta-analysis for short-term employment, short-term earnings, and medium-term earnings. However, for medium-term employment, the meta-analysis results were indeterminate. To clarify, this does not mean employment-focused partnerships do not lead to labor market success; instead it points to a dearth of research that tracks students through training and into employment for long enough to measure long-term outcomes.

Even so, these initial findings from the meta-analysis are promising and suggest that developing and leveraging such partnerships may help move the needle on short- and medium-term employment outcomes. Longitudinal research that tracks students for periods sufficient to know whether long-term employment and earnings are affected should be a priority in the future.

Moving Forward

As I reflect on the research that I have conducted on career pathways over the years, I am gratified to see mounting evidence of positive student outcomes. As a first-generation college student myself, it has always made sense to me to demystify the college education process. Helping learners understand the entire educational journey, from start to finish, is bound to help them see how what they are learning may contribute to future education and career choices. I went to college not knowing what it would be like or whether I would be able to succeed, and I benefited from faculty and advisors who helped me see how my future could progress.

For other students like me who enter college without the benefit of family members sharing their stories of college-going, and for those who have to balance school with work and family care-taking responsibilities, it is important to know how a college education, including postsecondary CTE, can lead to positive educational and employment outcomes. Student groups underserved by postsecondary education deserve our most resolute and far-reaching efforts.

To this end, additional rigorous evidence on the impact of postsecondary CTE on college degree attainment could help to inform career pathways design, funding, and implementation. Also, as I reflected on the five recommendations, I was struck by the modest amount of research on medium-term labor market outcomes and the lack of any studies of long-term labor market outcomes. When the focus of career pathways is creating a path to living-wage employment and career advancement over the long term, it isn’t enough to know that students’ immediate employment outcomes were improved. When many students attending community colleges are already working, it isn’t even clear what immediate employment means.

If the outcome of interest for the majority of community college students who are adults and working is whether they get a better job and higher salary than they were getting pre-education, more nuanced measures and longer follow-up periods are needed than those provided by any of the research reviewed for this practice guide. It seems to me that finding more evidence of medium- and long-term outcomes could also provide more useful evidence of how career pathways work for diverse learner groups who are under-studied at the present time.

I was honored to help develop the practice guide with Hope Cotner, Grant Goold, Eric Heiser, Darlene Miller, and Michelle Van Noy. What an enormously gratifying experience it was to work with these professionals, the WWC team at Abt, and the Institute of Education Sciences staff. Working on this practice guide has left me feeling more optimistic about what we could learn with a more sizeable federal investment in research on postsecondary CTE in general, and on career pathways specifically. Rigorous evidence is needed to test models, explore interventions, and understand results for the plethora of learner groups who attend community colleges.

As the nation struggles to pull out of the pandemic that continues to rage in pockets across the country, it is the right time to invest in research that helps prepare students for good jobs that advance living-wage careers over a lifetime. A true commitment to equity in CTE programming is necessary for the nation, and now is the time to invest.

_____________________________________________________________________________________________________________

Debra D. Bragg, PhD, is president of Bragg & Associates, Inc., and the founder of research centers focusing on community college education at the University of Illinois at Urbana-Champaign and the University of Washington. She spent the first 15 years of her career in academe studying postsecondary CTE for federally funded research centers, having devoted her entire research agenda to improving education- and employment-focused policies, programs, and practices to create more equitable outcomes for community college students. She served as an expert panelist for the What Works Clearinghouse (WWC)’s Designing and Delivering Career Pathways at Community Colleges practice guide.

An open letter to Superintendents, as summer begins

If the blockbuster attendance at last month’s Summer Learning and Enrichment Collaborative convening is any sign, many of you are in the midst of planning—or have already started to put in place—your plans for summer learning. As you take time to review resources from the Collaborative and see what’s been learned from the National Summer Learning Project, I’d like to add just one more consideration to your list: please use this summer as a chance to build evidence about “what works” to improve outcomes for your students. In a word: evaluate!

Given all the things that need to be put in place just to make summer learning happen, it’s fair to ask why evaluation merits even a passing thought.

I’m urging you to consider building evidence about the outcomes of your program through evaluation because I can guarantee you that, in about a year, someone to whom you really want to give a thorough answer will ask, “So, what did we accomplish last summer?” (Depending upon who they are, and what they care about, that question can vary. Twists can include “What did students learn?” or business officers’ pragmatic “What bang did we get for that buck?”)

When that moment comes, I want you to be able to smile, take a deep breath, and rattle off the sort of polished elevator speech that good data, well-analyzed, can help you craft. The alternative—mild to moderate panic followed by an unsatisfying version of “well, you know, we had to implement quickly”—is avoidable. Here’s how.

  1. Get clear on outcomes. You probably have multiple goals for your summer learning programs, including those that are academic, social-emotional, and behavioral. Nonetheless, there’s probably a single word (or a short phrase) that completes the following sentence: “The thing we really want for our students this summer is …” It might be “to rebuild strong relationships between families and schools,” “to be physically and emotionally safe,” or “to get back on track in math.” Whatever it is, get clear on two things: (1) the primary outcome(s) of your program and (2) how you will measure that outcome once the summer comes to an end. Importantly, you should consider outcome measures that will be available for both program participants and non-participants so that you can tell the story about the “value add” of summer learning. (You can certainly also include measures relevant only to participants, especially ones that help you track whether you are implementing your program as designed.)
  2. Use a logic model. Logic models are the “storyboard” of your program, depicting exactly how its activities will come together to cause improvement in the student outcomes that matter most. Logic models force program designers to be explicit about each component of their program and its intended impact. Taking time to develop a logic model can expose potentially unreasonable assumptions and missing supports that, if added, would make it more likely that a program succeeds. If you don’t already have a favorite logic model tool, we have resources available for free!  
  3. Implement evidence-based practices aligned to program outcomes. A wise colleague (h/t Melissa Moritz) recently reminded me that a summer program is the “container” (for lack of a better word) in which other learning experiences and educationally purposeful content are packaged, and that there are evidence-based practices for the design and delivery of both. (Remember: “evidence-based practices” run the gamut from those that demonstrate a rationale to those supported by promising, moderate, or strong evidence.) As you are using the best available evidence to build a strong summer program, don’t forget to ensure you’re using evidence-based practices in service of the specific outcomes you want those programs to achieve. For example, if your primary goal for students is math catch-up, then the foundation of your summer program should be an evidence-based Tier I math curriculum. If it is truly important that students achieve the outcome you’ve set for them, then they’re deserving of evidence-based educational practices supported by an evidence-based program design!
  4. Monitor and support implementation. Once your summer program is up and running, it’s useful to understand just how well your plan—the logic model you developed earlier—is playing out in real life. If staff trainings were planned, did they occur and did everyone attend as scheduled? Are activities occurring as intended, with the level of quality that was hoped for? Are attendance and engagement high? Monitoring implementation alerts you to where things may be “off track,” flagging where more supports for your team might be helpful. And, importantly, it can provide useful context for the outcomes you observe at the end of the summer. If you don't already have an established protocol for using data as part of continuous improvement, free resources are available!
  5. Track student attendance. If you don’t know who—specifically—participated in summer learning activities, describing how well those activities “worked” can get tricky. Whether your program is full-day, half-day, in-person, hybrid, or something else, develop a system to track (1) who was present, (2) on what days, and (3) for how long. Then, store that information in your student information system (or another database) where it can be accessed later. 
  6. Analyze and report your data, with an explicit eye toward equity. Data and data analysis can help you tell the story of your summer programming. Given the disproportionate impact COVID has had on students that many education systems have underserved, placing equity at the center of your planned analyses is critical. For example:
    • Who did—and who did not—participate in summer programs? Data collected to monitor attendance should allow you to know who (specifically) participated in your summer programs. With that information, you can prepare simple tables that show the total number of participants and that total broken down by important student subgroups, such as gender, race/ethnicity, or socioeconomic status. Importantly, those data for your program should be compared with similar data for your school or district (as appropriate). Explore, for example, whether there are one or more populations disproportionately underrepresented in your program and the implications for the work both now and next summer.
    • How strong was attendance? Prior research has suggested that students benefit the most from summer programs when they are “high attenders.” (Twenty or more days out of programs’ typical 25 to 30 total days.) Using your daily, by-student attendance data, calculate attendance intensity for your program’s participants overall and by important student subgroups. For example, what percentage of students attended 0 to 24%, 25% to 49%, 50% to 74%, or 75% or more of program days? (A minimal analysis sketch follows this list.)
    • How did important outcomes vary between program participants and non-participants? At the outset of the planning process, you identified one or more outcomes you hoped students would achieve by participating in your program and how you’d measure them. In the case of a “math catch-up” program, for example, you might be hoping that more summer learning participants get a score of “on-grade level” at the outset of the school year than their non-participating peers, potentially promising evidence that the program might have offered a benefit. Disaggregating these results by student subgroups when possible highlights whether the program might have been more effective for some students than others, providing insight into potential changes for next year’s work.     
    • Remember that collecting and analyzing data is just a means to an end: learning to inform improvement. Consider how involving program designers and participants—including educators, parents, and students—in discussions about what was learned as a result of your analyses can be used to strengthen next year’s program.
  7. Ask for help. If you choose to take up the evaluation mantle to build evidence about your summer program, bravo! And know that you do not have to do it alone. First, think locally. Are you near a two-year or four-year college? Consider contacting their education faculty to see whether they’re up for a collaboration. Second, explore whether your state has a state-wide “research hub” for education issues (e.g., Delaware, Tennessee) that could point you in the direction of a state or local evaluation expert. Third, connect with your state’s Regional Educational Lab or Regional Comprehensive Center for guidance or a referral. Finally, consider joining the national conversation! If you would be interested in participating in an Evaluation Working Group, email my colleague Melissa Moritz at melissa.w.moritz@ed.gov.
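
The list above describes a small data workflow: track who attended and for how long, compute attendance intensity in bands, and compare outcomes for participants and non-participants by subgroup. The sketch below shows, in Python with pandas, one minimal way such a tally might look. Every name and value in it (student_id, subgroup, days_attended, on_grade_level, the 30-day program length, and the example records) is a hypothetical placeholder rather than a prescribed format; adapt it to whatever your student information system actually stores.

```python
import pandas as pd

# Illustrative attendance records: one row per participating student.
# In practice, pull these from your student information system (SIS).
attendance = pd.DataFrame({
    "student_id":    [101, 102, 103, 104, 105, 106],
    "subgroup":      ["A", "A", "B", "B", "B", "A"],  # e.g., race/ethnicity or FRPL status
    "days_attended": [28, 12, 25, 5, 22, 30],
})

PROGRAM_DAYS = 30  # assumed number of days offered; use your actual program calendar

# Attendance intensity: share of offered days each student attended.
attendance["pct_attended"] = attendance["days_attended"] / PROGRAM_DAYS * 100

# Bin into the intensity bands described above (0-24%, 25-49%, 50-74%, 75%+).
bins = [0, 25, 50, 75, 101]
labels = ["0-24%", "25-49%", "50-74%", "75%+"]
attendance["intensity"] = pd.cut(
    attendance["pct_attended"], bins=bins, labels=labels, right=False
)

# Counts of students in each intensity band, overall and by subgroup.
print(attendance["intensity"].value_counts().sort_index())
print(
    attendance.groupby(["subgroup", "intensity"], observed=False)
    .size()
    .unstack(fill_value=0)
)

# A simple descriptive comparison of an outcome (here, a hypothetical
# "on grade level in math" flag from a fall screener) for participants
# versus non-participants, disaggregated by subgroup. This describes
# differences; it does not by itself establish the program's impact.
roster = pd.DataFrame({
    "student_id":     [101, 102, 103, 104, 105, 106, 201, 202, 203],
    "subgroup":       ["A", "A", "B", "B", "B", "A", "A", "B", "B"],
    "participant":    [True, True, True, True, True, True, False, False, False],
    "on_grade_level": [1, 0, 1, 0, 1, 1, 0, 1, 0],
})
print(roster.groupby(["participant", "subgroup"])["on_grade_level"].mean().round(2))
```

Even a few lines like these can produce the participation, attendance, and outcome tables described above; the harder work is interpreting those patterns with your team and deciding what they mean for next summer’s program.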

Summer 2021 is shaping up to be one for the record books. For many students, summer is a time for rest and relaxation. But this year, it will also be a time for reengaging students and families with their school communities and, we hope, a significant amount of learning. Spending time now thinking about measuring that reengagement and learning—even in simple ways—will pay dividends this summer and beyond.

My colleagues and I at the U.S. Department of Education are here to help, and we welcome your feedback. Please feel free to contact me directly at matthew.soldner@ed.gov.

Matthew Soldner
Commissioner, National Center for Education Evaluation and Regional Assistance

Teachers Should Not Be Left Wondering What Works

The past two school years have posed many new and unexpected challenges for students and teachers. One thing that has not changed much is that educators continue to need quick access to evidence on strategies that can best support students. The What Works Clearinghouse (WWC), an initiative of the U.S. Department of Education’s Institute of Education Sciences, aims to meet these needs with ready-to-use practices supported by evidence. The WWC Practice Guides describe these practices and how to implement them, most recently in the new guide for assisting students struggling in mathematics. These Practice Guides contain the classroom strategies and tips that are most likely to help improve student outcomes.

More than two dozen free Practice Guides address challenges educators face in teaching math, reading, and writing; supporting positive student behavior; and preventing dropout. The recommendations in Practice Guides are based on evidence from well-designed and well-implemented studies, the experiences of practitioners, and the expert opinions of a panel of nationally recognized experts.

Ann Jolly, an instructional program manager at the Charlotte-Mecklenburg Schools’ Program for Exceptional Children, has used WWC Practice Guides for years. She describes her experiences using the WWC resources below. Her experiences may help teachers or instructional leaders understand how to better incorporate evidence-based practices into their own practice.


The COVID-19 pandemic has us all wondering where the time goes. We want to use the most promising evidence-based practices to support our students. However, as expressed by one teacher who understands how easy it is to forget about trying out something new in the face of day-to-day demands, “Yeah, you just get busy teaching…”

Whether you are a new teacher trying to figure out how to balance teaching, lesson planning, grading, and other duties, or a veteran who is “busy teaching,” you should check out the WWC. The WWC, created by the U.S. Department of Education, is an easy-to-navigate website with valuable resources. I know that, as teachers, we are constantly seeking out resources that will enable us to provide the best instruction to our students. The WWC can help by searching for research, reviewing studies for quality, and summarizing findings, so that busy teachers like us can focus on our students! Here’s a quick look at some of the WWC resources I have used to make a difference in my school and district as an instructional leader collaborating with teachers and families.

When I needed help boosting reading comprehension among my special education students, I used the WWC Practice Guide Improving Reading Comprehension in Kindergarten Through 3rd Grade. This guide provided me with recommendations of practices and other relevant information that the WWC gathered to support classroom instruction. For example, I was able to quickly see that teaching students how to use reading comprehension strategies had the strongest evidence, so I knew to focus on that. The guide gave me easy-to-understand resources about how to bring the strategies into my classroom, plus videos and reference tools with examples. These were easy to digest and I was able to immediately implement the recommendations in my classroom.

When I needed strategies to support literacy at home and in school, I used the WWC Practice Guide Foundational Skills to Support Reading for Understanding in Kindergarten Through 3rd Grade and its supplemental resources. Not only does the guide include a wealth of information for teachers, but companion documents include a summary of recommendations, a Professional Learning Communities Facilitator’s Guide, and Tips for Supporting Reading Skills at Home. I used the last tool to develop a presentation for parents. Parents took notes and asked questions as they made connections between the guide and the practices they could use at home with their children. Finding opportunities like this one to build relationships between teachers and parents may be even more important now, during a pandemic, than it was when I held this workshop. 

When my school was looking for strategies to improve student behavior, I facilitated a book club with school staff using the WWC Practice Guide Reducing Behavior Problems in the Elementary School Classroom. I began the club after noticing that other teachers were coming to me for suggestions about a common pattern of behaviors interfering with student learning. This WWC guide offered several strategies to share. Although we started by discussing a specific behavioral issue and a recommended practice to address it, we eventually worked through the whole guide, chapter by chapter. The WWC Practice Guide gave us a free resource with powerful evidence-based strategies and practices for us to try. Teachers across grade levels and content areas actively collaborated through the book club and were able to build a common language and understanding about schoolwide practices. One of the great embedded features in WWC Practice Guides is the “Obstacles” or “Roadblocks” section. This feature acknowledges perceived and actual barriers to implementing evidence-based practices and suggests solutions to overcome them!

The WWC has created a wide range of other Practice Guides, covering students from early childhood through high school graduation (and beyond). The most recent products include Assisting Students Struggling with Mathematics: Intervention in the Elementary Grades, a Practice Guide for educators in grades K to 6 that provides ready-to-use strategies for assisting struggling students. Some of my colleagues have used the guides on Teaching Secondary Students to Write Effectively, Teaching Math to Young Children, and Using Student Achievement Data to Support Instructional Decision Making. So many more Practice Guides are available!

I also encourage you to sign up now for the WWC News Flash and add the WWC to your social media network on Twitter, Facebook, or YouTube to easily keep up with the most current information. Research evidence on “what works” in education is there just for you. When you have a question, rely on the WWC…and don’t be left wondering what works!

This blog was written by Ann C. Jolly, Instructional Program Manager, Programs for Exceptional Children at Charlotte-Mecklenburg Schools, with Dana Rotz, Mathematica.

When “More Research is Needed” Is the Key Finding: Improving the Evidence Base for Nonacademic Interventions for Postsecondary Success in Rural and High-Poverty Contexts

Stakeholders in rural and high-poverty districts in Regional Educational Laboratory (REL) Appalachia’s region have noticed a troubling trend: many students graduate from high school academically well prepared but fail to enroll in college or enroll in college only to struggle and drop out within the first year. Stakeholders believe these high-performing students may face nonacademic challenges to postsecondary success, such as completing financial aid paperwork, securing transportation and housing at colleges far from home, or adjusting to campus life. To address these challenges, education leaders are looking for interventions that address nonacademic competencies: the knowledge, skills, and behaviors that enable students to navigate the social, cultural, and other implicit demands of postsecondary study.

To fill this need, REL Appalachia researchers conducted a review of the existing evidence of the impact of nonacademic interventions – that is, those designed to address nonacademic competencies – on postsecondary enrollment, persistence, and completion. The review had a particular focus on identifying interventions that also have evidence of effectiveness in communities serving students similar to those in Appalachia—high-poverty, rural students. Only one intervention, Upward Bound, demonstrated impact in rural, high-poverty communities. The review showed that Upward Bound, as implemented in the early 1990s, benefited high-poverty rural students’ college enrollment, with no demonstrated impact on persistence or completion.

Schools and communities need access to nonacademic interventions that benefit students served in high-poverty rural communities. Researchers: read on to learn more about the methods used in the evidence review, its findings, and steps you can take to support rural and high-poverty communities in improving enrollment and success in postsecondary education!

Nonacademic challenges to postsecondary success for rural students

All students face nonacademic challenges to postsecondary success, but rural populations and high-poverty populations in particular may benefit from interventions addressing those challenges because they enroll in and complete college at significantly lower rates than their nonrural or low-poverty peers. Although academic challenges contribute to this gap, rural and high-poverty populations also face unique nonacademic challenges to postsecondary enrollment and success. For example, rural students are less likely to encounter college-educated role models and high-poverty students often face inadequate college counseling at their schools (see research here, here, and here). As a result, rural and high-poverty students may have inadequate access to knowledgeable adults who can help them understand the steps needed to enroll or prepare them for the challenges of persisting in postsecondary education.  Nonacademic interventions can support students in developing the knowledge, skills, and behaviors necessary to overcome these challenges and improve postsecondary enrollment and success for rural and high-poverty students.

The need for evidence-based interventions

To support decisionmakers at rural and high-poverty schools in identifying evidence-based nonacademic interventions, researchers at REL Appalachia conducted an extensive search of the published research. The search looked for rigorous studies of nonacademic interventions with evidence of positive impact on college enrollment, persistence, performance, and completion for students attending rural schools or who were identified as high poverty. The purpose of the project was to identify a suite of interventions to recommend to these education leaders.

The results of our review indicate there may be gaps in the evidence available to all decisionmakers who are trying to help their students succeed in postsecondary education. The search first identified any studies that focused on postsecondary outcomes of nonacademic interventions serving students ages 5–19. Of the 1,777 studies with the relevant keywords, only 65 focused on the postsecondary outcomes of nonacademic interventions. Next, we evaluated these 65 studies against the What Works Clearinghouse (WWC) design standards, which assess the quality of evaluation study designs. Only 17 studies met WWC’s rigorous study design standards with or without reservations. Finally, researchers from REL Appalachia identified studies that showed positive impacts on students overall, and studies that looked at rural students and students identified as high poverty in particular. Only eight studies showed positive, statistically significant impacts on students’ postsecondary enrollment or success overall. Of the eight studies that showed positive impacts of nonacademic interventions on postsecondary outcomes, only three focused on high-poverty populations, and only one reported specifically on rural populations.

Figure: Number of studies remaining at each stage of screening. The original searches returned 1,777 unique studies; 65 of these focused on postsecondary outcomes of nonacademic interventions with students ages 5 to 19; 17 of those also met WWC standards; and 8 met all criteria and had a positive effect on postsecondary outcomes.

 Without additional research that focuses on low-income and rural contexts, schools and districts are left to implement programs with limited or no evidence of effectiveness. For example, the Quantum Opportunity Program (QOP) provides mentors to students as part of a long-term after school program. However, WWC reviews of QOP studies (here and here) showed indeterminate effects of the program on postsecondary outcomes. The lack of evidence should not detract from the important role QOP has in serving students, but it leaves open the question of whether those efforts are having the intended effects. With few clear alternatives, schools and districts continue to implement programs with limited evidence of effectiveness.

Action steps

Nationwide, 19 percent of U.S. public school students are enrolled in a rural school, and 24 percent are enrolled in a high-poverty school. To help districts and schools provide effective supports to those students, researchers can provide high-quality evidence on the effectiveness of nonacademic interventions in these contexts.

Carry out more studies on specific interventions designed to improve nonacademic competencies. REL Appalachia’s review found that the research on nonacademic competencies often focuses on defining the competencies themselves, rather than on studying interventions designed to develop the competencies. Of the 1,777 unique studies identified in our review, only 65 (3 percent) studied outcomes of interventions designed to improve nonacademic competencies. From these, we identified only 17 studies, representing nine interventions, with sufficiently rigorous designs to examine evidence of effectiveness.

The limited availability of rigorous evaluations of interventions suggests that, as researchers, we need to increase our focus on evaluating new interventions as they are developed or tested. Decisionmakers rarely design their own programs or interventions from scratch; they need to be able to identify existing programs and policies that are within their power to implement and have been proven effective in similar communities. Researchers can help decisionmakers select and implement successful interventions by providing evidence on whether interventions that develop students’ nonacademic competencies have positive effects on students’ postsecondary outcomes.

Design studies to generalize to rural and high-poverty populations. As researchers, we can also increase our focus on rural and high-poverty populations. REL Appalachia’s review found only three studies that focused on a high-poverty population and one that focused on a rural population. As researchers, we can address this gap in two ways: (a) we can carry out more studies specifically focused on rural and high-poverty areas; and (b) when using large national datasets or multi-site studies, we can consider rural and high-poverty populations in our sampling and disaggregate our results for these populations.

Summary

Stakeholders in rural and high-poverty contexts are looking for nonacademic interventions that will be effective with their students. To that end, REL Appalachia carried out an extensive review of evidence-based interventions. The review found few rigorous studies of nonacademic interventions, and even fewer that examined findings for students identified as high poverty or in rural settings. Without additional research, schools and districts serving rural and high-poverty populations may implement interventions that are not designed for their circumstances and may not achieve intended outcomes. As a result, resources may be wasted while rural and high-poverty students receive inadequate support for postsecondary success. In addition to investing in rigorous studies, which can take a long time to complete, researchers and practitioners can also collaborate to implement short-term research methods to identify early indicators of the success of these programs. For example, researchers may be able to support schools and districts in developing descriptive studies examining change over time or change in formative assessment outcomes.

Researchers have a role in helping more high school graduates from rural communities enroll, persist, and succeed in postsecondary education.

Rural and high-poverty schools and districts have unique strengths and challenges, and the lack of information about how interventions perform in those contexts presents a dilemma for decisionmakers: do nothing, or else muddle through with existing evidence, investing in interventions that don’t address local needs. As researchers, we can help resolve this dilemma by providing rigorous evidence about effective interventions tailored to rural and high-poverty contexts, as well as supporting practitioners in using more accessible methods to investigate the short-term outcomes of the programs they are already implementing.

 

by Rebecca A. Schmidt and CJ Park, Regional Educational Laboratory Appalachia

Advancing High-Quality Data and Evidence at the U.S. Department of Education

March 5, 2021: A post from Greg Fortelny, Chief Data Officer and Matt Soldner, Evaluation Officer, U.S. Department of Education

Last year, the education landscape changed dramatically as the effects of the coronavirus swept across the country. Overnight, families were confronted with the twin challenges of keeping their children, loved ones, and communities safe while establishing learning environments which enabled students to succeed and achieve. With each passing day, our schools are one step nearer recovery. But here at the U.S. Department of Education (ED), our work is far from done. Among the many lessons learned in the wake of the pandemic is that we must take full advantage of every opportunity to strengthen education systems and improve outcomes for all learners. From where we sit, making the most of those opportunities depends on two things: high-quality data and evidence.

Basing education policy and practice in strong evidence that is rooted in high-quality data can accelerate learning for all students, speeding efforts to recover from the pandemic’s effects. As the stewards of education data and evidence at ED, it is our charge from the Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act) to improve the collection, analysis, and use of high-quality data and evidence. By doing so, we hope to help educators and policymakers at the federal, state, and local levels make the most effective decisions possible on behalf of the learners, families, and communities they serve.

In the two years since the passage of the Evidence Act, the Department’s Office of the Chief Data Officer (OCDO) has made progress in supporting ED’s mission to improve education outcomes by effectively leveraging data to support evidence-based policy and data-driven decision-making.  The Department’s Data Governance Board (DGB) was created to lead these efforts and, with the launch of its inaugural Data Strategy in December 2020, ED has established guidance and goals to go further to improve data quality and enable evidence-building in service of our nation’s learners.

The work of evidence-building is a collaborative effort, coordinated by ED’s Evaluation Officer housed at the Department’s Institute of Education Sciences. In this first phase of Evidence Act implementation, the Department has published a new agency-wide evaluation policy that governs the generation of its most rigorous evidence and is preparing to release its inaugural Annual Evaluation Plan. As part of the agency’s strategic planning process, ED will also develop and publish its first-ever Learning Agenda, documenting its evidence-building priorities for the next four years.

Even before the passage of the Evidence Act, ED made data and evidence a priority. For decades, ED has been collecting and publishing data on students, teachers, schools, colleges, grants, student aid, and more. Now, with the launch of ED’s Open Data Platform (ODP) in December 2020, educators, researchers, stakeholders, decision-makers, and the public can explore the array of taxpayer-funded education data and profiles through a user-friendly interface, with all data accessible from one central online repository.

At OCDO, we developed the ODP to link to research and ED data tools that serve to engage and inform the public through various displays of that publicly available data.  One rich example of these tools is the recently enhanced College Scorecard.  Visited by more than 1.4 million users in 2020, ED’s College Scorecard now enables students and their advocates to more easily search field of study identifiers and compare similar fields of study within an institution or across different institutions.  And with recent updates including  information on loan repayment rates and parent PLUS loan debt, prospective students now have even more data to make more informed enrollment decisions and to find the right postsecondary fit. 

In addition to making existing data more accessible to decision-makers, the Department invests in new discoveries in the education sciences that have the potential to dramatically improve student outcomes and strengthen education systems. For nearly 20 years, the Department’s Institute of Education Sciences (IES) has worked to bring rigorous, independent, and objective education statistics, research, and evaluation to bear on challenges from early childhood to adult and postsecondary education.   

Through its National Center for Education Evaluation and Regional Assistance (NCEE), IES supports several programs dedicated to improving the use of data and evidence in education practice. NCEE’s Regional Educational Laboratories (REL) program works in partnership with state and local educators and policymakers to develop and use research that improves academic outcomes for students. Its What Works Clearinghouse™ reviews existing research on education programs, practices, and policies to help families, teachers, and leaders answer the question “what works” in the nation’s schools, colleges, and universities. And, through its Evaluation Division, NCEE conducts independent, high-quality evaluations of education programs supported by federal funds.

In recent months, much of the work of both OCDO and IES has pivoted to address the effects of the coronavirus pandemic. At IES, we have developed a wide range of COVID-related resources for families, educators, and policymakers. And our National Center for Education Statistics has recently announced a new survey designed to collect vital data on schools’ approaches to learning during the pandemic, critical to safely reopening America’s schools and promoting educational equity.  

OCDO has also created valuable new resources in response to the pandemic. The new Education Stabilization Fund Public Transparency Portal provides public transparency and accountability for the over $30 billion in Elementary and Secondary School Emergency Relief, Governor's Emergency Education Relief, and Higher Education Emergency Relief funds established through the Coronavirus Aid, Relief, and Economic Security (CARES) Act. The grant funds were awarded to states, schools, and institutions of higher education last spring. Continuously updated to reflect new activity, this portal provides the public with accurate, reliable, and accessible data on one of the largest federal investments in education in our country’s history. The portal will soon include a similar accounting of the awards made to states, districts, and colleges through the $81.9 billion in Education Stabilization Funds authorized through the Coronavirus Response and Relief Supplemental Appropriations (CRRSA) Act, 2021.

Despite the challenges we face, there is optimism, like the spark of an engaged student or the light of an inspired educator, and we are eager to continue the work of serving learners through data. ED’s critical data priorities are to empower users and to leverage data to address the education equity gaps too often borne by our nation’s underprivileged students. Rigorous evaluation identifies effective policies and practices, and open and transparent data furthers research and public trust. Leveraging data to inform decisions not only improves ED operations but also helps guide schools and families in their efforts to support students and improve education outcomes.

Your feedback is welcome; you can email us at data@ed.gov.

Subscribe to the Data Matters Blog at https://www.ed.gov/subscriptions and the NCEE blog at https://ies.ed.gov/blogs/ncee/, and follow OCDO on LinkedIn.

Yours Truly in Data and Evaluation,

Greg and Matt

P.S. Happy International Open Data Day Eve!

Introducing REL 2022

As I write this, my colleagues and I at the Regional Educational Laboratory (REL) Program are thinking about a single number: 535. No, we’re not concerned about 535 because it represents the number of voting members of Congress, though that would be a good guess. We’re also not thinking about Interstate 535, the “2.78-mile-long Auxiliary Interstate Highway spur of I-35 in the U.S. states of Minnesota and Wisconsin,” though now I’m intensely interested in why it might be that, at least according to Wikipedia, this road is “known locally as the ‘Can of Worms’ interchange.” Instead, my colleagues and I are excited about 535 because it represents the number of days between now and the start of the next cycle of the REL program, affectionately known as REL 2022.

Over a year ago, we began a process that culminates in the awarding of contracts to run each of our regional labs. We are excited to share our preliminary thoughts about the contours of REL 2022 through a Request for Information, or RFI, which we have posted here. I hope you will take time to read the RFI. If you have questions or suggestions after doing so, I hope you are moved to comment. Details on how to offer your feedback can be found in the RFI.

Importantly, we aren’t proposing to radically restructure the REL program. Instead, we are retooling some existing expectations and adding a few new features. Below, I’ve highlighted a few proposed changes that merit special attention.

The purpose of RELs is to improve student outcomes. Not to put too fine a point on it, but everything that takes place in REL 2022 should be in service of improving student outcomes. This does not mean that every REL project will, by itself, have a directly observable impact on achievement. But the work of any given REL, in concert with the efforts of those with whom it works, should be trained on a singular focus: bettering the lives of students through education. There is no other, better, or higher calling.

We accomplish our purpose by working in partnership with stakeholders to support their use of evidence-based practices. Evidence-based practice is “baked in” to the statute that authorizes the REL program, and the importance of building and using evidence in education—and government more generally—is reiterated throughout federal law. (See, for example, the Every Student Succeeds Act of 2015 and the Foundations for Evidence-based Policymaking Act of 2018.) However, our emphasis on evidence isn’t rooted in a statutory imperative. Instead, it’s based on a set of core beliefs about our work: that researchers and educators can strengthen education via the rigorous application of the scientific method; that resources, including money and time, are constrained and that efforts with demonstrated effectiveness should be prioritized; and that each and every student deserves the best of “what works” in education, no matter their circumstance.

Nothing changes if nothing changes. In the REL 2022 cycle, we are explicitly asking RELs to think of themselves as “change agents.” This expectation is, I believe, entirely new to the REL Program and is likely to be uncomfortable to some. For that reason, it is helpful to be clear about what we’re expecting and why. Here goes.

I daresay that, no matter how proud they might be of their students and their educators, there is not a state chief, a district superintendent, or a building principal who would report they are serving each of their students as well as they wish they could. (If you’re the one who does, please stop reading this blog and call me. I want to share your successes!) Each of those leaders has something they want to do better on behalf of their students and is contemplating, if not actively pursuing, change. It is our hope that RELs can join them in making change, with evidence in hand and research tools at the ready. REL reports, resources, and trainings are not ends unto themselves. They are means to enable the change efforts of local, state, and regional education leaders, working on behalf of students to improve important outcomes.

RELs work in partnership. Education research and technical assistance must be done in partnership with those it is meant to inform. Absent that, it is likely to fail to achieve its goals. At best, potentially positive impacts will be blunted. At worst, harm will be done. There’s a simple solution: collaboration that authentically engages stakeholders in all phases of project design and execution. That isn’t, I realize, as simple to do as it is to write.

As vendors consider the REL 2022 cycle, we ask that they keep two things in mind about what we’ve traditionally called partnerships. First, there are no necessary restrictions on whom RELs can partner with when working with stakeholders to achieve stakeholder goals. Does it make sense to partner across levels of education within a state? Do it. Is there a state or national advocacy association that would accelerate a partner’s progress? Engage it. Is there a role for business or industry? Leverage it. A second and closely related concept is that there are no restrictions on partnerships’ functional forms. In general, it does not matter one whit to IES whether you prefer NICs, DBIR, or any other particular form of research partnership. What does? That RELs build projects in partnership—however and with whomever—intentionally, with the goal of supporting partners’ change efforts to achieve the goals they have identified.

We encourage deeper, not broader, work. We believe RELs are more likely to achieve success when they focus partnerships on clearly defined problems of policy or practice in specific geographies. A “Six-State Research Alliance on High School Graduation” can do important and meaningful work—but the process of agreeing on the work to be done and the targets to be met, seeing that work through to completion, and then achieving pre-specified goals is likely to be exceptionally difficult. The “South-Central Kansas Partnership for Kindergarten Readiness” or the “Maricopa County Alliance for Reducing Chronic Absenteeism in High Schools” may be more likely to achieve impact. This is not to say that lessons learned locally should not be shared regionally or nationally, or that groups with common interests might not form “communities of practice” or other networks for the purpose of sharing information or building connection. Rather, we ask RELs be strategic in scoping their highest-intensity work.

We define success as achieving measurable stakeholder goals. Evaluating the impact of research and technical assistance projects is notoriously hard. Often, program managers and the evaluators with whom they work are forced to satisfice, relying upon end-user self-reports of the quality, relevance, and usefulness of a provider’s work. Counts of outputs, such as report downloads and attendees served, are particularly common metrics reported in evaluation studies. Satisfaction is the coin of the realm. Lest I be accused of throwing stones inside my own glass house, let me be clear that we use these very measures to characterize the effectiveness of the current REL program.

In REL 2022, it is our intention to shift focus beyond outputs to emphasize outcomes. We will ask RELs to demonstrate, on a regular basis, that they are making progress toward the goals stakeholders set for important student outcomes at the outset of their work, with the acknowledgment that outputs are often critical to achieving a long-term goal and that satisfaction can be an important leading indicator. In 2027, the mark of success won’t be a glowing narrative from a state superintendent or school superintendent about the REL cycle just passed. Instead, it’ll be seeing that the quantifiable goals those leaders set for their work with the REL program were achieved.   

Putting RELs’ capacity for rigorous R&D to work. Finally, there is one manifestly new requirement for RELs as part of the 2022 cycle, one that I am particularly excited about because it brings together the best of two NCEE programs: the RELs and the What Works Clearinghouse™ (WWC). As part of the 2022 cycle, each REL will be required to develop—and then evaluate—a comprehensive toolkit based on a WWC Practice Guide, helping educators instantiate evidence-based practices in the classroom. RELs already have experience taking content from Practice Guides and transforming it into tools for educators. Two examples include Professional Learning Community guides for both foundational reading and English learners. Similarly, North Carolina State University’s Friday Institute has looked to Practice Guides for inspiration to develop massive open online courses (MOOCs), including courses on foundational reading and fractions. None have been evaluated for efficacy. Of course, the development and testing of these new toolkits will follow the expectations set above, including the expectation that strong and inclusive partnerships are at the root of all high-leverage work.

My NCEE colleagues and I are excited about the possibilities that REL 2022 represents. The REL program has a proud history and a strong track record of service to local, state, and regional stakeholders. We hope that, as you review the REL 2022 RFI, you’ll find the next iteration of the program continues in that tradition. As always, I welcome your feedback.

Matthew Soldner

Commissioner, National Center for Education Evaluation and Regional Assistance

 

“The How” of “What Works”: The Importance of Core Components in Education Research

Twenty-odd years ago, as a college junior, I screamed in horror watching a friend open a running dishwasher. She wanted to slip in a lightly used fork. I jumped to stop her, yelling “don’t open it, can’t you tell it’s full of water?” She paused briefly, turning to look at me with a “have you lost your mind” grimace, and yanked open the door.

Much to my surprise, nothing happened. A puff of steam. An errant drip, perhaps? But no cascade of soapy water. She slid the fork into the basket, closed the door, and hit a button. The machine started back up with a gurgle, and the kitchen floor was none the wetter.

Until that point in my life, I had no idea how a dishwasher worked. I had been around a dishwasher, but the house I lived in growing up didn’t have one. To me, washing the dishes meant filling the sink with soapy water, something akin to a washer in a laundry. I assumed dishwashers worked on the same principle, using gallons of water to slosh the dishes clean. Who knew?

Lest you think me completely inept, a counterpoint. My first car was a 1979 Ford Mustang. And I quickly learned how that very used car worked when the Mustang’s automatic choke conked out. As it happens, although a choke is necessary to start and run a gasoline engine, that it be “automatic” is not. My father Rube Goldberg-ed up a manual choke in about 15 minutes rather than paying to have it fixed.

My 14-year-old self learned how to tweak that choke “just so” so that I could get to school each morning. First, pull the choke all the way out to start the car, adjusting the fuel-air mixture ever so slightly. Then gingerly slide it back in, micron by micron, as the car warms up and you hit the road. A car doesn’t actually run on liquid gasoline, you see. Cars run on fuel vapor. And before the advent of fuel injection, fuel vapor was courtesy of your carburetor and its choke. Not a soul alive who didn’t know how a manual choke worked could have started that car.

You would be forgiven if, by now, you were wondering where I am going with all of this and how it relates to the evaluation of education interventions. To that end, I offer three thoughts for your consideration:

  1. Knowing that something works is different from knowing how something works.
  2. Knowing how something works is necessary to put that something to its best use.
  3. Most education research ignores the how of interventions, dramatically diminishing the usefulness of research to practitioners.

My first argument—that there is a distinction between knowing what works and how something works—is straightforward. Since it began, the What Works Clearinghouse™ has focused on identifying “what works” for educators and other stakeholders, mounting a full-court press on behalf of internal validity. Taken together, Version 4.1 of the WWC Standards and Procedures Handbooks total some 192 pages. As a result, we have substantially greater confidence today than we did a decade ago that when an intervention developer or researcher reports that something worked for a particular group of students, we know that it actually did.

In contrast, WWC standards do not, and as far as I can tell have not ever, addressed the how of an intervention. By “the how” of an intervention, I’m referring to the parts of it that must be working, sometimes “just so,” if its efficacy claims are to be realized. For a dishwasher, it is something like: “a motor turns a wash arm, which sprays dishes with soapy water.” (It is not, as I had thought, “the dishwasher fills with soapy water that washes the mac and cheese down the drain.”) In the case of my Mustang, it was: “the choke controls the amount of air that mixes with fuel from the throttle, before heading to the cylinders.”

If you have been following the evolution of IES’ Standards for Excellence in Education Research, or SEER, and its principles, you recognize “the how” as core components. Most interventions consist of multiple core components that are—and perhaps must be—arrayed in a certain manner if the whole of the thing is to “work.” Depicted visually, core components and their relationships to one another and to the outcomes they are meant to affect form something between a logic model (often too simplistic) and a theory of change (often too complex).

(A word of caution: knowing how something works is also different from knowing why something works. I have been known to ask at work about “what’s in the arrows” that connect various boxes in a logic model. The why lives in those arrows. In the social sciences, those arrows are where theory resides.)
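To make “core components” and “the arrows” a bit more concrete, here is a minimal, purely illustrative sketch that treats them as a tiny directed graph. The LogicModel class and the component names (borrowed from the dishwasher example) are invented for illustration; they are not an IES or SEER specification.

    # Hypothetical sketch: core components as nodes; hypothesized
    # relationships ("the arrows") as edges that carry the "why."
    from dataclasses import dataclass, field

    @dataclass
    class LogicModel:
        components: set = field(default_factory=set)
        arrows: list = field(default_factory=list)  # (source, target, hypothesis)

        def add(self, source: str, target: str, hypothesis: str) -> None:
            self.components.update({source, target})
            self.arrows.append((source, target, hypothesis))

    dishwasher = LogicModel()
    dishwasher.add("motor", "wash arm", "the motor spins the wash arm")
    dishwasher.add("wash arm", "clean dishes", "the spinning arm sprays dishes with soapy water")

    for source, target, hypothesis in dishwasher.arrows:
        print(f"{source} -> {target}: {hypothesis}")

An education intervention could be mapped the same way, with the outcomes it is meant to affect as the terminal nodes.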

My second argument is that knowing how something works matters, at least if you want to use it as effectively as possible. This isn’t quite as axiomatic as the distinction between “it works” and “how it works,” I realize.

This morning, when starting my car, I didn’t have to think about the complex series of events leading up to me pulling out of the driveway. Key turn, foot down, car go. But when the key turns and the car doesn’t go, then knowing something about how the parts of a car are meant to work together is very, very helpful. Conveniently, most things in our lives, if they work at all, simply do.  

Inconveniently, we don’t have that same confidence when it comes to things in education. There are currently 10,677 individual studies in the What Works Clearinghouse (WWC) database. Of those, only about 11 percent meet the WWC’s internal validity standards. Among them, only 445 have at least one statistically significant positive finding. Because the WWC doesn’t consider results from studies that don’t have strong internal validity, it isn’t quite as simple as saying “only about 4 percent of things work in education.” Instead, we’re left with “89 percent of things aren’t tested rigorously enough to have confidence about whether they work, and when tested rigorously, only about 38 percent do.” Between the “file drawer” problem that plagues research generally and our own review of the results from IES efficacy trials, we have reason to believe the true efficacy rate of “what works” in education is much lower.
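For readers who want to check the arithmetic behind those percentages, a minimal sketch using the rounded counts above:

    # Rounded counts from the paragraph above; figures are approximate.
    total_studies = 10_677
    meets_standards = round(total_studies * 0.11)  # about 11 percent meet WWC standards
    positive_findings = 445                        # at least one significant positive finding

    print(positive_findings / total_studies)       # ~0.04  ("about 4 percent")
    print(1 - meets_standards / total_studies)     # ~0.89  ("89 percent aren't tested rigorously")
    print(positive_findings / meets_standards)     # ~0.38  ("about 38 percent do")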

Many things cause an intervention to fail. Some interventions are simply wrong-headed. Some interventions do work, but for only some students. And other interventions would work, if only they were implemented well.

Knowing an intervention’s core components and the relationships among them would, I submit, be helpful in at least that third case. If you don’t know that a dishwasher’s wash arm spins, the large skillet on the bottom rack with its handle jutting to the sky might not strike you as the proximate cause of dirty glasses on the top rack. If you don’t know that a core component of multi-tiered systems of support is progress monitoring, you might not connect the dots between a decision to cut back on periodic student assessments and suboptimal student outcomes.

My third and final argument, that most education research ignores the how of interventions, is based in at least some empiricism. The argument itself is a bit of a journey. One that starts with a caveat, wends its way to dismay, and ends in disappointment.

Here’s the caveat: My take on the relative lack of how in most education research comes from my recent experience trying to surface “what works” in remote learning. This specific segment of education research may well be an outlier. But I somehow doubt it.

Why dismay? Well, as regular readers might recall, in late March I announced plans to support a rapid evidence synthesis on effective practices in remote learning. It seemed simple enough: crowd-source research relevant to the task, conduct WWC reviews of the highest-quality submissions, and then make those reviews available to meta-analysts and other researchers to surface generalizable principles that could be useful to educators and families.

My stated goal had been to release study reviews on June 1. That date has passed, and the focus of this post is not “New WWC Reviews of Remote Learning Released.” As such, you may have gathered that something about my plan has gone awry. You would be right.

Simply, things are taking longer than hoped. It is not for lack of effort. Our teams identified more than 930 studies, screened more than 700 of those studies, and surfaced 250 randomized trials or quasi-experiments. We have prioritized 35 of this last group for review. (For those of you who are thinking some version of “wow, it seems like it might be a waste to not look at 96 percent of the studies that were originally located,” I have some thoughts about that. We’ll have to save that discussion, though, for another blog.)
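For the record, the 96 percent figure is simply the share of identified studies that were not prioritized for full review; a quick back-of-the-envelope check:

    # Counts from the paragraph above.
    identified, prioritized = 930, 35
    print(1 - prioritized / identified)  # ~0.96, i.e., about 96 percent not (yet) reviewed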

Our best guess for when those reviews will be widely available is now August 15. Why things are taking as long as they are is, as they say, “complicated.” The June 1 date was unlikely from the start, dependent as it was upon a series of best-case situations in times that are anything but. And at least some of the delay is driven by our emphasis on rigor and steps we take to ensure the quality of our work, something we would not short-change in any event.  

Not giving in to my dismay, however, I dug in to the 930 studies in our remote learning database to see what I might be able to learn in the meantime. I found that 22 of those studies had already been reviewed by the WWC. “Good news,” I said to myself. “There are lessons to be learned among them, I’m sure.”

And indeed, there was a lesson to be learned—just not the one I was looking for. After reviewing the lot, there was virtually no actionable evidence to be found. That’s not entirely fair. One of the 22 records was a duplicate, two were not relevant, two were not locatable, and one was behind a paywall that even my federal government IP address couldn’t get past. Because fifteen of the sixteen remaining studies reviewed name-brand products, there was one action I could take in most cases: buy the product the researcher had evaluated.

I went through each article, this time making an imperfect determination about whether the researcher described the intervention’s core components and, if so, arrayed them in a logic model. My codes for core components included one “yes,” two “bordering on yes,” six “yes-ish,” one “not really,” and six “no.” Not surprisingly, logic models were uncommon, with two studies earning a “yes” and two more tallied as “yes-ish.” (You can see now why I am not a qualitative researcher.)

In case there’s any doubt, herein lies my disappointment: if an educator had turned to one of these articles to eke out a tip or two about “what works” in remote learning, they would have been, on average, out of luck. If they did luck out and find an article that described the core components of the tested intervention, there was a vanishingly small chance there would be information on how to put those components together to form a whole. As for surfacing generalizable principles for educators and families across multiple studies? Not without some serious effort, I can assure you.

I have never been more convinced of the importance of core components being well-documented in education research than I am today. As they currently stand, the SEER principles for core components ask:

  • Did the researcher document the core components of an intervention, including its essential practices, structural elements, and the contexts in which it was implemented and tested?
  • Did the researcher offer a clear description of how the core components of an intervention are hypothesized to affect outcomes?
  • Did the researcher's analysis help us understand which components are most important in achieving impact?

More often than not, the singular answer to the questions above is a resounding “no.” That is to the detriment of consumers of research, no doubt. Educators, or even other researchers, cannot turn to the average journal article or research report and divine enough information about what was actually studied to draw lessons for classroom practice. (There are many reasons for this, of course. I welcome your thoughts on the matter.) More importantly, though, it is to the detriment of the supposed beneficiaries of research: our students. We must do better. If our work isn’t ultimately serving them, who is it serving, really?  

Matthew Soldner
Commissioner, National Center for Education Evaluation and Regional Assistance
Agency Evaluation Officer, U.S. Department of Education

Reducing the Burden to Grantees While Increasing the Public’s Access to IES Funded Research

In 2011 the Institute of Education Sciences (IES) adopted the IES Public Access Policy. This policy requires all IES grantees and contractors to submit their final peer-reviewed manuscripts to ERIC. ERIC then makes the work freely available to the public 12 months after publication. Operationally, this has required all grantees and some contractors to submit their work through ERIC’s Online Submission portal. To date, over 1,400 articles have been submitted as a result of this policy.

As part of an effort to minimize burden for our grantees and contractors, ERIC has negotiated agreements with the publishers of over 600 education journals to display publicly funded articles in ERIC 12 months after publication or sooner. If grantees or contractors publish their work in a participating journal, the journal will submit the full text to ERIC on behalf of the grantee. The grantee will not need to submit their work to the ERIC Online Submission portal. This is the same process currently implemented for work published by IES.

To ensure that their work is included, grantees and contractors are responsible for:

  • Including their grant or contract number(s) in the acknowledgements section of the published article.
  • Confirming that the journal title, publisher, and year match ERIC’s list of participating journals.
  • Informing their publishers that they are subject to the IES Public Access Policy when their manuscript is submitted.

This policy takes effect for work published after January 1, 2020. Grantees who published work prior to 2020 will still need to submit their work through ERIC’s Online Submission portal. Similarly, grantees publishing in journals not participating in this program will need to submit their work through the Online Submission portal. If an article was accepted by a journal that was participating in this program, but the journal then moved to a publisher that is not participating, the grantee will have to submit the article to ERIC using the ERIC Online Submission portal.

ERIC is working to expand the list of journals that agree to display the full text of grantee articles. ERIC will update the list of participating journals multiple times a year, as new publishers sign agreements to participate in this program or journals move to a non-participating publisher. Publishers interested in participating should email ERICRequests@ed.gov for more information.

New Remote Learning Resources from the REL Program: Week of 5/1/2020

In response to COVID-19, the 10 Regional Educational Laboratories (RELs) have collaborated to produce a series of evidence-based resources and guidance about teaching and learning in a remote environment, as well as other considerations brought by the pandemic. See below for a roundup of upcoming REL events and recently published resources on this topic. A full list of resources is available on the REL COVID-19 webpage.

Upcoming Webinars

Adapting Instruction for English Learner Students During Distance Learning
Tuesday, May 5 at 3:00–3:45 p.m. CT
REL Southwest
This webinar will provide an overview of promising practices and resources to support remote instruction of English learner (EL) students, followed by a discussion with EL teachers and specialists about how they have leveraged strategies and resources to engage English learner students in remote instruction.

Audience: Teachers, principals, instructional coaches, district superintendents, and state education staff

Teaching Young Learners in a Pandemic: Supporting Children Pre-K–Grade 3 and Their Learning Partners at Home
Wednesday, May 6 at 2:00–3:00 p.m. ET
REL Mid-Atlantic
This webinar will provide research-based information about remotely teaching young children in pre-kindergarten to grade 3, including practical steps that align with research guidance. The webinar will also address ways state and local education agencies can strengthen support for remote learning over the longer term.

Audience: Teachers, principals, and administrators from state education agencies, districts, and schools

Engaging Parents and Students from Diverse Populations in the Context of Distance Learning
Monday, May 11 at 1:00–2:00 p.m. PT
REL West
Effective student and family engagement relies on establishing trusting relationships in which educators, students, and parents see themselves and each other as equal partners. Without opportunities to interact in person, it is now more difficult and more important to build and maintain these strong relationships. This webinar will share lessons from research and practice to help educators engage with students and their families to support continued learning during the COVID-19 pandemic. Presenters will discuss strategies in three areas: cultivating a partnership orientation, practicing cultural responsiveness, and establishing two-way communication.

Audience: State, district, and school-level staff

Supporting Postsecondary Transitions During COVID-19
Thursday, May 14 at 3:00–4:00 p.m. ET
REL Appalachia
This virtual chat will discuss logistical and nonacademic supports for keeping students on the path to postsecondary education, such as supporting students and families in completing and making updates to FAFSA applications, understanding financial aid award letters and comparing costs, addressing "summer melt," and providing students with social-emotional supports. Following a brief presentation, a panel of representatives from the National College Attainment Network (NCAN), the College Transition Collaborative (CTC), and the Virginia College Advising Corps (VCAC) will answer questions from participants and discuss resources to address current concerns.

Audience: School counselors, school leaders, teachers, and other support providers

New Resources

Guidance for Navigating Remote Learning for English Learner Students
Blog | REL Midwest
Audience: School leaders, teachers

How Can Educators Engage Families in At-Home Learning and Provide Support to Them During These Challenging Times?
FAQ | REL West
Audience: School leaders, teachers, families

Plan and Deliver: Educating Students with Disabilities in Remote Settings
Blog | REL Midwest
Audience: School leaders, teachers

Remembering Social Presence: Higher Education Remote Teaching in COVID-19 Times
Blog | REL Southeast
Audience: University leaders, university instructors

Using Culturally Responsive Practices to Foster Learning During School Closures: Challenges and Opportunities for Equity
Blog | REL Mid-Atlantic
Audience: School leaders, teachers

An Evidence-Based Response to COVID-19: What We’re Learning

Several weeks ago, I announced the What Works Clearinghouse’s™ first ever rapid evidence synthesis project: a quick look at “what works” in distance education. I asked families and educators to send us their questions about how to adapt to learning at home, from early childhood to adult basic education. I posed a different challenge to researchers and technologists, asking them to nominate high-quality studies of distance and online learning that could begin to answer those questions.

Between public nominations and our own databases, we’ve now surfaced more than 900 studies. I was happy to see that the full text of about 300 studies was already available in ERIC, our own bibliographic database—and that many submitters whose work isn’t yet found there pledged to submit to ERIC, making sure it will be freely available to the public in the future. I was a little less happy to learn that only a few dozen of those 900 had already been reviewed by the WWC. This could mean either that (1) there is not a lot of rigorous research on distance learning, or (2) rigorous research exists, but we are systematically missing it. The truth is probably “both-and,” not “either-or.” Rigorous research exists, but more is needed … and the WWC needs to be more planful in capturing it.

The next step for the WWC team is to screen nominated studies to see which are likely to meet our evidence standards. As I’ve said elsewhere, we’ll be lucky if a small fraction—maybe 50—do. Full WWC reviews of the most actionable studies among them will be posted to the WWC website by June 1st, and at that time it is my hope that meta-analysts and technical assistance providers from across the country pitch in to create the products teachers and families desperately need. (Are you a researcher or content producer who wants to join that effort? If so, email me at matthew.soldner@ed.gov.)

Whether this approach actually works is an open question. Will it reduce the time it takes to create products that are both useful and used? All told, our time on the effort will amount to about two months. I had begun this process hoping for something even quicker. My early thinking was that IES would only put out a call for studies, leaving study reviews and product development to individual research teams. My team was convinced, however, that the value of a full WWC review for studies outweighed the potential benefit of quicker products. They were, of course, correct: IES’ comparative advantage stems from our commitment to quality and rigor.

I am willing to stipulate that these are unusual times: the WWC’s evidence synthesis infrastructure hasn’t typically needed to turn on a dime, and I hope that continues to be the case. That said, there may be lessons to be learned from this moment, about both how the WWC does its own work and how it supports the work of the field. To that end, I’d offer a few thoughts.

The WWC could support partners in research and content creation who can act nimbly, maintaining pressure for rigorous work.

Educators have questions that span every facet of their work, every subject, and every age band. And there’s a lot of education research out there, from complex, multi-site RCTs to small, qualitative case studies. The WWC doesn’t have the capacity to either answer every question that deserves answering or synthesize every study we’re interested in synthesizing. (Not to mention the many types of studies we don’t have good methods for synthesizing today.)

This suggests to me there is a potential market for researchers and technical assistance providers who can quickly identify high-quality evidence, accurately synthesize it, and create educator-facing materials that can make a difference in classroom practice. Some folks have begun to fill the gap, including both familiar faces and not-so-familiar ones. Opportunities for collaboration abound, and partners like these can be sources of inspiration and innovation for one another and for the WWC. Where there are gaps in our understanding of how to do this work well that can be filled through systematic inquiry, IES can offer financial support via our Statistical and Research Methodology in Education grant program.   

The WWC could consider adding new products to its mix, including rigorous rapid evidence syntheses.

Anyone who has visited us at whatworks.ed.gov recently knows the WWC offers two types of syntheses: Intervention Reports and Practice Guides. Neither is meant to be a quick-turnaround product.

As their name implies, Intervention Reports are systematic reviews of a single, typically brand-name, intervention. They are fairly short, no longer than 15 pages. And they don’t take too long to produce, since they’re focused on a single product. Despite having done nearly 600 of them, we often hear we haven’t reviewed the specific product a stakeholder reports needing information on. Similarly, we often hear from stakeholders that they aren’t in a position to buy a product. Instead, they’re looking for the “secret sauce” they could use in their state, district, building, or classroom.

Practice Guides are our effort to identify generalizable practices across programs and products that can make a difference in student outcomes. Educators download our most popular Guides tens of thousands of times a year, and they are easily the best thing we create. But it is fair to say they are labors of love. Each Guide is the product of the hard work of researchers, practitioners, and other subject matter experts over about 18 months.  

Something seems to be missing from our product mix. What could the WWC produce that is as useful as a Practice Guide but as lean as an Intervention Report? 

Our very wise colleagues at the UK’s Education Endowment Foundation have a model that is potentially promising: Rapid Evidence Assessments based on pre-existing meta-analyses. I am particularly excited about their work because—despite not coordinating our efforts—they are also focusing on Distance Learning and released a rapid assessment on the topic on April 22nd. There are pluses and minuses to their approach, and they do not share our requirement for rigorous peer review. But there is certainly something to be learned from how they do their work.

The WWC could expand its “what works” remit to include “what’s innovative,” adding forward-looking horizon scanning to here-and-now (and sometimes yesterday) meta-analysis.

Meta-analyses play a critical role in efforts to bring evidence to persistent problems of practice, helping to sort through multiple, sometimes conflicting studies to yield a robust estimate of whether an intervention works. The inputs to any meta-analysis are what is already known—or at least what has already been published—about programs, practices, and policies. They are therefore backward-looking by design. Given how slowly most things change in education, that is typically fine.
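To make “yield a robust estimate” slightly more concrete, here is a minimal sketch of the inverse-variance (fixed-effect) pooling at the heart of most meta-analyses. The effect sizes and variances are invented for illustration only.

    # Hypothetical per-study effect sizes and sampling variances.
    effects = [0.12, 0.30, -0.05]
    variances = [0.02, 0.05, 0.01]

    # Fixed-effect (inverse-variance) pooled estimate and its standard error:
    # each study is weighted by the inverse of its sampling variance.
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5

    print(round(pooled, 3), round(pooled_se, 3))  # ~0.041, ~0.077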

But what help is meta-analysis when a problem is novel, or when the best solution isn’t a well-studied intervention but instead a new innovation? In these cases, practitioners are craving evidence before it has been synthesized and, sometimes, before it has even been generated. Present experience demonstrates that any of us can be made to grasp for anything that even smacks of evidence, if the circumstances are precarious enough. The challenge to an organization like the WWC, which relies on traditional conceptions of rigorous evidence of efficacy and effectiveness, is a serious one.

How might the WWC become aware of potentially promising solutions to today’s problems before much if anything is known about their efficacy, and how might we surface those problems that are nascent today but could explode across the landscape tomorrow? 

One model I’m intensely interested in is the Health Care Horizon Scanning System at PCORI. In their words, it “provides a systematic process to identify healthcare interventions that have a high potential to alter the standard of care.” Adapted to the WWC use case, this sort of system would alert us to novel solutions: practices that merited monitoring and might cause us to build and/or share early evidence broadly to relevant stakeholders. This same approach could surface innovations designed to solve novel problems that weren’t already the subject of multiple research efforts and well-represented in the literature. We’d be ahead of—or at least tracking alongside—the curve, not behind.  

Wrapping Up

The WWC’s current Rapid Evidence Synthesis focused on distance learning is an experiment of sorts. It represents a new way of interacting with our key stakeholders, a new way to gather evidence, and a new way to see our reviews synthesized into products that can improve practice. To the extent that it has pushed us to try new models and has identified hundreds of “new” (or “new to us”) studies, it is already a success. Of course, we still hope for more.

As I hope you can see from this blog, it has also spurred us to consider other ways we can further strengthen an already strong program. I welcome your thoughts and feedback – just email me at matthew.soldner@ed.gov.