NCEE Blog

National Center for Education Evaluation and Regional Assistance

An open letter to Superintendents, as summer begins

If the blockbuster attendance at last month’s Summer Learning and Enrichment Collaborative convening is any sign, many of you are in the midst of planning—or have already begun putting in place—your summer learning programs. As you take time to review resources from the Collaborative and see what’s been learned from the National Summer Learning Project, I’d like to add just one more consideration to your list: please use this summer as a chance to build evidence about “what works” to improve outcomes for your students. In a word: evaluate!

Given everything that needs to be put in place just to make summer learning happen, it’s fair to ask why evaluation merits even a passing thought.

I’m urging you to consider building evidence about the outcomes of your program through evaluation because I can guarantee you that, in about a year, someone to whom you really want to give a thorough answer will ask “so, what did we accomplish last summer?” (Depending on who they are and what they care about, that question can vary. Twists can include “what did students learn” or business officers’ pragmatic “what bang did we get for that buck.”)

When that moment comes, I want you to be able to smile, take a deep breath, and rattle off the sort of polished elevator speech that good data, well-analyzed, can help you craft. The alternative—mild to moderate panic followed by an unsatisfying version of “well, you know, we had to implement quickly”—is avoidable. Here’s how.

  1. Get clear on outcomes. You probably have multiple goals for your summer learning programs, including those that are academic, social-emotional, and behavioral. Nonetheless, there’s probably a single word (or a short phrase) that completes the following sentence: “The thing we really want for our students this summer is …” It might be “to rebuild strong relationships between families and schools,” “to be physically and emotionally safe,” or “to get back on track in math.” Whatever it is, get clear on two things: (1) the primary outcome(s) of your program and (2) how you will measure that outcome once the summer comes to an end. Importantly, you should consider outcome measures that will be available for both program participants and non-participants so that you can tell the story about the “value add” of summer learning. (You can certainly also include measures relevant only to participants, especially ones that help you track whether you are implementing your program as designed.)
  2. Use a logic model. Logic models are the “storyboard” of your program, depicting exactly how its activities will come together to cause improvement in the student outcomes that matter most. Logic models force program designers to be explicit about each component of their program and its intended impact. Taking time to develop a logic model can expose potentially unreasonable assumptions and missing supports that, if added, would make it more likely that a program succeeds. If you don’t already have a favorite logic model tool, we have resources available for free!  
  3. Implement evidence-based practices aligned to program outcomes. A wise colleague (h/t Melissa Moritz) recently reminded me that a summer program is the “container” (for lack of a better word) in which other learning experiences and educationally purposeful content are packaged, and that there are evidence-based practices for the design and delivery of both. (Remember: “evidence-based practices” run the gamut from those that demonstrate a rationale to those supported by promising, moderate, or strong evidence.) As you are using the best available evidence to build a strong summer program, don’t forget to ensure you’re using evidence-based practices in service of the specific outcomes you want those programs to achieve. For example, if your primary goal for students is math catch-up, then the foundation of your summer program should be an evidence-based Tier I math curriculum. If it is truly important that students achieve the outcome you’ve set for them, then they’re deserving of evidence-based educational practices supported by an evidence-based program design!
  4. Monitor and support implementation. Once your summer program is up and running, it’s useful to understand just how well your plan—the logic model you developed earlier—is playing out in real life. If staff trainings were planned, did they occur and did everyone attend as scheduled? Are activities occurring as intended, with the level of quality that was hoped for? Are attendance and engagement high? Monitoring implementation alerts you to where things may be “off track,” flagging where more supports for your team might be helpful. And, importantly, it can provide useful context for the outcomes you observe at the end of the summer. If you don't already have an established protocol for using data as part of continuous improvement, free resources are available!
  5. Track student attendance. If you don’t know who—specifically—participated in summer learning activities, describing how well those activities “worked” can get tricky. Whether your program is full-day, half-day, in-person, hybrid, or something else, develop a system to track (1) who was present, (2) on what days, and (3) for how long. Then, store that information in your student information system (or another database) where it can be accessed later. (A minimal sketch of such a tracking record, and of the attendance calculations it supports, appears after this list.)
  6. Analyze and report your data, with an explicit eye toward equity. Data and data analysis can help you tell the story of your summer programming. Given the disproportionate impact COVID has had on students that many education systems have underserved, placing equity at the center of your planned analyses is critical. For example:
    • Who did—and who did not—participate in summer programs? Data collected to monitor attendance should allow you to know who (specifically) participated in your summer programs. With that information, you can prepare simple tables that show the total number of participants and that total broken down by important student subgroups, such as gender, race/ethnicity, or socioeconomic status. Importantly, those data for your program should be compared with similar data for your school or district (as appropriate). Explore, for example, whether one or more populations are disproportionately underrepresented in your program and what the implications are for the work both now and next summer.
    • How strong was attendance? Prior research has suggested that students benefit the most from summer programs when they are “high attenders” (twenty or more days out of programs’ typical 25 to 30 total days). Using your daily, by-student attendance data, calculate attendance intensity for your program’s participants overall and by important student subgroups. For example, what percentage of students attended 0 to 24 percent, 25 to 49 percent, 50 to 74 percent, or 75 percent or more of program days?
    • How did important outcomes vary between program participants and non-participants? At the outset of the planning process, you identified one or more outcomes you hoped students would achieve by participating in your program and how you’d measure them. In the case of a “math catch-up” program, for example, you might be hoping that more summer learning participants score “on grade level” at the start of the school year than their non-participating peers, which would be promising (if not conclusive) evidence that the program offered a benefit. Disaggregating these results by student subgroup when possible highlights whether the program might have been more effective for some students than others, providing insight into potential changes for next year’s work. (The second sketch after this list illustrates one simple way to make these comparisons.)
    • Remember that collecting and analyzing data is just a means to an end: learning to inform improvement. Consider how involving program designers and participants—including educators, parents, and students—in discussions about what was learned as a result of your analyses can be used to strengthen next year’s program.
  7. Ask for help. If you choose to take up the evaluation mantle to build evidence about your summer program, bravo! And know that you do not have to do it alone. First, think locally. Are you near a two-year or four-year college? Consider contacting their education faculty to see whether they’re up for a collaboration. Second, explore whether your state has a state-wide “research hub” for education issues (e.g., Delaware, Tennessee) that could point you in the direction of a state or local evaluation expert. Third, connect with your state’s Regional Educational Lab or Regional Comprehensive Center for guidance or a referral. Finally, consider joining the national conversation! If you would be interested in participating in an Evaluation Working Group, email my colleague Melissa Moritz at melissa.w.moritz@ed.gov.
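
To make items 5 and 6 a bit more concrete, here is a minimal sketch of how daily attendance records could be turned into the attendance-intensity bands described above. It is only an illustration: the column names, the 25-day program length, and the use of Python and pandas are assumptions made for the example, not requirements of any particular student information system.

```python
# A minimal sketch, assuming attendance is exported as one row per student
# per day attended. Column names and the 25-day program length are
# illustrative assumptions.
import pandas as pd

attendance = pd.DataFrame({
    "student_id": ["S01", "S01", "S02", "S02", "S02", "S03"],
    "date":       ["2021-07-06", "2021-07-07", "2021-07-06",
                   "2021-07-07", "2021-07-08", "2021-07-06"],
    "minutes":    [180, 180, 180, 90, 180, 180],
})

PROGRAM_DAYS = 25  # total days the program was offered (assumed)

# Days attended per student ("who was present, on what days") ...
days_attended = attendance.groupby("student_id")["date"].nunique()

# ... total instructional hours per student ("for how long") ...
hours = attendance.groupby("student_id")["minutes"].sum() / 60

# ... and each student's attendance as a percentage of days offered.
pct_of_days = days_attended * 100 / PROGRAM_DAYS

# Sort students into the attendance-intensity bands from item 6.
bands = pd.cut(
    pct_of_days,
    bins=[0, 25, 50, 75, 101],  # 0-24%, 25-49%, 50-74%, 75% or more
    labels=["0-24%", "25-49%", "50-74%", "75%+"],
    right=False,
)
print(bands.value_counts(normalize=True).sort_index())
```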
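And here is an equally small sketch of the equity-focused comparisons in item 6: participation rates by subgroup, and a fall outcome compared between participants and non-participants. The roster columns and values are hypothetical; the point is the shape of the analysis, not the specific tool.

```python
# A minimal sketch using a hypothetical district roster. The columns
# (participated, subgroup, on_grade_level_fall) are illustrative assumptions,
# not a required data layout.
import pandas as pd

roster = pd.DataFrame({
    "student_id":          ["S01", "S02", "S03", "S04", "S05", "S06"],
    "participated":        [True, True, True, False, False, False],
    "subgroup":            ["A", "B", "A", "A", "B", "B"],
    "on_grade_level_fall": [1, 0, 1, 0, 1, 0],  # 1 = scored on grade level
})

# Who did, and who did not, participate? Participation shares within each
# subgroup, which can be compared against district-wide enrollment.
print(pd.crosstab(roster["subgroup"], roster["participated"], normalize="index"))

# How did the fall outcome compare for participants and non-participants,
# overall and within each subgroup?
print(roster.groupby("participated")["on_grade_level_fall"].mean())
print(roster.groupby(["subgroup", "participated"])["on_grade_level_fall"].mean())
```

Descriptive comparisons like these will not, on their own, show that the program caused any difference you observe, since participants and non-participants may differ in other ways. But they are more than enough to answer “so, what did we accomplish last summer?” with real numbers rather than a shrug.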

Summer 2021 is shaping up to be one for the record books. For many students, summer is a time for rest and relaxation. But this year, it will also be a time for reengaging students and families with their school communities and, we hope, a significant amount of learning. Spending time now thinking about measuring that reengagement and learning—even in simple ways—will pay dividends this summer and beyond.

My colleagues and I at the U.S. Department of Education are here to help, and we welcome your feedback. Please feel free to contact me directly at matthew.soldner@ed.gov.

Matthew Soldner
Commissioner, National Center for Education Evaluation and Regional Assistance

NCEE is hiring!

The U.S. Department of Education’s Institute of Education Sciences (IES) is seeking professionals in education-related fields to apply for an open position in the National Center for Education Evaluation and Regional Assistance (NCEE). Located in NCEE’s Evaluation Division, this position would support impact evaluations and policy implementation studies. Learn more about our work here: https://ies.ed.gov/ncee.

If you are even potentially interested in this sort of position, you are strongly encouraged to set up a profile in USAJobs (https://www.usajobs.gov/) and to upload your information now. As you build your profile, include all relevant research experience on your resume whether acquired in a paid or unpaid position. The position will open in USAJobs on July 15, 2019 and will close as soon as 50 applications are received, or on July 29, 2019, whichever is earlier. Getting everything in can take longer than you might expect, so please apply as soon as the position opens in USAJobs (look for vacancy number IES-2019-0023).

 

Regional Educational Laboratories: Connecting Research to Practice

By Joy Lesnick, Acting Commissioner, NCEE

Welcome to the NCEE Blog! 


We look forward to using this space to provide information and insights about the work of the National Center for Education Evaluation and Regional Assistance (NCEE). A part of the Institute of Education Sciences (IES), NCEE’s primary goal is providing practitioners and policymakers with research-based information they can use to make informed decisions. 

We do this in a variety of ways, including large-scale evaluations of education programs and practices supported by federal funds; independent reviews and syntheses of research on what works in education; a searchable database of research citations and articles (ERIC); and reference searches from the National Library of Education. We will explore more of this work in future blogs, but in this post I’d like to talk about an important part of NCEE—the Regional Educational Laboratories (RELs).

It’s a timely topic. Last week, the U.S. Department of Education released a solicitation for organizations seeking to become REL contractors beginning in 2017 (the five-year contracts for the current RELs will conclude at the end of 2016). The REL program is an important part of the IES infrastructure for bridging education research and practice. Through the RELs, IES seeks to ensure that research does not “sit on a shelf” but rather is broadly shared in ways that are relevant and engaging to policymakers and practitioners. The RELs also involve state and district staff in collaborative research projects focused on pressing problems of practice. An important aspect of the RELs’ work is supporting the use of research in education decision making – a charge that the Every Student Succeeds Act has made even more critical.

The RELs and their staff must be able to navigate comfortably between the two worlds of education research and education practice, and understand the norms and requirements of both. In doing so, RELs focus on: (1) balancing rigor and relevance; (2) differentiating support to stakeholders based on need; (3) providing information in the short term, and developing evidence over the long term; and (4) addressing local issues that can also benefit the nation.

While the RELs are guided by federal legislation, their work reflects – and responds to – the needs of their communities. Each REL has a governing board composed of state and local education leaders that sets priorities for the REL’s work. Also, nearly all REL work is conducted in collaboration with research alliances, which are ongoing partnerships in which researchers and regional stakeholders work together over time to use research to address an education problem.

Since the current round of RELs was awarded in 2012, these labs and their partners have conducted meaningful research resulting in published reports and tools, held hundreds of online and in-person seminars and training events attended by practitioners across the country, and produced videos of their work that you can find on the REL Playlist on the IES YouTube site. Currently, the RELs have more than 100 projects in progress. RELs work on nearly every topic that is crucial to improving education—kindergarten readiness, parent engagement, discipline, STEM education, college and career readiness, teacher preparation and evaluation, and much more.

IES’s vision is that the 2017–2022 RELs will build on and extend the current priorities of high-quality research, genuine partnership, and effective communication, while also tackling high-leverage education problems.  High-leverage problems are those that: (1) if addressed could result in substantial improvements in education outcomes for many students or for key subgroups of students; (2) are priorities for regional policymakers, particularly at the state level; and (3) require research or research-related support to address well. Focusing on high-leverage problems increases the likelihood that REL support ultimately will contribute to improved student outcomes.

Visit the IES REL website to learn more about the 2012-2017 RELs and how you can connect with the REL that serves your region.  Visit the FedBizOpps website for information about the competition for the 2017-2022 RELs.