IES Blog

Institute of Education Sciences

Measuring Social and Emotional Learning in Schools

Social and emotional learning (SEL) has been embraced by many schools and districts around the country. Yet in the rush to adopt SEL practices and support student SEL competencies, educators often lack assessment tools that are valid, reliable, and easy to use.

Washoe County School District in Nevada has moved the needle on SEL assessment with support from an IES Researcher-Practitioner Partnership grant. The district partnered with the Collaborative for Academic, Social, and Emotional Learning (CASEL) to develop the WCSD Social and Emotional Competency Assessments (WCSD-SECAs): free, open-source instruments that schools can use to measure the SEL competencies of students in 5th through 12th grade.

Long and short versions of the SECA are available to download from the school district’s website, along with a bank of 138 items across 8 SEL domains that schools around the country can use to modify SECA assessments for their local context. The long-form version has been validated and aligned to the CASEL 5 competency clusters (self-awareness, self-management, social awareness, relationship skills, and responsible decision making) and to WCSD’s SEL standards. The assessment is also available in Spanish, and the Metro Nashville Public Schools offer it in 8 additional languages.

Students complete the long-form SECA as part of Washoe’s Annual Student Climate Survey by rating how easy or difficult SEL skills are for them. Under the Social Awareness domain, students respond to items such as “Knowing what people may be feeling by the look on their face” or “Learning from people with different opinions than me.” Under the Responsible Decision Making domain, students rate themselves on skills such as “Saying ‘no’ to a friend who wants to break the rules” and “Thinking of different ways to solve a problem.”

The SECA is one component of Washoe County’s larger School Climate Survey Project that is marking its 10th anniversary this year. Washoe provides district-level and school-level reports on school climate to support the district’s commitment to providing safe, caring, and engaging school environments for all of Washoe’s students and families.  

Written by Emily Doolittle, NCER’s Team Lead for Social Behavioral Research

IES Honors Dominic Gibson as Outstanding Predoctoral Fellow

Each year, IES recognizes an outstanding fellow from its Predoctoral Interdisciplinary Research Training Programs in the Education Sciences for academic accomplishments and contributions to education research. The 2018 winner, Dr. Dominic Gibson, completed his Ph.D. in Developmental Psychology at the University of Chicago. He is currently a postdoctoral researcher at the University of Washington, where he specializes in understanding how children learn words and mathematical concepts. In this blog, Dominic discusses his research and his experience as an IES fellow.

What inspired you to focus your research on early mathematics?

So many everyday activities, as well as many of humanity’s greatest achievements, rely on math. Simple math becomes such second nature to us that it is often difficult for older students to imagine what it would be like to lack a basic understanding of numbers. Yet children take months, and often years, to learn the meanings of just the first few number words (one, two, three) and to understand how the counting procedure really works. Children’s acquisition of other math terms (angle, proportion, unit of measurement) is similarly marked by misconceptions and slow, difficult learning.

Overcoming these learning challenges relies on an interesting mixture of uniquely human abilities (like language) and skills we share with other animals. Moreover, children’s ability to master early math concepts predicts their future academic success. Therefore, by studying how children learn about math, we can better understand the sources of humanity’s unique achievements and apply this knowledge to reducing early achievement gaps and maximizing our potential.

Based on your research, what advice would you give parents of pre-kindergartners on how to help their children develop math skills?

My biggest piece of advice is to talk to children about numbers and other basic math concepts. Children benefit from abundant language input in general, and “math talk” is no different. Even simply talking about different numbers of things seems to be particularly important for acquiring early math concepts. Numbers can be easily incorporated into a variety of activities, like taking a walk (“let’s count the birds we see”) or going to the grocery store (“how many oranges should we buy?”). Likewise, good jumping off points for using other types of early math talk such as relational language are activities like puzzles (“this one is too curvy to fit here—we need to find a piece with a flat edge”) and block building (“can you put this small block on top of the bigger one?”).

It also may be useful to note that even when a child can say a word, they may not fully understand what it means. For instance, two- to four-year-old children can often recite a portion of the count list (for example, the numbers one through ten) but if you ask them to find a certain number of items (“can you give me three blocks?”) they may struggle when asked for sets greater than two or three. Therefore, in addition to counting, it is important to connect number words to specific quantities (“look there are three ducks”). It may be especially helpful to connect counting to the value of a set (“let’s count the ducks—one, two, three—there are three!”).

My last piece of advice is to be careful about the types of messages we send our children about math. Many people experience “math anxiety,” and if we are not careful, children can pick up on these signals and become anxious about math themselves or internalize negative stereotypes about the types of people who are and are not good at math. Ensuring that children feel empowered to excel in math is an important ingredient for their success.

How has being an IES predoctoral fellow helped your development as a researcher?

The diverse group of people and perspectives I encountered as an IES predoctoral fellow made a huge impact on my development as a researcher. As an IES predoctoral fellow pursuing a degree in psychology, I met many students and faculty members who were interested in the same questions that interest me but who approached these questions from a variety of other disciplines, such as economics, public policy, and sociology. I also connected with networks of educators and policymakers outside of academia who alerted me to important issues that I may have missed if I had only worked within my own discipline. Through these experiences, I gained new tools for conducting my research and learned to avoid the types of blind spots that often develop when approaching a problem from a single perspective. In particular, I gained an appreciation for the challenges of translating basic science to educational practice and the number of interesting research questions that emerge when attempting to do this work.

Compiled by Katina Rae Stapleton, Education Research Analyst and Program Officer for the Predoctoral Interdisciplinary Research Training Programs in the Education Sciences, National Center for Education Research

New International Comparisons of Reading, Mathematics, and Science Literacy Assessments

The Program for International Student Assessment (PISA) is a study of 15-year-old students’ performance in reading, mathematics, and science literacy that is conducted every 3 years. The PISA 2018 results provide us with a global view of U.S. students’ performance compared with their peers in nearly 80 countries and education systems. In PISA 2018, the major domain was reading literacy, although mathematics and science literacy were also assessed.

In 2018, the U.S. average score of 15-year-olds in reading literacy (505) was higher than the average score of the Organization for Economic Cooperation and Development (OECD) countries (487). Compared with the 76 other education systems with PISA 2018 reading literacy data, including both OECD and non-OECD countries, the U.S. average reading literacy score was lower than in 8 education systems, higher than in 57 education systems, and not measurably different in 11 education systems. The U.S. percentage of top performers in reading was larger than in 63 education systems, smaller than in 2 education systems, and not measurably different in 11 education systems. The average reading literacy score in 2018 (505) was not measurably different from the average score in 2000 (504), the first year PISA was administered. Among the 36 education systems that participated in both years, 10 education systems reported higher average reading literacy scores in 2018 compared with 2000, 11 education systems reported lower scores, and 15 education systems reported no measurable changes in scores.

The U.S. average score of 15-year-olds in mathematics literacy in 2018 (478) was lower than the OECD average score (489). Compared with the 77 other education systems with PISA 2018 mathematics literacy data, the U.S. average mathematics literacy score was lower than in 30 education systems, higher than in 39 education systems, and not measurably different in 8 education systems. The average mathematics literacy score in 2018 (478) was not measurably different from the average score in 2003 (483), the earliest year with comparable data. Among the 36 education systems that participated in both years, 10 systems reported higher mathematics literacy scores in 2018 compared with 2003, 13 education systems reported lower scores, and 13 education systems reported no measurable changes in scores.  

The U.S. average score of 15-year-olds in science literacy (502) was higher than the OECD average score (489). Compared with the 77 other education systems with PISA 2018 science literacy data, the U.S. average science literacy score was lower than in 11 education systems, higher than in 55 education systems, and not measurably different in 11 education systems. The average science literacy score in 2018 (502) was higher than the average score in 2006 (489), the earliest year with comparable data. Among the 52 education systems that participated in both years, 7 education systems reported higher average science literacy scores in 2018 compared with 2006, 22 education systems reported lower scores, and 23 education systems reported no measurable changes in scores.
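
In these comparisons, “not measurably different” means that the gap between two estimates is not statistically significant once the standard errors of the estimates are taken into account. Below is a minimal sketch of such a comparison as a stylized two-sided z-test; the standard errors used are hypothetical, not published PISA values.

```python
import math

def measurably_different(score_a, se_a, score_b, se_b, z_crit=1.96):
    """Stylized two-sided z-test for a difference between two average scores.

    Assumes independent estimates with known standard errors. The SEs
    passed in below are illustrative, not published PISA values.
    """
    se_diff = math.sqrt(se_a**2 + se_b**2)  # standard error of the difference
    return abs(score_a - score_b) / se_diff > z_crit

# U.S. average reading score, 2018 (505) vs. 2000 (504), with
# hypothetical standard errors of about 3.5 points each:
print(measurably_different(505, 3.5, 504, 3.5))  # False: not measurably different
```

With standard errors of this size, even a gap of several points can fall within the margin of error, which is why a one-point change between 2000 and 2018 is reported as no measurable difference.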

PISA is conducted in the United States by NCES and is coordinated by OECD, an intergovernmental organization of industrialized countries. Further information about PISA can be found in the technical notes, questionnaires, list of participating OECD and non-OECD countries, released assessment items, and FAQs.

By Thomas Snyder

What Do State CTE Directors Want to Learn from the Research Community?

Career Technical Education (CTE) is gaining widespread interest and support from state policymakers, who see it as a strategy to expand access to opportunity and meet employer needs. Between 2014 and 2018, states enacted roughly 800 policies related to CTE, and in 2019, workforce development was one of the top education-related priorities mentioned by governors in their state-of-the-state addresses.

What’s more, in 2018 Congress passed the Strengthening Career and Technical Education for the 21st Century Act (Perkins V), which reauthorized the federal law for CTE and invests around $1.2 billion a year to strengthen and expand CTE programs. The law went into effect in July 2019 and will be fully implemented in July 2020, after states submit their four-year CTE plans to the U.S. Department of Education (see more about the Perkins V planning process here).

With CTE in the spotlight, State CTE Directors are working hard to improve quality and equity in CTE. But state CTE offices often do not have the staffing or resources to conduct rigorous program evaluations to learn what’s working and what needs improvement. By partnering with CTE researchers, State Directors can gain critical insights into the impact of CTE programs, policies, and practices.

While the design, governance, and delivery of CTE vary from state to state, there are several common questions and challenges across the country that CTE researchers can help address, particularly in light of Perkins V implementation:

Improving program quality: State leaders are working to improve CTE program quality by connecting secondary and postsecondary coursework, integrating academic and technical learning, aligning programs with labor market needs and expectations, and preparing learners to earn industry-recognized credentials of value. Tennessee, for example, recently revised its secondary CTE program standards and developed model CTE programs of study that meet statewide workforce needs. Answers to the following research questions would help fuel these efforts:

  • What set of experiences at the secondary and postsecondary levels (CTE coursework, work-based learning, dual enrollment, etc.) best prepares learners for postsecondary enrollment and completion, certificate and degree attainment, and high-wage employment?
  • Do these vary by region of the country, Career Cluster® or program of study?
  • Does the delivery mechanism (comprehensive high schools, career academies, area technical centers, technical colleges) matter?

Ensuring equitable access and success in CTE: To reverse historical inequities in CTE, state leaders are using data to identify disparities and ensure each learner can access, fully participate in, and successfully complete a high-quality CTE program of study. In Rhode Island, the Department of Education repurposed $1.2 million in state funds to launch an Innovation & Equity grant initiative, which provided resources to local recipients to recruit and support underrepresented student populations in high-quality programs. CTE researchers can help these efforts by addressing the following questions:

  • What are the classroom and workplace conditions in which CTE students of color are most likely to develop the interests, knowledge, and skills that prepare them to earn postsecondary credentials of value and obtain high-wage employment in their careers of choice?
  • What interventions, accommodations, and instructional strategies best prepare learners with disabilities to transition successfully into the workforce?
  • How does gender inform the development of occupational identity, and what can educators do to limit the effects of stereotyping on the career aspirations of learners?

Improving the quality and use of CTE data: Most State Directors believe improving and enhancing their CTE data systems is a priority, but only 45 percent say they have the information they need at both the secondary and postsecondary levels to improve program quality. States like Minnesota (through the Minnesota State Colleges and Universities system) are working to improve the validity and reliability of their data by collaborating with industry-recognized credential providers to obtain data for their students. CTE researchers can help state leaders improve data quality in two ways:

  • Identifying relevant data sources and matching student records to allow for a comprehensive examination of student pathways and outcomes (see the matching sketch after this list)
  • Developing and sharing guidance for collecting, validating, and matching student data relevant to CTE
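
As noted in the first item above, a core data task is linking each student’s secondary records to postsecondary records. Below is a minimal sketch of deterministic matching on a shared identifier; the identifiers, field names, and records are hypothetical, and real linkage projects typically add fuzzy matching, validation, and strict privacy safeguards.

```python
# Hypothetical secondary (high school) and postsecondary extracts.
secondary = [
    {"student_id": "S1", "name": "Ana Diaz", "cte_concentrator": True},
    {"student_id": "S2", "name": "Ben Lee", "cte_concentrator": False},
]
postsecondary = [
    {"student_id": "S1", "enrolled": True, "credential": "AAS Welding"},
]

# Index postsecondary records by the shared identifier for quick lookup.
post_by_id = {rec["student_id"]: rec for rec in postsecondary}

# Deterministic match: join on student_id, carrying outcomes forward.
linked = []
for rec in secondary:
    match = post_by_id.get(rec["student_id"])
    linked.append({
        **rec,
        "ps_enrolled": match["enrolled"] if match else False,
        "credential": match["credential"] if match else None,
    })

print(linked)  # S1 links to a credential; S2 has no postsecondary match
```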

Fostering collaboration and alignment across state agencies: Supporting learner success requires cross-agency collaboration and coordination. State leaders are working to create seamless pathways by sharing data, coordinating program design, and braiding resources to achieve economies of scale. One example is Massachusetts, where Governor Charlie Baker established a cross-agency workforce skills cabinet to coordinate education, workforce, housing, and economic development. The following research questions would help accelerate the work in Massachusetts and other states:

  • Do states with policies that foster cross-agency coordination see better education and employment outcomes for students? Can merging datasets across agencies help states better understand and respond to student needs?
  • Does credit for prior learning and/or credit transfer between institutions decrease time to credential attainment and entry into employment?
  • How does the integration of support services—such as financial aid, Medicaid, Temporary Assistance for Needy Families, and other state and federal programs—impact the likelihood of student success?

Expanding career advisement opportunities: School counselors are among the most trusted sources of information on CTE and career options, and states are working to bolster their career advisement systems by reducing counselor-to-student ratios, requiring each student to complete an individualized graduation plan, and developing user-friendly platforms for career exploration. In Oklahoma, for example, it is now policy for all students to identify their career and academic goals through the state’s new Individual Career and Academic Planning program. CTE researchers can help address the following questions:

  • Do career and academic planning programs increase the likelihood that learners will complete CTE programs of study, graduate from high school, and earn postsecondary credentials?
  • How does early career exposure through job shadowing, career fairs and career counseling inform student course taking, academic achievement, and future employment and earnings?

As states chart a vision and path for the future of CTE, they can and should use their data to inform decisions. Researchers can help them collect and analyze high-quality data to understand the relationships between CTE program elements and various learner outcomes. This can help them understand what is and isn’t working in current policy and practice and identify where to focus their efforts to improve quality and equity in CTE. In addition, researchers can help State Directors plan and conduct rigorous evaluations as they roll out new CTE policies and programs. Over the next few months, Advance CTE and the Institute of Education Sciences (IES) will feature a series of successful partnerships between states and CTE researchers and explore how those projects provided critical data and insights to inform state policy.

This blog series was co-authored by Corinne Alfeld at IES (corinne.alfeld@ed.gov) and Austin Estes from Advance CTE (aestes@careertech.org). IES began funding research grants in CTE in 2017 and established a CTE Research Network in 2018. IES hopes to encourage more research on CTE in the coming years in order to increase the evidence base and guide program and policy decisions. At the same time, Advance CTE has been providing resources to help states improve their CTE data quality and use data more effectively to improve CTE program quality and equity.

From Data Collection to Data Release: What Happens?

In today’s world, much scientific data is collected automatically from sensors and processed by computers in real time to produce instant analytic results. People have grown accustomed to instant data and expect information quickly.

At the National Center for Education Statistics (NCES), we are frequently asked why, in a world of instant data, it takes so long to produce and publish data from surveys. Although improvements have been made in the timeliness of federal data releases, there are fundamental differences between data compiled by automated systems and data requested from federal survey respondents. Federal statistical surveys are designed to capture policy-related and research data from a range of targeted respondents across the country, who may not always be willing participants.

This blog is designed to provide a brief overview of the survey data processing framework, but it’s important to understand that the survey design phase is, in itself, a highly complex and technical process. In contrast to a management information system, in which an organization has complete control over data production processes, federal education surveys are designed to represent the entire country and require coordination with other federal, state, and local agencies. After the necessary coordination activities have been concluded, and the response periods for surveys have ended, much work remains to be done before the survey data can be released.

Survey Response

One of the first sources of potential delay is that some jurisdictions or individuals are unable to complete their surveys on time. Unlike opinion polls and online quizzes, which accept anyone who feels like responding (convenience samples), NCES surveys use rigorously formulated samples meant to properly represent specific populations, such as states or the nation as a whole. To ensure proper representation, NCES follows up with nonresponding sampled individuals, education institutions, school districts, and states to secure the maximum possible participation within the sample. Some large jurisdictions also have extensive survey operations of their own to conclude before they can provide information to NCES. The New York City school district, for example, is larger than about two-thirds of all state education systems and must gather information from all of its schools before it can respond; receipt of data from New York City and other large districts is essential to compiling nationally representative data.

Editing and Quality Reviews

Waiting for final survey responses does not mean that survey processing comes to a halt. One of the most important roles NCES plays in survey operations is editing and conducting quality reviews of incoming data, which take place on an ongoing basis. In these quality reviews, a variety of strategies are used to make cost-effective and time-sensitive edits to the incoming data. For example, in the Integrated Postsecondary Education Data System (IPEDS), individual higher education institutions upload their survey responses and receive real-time feedback on responses that are out of range compared to prior submissions or instances where survey responses do not align in a logical way. All NCES surveys use similar logic checks in addition to a range of other editing checks that are appropriate to the specific survey. These checks typically look for responses that are out of range for a certain type of respondent.

Although most checks are automated, some particularly complicated or large responses may require individual review. For IPEDS, the real-time feedback described above is followed by quality review checks that are done after collection of the full dataset. This can result in individualized follow up and review with institutions whose data still raise substantive questions. 
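
To make the idea concrete, here is a minimal sketch of what automated range and logic checks might look like. The field names, tolerance, and rules are hypothetical illustrations, not actual IPEDS edit specifications.

```python
def edit_checks(current: dict, prior: dict) -> list[str]:
    """Flag survey responses that fail simple range or logic checks."""
    flags = []

    # Range check: flag a large swing relative to the prior submission
    # (the 25 percent threshold is hypothetical).
    if prior.get("enrollment"):
        change = abs(current["enrollment"] - prior["enrollment"]) / prior["enrollment"]
        if change > 0.25:
            flags.append(f"enrollment changed {change:.0%} from prior submission")

    # Logic check: component counts should add up to the reported total.
    if current["full_time"] + current["part_time"] != current["enrollment"]:
        flags.append("full-time + part-time does not equal total enrollment")

    return flags

report = {"enrollment": 5200, "full_time": 4000, "part_time": 1100}
print(edit_checks(report, prior={"enrollment": 5000}))
# -> ['full-time + part-time does not equal total enrollment']
```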

Sample Weighting

In order to lessen the burden on the public and reduce costs, NCES collects data from selected samples of the population rather than taking a full census of the entire population for every study. In all sample surveys, a range of additional analytic tasks must be completed before data can be released. One of the more complicated tasks is constructing weights based on the original sample design and survey responses so that the collected data can properly represent the nation and/or states, depending on the survey. These sample weights are designed so that analyses can be conducted across a range of demographic or geographic characteristics and properly reflect the experiences of individuals with those characteristics in the population.

If the survey response rate is too low, a “nonresponse bias analysis” must be completed to ensure that the results will be sufficiently reliable for public use. For longitudinal surveys, such as the Early Childhood Longitudinal Study, multiple sets of weights must be constructed so that researchers using the data can appropriately account for respondents who answered some but not all of the survey waves.
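
As a rough illustration of the logic involved, the sketch below assumes a standard two-step approach: a base weight equal to the inverse of each unit’s selection probability, followed by a cell-level nonresponse adjustment that redistributes the weight of nonrespondents to similar respondents. The probabilities and cells are hypothetical, and actual NCES weighting procedures are considerably more elaborate.

```python
# Hypothetical sample: selection probabilities and response status.
sample = [
    {"id": 1, "p_select": 0.0625, "cell": "urban", "responded": True},
    {"id": 2, "p_select": 0.0625, "cell": "urban", "responded": False},
    {"id": 3, "p_select": 0.125, "cell": "rural", "responded": True},
]

# Step 1: base weight = inverse of the selection probability.
for unit in sample:
    unit["base_weight"] = 1 / unit["p_select"]

# Step 2: within each adjustment cell, inflate respondents' weights to
# cover the weight of nonrespondents in the same cell.
for cell in {u["cell"] for u in sample}:
    in_cell = [u for u in sample if u["cell"] == cell]
    total = sum(u["base_weight"] for u in in_cell)
    responding = sum(u["base_weight"] for u in in_cell if u["responded"])
    for u in in_cell:
        u["final_weight"] = u["base_weight"] * total / responding if u["responded"] else 0.0

print([(u["id"], u["final_weight"]) for u in sample])
# -> [(1, 32.0), (2, 0.0), (3, 8.0)]: unit 1 carries nonresponding unit 2's weight
```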

NCES surveys also include “constructed variables” to facilitate more convenient and systematic use of the survey data. Examples of constructed variables include socioeconomic status and family type. Other types of survey data also require special analytic considerations before they can be released. Student assessment data, such as data from the National Assessment of Educational Progress (NAEP), require a number of highly complex processes to ensure proper estimation for the various populations represented in the results. For example, the standardized scoring of multiple-choice and open-ended items alone can take thousands of hours of design and analysis work.
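
As one illustration, a composite like socioeconomic status is often constructed by standardizing several component measures and averaging the resulting z-scores. The sketch below assumes that simple approach; the components and values are hypothetical, not actual NCES variable definitions.

```python
from statistics import mean, pstdev

# Hypothetical component measures for three respondents.
records = [
    {"educ": 12, "occ": 40, "income": 30_000},
    {"educ": 16, "occ": 65, "income": 80_000},
    {"educ": 14, "occ": 50, "income": 55_000},
]

def zscores(values):
    """Standardize a list of values to mean 0 and (population) SD 1."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Standardize each component, then average the z-scores per respondent.
components = {k: zscores([r[k] for r in records]) for k in ("educ", "occ", "income")}
for i, r in enumerate(records):
    r["ses"] = mean(components[k][i] for k in components)

print([round(r["ses"], 2) for r in records])  # one composite score per respondent
```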

Privacy Protection

Release of data by NCES carries a legal requirement to protect the privacy of our nation’s children. Each NCES public-use dataset undergoes a thorough evaluation to ensure that it cannot be used to identify responses of individuals, whether they are students, parents, teachers, or principals. The datasets must be protected through item suppression, statistical swapping, or other techniques to ensure that multiple datasets cannot be combined in such a way as to identify any individual. This is a time-consuming process, but it is incredibly important to protect the privacy of respondents.
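
One of the techniques named above, item suppression, can be illustrated with a minimal sketch: any published count based on fewer than a threshold number of respondents is withheld. The threshold and table below are hypothetical; actual NCES disclosure-avoidance procedures combine several techniques and are far more rigorous.

```python
# Hypothetical minimum cell size below which counts are suppressed.
MIN_CELL_SIZE = 3

# Hypothetical tabulation of respondent counts by district and grade.
table = {
    ("District A", "grade 5"): 42,
    ("District A", "grade 6"): 2,   # too small: respondents could be identifiable
    ("District B", "grade 5"): 17,
}

# Withhold any cell whose count falls below the threshold.
protected = {
    key: (count if count >= MIN_CELL_SIZE else "suppressed")
    for key, count in table.items()
}

print(protected)  # the 2-respondent cell is released as "suppressed"
```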

Data and Report Release

When the final data have been received and edited, the necessary variables have been constructed, and the privacy protections have been implemented, there is still more that must be done to release the data. The data must be put in appropriate formats with the necessary documentation for data users. NCES reports with basic analyses or tabulations of the data must be prepared. These products are independently reviewed within the NCES Chief Statistician’s office.

Depending on the nature of the report, the Institute of Education Sciences Standards and Review Office may conduct an additional review. After all internal reviews have been conducted, revisions have been made, and the final survey products have been approved, the U.S. Secretary of Education’s office is notified 2 weeks in advance of the pending release. During this notification period, appropriate press release materials and social media announcements are finalized.

Although NCES can expedite some product releases, the work of preparing survey data for release often takes a year or more. NCES strives to balance timeliness with the reliable, high-quality information expected of a federal statistical agency, while also protecting the privacy of our respondents.

By Thomas Snyder