IES Blog

Institute of Education Sciences

Asian Voices in Education Research: Perspectives from Predoctoral Fellows Na Lor and Helen Lee

The IES Predoctoral Training Programs prepare doctoral students to conduct high-quality education research that advances knowledge within the field of education sciences and addresses issues important to education policymakers and practitioners. In recognition of Asian American and Pacific Islander Heritage Month, we asked two predoctoral scholars who are embarking on their careers as education researchers to share their career journeys, perspectives on diversity and equity in education research, and advice for emerging scholars from underrepresented backgrounds who are interested in pursuing careers in education research. Here is what they shared with us.

 

Na Lor (University of Wisconsin-Madison) is currently a PhD candidate in educational leadership and policy analysis, where she studies inequity in higher education from a cultural perspective.

How did you become interested in a career in education research? How have your background experiences shaped your scholarship and career?

I view education institutions as important sites of knowledge transmission with infinite potential for addressing inequity. In addition, my background as a Hmong refugee and a first-generation scholar from a low-income family informs my scholarship and career interests. My positive and negative experiences growing up in predominantly White spaces also shape the way in which I see the world. Meanwhile, my time spent living abroad and working in the non-profit sector further influences my ideals of improving the human condition. With my training through IES, I look forward to conducting education research with a focus on higher education in collaboration with local schools and colleges to better serve students and families from underserved communities.

In your area of research, what do you see as the most critical areas of need to address diversity and equity and improve the relevance of education research for diverse communities of students and families?

I see ethnic studies, culturally sustaining pedagogies, and experiential learning in postsecondary education as core areas in need of improvement to provide relevant education for an ever-diverse student body. Likewise, I see community college transfer pathways as crucial for addressing and advancing equity. 

What advice would you give to emerging scholars from underrepresented, minoritized groups who are pursuing a career in education research?

Chase your burning questions relentlessly and continuously strengthen your methodological toolkit. Embrace who you are and rely on your lived experience and ways of knowing as fundamental assets that contribute to knowledge formation and the research process. 

 

Helen Lee (University of Chicago) is currently a PhD candidate in the Department of Comparative Human Development where she is studying the impact of racial dialogue and ethnic community engagement on the identity and agency development of Asian American youth.

How did you become interested in a career in education research? How have your background and experiences shaped your scholarship and career?

I first considered a career in education research while completing my Master’s in educational leadership and policy at the University of Michigan-Ann Arbor. I had entered my program in need of a break after working as a classroom teacher, organizer, and community educator in Detroit for five years. During my program, I had the opportunity to reflect on and contextualize my experiences in and around public education. It was also during my program that I first came across scholarship that aligned with my values and spoke to my experiences as a teacher in under-resourced communities and as a first-generation college graduate.

Taking classes with Dr. Carla O’Connor and Dr. Alford Young, working with Dr. Camille Wilson, and engaging with scholarship that counters deficit notions of people of color was a critical turning point for me. The work of these scholars motivated me to pursue a path in education research. Since then, I’ve been fortunate to meet other scholars who conduct community-based and action-oriented research in service of social justice movements. These interactions, along with the opportunities to collaborate with and learn from youth and educators over the years, have sustained my interest in education research and strengthened my commitment to conducting research that promotes more equitable educational policies and practices.

In your area of research, what do you see as the most critical areas of need to address diversity and equity and improve the relevance of education research for diverse communities of students and families?

My current research examines the racial socialization experiences of Asian American youth in relation to their sociopolitical development. This work is motivated by my own experiences as an Asian American, my work with Chinese and Asian American-serving community organizations, and a recognition that Asian American communities are often overlooked in conversations about racism due to pervasive stereotypes.

Education research must be better attuned to the history and current manifestations of racism. That is, research should not only consider the consequences of systemic racism on the educational experiences and outcomes of marginalized communities but also challenge and change these conditions. I believe there is a critical need for scholarship that reimagines and transforms the education system into a more just and humanizing one.

What advice would you give to emerging scholars from underrepresented, minoritized groups who are pursuing a career in education research?

I would provide the following advice:

  • Clarify what your purpose is: the reason why you are engaged in this work. This will help guide the opportunities you pursue or pass on and connect you to the people who can support your development toward these goals. Your purpose will also serve as a beacon to guide you in times of uncertainty.
  • Seek out mentorship from scholars whose work inspires your own. Mentorship may come from other students as well as from those outside of academia. It may stem from collaborations in which you participate or simply through one-time interactions.
  • Be attuned to your strengths and your areas of growth and nurture both accordingly. In retrospect, I could have done a better job of recognizing my own assets and engaging in diverse writing opportunities to strengthen my ability to communicate research across audiences.
  • Continuously put your ideas and research in conversation with the ideas and research of others. This enables growth in important ways—it can open you up to new perspectives and questions as well as strengthen your inquiry and understanding of your findings.
  • Engage in exercises that nurture your creativity and imagination and participate in spaces that sustain your passion for education research. A more just and humanizing education system requires us to think beyond our current realities and to engage in long-term efforts.      

This year, Inside IES Research is publishing a series of blogs showcasing a diverse group of IES-funded education researchers and fellows who are making significant contributions to education research, policy, and practice. For the Asian American and Pacific Islander (AAPI) Heritage Month blog series, we are focusing on AAPI researchers and fellows, as well as researchers who focus on the education of AAPI students.

Produced by Katina Stapleton (Katina.Stapleton@ed.gov), co-Chair of the IES Diversity and Inclusion Council and training program officer for the National Center for Education Research.

Understanding the Co-Development of Language and Behavior Disorders in the Context of an Early Career Grant

The Early Career Development and Mentoring Program in Special Education provides support for investigators in the early stages of their academic careers to conduct an integrated research and career development plan focused on learners with or at risk for disabilities. Dr. Jason Chow is an assistant professor of special education at the University of Maryland, College Park and principal investigator of a current Early Career grant funded by NCSER. Dr. Chow’s research focuses on the comorbidity of language and behavior disorders in school-age children as well as teacher and related service provider training in behavior management. We recently caught up with Dr. Chow to learn more about his career, the experiences that have shaped it, and the lessons he’s learned from the Early Career grant. This is what he shared with us.

How did your experiences shape your interest in a career in special education?


I first became interested in education when I started substituting for paraprofessionals in special education programs over winter and summer breaks in college, which I really enjoyed. That experience, along with a class I took in my senior year on disability in the media and popular culture, got me interested in the field of special education. After I graduated, I ended up applying for a full-time position as a paraprofessional in a program supporting high schoolers with emotional and behavioral disorders (EBD).

My experiences as a paraprofessional definitely shaped my career path. As a substitute paraprofessional in college, I was surprised that my job was to support students with the most intensive needs even though I had the least amount of classroom training. That made me recognize the need for research-based training and supports for related service providers and got me interested in different factors that contribute to decision making in school systems. Another memorable experience occurred when I was working in the support program for students with EBD. All our students had the accommodation to be able to come to our room at any time of the day as needed for a check in or a break. I was alarmed by how often students needed a break because of things teachers said or did to upset them or make them feel singled out. I was also coaching several sports at the time and saw first-hand how strong, positive relationships with the players were vital. These experiences got me interested in teacher-student relationships, how important positive interactions and experiences can be, and the need for general education teachers to receive training on working with students with disabilities. Ultimately, my work as a paraprofessional supporting kids with EBD also helped shape my interest in determining how language and communication can facilitate prosocial development, which led to my Early Career grant.

What are the goals of your NCSER Early Career grant?

My project focuses on better understanding the co-development of language and behavior in children at risk for language disorders, behavior disorders, or both in early elementary school. Many studies have examined the concurrent and developmental relations between language and behavior, but they are typically done using extant datasets. The goal of this project is to conduct a prospective study aimed at measuring both constructs in several different ways (such as direct observations, interviews, and teacher report) to provide a more robust analysis of how each of these constructs and assessment types are related over time. This type of research could inform the types of interventions provided to children with EBD and, more specifically, the need to address language impairments alongside behavior to improve academic outcomes for these learners.

How has the Early Career grant helped your development as a researcher?

This project has taught me a lot about the realities of doing school-based research and managing a grant. First, I have learned a great deal about budgeting. For example, I proposed to recruit a sample based on a power analysis I conducted for the grant application. But in my original budget, I did not consider that I would need to screen about triple the number of children I estimated in order to enroll my planned sample. I have also learned a lot about hiring, human resources, procurement, and university policies that are directly and indirectly involved in the process of conducting research. Also, like many others, my project was impacted by pandemic-related school closures, and I have learned how to be flexible under unpredictable circumstances. More specifically, we had intended to determine how developmental trajectories of language and behavior were associated with academic outcomes, but we lost our outcome assessment timepoint due to the pandemic. Fortunately, we are working collaboratively with our partner schools to use district-level data to approximate some of these intended analyses. I’m thankful that I had the opportunity to learn and develop my skills in the context of a training grant.
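To put the budgeting lesson in concrete terms, here is a minimal back-of-the-envelope sketch, using hypothetical eligibility and consent rates rather than the project's actual figures, of how a screening target can balloon relative to the enrolled sample a power analysis calls for:

```python
import math

# Hypothetical sketch: the screening pool must exceed the enrolled
# sample by the inverse of the combined screening yield.
def screening_target(enrolled_n: int, eligibility_rate: float, consent_rate: float) -> int:
    """Number of children to screen in order to enroll `enrolled_n` children."""
    combined_yield = eligibility_rate * consent_rate
    return math.ceil(enrolled_n / combined_yield)

# Illustrative rates only: a ~33% combined yield implies screening
# roughly triple the planned sample.
print(screening_target(120, eligibility_rate=0.55, consent_rate=0.60))  # 364
```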

What advice would you give to other early career researchers, including those who may be interested in applying for an Early Career grant?

Reach out to other early career grantees and ask for their proposals. (I am happy to share mine!) Just be aware that the RFA has changed over time—including a substantial increase in funds—so the more recent the proposal, the better. Also, in terms of setting up a strong mentorship team for your career development plan, reach out to the people whom you see as the best to support your career development (no matter how busy you think they are or if you think they are too senior). In talking with other folks, I’ve learned that generally people are very willing to support the next generation of researchers!

This interview blog is part of a larger IES blog series on diversity, equity, inclusion, and accessibility (DEIA) in the education sciences. It was produced by Katie Taylor (Katherine.Taylor@ed.gov), program officer for the Early Career Development and Mentoring program at the National Center for Special Education Research.

Using Cost Analysis to Inform Replicating or Scaling Education Interventions

A key challenge when conducting cost analysis as part of an efficacy study is producing information that can be useful for addressing questions related to replicability or scale. When the study is a follow-up conducted many years after implementation, the need to collect data retrospectively introduces additional complexities. As part of a recent follow-up efficacy study, Maya Escueta and Tyler Watts of Teachers College, Columbia University worked with the IES-funded Cost Analysis in Practice (CAP) project team to plan a cost analysis that would meet these challenges. This guest blog describes their process and lessons learned and provides resources for other researchers.

What was the intervention for which you estimated costs retrospectively?

We estimated the costs of a pre-kindergarten intervention, the Chicago School Readiness Project (CSRP), which was implemented in nine Head Start Centers in Chicago, Illinois for two cohorts of students in 2004–05 and 2005–06. CSRP was an early childhood intervention that targeted child self-regulation by attempting to overhaul teacher approaches to behavioral management. The intervention placed licensed mental health clinicians in classrooms, and these clinicians worked closely with teachers to reduce stress and improve the classroom climate. CSRP showed signs of initial efficacy on measures of preschool behavioral and cognitive outcomes, but more recent results from the follow-up study showed mainly null effects for the participants in late adolescence.

The IES research centers require a cost study for efficacy projects, so we faced the distinct challenge of conducting a cost analysis for an intervention nearly 20 years after it was implemented. Our goal was to render the cost estimates useful for education decision-makers today to help them consider whether to replicate or scale such an intervention in their own context.

What did you learn during this process?

When enumerating costs and considering how to implement an intervention in another context or at scale, we learned four distinct lessons.

1. Consider how best to scope the analysis to render the findings both credible and relevant given data limitations.

In our case, because we were conducting the analysis 20 years after the intervention was originally implemented, the limited availability of reliable data—a common challenge in retrospective cost analysis—posed two challenges. We had to consider the data we could reasonably obtain and what that would mean for the type of analysis we could credibly conduct. First, because no comprehensive cost analysis was conducted at the time of the intervention’s original implementation (to our knowledge), we could not accurately collect costs on the counterfactual condition. Second, we also lacked reliable measures of key outcomes over time, such as grade retention or special education placement that would be required for calculating a complete cost-benefit analysis. This meant we were limited in both the costs and the effects we could reliably estimate. Due to these data limitations, we could only credibly conduct a cost analysis, rather than a cost-effectiveness analysis or cost-benefit analysis, which generally produce more useful evidence to aid in decisions about replication or scale.
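As a quick reference for the distinction drawn here, the three analysis types differ roughly as follows. This is a textbook-style summary sketched in code, not the formulas from this particular study:

```python
# Standard distinctions among the three analysis types (illustrative only)
def cost_per_student(total_cost: float, n_students: int) -> float:
    """Cost analysis: what does the intervention cost per student?"""
    return total_cost / n_students

def cost_effectiveness_ratio(incremental_cost: float, effect_size: float) -> float:
    """Cost-effectiveness analysis: cost per unit of measured effect.
    Requires credible costs for both treatment and counterfactual conditions."""
    return incremental_cost / effect_size

def net_benefit(monetized_benefits: float, total_cost: float) -> float:
    """Cost-benefit analysis: benefits minus costs.
    Requires reliably measured, monetized outcomes over time."""
    return monetized_benefits - total_cost
```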

Because of this limitation, and to provide useful information for decision-makers who are considering implementing similar interventions in their current contexts, we decided to develop a likely present-day implementation scenario informed by the historical information we collected from the original implementation. We’ll expand on how we did this and the decisions we made in the following lessons.

2. Consider how to choose prices to improve comparability and to account for availability of ingredients at scale.

We used national average prices for all ingredients in this cost analysis to make the results more comparable to other cost analyses of similar interventions that also use national average prices. This involved some careful thought about how to price ingredients that were unique to the time or context of the original implementation, specific to the intervention, or in low supply. For example, when identifying prices for personnel, we either used current prices (national average salaries plus fringe benefits) for personnel with equivalent professional experience, or we inflation-adjusted the original consulting fees charged by personnel in highly specialized roles. This approach assumes that personnel who are qualified to serve in specialized roles are available on a wider scale, which may not always be the case.
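For the inflation adjustment, the usual calculation is a simple price-index ratio; a minimal sketch, where the CPI values and the fee are placeholders rather than the figures used in the study:

```python
def inflation_adjust(past_price: float, cpi_then: float, cpi_now: float) -> float:
    """Convert a historical price to current dollars via a CPI ratio."""
    return past_price * (cpi_now / cpi_then)

# Placeholder example: a $5,000 consulting fee from the mid-2000s
print(round(inflation_adjust(5_000, cpi_then=195.3, cpi_now=296.8), 2))  # 7598.57
```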

In the original implementation of CSRP, spaces were rented for teacher behavior management workshops, stress reduction workshops, and initial training of the mental health clinicians. For our cost analysis, we assumed that using available school facilities would be more likely and tenable when implementing CSRP at large scale. Instead of using rental prices, we valued the physical space needed to implement CSRP by using amortized construction costs of school facilities (for example, cafeteria/gym/classroom). We obtained these from the CAP Project’s Cost of Facilities Calculator.
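Amortizing a construction cost typically relies on the standard annualization formula, which converts a one-time outlay into an equivalent annual cost over the facility's lifetime. A minimal sketch of the general formula with hypothetical cost, interest rate, and lifetime (the Cost of Facilities Calculator provides ready-made estimates):

```python
def annualized_facility_cost(construction_cost: float, interest_rate: float,
                             lifetime_years: int) -> float:
    """Amortize a one-time cost into an equivalent annual cost:
    C * r / (1 - (1 + r) ** -n)."""
    r = interest_rate
    return construction_cost * r / (1 - (1 + r) ** -lifetime_years)

# Hypothetical: a $2.5M classroom wing at 3% interest over a 50-year life
annual = annualized_facility_cost(2_500_000, 0.03, 50)
print(round(annual))  # ~97,164 per year, then prorated to the hours the space is used
```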

3. Consider how to account for ingredients that may not be possible to scale.

Some resources are simply not available in similar quality at large scale. For example, the Principal Investigator (PI) for the original evaluation oversaw the implementation of the intervention, was highly invested in the fidelity of implementation, was willing to dedicate significant time, and created a culture that was supportive of the pre-K instructors to encourage buy-in for the intervention. In such cases, it is worth considering what her equivalent role would be in a non-research setting and how scalable this scenario would be. A potential proxy for the PI in this case may be a school principal or leader, but how much time could this person reasonably dedicate, and how similar would their skillset be?  

4. Consider how implementation might work in institutional contexts required for scale.

Institutional settings might necessarily change when taking an intervention to scale. In larger-scale settings, there may be other ways of implementing the intervention that might change the quantities of personnel and other resources required. For example, a pre-K intervention such as CSRP at larger scale may need to be implemented in various types of pre-K sites, such as public schools or community-based centers in addition to Head Start centers. In such cases, the student/teacher ratio may vary across different institutional contexts, which has implications for the per-student cost. If delivered in a manner where the student/teacher ratio is higher than in the original implementation, the intervention may be less costly, but may also be less impactful. This highlights the importance of the institutional setting in which implementation is occurring, and how this might affect the use and costs of resources.
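A toy illustration of this ratio effect, using hypothetical figures rather than CSRP's estimates:

```python
# Hypothetical: the same classroom-level personnel cost spread across
# different student/teacher ratios yields different per-student costs.
classroom_personnel_cost = 60_000  # placeholder annual cost per classroom

for students_per_teacher in (10, 15, 20):
    per_student = classroom_personnel_cost / students_per_teacher
    print(f"{students_per_teacher} students/teacher -> ${per_student:,.0f} per student")
```

A higher ratio lowers the per-student cost, but, as noted above, possibly at the expense of impact.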

How can other researchers get assistance in conducting a cost analysis?

In conducting this analysis, we found the following CAP Project tools to be especially helpful (found on the CAP Resources page and the CAP Project homepage):

  • The Cost of Facilities Calculator: A tool that helps estimate the cost of physical spaces (facilities).
  • Cost Analysis Templates: Semi-automated Excel templates that support cost analysis calculations.
  • CAP Project Help Desk: Real-time guidance from a member of the CAP Project team. You will receive help in troubleshooting challenging issues with experts who can share specific resources. Submit a help desk request by visiting this page.

Maya Escueta is a Postdoctoral Associate in the Center for Child and Family Policy at Duke University where she researches the effects of poverty alleviation policies and parenting interventions on the early childhood home environment.

Tyler Watts is an Assistant Professor in the Department of Human Development at Teachers College, Columbia University. His research focuses on the connections between early childhood education and long-term outcomes.

For questions about the CSRP project, please contact the NCER program officer, Corinne.Alfeld@ed.gov. For questions about the CAP project, contact Allen.Ruby@ed.gov.

 

You’ve Been Asked to Participate in a Study

Dear reader,

You’ve been asked to participate in a study.

. . . I know what you’re thinking. Oh, great. Another request for my time. I am already so busy.

Hmm, if I participate, what is my information going to be used for? Well, the letter says that collecting data from me will help researchers study education, and it says something else about how the information I provide would “inform education policy . . .”

But what does that mean?

If you’re a parent, student, teacher, school administrator, or district leader, you may have gotten a request like this from me or a colleague at the National Center for Education Statistics (NCES). NCES is one of 13 federal agencies that conduct survey and assessment research to help federal, state, and local policymakers better understand public needs and challenges. It is the U.S. Department of Education’s (ED’s) statistical agency and fulfills a congressional mandate to collect, collate, analyze, and report statistics on the condition of American education. The law also directs NCES to do the same for education across the globe.

But how does my participation in a study actually support the role Congress has given NCES?

Good question. When NCES conducts a study, participants are asked to provide information about themselves, their students or child/children, teachers, households, classrooms, schools, colleges, or other education providers. What exactly you will be asked about is based on many considerations, including previous research or policy needs. For example, maybe a current policy might be based on results from an earlier study, and we need to see if the results are still relevant. Maybe the topic has not been studied before and data are needed to determine policy options. In some cases, Congress has charged NCES with collecting data for them to better understand education in general.

Data collected from participants like you are combined so that research can be conducted at the group level. Individual information is not the focus of the research. Instead, NCES is interested in the experiences of groups of people or groups of institutions—like schools—based on the collected data. To protect respondents, personally identifiable information like your name (and other information that could identify you personally) is removed before data are analyzed and is never provided to others. This means that people who participate in NCES studies are grouped in different ways, such as by age or type of school attended, and their information is studied to identify patterns of experiences that people in these different groups may have had.
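The idea can be pictured with a small sketch. The column names and values below are invented for illustration; they are not NCES data, and this is not NCES's actual processing pipeline:

```python
import pandas as pd

responses = pd.DataFrame({
    "name":        ["A. Lee", "B. Ortiz", "C. Smith", "D. Park"],  # PII
    "school_type": ["public", "private", "public", "public"],
    "age_group":   ["14-15", "16-17", "16-17", "14-15"],
    "weekly_homework_hours": [5, 8, 6, 4],
})

# Personally identifiable information is removed before analysis ...
deidentified = responses.drop(columns=["name"])

# ... and responses are studied at the group level, never individually
group_patterns = deidentified.groupby(["school_type", "age_group"]).mean(numeric_only=True)
print(group_patterns)
```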

Let’s take a look at specific examples that show how data from NCES studies provide valuable information for policy decisions.

When policymakers are considering how data can inform policy—either in general or for a specific law under consideration—data from NCES studies play an important role. For example, policymakers concerned that students in their state/district/city often struggle to pay for college may be interested in this question:

“What can education data tell me about how to make college more affordable?”

Or policymakers further along in the law development process might have more specific ideas about how to help low-income students access college. They may have come across research linking programs such as dual enrollment—when high school students take college courses—to college access for underrepresented college students. An example of this research is provided in the What Works Clearinghouse (WWC) dual-enrollment report produced by ED’s Institute of Education Sciences (IES), which shows that dual-enrollment programs are effective at increasing students’ access to and enrollment in college and attainment of degrees. This was found to be the case especially for students typically underrepresented in higher education.

Then, these policymakers might need more specific questions answered about these programs, such as:

“What is the benefit of high school students from low-income households also taking college courses?”

Thanks to people who participate in NCES studies, we have the data to address such policy questions. Rigorous research using data from large datasets, compiled from many participants, can be used to identify differences in outcomes between groups. In the case of dual-enrollment programs, college outcomes for dual-enrollment participants from low-income households can be compared with those of dual-enrollment participants from higher-income households, and possible causes of those differences can be investigated.

The results of these investigations may then inform enactment of laws or creation of programs to support students. In the case of dual enrollment, grant programs might be set up at the state level for districts and schools to increase students’ local access to dual-enrollment credit earning.

This was very close to what happened in 2012, when I was asked by analysts in ED’s Office of Planning, Evaluation, and Policy Development to produce statistical tables with data on students’ access to career and technical education (CTE) programs. Research, as reviewed in the WWC dual-enrollment report, was already demonstrating the benefits of dual enrollment for high school students. Around 2012, ED was considering a policy that would fund the expansion of dual enrollment specifically for CTE. The reason I was asked to provide tables on the topic was my understanding of two important NCES studies, the Education Longitudinal Study of 2002 (ELS:2002) and the High School Longitudinal Study of 2009 (HSLS:09). Data provided by participants in those studies were ideal for studying the question. The tables were used to evaluate policy options. Based on the results, ED, through the President, made a budget request to Congress to support dual-enrollment policies. Ultimately, dual-enrollment programs were included in the Strengthening Career and Technical Education for the 21st Century Act (Perkins V).  

The infographic below shows that this scenario—in which NCES data provided by participants like you were used to provide information about policy—has happened on different scales for different policies many times over the past few decades. The examples included are just some of those from the NCES high school longitudinal studies. NCES data have been used countless times in the agency’s 154-year history to improve education for American students. Check out the full infographic (PDF) with other examples.


Excerpt of full infographic showing findings and actions for NCES studies on Equity, Dropout Prevention, and College and Career Readiness


However, it’s not always the case that a direct line can be drawn between data from NCES studies and any one policy. Research often informs policy indirectly by educating policymakers and the public they serve on critical topics. Sometimes, as with the dual-enrollment and CTE research question I investigated, it can take time before a policy gets enacted or a new program rolls out. This does not lessen the importance of the research, nor the vital importance of the data participants provide that underpin it.

The examples in the infographic represent experiences of actual individuals who took the time to tell NCES about themselves by participating in a study.  

If you are asked to participate in an NCES study, please consider doing so. People like you, schools like yours, and households in your town do matter—and by participating, you are helping to inform decisions and improve education across the country.

 

By Elise Christopher, NCES

Measuring “Traditional” and “Non-Traditional” Student Success in IPEDS: Data Insights from the IPEDS Outcome Measures (OM) Survey Component

This blog post is the second in a series highlighting the Integrated Postsecondary Education Data System (IPEDS) Outcome Measures (OM) survey component. The first post introduced a new resource page that helps data reporters and users understand OM and how it compares to the Graduation Rates (GR) and Graduation Rates 200% (GR200) survey components. Using data from the OM survey component, this post provides key findings about the demographics and college outcomes of undergraduates in the United States and is designed to spark further study of student success using OM data.

What do Outcome Measures cohorts look like?

OM collects student outcomes for all entering degree/certificate-seeking undergraduates, including non-first-time (i.e., transfer-in) and part-time students. Students are separated into eight subcohorts by entering status (i.e., first-time or non-first-time), attendance status (i.e., full-time or part-time), and Pell Grant recipient status.[1] Figure 1 shows the number and percentage distribution of degree/certificate-seeking undergraduates in each OM subcohort from 2009–10 to 2012–13, by institutional level.[2]
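The eight subcohorts are simply the cross of those three two-level statuses; a quick sketch (the labels are illustrative, not IPEDS variable names):

```python
from itertools import product

entering   = ("first-time", "non-first-time")
attendance = ("full-time", "part-time")
pell       = ("Pell recipient", "non-Pell recipient")

# 2 x 2 x 2 = 8 OM subcohorts
subcohorts = [" / ".join(combo) for combo in product(entering, attendance, pell)]
for label in subcohorts:
    print(label)
assert len(subcohorts) == 8
```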

Key takeaways:

  • Across all cohort years, the majority of students were not first-time, full-time (FTFT) students, a group typically referred to as “traditional” college students. At 2-year institutions, 36 percent of Pell Grant recipients and 16 percent of non-Pell Grant recipients were FTFT in 2012–13. At 4-year institutions, 43 percent of Pell Grant recipients and 44 percent of non-Pell Grant recipients were FTFT in 2012–13.
  • Pell Grant recipient cohorts have become less “traditional” over time. In 2012–13, some 36 percent of Pell Grant recipients at 2-year institutions were FTFT, down 5 percentage points from 2009–10 (41 percent). At 4-year institutions, 43 percent of Pell Grant recipients were FTFT in 2012–13, down 5 percentage points from 2009–10 (48 percent).

Figure 1. Number and percentage distribution of degree/certificate-seeking undergraduate students in the adjusted cohort, by Pell Grant recipient status, institutional level, and entering and attendance status: 2009–10 to 2012–13 adjusted cohorts

Stacked bar chart showing the number and percentage distribution of degree/certificate-seeking undergraduate students by Pell Grant recipient status (recipients and non-recipients), institutional level (2-year and 4-year), and entering and attendance status (first-time/full-time, first-time/part-time, non-first-time/full-time, and non-first-time/part-time) for 2009–10 to 2012–13 adjusted cohorts

NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component final data (2017–19) and provisional data (2020).


What outcomes does Outcome Measures collect?

The OM survey component collects students’ highest credential earned (i.e., certificate, associate’s, or bachelor’s) at 4,[3] 6, and 8 years after entry. Additionally, for students who did not earn a credential by the 8-year status point, the survey component collects an enrollment status outcome (i.e., still enrolled at the institution, enrolled at another institution, or enrollment status unknown). Figure 2 shows these outcomes for the 2012–13 adjusted cohort.
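One way to picture how a single student's record resolves at the 8-year status point, with hypothetical field names rather than OM's actual reporting format:

```python
# Rank credentials so the highest one earned is reported
CREDENTIAL_RANK = {"certificate": 1, "associate's": 2, "bachelor's": 3}

def outcome_at_8_years(credentials_earned: list[str], enrollment_status: str | None) -> str:
    """Report the highest credential earned; otherwise fall back to the
    8-year enrollment status (still enrolled, enrolled elsewhere, or unknown)."""
    if credentials_earned:
        return max(credentials_earned, key=CREDENTIAL_RANK.__getitem__)
    return enrollment_status or "enrollment status unknown"

print(outcome_at_8_years(["certificate", "associate's"], None))   # -> associate's
print(outcome_at_8_years([], "enrolled at another institution"))
```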

Key takeaways:

  • The percentage of students earning an award (i.e., certificate, associate’s, or bachelor’s) was higher at each successive status point, with the greatest change occurring between the 4- and 6-year status points (a 7-percentage-point increase, from 32 percent to 39 percent).
  • At the 8-year status point, more than a quarter of students were still enrolled in higher education: 26 percent had “transferred-out” to enroll at another institution and 1 percent were still enrolled at their original institution. This enrollment status outcome fills an important gap left by the GR200 survey component, which does not collect information on students who do not earn an award 8 years after entry.

Figure 2. Number and percentage distribution of degree/certificate-seeking undergraduate students, by award and enrollment status and entry status point: 2012–13 adjusted cohort

Waffle chart showing award status (certificate, associate’s, bachelor’s, and did not receive award) and enrollment status (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students, by status point (4-year, 6-year, and 8-year) for 2012–13 adjusted cohort

NOTE: One square represents 1 percent. This figure presents data collected from Title IV degree-granting institutions in the United States.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component provisional data (2020).


How do Outcome Measures outcomes vary across student subgroups?

Every data element collected by the OM survey component (e.g., cohort counts, outcomes by time after entry) can be broken down into eight subcohorts based on entering, attendance, and Pell Grant recipient statuses. In addition to these student characteristics, data users can also segment these data by key institutional characteristics such as sector, Carnegie Classification, special mission (e.g., Historically Black College or University), and region, among others.[4] Figure 3 displays the status of degree/certificate-seeking undergraduates 8 years after entry by each student subcohort within the broader 2012–13 degree/certificate-seeking cohort.

Key takeaways:

  • Of the eight OM subcohorts, FTFT non-Pell Grant recipients had the highest rate of earning an award or still being enrolled 8 years after entry. Among this subcohort, 18 percent had an unknown enrollment status 8 years after entry.
  • Among both Pell Grant recipients and non-Pell Grant recipients, full-time students had a higher rate than did part-time students of earning an award or still being enrolled 8 years after entry.
  • Of the eight subcohorts, first-time, part-time (FTPT) students had the lowest rates of earning a bachelor’s degree: 1 percent of FTPT Pell Grant recipients and 2 percent of FTPT non-Pell Grant recipients had earned a bachelor’s degree by the 8-year status point.

Figure 3. Number and percentage distribution of degree/certificate-seeking undergraduate students 8 years after entry, by Pell Grant Recipient status, entering and attendance status, and award and enrollment status: 2012–13 adjusted cohort

Horizontal stacked bar chart showing award (certificate, associate’s, and bachelor’s) and enrollment statuses (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students by Pell Grant recipient status (recipients and non-recipients), institutional level (2-year and 4-year), and entering and attendance status (first-time/full-time, first-time/part-time, non-first-time/full-time, and non-first-time/part-time) for 2012–13 adjusted cohort

 

NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component provisional data (2020).


How do Outcome Measures outcomes vary over time?

OM data are comparable across four cohort years.[5] Figure 4 shows outcomes of degree/certificate-seeking undergraduates 8 years after entry from the 2009–10 cohort through the 2012–13 cohort for so-called “traditional” (i.e., FTFT) and “non-traditional” (i.e., non-FTFT) students.

Key takeaways:

  • For both traditional and non-traditional students, the percentage of students earning an award was higher for the 2012–13 cohort than for the 2009–10 cohort, climbing from 47 percent to 51 percent for traditional students and from 32 percent to 35 percent for non-traditional students.
  • The growth in award attainment for traditional students was driven by the share of students earning bachelor’s degrees (30 percent for the 2009–10 cohort vs. 35 percent for the 2012–13 cohort).
  • The growth in award attainment for non-traditional students was driven by the share of students earning both associate’s degrees (15 percent for the 2009–10 cohort vs. 16 percent for the 2012–13 cohort) and bachelor’s degrees (13 percent for the 2009–10 cohort vs. 15 percent for the 2012–13 cohort).

Figure 4. Number and percentage distribution of degree/certificate-seeking undergraduate students 8 years after entry, by first-time, full-time (FTFT) status and award and enrollment status: 2009–10 to 2012–13 adjusted cohorts

Stacked bar chart showing award status (certificate, associate’s, and bachelor’s) and enrollment status (still enrolled at institution, enrolled at another institution, and enrollment status unknown) of degree/certificate-seeking undergraduate students 8 years after entry by first-time, full-time status (traditional or first-time, full-time students and non-traditional or non-first-time, full-time students) for 2009–10 to 2012–13 adjusted cohorts

NOTE: This figure presents data collected from Title IV degree-granting institutions in the United States. “Non-traditional” (i.e., non-first-time, full-time) students include first-time, part-time, non-first-time, full-time, and non-first-time, part-time subcohorts. Percentages may not sum to 100 due to rounding.

SOURCE: U.S. Department of Education, National Center for Education Statistics, Integrated Postsecondary Education Data System (IPEDS) Outcome Measures component final data (2017–19) and provisional data (2020).


To learn more about the IPEDS OM survey component, visit the Measuring Student Success in IPEDS: Graduation Rates (GR), Graduation Rates 200% (GR200), and Outcome Measures (OM) resource page and the OM survey component webpage. Go to the IPEDS Use the Data page to explore IPEDS data through easy-to-use web tools, access data files to conduct your own analyses like those presented in this blog post, or view OM web tables.  

By McCall Pitcher, AIR


[1] The Federal Pell Grant Program (Higher Education Act of 1965, Title IV, Part A, Subpart I, as amended) provides grant assistance to eligible undergraduate postsecondary students with demonstrated financial need to help meet education expenses.

[2] Due to the 8-year measurement lag between initial cohort enrollment and student outcome reporting for the Outcome Measures survey component, the most recent cohort for which data are publicly available is 2012–13. Prior to the 2009–10 cohort, OM did not collect cohort subgroups by Pell Grant recipient status. Therefore, this analysis includes data only for the four most recent cohorts.

[3] The 4-year status point was added in the 2017–18 collection.

[4] Data users can explore available institutional variables on the IPEDS Use the Data webpage.

[5] For comparability purposes, this analysis relies on data from the 2017–18 collection (reflecting the 2009–10 adjusted cohort) through the 2020–21 collection (reflecting the 2012–13 adjusted cohort). Prior to the 2017–18 collection, OM cohorts were based on a fall term for academic reporters and a full year for program reporters.