When I first met Margaret Spellings in 2004, she was the Secretary of Education, and I was an anxious candidate for the position of Commissioner of Education Statistics. At the time, I was working as a professor of political science at Stony Brook University, where I studied education innovations and the role of information in helping parents and students find schools that better matched their interests and needs. I assumed the interview would focus on K-12 education. Instead, Secretary Spellings wanted to know how I was going to fix IPEDS—the Integrated Postsecondary Education Data System, a system I had never even heard of!
Her motivation for the questioning was simple: she had a daughter who was looking at colleges and found that NCES had very few resources that she could use to help in the selection process. Of course, she could have done what generations of well-connected, highly educated parents do—call friends, look at a broad range of college rankings, or make a crude judgment based on name recognition. The interview, quite frankly, was a disaster, followed by my sojourn to a local bar for a stiff drink.
To my surprise, Secretary Spellings signed off on my appointment and I spent the next three years as Commissioner of Education Statistics. One unanticipated consequence of that interview was a radical broadening of my own interests away from K-12 education and toward higher education—and, in particular, how to get better measures of college student success. I became a passionate supporter of modernizing IPEDS and, indeed, argued many times and in many venues—before Congress and elsewhere—for replacing much of IPEDS with a student-level data system.
The politics surrounding student-level data systems are intense, often centered on the protection of privacy and whether the federal government should have such detailed information about students. Yet the power of student-level data to better understand and improve both consumer information and public policy is undeniable. The debate between these two positions has still not been resolved. Almost 20 years after my initial interview, IPEDS is still the federal government's primary data repository for postsecondary education, and student-level data continues to be elusive.
While someday much of IPEDS will be replaced with a more modern data collection, IES and NCES continue to work to improve it. For now, I would like to celebrate one major improvement that I fear has mostly gone unnoticed: the Outcome Measures component of IPEDS launched in 2016.
IPEDS, as an old legacy system, had up until that point focused on "traditional" college students—that is, first-time, full-time students (in IPEDS parlance, "FTFTs"). Indeed, that focus was codified in the Student Right-to-Know Act of 1990—and despite some changes in the Higher Education Opportunity Act of 2008, the FTFT focus remained mostly intact.
In the "Leave it to Beaver" world of the historic IPEDS, college students begin college as full-time students in the fall and graduate in the spring a few years later. Much of IPEDS is still built largely around this view of college students. But here's the problem: Over time, FTFTs have come to represent an ever-smaller percentage of college students. At 2-year colleges, they represent less than a third of students, and at 4-year institutions, less than half. And historically, IPEDS graduation rate data, important for understanding how well students were doing at each college and university across the nation, covered only FTFTs, which, as just noted, represent a shrinking share of college enrollments.
In 2016, IPEDS introduced its Outcome Measures survey. OM collects student graduation rates for the majority of entering degree/certificate-seeking undergraduates, including transfer-in and part-time students. Students are separated into eight reporting groups by entering status (first-time or not first-time), attendance status (full-time or part-time), and Pell Grant recipient status. This gives far greater insight into the success rates of a far larger percentage of students than traditional IPEDS.
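The eight reporting groups are simply the cross of three two-way student characteristics. A minimal sketch of that breakdown (in Python; the labels are illustrative shorthand, not the official IPEDS field names):

```python
from itertools import product

# The three binary dimensions OM uses to classify entering
# degree/certificate-seeking undergraduates (labels are illustrative).
ENTERING = ("first-time", "non-first-time")
ATTENDANCE = ("full-time", "part-time")
PELL = ("Pell recipient", "non-Pell")

# Crossing the three dimensions enumerates all eight OM reporting groups.
reporting_groups = list(product(ENTERING, ATTENDANCE, PELL))

for group in reporting_groups:
    print(" / ".join(group))

# 2 x 2 x 2 = 8 groups in total.
assert len(reporting_groups) == 8
```

Because every entering student falls into exactly one cell of this cross, the OM groups together cover the whole entering cohort rather than just the FTFT slice.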
Different groups of students reported by OM can experience substantial differences in graduation rates across colleges and universities—information that was not available before the Outcome Measures survey. It necessarily takes eight years to collect data on the 8-year graduation rates for each of the cohorts OM tracks. Add to that a couple of years to collect, clean, and compute different data elements, and the most recent Outcome Measures data reflect the experience of the 2012–13 cohort of students.
Sorry for such a long setup, but here's why these data matter. Consider "non-first-time, full-time entering" students (in plain English, full-time transfer students). There were more than 1.9 million such students in the 2012-13 cohort. Only through the OM survey can we begin to document the wide variation in this cohort's graduation rates across the nation's colleges and universities. The median eight-year graduation rate for full-time transfer students was 57 percent—not bad, all things considered. But we know that medians hide lots of information. Indeed, eight-year graduation rates range from 0 (that's right: eight years after starting, 35 schools reported that not a single full-time transfer student received a certificate or a degree) to 100 percent.
There were even more part-time transfer students, totaling over 2.35 million. Remember, before OM there was no information available about the success of these (literally) millions of students. Here, the range is equally large, with close to 150 campuses reporting a zero percent graduation rate for part-time transfer students after eight years and about 70 sporting 100 percent graduation rates. The median graduation rate is about 20 percentage points lower than that of full-time transfer students (36 percent versus 57 percent).
Of course, variation in student graduation rates is driven by many factors, but students are investing time and money when they enroll in a college. Probably few (if any) enroll expecting not to graduate. Obviously, the probability of successful completion should be widely available and widely used by the millions of students who select colleges, whether as beginning students or as transfer students, and whether full-time or part-time. And IPEDS now makes these data available.
NCES staff are doing important work that was practically a fantasy when IES was first established 20 years ago. Yet, when I go to conferences, I all too often hear the same old complaint about how we once calculated graduation rates—a problem long solved!
As part of our 20th Anniversary year, IES has embarked on an investigation of how we can better communicate with our stakeholders. While they say you never get a second chance to make a first impression, our work to improve IPEDS—and to spread the word about those improvements—forges on.
As always, feel free to reach out to me at email@example.com.