The Conference Circuit Reboots + What AI is Leaving Behind

Mark Schneider, Director of IES | August 24, 2021

Last week, for the first time since the world closed down in March 2020, I attended a mostly in-person conference—the ASU+GSV Summit, held in San Diego. It was an "interesting" experience and highlighted some of the challenges we can expect as we begin to recover from the pandemic. First, allow me to expound on these challenges for those of you contemplating when to rejoin the conference circuit. Then I want to share some of my takeaways from the summit and the implications for education research.

Getting back to normal is hard

As many of you know from your own travels, the airlines are a mess. Every one of my flights had some drama associated with it: missing crew, broken equipment, delays, missed connections (in one case, I made a connection only because my connecting flight at Dallas Fort Worth was even more delayed than my originating flight in DC). The decimation of the taxi industry is now coming back to haunt us, since the fares on Uber and Lyft are often far higher than what taxis used to charge (or still charge if you can find one).

At times, I felt I was in the audience for Pandemic Theatre. I needed proof of a negative COVID test to attend the summit. The original plan was for attendees to take the test within the 72 hours before their arrival, supervised by an online proctor. That system broke down the day it was announced. The conference organizers then required attendees to bring a photo of a negative test result. I have no idea how many people shared photos or how many used photos from outside the 72-hour window.

Although the hotel was littered with signs noting that fully vaccinated individuals need not be masked, conference organizers wanted everyone to wear masks except when on a panel on stage. Since the attendees of a conference like this are generally rule-abiding folk, masking was widespread, though everyone tore off their masks as soon as they made a break to a balcony, patio, or lobby. Social distancing was not in evidence. In the auditorium where the plenary sessions were held and the lights were dim, I noted many people were maskless. More than occasionally, I worried that this was going to go down in history as a super-spreader event.

I believe every in-person panel was also live-streamed. In addition, there were panels that were only virtual—so there were more panels available online than in person. Many attendees told me that they were live-streaming the conference in their hotel rooms. Imagine traveling to a conference to hang out in your hotel room. (Oh, I guess many of us paid for college tuition for our kids who sat in their dorm rooms and live-streamed their classes.)

Enough travelogue—let me share a few substantive observations.

AI and the future of education research

Not surprisingly given the mission of the ASU+GSV Summit, discussions about AI were ubiquitous. The term "AI" has been appropriated for anything involving data, so "AI-this" and "AI-that" showed up all over the conference. Many of those discussions focused on transparency and the equitable uses of AI. These themes are showing up in many different venues across the federal government and in the private sector.

Transparency is, like AI itself, highly valued. As a federal science agency, IES fully supports transparent, open data and open science. But "open, transparent AI" is a new challenge. Questions abound: How can we make something "transparent" that is difficult to express without complex formulas few people understand? What does transparency mean in practice for AI in education research and practice? Should the algorithms and code be made public? Do the data used to train and test algorithms need to be shared? Perhaps we could begin by shifting from "transparent" to "interpretable" AI that humans can understand, interrogate, and ultimately trust if that trust is deserved. There are many challenging technical issues here, but IES must begin the journey, maybe with a short-term goal of including AI interpretability principles in SEER.
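To make that distinction concrete, here is a minimal sketch in Python (using scikit-learn; the data, feature names, and outcome are entirely hypothetical) of what an "interpretable" model can look like in practice: its coefficients can be read, questioned, and checked directly, rather than trusted on faith.

```python
# A minimal sketch of what "interpretable" AI can mean in practice.
# Hypothetical data and feature names; assumes numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 500

# Illustrative features an education model might use.
features = ["prior_gpa", "attendance_rate", "assignments_submitted"]
X = rng.uniform(0, 1, size=(n, len(features)))

# Synthetic outcome built from a known relationship, plus noise.
y = (0.5 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2]
     + rng.normal(0, 0.3, n)) > 1.5

# A logistic regression is interpretable: each coefficient states how a
# feature shifts the predicted odds, so a reviewer can interrogate it.
model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

The point is not that every education model should be a logistic regression, but that a reviewer can interrogate this kind of model without reading its source code.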

There is also the question of equity in the deployment of AI. We know that some interventions work better with some groups of learners than with others. But to what extent are such differential outcomes the result of faulty AI algorithms or biased training data? How do we discover and correct those problems?
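One starting-point diagnostic, sketched below with entirely hypothetical data and group labels, is simply to disaggregate a model's error rates by learner group; a large gap is a signal to dig into the algorithm and its training data.

```python
# A minimal sketch of one equity diagnostic: disaggregating a model's
# error rate by learner group. All data and group labels are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=1)
n = 1000

group = rng.choice(["A", "B"], size=n)        # hypothetical learner groups
true_outcome = rng.uniform(0, 1, n) > 0.5     # what actually happened

# Simulate predictions that are systematically noisier for group B,
# as biased or thin training data might produce.
noise_scale = np.where(group == "A", 0.1, 0.4)
prediction = (true_outcome.astype(float)
              + rng.normal(0.0, 1.0, n) * noise_scale) > 0.5

# A large gap in disaggregated error rates flags a problem worth tracing
# back to the algorithm or its training data.
for g in ("A", "B"):
    mask = group == g
    error_rate = np.mean(prediction[mask] != true_outcome[mask])
    print(f"group {g}: error rate {error_rate:.2f}")
```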

Clearly, IES needs to start moving towards better solutions for transparency and equity in AI. Thoughts or advice welcome.

K–12 vs. higher education

I was struck by how little attention K–12 education received relative to postsecondary education. Some of this, no doubt, was a function of ASU sponsorship—remember that ASU is one of the largest universities in the nation and a leader in reforming higher education practices. But several reliable sources also told me that there is a widespread belief among entrepreneurs that opportunities are much more limited in the K–12 world than in higher education. A harsher variant of that sentiment is that the K–12 world is such a mess that entrepreneurs are afraid to venture into it. (For the record, I am sharing observations, not endorsing any viewpoint!)

Within the higher education panels and presentations, much attention was paid to short-term (usually non-credit) credentials, with a consistent concern for measuring their return on investment through labor market outcomes.

As some of you might know, this was a particular focus of my work at College Measures before I joined IES—so maybe I was just choosing panels and presentations that reinforced my bias. But there was a lot of energy focused on non-credit course-taking.
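For concreteness, here is a minimal sketch of the ROI arithmetic these discussions revolve around; the figures are hypothetical, and real analyses rely on matched wage records rather than round numbers.

```python
# A minimal sketch of the labor-market ROI logic behind these discussions.
# All figures are hypothetical; real analyses rely on matched wage records.

credential_cost = 4_000    # tuition and fees for a short-term credential
earnings_before = 32_000   # median annual earnings before the credential
earnings_after = 38_000    # median annual earnings after completion

annual_premium = earnings_after - earnings_before   # 6,000 per year
years_to_recoup = credential_cost / annual_premium  # ~0.7 years

print(f"Annual earnings premium: ${annual_premium:,}")
print(f"Years to recoup cost: {years_to_recoup:.1f}")
```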

IES and other federal statistical agencies, including the Census Bureau and Bureau of Labor Statistics, have long recognized the growth of these credentials. See here for a discussion of the GEMEnA project (BTW: this project ended before I arrived, so the crazy and distracting mixture of caps and lower-case letters predates me). IES held a technical review panel last October on how to capture more non-credit activities via IPEDS—so we are moving along, albeit slowly. And other efforts, like Credential Engine, the Chamber of Commerce Foundation's T3 initiative, and the joint effort of the National Association of Manufacturers and the National Student Clearinghouse, are moving down the same path. The merged analytic power of Emsi and Burning Glass provides a means of measuring labor market outcomes. IES needs to figure out exactly how it can help push this effort forward.

Obviously, there was a lot more going on at the conference, but these seem to me to be immediate challenges that IES needs to face. As always, I encourage you to share your ideas with me at Mark.schneider@ed.gov.