A Conversation about the Learning Sciences and Human-AI Interaction with Outstanding Predoctoral Fellow Ken Holstein

Each year, the Institute of Education Sciences (IES) recognizes an outstanding fellow from its Predoctoral Interdisciplinary Research Training Programs in the Education Sciences for academic accomplishments and contributions to education research. The 2020 awardee, Ken Holstein, completed his PhD at Carnegie Mellon University and is currently an assistant professor in the university's Human-Computer Interaction Institute, where he directs a research lab focused on human-AI interaction.

Recently, we caught up with Dr. Holstein and asked him to discuss his research on human-computer interaction (HCI) and his experiences as a scholar.


How did you become interested in human-computer interaction and learning sciences research?

I have long been fascinated with human learning and expertise. As an undergraduate, I worked on research in computational cognitive science, with a focus on understanding how humans are often able to learn so much about the world from so little information (relative to state-of-the-art machine learning systems). Originally, I had planned on a career conducting basic research to better understand some of our most remarkable and mysterious cognitive capabilities. However, as I neared graduation, I became increasingly interested in pursuing research with more immediate potential for positive real-world impact. The fields of HCI and the learning sciences were a perfect fit for my interests. These areas provided opportunities to study how to support and enhance human learning and expertise in real-world settings, using a bricolage of research methods from a wide range of disciplines.

Much of your lab’s research focuses on how humans and AI systems can augment each other’s abilities and learn from each other. What are the most promising applications of these ideas for education research and vice versa? 

I see a lot of potential for AI systems to augment the abilities of human teachers and tutors. In my PhD research, I worked with middle and high school teachers to understand their experiences working with AI-based tutoring software in their classrooms, and to co-design and prototype new possibilities together. Overall, teachers saw many opportunities to redesign AI tutoring software with the aim of augmenting and amplifying their own abilities as teachers, beyond simply automating instructional interactions with students. My research explored a small subset of these design directions, but there is a very rich design space that has yet to be explored.

In general, I believe that to design technologies that can effectively augment the abilities of human workers, such as teachers, it is critical to first understand what unique expertise and abilities they bring to the table as humans, which complement the capabilities of AI systems. This understanding can then inform the design of AI systems that explicitly support and draw upon the strengths of human workers (co-augmentation), and that can both learn from workers’ knowledge and support their professional learning (co-learning).

So far, I've described ways the concepts of co-augmentation and co-learning can be applied to education research, but I am also very excited about the opposite direction. I think that research on human-AI complementarity, AI-augmented work, and AI-assisted decision-making can benefit greatly by drawing upon ideas from education and the learning sciences. A lot of the research that we're currently working on in my group involves bringing theories and approaches from the learning sciences to bear on open challenges in this space. To give just one example: there is a body of research that aims to design systems that support human-AI complementarity—configurations of humans and AI systems that yield better outcomes than either could achieve alone. So far, this research tends to treat human ability as if it were static, rather than centering human learning. I believe this is a major missed opportunity, given that the human ability to learn and adapt from incredibly scarce data is at the core of many of our most impressive capabilities relative to modern AI systems.

What advice would you give to emerging scholars who are pursuing a career in human-computer interaction?

The field of human-computer interaction brings together a wide range of topics, disciplines, research methods, and ways of knowing. As a junior scholar, this breadth can be both exciting and overwhelming. To navigate the overwhelm, I think it can be helpful to consider the forms of impact you would like your work to have. For example, are you interested in changing the way a research community thinks about a given topic? Are you interested in creating new technologies that can empower a particular group of people to do something that they could not have (easily) done otherwise? Are you interested in informing public policy with your research? Or are you interested in some combination of all of the above? Oftentimes, I have seen junior scholars in HCI start from a specific project idea without a clear sense of what impact their project might have on the world if it succeeds. Working "backwards" by considering and discussing the desired impacts of research earlier in the process can help productively guide choices of research questions, methods, and lenses.


This blog was produced by IES training program officer Katina Stapleton (Katina.Stapleton@ed.gov). It is part of an Inside IES Research blog series showcasing a diverse group of IES-funded education fellows who are making significant contributions to education research, policy, and practice.