Inside IES Research

Notes from NCER & NCSER

IES Makes Three New Awards to Accelerate Breakthroughs in the Education Field

Through the Transformative Research in the Education Sciences Grants program (ALN 84.305T), IES invests in innovative research that has the potential to make dramatic advances toward solving seemingly intractable problems in the education field and to accelerate the pace of education research to facilitate major breakthroughs. In the most recent FY 2024 competition for this program, IES invited applications from partnerships between researchers, product developers, and education agencies to propose transformative solutions to major education problems, leveraging advances in technology combined with research insights from the learning sciences.

IES is thrilled to announce that three grants have been awarded in the FY 2024 competition. Building on 20 years of IES research funding to lay the groundwork for advances, these three projects focus on exploring potentially transformative uses of generative artificial intelligence (AI) to deliver solutions that can scale in the education marketplace if they demonstrate positive impacts on education outcomes. The three grants are:

Active Learning at Scale (Active L@S): Transforming Teaching and Learning via Large-Scale Learning Science and Generative AI

Awardee: Arizona State University (ASU; PI: Danielle McNamara)

The project team aims to solve the challenge that postsecondary learners need flexible, on-the-go access to course materials and high-quality, just-in-time generative learning activities. The solution will be a mobile technology that uses interactive, research-informed, and engaging learning activities created on the fly and customized to any course content with large language models (LLMs). The project team will leverage two digital learning platforms from the SEERNet network, Terracotta and ASU Learning@Scale, to conduct research that will include over 100,000 diverse students at ASU, with replication studies taking place at Indiana University (IU). IES funding has supported a large portion of the research used to identify the generative learning activities the team will integrate into the system—note-taking, self-explanation, summarization, and question answering (also known as retrieval practice). The ASU team includes in-house technology developers and researchers, and they are partnering with researchers at IU and developers at INFLO and Clevent AI Technology LLC. The ASU and IU teams will have the educator perspective represented, as these universities provide postsecondary education to large and diverse student populations.

Talking Math: Improving Math Performance and Engagement Through AI-Enabled Conversational Tutoring

Awardee: Worcester Polytechnic Institute (PI: Neil Heffernan)

The project team aims to provide a comprehensive strategy to address persistent achievement gaps in math by supporting students during their out-of-school time. The team will combine an evidence-based learning system with advances in generative AI to develop a conversational AI tutor (CAIT, pronounced “Kate”) to support independent math practice for middle school students who struggle with math and who otherwise may not have access to after-school tutoring. CAIT will be integrated into ASSISTments, a freely available, evidence-based online math platform with widely used homework assignments from open education resources (OER). This solution aims to dramatically improve engagement and math learning during independent problem-solving time. The team will conduct research throughout the product development process to ensure that CAIT is effective in supporting math problem solving and is engaging and supportive for all students. ASSISTments has been used by over 1 million students and 30,000 teachers, and IES has supported its development and efficacy research since 2003. The project team includes researchers and developers at Worcester Polytechnic Institute and the ASSISTments Foundation, researchers from WestEd, educator representation from Greater Commonwealth Virtual School, and a teacher design team.

Scenario-Based Assessment in the Age of Generative AI: Making Space in the Education Market for an Alternative Assessment Paradigm

Awardee: University of Memphis (PI: John Sabatini)

Educators face many challenges building high-quality assessments aligned to course content, and traditional assessment practices often lack applicability to real-world scenarios. To transform postsecondary education, there needs to be a shift in how knowledge and skills are assessed to better emphasize critical thinking, complex reasoning, and problem solving in practical contexts. Supported in large part by numerous IES-funded projects, including as part of the Reading for Understanding Initiative, the project team has developed a framework for scenario-based assessments (SBAs). SBAs place knowledge and skills in a practical context and give students the opportunity to apply their content knowledge and critical thinking skills. The project team will leverage generative AI along with their SBA framework to create a system for postsecondary educators to design and administer discipline-specific SBAs with personalized feedback to students, high levels of adaptivity, and rich diagnostic information with little additional instructor effort. The project team includes researchers, developers, and educators at the University of Memphis and Georgia State University, researchers and developers at Educational Testing Service (ETS), and developers from multiple small businesses, including Capti/Charmtech, MindTrust, Caimber/AMI, and Workbay, who will participate as part of a technical advisory group.

We are excited by the transformative potential of these projects and look forward to seeing what these interdisciplinary teams can accomplish together. While we are hopeful the solutions they create will make a big impact on learners across the nation, we will also share lessons learned with the field about how to build interdisciplinary partnerships to conduct transformative research and development.


For questions or to learn more about the Transformative Research in the Education Sciences grant program, please contact Erin Higgins (Erin.Higgins@ed.gov), Program Lead for the Accelerate, Transform, Scale Initiative.

Designing Culturally Responsive and Accessible Assessments for All Adult Learners

Dr. Meredith Larson, program officer for adult education at NCER, interviewed Dr. Javier Suárez-Álvarez, associate professor and associate director at the Center for Educational Assessment, University of Massachusetts Amherst. Dr. Suárez-Álvarez has served as the project director for the Adult Skills Assessment Project: Actionable Assessments for Adult Learners (ASAP) grant and was previously an education policy analyst for the Organisation for Economic Co-operation and Development (OECD) in France, where he was the lead author of the PISA report 21st-Century Readers: Developing Literacy Skills in a Digital World. He and the ASAP team are working on an assessment system that leverages validated, culturally responsive online banks of literacy and numeracy tasks to meet the needs of adult education learners, educators, and employers. In this interview, Dr. Suárez-Álvarez discusses the importance of attending to learners’ goals and cultural diversity in assessment.

How would you describe the current context of assessment for adult education, and how does ASAP fit in it?

In general, the adult education field lacks assessments that meet the—sometimes competing—needs and goals of educators and employers and that attend to and embrace learner characteristics, goals, and cultural diversity. There is often a disconnect where different stakeholders want different things from the same assessments. Educators ask for curriculum-aligned assessments, learners want assessments to help them determine whether they have job-related skills for employment or promotion, and employers want to determine whether job candidates are trained in high-demand skills within their industries.

Despite these differing needs and interests, everyone involved needs assessment resources for lower-skilled and culturally diverse learners that are easy to use, affordable or free, and that provide actionable information for progress toward personal or occupational goals. ASAP is one of the first attempts to respond to these needs by developing an assessment system that delivers real-time, customizable assessments to measure and improve literacy and numeracy skills. ASAP incorporates socioculturally responsive assessment principles to serve the needs of all learners by embracing the uniqueness of their characteristics. These principles involve ensuring that stakeholders from diverse socioeconomic, cultural, linguistic, racial, and ethnic groups are represented in our test design and development activities.

Why is attending to cultural diversity important to ASAP and assessment, and how are you incorporating this into your work?

U.S. Census projections for 2045 predict a shift in the demographic composition of the population from a White majority to a racially mixed majority. This suggests that we should prepare for cultural shifts and ensure our assessments fully embrace socioculturally responsive assessment practices. Without these practices, assessments limit the ability of adults from varied demographic backgrounds to demonstrate their capabilities adequately. Socioculturally responsive assessments are pivotal for representing the growing diversity in the learner population and for uncovering undetected workforce potential.

In ASAP, we are conducting focus groups, interviews, and listening sessions with learners, educators, and employers to understand their needs. We are also co-designing items in collaboration with key stakeholders and building consensus across adult education, workforce, and policy experts. We are developing use cases to understand hypothetical product users and conducting case studies to establish linkages between instruction and assessment as well as across classroom and workplace settings.

How has your background informed your interest in and contributions to ASAP?

As a teenager growing up in Spain, I saw first-hand the possible negative impact assessments could have when they don’t attend to learner goals and circumstances. When I was 15, my English teacher, based on narrow assessments, told my parents I was incapable of learning English, doubted my academic potential, and suggested I forego higher education for immediate employment. Defying this with the support of other teachers and my family, I pursued my passion. I became proficient in English at the age of 25 when I needed it to be a researcher, and I completed my PhD in psychology (psychometrics) at the age of 28.

Many adult students may have heard similar messages from prior teachers based on assessment results. And even now, many of the assessments the adult education field currently uses for these learners are designed by and for a population that no longer represents most learners. These adult learners may be getting advice or feedback that does not actually reflect their abilities or doesn’t provide useful guidance. Unfortunately, not all students are as lucky as I was. They may not have the support of others to counterbalance narrow assessments, and that shouldn’t be the expectation.

What are your hopes for the future of assessments for this adult population and the programs and employers that support them?

I hope we switch from measuring what we generally know how to measure (such as math and reading knowledge on a multiple-choice test) to measuring what matters to test takers and those using assessment results, so that they can all accomplish goals in ways that honor individuals’ circumstances. Knowledge and skills—like the real world—are much more than right and wrong responses on a multiple-choice item. I also hope that as we embrace the latest developments in technology, such as AI, we can use them to deliver more flexible and personalized assessments.

In addition, I hope we stop assuming every learner has the same opportunities to learn or the same goals for their learning and that we start using assessments to empower learners rather than just as a measure of learning. In ASAP, for example, the adult learner will decide the type of test they want to take, when to take it, the context within which the assessment will be framed, and when, where, and to whom the assessment result will be delivered.


This blog was produced by Meredith Larson (Meredith.Larson@ed.gov), program officer for adult education at NCER.

 

Innovative Approaches to High Quality Assessment of SEL Skills

In celebration of IES’s 20th anniversary and SEL Day, we are highlighting NCER’s investments in field-initiated research. In this blog, program officer Dr. Emily Doolittle discusses a persistent barrier to supporting social and emotional learning (SEL) in schools—the lack of high-quality, reliable, and valid SEL assessments—and the innovative research supported by IES to tackle this challenge.

High-quality measurement is critical for education research and practice. Researchers need valid and reliable assessments to answer questions about what works for whom and why. Schools use assessments to guide instruction, determine student response to intervention, and make high-stakes decisions such as the provision of special education services.

For social and emotional learning (SEL), assessment can be particularly challenging due to lack of precision in defining core SEL competencies. One consequence of this imprecision is that measures and intervention targets are often misaligned. SEL assessment also tends to rely on student, teacher, and parent reports despite the lack of agreement among reporters and the potential for biased responding. Through NCER, IES is supporting the development and validation of SEL measures using new technologies and approaches to address some of these challenges. Here are some examples of this innovative measurement work.

  • SELweb is a web-based direct assessment of four specific SEL skills: emotion recognition, social perspective taking, social problem solving, and self-control. It is available for use with elementary school students in grades K-3 and 4-6, with a middle school version currently under development. The SEL Quest Digital Platform will support school-based implementation of SELweb and other SEL assessments with an instrument library and a reporting dashboard for educators.
  • vSchool uses a virtual reality (VR) environment to assess prosocial skills. Students in 4th to 6th grades build their own avatar to interact with other characters in school settings using menu-driven choices for prosocial (helping, encouraging, sharing) and non-prosocial (aggressive, bullying, teasing) behaviors.
  • VESIP (Virtual Environment for Social Information Processing) also uses a VR school environment with customizable avatars to assess 3rd through 7th grade students’ social information processing in both English and Spanish.

Other assessments take a different approach to the challenges of SEL measurement by looking for ways to improve self, teacher, and parent reports.

  • In Project MIDAS, the research team is creating a system to integrate the different information provided by students, teachers, and parents to see if combining these reports will lead to more accurate identification of middle school students with SEL needs.
  • In Project EASS-E, the researchers are creating a teacher-report measure that will incorporate information about a child’s environment (e.g., neighborhood and community context) to better support elementary school students’ needs.

Please check out IES’s search tool to learn more about the wide variety of measurement research we fund to develop and validate high quality assessments for use in education research and practice.


Written by Emily Doolittle (Emily.Doolittle@ed.gov), NCER Team Lead for Social Behavioral Research

 

SELweb: From Research to Practice at Scale in Education

With a 2011 IES development grant, researchers at Rush University Medical Center, led by Clark McKown, created SELweb, a web-based system to assess the social-emotional skills of children in Kindergarten to Grade 3. The system (watch the video demo) includes illustrated and narrated modules that gauge children’s social acceptance with peers and assess their ability to understand others’ emotions and perspectives, solve social problems, and self-regulate. The system generates teacher reports with norm-referenced scores and classroom social network maps. Field trials with 8,881 children in seven states demonstrated that the system produces reliable and valid measures of social-emotional skills. Findings from all publications on SELweb are posted here.

In 2016, with support from the university, McKown launched a company called xSEL Labs to further develop and ready SELweb for use at scale and to facilitate its launch into the school marketplace. SELweb is currently used in 21 school districts in 16 states by over 90,000 students per year.

Interview with Clark McKown of Rush University Medical Center and xSEL Labs

 

From the start of the project, was it always a goal for SELweb to one day be ready to be used widely in schools?

CM: When we started, our aspiration was to build a usable, feasible, scientifically sound assessment and to show that it could be done. As the end of the grant got closer, we knew that unless we figured out another way to support the work, this would be yet another good idea that would wither on the vine after showing evidence of promise. In the last year and a half of the grant, I started thinking about how to get this into the hands of educators to support teaching and learning, and how to do it in a large-scale way.

 

By the conclusion of your IES grant to develop SELweb, how close were you to the version that is being used now in schools? How much more time and money was it going to take?

CM: Let me answer that in two ways. First is how close I thought we were to a scalable version: I thought we were pretty close. Then let me answer how close we really were: not very close. We had built SELweb in a Flash-based application that was perfectly suited to small-scale data collection and was economical to build. But for a number of reasons, there was no way that it would work at scale. So we needed capital, time, and a new platform. We found an outstanding technology partner, the 3C Institute, which has a terrific ed tech platform that is well-suited to our needs, robust, and scalable. And we received funding from the Wallace Foundation to migrate the assessment from the original platform to 3C’s. The other thing I have learned is that technology is not one and done. It requires continued investment, upkeep, and improvement.

What experiences led you to start a company? How were you able to do this as an academic researcher?

CM: I could tell you that I ran a children’s center, had a lot of program development experience, had raised funds, and all that would be true, and some of the skills I developed in those roles have transferred. But starting a company is really different than anything I’d done before. It’s exciting and terrifying. It requires constant effort, a willingness to change course, rapid decision-making, collaboration, and a different kind of creativity than the academy. Turns out I really like it. I probably wouldn’t have made the leap except that the research led me to something that I felt required the marketplace to develop further and to realize its potential. There was really only so far I could take SELweb in the academic context. And universities recognize the limitations of doing business through the university—that’s why they have offices of technology transfer—to spin off good ideas from the academy to the market. And it’s a feather in their cap when they help a faculty member commercialize an invention. So really, it was about finding out how to use the resources at my disposal to migrate to an ecosystem suited to continuing to improve SELweb and to get it into the hands of educators.

How did xSEL Labs pay for the full development of the version of SELweb ready for use at scale?

CM: Just as we were getting off the ground, we developed a partnership with a research funder (the Wallace Foundation) who was interested in using SELweb as an outcome measure in a large-scale field trial of an SEL initiative. They really liked SELweb, but it was clear that in its original form, it simply wouldn’t work at the scale they required. So we worked out a contract that included financial support for improving the system in exchange for discounted fees in the out years of the project.

What agreement did you make with the university in order to start your company and commercialize SELweb?

CM: I negotiated a license for the intellectual property from Rush University with the university getting a royalty and a small equity stake in the company.

Did anyone provide you guidance on the business side?

CM: Yes. I lucked into a group of in-laws who happen to be entrepreneurs, some in the education space. And my wife has a sharp business mind. They were helpful. I also sought and found advisors with relevant expertise to help me think through the initial licensing terms, and then pricing, marketing, sales, product development, and the like. One of the nice things about business is that you aren’t expected to know everything. You do need to know how and when to reach out to others for guidance, and how to frame the issues so that guidance is relevant and helpful.

How do you describe the experience of commercializing SELweb?

CM: Commercialization is, in my experience, an exercise in experimentation and successive approximation: an exciting and challenging leap from the lab to the marketplace. How will you find time and money to test the waters? You can’t do it alone, and even with great partners, competitive forces and chance factors make success at scale hard to accomplish. Knowing what you don’t know, and finding partners who can help, is critical.

I forgot who described a startup as a temporary organization designed to test whether a business idea is replicable and sustainable. That really rings true. The experience has been about leaving the safe confines of the university and entering the dynamic and endlessly interesting bazaar beyond the ivory tower to see if what I have to offer can solve a problem of practice.

In one sentence (or two!), what would you say is most needed for gaining traction in the marketplace?

CM: Figure out who the customer is, what the customer needs, and how what you have to offer addresses those needs. Until you get that down, all the evidence in the world won’t lead to scale.

Do you have advice for university researchers seeking to move their laboratory research into widespread practice?

CM: It’s not really practical for most university researchers to shift gears and become entrepreneurs. So I don’t advise doing what I did, although I’m so glad I did. Most university researchers should continue doing great science, and when they recognize a scalable idea, consider commercialization as an important option for bringing that idea to scale. My impression is that academic culture often finds commerce alien and somewhat grubby, which can get in the way. The truth is, there are whip-smart people in business who have tremendous expertise. The biggest hurdle for many university researchers will be to recognize that they lack expertise in bringing ideas to market; they will need to find that expertise, respect it, and let go of some control as the idea, program, or product is shaped by market forces. It’s also a hard truth for researchers, but most of the world doesn’t care very much about evidence of efficacy. They have much more pressing problems of practice to attend to. Don’t get me wrong—evidence of efficacy is crucial. But for an efficacious idea to go to scale, usability and feasibility are the biggest considerations.

For academics, getting a product into the marketplace requires a new mindset: universities and granting mechanisms reward solo stars, while the marketplace rewards partnerships. That is a big shift, and not easily accomplished. Think partnerships, not empires; listen more than you talk.

Any final words of wisdom in moving your intervention from research to practice?

CM: Proving the concept of an ed tech product gets you to the starting line, not the finish. Going to scale benefits from, probably actually requires, the power of the marketplace. Figuring out how the marketplace works and how to fit your product into it is a big leap for most professors and inventors. Knowing the product is not the same as knowing how to commercialize it.

 ____________________________________________________________________________

Clark McKown is a national expert on social and emotional learning (SEL) assessments. In his role as a university faculty member, Clark has been the lead scientist on several large grants supporting the development and validation of SELweb, Networker, and other assessment systems. Clark is passionate about creating usable, feasible, and scientifically sound tools that help educators and their students.

This interview was produced by Ed Metz of the Institute of Education Sciences. This post is the third in an ongoing series of blog posts examining moving from university research to practice at scale in education.

Lexia RAPID Assessment: From Research to Practice at Scale in Education

With a 2010 measurement grant award and a 2010 Reading for Understanding subaward from IES, a team at Florida State University (FSU), led by Barbara Foorman, developed a web-based literacy assessment for Kindergarten to Grade 12 students.

Years of initial research and development of the assessment method, algorithms, and logic model at FSU concluded in 2015 with a fully functioning prototype assessment called RAPID, the Reading Assessment for Prescriptive Instructional Data. A body of research demonstrates its validity and utility. In 2014, to ready the prototype for use in schools and to disseminate it widely, FSU entered into licensing agreements with the Florida Department of Education (FLDOE) to use the prototype assessment royalty-free as the Florida Assessment for Instruction in Reading—Florida Standards (FAIR-FS), and with Lexia Learning Systems LLC, a Rosetta Stone company (Lexia), to create its commercial solution: the Lexia® RAPID™ Assessment program. Today, RAPID (watch video) consists of adaptive screening and diagnostic tests for students as they progress in areas such as word recognition, vocabulary knowledge, syntactic knowledge, and reading comprehension. Students use RAPID up to three times per year in sessions of 45 minutes or less, with teachers receiving results immediately to inform instruction.

RAPID is currently used by thousands of educators and students across the U.S. It has been recommended in Massachusetts as a primary screening tool for students ages 5 and older and is on both the Ohio Department of Education List of Approved Screening Assessments and the Michigan Lists of Initial and Extensive Literacy Assessments.

Interview with Barbara Foorman (BF) of Florida State University and Liz Brooke (LB) of Lexia Learning  

Photograph of Barbara Foorman, PhD

From the start of the project, was it always a goal for the assessment to one day be ready to be used widely in schools?

BF: Yes!

How was the connection made with the Florida Department of Education?   

BF: FSU authors (Yaacov Petscher, Chris Schatschneider, and I) gave the assessment royalty-free in perpetuity to the FLDOE, with the caveat that they had to host and maintain it. The FLDOE continues to host and maintain the Grade 3 to 12 system but never completed the programming on the K to 2 prototype. The assessment we provided to the FLDOE is called the Florida Assessment for Instruction in Reading (FAIR-FS). We also went to FSU’s Office of Commercialization to create royalty and commercialization agreements.

How was the connection made with Lexia? 

BF: Dr. Liz (Crawford) Brooke, Chief Learning Officer of Lexia/Rosetta Stone, and Dr. Alison Mitchell, Director of Assessment at Lexia, had both previously worked at the Florida Center for Reading Research (FCRR). Liz served as the Director of Interventions, as well as a doctoral student under me, and Alison was a postdoctoral assistant in research. Both Liz and Alison had worked on previous versions of the assessment.

Photograph of Liz Brooke, PhD

LB: Also, both Yaacov and Chris had done some previous work with me on the Assessment Without Testing® technology, which was embedded in our K to 5 literacy curriculum solution, the Lexia® Core5 Reading® program.

Did Lexia have to do additional R&D to develop the FSU assessment into RAPID as a commercial offering for larger scale use? Were resources provided?  

LB: To build and scale the FSU prototype assessment into a commercial platform, our team of developers worked closely with the developers at FSU to reprogram certain software applications and databases. We’ve also spent the last several years at Lexia working to translate the valuable results that RAPID generates into meaningful, dynamic, and usable data and tools for schools and educators. This meant designing customized teacher and administrator reports for our myLexia® administrator dashboard, creating a library of offline instructional materials for teachers, and developing both online and in-person training materials specifically designed to support our RAPID solution.

BF: They also hired a psychometrician to submit RAPID to the National Center for Intensive Intervention, and had their programmers develop capabilities to support access to RAPID via iPads as well as through the web-based application.

What kind of licensing agreement did you (or FSU) work out?  

BF: The prototype assessment method, algorithms, and logic model that were used to develop RAPID are licensed to Lexia by FSU. Some of these may also be available for FSU to license to other interested companies. Details of FSU’s licensing agreement with Lexia are confidential; however, royalties received by FSU through its licensing arrangements are shared among authors, academic units, and the FSU Research Foundation, according to FSU policies. (Read here for more about commercialization of FSU technologies and innovations.)

Does FSU receive royalties from the sale of RAPID?

BF: Yes. The revenue flows through FSU’s royalty stream—percentages to the three authors and the colleges and departments that we three authors are housed in.

What factors did Lexia consider when determining to partner with FSU to develop RAPID?

LB: We considered the needs of our customers and the fact that we wanted to develop and offer a commercial assessment solution that would strike a balance between efficiency, through the adaptive technology, and insight, through an emphasis on reading and language skills. At Lexia, we are laser-focused on literacy and supporting the skills students need to be proficient readers. The value of the assessment’s research foundation was a natural fit for that reason. RAPID emphasizes academic language skills in a way that many other screening tools miss; often you’d need a specialized assessment given by a speech-language pathologist to assess the skills that RAPID captures in a relatively short period of time for a whole classroom of students.

How is RAPID marketed and distributed to schools?

LB: The Lexia RAPID Assessment was designed and is offered as a K-12 universal screening tool that schools can use up to three times per year. We currently offer RAPID as a software-as-a-service subscription on an annual cost-per-license basis that can be purchased per student or per school. We also encourage schools that use RAPID to participate in a yearlong Lexia Implementation Support Plan that includes professional learning opportunities and data coaching specific to the RAPID solution, to really understand and maximize the value of the data and instructional resources they receive as part of using RAPID.

Do you have advice for university researchers seeking to move their laboratory research into widespread practice?

BF: Start working with your university’s office of commercialization sooner rather than later to help identify market trends and create non-disclosure agreements. In the case of educational curricula and assessments, researchers need to be (a) knowledgeable about competing products, (b) able to articulate what makes their product unique and more evidence-based than competitors’ products, and (c) confident that educators will find their product useful.

LB: As Barbara noted, it is critical to identify the specific, real-world need that your work addresses and to be able to speak to how it differs from other solutions out there. It’s also really important to make sure that the research you’ve done has validated that your product meets the need you are stating, as this will be the foundation of your claims in the market.

____________________________________________________________________________________________

Barbara Foorman, Ph.D., is the Frances Eppes Professor of Education, Director Emeritus of FCRR, and Director of the Regional Educational Laboratory Southeast at FSU. Barbara is an internationally known expert in reading with over 150 peer-reviewed publications. She was co-editor of the Journal of Research on Educational Effectiveness and is a co-founder and board member of the Society for Research on Educational Effectiveness.

Liz Brooke, Ph.D., CCC-SLP is the Chief Learning Officer for Rosetta Stone/Lexia Learning. Dr. Liz Brooke is responsible for setting the educational vision for the company's Language and Literacy products, including the Adaptive Blended Learning (ABL) strategy that serves as the foundation for Rosetta Stone’s products and services. Liz has been working in the education sector for over 25 years and has been published in several scholarly journals. Liz joined Lexia in 2010. Prior to that, she worked as the Director of Interventions at the FCRR and she has also served as a speech-language pathologist at Massachusetts General Hospital and in the public school setting. Liz began her career in the classroom as a first-grade teacher.

This interview was produced by Edward Metz of the Institute of Education Sciences. This post is the second in an ongoing series of blog posts examining moving from university research to practice at scale in education.