IES Blog

Institute of Education Sciences

Using Mistakes as a Vehicle for Learning in Mathematics: From Research to Practice at Scale in Education

Every student makes mistakes. But not every student is given the opportunity to learn from mistakes. Left unaddressed, the mathematical misconceptions that underlie many mistakes can keep students from progressing in mathematics.

 

At the request of districts in the Minority Student Achievement Network (MSAN), a Strategic Education Research Partnership (SERP) team was convened in 2007 to address a widening achievement gap in Algebra I. The team was charged with identifying an intervention strategy, subject to several district constraints:

  1. The solution would need to be applied to all students in the regular classroom to avoid the stereotype threat associated with separating students based on performance and to protect the intervention from budget cuts that target supplemental, after-school, and summer programs first.
  2. A new curriculum was off the table because it would create upheaval for a time and would be followed by a decline in student performance during the period of adjustment.
  3. Extensive teacher training was considered undesirable because it would be costly and because algebra teachers consider themselves more expert in mathematics teaching than central office staff who would be requiring the training.

 

Julie Booth joined the partnership, and with funding from IES, led the iterative development and testing of worked example assignments that, with the input of teachers and administrators, fit within the routines of the classroom. The result—AlgebraByExample—consists of 42 uniquely designed assignments that address misconceptions, harness the power of explanation, and use mistakes as a vehicle for learning.

Typical math assignments require students to solve problems on their own. If a student’s work is incorrect, the student may never focus on what went wrong. ByExample assignments also give students problems to solve, but they first provide a solution to a similar problem that is marked right or wrong. Students are prompted with questions that target common misconceptions and errors before solving a similar problem on their own. Each assignment contains several strategically designed item pairs.

Designed in collaboration with teachers from districts in several states, the assignments can be easily incorporated into any Algebra I curriculum, and teachers can choose in what way and in what order to use them. The assignments were tested in randomized trials in classrooms in eight districts with more than 6,000 students. Not only did students using AlgebraByExample improve by an average of 7 percentage points on an assessment of standardized test items, but students at the lower end of the distribution improved the most. The PDF downloads of the assignments are freely available for anyone to use.

The success of AlgebraByExample led to further IES funding of MathByExample for Grades 4 and 5 and GeometryByExample for high school geometry.

 

Resources:

AlgebraByExample website

MathByExample website

Booth et al., 2015

NSF STEM for All Video Submission 2019

 

Interview with Dr. Suzanne Donovan (SERP), Dr. Julie Booth (Temple University), and Allie Huyghe (SERP), the developers of the ByExample interventions.

 

 

Was it part of the original plan to develop an intervention that could one day be used at scale in schools?

Yes. SERP partnerships begin with problems of practice nominated by district partners, but the partnership agreement distinguishes SERP from a consultant. The intention from the start is to frame the problem and design a solution that can be used at scale. SERP has developed in-house, user-centered design expertise so that resources (such as the ByExample products) developed through partnerships meet the needs of teachers and students. Products scale when they improve the experience of teachers and students. Both the model and the internal design capacity allow SERP to move from problem framing through research, development, and dissemination of a product with IES grant funding.

 

Describe the initial research and development that occurred.

Dr. Julie Booth drafted initial assignments drawing on the mathematics misconceptions literature. SERP held regular partnership meetings with teachers and administrators at which assignments were reviewed and additional misconceptions were nominated for attention in the assignments. Administrators agreed to randomization of the assignments across classrooms and within-teacher. Assignments were first tested in individual topic blocks and revised in accordance with student performance data, observations, and teacher feedback. A year-long pilot study was then conducted using the full set of assignments.

 

Beyond IES or ED grants, what additional funding was needed to develop the intervention?

For the ByExample work, additional funding was provided by the Goldman Sachs Foundation in the initial phase to support partnership formation, problem framing, and solution generation. IES grants funded the research and development, along with initial dissemination activities to make the materials available to the public. We were also able to develop an online platform to allow for digital use with the IES grant funds.

 

What model was used for dissemination and sustainability?

The assignments are available as free downloads on SERP’s website, and as printed workbooks through SERP’s partner print-on-demand company. They have been publicized through online communications, journal articles, presentations at conferences of various types, social media, and word of mouth. There will be a small fee for use of the digital platform to support its maintenance, but the PDFs will remain as free downloads. We have been able to sustain the collaboration of the partnership team by responding to requests from educators to expand the approach to other grade levels and submitting additional proposals to IES that have been awarded.

 

What advice would you provide to researchers who are looking to move their research from the lab to market? What steps should they take? What resources should they look for?

First, I would note that it is difficult to persuade educators to use a product that solves a problem they don’t believe they have. Listen to educators and apply research expertise to address the challenges that they experience on a day-to-day basis. Design for ease of use by teachers. No matter how good your strategy or marketing is, if it’s too much work for an already busy teacher to use, you may get uptake by a few committed teachers, but not at scale. Finally, pay attention to where teachers get their information. For AlgebraByExample, we got a big boost from the Marshall Report, produced by a teacher for other teachers to call attention to usable research.  

 

In one sentence, what would you say is most needed for gaining traction and wide scale use by educators?

Design for the routines of the classroom.

 


Suzanne Donovan, PhD, is the founding Executive Director of the SERP Institute, an education research, development, and implementation organization incubated at the National Academies. SERP leads collaborations of educators, researchers, and designers to generate research-based, scalable, and sustainable solutions to critical problems of practice. 

Julie Booth, PhD, is a Professor of STEM Education and Psychology and the Deputy Dean of Academic and Faculty Affairs at Temple University’s College of Education and Human Development. Her work focuses on translating between cognitive science and education to better understand students’ learning and improve instruction, primarily in mathematics education. She is currently an Executive Editor for the Journal of Experimental Education.

Allie Huyghe is the Assistant Director of the SERP Institute, where she manages several projects, including the IES-funded MathByExample and GeometryByExample projects. She is also closely involved with other SERP areas of work, participating in the design of materials from early development through release to the public.

 

This interview was produced by Christina Chhin (Christina.Chhin@ed.gov) and Edward Metz (Edward.Metz@ed.gov) of the Institute of Education Sciences. This is the fifth in an ongoing series of blog posts examining moving from university research to practice at scale in education.

 

 

A2i: From Research to Practice at Scale in Education

This blog post is part of an interview series with education researchers who have successfully scaled their interventions.

Assessment-to-Instruction (A2i) is an online Teacher Professional Support System that guides teachers in providing Kindergarten to Grade 3 students individualized literacy instruction and assessments. Students complete the assessments independently online without the teacher taking time away from instruction. A2i generates instantaneous teacher reports with precise recommendations for individual students and for groups. See a video demo here. Between 2003 and 2017, researchers at Florida State University (FSU) and Arizona State University (ASU), led by Carol Connor, developed and evaluated A2i with the support of a series of awards from IES and the National Institutes of Health. Findings from all publications on the A2i are posted here.

While results across seven controlled studies demonstrated the effectiveness of A2i, feedback from practitioners in the field demonstrated that implementation often required substantial amounts of researcher support and local district adaptation, and that the cost was not sustainable for most school district budgets. In 2014, the development firm Learning Ovations, led by Jay Connor, received an award from the Department of Education (ED) and IES’s Small Business Innovation Research program (ED/IES SBIR) to develop a technologically upgraded and commercially viable version of A2i ready for use at scale in classrooms around the country. In 2018, with the support of a five-year Education Innovation and Research (EIR) expansion grant from ED totaling $14.65 million, A2i is now used in more than 110 schools across the country, with plans for further expansion.

 

Interview with Carol Connor (CC) and Jay Connor (JC)

From the start of the research in the early 2000s, was it always the goal to develop a reading intervention that would one day be used on a wide scale?
CC: Yes and no. First, we had to answer the question as to whether individualization was effective in achieving student literacy outcomes. Once the research established that, we knew that this work would have wide-scale application.

When did you start thinking about a plan for distribution?
CC: Before embarking on the cumulative results studies, in 2008, Jay said that we needed to know who the “customer” was… i.e., how purchasing decisions were made at scale.  His 2008 Phase I ED/IES SBIR was critical in shifting our research focus from individual classrooms to school districts as the key scaling point. 

Did you work with a technology transfer office at the university?
CC: Only to the extent of contractually clarifying intellectual property (IP) ownership and licensing. 

Who provided the support on the business side?
CC: Jay, who has an MBA/JD and has been a senior officer in two Fortune 100 companies, was very instrumental in guiding our thinking on this evolution from important research to practical application.


 Do you have any agreement about the IP with the university? What were the biggest challenges in this area?

JC: Yes, Learning Ovations has a 60-year renewable exclusive licensing agreement with FSU Foundation. FSU couldn’t have been better to work with.  Though there were expected back-and-forth elements of the original negotiations, it was clear that we shared the central vision of transforming literacy outcomes.  They continue to be a meaningful partner.

When and why was Learning Ovations first launched?
JC: In order to pursue SBIR funding, we needed to be a for-profit company. At first, I used my consulting business, Rubicon Partners LLP, as the legal entity for a 2008 Phase I award from ED/IES SBIR. When we considered applying for (and eventually won) a Fast Track Phase I & II award from SBIR in 2014, it was clear that we needed to create a full C corp that could expand with the scaling of the business; thus Learning Ovations was formed.

Who has provided you great guidance on the business side over the year? What did they say and do? 
JC: Having run large corporate entities and worked with small business start-ups in conjunction with Arizona State University (Skysong) and the University of California, Irvine (Applied Innovation at The Cove) and having taught entrepreneurship at The Paul Merage School of Business at UC Irvine, I had the experience or network to connect to whatever business guidance we needed.  Further, having attended a number of reading research conferences with Carol, I was quite conversant in the literacy language both from the research side and from the district decision maker’s side.

How do you describe the experience of commercializing the A2i? What were the biggest achievements and challenges in terms of preparing for commercialization?

JC: Having coached scores of entrepreneurs at various stages, I can safely say that there is no harder commercialization than one that must stay faithful to the underlying research. A key strategy for most new businesses is the ability to pivot as you find a better (easier) solution, but that ability is often circumscribed by the “active ingredients” of the underlying research. Knowing this, we imbued Learning Ovations with a very strong outcomes mission: all children reading at, or above, grade level by 3rd grade. This commitment to outcomes certainty is only assured by staying faithful to the research. Thus a possible constraint became our uncontroverted strength.

Do you have advice for university researchers seeking to move their laboratory research in education into wide-spread practice? 
JC: Start with the end in mind. As soon as you envision wide-scale usage, learn as much as you can about the present pain and needs of your future users and frame your research questions to speak to this. Implementation should not be an after-the-fact consideration; build it into how you frame your research questions. On one level you are asking simultaneously “will this work with my treatment group” AND “will this help me understand/deliver to my end-user group.” I can’t imagine effective research being grafted onto a business after the fact. One key mistake that we see a number of researchers make is thinking in very small fragments, whereas application (i.e., the ability to go to scale) is usually much more systemic and holistic.

In one sentence, what would you say is most needed for gaining traction in the marketplace?
JC: If not you, as a researcher, someone on your team of advisors needs to know the target marketplace as well as you know the treatment protocols in your RCT.

____________

Carol Connor is a Chancellor’s Professor in the UC Irvine School of Education. Previously, she was a professor of Psychology and a Senior Learning Scientist at the Learning Sciences Institute at ASU. Carol’s research focuses on teaching and learning in preschool through fifth grade classrooms, with a particular emphasis on reading comprehension, executive functioning, and behavioral regulation development, especially for low-income children.

Joseph “Jay” Connor, JD/MBA, is the Founder/CEO of Learning Ovations, Inc, the developer of the platform that has enabled the A2i intervention to scale.  Jay has 20+ years of experience in senior business management at the multi-billion dollar corporate level, and has experience in the nonprofit and public policy arenas.

This interview was produced by Edward Metz of the Institute of Education Sciences.

SELweb: From Research to Practice at Scale in Education

With a 2011 IES development grant, researchers at Rush University Medical Center, led by Clark McKown, created SELweb, a web-based system to assess the social-emotional skills of children in Kindergarten to Grade 3. The system (watch the video demo) includes illustrated and narrated modules that gauge children’s social acceptance with peers and assess their ability to understand others’ emotions and perspectives, solve social problems, and self-regulate. The system generates teacher reports with norm-referenced scores and classroom social network maps. Field trials with 8,881 children in seven states demonstrate that the system produces reliable and valid measures of social-emotional skills. Findings from all publications on SELweb are posted here.

In 2016, with support from the university, McKown launched a company called xSEL Labs to further develop and ready SELweb for use at scale and to facilitate the launch of SELweb into the school marketplace. SELweb is currently used in 21 school districts in 16 states by over 90,000 students per year.

Interview with Clark McKown of Rush University Medical Center and xSEL Labs

 

From the start of the project, was it always a goal for SELweb to one day be ready to be used widely in schools?

CM: When we started, our aspiration was to build a usable, feasible, scientifically sound assessment and to show it could be done. When the end of the grant got closer, we knew that unless we figured out another way to support the work, this would be yet another good idea that would wither on the vine after showing evidence of promise. In the last year and a half of the grant, I started thinking about how to get this into the hands of educators to support teaching and learning, and how to do it in a large-scale way.

 

By the conclusion of your IES grant to develop SELweb, how close were you to the version that is being used now in schools? How much more time and money was it going to take?

CM: Let me answer that in two ways. First is how close I thought we were to a scalable version. I thought we were pretty close. Then let me answer how close we really were. Not very close. We had built SELweb in a Flash-based application that was perfectly suited to small-scale data collection and was economical to build. But for a number of reasons, there was no way that it would work at scale. So we needed capital, time, and a new platform. We found an outstanding technology partner, the 3C Institute, who have a terrific ed tech platform well-suited to our needs, robust, and scalable. And we received funding from the Wallace Foundation to migrate the assessment from the original platform to 3C’s. The other thing I have learned is that technology is not one and done. It requires continued investment, upkeep, and improvement.

What experiences led you to start a company? How were you able to do this as an academic researcher?

CM: I could tell you that I ran a children’s center, had a lot of program development experience, had raised funds, and all that would be true, and some of the skills I developed in those roles have transferred. But starting a company is really different than anything I’d done before. It’s exciting and terrifying. It requires constant effort, a willingness to change course, rapid decision-making, collaboration, and a different kind of creativity than the academy. Turns out I really like it. I probably wouldn’t have made the leap except that the research led me to something that I felt required the marketplace to develop further and to realize its potential. There was really only so far I could take SELweb in the academic context. And universities recognize the limitations of doing business through the university—that’s why they have offices of technology transfer—to spin off good ideas from the academy to the market. And it’s a feather in their cap when they help a faculty member commercialize an invention. So really, it was about finding out how to use the resources at my disposal to migrate to an ecosystem suited to continuing to improve SELweb and to get it into the hands of educators.

How did xSEL Labs pay for the full development of the version of SELweb ready for use at scale?

CM: Just as we were getting off the ground, we developed a partnership with a research funder (the Wallace Foundation) who was interested in using SELweb as an outcome measure in a large-scale field trial of an SEL initiative. They really liked SELweb, but it was clear that in its original form, it simply wouldn’t work at the scale they required. So we worked out a contract that included financial support for improving the system in exchange for discounted fees in the out years of the project.

What agreement did you make with the university in order to start your company and commercialize SELweb?

CM: I negotiated a license for the intellectual property from Rush University with the university getting a royalty and a small equity stake in the company.

Did anyone provide you guidance on the business side?

CM: Yes. I lucked into a group of in-laws who happen to be entrepreneurs, some in the education space. And my wife has a sharp business mind. They were helpful. I also sought and found advisors with relevant expertise to help me think through the initial licensing terms, and then pricing, marketing, sales, product development, and the like. One of the nice things about business is that you aren’t expected to know everything. You do need to know how and when to reach out to others for guidance, and how to frame the issues so that guidance is relevant and helpful.

How do you describe the experience of commercializing SELWeb?

CM: Commercialization is, in my experience, an exercise in experimentation and successive approximations. How will you find time and money to test the waters? Commercialization is an exciting and challenging leap from the lab to the marketplace. In my experience, you can’t do it alone, and even with great partners, competitive forces and chance factors make success at scale hard to accomplish. Knowing what you don’t know, and finding partners who can help, is critical.

I forgot who described a startup as a temporary organization designed to test whether a business idea is replicable and sustainable. That really rings true. The experience has been about leaving the safe confines of the university and entering the dynamic and endlessly interesting bazaar beyond the ivory tower to see if what I have to offer can solve a problem of practice.

In one sentence (or two!), what would you say is most needed for gaining traction in the marketplace?

CM: Figure out who the customer is, what the customer needs, and how what you have to offer addresses those needs. Until you get that down, all the evidence in the world won’t lead to scale.

Do you have advice for university researchers seeking to move their laboratory research into wide-spread practice?

CM: It’s not really practical for most university researchers to shift gears and become an entrepreneur. So I don’t advise doing what I did, although I’m so glad I did. Most university researchers should continue doing great science, and when they recognize a scalable idea, consider commercialization as an important option for bringing the idea to scale. My impression is that academic culture often finds commerce to be alien and somewhat grubby, which can get in the way. The truth is, there are whip-smart people in business who have tremendous expertise. The biggest hurdle for many university researchers will be to recognize that they lack expertise in bringing ideas to market; they will need to find that expertise, respect it, and let go of some control as the idea, program, or product is shaped by market forces. It’s also a hard truth for researchers, but most of the world doesn’t care very much about evidence of efficacy. They have much more pressing problems of practice to attend to. Don’t get me wrong: evidence of efficacy is crucial. But for an efficacious idea to go to scale, usability and feasibility are the biggest considerations.

For academics, getting the product into the marketplace requires a new set of considerations. Universities and granting mechanisms reward solo stars; the marketplace rewards partnerships. That is a big shift in mindset, and not easily accomplished. Think partnerships, not empires; listen more than you talk.

Any final words of wisdom in moving your intervention from research to practice?

CM: Proving the concept of an ed tech product gets you to the starting line, not the finish. Going to scale benefits from, probably actually requires, the power of the marketplace. Figuring out how the marketplace works and how to fit your product into it is a big leap for most professors and inventors. Knowing the product is not the same as knowing how to commercialize it.

 ____________________________________________________________________________

Clark McKown is a national expert on social and emotional learning (SEL) assessments. In his role as a university faculty member, Clark has been the lead scientist on several large grants supporting the development and validation of SELweb, Networker, and other assessment systems. Clark is passionate about creating usable, feasible, and scientifically sound tools that help educators and their students.

This interview was produced by Ed Metz of the Institute of Education Sciences. This post is the third in an ongoing series of blog posts examining moving from university research to practice at scale in education.

Lexia RAPID Assessment: From Research to Practice at Scale in Education

With a 2010 measurement grant award and a 2010 Reading for Understanding subaward from IES, a team at Florida State University (FSU), led by Barbara Foorman, developed a web-based literacy assessment for Kindergarten to Grade 12 students.

Years of initial research and development of the assessment method, algorithms, and logic model at FSU concluded in 2015 with a fully functioning prototype assessment called RAPID, the Reading Assessment for Prescriptive Instructional Data. A body of research demonstrates its validity and utility. In 2014, to ready the prototype for use in schools and to disseminate on a wide-scale basis, FSU entered into licensing agreements with the Florida Department of Education (FLDOE) to use the prototype assessment royalty-free as the Florida Assessment for Instruction in Reading—Florida Standards (FAIR-FS), and with Lexia Learning Systems LLC, a Rosetta Stone company (Lexia), to create its commercial solution: Lexia® RAPID™ Assessment program.  Today, RAPID (watch video) consists of adaptive screening and diagnostic tests for students as they progress in areas such as word recognition, vocabulary knowledge, syntactic knowledge and reading comprehension. Students use RAPID up to three times per year in sessions of 45 minutes or less, with teachers receiving results immediately to inform instruction.

RAPID is currently used by thousands of educators and students across the U.S. RAPID has been recommended in Massachusetts as a primary screening tool for students ages 5 and older, and is on both the Ohio Department of Education List of Approved Screening Assessments and the Michigan Lists of Initial and Extensive Literacy Assessments.

Interview with Barbara Foorman (BF) of Florida State University and Liz Brooke (LB) of Lexia Learning  

Photograph of Barbara Foorman, PhD

From the start of the project, was it always a goal for the assessment to one day be ready to be used widely in schools?

BF: Yes!

How was the connection made with the Florida Department of Education?   

BF: FSU authors (Yaacov Petscher, Chris Schatschneider, and I) gave the assessment royalty-free in perpetuity to the FLDOE, with the caveat that they had to host and maintain it. The FLDOE continues to host and maintain the Grade 3 to 12 system but never completed the programming on the K to 2 system prototype. The assessment we provided to the FLDOE is called the Florida Assessment for Instruction in Reading (FAIR-FS). We also went to FSU’s Office of Commercialization to create royalty and commercialization agreements.

How was the connection made with Lexia? 

BF: Dr. Liz (Crawford) Brooke, Chief Learning Officer of Lexia/Rosetta Stone, and Dr. Alison Mitchell, Director of Assessment at Lexia, had both previously worked at the Florida Center for Reading Research (FCRR). Liz served as the Director of Interventions, as well as a doctoral student under me, and Alison was a postdoctoral assistant in research. Both Liz and Alison had worked on previous versions of the assessment.

Photograph of Liz Brooke, PhD

LB: Also, both Yaacov and Chris had done some previous work with me on the Assessment Without Testing® technology, which was embedded in our K to 5 literacy curriculum solution, the Lexia® Core5 Reading® program.

Did Lexia have to do additional R&D to develop the FSU assessment into RAPID as a commercial offering for larger scale use? Were resources provided?  

LB: To build and scale the FSU prototype assessment into a commercial platform, our team of developers worked closely with the developers at FSU to reprogram certain software applications and databases. We’ve also spent the last several years at Lexia working to translate the valuable results that RAPID generates into meaningful, dynamic, and usable data and tools for schools and educators. This meant designing customized teacher and administrator reports for our myLexia® administrator dashboard, creating a library of offline instructional materials for teachers, as well as developing both online and in-person training materials specifically designed to support our RAPID solution.

BF: They also hired a psychometrician to submit RAPID to the National Center for Intensive Intervention, and had their programmers develop capabilities to support access to RAPID via iPads as well as through the web-based application.

What kind of licensing agreement did you (or FSU) work out?  

BF: The prototype assessment method, algorithms, and logic model that were used to develop RAPID are licensed to Lexia by FSU. Some of these may also be available for FSU to license to other interested companies.  Details of FSU’s licensing agreement terms to Lexia are confidential, however, royalties received by FSU through its licensing arrangements are shared between authors, academic units, and the FSU Research Foundation, according to FSU policies. (Read here for more about commercialization of FSU technologies and innovations.)

Does FSU receive royalties from the sale of RAPID?

BF: Yes. The revenue flows through FSU’s royalty stream: percentages to the three authors and to the colleges and departments in which we three authors are housed.

What factors did Lexia consider when determining to partner with FSU to develop RAPID?

LB: We considered the needs of our customers and the fact that we wanted to develop and offer a commercial assessment solution that would provide a great balance between efficiency from the adaptive technology and insight based on an emphasis on reading and language skills. At Lexia, we are laser-focused on literacy and supporting the skills students need to be proficient readers. The value of the research foundation of the assessment was a natural fit for that reason. RAPID emphasizes Academic Language skills in a way that many other screening tools miss: often you’d need a specialized assessment given by a speech language pathologist to assess the skills that RAPID captures in a relatively short period of time for a whole classroom of students.

How is RAPID marketed and distributed to schools?

LB: The Lexia RAPID Assessment was designed and is offered as a K-12 universal screening tool that schools can use up to three times per year. We currently offer RAPID as a software-as-a-service subscription on an annual cost-per-license basis that can be purchased either per student or per school. We also encourage schools that utilize RAPID to participate in a yearlong Lexia Implementation Support Plan that includes professional learning opportunities and data coaching specific to the RAPID solution, to really understand and maximize the value of the data and instructional resources that they receive as part of using RAPID.

Do you have advice for university researchers seeking to move their laboratory research into widespread practice?

BF: Start working with your university's office of commercialization sooner rather than later to help identify market trends and create non-disclosure agreements. In the case of educational curricula and assessments, researchers need to (a) be knowledgeable about competing products, (b) be able to articulate what makes their product unique and more evidence-based than competitors' products, and (c) know that educators will find their product useful.

LB: As Barbara noted, it is critical to identify the specific, real-world need that your work addresses and to be able to explain how it differs from other solutions on the market. It is also important to make sure the research you have done has validated that your product actually meets the need you are claiming it does, as this will be the foundation of your claims in the market.

____________________________________________________________________________________________

Barbara Foorman, Ph.D., is the Frances Eppes Professor of Education, Director Emeritus of FCRR, and Director of the Regional Educational Laboratory Southeast at FSU. Barbara is an internationally known expert in reading with over 150 peer-reviewed publications. Barbara was co-editor of the Journal of Research on Educational Effectiveness and is a co-founder and board member of the Society for Research on Educational Effectiveness.

Liz Brooke, Ph.D., CCC-SLP, is the Chief Learning Officer for Rosetta Stone/Lexia Learning. Dr. Brooke is responsible for setting the educational vision for the company's Language and Literacy products, including the Adaptive Blended Learning (ABL) strategy that serves as the foundation for Rosetta Stone's products and services. Liz has been working in the education sector for over 25 years and has published in several scholarly journals. She joined Lexia in 2010. Prior to that, she was the Director of Interventions at FCRR, and she has also served as a speech-language pathologist at Massachusetts General Hospital and in public school settings. Liz began her career in the classroom as a first-grade teacher.

This interview was produced by Edward Metz of the Institute of Education Sciences. This post is the second in an ongoing series of blog posts examining moving from university research to practice at scale in education.

Inside IES Special Interview Series: From University Research to Practice at Scale in Education

Over two decades, the National Center for Education Research and the National Center for Special Education Research at IES have built a knowledge base to inform and improve education practice. This work has also spurred the development of evidence-based tools, technological products, training guides, instructional approaches, and assessments. 

While some IES-supported interventions are used on a wide scale (hundreds of schools or more), we acknowledge that a “research to practice gap” hinders the uptake of more evidence-based interventions in education.  The gap refers to the space between the initial research and development in university laboratories and pilot evaluations in schools, and everything else that is needed for the interventions to be adopted as a regular practice outside of a research evaluation.

For many academic researchers, advancing beyond the initial stage of R&D and pilot evaluations is complex and often requires additional time, financing, and specialized expertise and support. For example, interventions often need further R&D to ready them for scale: to ensure that implementation is turnkey and feasible without any researcher assistance, to verify that interventions work the same across divergent settings and populations, or to bolster technology systems so they can process huge amounts of data across numerous sites at the same time. Advancing from research to practice may also entail commercialization planning to address issues such as intellectual property, licensing, sales, and marketing; to facilitate dissemination of interventions from a university to the education marketplace; and to sustain interventions over time by generating revenue or securing other means of support.

Special Inside IES Research Interview Series

This winter and spring, Inside IES Research is publishing a series of interviews with the teams of researchers, developers, and partners who successfully advanced IES-funded education research from the university laboratory to practice in schools at scale. Collectively, the interviews illustrate a variety of models and approaches for scaling evidence-based interventions and for disseminating and sustaining the interventions over time.

Each interview will address a similar set of questions:

  • Was it part of the original plan to develop an intervention that could one day be used at scale in schools?
  • Describe the initial research and development that occurred. 
  • What role did the university play in facilitating the research to practice process? 
  • What other individuals or organizations provided support during the process?
  • Beyond the original R&D process through IES or ED grants, what additional R&D was needed to ready the intervention for larger scale use?
  • What model was used for dissemination and sustainability?
  • What advice would you provide to researchers who are looking to move their research from the lab to market? What steps should they take? What resources should they look for?

Check this page regularly to read new interviews.

We hope you enjoy the series.

This series is produced by Edward Metz of the Institute of Education Sciences.