IES Blog

Institute of Education Sciences

Lexia RAPID Assessment: From Research to Practice at Scale in Education

With a 2010 measurement grant award and a 2010 Reading for Understanding subaward from IES, a team at Florida State University (FSU) led by Barbara Foorman developed a web-based literacy assessment for students in kindergarten through grade 12.

Years of initial research and development of the assessment method, algorithms, and logic model at FSU concluded in 2015 with a fully functioning prototype assessment called RAPID, the Reading Assessment for Prescriptive Instructional Data. A body of research demonstrates its validity and utility. In 2014, to ready the prototype for use in schools and to disseminate it widely, FSU entered into licensing agreements with the Florida Department of Education (FLDOE) to use the prototype assessment royalty-free as the Florida Assessment for Instruction in Reading—Florida Standards (FAIR-FS), and with Lexia Learning Systems LLC, a Rosetta Stone company (Lexia), to create its commercial solution: the Lexia® RAPID™ Assessment program. Today, RAPID consists of adaptive screening and diagnostic tests for students as they progress in areas such as word recognition, vocabulary knowledge, syntactic knowledge, and reading comprehension. Students use RAPID up to three times per year in sessions of 45 minutes or less, and teachers receive results immediately to inform instruction.

RAPID is currently used by thousands of educators and students across the U.S. It has been recommended in Massachusetts as a primary screening tool for students ages 5 and older and appears on both the Ohio Department of Education List of Approved Screening Assessments and the Michigan Lists of Initial and Extensive Literacy Assessments.

Interview with Barbara Foorman (BF) of Florida State University and Liz Brooke (LB) of Lexia Learning  

Photograph of Barbara Foorman, PhD

From the start of the project, was it always a goal for the assessment to one day be ready to be used widely in schools?

BF: Yes!

How was the connection made with the Florida Department of Education?   

BF: The FSU authors (Yaacov Petscher, Chris Schatschneider, and I) gave the assessment royalty-free in perpetuity to the FLDOE, with the condition that the department host and maintain it. The FLDOE continues to host and maintain the Grade 3 to 12 system but never completed the programming on the K to 2 prototype. The assessment we provided to the FLDOE is called the Florida Assessment for Instruction in Reading (FAIR-FS). We also worked with FSU's Office of Commercialization to create royalty and commercialization agreements.

How was the connection made with Lexia? 

BF: Dr. Liz (Crawford) Brooke, Chief Learning Officer of Lexia/Rosetta Stone, and Dr. Alison Mitchell, Director of Assessment at Lexia, had both previously worked at the Florida Center for Reading Research (FCRR). Liz served as the Director of Interventions and was a doctoral student under me, and Alison was a postdoctoral assistant in research. Both Liz and Alison had worked on previous versions of the assessment.

Photograph of Liz Brooke, PhD

LB: Also, both Yaacov and Chris had done some previous work with me on the Assessment Without Testing® technology, which was embedded in our K to 5 literacy curriculum solution, the Lexia® Core5 Reading® program.

Did Lexia have to do additional R&D to develop the FSU assessment into RAPID as a commercial offering for larger scale use? Were resources provided?  

LB: To build and scale the FSU prototype assessment into a commercial platform, our team of developers worked closely with the developers at FSU to reprogram certain software applications and databases. We've also spent the last several years at Lexia working to translate the valuable results that RAPID generates into meaningful, dynamic, and usable data and tools for schools and educators. This meant designing customized teacher and administrator reports for our myLexia® administrator dashboard, creating a library of offline instructional materials for teachers, and developing both online and in-person training materials specifically designed to support our RAPID solution.

BF: They also hired a psychometrician to submit RAPID to the National Center for Intensive Intervention, and had their programmers develop capabilities to support access to RAPID via iPads as well as through the web-based application.

What kind of licensing agreement did you (or FSU) work out?  

BF: The prototype assessment method, algorithms, and logic model that were used to develop RAPID are licensed to Lexia by FSU. Some of these may also be available for FSU to license to other interested companies. The terms of FSU's licensing agreement with Lexia are confidential; however, royalties received by FSU through its licensing arrangements are shared among the authors, academic units, and the FSU Research Foundation, according to FSU policies. (Read here for more about commercialization of FSU technologies and innovations.)

Does FSU receive royalties from the sale of RAPID?

BF: Yes. The revenue flows through FSU's royalty stream, with percentages going to the three authors and to the colleges and departments in which we are housed.

What factors did Lexia consider when determining to partner with FSU to develop RAPID?

LB: We considered the needs of our customers and the fact that we wanted to develop and offer a commercial assessment solution that balances the efficiency of adaptive technology with the insight that comes from an emphasis on reading and language skills. At Lexia, we are laser-focused on literacy and supporting the skills students need to be proficient readers. The value of the research foundation of the assessment was a natural fit for that reason. RAPID emphasizes Academic Language skills in a way that many other screening tools miss; often you would need a specialized assessment administered by a speech-language pathologist to assess the skills that RAPID captures in a relatively short period of time for a whole classroom of students.

How is RAPID marketed and distributed to schools?

LB: The Lexia RAPID Assessment was designed and is offered as a K-12 universal screening tool that schools can use up to three times per year. We currently offer RAPID as a software-as-a-service subscription with an annual per-license cost, purchased either per student or per school. We also encourage schools that use RAPID to participate in a yearlong Lexia Implementation Support Plan, which includes professional learning opportunities and data coaching specific to the RAPID solution, so they can fully understand and maximize the value of the data and instructional resources they receive as part of using RAPID.

Do you have advice for university researchers seeking to move their laboratory research into wide-spread practice?

BF: Start working with your university's office of commercialization sooner rather than later to help identify market trends and create non-disclosure agreements. In the case of educational curricula and assessments, researchers need to be (a) knowledgeable about competing products, (b) able to articulate what makes their product unique and more evidence-based than competitors' products, and (c) confident that educators will find their product useful.

LB: As Barbara noted, it is critical to identify the specific, real-world need that your work addresses and to be able to speak to how it differs from other solutions out there. It is also important to make sure that your research has validated that the product actually meets the need you are stating, as this will be the foundation of your claims in the market.

____________________________________________________________________________________________

Barbara Foorman, Ph.D., is the Frances Eppes Professor of Education, Director Emeritus of FCRR, and Director of the Regional Educational Laboratory Southeast at FSU. Barbara is an internationally known expert in reading with over 150 peer-reviewed publications. Barbara was co-editor of the Journal of Research on Educational Effectiveness and is a co-founder and board member of the Society for Research on Educational Effectiveness.

Liz Brooke, Ph.D., CCC-SLP, is the Chief Learning Officer for Rosetta Stone/Lexia Learning. Dr. Brooke is responsible for setting the educational vision for the company's Language and Literacy products, including the Adaptive Blended Learning (ABL) strategy that serves as the foundation for Rosetta Stone's products and services. Liz has been working in the education sector for over 25 years and has been published in several scholarly journals. She joined Lexia in 2010. Prior to that, she was the Director of Interventions at FCRR, and she has also served as a speech-language pathologist at Massachusetts General Hospital and in public schools. Liz began her career in the classroom as a first-grade teacher.

This interview was produced by Edward Metz of the Institute of Education Sciences. This post is the second in an ongoing series of blog posts examining moving from university research to practice at scale in education.

Measuring Social and Emotional Learning in Schools

Social and emotional learning (SEL) has been embraced by many schools and districts around the country. Yet in the rush to adopt SEL practices and support student SEL competencies, educators often lack assessment tools that are valid, reliable, and easy to use.

 

Washoe County School District in Nevada has moved the needle on SEL assessment with support from an IES Researcher-Practitioner Partnership grant. The district partnered with the Collaborative for Academic, Social, and Emotional Learning (CASEL) to develop the Social and Emotional Competency Assessments (WCSD-SECAs)—free, open-source instruments that schools can use to measure SEL competencies of students in 5th through 12th grade.

Long and short versions of the SECA are available to download from the school district's website, along with a bank of 138 items across 8 SEL domains that schools around the country can use to modify SECA assessments for their local context. The long-form version has been validated and aligned to the CASEL 5 SEL competency clusters and WCSD SEL standards (self-awareness, self-management, social awareness, relationship skills, and responsible decision making). The assessment is also available in Spanish, and Metro Nashville Public Schools offers the assessment in 8 additional languages.

Students complete the long-form SECA as part of Washoe’s Annual Student Climate Survey by rating how easy or difficult SEL skills are for them. Under the Social Awareness domain, students respond to items such as “Knowing what people may be feeling by the look on their face” or “Learning from people with different opinions than me.” Under the Responsible Decision Making domain, students rate themselves on skills such as “Saying ‘no’ to a friend who wants to break the rules” and “Thinking of different ways to solve a problem.”

The SECA is one component of Washoe County’s larger School Climate Survey Project that is marking its 10th anniversary this year. Washoe provides district-level and school-level reports on school climate to support the district’s commitment to providing safe, caring, and engaging school environments for all of Washoe’s students and families.  

Written by Emily Doolittle, NCER’s Team Lead for Social Behavioral Research

New International Comparisons of Reading, Mathematics, and Science Literacy Assessments

The Program for International Student Assessment (PISA) is a study of 15-year-old students’ performance in reading, mathematics, and science literacy that is conducted every 3 years. The PISA 2018 results provide us with a global view of U.S. students’ performance compared with their peers in nearly 80 countries and education systems. In PISA 2018, the major domain was reading literacy, although mathematics and science literacy were also assessed.

In 2018, the U.S. average score of 15-year-olds in reading literacy (505) was higher than the average score of the Organization for Economic Cooperation and Development (OECD) countries (487). Compared with the 76 other education systems with PISA 2018 reading literacy data, including both OECD and non-OECD countries, the U.S. average reading literacy score was lower than in 8 education systems, higher than in 57 education systems, and not measurably different in 11 education systems. The U.S. percentage of top performers in reading was larger than in 63 education systems, smaller than in 2 education systems, and not measurably different in 11 education systems. The average reading literacy score in 2018 (505) was not measurably different from the average score in 2000 (504), the first year PISA was administered. Among the 36 education systems that participated in both years, 10 education systems reported higher average reading literacy scores in 2018 compared with 2000, and 11 education systems reported lower scores.

The U.S. average score of 15-year-olds in mathematics literacy in 2018 (478) was lower than the OECD average score (489). Compared with the 77 other education systems with PISA 2018 mathematics literacy data, the U.S. average mathematics literacy score was lower than in 30 education systems, higher than in 39 education systems, and not measurably different in 8 education systems. The average mathematics literacy score in 2018 (478) was not measurably different from the average score in 2003 (483), the earliest year with comparable data. Among the 36 education systems that participated in both years, 10 systems reported higher mathematics literacy scores in 2018 compared with 2003, 13 education systems reported lower scores, and 13 education systems reported no measurable changes in scores.  

The U.S. average score of 15-year-olds in science literacy (502) was higher than the OECD average score (489). Compared with the 77 other education systems with PISA 2018 science literacy data, the U.S. average science literacy score was lower than in 11 education systems, higher than in 55 education systems, and not measurably different in 11 education systems. The average science literacy score in 2018 (502) was higher than the average score in 2006 (489), the earliest year with comparable data. Among the 52 education systems that participated in both years, 7 education systems reported higher average science literacy scores in 2018 compared with 2006, 22 education systems reported lower scores, and 23 education systems reported no measurable changes in scores.
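
Throughout these comparisons, "not measurably different" means that the gap between two average scores is not statistically significant once sampling error is taken into account. The post does not report standard errors, so the sketch below uses hypothetical values purely to illustrate the arithmetic behind such a comparison; the operational NCES/OECD procedure also relies on replicate weights and, for trend comparisons, linking error.

```python
from math import sqrt
from statistics import NormalDist

def compare_means(mean_a, se_a, mean_b, se_b, alpha=0.05):
    """Two-sided z-test for the difference between two independent average scores.

    A simplified illustration; operational PISA comparisons estimate standard
    errors with replicate weights and add linking error for trend comparisons.
    """
    diff = mean_a - mean_b
    se_diff = sqrt(se_a**2 + se_b**2)             # standard error of the difference
    z = diff / se_diff
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    verdict = "measurably different" if p_value < alpha else "not measurably different"
    return diff, round(p_value, 3), verdict

# Hypothetical standard errors (not reported in this post):
# U.S. average reading literacy score, 2018 (505) vs. 2000 (504).
print(compare_means(505, 3.6, 504, 7.0))
```

With the hypothetical standard errors above, the one-point gap between the 2018 and 2000 U.S. reading averages falls far short of statistical significance, which is why small score changes are often reported as "not measurably different."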

PISA is conducted in the United States by NCES and is coordinated by OECD, an intergovernmental organization of industrialized countries. Further information about PISA can be found in the technical notes, questionnaires, list of participating OECD and non-OECD countries, released assessment items, and FAQs.

 

By Thomas Snyder

New Study on U.S. Eighth-Grade Students’ Computer Literacy

In the 21st-century global economy, computer literacy and skills are an important part of an education that prepares students to compete in the workplace. The results of a recent assessment show us how U.S. students compare to some of their international peers in the areas of computer information literacy and computational thinking.

In 2018, the U.S. participated for the first time in the International Computer and Information Literacy Study (ICILS), along with 13 other education systems around the globe. The ICILS is a computer-based international assessment of eighth-grade students that measures outcomes in two domains: computer and information literacy (CIL)[1] and computational thinking (CT).[2] It compares U.S. students’ skills and experiences using technology to those of students in other education systems and provides information on teachers’ experiences, school resources, and other factors that may influence students’ CIL and CT skills.

ICILS is sponsored by the International Association for the Evaluation of Educational Achievement (IEA) and is conducted in the United States by the National Center for Education Statistics (NCES).

The newly released U.S. Results from the 2018 International Computer and Information Literacy Study (ICILS) web report provides information on how U.S. students performed on the assessment compared with students in other education systems and describes students’ and teachers’ experiences with computers.


U.S. Students’ Performance

In 2018, U.S. eighth-grade students’ average score in CIL was higher than the average of participating education systems[3] (figure 1), while the U.S. average score in CT was not measurably different from the average of participating education systems.

 


Figure 1. Average computer and information literacy (CIL) scores of eighth-grade students, by education system: 2018

p < .05. Significantly different from the U.S. estimate at the .05 level of statistical significance.

¹ Met guidelines for sample participation rates only after replacement schools were included.

² National Defined Population covers 90 to 95 percent of National Target Population.

³ Did not meet the guidelines for a sample participation rate of 85 percent and not included in the international average.

⁴ Nearly met guidelines for sample participation rates after replacement schools were included.

⁵ Data collected at the beginning of the school year.

NOTE: The ICILS computer and information literacy (CIL) scale ranges from 100 to 700. The ICILS 2018 average is the average of all participating education systems meeting international technical standards, with each education system weighted equally. Education systems are ordered by their average CIL scores, from largest to smallest. Italics indicate the benchmarking participants.

SOURCE: International Association for the Evaluation of Educational Achievement (IEA), the International Computer and Information Literacy Study (ICILS), 2018.


 

Given the importance of students' home environments in developing CIL and CT skills (Fraillon et al. 2019), students were asked how many computers (desktop or laptop) they had at home. In the United States, eighth-grade students with two or more computers at home performed better in both CIL and CT than their peers with fewer computers (figure 2). This pattern was also observed in all participating countries and education systems.

 


Figure 2. Average computational thinking (CT) scores of eighth-grade students, by student-reported number of computers at home and education system: 2018

p < .05. Significantly different from the U.S. estimate at the .05 level of statistical significance.

¹ Met guidelines for sample participation rates only after replacement schools were included.

² National Defined Population covers 90 to 95 percent of National Target Population.

³ Did not meet the guidelines for a sample participation rate of 85 percent and not included in the international average.

⁴ Nearly met guidelines for sample participation rates after replacement schools were included.

NOTE: The ICILS computational thinking (CT) scale ranges from 100 to 700. The number of computers at home includes desktop and laptop computers. Students with fewer than two computers include students reporting having “none” or “one” computer. Students with two or more computers include students reporting having “two” or “three or more” computers. The ICILS 2018 average is the average of all participating education systems meeting international technical standards, with each education system weighted equally. Education systems are ordered by their average scores of students with two or more computers at home, from largest to smallest. Italics indicate the benchmarking participants.

SOURCE: International Association for the Evaluation of Educational Achievement (IEA), the International Computer and Information Literacy Study (ICILS), 2018.


 

U.S. Students’ Technology Experiences

In 2018, 72 percent of U.S. eighth-grade students reported using the Internet to do research every school day or at least once a week, and 56 percent reported completing worksheets or exercises using information and communications technology (ICT)[4] at that frequency. Both of these percentages were higher than the respective ICILS averages (figure 3). The learning activities least frequently reported by U.S. eighth-grade students were using coding software to complete assignments (15 percent) and making video or audio productions (13 percent).

 


Figure 3. Percentage of eighth-grade students who reported using information and communications technology (ICT) every school day or at least once a week, by activity: 2018

p < .05. Significantly different from the U.S. estimate at the .05 level of statistical significance.

¹ Did not meet the guidelines for a sample participation rate of 85 percent and not included in the international average.

NOTE: The ICILS 2018 average is the average of all participating education systems meeting international technical standards, with each education system weighted equally. Activities are ordered by the percentages of U.S. students reporting using information and communications technology (ICT) for the activities, from largest to smallest.

SOURCE: International Association for the Evaluation of Educational Achievement (IEA), the International Computer and Information Literacy Study (ICILS), 2018.


 

Browse the full U.S. Results from the 2018 International Computer and Information Literacy Study (ICILS) web report to learn more about how U.S. students compare with their international peers in their computer literacy skills and experiences.

 

By Yan Wang, AIR, and Linda Hamilton, NCES

 

[1] CIL refers to “an individual's ability to use computers to investigate, create, and communicate in order to participate effectively at home, at school, in the workplace, and in society” (Fraillon et al. 2019).

[2] CT refers to “an individual’s ability to recognize aspects of real-world problems which are appropriate for computational formulation and to evaluate and develop algorithmic solutions to those problems so that the solutions could be operationalized with a computer” (Fraillon et al. 2019). CT was an optional component in 2018. Nine out of 14 ICILS countries participated in CT in 2018.

[3] U.S. results are not included in the ICILS international average because the U.S. school level response rate of 77 percent was below the international requirement for a participation rate of 85 percent.

[4] Information and communications technology (ICT) can refer to desktop computers, notebook or laptop computers, netbook computers, tablet devices, or smartphones (except when being used for talking and texting).

 

Reference

Fraillon, J., Ainley, J., Schulz, W., Duckworth, D., and Friedman, T. (2019). IEA International Computer and Information Literacy Study 2018: Assessment Framework. Cham, Switzerland: Springer. Retrieved October 7, 2019, from https://link.springer.com/book/10.1007%2F978-3-030-19389-8.

Equity Through Innovation: New Models, Methods, and Instruments to Measure What Matters for Diverse Learners

In today’s diverse classrooms, it is both challenging and critical to gather accurate and meaningful information about student knowledge and skills. Certain populations present unique challenges in this regard – for example, English learners (ELs) often struggle on assessments delivered in English. On “typical” classroom and state assessments, it can be difficult to parse how much of an EL student’s performance stems from content knowledge, and how much from language learner status. This lack of clarity makes it harder to make informed decisions about what students need instructionally, and often results in ELs being excluded from challenging (or even typical) coursework.

Over the past several years, NCER has invested in several grants to design innovative assessments that will collect and deliver better information about what ELs know and can do across the PK-12 spectrum. This work is producing some exciting results and products.

  • Jason Anthony and his colleagues at the University of South Florida have developed the School Readiness Curriculum Based Measurement System (SR-CBMS), a collection of measures for English- and Spanish-speaking 3- to 5-year-old children. Over the course of two back-to-back Measurement projects, Dr. Anthony’s team co-developed and co-normed item banks in English and Spanish in 13 different domains covering language, math, and science. The assessments are intended for a variety of uses, including screening, benchmarking, progress monitoring, and evaluation. The team used item development and evaluation procedures designed to ensure that both the English and Spanish tests are sociolinguistically appropriate for both monolingual and bilingual speakers.

 

  • Daryl Greenfield and his team at the University of Miami created Enfoque en Ciencia, a computerized-adaptive test (CAT) designed to assess Latino preschoolers’ science knowledge and skills. Enfoque en Ciencia is built on 400 Spanish-language items that cover three science content domains and eight science practices. The items were independently translated into four major Spanish dialects and reviewed by a team of bilingual experts and early childhood researchers to create a consensus translation appropriate for 3- to 5-year-olds. The assessment is delivered via touch screen and is equated with an English-language version of the same test, Lens on Science. (A simplified sketch of how a CAT selects items appears after this list.)

  • A University of Houston team led by David Francis is engaged in a project to study the factors that affect the assessment of vocabulary knowledge among ELs in unintended ways. Using a variety of psychometric methods, the team explores data from the Word Generation Academic Vocabulary Test to identify features that affect item difficulty and to examine whether those features operate similarly for students currently, formerly, and never classified as ELs. The team will also provide a set of recommendations for improving the accuracy and reliability of extant vocabulary assessments.

 

  • Researchers led by Rebecca Kopriva at the University of Wisconsin recently completed work on a set of technology-based, classroom-embedded formative assessments intended to support and encourage teachers to teach more complex math and science to ELs. The assessments use multiple methods to reduce the overall language load typically associated with challenging content in middle school math and science. The tools use auto-scoring techniques and are capable of providing immediate feedback to students and teachers in the form of specific, individualized, data-driven guidance to improve instruction for ELs.
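
For readers unfamiliar with how a computerized-adaptive test (CAT) such as Enfoque en Ciencia works, the sketch below illustrates one common approach: select each next item to maximize Fisher information under a two-parameter IRT model, then update the ability estimate after every response. The item parameters, model, and update rule are generic assumptions for illustration only, not the actual algorithm or item bank used by any of the teams above.

```python
import math
import random

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) model: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information an item provides at ability level theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def eap_estimate(responses, grid=None):
    """Expected a posteriori ability estimate with a standard-normal prior."""
    grid = grid or [g / 10.0 for g in range(-40, 41)]
    weights = []
    for theta in grid:
        w = math.exp(-0.5 * theta * theta)        # prior density (unnormalized)
        for (a, b), correct in responses:
            p = p_correct(theta, a, b)
            w *= p if correct else (1.0 - p)      # likelihood of each response
        weights.append(w)
    total = sum(weights)
    return sum(t * w for t, w in zip(grid, weights)) / total

def run_cat(item_bank, true_theta, test_length=15):
    """Administer items one at a time, always choosing the most informative one."""
    remaining = list(item_bank)
    responses, theta = [], 0.0
    for _ in range(test_length):
        item = max(remaining, key=lambda ab: fisher_info(theta, *ab))
        remaining.remove(item)
        correct = random.random() < p_correct(true_theta, *item)  # simulated examinee
        responses.append((item, correct))
        theta = eap_estimate(responses)           # re-estimate ability after each item
    return theta

# Hypothetical bank of (discrimination, difficulty) item parameters.
bank = [(random.uniform(0.8, 2.0), random.uniform(-2.5, 2.5)) for _ in range(400)]
print(round(run_cat(bank, true_theta=0.5), 2))
```

The appeal of this design is efficiency: because each item is targeted at the examinee's current ability estimate, a short test can yield a reasonably precise score, which is one reason adaptive assessments can be used with young children in brief sessions.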

 

By leveraging technology, developing new item formats and scoring models, and expanding the linguistic repertoire students may access, these teams have found ways to allow ELs – and all students – to show what really matters: their academic content knowledge and skills.

 

Written by Molly Faulkner-Bond (former NCER program officer).