IES Blog

Institute of Education Sciences

Real-World Responses in Real Time: COVID-19 Information Needs to Consider Literacy Gaps

During the COVID-19 pandemic, when people have a heightened need for information, literacy barriers can be life threatening. In the United States, roughly 20 percent of adults read at the lowest level, and another 33 percent read below proficiency.1 Thus, many may be struggling to understand written guidance on COVID-19.

IES researchers at the Center for the Study of Adult Literacy (R305C120001 and R305H180061) and their associated Adult Literacy Research Center at Georgia State University are working to address the needs of adults with literacy skill gaps. Dr. Meredith Larson spoke to Dr. Daphne Greenberg and Dr. Iris Feinberg about their work in this area.

What are your concerns for adults with low literacy during the pandemic?

We have known for a long time that the high prevalence of adults with low basic skills has consequences for both the individual and society. These consequences are heightened during this pandemic. Many adults with low literacy have “essential” jobs and must continue to work. They often interact with many different people daily. So it is crucial that they understand COVID-19 precautions for their own health and because their ability to know and practice safe behaviors has a direct impact on disease transmission to others. To be quite frank, we are concerned about the health and safety of our learners and the health and safety of others.

In the United States, we receive an overwhelming amount of information about COVID-19 daily. To make matters worse, there’s no uniform national guidance, some of the information is incorrect, and other information is conflicting. It is challenging even for highly literate individuals to make sense of it all. For example: When can a COVID-positive person step out of quarantine? Can someone be re-infected? How many feet constitute a safe distance? The list of questions goes on and on.

For someone with low literacy, it’s even more difficult to make sense of all the COVID-19 information. For example, people with low reading skills may not be able to read or understand all of the written information. Additionally, because much of the information is on the internet, adults with low digital skills and/or poor access to the internet have the added problem of not being able to find information that could possibly be helpful to them.

How are you trying to address their needs?

We’ve created a library, available on the ALRC website, with a large sample of materials written at or below a 9th-grade reading level. These documents provide specific information on topics like how to stop the spread or what to do if someone in your home has COVID-19. We hope that providers who work with adults with low literacy skills—like adult educators, community organizers, and healthcare providers—will use our library and find the high-interest/low-literacy materials. The library is also divided into “easier” and “harder” resources, so people can quickly find material at appropriate reading levels.

What could healthcare providers, the media, or others do to help?

We all must help those who may not know where to find information. Not everyone knows how or where to look for health information or whether the information they find is valid. Our analysis of PIAAC data found that people with low literacy rely more on TV and radio for information. Simple, short public service announcements that are action oriented would be great for anyone who relies on TV or radio but particularly for those who have low reading skills.

Also, we need to be better prepared for all kinds of emergencies by creating community-wide partnership plans among trusted sources for adults with low literacy like community organizations, healthcare providers, and adult education providers. In addition, we should be following plain language guidelines in all of our written and oral communications. Writing health information in plain language helps everyone and should not be an afterthought.

 


Written by Meredith Larson. This is the first in a series of blog posts that explores how researchers respond to various education-related issues and challenges.


About the PIAAC

The PIAAC is an international assessment for adults that measures cognitive skills (literacy, numeracy, and problem solving) and contains data on educational background, workplace experiences and skills, and other items. For the purposes of this blog, the lowest levels are defined as Below Level 1 and Level 1, and below proficiency is defined as Level 2. For more information about estimates of U.S. adult skills as measured by the PIAAC, see: https://nces.ed.gov/surveys/piaac/current_results.asp

 

A2i: From Research to Practice at Scale in Education

This blog post is part of an interview series with education researchers who have successfully scaled their interventions.

Assessment-to-Instruction (A2i) is an online Teacher Professional Support System that guides teachers in providing individualized literacy instruction and assessments to students in kindergarten through grade 3. Students complete the assessments independently online, without the teacher taking time away from instruction. A2i generates instantaneous teacher reports with precise recommendations for each student as well as group recommendations. See a video demo here. Between 2003 and 2017, researchers at Florida State University (FSU) and Arizona State University (ASU), led by Carol Connor, developed and evaluated A2i with the support of a series of awards from IES and the National Institutes of Health. Findings from all publications on A2i are posted here.

While results across seven controlled studies demonstrated the effectiveness of A2i, feedback from practitioners in the field showed that implementation often required substantial researcher support and local district adaptation, and that the cost was not sustainable for most school district budgets. In 2014, the development firm Learning Ovations, led by Jay Connor, received an award from the Department of Education (ED) and IES’s Small Business Innovation Research program (ED/IES SBIR) to develop a technologically upgraded and commercially viable version of A2i, ready for use at scale in classrooms around the country. With the support of a five-year, $14.65 million Education Innovation and Research (EIR) expansion grant from ED awarded in 2018, A2i is now used in more than 110 schools across the country, with plans for further expansion.

 

Interview with Carol Connor (CC) and Jay Connor (JC)

From the start of the research in the early 2000s, was it always the goal to develop a reading intervention that would one day be used on a wide scale?
CC: Yes and no. First, we had to answer the question as to whether individualization was effective in achieving student literacy outcomes. Once the research established that, we knew that this work would have wide-scale application.

When did you start thinking about a plan for distribution?
CC: Before embarking on the cumulative results studies, in 2008, Jay said that we needed to know who the “customer” was, i.e., how purchasing decisions were made at scale. His 2008 Phase I ED/IES SBIR award was critical in shifting our research focus from individual classrooms to school districts as the key scaling point.

Did you work with a technology transfer office at the university?
CC: Only to the extent of contractually clarifying intellectual property (IP) ownership and licensing. 

Who provided the support on the business side?
CC: Jay, who has an MBA/JD and has been a senior officer in two Fortune 100 companies, was instrumental in guiding our thinking about this evolution from important research to practical application.


Do you have any agreement about the IP with the university? What were the biggest challenges in this area?

JC: Yes, Learning Ovations has a 60-year renewable exclusive licensing agreement with FSU Foundation. FSU couldn’t have been better to work with.  Though there were expected back-and-forth elements of the original negotiations, it was clear that we shared the central vision of transforming literacy outcomes.  They continue to be a meaningful partner.

When and why was Learning Ovations first launched?
JC: In order to pursue SBIR funding, we needed to be a for-profit company. At first, I used my consulting business, Rubicon Partners LLP, as the legal entity for a 2008 Phase I award from ED/IES SBIR. When we applied for (and eventually won) a Fast Track Phase I & II award from SBIR in 2014, it was clear that we needed to create a full C corporation that could expand with the scaling of the business; thus Learning Ovations was formed.

Who has provided you great guidance on the business side over the years? What did they say and do?
JC: Having run large corporate entities and worked with small business start-ups in conjunction with Arizona State University (Skysong) and the University of California, Irvine (Applied Innovation at The Cove), and having taught entrepreneurship at The Paul Merage School of Business at UC Irvine, I had the experience and network to connect to whatever business guidance we needed. Further, having attended a number of reading research conferences with Carol, I was quite conversant in the language of literacy, both from the research side and from the district decision maker’s side.

How do you describe the experience of commercializing the A2i? What were the biggest achievements and challenges in terms of preparing for commercialization?

JC: Having coached scores of entrepreneurs at various stages, I can safely say that there is no harder commercialization than one that must stay faithful to underlying research. A key strategy for most new businesses is the ability to pivot as you find a better (easier) solution, but that ability is often circumscribed by the “active ingredients” of the underlying research. Knowing this, we imbued Learning Ovations with a very strong outcomes mission: all children reading at, or above, grade level by 3rd grade. That commitment to outcomes can only be assured by staying faithful to the research. Thus a possible constraint became our uncontroverted strength.

Do you have advice for university researchers seeking to move their laboratory research in education into wide-spread practice? 
JC: Start with the end in mind. As soon as you envision wide-scale usage, learn as much as you can about the present pain points and needs of your future users and frame your research questions to speak to them. Implementation should not be an after-the-fact consideration; build it into how you frame your research questions. On one level you are simultaneously asking “will this work with my treatment group?” AND “will this help me understand and deliver to my end-user group?” I can’t imagine effective research being grafted onto a business after the fact. One key mistake we see a number of researchers make is thinking in very small fragments, whereas application (i.e., the ability to go to scale) is usually much more systemic and holistic.

In one sentence, what would you say is most needed for gaining traction in the marketplace?
JC: If not you, as a researcher, someone on your team of advisors needs to know the target marketplace as well as you know the treatment protocols in your RCT.

____________

Carol Connor is a Chancellor’s Professor in the UC Irvine School of Education. Prior to that, she was a professor of psychology and a senior learning scientist at the Learning Sciences Institute at ASU. Carol’s research focuses on teaching and learning in preschool through fifth-grade classrooms, with a particular emphasis on reading comprehension, executive functioning, and the development of behavioral regulation, especially for low-income children.

Joseph “Jay” Connor, JD/MBA, is the Founder/CEO of Learning Ovations, Inc., the developer of the platform that has enabled the A2i intervention to scale. Jay has 20+ years of experience in senior business management at the multi-billion-dollar corporate level and has experience in the nonprofit and public policy arenas.

This interview was produced by Edward Metz of the Institute of Education Sciences.

Lexia RAPID Assessment: From Research to Practice at Scale in Education

With a 2010 measurement grant award and a 2010 Reading for Understanding subaward from IES, a team at Florida State University (FSU), led by Barbara Foorman, developed a web-based literacy assessment for kindergarten to grade 12 students.

Years of initial research and development of the assessment method, algorithms, and logic model at FSU concluded in 2015 with a fully functioning prototype assessment called RAPID, the Reading Assessment for Prescriptive Instructional Data. A body of research demonstrates its validity and utility. In 2014, to ready the prototype for use in schools and to disseminate it widely, FSU entered into licensing agreements with the Florida Department of Education (FLDOE) to use the prototype assessment royalty-free as the Florida Assessment for Instruction in Reading—Florida Standards (FAIR-FS), and with Lexia Learning Systems LLC, a Rosetta Stone company (Lexia), to create its commercial solution: the Lexia® RAPID™ Assessment program. Today, RAPID (watch video) consists of adaptive screening and diagnostic tests for students as they progress in areas such as word recognition, vocabulary knowledge, syntactic knowledge, and reading comprehension. Students use RAPID up to three times per year in sessions of 45 minutes or less, with teachers receiving results immediately to inform instruction.

RAPID is currently used by thousands of educators and students across the U.S. It has been recommended in Massachusetts as a primary screening tool for students ages 5 and older and appears on both the Ohio Department of Education List of Approved Screening Assessments and the Michigan Lists of Initial and Extensive Literacy Assessments.

Interview with Barbara Foorman (BF) of Florida State University and Liz Brooke (LB) of Lexia Learning  


From the start of the project, was it always a goal for the assessment to one day be ready to be used widely in schools?

BF: Yes!

How was the connection made with the Florida Department of Education?   

BF: FSU authors (Yaacov Petscher, Chris Schatschneider, and I) gave the assessment royalty-free in perpetuity to the FLDOE, with the caveat that they had to host and maintain it. The FLDOE continues to host and maintain the Grade 3 to 12 system but never completed the programming on the K to 2 system prototype. The assessment we provided to the FLDOE is called the Florida Assessment for Instruction in Reading (FAIR-FS). We also went to FSU’s Office of Commercialization to create royalty and commercialization agreements.

How was the connection made with Lexia? 

BF: Dr. Liz (Crawford) Brooke, Chief Learning Officer of Lexia/Rosetta Stone, and Dr. Alison Mitchell, Director of Assessment at Lexia, had both previously worked at the Florida Center for Reading Research (FCRR). Liz served as the Director of Interventions, as well as a doctoral student under me, and Alison was a postdoctoral assistant in research. Both Liz and Alison had worked on previous versions of the assessment.


LB: Also, both Yaacov and Chris had done some previous work with me on the Assessment Without Testing® technology, which was embedded in our K to 5 literacy curriculum solution, the Lexia® Core5 Reading® program.

Did Lexia have to do additional R&D to develop the FSU assessment into RAPID as a commercial offering for larger scale use? Were resources provided?  

LB: To build and scale the FSU prototype assessment into a commercial platform, our team of developers worked closely with the developers at FSU to reprogram certain software applications and databases. We’ve also spent the last several years at Lexia working to translate the valuable results that RAPID generates into meaningful, dynamic, and usable data and tools for schools and educators. This meant designing customized teacher and administrator reports for our myLexia® administrator dashboard, creating a library of offline instructional materials for teachers, and developing both online and in-person training materials specifically designed to support our RAPID solution.

BF: They also hired a psychometrician to submit RAPID to the National Center for Intensive Intervention, and had their programmers develop capabilities to support access to RAPID via iPads as well as through the web-based application.

What kind of licensing agreement did you (or FSU) work out?  

BF: The prototype assessment method, algorithms, and logic model that were used to develop RAPID are licensed to Lexia by FSU. Some of these may also be available for FSU to license to other interested companies.  Details of FSU’s licensing agreement terms to Lexia are confidential, however, royalties received by FSU through its licensing arrangements are shared between authors, academic units, and the FSU Research Foundation, according to FSU policies. (Read here for more about commercialization of FSU technologies and innovations.)

Does FSU receive royalties from the sale of RAPID?

BF: Yes. The revenue flows through FSU’s royalty stream, with percentages going to the three authors and to the colleges and departments in which we are housed.

What factors did Lexia consider when determining to partner with FSU to develop RAPID?

LB: We considered the needs of our customers and the fact that we wanted to develop and offer a commercial assessment solution that would balance the efficiency of adaptive technology with insight based on an emphasis on reading and language skills. At Lexia, we are laser-focused on literacy and supporting the skills students need to be proficient readers. The value of the research foundation of the assessment was a natural fit for that reason. RAPID emphasizes academic language skills in a way that many other screening tools miss; often you would need a specialized assessment given by a speech-language pathologist to assess the skills that RAPID captures in a relatively short period of time for a whole classroom of students.

How is RAPID marketed and distributed to schools?

LB: The Lexia RAPID Assessment was designed and is offered as a K-12 universal screening tool that schools can use up to three times per year. We currently offer RAPID as a software-as-a-service subscription on an annual cost-per-license basis, purchased either per student or per school. We also encourage schools that use RAPID to participate in a yearlong Lexia Implementation Support Plan, which includes professional learning opportunities and data coaching specific to the RAPID solution, to really understand and maximize the value of the data and instructional resources they receive as part of using RAPID.

Do you have advice for university researchers seeking to move their laboratory research into wide-spread practice?

BF: Start working with your university’s office of commercialization sooner rather than later to help identify market trends and create non-disclosure agreements. In the case of educational curricula and assessments, researchers need to (a) be knowledgeable about competing products, (b) be able to articulate what is unique about their product and how it is more evidence-based than competitors’ products, and (c) know that educators will find their product useful.

LB: As Barbara noted, it is critical to identify the specific, real-world need that your work is addressing and to be able to speak to how that is different from other solutions out there. It is also really important to make sure that your research has truly validated that your product meets the need you are stating, as this will be the foundation of your claims in the market.

____________

Barbara Foorman, Ph.D., is the Frances Eppes Professor of Education, Director Emeritus of the FCRR, and Director of the Regional Educational Laboratory Southeast at FSU. Barbara is an internationally known expert in reading with over 150 peer-reviewed publications. She was co-editor of the Journal of Research on Educational Effectiveness and is a co-founder and board member of the Society for Research on Educational Effectiveness.

Liz Brooke, Ph.D., CCC-SLP is the Chief Learning Officer for Rosetta Stone/Lexia Learning. Dr. Liz Brooke is responsible for setting the educational vision for the company's Language and Literacy products, including the Adaptive Blended Learning (ABL) strategy that serves as the foundation for Rosetta Stone’s products and services. Liz has been working in the education sector for over 25 years and has been published in several scholarly journals. Liz joined Lexia in 2010. Prior to that, she worked as the Director of Interventions at the FCRR and she has also served as a speech-language pathologist at Massachusetts General Hospital and in the public school setting. Liz began her career in the classroom as a first-grade teacher.

This interview was produced by Edward Metz of the Institute of Education Sciences. This post is the second in an ongoing series of blog posts examining moving from university research to practice at scale in education.

New International Comparisons of Reading, Mathematics, and Science Literacy Assessments

The Program for International Student Assessment (PISA) is a study of 15-year-old students’ performance in reading, mathematics, and science literacy that is conducted every 3 years. The PISA 2018 results provide us with a global view of U.S. students’ performance compared with their peers in nearly 80 countries and education systems. In PISA 2018, the major domain was reading literacy, although mathematics and science literacy were also assessed.

In 2018, the U.S. average score of 15-year-olds in reading literacy (505) was higher than the average score of the Organization for Economic Cooperation and Development (OECD) countries (487). Compared with the 76 other education systems with PISA 2018 reading literacy data, including both OECD and non-OECD countries, the U.S. average reading literacy score was lower than in 8 education systems, higher than in 57 education systems, and not measurably different in 11 education systems. The U.S. percentage of top performers in reading was larger than in 63 education systems, smaller than in 2 education systems, and not measurably different in 11 education systems. The average reading literacy score in 2018 (505) was not measurably different from the average score in 2000 (504), the first year PISA was administered. Among the 36 education systems that participated in both years, 10 education systems reported higher average reading literacy scores in 2018 compared with 2000, 11 education systems reported lower scores, and 15 education systems reported no measurable changes in scores.

The U.S. average score of 15-year-olds in mathematics literacy in 2018 (478) was lower than the OECD average score (489). Compared with the 77 other education systems with PISA 2018 mathematics literacy data, the U.S. average mathematics literacy score was lower than in 30 education systems, higher than in 39 education systems, and not measurably different in 8 education systems. The average mathematics literacy score in 2018 (478) was not measurably different from the average score in 2003 (483), the earliest year with comparable data. Among the 36 education systems that participated in both years, 10 systems reported higher mathematics literacy scores in 2018 compared with 2003, 13 education systems reported lower scores, and 13 education systems reported no measurable changes in scores.  

The U.S. average score of 15-year-olds in science literacy (502) was higher than the OECD average score (489). Compared with the 77 other education systems with PISA 2018 science literacy data, the U.S. average science literacy score was lower than in 11 education systems, higher than in 55 education systems, and not measurably different in 11 education systems. The average science literacy score in 2018 (502) was higher than the average score in 2006 (489), the earliest year with comparable data. Among the 52 education systems that participated in both years, 7 education systems reported higher average science literacy scores in 2018 compared with 2006, 22 education systems reported lower scores, and 23 education systems reported no measurable changes in scores.

PISA is conducted in the United States by NCES and is coordinated by OECD, an intergovernmental organization of industrialized countries. Further information about PISA can be found in the technical notes, questionnaires, list of participating OECD and non-OECD countries, released assessment items, and FAQs.

 

By Thomas Snyder

IES at the Conference on Computing and Sustainable Societies

Over the summer, researchers, technologists, and policymakers gathered in Accra, Ghana for the Association for Computing Machinery’s Conference on Computing and Sustainable Societies (ACM COMPASS) to discuss the role of information technologies in international development.

Two IES-funded researchers from Carnegie Mellon University’s Program in Interdisciplinary Education Research, Michael Madaio and Dr. Amy Ogan, shared their research on developing voice-based early literacy technologies and evaluating their efficacy with low-literate, bilingual families in the Ivory Coast. 

Their research draws on methods from human-computer interaction, the learning sciences, and information and communication technology for development to design educational technologies that are culturally and contextually appropriate.

Although the COMPASS conference focused on cross-cultural applications and technology for development, the research presented has implications for U.S.-based education researchers, practitioners, and policymakers.

For instance, while research provides evidence for the importance of parental involvement in early literacy, parents with low literacy in the target language, as in many bilingual immigrant communities in the U.S., may not be able to support their children with the explicit, instrumental help suggested by prior research (for example, letter naming or book reading). This suggests that there may be opportunities for technology to scaffold support from low-literate or English Learner (EL) parents in other ways.

At the conference, researchers described interactive voice-based systems (known as “IVR”) that help low-literate users find out about crop yields, understand local government policies, and engage on social media.  

This body of work has implications for designers of learning technologies in the U.S. Many families may not have a smartphone, but basic feature phones are ubiquitous worldwide, including in low-income, immigrant communities in the U.S. Thus, designers of learning technologies may consider designing SMS- or voice-based (such as IVR) systems, while schools or school districts may consider how to use voice-based systems to engage low-literate or EL families who may not have a smartphone or who may not be able to read SMS information messages.

In a rapidly changing, increasingly globalized world, research at IES may benefit from increased engagement with international research, both research focusing specifically on education and information technology research that has implications for educational research, practice, and policy.

This guest blog was written by Michael Madaio. He is an IES Predoctoral Fellow in the Program in Interdisciplinary Education Research at Carnegie Mellon University. He is placed in the Human-Computer Interaction Institute.