Inside IES Research

Notes from NCER & NCSER

SELweb: From Research to Practice at Scale in Education

With a 2011 IES development grant, researchers at Rush University Medical Center, led by Clark McKown, created SELweb, a web-based system to assess social-emotional skills in children from kindergarten through Grade 3. The system (watch the video demo) includes illustrated and narrated modules that gauge children’s social acceptance with peers and assess their ability to understand others’ emotions and perspectives, solve social problems, and self-regulate. The system generates teacher reports with norm-referenced scores and classroom social network maps. Field trials with 8,881 children in seven states demonstrated that the system produces reliable and valid measures of social-emotional skills. Findings from all publications on SELweb are posted here.

In 2016, with support from the university, McKown launched a company called xSEL Labs to further develop SELweb, ready it for use at scale, and facilitate its launch into the school marketplace. SELweb is currently used in 21 school districts in 16 states by over 90,000 students per year.

Interview with Clark McKown of Rush University Medical Center and xSEL Labs


From the start of the project, was it always a goal for SELweb to one day be ready to be used widely in schools?

CM: When we started, our aspiration was to build a usable, feasible, scientifically sound assessment and to show that it could be done. As the end of the grant got closer, we knew that unless we figured out another way to support the work, this would be yet another good idea that would wither on the vine after showing evidence of promise. In the last year and a half of the grant, I started thinking about how to get this into the hands of educators to support teaching and learning, and how to do it in a large-scale way.


By the conclusion of your IES grant to develop SELweb, how close were you to the version that is being used now in schools? How much more time and money was it going to take?

CM: Let me answer that in two ways. First is how close I thought we were to a scalable version. I thought we were pretty close. Then let me answer how close we really were. Not very close. We had built SELweb in a Flash-based application that was perfectly suited to small-scale data collection and was economical to build. But for a number of reasons, there was no way that it would work at scale. So we needed capital, time, and a new platform. We found an outstanding technology partner, the 3C Institute, whose terrific ed tech platform is robust, scalable, and well suited to our needs. And we received funding from the Wallace Foundation to migrate the assessment from the original platform to 3C’s. The other thing I have learned is that technology is not one and done. It requires continued investment, upkeep, and improvement.

What experiences led you to start a company? How were you able to do this as an academic researcher?

CM: I could tell you that I ran a children’s center, had a lot of program development experience, had raised funds, and all that would be true, and some of the skills I developed in those roles have transferred. But starting a company is really different than anything I’d done before. It’s exciting and terrifying. It requires constant effort, a willingness to change course, rapid decision-making, collaboration, and a different kind of creativity than the academy. Turns out I really like it. I probably wouldn’t have made the leap except that the research led me to something that I felt required the marketplace to develop further and to realize its potential. There was really only so far I could take SELweb in the academic context. And universities recognize the limitations of doing business through the university—that’s why they have offices of technology transfer—to spin off good ideas from the academy to the market. And it’s a feather in their cap when they help a faculty member commercialize an invention. So really, it was about finding out how to use the resources at my disposal to migrate to an ecosystem suited to continuing to improve SELweb and to get it into the hands of educators.

How did xSEL Labs pay for the full development of the version of SELweb ready for use at scale?

CM: Just as we were getting off the ground, we developed a partnership with a research funder (the Wallace Foundation) who was interested in using SELweb as an outcome measure in a large-scale field trial of an SEL initiative. They really liked SELweb, but it was clear that in its original form, it simply wouldn’t work at the scale they required. So we worked out a contract that included financial support for improving the system in exchange for discounted fees in the out years of the project.

What agreement did you make with the university in order to start your company and commercialize SELweb?

CM: I negotiated a license for the intellectual property from Rush University with the university getting a royalty and a small equity stake in the company.

Did anyone provide you guidance on the business side?

CM: Yes. I lucked into a group of in-laws who happen to be entrepreneurs, some in the education space. And my wife has a sharp business mind. They were helpful. I also sought and found advisors with relevant expertise to help me think through the initial licensing terms, and then pricing, marketing, sales, product development, and the like. One of the nice things about business is that you aren’t expected to know everything. You do need to know how and when to reach out to others for guidance, and how to frame the issues so that guidance is relevant and helpful.

How do you describe the experience of commercializing SELweb?

CM: Commercialization is, in my experience, an exercise in experimentation and successive approximations: you have to find the time and money to test the waters. It’s an exciting and challenging leap from the lab to the marketplace. You can’t do it alone, and even with great partners, competitive forces and chance factors make success at scale hard to accomplish. Knowing what you don’t know, and finding partners who can help, is critical.

I forgot who described a startup as a temporary organization designed to test whether a business idea is replicable and sustainable. That really rings true. The experience has been about leaving the safe confines of the university and entering the dynamic and endlessly interesting bazaar beyond the ivory tower to see if what I have to offer can solve a problem of practice.

In one sentence (or two!), what would you say is most needed for gaining traction in the marketplace?

CM: Figure out who the customer is, what the customer needs, and how what you have to offer addresses those needs. Until you get that down, all the evidence in the world won’t lead to scale.

Do you have advice for university researchers seeking to move their laboratory research into widespread practice?

CM: It’s not really practical for most university researchers to shift gears and become entrepreneurs, so I don’t advise doing what I did, although I’m so glad I did. Most university researchers should continue doing great science, and when they recognize a scalable idea, consider commercialization as an important option for bringing the idea to scale. My impression is that academic culture often finds commerce to be alien and somewhat grubby, which can get in the way. The truth is, there are whip-smart people in business who have tremendous expertise. The biggest hurdle for many university researchers will be to recognize that they lack expertise in bringing ideas to market; they will need to find that expertise, respect it, and let go of some control as the idea, program, or product is shaped by market forces. It’s also a hard truth for researchers, but most of the world doesn’t care very much about evidence of efficacy. They have much more pressing problems of practice to attend to. Don’t get me wrong—evidence of efficacy is crucial. But for an efficacious idea to go to scale, usability and feasibility are the biggest considerations.

For academics, getting a product into the marketplace requires a new set of considerations. For example, universities and granting mechanisms reward solo stars; the marketplace rewards partnerships. That is a big shift in mindset, and not easily accomplished. Think partnerships, not empires; listen more than you talk.

Any final words of wisdom in moving your intervention from research to practice?

CM: Proving the concept of an ed tech product gets you to the starting line, not the finish. Going to scale benefits from, probably actually requires, the power of the marketplace. Figuring out how the marketplace works and how to fit your product into it is a big leap for most professors and inventors. Knowing the product is not the same as knowing how to commercialize it.

 ____________________________________________________________________________

Clark McKown is a national expert on social and emotional learning (SEL) assessments. In his role as a university faculty member, Clark has been the lead scientist on several large grants supporting the development and validation of SELweb, Networker, and other assessment systems. Clark is passionate about creating usable, feasible, and scientifically sound tools that help educators and their students.

This interview was produced by Ed Metz of the Institute of Education Sciences. This post is the third in an ongoing series of blog posts examining moving from university research to practice at scale in education.

Lexia RAPID Assessment: From Research to Practice at Scale in Education

With a 2010 measurement grant award and a 2010 Reading for Understanding subaward from IES, a team at Florida State University (FSU) led by Barbara Foorman developed a web-based literacy assessment for Kindergarten to Grade 12 students.

Years of initial research and development of the assessment method, algorithms, and logic model at FSU concluded in 2015 with a fully functioning prototype assessment called RAPID, the Reading Assessment for Prescriptive Instructional Data. A body of research demonstrates its validity and utility. In 2014, to ready the prototype for use in schools and to disseminate it on a wide scale, FSU entered into licensing agreements with the Florida Department of Education (FLDOE) to use the prototype assessment royalty-free as the Florida Assessment for Instruction in Reading—Florida Standards (FAIR-FS), and with Lexia Learning Systems LLC, a Rosetta Stone company (Lexia), to create its commercial solution: the Lexia® RAPID™ Assessment program. Today, RAPID (watch video) consists of adaptive screening and diagnostic tests for students as they progress in areas such as word recognition, vocabulary knowledge, syntactic knowledge, and reading comprehension. Students use RAPID up to three times per year in sessions of 45 minutes or less, with teachers receiving results immediately to inform instruction.

RAPID is currently used by thousands of educators and students across the U.S. It has been recommended in Massachusetts as a primary screening tool for students ages 5 and older and is on both the Ohio Department of Education List of Approved Screening Assessments and the Michigan Lists of Initial and Extensive Literacy Assessments.

Interview with Barbara Foorman (BF) of Florida State University and Liz Brooke (LB) of Lexia Learning  

Photograph of Barbara Foorman, PhD

From the start of the project, was it always a goal for the assessment to one day be ready to be used widely in schools?

BF: Yes!

How was the connection made with the Florida Department of Education?   

BF: FSU authors (Yaacov Petscher, Chris Schatschneider, and I) gave the assessment royalty-free in perpetuity to the FLDOE, with the caveat that they had to host and maintain it. The FLDOE continues to host and maintain the Grade 3 to 12 system but never completed the programming on the K to 2 prototype. The assessment we provided to the FLDOE is called the Florida Assessment for Instruction in Reading (FAIR-FS). We also went to FSU’s Office of Commercialization to create royalty and commercialization agreements.

How was the connection made with Lexia? 

BF: Dr. Liz (Crawford) Brooke, Chief Learning Officer of Lexia/Rosetta Stone, and Dr. Alison Mitchell, Director of Assessment at Lexia, had both previously worked at the Florida Center for Reading Research (FCRR). Liz served as the Director of Interventions, as well as a doctoral student under me, and Alison was a postdoctoral assistant in research. Both Liz and Alison had worked on previous versions of the assessment.

Photograph of Liz Brooke, PhD

LB: Also, both Yaacov and Chris had done some previous work with me on the Assessment Without Testing® technology, which was embedded in our K to 5 literacy curriculum solution, the Lexia® Core5 Reading® program.

Did Lexia have to do additional R&D to develop the FSU assessment into RAPID as a commercial offering for larger scale use? Were resources provided?  

LB: To build and scale the FSU prototype assessment into a commercial platform, our team of developers worked closely with the developers at FSU to reprogram certain software applications and databases. We’ve also spent the last several years at Lexia working to translate the valuable results that RAPID generates into meaningful, dynamic, and usable data and tools for schools and educators. This meant designing customized teacher and administrator reports for our myLexia® administrator dashboard, creating a library of offline instructional materials for teachers, and developing both online and in-person training materials specifically designed to support our RAPID solution.

BF: They also hired a psychometrician to submit RAPID to the National Center for Intensive Intervention, and had their programmers develop capabilities to support access to RAPID via iPads as well as through the web-based application.

What kind of licensing agreement did you (or FSU) work out?  

BF: The prototype assessment method, algorithms, and logic model that were used to develop RAPID are licensed to Lexia by FSU. Some of these may also be available for FSU to license to other interested companies.  Details of FSU’s licensing agreement terms to Lexia are confidential, however, royalties received by FSU through its licensing arrangements are shared between authors, academic units, and the FSU Research Foundation, according to FSU policies. (Read here for more about commercialization of FSU technologies and innovations.)

Does FSU receive royalties from the sale of RAPID?

BF: Yes. The revenue flows through FSU’s royalty stream—percentages to the three authors and the colleges and departments that we three authors are housed in.

What factors did Lexia consider when determining to partner with FSU to develop RAPID?

LB: We considered the needs of our customers and the fact that we wanted to develop and offer a commercial assessment solution that would strike a balance between efficiency from the adaptive technology and insight based on an emphasis on reading and language skills. At Lexia, we are laser-focused on literacy and supporting the skills students need to be proficient readers. The value of the research foundation of the assessment was a natural fit for that reason. RAPID emphasizes Academic Language skills in a way that many other screening tools miss; often you’d need a specialized assessment given by a speech-language pathologist to assess the skills that RAPID captures in a relatively short period of time for a whole classroom of students.

How is RAPID marketed and distributed to schools?

LB: The Lexia RAPID Assessment was designed and is offered as a K-12 universal screening tool that schools can use up to three times per year. We currently offer RAPID as a software-as-a-service subscription on an annual cost-per-license basis that can be purchased either per student or per school. We also encourage schools that utilize RAPID to participate in a yearlong Lexia Implementation Support Plan that includes professional learning opportunities and data coaching specific to the RAPID solution, to really understand and maximize the value of the data and instructional resources that they receive as part of using RAPID.

Do you have advice for university researchers seeking to move their laboratory research into wide-spread practice?

BF: Start working with your university’s office of commercialization sooner rather than later to help identify market trends and create non-disclosure agreements. In the case of educational curricula and assessments, researchers need to be (a) knowledgeable about competing products, (b) able to articulate what’s unique and more evidence-based about their product than the competitors’, and (c) confident that educators will find their product useful.

LB: As Barbara noted, it is critical to identify the specific, real-world need that your work is addressing and to be able to speak to how it differs from other solutions out there. It’s also really important to make sure that the research you’ve done has validated that your work meets the need you are stating, as this will be the foundation of your claims in the market.

____________________________________________________________________________________________

Barbara Foorman, Ph.D., is the Frances Eppes Professor of Education, Director Emeritus of FCRR, and Director of the Regional Educational Laboratory Southeast at FSU. Barbara is an internationally known expert in reading with over 150 peer reviewed publications. Barbara was co-editor of the Journal of Research on Educational Effectiveness and is a co-founder and on the board of the Society for Research on Educational Effectiveness

Liz Brooke, Ph.D., CCC-SLP is the Chief Learning Officer for Rosetta Stone/Lexia Learning. Dr. Liz Brooke is responsible for setting the educational vision for the company's Language and Literacy products, including the Adaptive Blended Learning (ABL) strategy that serves as the foundation for Rosetta Stone’s products and services. Liz has been working in the education sector for over 25 years and has been published in several scholarly journals. Liz joined Lexia in 2010. Prior to that, she worked as the Director of Interventions at the FCRR and she has also served as a speech-language pathologist at Massachusetts General Hospital and in the public school setting. Liz began her career in the classroom as a first-grade teacher.

This interview was produced by Edward Metz of the Institute of Education Sciences. This post is the second in an ongoing series of blog posts examining moving from university research to practice at scale in education.

Measuring Social and Emotional Learning in Schools

Social and emotional learning (SEL) has been embraced by many schools and districts around the country. Yet in the rush to adopt SEL practices and support student SEL competencies, educators often lack assessment tools that are valid, reliable, and easy to use.


Washoe County School District in Nevada has moved the needle on SEL assessment with support from an IES Researcher-Practitioner Partnership grant. The district partnered with the Collaborative for Academic, Social, and Emotional Learning (CASEL) to develop the Social and Emotional Competency Assessments (WCSD-SECAs)—free, open-source instruments that schools can use to measure SEL competencies of students in 5th through 12th grade.

Long and short versions of the SECA are available to download from the school district’s website, along with a bank of 138 items across 8 SEL domains that schools around the country can use to modify SECA assessments for their local context. The long-form version has been validated and aligned to the CASEL 5 SEL competency clusters and WCSD SEL standards (self-awareness, self-management, social awareness, relationship skills, and responsible decision making). The assessment is also available in Spanish, and the Metro Nashville Public Schools offer the assessment in 8 additional languages.

Students complete the long-form SECA as part of Washoe’s Annual Student Climate Survey by rating how easy or difficult SEL skills are for them. Under the Social Awareness domain, students respond to items such as “Knowing what people may be feeling by the look on their face” or “Learning from people with different opinions than me.” Under the Responsible Decision Making domain, students rate themselves on skills such as “Saying ‘no’ to a friend who wants to break the rules” and “Thinking of different ways to solve a problem.”

The SECA is one component of Washoe County’s larger School Climate Survey Project that is marking its 10th anniversary this year. Washoe provides district-level and school-level reports on school climate to support the district’s commitment to providing safe, caring, and engaging school environments for all of Washoe’s students and families.  

Written by Emily Doolittle, NCER’s Team Lead for Social Behavioral Research

Equity Through Innovation: New Models, Methods, and Instruments to Measure What Matters for Diverse Learners

In today’s diverse classrooms, it is both challenging and critical to gather accurate and meaningful information about student knowledge and skills. Certain populations present unique challenges in this regard – for example, English learners (ELs) often struggle on assessments delivered in English. On “typical” classroom and state assessments, it can be difficult to parse how much of an EL student’s performance stems from content knowledge, and how much from language learner status. This lack of clarity makes it harder to make informed decisions about what students need instructionally, and often results in ELs being excluded from challenging (or even typical) coursework.

Over the past several years, NCER has invested in several grants to design innovative assessments that will collect and deliver better information about what ELs know and can do across the PK-12 spectrum. This work is producing some exciting results and products.

  • Jason Anthony and his colleagues at the University of South Florida have developed the School Readiness Curriculum Based Measurement System (SR-CBMS), a collection of measures for English- and Spanish-speaking 3- to 5-year-old children. Over the course of two back-to-back Measurement projects, Dr. Anthony’s team co-developed and co-normed item banks in English and Spanish in 13 different domains covering language, math, and science. The assessments are intended for a variety of uses, including screening, benchmarking, progress monitoring, and evaluation. The team used item development and evaluation procedures designed to ensure that both the English and Spanish tests are sociolinguistically appropriate for both monolingual and bilingual speakers.


  • Daryl Greenfield and his team at the University of Miami created Enfoque en Ciencia, a computerized-adaptive test (CAT) designed to assess Latino preschoolers’ science knowledge and skills. Enfoque en Ciencia is built on 400 Spanish-language items that cover three science content domains and eight science practices. The items were independently translated into four major Spanish dialects and reviewed by a team of bilingual experts and early childhood researchers to create a consensus translation that would be appropriate for 3 to 5 year olds. The assessment is delivered via touch screen and is equated with an English-language version of the same test, Lens on Science.

  • A University of Houston team led by David Francis is engaged in a project to study the factors that affect assessment of vocabulary knowledge among ELs in unintended ways. Using a variety of psychometric methods, the team explores data from the Word Generation Academic Vocabulary Test to identify features that affect item difficulty and to examine whether these features operate similarly for current and former ELs as well as students who have never been classified as ELs. The team will also provide a set of recommendations for improving the accuracy and reliability of extant vocabulary assessments.


  • Researchers led by Rebecca Kopriva at the University of Wisconsin recently completed work on a set of technology-based, classroom-embedded formative assessments intended to support and encourage teachers to teach more complex math and science to ELs. The assessments use multiple methods to reduce the overall language load typically associated with challenging content in middle school math and science. The tools use auto-scoring techniques and are capable of providing immediate feedback to students and teachers in the form of specific, individualized, data-driven guidance to improve instruction for ELs.
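The adaptive item-selection idea behind a computerized-adaptive test (CAT) like Enfoque en Ciencia can be sketched in a few lines. This is an illustrative toy, not the operational algorithm: the fixed five-item session, the step-up/step-down ability update, and the names `run_cat` and `simulated_student` are all invented for illustration.

```python
# Illustrative sketch of computerized-adaptive item selection. Operational
# CATs estimate ability with an item response theory (IRT) model; here the
# estimate is simply nudged up or down after each answer.

def run_cat(item_difficulties, answers, n_items=5, step=0.5):
    """Administer n_items, always choosing the unused item whose difficulty
    is closest to the current ability estimate, then nudging the estimate
    up after a correct answer and down after an incorrect one."""
    ability = 0.0
    unused = dict(item_difficulties)  # item_id -> difficulty (copy; pool untouched)
    for _ in range(n_items):
        # The item nearest the current estimate is the most informative one.
        item = min(unused, key=lambda i: abs(unused[i] - ability))
        ability += step if answers(item) else -step
        del unused[item]
    return ability

def simulated_student(true_ability, difficulties):
    """Deterministic test-taker: correct whenever the item is at or below their level."""
    return lambda item: true_ability >= difficulties[item]

# Ten items with difficulties from -2.0 to 2.5; the estimate climbs toward
# the simulated student's true ability and then oscillates around it.
pool = {i: i * 0.5 - 2.0 for i in range(10)}
print(run_cat(pool, simulated_student(1.0, pool)))
```

A real CAT would also use a stopping rule based on measurement precision rather than a fixed item count, but the core move (always administer the unused item most informative at the current ability estimate) is the same.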


By leveraging technology, developing new item formats and scoring models, and expanding the linguistic repertoire students may access, these teams have found ways to allow ELs – and all students – to show what really matters: their academic content knowledge and skills.


Written by Molly Faulkner-Bond (former NCER program officer).


CAPR: Answers to Pressing Questions in Developmental Education

Since 2014, IES has funded the Center for the Analysis of Postsecondary Readiness (CAPR) to answer questions about the rapidly evolving landscape of developmental education at community colleges and open-access four-year institutions. CAPR is providing new insights into how colleges are reforming developmental education and how their reforms are impacting student outcomes through three major studies:

  • A survey and interviews about developmental education practices and reform initiatives
  • An evaluation of the use of multiple measures for assessing college readiness
  • An evaluation of math pathways

Preliminary results from these studies indicate that some reforms help more students finish their developmental requirements and go on to do well in college-level math and English.

National Study of Developmental Education Policies and Practices

CAPR has documented widespread reform in developmental education at two- and four-year colleges through a national survey and interviews on developmental education practices and reforms. Early results from the survey show that colleges are moving away from relying solely on standardized tests for placing students into developmental courses. Colleges are also using new approaches to delivering developmental education, including shortening developmental sequences by compressing or combining courses, using technology to deliver self-paced instruction, and placing developmental students into college-level courses with extra supports, often called corequisite remediation.

Developmental Math Instructional Methods in Public Two-Year Colleges (Percentages of Colleges Implementing Specific Reform Strategies)

Notes: Percentages among two-year public colleges that reported offering developmental courses. Colleges were counted as using an instructional method if they used it in at least two course sections. Categories are not mutually exclusive.

Evaluation of Developmental Math Pathways and Student Outcomes

CAPR has teamed up with the Charles A. Dana Center at the University of Texas at Austin to evaluate the Dana Center Mathematics Pathways (DCMP) curriculum at four community colleges in Texas. The math pathways model tailors math courses to particular majors, with a statistics pathway for social science majors, a quantitative reasoning pathway for humanities majors, and an algebra-to-calculus pathway for STEM majors. DCMP originally compressed developmental math into one semester, though now the Dana Center is recommending corequisite models. Instructors seek to engage students by delving deeply into math concepts, focusing on real-world problems, and having students work together to develop solutions.

Interim results show that larger percentages of students assigned to DCMP (versus the traditional developmental sequence) enrolled in and passed developmental math. More of the DCMP students also took and passed college-level math, fulfilling an important graduation requirement. After three semesters, 25 percent of program group students passed a college-level math course, compared with 17 percent of students assigned to traditional remediation.

Evaluation of Alternative Placement Systems and Student Outcomes (aka Multiple Measures)

CAPR is also studying the impact of using a combination of measures—such as high school GPA, years out of high school, and placement test scores—to predict whether students belong in developmental or college-level courses. Early results from the multiple measures study show that, in English and to a lesser extent in math, the multiple measures algorithms placed more students into college-level courses, and more students passed those courses (compared to students placed with a single test score).
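As a concrete illustration, a multiple-measures placement rule of the kind described above can be as simple as a weighted composite of the named inputs. The weights, scales, and cutoff below are invented (the post does not publish CAPR's actual algorithms), and `place_student` is a hypothetical name.

```python
# Hypothetical sketch of a "multiple measures" placement rule combining the
# three inputs named above: high school GPA, years out of high school, and
# a placement test score. All weights and the cutoff are illustrative.

def place_student(hs_gpa, years_out, test_score):
    """Return 'college-level' or 'developmental' from a weighted index."""
    gpa_part = hs_gpa / 4.0                           # GPA on a 4.0 scale (assumed)
    recency_part = max(0.0, 1.0 - years_out / 10.0)   # GPA signal decays over 10 years
    test_part = test_score / 100.0                    # test scored 0-100 (assumed)

    # Weighted composite; weights and cutoff are invented, not CAPR's.
    index = 0.5 * gpa_part + 0.1 * recency_part + 0.4 * test_part
    return "college-level" if index >= 0.6 else "developmental"

# A strong GPA can offset a middling test score:
print(place_student(hs_gpa=3.5, years_out=1, test_score=70))  # -> college-level
```

In practice, placement rules like this are typically fit to historical data (for example, predicting each student's probability of passing the college-level course) rather than hand-weighted.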


College-Level English Course Placement, Enrollment, and Completion in CAPR’s Multiple Measures Study (Percentages Compared Across Placement Conditions)


College-Level Math Course Placement and Completion in CAPR’s Multiple Measures Study

Looking Ahead to the Future of Developmental Education

These early results from CAPR’s evaluations of multiple measures and math pathways suggest that those reforms are likely to be important pieces of future developmental education systems. CAPR will release final results from its three studies in 2019 and 2020.

Guest blog by Nikki Edgecombe and Alexander Mayer

Nikki Edgecombe is the principal investigator of the Center for the Analysis of Postsecondary Readiness, an IES-funded center led by the Community College Research Center (CCRC) and MDRC, and a senior research scientist at CCRC. Alexander Mayer is the co-principal investigator of CAPR and deputy director of postsecondary education at MDRC.