Inside IES Research

Notes from NCER & NCSER

Literacy and Deafness: Helping Students who are D/HH Improve Language and Writing Skills

Dr. Hannah Dostal (left), University of Connecticut, and Dr. Kimberly Wolbers (right), University of Tennessee

September is National Literacy Month and Deaf Awareness Month. To celebrate both occasions, we spoke with two IES-funded principal investigators about their intervention aimed at increasing the writing and language skills of students who are deaf or hard of hearing through teacher professional development targeting writing instruction and use of multilingual strategies. Together with their team, Dr. Kimberly Wolbers (University of Tennessee) and Dr. Hannah Dostal (University of Connecticut) developed Strategic and Interactive Writing Instruction (SIWI) and tested SIWI for efficacy. The team is now analyzing effects of SIWI on both student and teacher outcomes in the D/HH space.

What are some challenges facing deaf and hard of hearing (D/HH) students in the area of literacy? How does your project address these student-related challenges?

Children who are D/HH are highly diverse with respect to language modality (spoken, sign) and proficiency. Understanding this diversity is the foundation of their literacy learning and academic engagement. Working between languages and across modalities when engaged in literacy tasks is a unique challenge for D/HH writers. For example, a student may use American Sign Language (ASL), which does not have a written form, while learning and using English text as they read and write. Strategies used during writing instruction that scaffold bilingual and multilingual development have the potential to leverage student knowledge of languages to support literacy development. During SIWI, teachers engage students in explicitly comparing and contrasting ASL and English with the intention of increasing metalinguistic knowledge and translation abilities.

Another unique challenge is that a number of D/HH students lack consistent exposure to accessible language at home and school. They may not hear sufficient amounts of spoken language to acquire its complexities and may not have sufficient exposure to sign language early in their lives to acquire visual language. Such language deprivation directly impacts literacy development. Based on what we are seeing and learning from our school partners, the shift to online learning during COVID-19 exacerbated delays in academic progress, as D/HH students experienced greater language isolation during this time.

Teachers implementing SIWI tackle expressive language delays head on. They use a designated space in the classroom, called the Language Zone, to develop, translate, and revise ideas generated in ASL and English.

What are some challenges facing teachers of D/HH in the area of literacy? How does your project address these teacher-related challenges?

It is becoming increasingly challenging to find qualified teachers of the deaf. Not only are there shortages in the field, but many current teachers also point to limited preparation and a lack of assessment materials, curriculum, and instructional resources specifically designed for D/HH students with distinct language histories.

The SIWI professional development (PD) program is designed to address these challenges facing teachers. It is a multi-component PD program that is intensive and sustained over a 3-year period and consists of a summer institute, site visits, and individual biweekly online coaching. Teachers not only learn about effective approaches but also how to flexibly enact the approaches with students who have diverse language histories and literacy skills.

What have you found so far?

SIWI has been implemented across settings with D/HH students, and studies so far suggest SIWI results in significant language and literacy growth. Results from IES-funded studies using a variety of methods demonstrate SIWI’s positive impact on student outcomes. For example, we found that SIWI was associated with positive student gains in the effective use of genre-related writing traits and in grammar and conventions, including an increase in the length of writing as well as in written language clarity and complexity. Recent analyses, currently in press, demonstrate that in one academic year, students participating in SIWI gained an average of 1.2 grade levels as measured by the Woodcock Johnson IV.

The SIWI PD program has also resulted in notable outcomes for SIWI teachers. The randomized controlled trial demonstrated significant increases in teachers’ knowledge of writing instruction, efficacy in teaching writing, and use of evidence-based practices compared to teachers in the business-as-usual control group (a manuscript is currently in progress).

What are the next steps for your research?

Analyses of student outcomes in the efficacy trial are currently underway. In addition to analyzing the impact of SIWI on writing and language outcomes, we are also examining the impact on reading comprehension, vocabulary knowledge, handwriting, and motivation to write.

Additionally, we are investigating whether implementation fidelity of SIWI is positively associated with student outcomes. We intend to examine whether teachers with higher implementation fidelity in their second or third year of teaching SIWI demonstrate a significantly greater impact on their students’ writing and language growth.

Dr. Kimberly Wolbers is a Deaf Education Professor and Co-Director of Undergraduate Studies for the Department of Theory & Practice in Teacher Education at the University of Tennessee and Dr. Hannah Dostal is an Associate Professor of Reading Education and an advisory board member of the Aetna Chair of Writing at the University of Connecticut. This interview was produced and edited by Julianne Kasper, Virtual Student Federal Service Intern at IES and graduate student in Education Policy & Leadership at American University.

Working to Understand the Policy Process in the Development of Michigan’s Read by Grade Three Law

In recent decades, there has been an emphasis on quantitative, causal research in education policy. These methods are best suited for answering questions about the effects of a policy and whether it achieved its intended outcomes. While the question "Did it work?" remains critical, there is a need for research that also asks, "Why did it work? For whom? In what contexts?" To answer these types of questions, researchers must incorporate rigorous qualitative methods into their quantitative studies. Education research organizations like the Association for Education Finance and Policy have explicitly invited proposals using qualitative and mixed methodologies in an effort to elevate research addressing a range of critical education policy questions. Funding organizations, including IES, encourage applicants to incorporate qualitative methods into their research process. In this guest blog, Amy Cummings, Craig De Voto, and Katharine Strunk discuss how they are using qualitative methods in their evaluation of a state education policy.

 

In our IES-funded study, we use qualitative, survey, and administrative data to understand the implementation and impact of Michigan’s early literacy law—the Read by Grade Three Law. Like policies passed in 19 other states, the Read by Grade Three Law aims to improve K-3 student literacy skills and mandates retention for those who do not meet a predetermined benchmark on the state’s third-grade English language arts assessment. Although the retention component of these policies remains controversial, similar laws are under consideration in several other states, including Alaska, Kentucky, and New Mexico. Below are some of the ways that we have integrated qualitative methods in our evaluation study to better understand the policy process in the development of the Read by Grade Three Law.

Collecting qualitative sources helped us understand how the policy came to be, thereby assisting in the structure of our data collection for examining the law’s implementation and subsequent effects. In our first working paper stemming from this study, we interviewed 24 state-level stakeholders (policymakers, state department of education officials, early literacy leaders) involved in the development of the law and coded state policy documents related to early literacy to assess the similarity between Michigan’s policy and those of other states. Understanding the various components of the Law and how they ended up in the policy led us to ensure that we asked educators about their perceptions and implementation of these components in surveys that are also part of our evaluation. For example, because our interviews made clear the extent to which the inclusion of the retention component of the Law was controversial during its development, we included questions in the survey to assess educators’ perceptions and intended implementation of this component of the Law. These interviews also confirmed the importance of our plan to use state administrative retention and assessment data to evaluate the effect of retention on student literacy outcomes.

To trace the Read by Grade Three Law’s conception, development, and passage, we analyzed these qualitative data using two theories of the policy process: Multiple Streams Framework (MSF) and policy transfer. MSF says that policy issues emerge on government agendas through three streams: problem, policy, and political. When these streams join, a policy window is opened during which there is a greater opportunity for passing legislation. Meanwhile, policy transfer highlights how policies enacted in one place are often used in the development of policies in another.

We found that events in the problem and political streams created conditions ripe for the passage of an early literacy policy in Michigan:

  • A national sentiment around improving early literacy, including a retention-based third-grade literacy policy model that had been deemed successful in Florida
  • A pressing problem took shape, as evidenced by the state’s consistently below-average fourth-grade reading scores on the National Assessment of Educational Progress
  • A court case addressing persistently low test scores in a Detroit-area district
  • Previous attempts by the state to improve early literacy

As a result of these events, policy entrepreneurs—those willing to invest resources to get their preferred policy passed—took advantage of political conditions in the state and worked with policymakers to advance a retention-based third-grade literacy policy model. The figure below illustrates interviewee accounts of the Read by Grade Three Law’s development. Our policy document analysis further reveals that Michigan’s and Florida’s policies are very similar, diverging on only nine of the 50 elements we coded.

 

 

Although this study focuses on the development and passage of Michigan’s early literacy law, our findings highlight both practical and theoretical elements of the policy process that can be useful to researchers and policymakers. To this end, we show how particular conditions, coupled with the efforts of policy entrepreneurs, spurred Michigan’s consideration of such a policy. It is conceivable that many state education policies beyond early literacy have taken shape under similar circumstances: a national sentiment combined with influential brokers outside government. In this way, our mixed-methods study provides a practical model of what elements might manifest to enact policy change more broadly.

From a theoretical standpoint, this research also extends our understanding of the policy process by showing that MSF and the theory of policy transfer can work together. We learned that policy entrepreneurs can play a vital role in transferring policy from one place to another by capitalizing on conditions in a target location and coming with a specific policy proposal at the ready.

There is, of course, more to be learned about the intersection between different theories of the policy process, as well as how external organizations as opposed to individuals operate as policy entrepreneurs. As the number of education advocacy organizations continues to grow and these groups become increasingly active in shaping policy, this will be an exciting avenue for researchers to continue to explore.

This study is just one example of how qualitative research can be used in education policy research and shows how engaging in such work can be both practically and theoretically valuable. The most comprehensive evaluations will use different methodologies in concert with one another to understand education policies, because ultimately, how policies are conceptualized and developed has important implications for their effectiveness.


Amy Cummings is an education policy PhD student and graduate research assistant at the Education Policy Innovation Collaborative (EPIC) at Michigan State University (MSU).

Craig De Voto is a visiting research assistant professor in the Learning Sciences Research Institute at the University of Illinois at Chicago and an EPIC affiliated researcher.

Katharine O. Strunk is the faculty director of EPIC, the Clifford E. Erickson Distinguished Chair in Education, and a professor of education policy and by courtesy economics at MSU.

Towards a Better Understanding of Middle-Schoolers’ Argumentation Skills

What is the difference between fact and opinion? How do you find relevant evidence and use it to support a position? Every day, teachers help students practice these skills by fostering critical discussions, a form of argumentation that encourages students to use reasoning to resolve differences of opinion.

In their IES-funded study, Exploring and Assessing the Development of Students' Argumentation Skills, Yi Song and her colleagues are uncovering activities (both teacher led and technology supported) that can improve middle-school students’ ability to generate better oral and written arguments.

This project began in 2019 and is working in classrooms and with teachers and students. The researchers have created a series of videos that describe their work. In this series, Dr. Song and her co-PIs, Dr. Ralph Ferretti and Dr. John Sabatini, discuss why the project is important to education, how they will conduct the research plan, and how educators can apply what they are learning in classrooms.

 

 


For questions and more information, contact Meredith Larson (Meredith.Larson@ed.gov), Program Officer, NCER

Better Reading Comprehension When You Know That You Don’t Know

The more you already know about a topic, the easier it may be to comprehend and learn from texts about that topic. But knowledge has to start somewhere. So how can we help students learn from texts when they may have low background knowledge?

In their exploratory study, researchers from ETS found that lack of knowledge is not necessarily a barrier to comprehension. Rather, they suggest that students who can identify their lack of background knowledge are more likely to comprehend and learn new information than students who do not acknowledge they lack background knowledge. In other words, knowing that you might not know may lead to better outcomes.

To determine the role of background knowledge, the researchers pretested middle and high school students’ background knowledge through questions related to topics the students may have some but not complete knowledge of, such as ecology, immigration, and wind power. The pretest included an “I don’t know” option, along with correct and incorrect responses.

Students then took a scenario-based assessment in which they read multiple sources about each of the topics. This type of assessment mirrors real-world learning by encouraging readers to build their own interpretations of a topic, which helps researchers determine whether students comprehend what they read.

They found that students who selected “I don’t know” when answering background knowledge questions had better understanding of the content than those who provided wrong answers on these questions. In fact, students who selected “I don’t know” rather than answering incorrectly were nearly three times as likely to learn from sources that provided the correct information as students who had answered the pretest incorrectly. Students who selected “I don’t know” may also learn more than students who had a comparable level of weak background knowledge. The researchers suggest that the “I don’t know” readers may have set different reading goals prior to engaging with the sources than those who guessed incorrectly.

 

Possible Implications for Teaching and Learning

The results from this work support the idea that having and building background knowledge is key. Thus, teachers may want to assess existing knowledge and address knowledge gaps prior to instruction.

Teachers may also want to provide an “I don’t know” option or options that allow students to rate their level of certainty. Doing so may help teachers distinguish students who recognize their own gaps in knowledge from those who may not be aware that they are wrong or that they simply do not know. This latter group of students may need more help in determining the accuracy of their judgments or may have incorrect knowledge that could interfere with learning.

The researchers further suggest that teachers may want to go beyond the role of background knowledge by teaching students how to set appropriate reading goals and use strategic reading approaches to learn new facts or correct existing misunderstandings.

 


The research reported here was conducted under NCER grant R305A150176: What Types of Knowledge Matters for What Types of Comprehension? Exploring the Role of Background Knowledge on Students' Ability to Learn from Multiple Texts.

This blog was written by Dr. Meredith Larson. Contact her for more information about this project.

Building a Reading Comprehension Measure for Postsecondary Students

Assessments of both U.S. adults and 12th-grade students indicate that millions of learners may have significant reading skill gaps. Because these students may lack the fundamental reading and comprehension skills needed to thrive in college, postsecondary institutions need valid reading measures that accurately determine the source of student difficulties.

An IES-funded research team is developing and validating such a measure: Multiple-choice Online Causal Comprehension Assessment for Postsecondary Students (MOCCA-College). MOCCA-College aims to assess the reading comprehension abilities of postsecondary students and distinguish between common comprehension difficulties. This information could help students, faculty, and programs better determine who might need what type of additional reading instruction.

The current version of MOCCA-College is still being validated, but it already contains components that may interest postsecondary institutions, faculty, and students. For example, it suggests classroom interventions based on a student’s results and allows for different user roles, such as student, faculty member, or administrator. 

Results from pilot work indicate that MOCCA-College can reliably distinguish between postsecondary readers with strong comprehension skills and those who may need to build these skills. MOCCA-College uses both narrative and expository texts to determine student performance. The results indicate that both types of passages measure a single dimension of ability, though narrative passages may more easily and accurately discriminate between those who have good comprehension skills and those who do not.

This finding is in keeping with meta-analysis work that finds a similar pattern for narrative and expository items. Narrative passages appear to consistently measure inferential comprehension more accurately than expository passages for both younger and older readers. This holds even after matching texts for readability and demands on background knowledge.

As the researchers continue to validate MOCCA-College, we will learn more about the needs of postsecondary readers, as well as how to identify and address those needs.

 


The research and articles referenced above are supported through NCER grant R305A180417: Multiple-choice Online Causal Comprehension Assessment for Postsecondary Students (MOCCA-College).

Dr. Meredith Larson, program officer for postsecondary and adult education, wrote this blog. Contact her at Meredith.Larson@ed.gov for additional information about MOCCA-College and postsecondary teaching and learning research.