IES Blog

Institute of Education Sciences

Trading the Number 2 Pencil for 2.0 Technology

Although traditional pencil and paper tests provide good information for many purposes, technology presents the opportunity to assess students on tasks that better elicit the real-world skills called for by college and career standards. IES supports a number of researchers and developers who are using technology to develop better assessments, both through grants and through the Small Business Innovation Research program.

A screenshot of the GISA assessment intro page

One example of the power of technology to support innovative assessment is the Global, Integrated, Scenario-based Assessment (known as ‘GISA’), developed by John Sabatini and Tenaha O’Reilly at the Educational Testing Service as part of a grant supported by the Reading for Understanding Research Initiative.

Each GISA scenario is structured to resemble a timely, real-world situation. For example, one scenario begins by explaining that the class has been asked to create a website on green schools. The student is assigned the task of working with several students (represented by computer avatars) to create the website. In working through the scenario, the student engages in activities that are scaffolded to support students in summarizing information, completing a graphic organizer, and collaborating to evaluate whether statements are facts or opinions. The scenario provides a measure of each student’s ability to learn from text by assessing his or her knowledge of green schools before and after completing the scenario. This scenario is available on the ETS website along with more information about the principles on which GISA was built.

Karen Douglas, of the National Center for Education Research, recently spoke to Dr. Sabatini and Dr. O’Reilly on the role of technology in creating GISA, what users think of it, and their plans for continuing to develop technology-based assessments.

How did the use of technology contribute to the design of GISA?

Technological delivery creates many opportunities over more traditional paper and pencil test designs. On the efficiency side of the argument, items and tasks can be delivered over the internet in a standardized way, and there are obvious advantages for automated scoring. However, the real advantage has to do with both the control over the test environment and what can be assessed. We can more effectively simulate the digital environments that students use in school, in leisure, and, later, in the workforce. GISA uses scenario-based assessment to deliver items and tasks. During a scenario-based assessment, students are given a plausible reason for reading a collection of thematically related materials. The purpose defines what is important to focus on as students work towards a larger goal. The materials are diverse and may reflect different perspectives and quality of information.

Screenshot of a GISA forum on Green Schools

The student not only needs to understand these materials but also needs to evaluate and integrate them as they solve problems, make decisions, or apply what they learn to new situations. This design is not only more like the activities that occur in school, but also affords the opportunity for engaging students in deeper thinking. GISA also includes simulated students that may support or scaffold the test taker's understanding with good habits of mind such as the use of reading strategies. Items are sequenced to build up test takers’ understanding and to examine what parts of a more complex task students can or cannot do. In this way, the assessment serves as a model for learning while simultaneously assessing reading. Traditionally, the areas of instruction and assessment have not been integrated in a seamless manner.

What evidence do you have that GISA provides useful information about reading skills?

We have a lot more research to conduct, but thus far we have been able to create a new technology-delivered assessment that updates the aspects of reading that are measured and introduces a variety of new features.

Despite the novel interface, items, tasks, and format, students are able to understand what is expected of them. Our analyses indicate the test properties are good and that students can do a range of tasks that were previously untested in traditional assessments. While students may still be developing their skills on more complex tasks, there is evidence they can do many of the components that feed into them. In this way the assessment may be more instructionally relevant.

Informally, we have received positive feedback on GISA from both teachers and students. Teachers view the assessment as better matching the types of activities they teach in the classroom, while students seem to enjoy the more realistic purpose for reading, the more relevant materials, and the use of simulated peers. 

What role do you think technology will play in future efforts to create better assessments?

We believe technology will play a greater role in how assessments are designed and delivered. Being able to provide feedback to students and better match the test to student needs are some areas where future assessments will drive innovation. More interactive formats, such as intelligent tutoring and gaming, will also grow over time. With new forms of technology available, the possibilities for meeting students’ educational needs increase dramatically.

What’s next for GISA?

We are using GISA in two additional grants. In one grant, we leverage the GISA designs for use with adults, a group for which there are few viable assessments. In the other grant we are using GISA to get a better understanding of how background knowledge affects reading comprehension.

For more information about the Reading for Understanding Research Initiative, read this post on the IES blog.

By Karen Douglas, Education Research Analyst, NCER, who oversees the Reading for Understanding Research Initiative

A Conversation about Reading for Understanding

There is a lot of discussion these days about the importance of bringing many voices to the table when designing, implementing, and interpreting research studies. The usefulness of educational research is only enhanced when teachers, policymakers, and researchers work together to design studies and understand the findings.

The Educational Testing Service and the Council of Chief State School Officers (CCSSO) sponsored such an opportunity on May 18 and 19 by bringing together over 150 researchers, practitioners, instructional specialists, federal staff, and policy makers to reflect on the efforts of the Reading for Understanding Research Initiative (RfU).

Researchers from the six RfU teams shared what they learned about how to work together to design and implement new approaches and improve reading for understanding from PreK through high school. They also discussed what needs to be done to build on this effort.  (Check out this recent blog post to learn how some researchers are building on the work of the RfU Initiative.)

Sessions at the meeting were designed to promote discussion among attendees in order to take advantage of different viewpoints and better understand the implications of the RfU findings and their relevance for practice and policy. The meeting culminated with two panels tasked with summarizing key points from these discussions. One panel focused on implications for research, and the other on policy implications. The presence of practitioners from the field, researchers, and policymakers led to a well-rounded conversation about how we can build upon the research of RfU and put it into action in the classroom.

The agenda, presentation slides, and webcasts of the closing panels are now available for viewing on the meeting website.  

Written by Karen Douglas, Education Research Analyst, NCER

What’s Next for the Reading for Understanding Research Initiative?

After years of intense collaboration and research, the Reading for Understanding (RfU) Research Initiative is coming to an end. But the initiative’s work continues through recently announced IES-funded grants.

Over six years, research teams in the RfU network designed and tested new interventions that aim to improve reading comprehension in students at all grade levels, and developed new measures of reading comprehension and the component skills that support it. The initiative led to several new and important findings. In the coming years, several teams will build on that work through new research projects funded by IES’ National Center for Education Research (NCER) and National Center for Special Education Research (NCSER).

During the RfU initiative, the Promoting Adolescents’ Comprehension of Text (PACT) team found positive effects in improving the content-area reading comprehension of middle school students. The PACT intervention uses social studies content to engage students and teach them to build coherent representations of the ideas in texts. Through a new grant from NCER, the PACT team will be testing the effectiveness of the intervention in middle school social studies classrooms in eight states.

Another group of researchers from the PACT team is starting a new project with funding from NCSER to design and test a technology-based intervention aimed at improving how middle school students with reading disabilities make inferences while reading.

The RfU assessment team is also launching a new NCER-funded project to develop a digital assessment appropriate for adults, in particular those reading between the 3rd- to 8th-grade levels. Building on the Global, Integrated Scenario-Based Assessment (GISA) developed in RfU, the team intends for this new assessment to help determine an adult reader's strengths and weaknesses, inform instruction, and improve programs and institutional accountability. In addition, this team is using assessment items developed with RfU funding to explore the relationship between high school students' background knowledge and their reading comprehension.  

Finally, the Florida State University RfU team is continuing to explore which combination of interventions will improve the early language skills that are foundational to mastery of reading. In this new project, the researchers will examine the relative efficacy and sustained impacts of a language and vocabulary intervention for prekindergarten and kindergarten students, with variations on when and how long the intervention is used.

Written by Elizabeth Albro, Associate Commissioner, NCER

When It’s Good to Talk in Class

Most people remember being told not to talk in class or risk a trip to the principal’s office or a note sent home. But researchers in the Reading for Understanding Research Initiative (RfU) want students to talk in class as a way to improve reading comprehension.

Five research teams in the RfU network have designed and tested new interventions intended to provide a strong foundation for reading comprehension in students from pre-kindergarten through high school. And promoting high quality language use and talk among students is a central feature of many of these interventions. The goal is to improve reading outcomes by building students’ understanding of rich syntax and academic language to express and evaluate complex ideas.

RfU researchers have conducted studies in 29 states and interventions developed by the RfU network have been tested for efficacy with over 30,000 students (see the chart to the right for more information on the grantees and the map below to see where they conducted research).

While findings from these studies are still forthcoming, some interventions already show promise for improving reading for understanding and/or its supporting skills. New assessments have been field-tested with over 300,000 students across the country, and these field tests have demonstrated the assessments’ capacity to collect valid and useful information for teachers, schools, and researchers.

Support for informative and instructional talk by students was provided in a variety of ways across different academic areas, including social studies, science, and English language arts classes. Some teams developed new classroom activities to structure whole-class discussion through student debate on current topics of interest. Using a program like Word Generation, students discuss a focal question to stimulate various opinions on current topics, such as ‘Should students be required to wear school uniforms?’ or ‘Are green technologies worth the investment?’ In other interventions, such as PACT, students spend time talking in pairs or small groups to reinforce a new concept or idea.

Teachers are understandably concerned about how to manage a classroom in which students are talking. As part of RfU, curricula and materials were created to help teachers to improve their skills in managing constructive student talk, and several teams also provided extensive professional development for teachers.

Attention to the importance of student talk was also evident in GISA, the computer-based assessment developed by ETS that uses a scenario-based approach. Rather than talking with their peers during the assessment, students interact with avatars on a task that simulates a realistic classroom-based activity.

Using student talk to improve reading comprehension is just one of many supports that have been explored by the RfU teams in their extensive body of work over the past six years. The RfU teams provided an update on their research during an event in May. You can watch a webcast of the event until July 31, 2016.

Visit the IES website to see a detailed agenda for the May event and to learn more about the work of the Reading for Understanding Research Initiative. In addition to providing an overview of the work, the abstracts include links to RfU team websites and many of these have examples of their materials. Materials for the Word Generation and PACT interventions are available for free on their websites, and several other RfU grantees will be making their materials freely available in the coming year.

Written by Karen Douglas, project lead, Reading for Understanding Research Initiative, National Center for Education Research