IES Blog

Institute of Education Sciences

Trading the Number 2 Pencil for 2.0 Technology

Although traditional pencil-and-paper tests provide good information for many purposes, technology presents the opportunity to assess students on tasks that better elicit the real-world skills called for by college and career standards. IES supports a number of researchers and developers who are using technology to develop better assessments, both through research grants and through the Small Business Innovation Research program.

A screenshot of the GISA assessment intro page

One example of the power of technology to support innovative assessment is the Global, Integrated, Scenario-based Assessment (known as ‘GISA’), developed by John Sabatini and Tenaha O’Reilly at the Educational Testing Service as part of a grant supported by the Reading for Understanding Research Initiative.

Each GISA scenario is structured to resemble a timely, real-world situation. For example, one scenario begins by explaining that the class has been asked to create a website on green schools. The student is assigned the task of working with several other students (represented by computer avatars) to create the website. In working through the scenario, the student engages in scaffolded activities such as summarizing information, completing a graphic organizer, and collaborating to evaluate whether statements are facts or opinions. The scenario provides a measure of each student’s ability to learn from text by assessing his or her knowledge of green schools before and after completing the scenario. This scenario is available on the ETS website, along with more information about the principles on which GISA was built.

Karen Douglas, of the National Center for Education Research, recently spoke with Dr. Sabatini and Dr. O’Reilly about the role of technology in creating GISA, what users think of it, and their plans for continuing to develop technology-based assessments.

How did the use of technology contribute to the design of GISA?

Technological delivery offers many advantages over more traditional paper-and-pencil test designs. On the efficiency side, items and tasks can be delivered over the internet in a standardized way, and there are obvious advantages for automated scoring. The real advantage, however, has to do with both the control over the test environment and what can be assessed. We can more effectively simulate the digital environments that students use in school, at leisure, and, later, in the workforce. GISA uses scenario-based assessment to deliver items and tasks. During a scenario-based assessment, students are given a plausible reason for reading a collection of thematically related materials. The purpose defines what is important to focus on as students work toward a larger goal. The materials are diverse and may reflect different perspectives and quality of information.

Screenshot of a GISA forum on Green Schools

Students not only need to understand these materials but also need to evaluate and integrate them as they solve problems, make decisions, or apply what they learn to new situations. This design is not only more like the activities that occur in school, but also affords the opportunity to engage students in deeper thinking. GISA also includes simulated students that may support or scaffold the test taker’s understanding by modeling good habits of mind, such as the use of reading strategies. Items are sequenced to build up test takers’ understanding and to examine which parts of a more complex task students can or cannot do. In this way, the assessment serves as a model for learning while simultaneously assessing reading. Traditionally, instruction and assessment have not been integrated in a seamless manner.

What evidence do you have that GISA provides useful information about reading skills?

We have a lot more research to conduct, but thus far we have been able to create a new technology-delivered assessment that updates the aspects of reading that are measured and introduces a variety of new features.

Despite the novel interface, items, tasks, and format, students are able to understand what is expected of them. Our analyses indicate that the test properties are good and that students can do a range of tasks that went untested in traditional assessments. While students may still be developing their skills on more complex tasks, there is evidence that they can do many of the components that feed into them. In this way, the assessment may be more instructionally relevant.

Informally, we have received positive feedback on GISA from both teachers and students. Teachers view the assessment as better matching the types of activities they teach in the classroom, while students seem to enjoy the more realistic purpose for reading, the more relevant materials, and the use of simulated peers. 

What role do you think technology will play in future efforts to create better assessments?

We believe technology will play a greater role in how assessments are designed and delivered. Providing feedback to students and better matching the test to student needs are two areas where future assessments will drive innovation. More interactive formats, such as intelligent tutoring and gaming, will also grow over time. With new forms of technology available, the possibilities for meeting students’ educational needs increase dramatically.

What’s next for GISA?

We are using GISA in two additional grants. In one grant, we leverage the GISA designs for use with adults, a group for which there are few viable assessments. In the other, we are using GISA to gain a better understanding of how background knowledge affects reading comprehension.

For more information about the Reading for Understanding Research Initiative, read this post on the IES blog.

By Karen Douglas, Education Research Analyst, NCER, who oversees the Reading for Understanding Research Initiative

An IES-funded “Must Read” on Writing and Reading Disabilities

A paper based on an IES-funded grant has been recognized as a “must read” by the Council for Learning Disabilities.

IES-funded researcher Stephen Hooper and his colleagues were recently recognized by the Council for their paper, Writing disabilities and reading disabilities in elementary school students: Rates of co-occurrence and cognitive burden (PDF). The paper was written by Lara-Jeane Costa, Crystal Edwards, and Dr. Hooper and published in Learning Disability Quarterly. Each year, the Council for Learning Disabilities recognizes outstanding work published in its journals; it selected this paper as one of two Must Read pieces for 2016. The authors will present the paper at the Council’s annual conference in San Antonio this week (October 13-14, 2016).

This paper was funded through a grant from the National Center for Education Research (NCER) to examine written language development and writing problems, as well as the efficacy of an intervention aimed at improving early writing skills. The paper found that the rate of students with both writing and reading disabilities increased from first to fourth grade, and that these students showed lower ability in language, fine motor skills, and memory compared with students who had neither disability and with students who had only a writing disability.

The team continues its IES-funded work by examining the efficacy of the Self-Regulated Strategy Development intervention on struggling middle school writers’ academic outcomes.

Written by Becky McGill-Wilkinson, Education Research Analyst, NCER

Pinning Down the Use of Research in Education

There are plenty of great ideas to be found on Pinterest: recipes for no-bake, allergen-friendly cookies; tips for taking better photos; and suggestions for great vacation spots in Greece. Lots of teachers use Pinterest as a way to share classroom ideas and engaging lessons. But where do teachers, education leaders, and decision makers turn when they need evidence-based instructional practices that may help struggling readers, or when they want to use research to address other educational challenges?

Since 2014, the National Center for Education Research (NCER) has funded two National Research and Development Centers on Knowledge Utilization in an effort to find out. They are well on their way to answering questions about how and why teachers and schools use—or do not use—research in their decision making. They are also exploring ways the research community can increase interest in and actual use of research-based practices.

We are beginning to see the first results of their efforts to answer two key questions. First, are educators and schools using education research in their decision making, and if they aren’t, why not? Second, if educators are not using evidence as part of their work, what can the research community do to make it more likely that they will?

The National Center for Research in Policy and Practice (NCRPP) was awarded to the University of Colorado Boulder and is led by Principal Investigator Bill Penuel (University of Colorado Boulder) and Co-Principal Investigators Derek Briggs (University of Colorado Boulder), Jon Fullerton (Harvard University), Heather Hill (Harvard University), Cynthia Coburn (Northwestern University), and Jim Spillane (Northwestern University).

NCRPP recently released its first technical report, which covers the descriptive results from its nationally representative survey of school and district leaders. The report shows that school and district leaders do use research evidence for activities such as designing professional development, expanding their understanding of specific issues, or convincing others to agree with a particular point of view on an education issue. Instrumental uses of research, in which district leaders apply research to guide or inform a specific decision, were the most commonly reported. Overall, school and district leaders were positive about the relevance and value of research for practice. When asked which specific piece of research was most useful, school and district leaders named books, policy reports, and peer-reviewed journal articles. You can get more information on the center’s website, http://ncrpp.org. They are also very active on Twitter.

The Center for Research Use in Education (CRUE) was awarded to the University of Delaware and is led by Principal Investigator Henry May (University of Delaware) and Co-Principal Investigator Liz Farley-Ripple (University of Delaware). The team is currently drafting its measures of research use, which will include one set of surveys for researchers and another for practitioners. The researchers are especially interested in understanding which factors contribute to deep engagement with research evidence, and how gaps in perceptions and values between researchers and practitioners may be associated with the frequency of deep research use. You can learn more about the work of CRUE on its website, http://www.research4schools.org/, and follow the center on Twitter.

While the Centers were tasked with tapping into use of research evidence specifically, both are interested in understanding all sources of evidence that practitioners use, whether it’s from peer-reviewed research articles, the What Works Clearinghouse, a friend at another school, or even Pinterest. There is certainly a wealth of research evidence to support specific instructional practices and programs, and these two Centers will begin to provide answers to questions about how teachers and leaders are using this research.

So, it’s possible that, down the road, Pinterest will become a great place to find both homemade, nontoxic finger paint and evidence-based practices for improving education.

Written by Becky McGill-Wilkinson, NCER Program Officer for the Knowledge Utilization Research and Development Centers

Recognizing Our Outstanding Predoctoral Fellows

Each year, the Institute of Education Sciences (IES) recognizes an outstanding fellow from its Predoctoral Interdisciplinary Research Training Programs in the Education Sciences for academic accomplishments and contributions to education research. For the first time, IES has selected joint recipients for the 2015 award: Meghan McCormick and Eric Taylor. They will receive their awards and present their research at the annual IES Principal Investigators meeting in Washington, D.C. in December 2016.

Meghan completed her Ph.D. in Applied Psychology at New York University and wrote her dissertation on the efficacy of INSIGHTS, a social-emotional learning intervention aimed at improving low-income urban students’ academic achievement. She is currently a research associate at MDRC. Eric completed his Ph.D. in the Economics of Education at Stanford University and wrote his dissertation on the contributions of the quality and quantity of classroom instruction to student learning. Eric is currently an assistant professor at the Harvard University Graduate School of Education.

We asked Meghan and Eric how participating in an IES predoctoral training program helped their development as researchers.  For more information about the IES predoctoral training program, visit our website.

Meghan McCormick

Having the opportunity to be part of the IES Predoctoral Training Program helped me to develop a set of theoretical, quantitative, and practical skills that I would not have had the opportunity to develop otherwise. 

I was drawn to attend NYU (New York University) for my doctoral studies specifically because the school of education at NYU offered the (IES) predoctoral training program, in addition to hosting a core set of faculty with research interests very much aligned with my own. In my first year of graduate school, I quickly became aware that being a part of the IES program allowed me the freedom to study with an interdisciplinary set of scholars who could support multiple components of my training through a diverse set of experiences.

For example, in my work with Elise Cappella, Erin O’Connor, and Sandee McClowry, I was able to learn about the logistics of implementing a cluster-randomized trial across a broad set of schools and conducting impact analyses to evaluate the efficacy of a social-emotional learning program called INSIGHTS. My experience working with Jim Kemple and Lori Nathanson at the Research Alliance for New York City Schools showed me how to use research in a way that was responsive to the needs and goals of education policy makers. Quantitative coursework with Jennifer Hill and Sharon Weinberg helped me to apply rigorous quantitative methods to the data that were collected in schools, and to think concretely about the implications of research design for my future work. Coursework with developmental psychologists conducting policy-relevant research, such as Pamela Morris and Larry Aber, helped me to apply comprehensive theoretical framing when examining research questions of interest and interpreting results. In addition, I have always been primarily interested in conducting interdisciplinary research that is responsive to policy and practice. My dissertation research grew out of my interest in learning about interdisciplinary methods for causal inference and applying them to questions about how, for whom, and under what circumstances the social-emotional learning program I helped to evaluate affected outcomes for low-income students.

Most importantly, perhaps, having been part of the IES program’s collaborative and interdisciplinary community helped me to identify the type of research I wanted to do after finishing graduate school. Primarily, I knew that I wanted to conduct policy-relevant research, using the most rigorous quantitative methods available, with a team of researchers coming from different backgrounds. This realization led me to MDRC, where I have been working with JoAnn Hsueh and other colleagues to apply the skills I gained in the predoctoral training program to new research design work that is responsive to critical questions in early education policy and practice right now. I feel prepared for this new work given the opportunities that the IES program afforded me over the last five years.

Eric Taylor

I would emphasize two benefits. First, the IES program at Stanford helped me create and strengthen professional relationships with other education researchers and practitioners. Those relationships provided important opportunities to learn skills in ways that could not happen in the classroom, while complementing the excellent classroom instruction. The new relationships were diverse: other graduate students in different disciplines, Stanford faculty and faculty at other institutions, and, critically, practitioners and policy makers. For example, supported by my fellowship, I joined faculty at (the University of) Michigan and Columbia (University) working with the DC Public Schools to improve teacher applicant screening and hiring.

Second, those relationships, combined with the financial support of the fellowship, made it possible to work on new and timely research projects. During my time as an IES predoc, collaborators at Brown and I started a researcher-practitioner partnership with colleagues at the Tennessee Department of Education. The resulting work has taught me much about the day-to-day realities of school policy making and management, and how research can and cannot help. The partnership with Tennessee also grew into a five-year grant from IES, which began last year, to study state policy and teacher development through evaluation.

In short, I am certain my career is much further along today than it would have been without the IES predoc fellowship.

By Katina Stapleton, Education Research Analyst, National Center for Education Research 

A Conversation about Reading for Understanding

There is a lot of discussion these days about the importance of bringing many voices to the table when designing, implementing, and interpreting research studies. The usefulness of educational research is only enhanced when teachers, policymakers, and researchers work together to design studies and understand the findings.

The Educational Testing Service and the Council of Chief State School Officers (CCSSO) sponsored such an opportunity on May 18 and 19 by bringing together over 150 researchers, practitioners, instructional specialists, federal staff, and policy makers to reflect on the efforts of the Reading for Understanding Research Initiative (RfU).

Researchers from the six RfU teams shared what they learned about how to work together to design and implement new approaches and improve reading for understanding from PreK through high school. They also discussed what needs to be done to build on this effort.  (Check out this recent blog post to learn how some researchers are building on the work of the RfU Initiative.)

Sessions at the meeting were designed to promote discussion among attendees in order to take advantage of different viewpoints and to better understand the implications of the RfU findings and their relevance for practice and policy. The meeting culminated with two panels tasked with summarizing key points from these discussions. One panel focused on implications for research, and the other on implications for policy. The presence of practitioners from the field, researchers, and policymakers led to a well-rounded conversation about how we can build upon the RfU research and put it into action in the classroom.

The agenda, presentation slides, and webcasts of the closing panels are now available for viewing on the meeting website.  

Written by Karen Douglas, Education Research Analyst, NCER