IES Blog

Institute of Education Sciences

Trading the Number 2 Pencil for 2.0 Technology

Although traditional pencil-and-paper tests provide good information for many purposes, technology presents the opportunity to assess students on tasks that better elicit the real-world skills called for by college and career standards. IES supports a number of researchers and developers who are using technology to develop better assessments, through grants as well as the Small Business Innovation Research program.

A screenshot of the GISA assessment intro page

One example of the power of technology to support innovative assessment is the Global, Integrated, Scenario-based Assessment (known as ‘GISA’), developed by John Sabatini and Tenaha O’Reilly at the Educational Testing Service as part of a grant supported by the Reading for Understanding Research Initiative.

Each GISA scenario is structured to resemble a timely, real-world situation. For example, one scenario begins by explaining that the class has been asked to create a website on green schools. The student is assigned the task of working with several students (represented by computer avatars) to create the website. In working through the scenario, the student engages in activities that are scaffolded to support students in summarizing information, completing a graphic organizer, and collaborating to evaluate whether statements are facts or opinions. The scenario provides a measure of each student’s ability to learn from text through assessing his or her knowledge of green schools before and after completing the scenario. This scenario is available on the ETS website along with more information about the principles on which GISA was built.

Karen Douglas, of the National Center for Education Research, recently spoke to Dr. Sabatini and Dr. O’Reilly on the role of technology in creating GISA, what users think of it, and their plans for continuing to develop technology-based assessments.

How did the use of technology contribute to the design of GISA?

Technological delivery offers many advantages over traditional paper-and-pencil test designs. On the efficiency side, items and tasks can be delivered over the internet in a standardized way, and there are obvious advantages for automated scoring. However, the real advantage has to do with both control over the test environment and what can be assessed. We can more effectively simulate the digital environments that students use in school, in leisure, and, later, in the workforce. GISA uses scenario-based assessment to deliver items and tasks. During a scenario-based assessment, students are given a plausible reason for reading a collection of thematically related materials. The purpose defines what is important to focus on as students work toward a larger goal. The materials are diverse and may reflect different perspectives and varying quality of information.

Screenshot of a GISA forum on Green Schools

The student not only needs to understand these materials but also needs to evaluate and integrate them as they solve problems, make decisions, or apply what they learn to new situations. This design is not only more like the activities that occur in school, but also affords the opportunity for engaging students in deeper thinking. GISA also includes simulated students that may support or scaffold the test taker's understanding with good habits of mind such as the use of reading strategies. Items are sequenced to build up test takers’ understanding and to examine what parts of a more complex task students can or cannot do. In this way, the assessment serves as a model for learning while simultaneously assessing reading. Traditionally, the areas of instruction and assessment have not been integrated in a seamless manner.

What evidence do you have that GISA provides useful information about reading skills?

We have a lot more research to conduct, but thus far we have been able to create a new technology-delivered assessment that updates the aspects of reading that are measured and introduces a variety of new features.

Despite the novel interface, items, tasks, and format, students are able to understand what is expected of them. Our analyses indicate the test properties are good and that students can do a range of tasks that were previously untested in traditional assessments. While students may still be developing their skills on more complex tasks, there is evidence they can do many of the components that feed into those tasks. In this way the assessment may be more instructionally relevant.

Informally, we have received positive feedback on GISA from both teachers and students. Teachers view the assessment as better matching the types of activities they teach in the classroom, while students seem to enjoy the more realistic purpose for reading, the more relevant materials, and the use of simulated peers. 

What role do you think technology will play in future efforts to create better assessments?

We believe technology will play a greater role in how assessments are designed and delivered. Being able to provide feedback to students and better match the test to student needs are some areas where future assessments will drive innovation. More interactive formats, such as intelligent tutoring and gaming, will also grow over time. With new forms of technology available, the possibilities for meeting students’ educational needs increase dramatically.

What’s next for GISA?

We are using GISA in two additional grants. In one grant, we leverage the GISA designs for use with adults, a group for which there are few viable assessments. In the other grant we are using GISA to get a better understanding of how background knowledge affects reading comprehension.

For more information about the Reading for Understanding Research Initiative, read this post on the IES blog.

By Karen Douglas, Education Research Analyst, NCER, who oversees the Reading for Understanding Research Initiative

Pinning Down the Use of Research in Education

There are plenty of great ideas to be found on Pinterest: recipes for no-bake, allergen-friendly cookies; tips for taking better photos; and suggestions for great vacation spots in Greece. Lots of teachers use Pinterest as a way to share classroom ideas and engaging lessons. But where do teachers, education leaders and decision makers turn when they need evidence-based instructional practices that may work to help struggling readers, or want to use research to address other educational challenges?

Since 2014, the National Center for Education Research (NCER) has funded two National Research and Development Centers on Knowledge Utilization in an effort to find out. They are well on their way to answering questions about how and why teachers and schools use—or do not use—research in their decision making. They are also exploring ways the research community can increase interest in and actual use of research-based practices.

We are beginning to see the first results of their efforts to answer two key questions. First, are educators and schools using education research in their decision making, and if they aren’t, why not? The second question is: If educators are not using evidence as a part of their work, what can the research community do to make it more likely they will?

The National Center for Research in Policy and Practice (NCRPP) was awarded to the University of Colorado Boulder and is led by Principal Investigator Bill Penuel (University of Colorado Boulder), and Co-Principal Investigators Derek Briggs (University of Colorado Boulder), Jon Fullerton (Harvard University), Heather Hill (Harvard University), Cynthia Coburn (Northwestern University), and Jim Spillane (Northwestern University).

NCRPP has recently released its first technical report, which covers the descriptive results from its nationally representative survey of school and district leaders. Results from the report show that school and district leaders do use research evidence for activities such as designing professional development, expanding their understanding of specific issues, or convincing others to agree with a particular point of view on an education issue. Instrumental uses of research, in which district leaders apply research to guide or inform a specific decision, were most commonly reported. Overall, school and district leaders were positive about the relevance and value of research for practice. When asked to report what specific piece of research was most useful, school and district leaders named books, policy reports, and peer-reviewed journal articles. You can get more information on the center's website, http://ncrpp.org. They are also very active on Twitter.

The Center for Research Use in Education (CRUE) was awarded to the University of Delaware and is led by Principal Investigator Henry May (University of Delaware) and Co-Principal Investigator Liz Farley-Ripple (University of Delaware). This team is currently drafting its measures of research use, which will include a set of surveys for researchers and another set for practitioners. They are especially interested in understanding which factors contribute to deep engagement with research evidence, and how gaps in perceptions and values between researchers and practitioners may be associated with the frequency of deep research use. You can learn more about the work of CRUE on their website, http://www.research4schools.org/, and follow them on Twitter.

While the Centers were tasked with tapping into use of research evidence specifically, both are interested in understanding all sources of evidence that practitioners use, whether it’s from peer-reviewed research articles, the What Works Clearinghouse, a friend at another school, or even Pinterest. There is certainly a wealth of research evidence to support specific instructional practices and programs, and these two Centers will begin to provide answers to questions about how teachers and leaders are using this research.

So, it’s possible that, down the road, Pinterest will become a great place for homemade, nontoxic finger paint and evidence-based practices for improving education.

Written by Becky McGill-Wilkinson, NCER Program Officer for the Knowledge Utilization Research and Development Centers

C-SAIL: Studying the Impact of College- and Career-Readiness Standards

The nationwide effort to implement college- and career-ready standards is designed to better prepare students for success after high school, whether that means attending a postsecondary institution, entering the work force, or some combination of both. But there is little understanding about how these standards have been implemented across the country or the full impact they are having on student outcomes.  

To fill that void, the Institute of Education Sciences (IES) funded a new five-year research center, the Center on Standards, Alignment, Instruction, and Learning (C-SAIL). The center is studying the implementation of college- and career-ready standards and assessing how the standards are related to student outcomes. The center is also developing and testing an intervention that supports standards-aligned instruction.

Andy Porter (pictured right), of the University of Pennsylvania’s Graduate School of Education, is the director of C-SAIL and recently spoke with James Benson, the IES project officer for the center. Here is an edited version of that conversation.

You have been studying education standards for over 30 years. What motivated you to assemble a team of researchers and state partners to study college- and career-readiness standards?

Standards-based reform is in a new and promising place, with standards that might be rigorous enough to close achievement gaps that advocates have been fighting to narrow for the last 30 years. And with so many states implementing new standards, researchers have an unprecedented opportunity to learn how standards-based reform is best done. We hypothesize that standards-based reform has produced only modest effects thus far largely because those reforms stalled at the classroom door, so a focus of the Center will be how implementation is achieved and supported among teachers.

What are the main projects within the Center, and what are a few of the key questions that they are currently addressing?

We have four main projects. The first, an implementation study, asks, “How are state, district, and school-level educators making sense of the new standards, and what kinds of guidance and support are available to them?” We’re comparing and contrasting implementation approaches in four states—Kentucky, Massachusetts, Ohio, and Texas. In addition to reviewing state policy documents, we’re surveying approximately 280 district administrators, 1,120 principals, and 6,720 teachers across the same four states, giving special attention to the experiences of English language learners and students with disabilities.

The second project is a longitudinal study that asks, “How are college- and career-readiness standards impacting student outcomes across all 50 states?” and “How are English language learners and students with disabilities affected by the new standards?” We’re analyzing data from the National Assessment of Educational Progress (NAEP) and other sources to estimate the effects of college- and career-readiness standards on student achievement, high school completion, and college enrollment. Specifically, we’re examining whether implementing challenging state academic standards led to larger improvements in student outcomes in states with lower prior standards than in states with higher prior standards.

The third project is the Feedback on Alignment and Support for Teachers (FAST) intervention study, in which we are building an original intervention designed to assist teachers in providing instruction aligned to their state’s standards. FAST features real-time, online, personalized feedback for teachers, an off-site coach to assist teachers in understanding and applying aligned materials, and school-level collaborative academic study teams in each school.

The fourth project is a measurement study to determine the extent to which instruction aligns with college- and career-readiness standards. C-SAIL is developing new tools to assess alignment between teachers' instruction and state standards in English language arts and math.

How do you envision working with your partner states in the next few years? How do you plan to communicate with states beyond those partnering with the Center?

We’ve already collaborated with our partner states–Kentucky, Massachusetts, Ohio, and Texas–on our research agenda, and the chief state school officer from each state, plus a designee of their choice, sits on our advisory board. Additionally, we’re currently working with our partner states on our implementation study and plan to make our first findings this summer on effective implementation strategies immediately available to them.

All states, however, will be able to follow our research progress and access our findings in myriad ways, including through our website (pictured left). Our Fact Center features downloadable information sheets and the C-SAIL blog offers insights from our researchers and network of experts. We also invite practitioners, policymakers, parents and teachers to stay up-to-date on C-SAIL activities by subscribing to our newsletter, following us on Twitter, or liking us on Facebook.

Looking five years into the future, when the Center is finishing its work, what do you hope to understand about college- and career-readiness standards that we do not know now?

Through our implementation study, we will have documented how states are implementing new, challenging state academic standards; how the standards affect teacher instruction; what supports are most valuable for states, districts, and schools; and, how the new standards impact English language learners and students with disabilities.

Through our longitudinal study, we will have combined 50-state NAEP data with high school graduation rates and college enrollment data in order to understand how new standards impact student learning and college- and career-readiness.

Through our FAST Intervention, we will have created and made available new tools for teachers to monitor in real time how well aligned the content of their enacted curriculum is to their states’ college- and career-readiness standards in ELA and math.

Finally, we will have led policymakers, practitioners, and researchers in a national discussion of our findings and their implications for realizing the full effects of standards-based reform.


Researching Minority-Serving Institutions

By Katina Stapleton and James Benson, NCER Program Officers

A core problem for research on minority-serving institutions (MSIs) is that they have been defined inconsistently. Through the IES-funded Center for Analysis of Postsecondary Education and Employment (CAPSEE) at Teachers College, Columbia University, researcher Valerie Lundy-Wagner is leading two research projects that aim to provide the definitional and contextual information necessary for carrying out more comprehensive and rigorous research on MSIs and the ethnic/racial and low-income students they disproportionately serve.

We spoke with Valerie about her motivation for studying MSIs and the challenges that face MSI researchers.

How did you become interested in studying minority-serving institutions (MSIs)?

Photo: Valerie Lundy-Wagner

My interest in MSIs was brought about by two experiences in graduate school. While in a master’s program at Stanford University, I met ten African American students pursuing doctoral degrees in one of the science, technology, engineering and mathematics (STEM) fields. I quickly learned that nearly all had one thing in common—they had attended a historically Black college or university (HBCU) for their undergraduate degree. I was intrigued by this and began to wonder about the extent to which their having attended an HBCU contributed to their undergraduate success and subsequent decision to pursue higher education beyond the baccalaureate.

MSIs also came up during my first year of the doctoral program at the University of Pennsylvania, where I was assigned to a qualitative research project focused on the contribution of MSIs to the preparation of African American women in STEM fields, specifically at Spelman College (Atlanta, Georgia)—one of two all-women’s historically Black colleges. Based on my master’s research, I had some ideas on the academic, psychological, financial, and structural reasons why students failed to persist in STEM; yet, until that project, I had not seen the numbers. In preparation for our site visit, I ran the descriptive statistics on HBCUs—in particular, their Black undergraduate enrollment but also the number and percentage of degrees they conferred to African American students each year by gender. The disproportionate contribution these institutions were making was surprising. Since then I’ve been interested in learning more about how these and other MSIs (e.g., Hispanic-serving institutions, tribal colleges and universities, predominately Black institutions, Asian American and Native American Pacific Islander-serving institutions) contribute to postsecondary access and completion by minority and low-income students. Now that I am working on this CAPSEE project, I am especially interested in understanding how these institutions might be meaningfully incorporated into higher education research and into policy interventions that will help close postsecondary attainment gaps by ethnicity/race.

How are MSIs important in the postsecondary system and why should researchers and policymakers be interested in research on MSIs?

Based on the extant research, MSIs are a critical part of the postsecondary system. According to some reports, these institutions comprise 20% of all colleges and universities, and on average, ethnic/racial minority students make up 70% of their undergraduate enrollment. While poor K-12 preparation and achievement are significant factors in this reality, the fact that many MSIs are open-access institutions makes them an important site for students seeking a chance at increasing proficiency and pursuing higher education credentials. As researchers, we have the opportunity to better understand how these institutions are successfully transitioning underprepared students into high achievers, but also how their lack of resources may be contributing to less-than-ideal outcomes.

What are the greatest challenges in conducting research on MSIs?

There are at least two major challenges in conducting research on MSIs. First, the institutional status or designation of an MSI has not been consistent over time. What many people do not realize about MSIs is that some were established by the federal government to acknowledge and help address historical and ongoing inequality in access to education (e.g., historically Black colleges and universities), while others were established to address contemporary inequality (e.g., Asian American and Native American Pacific Islander-serving institutions). Second, and in a similar vein, MSIs have become a large and growing topic of higher education research, yet this body of work largely discusses institutions eligible for MSI designation and those that are actually funded under a federal program as though they are one and the same. In effect, lumping institutions that are merely eligible for MSI status together with those that have deliberately made an effort to better support an ethnic/racial minority group by applying for and receiving MSI-specific funds obscures the contribution of the federal MSI programs. This complicates researchers’ ability both to make relevant comparisons among institutions disproportionately serving minority students and to compare MSIs to non-MSIs.

Your current IES-funded research project on MSIs utilizes data from NCES’ Integrated Postsecondary Education Data System (IPEDS). What kind of questions about MSIs can IPEDS help answer?

IPEDS is a critical resource for postsecondary education research. In the descriptive analysis of this project, five annual IPEDS surveys are being used to provide basic aggregate-level information on the characteristics of postsecondary institutions and the students they serve. Some of the questions IPEDS will help answer include, “How does the percentage of undergraduates receiving Pell Grants vary among institutions eligible for and designated as MSIs? And how does this compare across MSI designations and to non-MSIs?” In effect, these questions seek to identify the extent to which there is a relationship between institutional characteristics and minority student outcomes among MSIs and non-MSIs. IPEDS will also provide me with an opportunity to clarify differences and similarities between MSIs and non-MSIs at the institution level. This is necessary for subsequently developing more rigorous research on the effect of MSI status or funding on minority student outcomes.

Given the projected increases in postsecondary enrollment of minority students, do you see MSIs becoming more or less important to the postsecondary system in the future?

More important. Despite the technical issues associated with identifying which set(s) of institutions are MSIs, the fact of the matter is that a growing number of institutions are disproportionately educating students of color and low-income students. Given the gaps in postsecondary access and attainment by ethnic/racial minority students, stakeholders in research, policy, and postsecondary institutions must better understand the challenges and the mechanisms for success occurring at these institutions, as well as how successful initiatives and reforms supporting similar students at predominately White institutions could be brought to MSIs.


Interested in learning more about this topic? CAPSEE and the Center for Minority Serving Institutions at the University of Pennsylvania recently published On Their Own Terms: Two-Year Minority Serving Institutions, a report that looks at the role of two-year Minority Serving Institutions (MSIs) in improving postsecondary access and degree completion for disadvantaged students in the United States.

Comments or questions for IES? Please send them to IESResearch@ed.gov.  

Rural Education Research: Current Investments and Future Directions

By Emily Doolittle, NCER Program Officer

In school year 2010-11, over half of all operating regular school districts and about one-third of all public schools were in rural areas, while about one-quarter of all public school students were enrolled in rural schools. (The Status of Rural Education)


About 12 million students are educated in rural settings in the United States. Teaching and learning in these settings generates unique challenges, both for the schools operating in rural areas and for the researchers who want to learn more about rural schools and their needs. Recognizing this, NCER has made targeted investments in rural education research through two of its National Education Research and Development (R&D) Centers.

The National Research Center on Rural Education Support focused on the educational challenges created by limited resources in rural settings, such as attracting and retaining appropriately and highly qualified teachers and providing them with high-quality professional development. Specific projects included:

  • The Targeted Reading Intervention (TRI) program, which seeks to help rural teachers, who often work in isolation, turn struggling early readers (kindergarten and 1st grade) into fluent ones. Using a laptop and a webcam, a TRI Consultant supports the classroom teacher as they provide diagnostically-driven instruction in one-on-one sessions.
  • The Rural Early Adolescent Learning Program (REAL) professional development model, which helps teachers consider the academic, behavioral, and social difficulties that together contribute to school failure and dropout for adolescent students. Accordingly, REAL is designed to provide teachers with strategies to help students make a successful transition into middle school.
  • The Rural Distance Learning and Technology Program, which examined the role of distance learning in providing advanced-level courses for students and professional development for teachers; and
  • The Rural High School Aspirations Study (RHSA), which examined rural high school students’ postsecondary aspirations and preparatory planning.

The National Center for Research on Rural Education (R2Ed) examined ways to design and deliver teacher professional development to improve instruction and support student achievement in reading and science in rural schools through three projects:

  • The Teachers Speak Survey Study investigated (1) variations in existing rural professional development (PD) experiences; (2) differences in PD practices between rural and non-rural settings; and (3) the potential influence of PD characteristics on teacher knowledge, perceptions, and practices in one of four instructional content areas: reading, mathematics, science inquiry, or using data-based decision making to inform reading instruction/intervention.
  • Project READERS evaluated the impact of distance-provided coaching on (1) teachers' use of differentiated reading instruction following a response-to-intervention (RTI) model and (2) their students' acquisition of reading skills in early elementary school.
  • Coaching Science Inquiry (CSI) evaluated the impact of professional development with distance-provided coaching for teaching science using explicit instruction with guided inquiry and scaffolding on teacher instructional practice and science achievement in middle and high school.

R2Ed also conducted two related sets of studies.

  • The first set explored ecological influences and supports that may augment educational interventions and outcomes in rural schools. The goal of this work is to understand contextual influences of rurality and how they interact to influence parent engagement in education and child cognitive and social-behavioral outcomes.  
  • The second set explored methodological and statistical solutions to challenges associated with the conduct of rigorous experimental research in rural schools.

As R2Ed completes its work, NCER is considering how to support rural education research going forward. As a first step, we hosted a technical working group meeting in December 2014 to identify research objectives of importance to rural schools and to reflect on the success of the R&D Center model in advancing our understanding of rural education. A summary of the meeting is available here on the IES website. The ideas shared during this meeting will help guide future IES investments in rural education research.

Please send any comments or questions to IESResearch@ed.gov.