Inside IES Research

Notes from NCER & NCSER

Equity Through Innovation: New Models, Methods, and Instruments to Measure What Matters for Diverse Learners

In today’s diverse classrooms, it is both challenging and critical to gather accurate and meaningful information about student knowledge and skills. Certain populations present unique challenges in this regard – for example, English learners (ELs) often struggle on assessments delivered in English. On “typical” classroom and state assessments, it can be difficult to parse how much of an EL student’s performance stems from content knowledge, and how much from language learner status. This lack of clarity makes it harder to make informed decisions about what students need instructionally, and often results in ELs being excluded from challenging (or even typical) coursework.

Over the past several years, NCER has invested in several grants to design innovative assessments that will collect and deliver better information about what ELs know and can do across the PK-12 spectrum. This work is producing some exciting results and products.

  • Jason Anthony and his colleagues at the University of South Florida have developed the School Readiness Curriculum Based Measurement System (SR-CBMS), a collection of measures for English- and Spanish-speaking 3- to 5-year-old children. Over the course of two back-to-back Measurement projects, Dr. Anthony's team co-developed and co-normed item banks in English and Spanish in 13 different domains covering language, math, and science. The assessments are intended for a variety of uses, including screening, benchmarking, progress monitoring, and evaluation. The team used item development and evaluation procedures designed to ensure that both the English and Spanish tests are sociolinguistically appropriate for both monolingual and bilingual speakers.


  • Daryl Greenfield and his team at the University of Miami created Enfoque en Ciencia, a computerized-adaptive test (CAT) designed to assess Latino preschoolers' science knowledge and skills. Enfoque en Ciencia is built on 400 Spanish-language items that cover three science content domains and eight science practices. The items were independently translated into four major Spanish dialects and reviewed by a team of bilingual experts and early childhood researchers to create a consensus translation appropriate for 3- to 5-year-olds. The assessment is delivered via touch screen and is equated with an English-language version of the same test, Lens on Science.

  • A University of Houston team led by David Francis is studying the factors that unintentionally affect the assessment of vocabulary knowledge among ELs. Using a variety of psychometric methods, the team is exploring data from the Word Generation Academic Vocabulary Test to identify features that affect item difficulty and to examine whether those features operate similarly for students currently, formerly, and never classified as ELs. The team will also provide a set of recommendations for improving the accuracy and reliability of extant vocabulary assessments.


  • Researchers led by Rebecca Kopriva at the University of Wisconsin recently completed work on a set of technology-based, classroom-embedded formative assessments intended to support and encourage teachers to teach more complex math and science to ELs. The assessments use multiple methods to reduce the overall language load typically associated with challenging content in middle school math and science. The tools use auto-scoring techniques and are capable of providing immediate feedback to students and teachers in the form of specific, individualized, data-driven guidance to improve instruction for ELs.


By leveraging technology, developing new item formats and scoring models, and expanding the linguistic repertoire students may access, these teams have found ways to allow ELs – and all students – to show what really matters: their academic content knowledge and skills.


Written by Molly Faulkner-Bond (former NCER program officer).


Weighted Student Funding Is On The Rise. Here’s What We Are Learning.

Weighted student funding (WSF) is a funding method that aims to allocate dollars based on individual student needs. While large districts are increasingly using WSF systems, little research exists to assess their effectiveness. In this guest blog, Dr. Marguerite Roza of Georgetown University discusses her team's ongoing IES-funded research study, which seeks to document and understand WSF designs and features as implemented in the field and to gauge the extent to which WSF designs are associated with reducing achievement gaps. The study's initial findings chart the WSF landscape across 19 U.S. school districts that used WSF in 2017-18.

Over the last two decades, dozens of big districts (including those in New York City, Boston, Denver, Houston, and Chicago) have shifted to using a weighted student formula to distribute some portion of their total budget. Instead of distributing resources via uniform staffing formulas, these districts use a student-based formula to allocate some fixed sum of dollars to schools for each student based on need (for example, allocations are typically higher for students with disabilities and students with limited English proficiency). The 2015 Every Student Succeeds Act (ESSA) authorized a WSF pilot, allowing up to 50 districts to incorporate key federal program dollars into a district’s formula.

As WSF systems now serve millions of K–12 students—and the number of WSF districts continues to grow—our research begins to document the range of these WSF formulas and gather details around how they are being implemented in school systems around the nation.

Why do districts adopt WSF?

Our study of school board and budget documentation indicates that nearly all districts identify equity (89%) and flexibility for school principals (79%) as key rationales, with nearly half also citing a goal of transparency (49%). Interestingly, not one of the 19 districts cites "choice" (whereby families choose their school) as a driving factor in its rationale for using WSF, even though much of the literature links choice and WSF. Despite the goal of transparency, only a third of the districts actually post their formulas online (like this posting from Houston ISD), a finding that surprised both us and the districts. In fact, after we shared the finding with our study districts, several updated their online budget materials to include their formulas. Whether districts are meeting their goals of equity and flexibility will be more fully investigated in Phase 2 of the project.

Is there a typical WSF model that districts are using?

No. We find that there is no standard WSF: each district has developed a homegrown formula, and the differences are substantial. On one end of the spectrum, Prince George's County deploys only 20% of its total budget via its WSF, while Orleans Parish deploys 89%. Most districts deploy some 30-50% of their annual funds via their WSF formula, indicating that they are adopting a hybrid approach. They deploy the rest of their funds via staff allocations, program allocations, or in whatever ways they did before moving to WSF.


Districts define their "base" allocations differently, and no two districts use the same student weights. Most commonly, districts use grade level as a student weight category, but they do not agree on which level of schooling warrants the highest weight. Seven districts give their highest grade-level weight to elementary grades, four give it to middle school grades, and four give it to high school grades.

Two-thirds of districts use weights for students identified as English Language Learners (ELL) and for students with disabilities, while half use weights for poverty. Even the size of the weights differs, with ELL weights ranging from 10% to 70%. Several districts use tiered weights.
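To make the mechanics concrete, the sketch below shows how a per-pupil allocation might be computed under a generic WSF formula. The base amount, weight categories, and weight sizes are hypothetical illustrations, not figures from any of the study districts.

```python
# Hypothetical weighted student funding (WSF) calculation.
# The base amount and weights below are illustrative only; each study
# district defines its own base, weight categories, and weight sizes.

BASE_PER_PUPIL = 5_000          # dollars allocated for every enrolled student
WEIGHTS = {
    "elementary": 0.10,         # grade-level weight
    "english_learner": 0.40,    # study districts' ELL weights ranged from 10% to 70%
    "disability": 0.50,
    "poverty": 0.25,
}

def school_allocation(students):
    """Sum base-plus-weights dollars across a school's enrolled students.

    `students` is a list of lists; each inner list names the weight
    categories that apply to one student.
    """
    total = 0.0
    for categories in students:
        weight_sum = sum(WEIGHTS[c] for c in categories)
        total += BASE_PER_PUPIL * (1 + weight_sum)
    return total

# Example: one elementary English learner plus one student with no weights.
roster = [["elementary", "english_learner"], []]
print(school_allocation(roster))  # 5000 * 1.5 + 5000 * 1.0 = 12500.0
```

A district deploying only 30-50% of its budget this way would layer the formula-driven amount on top of staff, program, or other allocations made outside the formula.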

We also found a range of unique weights designed within the districts for categories of locally identified need (for example, Boston uses a weight for students with interrupted formal learning, and Houston uses a weight for students who are refugees).

What other trends exist in districts implementing WSF?

We found that non-formula features and exemptions reflect local context. Small school subsidies, magnet allocations, and foundation amounts are common examples of non-formula features that several districts use. Some districts exempt certain schools from the formula, grant weights for school types (versus student types), or fund selected staffing positions outside the formula. Districts seem to be layering their WSF formulas on top of long-standing allocations, like subsidies for small schools. Clearly, it is difficult for most districts to deploy a strict formula, and these exemptions and adjustments mitigate the formula's effects on some schools.

We also found that nearly all districts continue to use average salaries in their budgeting, which likely limits progress toward their equity goals. In this practice, schools are charged for their teaching staff based on district-wide average salaries, not the actual salaries of the teachers in the building. The Boston and Denver districts have experimented with using actual salaries for a subset of their schools (allowing roughly one-third of their schools to budget and account for spending based on actual salaries). Both the formula exceptions and this continued reliance on average salaries may be limiting the extent to which WSF is making progress on equity. Analysis in Phase 2 of the project will quantify the effects of these formula adjustments on spending.

What kinds of budget flexibilities do principals have?

With WSF, districts give principals flexibility in staffing, stipends, and contracts, but not in base compensation. In virtually all WSF districts, principals have at least some flexibility in choosing the number and type of staff in their buildings and in awarding stipends. Interestingly, most principals have the power to issue contracts with their funds, and half can carry over funds from one year to the next. Despite these flexibilities, base teacher compensation is generally off limits for principals and continues to be controlled centrally.

How difficult is it for districts to design and implement their own versions of WSF?

Changing district allocations is hard work. At each point in our study, we find districts building "homegrown" approaches to WSF that reflect their own spending history and local context. We could see this as a practical transition of sorts between old and new allocation strategies, in which district leaders straddle the desire to change allocations and the pressure to keep allocations as they are.

What are the next steps in this research?

Future analysis in this project will explore the degree to which WSF is delivering on the goal of increasing equity and improving outcomes for poor and at-risk students. However, the homegrown nature of WSF makes it tough to generalize about the WSF model or its effects. Undoubtedly, the variation poses problems for research: there is no way to analyze WSF as a single model. Also challenging is that districts use different definitions, even for formula elements such as the "base" and what constitutes a student weight. Perhaps this is unsurprising, as there is no common training on the WSF model and no prevailing terminology or budgeting procedures for district leaders to use in their work.

We see our study as a first step in a broader research agenda that will explore the scope and range of WSF implementation in U.S. school districts and offer deeper analysis of the extent to which WSF is helping systems meet commonly cited goals of greater equity, flexibility, and transparency. In the meantime, we hope WSF systems and those considering shifting to WSF will be able to learn from this work and from what peer systems are doing, perhaps with the ultimate effect of creating a common vocabulary for this funding model.


Rethinking Connections Between Research and Practice in Education

IES-funded researchers from the Center for Research Use in Education (CRUE) at the University of Delaware recently learned that their article, “Rethinking Connections Between Research and Practice in Education: A Conceptual Framework,” was the 8th most-read article in any AERA journal in 2018!

In the article, the authors argue that “Recent efforts to improve the quality and availability of scientific research in education, coupled with increased expectations for the use of research in practice, demand new ways of thinking about connections between research and practice.”

Under the Every Student Succeeds Act (ESSA), education leaders must use evidence to inform their practice. However, the CRUE researchers argue that this simple framing “risks reducing evidence use to an administrative task rather than multiple activities constituting a political and social practice within a complex organizational process.” In fact, “the field lacks a comprehensive understanding of what evidence-based decision-making looks like in practice—for example, when is evidence brought into the decision-making process? Who engages with it? How is it understood in the local context? How often is it reviewed?”

CRUE's research on this issue reveals gaps between the assumptions and perspectives of the research and practice communities, including their views on the usefulness of research products; the nature and quality of research; the problems that research addresses; the structures, processes, and incentives for research production and use; and the specific relationships between the communities. The authors present a conceptual framework that highlights how these differences in understanding affect both the depth of research use and the depth of research production. Their article in Educational Researcher explains each of these dimensions and how they work together.

The framework shows that increasing education research use in practice is a complex, bidirectional issue, in which characteristics of both communities play a part: researchers need to produce work that is “decision-relevant,” and practitioners need to make decisions that are “research-attuned.” 


Inequity Persists in Gifted Programs

The National Center for Research on Gifted Education (NCRGE) at the University of Connecticut, in Phase I of a rigorous research agenda, examined how academically gifted students are identified and served in three states in order to provide systematic information for the field. The research team focused especially on the representation of historically underserved groups in gifted education.

NCER recently spoke with the Center's Principal Investigator, Del Siegle, a nationally recognized expert on gifted education.

What is the biggest challenge facing gifted educators today?

Unfortunately, many of our nation’s brightest students from underserved populations (e.g., Black, Hispanic, English Learner, and/or free and reduced-price lunch eligible) are not being identified as gifted and do not receive gifted education services. About 80% of states that completed the most recent National Association for Gifted Children’s State of the States survey indicated that underrepresentation of students from underserved populations was an important or very important issue in their state.

What did you find in your study of identification of underserved students for gifted programs?

During Phase I of our work, we analyzed standardized student achievement test data from three states that mandate gifted identification and programming. We found that schools were less likely to identify students from underserved groups as gifted, even in cases where the underserved child had similar achievement test scores. For example, students with similar test scores who received free and reduced-price lunch were less than half as likely to be identified as gifted as students who did not receive free or reduced-price lunch.

What identification practices are schools using?

Cognitive tests and teacher nominations were the most common identification tools across the three states we studied. The majority (90% to 96%) of the districts in all three states used these practices to select students. Identification for gifted services occurs most often in third grade. Districts seldom reassess students once they are identified, and only about half reassess non-identified elementary school students at regular intervals. Screening all children and using a variety of identification criteria showed promise for reducing under-identification in one of our states.

How are students being served in gifted programs?

In the three states we studied, schools primarily focused on critical thinking and creativity, followed by communication skills, research skills, and self-directed projects. Acceleration in mathematics and reading/language arts was much less of a focus, ranking among the bottom third of focus areas. Gifted students seldom receive gifted programming in core academic areas. Only 29% of the schools provided a separate gifted curriculum in reading/language arts, and only 24% had a separate gifted curriculum in mathematics. Gifted students spent 5 hours or more each week in regular education mathematics and reading/language arts classrooms. Of the 74% of schools that reported using pull-out services, only 32% offered a separate gifted curriculum in reading/language arts and 28% offered a separate gifted curriculum in math.

What about gifted student growth in mathematics and reading?

In 3rd grade, gifted students are approximately two grade levels ahead of students not identified as gifted, but gifted students grow more slowly than non-gifted students between 3rd and 5th grade. Most grouping arrangements for gifted students had no impact on academic achievement growth. We believe much of this has to do with the limited advanced mathematics and reading instruction gifted students receive in their classrooms and gifted programs.

What is the next step in your research?

We are examining the effect of attending dedicated gifted classes in core content areas on academic achievement in reading/language arts and mathematics in a large, ethnically, economically, and linguistically diverse urban school district. Our research will compare the reading/language arts and mathematics achievement of gifted students in three different settings: schools offering a full-time gifted-only program with gifted classes in all subject areas, schools offering a part-time gifted-only program with gifted classes in mathematics, and schools offering a part-time gifted-only program with gifted classes in reading/language arts.

IES Announces Forthcoming Funding Opportunity For the R&D of an “ROI Tool” to Inform Students’ Postsecondary Education and Career Decision Making


Overview

On or about February 15, 2019, the Small Business Innovation Research Program at the US Department of Education's Institute of Education Sciences (ED/IES SBIR) anticipates releasing a Special Topic Solicitation #91990019R0016 in Postsecondary Education. The solicitation will be announced through an IES Newsflash and will be posted here. It will request Phase I proposals for awards of up to $200,000 for 8 months to develop a prototype of an "ROI tool." The tool will be designed to measure the costs versus benefits (the return on investment) of different postsecondary education and training programs to help students make well-informed choices about options to pursue after they complete high school.

Applicants must be for-profit businesses with 500 or fewer employees that are U.S.-owned and operated. Applicants may partner with entities or organizations working on related initiatives in the field of postsecondary education, or may subcontract to non-profit researchers or individuals with specialized expertise as needed. The due date for proposal submissions will likely be on or about April 15, 2019, with awards made in mid-June and projects beginning shortly thereafter. All Phase I awardees will be eligible to apply for a Phase II award of $900,000 in 2020 for full-scale development and research to test and validate the ROI tool.

Background

While many websites provide ways for students to explore colleges or careers and occupations of interest (e.g., College Scorecard and CareerOneStop), there is currently no tool that helps students understand the costs and benefits of individual postsecondary programs in an integrated, customizable, and user-friendly manner. An ROI tool would likely combine information on an individual program's tuition and fees, the time needed to complete it, and expected earnings. Because these characteristics can vary significantly across programs and institutions, creating a single estimated measure of ROI would allow students to more easily compare postsecondary program options. If the tool helps students make better choices, it could lead to improved program completion rates, higher levels of employment and earnings, less education-related debt, and more satisfaction with their selected education and career paths.
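As a rough illustration of the kind of calculation such a tool might perform, the sketch below combines hypothetical tuition, time-to-completion, and earnings figures into a single ROI estimate. The formula and the numbers are assumptions made for illustration; the solicitation does not prescribe a specific methodology.

```python
# Hypothetical, simplified program-level ROI estimate.
# All inputs and the formula itself are illustrative assumptions; an actual
# tool would draw on real program data (e.g., College Scorecard) and a
# methodology chosen by the applicant.

def estimate_roi(tuition_and_fees, years_to_complete,
                 expected_annual_earnings, baseline_annual_earnings,
                 horizon_years=10):
    """Return the net benefit and benefit/cost ratio over a fixed horizon.

    Cost    = total tuition and fees plus earnings forgone while enrolled.
    Benefit = extra earnings, relative to a no-credential baseline, for the
              years remaining in the horizon after completion.
    """
    total_cost = tuition_and_fees + baseline_annual_earnings * years_to_complete
    earning_years = max(horizon_years - years_to_complete, 0)
    total_benefit = (expected_annual_earnings - baseline_annual_earnings) * earning_years
    net_benefit = total_benefit - total_cost
    ratio = total_benefit / total_cost if total_cost else float("inf")
    return net_benefit, ratio

# Example: a 2-year, $10,000 program with $45,000 expected annual earnings
# versus a $30,000 no-credential baseline, over a 10-year horizon.
net, ratio = estimate_roi(10_000, 2, 45_000, 30_000)
print(net, round(ratio, 2))  # 50000 1.71
```

Reducing each option to a single figure such as the net benefit or the benefit/cost ratio is what would let students compare otherwise very different program options side by side.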

The ED/IES SBIR Special Topic intends to fund up to five (5) Phase I projects to (a) develop and research a prototype of an ROI tool, and (b) conduct planning and concept testing for a fully developed ROI tool that provides a user-friendly experience for students. The prototype of the ROI tool developed in Phase I shall integrate with one or more existing technology systems, data sets, data standards, or resources (such as CareerOneStop or College Scorecard), and add new data elements provided by an end-user.  After a successful Phase I project, it is anticipated that small businesses that win Phase II awards will complete the full-scale development of the ROI tool that was started in Phase I, including developing an interface to improve the experience of students using the ROI tool.

Because data for ROI at the program level may only be available from some states, regions, or sets of institutions at this time, it is expected that the scope of the ROI tool developed in Phases I and II would be limited and would not attempt to calculate ROI for every program and institution in the country. Applicants must propose a project scope that appropriately reflects the datasets to be integrated within the new ROI tool and the amount of funding and time allotted for development and research under the Phase I and Phase II SBIR awards. Small businesses interested in this solicitation must have expertise, from related efforts in the field, in enhancing student choices by linking education and workforce information.

Potential applicants may submit questions to ED's Contracting Specialist at Kevin.Wade@ed.gov. All questions and responses will be posted publicly, as Amendments to the Solicitation, on the same website where the solicitation is posted.