Archived Information

REL Midwest Ask A REL Response

September 2020

Question:

What research resources are available on aligned systems of curriculum, instruction, and assessment?



Response:

Following an established Regional Educational Laboratory (REL) Midwest protocol, we conducted a search for research reports, descriptive studies, and literature reviews on aligned systems of curriculum, instruction, and assessment. In particular, we looked for resources on the relationship between aligned systems and academic achievement. For details on the databases and sources, keywords, and selection criteria used to create this response, please see the methods section at the end of this memo.

Below, we share a sampling of the publicly accessible resources on this topic. The search conducted is not comprehensive; other relevant references and resources may exist. We have not evaluated the quality of references and resources provided in this response, but offer this list to you for your information only.

Research References

Achieve. (2015). Closing the expectations gap: 2014 annual report on the alignment of state K-12 policies and practice with the demands of college and careers. Washington, DC: Author. Retrieved from https://eric.ed.gov/?id=ED554563.

From the ERIC abstract: “Today’s economy demands that ‘all’ young people develop high-level literacy, quantitative reasoning, problem solving, communication and collaboration skills, all grounded in a rigorous, content-rich K-12 curriculum. Acquiring this knowledge and these skills ensures that high school graduates are academically prepared to pursue the future of their choosing. Since 2005, Achieve has annually tracked states’ progress in adopting college- and career-ready (CCR) policies: specifically, the adoption of CCR standards in the foundational subjects of English language arts/literacy and mathematics, graduation requirements that ensure that students have access and exposure to all standards, assessments that test whether students have attained the academic knowledge and skills they need to be prepared, and accountability systems that value college and career readiness. As in past years, Achieve’s 2014 50-state survey of high school policies focused on aligned standards, graduation requirements, assessments, and accountability and data systems. This process included a survey states completed in summer 2014. Forty-nine states and the District of Columbia participated in this year’s survey. As this year’s annual report shows, states have made substantial progress in some areas but still have a long way to go. More than half of the states still have not made completing a college- and career-preparatory course of study, fully and verifiably aligned with state standards, a requirement for high school graduation. More students in states with such requirements have completed a rigorous course of study than have students in states that require students to opt into rigor. High school assessment systems in many states are in a period of transition. It is not yet clear that most states will ultimately have a coherent and streamlined assessment system that both measures how well students are meeting state standards and lets high school students and postsecondary institutions know whether students are prepared. Until states have coherent systems of standards, course-taking requirements, assessments and performance indicators in place, students, educators, parents, policymakers and the public will not know whether the system is preparing all young people for postsecondary success.”

Bae, S. (2018). Redesigning systems of school accountability: A multiple measures approach to accountability and support. Education Policy Analysis Archives, 26(8). Retrieved from https://eric.ed.gov/?id=EJ1169485.

From the ERIC abstract: “The challenges facing our children in the 21st century are rapidly changing. As a result, schools bear a greater responsibility to prepare students for college, career, and life and must be held accountable for more than just testing and reporting on a narrow set of outcomes aimed at minimum levels of competency. Thus, scholars, educators, and reform advocates are calling for a more meaningful next phase of school accountability, one that promotes continuous support and improvement rather than mere compliance and efforts to avoid punishment (Center for American Progress & CCSSO, 2014; Darling-Hammond, Wilhoit, & Pittenger, 2014). This paper reviews state and district level accountability systems that incorporate a multiple measures approach to accountability and highlights the following features that represent redesigned systems of accountability: 1) broader set of outcome measures, 2) mix of state and local indicators, 3) measures of opportunities to learn, 4) data dashboards, and 5) School Quality Reviews. The paper concludes with guidance for policymakers and practitioners on ways to support the development and implementation of a multiple measures system of accountability so that school accountability becomes synonymous with responsibility for deeper learning and support for continuous improvement.”

Borman, G. D. (2009). The past, present, and future of comprehensive school reform (Research Brief). Washington, DC: Center for Comprehensive School Reform and Improvement. Retrieved from https://eric.ed.gov/?id=ED507566.

From the ERIC abstract: “The last major review of the achievement outcomes of comprehensive school reform (CSR) models was conducted in 2003. Despite the growing evidence base supporting CSR, the program was discontinued by the federal government in 2007. Now, six years after the 2003 meta-analysis, the study’s lead author, Geoffrey Borman, revisits the results and interprets how the policy and research landscape has evolved over the years. He concludes that in terms of increased student achievement, CSR appears to: (1) have an overall positive effect; (2) be effective whether a school is relatively lower or higher on poverty measures; (3) increase its effectiveness for an individual school the longer it is implemented there; (4) include a variety of models, with a number of them generating strong evidence of effectiveness over the years; and (5) depend for its effectiveness more on program implementation than on whether it contains a predetermined set of federally required components. Schools and districts continue to employ a number of CSR models and fund them with Title I and other monies.”

Council of the Great City Schools. (2017). Supporting excellence: A framework for developing, implementing, and sustaining a high-quality district curriculum. Washington, DC: Author. Retrieved from https://eric.ed.gov/?id=ED580881.

From the ERIC abstract: “In the ongoing effort to improve instructional standards in our nation’s urban public schools, the Council of the Great City Schools has released resources to help districts determine the quality and alignment of instructional materials at each grade level; to ensure that materials for English language learners are rigorous and aligned to district standards; to help districts provide targeted professional development for teachers, principals, and district staff; to assist districts in their outreach to parents, the media, and the community; to coordinate the adoption and implementation efforts of various central office departments and stakeholder groups; and to self-assess their progress in implementing college- and career-readiness standards systemwide. In the summer of 2016, the Council of the Great City Schools gathered a team of school and district academic leaders, along with representatives from Student Achievement Partners (SAP), to develop a curriculum reference tool that lays out the criteria for developing a coherent curriculum aligned to district- and state-defined college- and career-readiness standards and capable of guiding instruction in the district. The teams also met in smaller groups to discuss key components of a quality curriculum and to address issues of implementation. Based on these discussions, as well as the experience and expertise Council staff have developed over the years working with scores of academic departments in large urban districts, this guide aims to present instructional leaders and staff with a core set of criteria for what a high-quality curriculum entails. This guide includes annotated samples and exemplars from districts around the country. 
It also provides actionable recommendations for developing, implementing, and continuously improving upon a district curriculum, ensuring that it reflects shared instructional beliefs and common, high expectations for all students, and that it focuses the instructional work in every school throughout the district.”

Darling-Hammond, L., Wilhoit, G., & Pittenger, L. (2014). Accountability for college and career readiness: Developing a new paradigm. Education Policy Analysis Archives, 22(86), 1. Retrieved from https://eric.ed.gov/?id=EJ1050070.

From the ERIC abstract: “As schools across the country prepare for new standards under the Common Core, states are moving toward creating more aligned systems of assessment and accountability. This paper recommends an accountability approach that focuses on meaningful learning, enabled by professionally skilled and committed educators, and supported by adequate and appropriate resources, so that all students regardless of background are prepared for both college and career when they graduate from high school. Drawing on practices already established in other states and on the views of policymakers and school experts, this paper proposes principles for effective accountability systems and imagines what a new accountability system could look like in an imagined ‘51st state’ in the United States. While considerable discussion and debate will be needed before a new approach can take shape, this paper’s objective is to get the conversation started so the nation can meet its aspirations for preparing college- and career-ready students.”

Debarger, A. H., Penuel, W. R., Moorthy, S., Beauvineau, Y., Kennedy, C. A., & Boscardin, C. K. (2017). Investigating purposeful science curriculum adaptation as a strategy to improve teaching and learning. Science Education, 101(1), 66–98. Retrieved from https://eric.ed.gov/?id=EJ1123110.

From the ERIC abstract: “In this paper, we investigate the potential and conditions for using curriculum adaptation to support reform of science teaching and learning. With each wave of reform in science education, curriculum has played a central role and the contemporary wave focused on implementation of the principles and vision of the ‘Framework for K-12 Science Education’ (National Research Council, 2012) is no exception. Curriculum adaptation—whereby existing curriculum materials are purposefully modified—may provide an important strategy for teacher leaders in schools and districts to support changes to teacher practice aligned with the vision of the ‘Framework.’ Our study provides empirical evidence that under supportive district conditions and within a research-practice partnership, purposefully adapted curriculum materials can improve student understanding of science and that these are linked to shifts teachers make in classroom culture facilitated by augmented curriculum materials.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Foorman, B., Kershaw, S., & Petscher, Y. (2013). Evaluating the screening accuracy of the Florida Assessments for Instruction in Reading (FAIR). (REL 2013-008). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southeast. Retrieved from https://eric.ed.gov/?id=ED544195.

From the ERIC abstract: “Florida requires that students who do not meet grade-level reading proficiency standards on the end-of-year state assessment (Florida Comprehensive Assessment Test, FCAT) receive intensive reading intervention. With the stakes so high, teachers and principals are interested in using screening or diagnostic assessments to identify students with a strong likelihood of failing to meet grade-level proficiency standards on the FCAT. Since 2009 Florida has administered a set of interim assessments (Florida Assessments for Instruction in Reading, FAIR) three times a year (fall, winter, and spring) to obtain information on students’ probability of meeting grade-level standards on the end-of-year FCAT. In 2010/11 the Florida Department of Education aligned the FCAT to new standards (Next Generation Sunshine State Standards) and renamed it the FCAT 2.0 but retained the 2009/10 cutscores. In 2011/12 it changed the FCAT 2.0 cutscores. The share of students meeting grade-level standards on the FCAT 2.0 fell to 53 percent in 2012 from 72 percent in 2011. This drop led the Florida Department of Education to partner with the Regional Educational Laboratory Southeast to analyze student performance on the FAIR reading comprehension screen and FCAT 2.0 to determine how well the FAIR and the 2011 FCAT 2.0 scores predict 2012 FCAT 2.0 performance. The study addresses two research questions: (1) What is the association between performance on the 2012 FCAT 2.0 and two scores from the FAIR reading comprehension screen across grades 4—10 and the three FAIR assessment periods (predictive validity)?; and (2) How much does adding the FAIR reading comprehension screen affect identification errors beyond those identified through 2011 FCAT 2.0 scores (screening accuracy)? Performance on the 2012 FCAT 2.0 was found to have a stronger correlation with FCAT success probability scores than with FAIR reading comprehension ability scores. 
In addition, using 2011 FCAT 2.0 scores alone to predict 2012 FCAT 2.0 scores underidentified 16-24 percent of students as at risk. Adding FAIR reading comprehension ability scores dropped the underidentification rate by 12-20 percentage points.”
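
The screening-accuracy comparison above turns on the underidentification (false-negative) rate: the share of students who failed the outcome test but were not flagged as at risk. A toy sketch of that calculation follows; the flags and numbers are illustrative, not the study’s data.

```python
def underidentification_rate(at_risk_flags, failed_outcome):
    """Share of students who failed the outcome test but were not flagged as at risk."""
    # Keep only the students who actually failed the outcome test.
    flags_for_failures = [flag for flag, failed in zip(at_risk_flags, failed_outcome) if failed]
    if not flags_for_failures:
        return 0.0
    # Of those, count the ones the screen missed (flag is False).
    missed = sum(1 for flag in flags_for_failures if not flag)
    return missed / len(flags_for_failures)

# Toy cohort: outcome failures plus two screening rules --
# prior-year scores alone vs. prior-year scores plus a reading screen.
failed_outcome = [True, True, True, True, False, False]
prior_scores_only = [True, False, False, True, False, True]
prior_plus_screen = [True, True, False, True, True, True]

rate_prior = underidentification_rate(prior_scores_only, failed_outcome)      # 0.5
rate_combined = underidentification_rate(prior_plus_screen, failed_outcome)   # 0.25
```

In the toy data, adding the screen lowers the miss rate, mirroring the direction of the study’s finding (the study’s reported 12-20 percentage-point drop comes from its own data, not this sketch).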

Hale, S., Dunn, L., Filby, N., Rice, J., & Van Houten, L. (2017). Evidence-based improvement: A guide for states to strengthen their frameworks and supports aligned to the evidence requirements of ESSA. San Francisco, CA: WestEd. Retrieved from https://eric.ed.gov/?id=ED573213.

From the ERIC abstract: “One of the broad intents of the Elementary and Secondary Education Act (ESEA) as amended by the Every Student Succeeds Act (ESSA) is to encourage evidence-based decision-making as a way of doing business. Nonregulatory guidance issued in September 2016 by the U.S. Department of Education (ED) clarifies and expands on both the nature of evidence-based improvement and the levels of evidence that are specified in the law. This guide builds on that ED guidance and provides an initial set of tools to help states and districts understand and plan for implementing evidence-based improvement strategies. This guide recognizes school and district improvement as a continuous, systemic, and cyclical process, and emphasizes the use of evidence in decision-making throughout continuous improvement. In other words, the guide is not aimed at isolated decisions; rather, it is meant to support evidence-based decision-making that is nested within a larger improvement process. The primary audience for this guide is state education agency (SEA) staff who are responsible for understanding and implementing the evidence-based provisions of ESSA. The purpose of the guide is to build capacity of SEAs and their intermediaries to support LEAs in understanding the evidence-related requirements of ESSA and, consequently, selecting and implementing interventions that are evidence-based and that have strong potential to improve student outcomes. 
Specifically, the guide is intended to: (1) increase readers’ understanding of the expectations and opportunities for evidence-based school and district improvement in the context of ESSA; (2) encourage a broad understanding of the elements of evidence-based decision-making, including how needs, context, implementation strategies, desired outcomes, and sustainability considerations inform choices of evidence-based interventions, and how formative and summative evaluation are integral to an evidence-based improvement cycle; and (3) offer guiding information and a starter set of six tools to support this work, with an emphasis on the process of selecting evidence-based interventions. The materials presented in the guide offer SEAs and their LEAs opportunities to conduct a review of their approach to school and district improvement, including selection of evidence-based interventions, and to develop action steps for strengthening the guidance and supports that SEAs offer to their LEAs and that LEAs offer to their schools.”

Hamilton, L., Halverson, R., Jackson, S. S., Mandinach, E., Supovitz, J. A., & Wayman, J. C. (2009). Using student achievement data to support instructional decision making (IES Practice Guide; NCEE 2009-4067). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. Retrieved from https://eric.ed.gov/?id=ED506645.

From the ERIC abstract: “The purpose of this practice guide is to help K-12 teachers and administrators use student achievement data to make instructional decisions intended to raise student achievement. The panel believes that the responsibility for effective data use lies with district leaders, school administrators, and classroom teachers and has crafted the recommendations accordingly. This guide focuses on how schools can make use of common assessment data to improve teaching and learning. For the purpose of this guide, the panel defined common assessments as those that are administered in a routine, consistent manner by a state, district, or school to measure students’ academic achievement. These include: (1) annual statewide accountability tests such as those required by No Child Left Behind; (2) commercially produced tests—including interim assessments, benchmark assessments, or early-grade reading assessments—administered at multiple points throughout the school year to provide feedback on student learning; (3) end-of-course tests administered across schools or districts; and (4) interim tests developed by districts or schools, such as quarterly writing or mathematics prompts, as long as these are administered consistently and routinely to provide information that can be compared across classrooms or schools. This guide includes five recommendations that the panel believes are a priority to implement: (1) Make data part of an ongoing cycle of instructional improvement; (2) Teach students to examine their own data and set learning goals; (3) Establish a clear vision for schoolwide data use; (4) Provide supports that foster a data-driven culture within the school; and (5) Develop and maintain a districtwide data system.”

Herman, R., Dawson, P., Dee, T., Greene, J., Maynard, R., Redding, S., et al. (2008). Turning around chronically low-performing schools (IES Practice Guide; NCEE 2008-4020). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. Retrieved from https://eric.ed.gov/?id=ED501241.

From the ERIC abstract: “This guide identifies practices that can improve the performance of chronically low-performing schools—a process commonly referred to as creating ‘turnaround schools.’ The four recommendations in this guide work together to help failing schools make adequate yearly progress. These recommendations are: (1) signal the need for dramatic change with strong leadership; (2) maintain a consistent focus on improving instruction; (3) provide visible improvements early in the turnaround process (quick wins); and (4) build a committed staff. The guide includes a checklist showing how each recommendation can be carried out. It uses examples from case studies which illustrate practices noted by schools as having had a positive impact on the school turnaround.”

Honig, M. I., & Rainey, L. R. (2015). How school districts can support deeper learning: The need for performance alignment (Deeper Learning Research Series). Boston, MA: Jobs For the Future. Retrieved from https://eric.ed.gov/?id=ED560756.

From the ERIC abstract: “School district leaders nationwide aspire to help their schools become vibrant places for learning—where students have meaningful academic opportunities and develop critical thinking and problem-solving skills. Historically, though, school district central offices have been ill-equipped to support such ambitious goals. A new wave of research suggests that central offices have a key role to play in creating the conditions that make deeper learning possible, and they can do so by making deliberate efforts to align the work of each and every part of the school system to a set of common priorities. This paper addresses the following questions: (1) Why should district central office leaders make performance alignment a key part of their efforts to help all students learn deeply?; and (2) What, more specifically, does performance alignment entail, and how might district leaders move in that direction? This paper: (1) identifies several challenges that district central offices often face when they try to support the improvement of teaching and learning districtwide; (2) describes how pioneering districts are pursuing performance alignment; and (3) recommends specific strategies that can help school districts to realize deeper learning at scale. Findings and observations point to the need for a fundamental redesign of most central office functions, as well as some major departures from business-as-usual for most, if not all, central office staff, especially those in human resources, curriculum and instruction, and principal supervision. Such reforms can be challenging, but they are likely to be necessary for school systems to realize deeper learning in all schools and for all students.”

Konstantopoulos, S., Li, W., Miller, S., & van der Ploeg, A. (2015). Effects of interim assessments on the achievement gap: Evidence from an experiment. Paper presented at the Society for Research on Educational Effectiveness Conference, Washington, DC. Retrieved from https://eric.ed.gov/?id=ED562166.

From the ERIC abstract: “Motivated by the passage of the No Child Left Behind (NCLB) Act, all states operate accountability systems that measure and report school and student performance annually. The purpose of this study is to examine the effects of interim assessments on the achievement gap. The authors examine the impact of interim assessments throughout the distribution of student achievement with a focus on the lower tail of the achievement distribution. Specifically, they investigated the effects of two interim assessment programs (i.e., ‘mCLASS’ and ‘Acuity’) on mathematics and reading achievement for high-, median-, and low-achievers. They use data from a large-scale experiment conducted in the state of Indiana in the 2009-2010 school year. Quantile regression is used to analyze student data. The study was a large-scale experiment conducted in Indiana during the 2009-2010 academic year and included K-8 public schools that had volunteered to participate in the intervention in the spring of 2009. From a stratified (by school urbanicity) pool of 116 schools the authors randomly selected 70 schools. Ten of the 70 schools had used one or both assessment programs the prior year and were excluded from the pool. Two other schools closed and another school did not provide any student data. Thus, the final sample included 57 schools, 35 in the treatment condition and 22 in the control condition. Overall, nearly 20,000 students participated in the study during the 2009-2010 school year. The design was a two-level cluster randomized design. Students were nested within schools, and schools were nested within treatment and control conditions. Schools were randomly assigned to a treatment (interim assessment) or a control condition. The schools in the treatment condition received ‘mCLASS’ and ‘Acuity’, and the training associated with each program. The control schools operated under business-as-usual conditions. 
Overall, the findings suggest that the treatment effect was positive, but not consistently significant across all grades. Significant treatment estimates were observed in the grade 3-8 analysis in mathematics. The estimates were typically larger for low-achievers and in some cases significant. These results are consistent in terms of the sign of the effect (i.e., positive), but inconsistent in terms of statistical significance. The authors observed positive, statistically significant effects for grades 3-8 especially in mathematics. It seems that ‘Acuity’ affected mathematics and reading achievement positively and in some instances considerably in grades 3-6.”
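
The quantile-by-quantile comparison described above can be illustrated on simulated data. Below is a minimal numpy sketch of quantile treatment effects for a binary treatment, where the simulated effect pattern (larger gains for low achievers) is assumed purely for illustration; the actual study estimated these effects with quantile regression on real student data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent achievement for a simulated cohort.
base = rng.normal(loc=50.0, scale=10.0, size=2000)

control = base
# Assumed effect pattern: a boost that shrinks as achievement rises.
treated = base + np.clip(8.0 - 0.1 * base, 0.0, None)

def quantile_treatment_effect(treated, control, q):
    """Difference between the treatment and control groups' q-th quantiles."""
    return np.quantile(treated, q) - np.quantile(control, q)

effects = {q: quantile_treatment_effect(treated, control, q)
           for q in (0.1, 0.5, 0.9)}
# With this assumed pattern, the low-achiever (q=0.1) effect exceeds
# the median and high-achiever (q=0.9) effects.
```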

Konstantopoulos, S., Miller, S. R., & van der Ploeg, A. (2013). The impact of Indiana’s system of interim assessments on mathematics and reading achievement. Educational Evaluation and Policy Analysis, 35(4), 481–499. Retrieved from https://eric.ed.gov/?id=EJ1019178.

From the ERIC abstract: “Interim assessments are increasingly common in U.S. schools. We use high-quality data from a large-scale school-level cluster randomized experiment to examine the impact of two well-known commercial interim assessment programs on mathematics and reading achievement in Indiana. Results indicate that the treatment effects are positive but not consistently significant. The treatment effects are smaller in lower grades (i.e., kindergarten to second grade) and larger in upper grades (i.e., third to eighth grade). Significant treatment effects are detected in Grades 3 to 8, especially in third- and fourth-grade reading and in fifth- and sixth-grade mathematics.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest to you. It may be found through university or public library systems.

Additional Organizations to Consult

Center on Standards, Alignment, Instruction, and Learning – https://www.c-sail.org/

From the website: “The Center on Standards, Alignment, Instruction, and Learning (C-SAIL) examines how college- and career-readiness standards are implemented, if they improve student learning, and what instructional tools measure and support their implementation.”

Council of Chief State School Officers – https://ccsso.org/about

From the website: “The Council of Chief State School Officers (CCSSO) is a nonpartisan, nationwide, nonprofit organization of public officials who head departments of elementary and secondary education in the states, the District of Columbia, the Department of Defense Education Activity, the Bureau of Indian Education and the five U.S. extra-state jurisdictions.”

Tools and Resources for Standards Implementation – https://ccsso.org/tools-and-resources-standards-implementation

From the website: “CCSSO developed this list of free tools and resources to support state education agencies, districts, and educators during the process of implementing the College- and Career-Ready Standards. The resources are developed by CCSSO and other leading organizations and are not intended to be a comprehensive list of all available resources. CCSSO does not endorse any for-profit products.”

Methods

Keywords and Search Strings

The following keywords and search strings were used to search the reference databases and other sources:

  • Alignment “educational improvement”

  • Curriculum policy

Databases and Search Engines

We searched ERIC for relevant resources. ERIC is a free online library of more than 1.6 million citations of education research sponsored by the Institute of Education Sciences (IES). Additionally, we searched IES and Google Scholar.
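
Keyword searches like those above are typically composed by combining quoted phrases and single terms with boolean operators. A hypothetical sketch of building such a query string and a search URL (the endpoint below is illustrative, not ERIC’s actual interface):

```python
from urllib.parse import urlencode

def build_query(terms):
    """Join search terms with AND, quoting any multi-word phrase."""
    quoted = [f'"{t}"' if (" " in t and not t.startswith('"')) else t
              for t in terms]
    return " AND ".join(quoted)

query = build_query(["alignment", "educational improvement"])
# query == 'alignment AND "educational improvement"'
url = "https://example.org/search?" + urlencode({"q": query})
```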

Reference Search and Selection Criteria

When we were searching and reviewing resources, we considered the following criteria:

  • Date of the publication: References and resources published over the last 15 years, from 2005 to present, were included in the search and review.

  • Search priorities of reference sources: Search priority is given to study reports, briefs, and other documents that are published or reviewed by IES and other federal or federally funded organizations.

  • Methodology: We used the following methodological priorities/considerations in the review and selection of the references: (a) study types—randomized control trials, quasi-experiments, surveys, descriptive data analyses, literature reviews, policy briefs, and so forth, generally in this order, (b) target population, samples (e.g., representativeness of the target population, sample size, volunteered or randomly selected), study duration, and so forth, and (c) limitations, generalizability of the findings and conclusions, and so forth.

This memorandum is one in a series of quick-turnaround responses to specific questions posed by educational stakeholders in the Midwest Region (Illinois, Indiana, Iowa, Michigan, Minnesota, Ohio, Wisconsin), which is served by the Regional Educational Laboratory (REL Midwest) at American Institutes for Research. This memorandum was prepared by REL Midwest under a contract with the U.S. Department of Education’s Institute of Education Sciences (IES), Contract ED-IES-17-C-0007, administered by American Institutes for Research. Its content does not necessarily reflect the views or policies of IES or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.