
Improving MTSS/RTI implementation through measurement

March 24, 2020

SRI International
   Kirby Chow, REL Appalachia
   Jennifer Nakamura, REL Appalachia
Magnolia Consulting
   Stephanie Baird Wilkerson, REL Appalachia

Most state education agencies (SEAs) and districts have a multi-tiered system of support (MTSS) or response to intervention (RTI) framework to improve the quality of core instruction for all students and address the needs of students at risk for poor learning outcomes. Despite state and district investments in supporting MTSS/RTI frameworks, little is known about what states and districts are doing to measure their implementation and effectiveness. Without measuring implementation, we don't know if these investments are benefiting students.

How can educators and policymakers know if schools are implementing an MTSS/RTI framework as intended? Collecting data using an implementation assessment tool is one approach. An implementation assessment tool can help identify areas where schools and districts are implementing MTSS/RTI as planned and areas to strengthen implementation. Tool results can inform school and district action planning, which can lead to a greater likelihood of improving instructional practice and student outcomes.

To better understand states' approaches to assessing implementation of an MTSS/RTI framework, Regional Educational Laboratory Appalachia (REL AP) researchers examined the tools states were using as of May 2018. Here we share what we learned from 21 state education agencies that developed or adapted 31 publicly available MTSS/RTI implementation assessment tools, along with five suggestions to support stronger implementation. We hope this information encourages state and district staff and educators to reflect on how they measure MTSS/RTI implementation and on ways to improve that measurement.

Five suggested practices for developing MTSS/RTI implementation assessment tools and supporting their use

  1. Measure specific MTSS/RTI practices. Including as much specificity as possible when defining MTSS/RTI practices can help ensure educators share a common understanding when reflecting on a given practice (that is, practices are not vague and left to individual interpretation). In addition, specificity helps educators understand the concrete practices they are expected to implement as part of their school's MTSS/RTI framework. Specific MTSS/RTI practices provide details about how to implement a practice, with whom, or how often. We recommend checking out the following tools, which measure many specific, research-informed MTSS/RTI practices: Minnesota's Reading Tiered Fidelity Inventory, Wyoming's MTSS implementation resources, Pennsylvania's Using Response to Intervention for SLD Determination: School Building Application (K–12), and New York's Self-Assessment Tool for RtI Readiness.

  2. Use a tool format that describes specific practices at each level of implementation. Tools that describe specific practices at each level of implementation can help users determine next steps for working toward ideal implementation. Tools formatted as a rubric ask users to indicate their level of implementation based on descriptions of practices along a continuum, for example, from “not implementing” to “optimizing.” One benefit of a rubric is its potential to improve implementation: it spells out the specific practices expected at each level, so educators can see exactly what they need to implement to make strides toward ideal implementation.
    Figure 1: A snapshot of the Self-Assessment of MTSS Implementation—Illinois (SAM-I) rubric tool, a table with five columns describing the specific practices at each level of implementation.

  3. Request evidence to justify tool ratings. High-quality tools ask users to provide evidence, such as a school master schedule or intervention logs, to justify their ratings on the tool. Grounding ratings in evidence rather than in tool users' perceptions makes the ratings more valid and credible, and it helps ensure that raters consider the same sources of evidence and a similar rationale when assigning ratings.
    Figure 2: A snapshot of North Carolina's Self-Assessment of MTSS (Version 1, October 2015) rubric tool, a table with five columns (SAM item & examples of supporting evidence; Not Implementing; Emerging/Describing; Operationalizing; and Optimizing) that describes specific practices at each level of implementation and provides examples of supporting evidence.

  4. Establish tool reliability and validity. It is important that a tool assesses what it is supposed to assess (is valid) and produces similar results under consistent conditions (is reliable). Establishing reliability and validity when developing or extensively adapting a tool confirms that the tool captures the information of interest and performs consistently across users. It also signals that the tool generates information users can trust. See Florida's Self-Assessment of MTSS technical manual to learn more about how Florida documented the tool's validity and reliability by pilot testing the tool with potential users and running analyses on tool data (one example of such an analysis is sketched after this list).

  5. Train educators on using the tool. Having a tool is only the first step; to maximize its value, it is critical to train educators to use it well. State staff can train regional or district staff to help school teams use the tools and the resulting data. In addition to initial trainings on how to use the tool, states and districts can provide educators with continued coaching, allowing multiple opportunities to practice and receive feedback. For example, coaches might help district and building leadership teams review the resulting data to develop an action plan for improving MTSS/RTI implementation.
    Figure 3: A snapshot of an Action Planning and Guiding Questions worksheet from Florida's Self-Assessment of MTSS technical manual. The worksheet lists six guiding questions and a table (Action/Activity; Who is responsible?; When will it be started?; When will it be completed?; and When/how will we evaluate it?) to help district leadership teams develop an action plan for improving MTSS/RTI implementation.
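
The study describes Florida's approach to reliability only at a high level ("running analyses on tool data"), so the sketch below is a hypothetical illustration, not the analysis Florida ran. It shows one common reliability check, Cronbach's alpha, computed in Python from a table of rubric ratings; the item names and ratings are invented for illustration only.

```python
import pandas as pd

# Hypothetical rubric ratings: rows are schools, columns are tool items,
# values are implementation levels (0 = Not Implementing ... 3 = Optimizing).
ratings = pd.DataFrame({
    "tiered_instruction":   [2, 3, 1, 2, 3, 0],
    "universal_screening":  [2, 3, 1, 1, 3, 1],
    "progress_monitoring":  [3, 2, 1, 2, 3, 0],
    "data_based_decisions": [2, 3, 0, 2, 2, 1],
})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability for a set of rubric items."""
    k = items.shape[1]                               # number of items
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of school total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```

Values around 0.80 or higher are often read as evidence that a tool's items measure a coherent construct; lower values can flag items to revise before a wider rollout.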

Resources for ongoing learning

This blog draws on the implications of the REL AP research study “What tools have states developed or adapted to assess schools' implementation of a multi-tiered system of supports/response to intervention framework?” Read the full study for more information about which tools states are using and how they are supporting educators to use them. And check out the resources below to learn more about RTI.