
Regional Educational Laboratory Program


The Effects of Success in Sight as a School Improvement Intervention

Regional need and study purpose

There is a regional and national need to assist schools that fail to make adequate yearly progress under the No Child Left Behind (NCLB) Act. A school improvement program with demonstrated effectiveness can respond to this need by helping to raise student achievement in low-performing schools.

The study's primary focus is to provide an unbiased estimate of the impact of Success in Sight, a comprehensive approach to school improvement, on increasing student academic achievement. It also intends to determine whether the approach builds school-level capacity to engage in four reform practices: data-based decisionmaking, purposeful community, shared leadership, and effective school practices. In prior research, these practices have been linked to improved student achievement (Marzano 2003; Marzano, Pickering, and Pollock 2001; Marzano, Waters, and McNulty 2005).

To address the study's intended outcomes (student achievement and school-level capacity to engage in the four reform practices), the research asks:

  • Does implementation of Success in Sight significantly improve student achievement?
  • Does it have a significant impact on the extent to which schools engage in data-based decisionmaking?
  • Does it have a significant impact on the extent to which schools engage in effective school practices associated with improved student achievement?
  • Does it have a significant impact on the extent to which schools develop and maintain a purposeful community?
  • Does it have a significant impact on the extent to which leadership is shared in schools?

This study is expected to provide rigorous evidence on the effectiveness of Success in Sight in raising student achievement and an unbiased estimate of its impact. This evidence will be useful for schools seeking systemic change using programs proven to improve student performance.

Intervention description

Success in Sight is a two-year facilitated, comprehensive school improvement process that focuses on practices that increase student achievement. It differs from other approaches to school improvement because it is designed to build on improvement efforts already under way in schools by teaching schools how and when to make changes. It does not require schools to "start over" with a whole new model of schooling, use specific strategies, or focus on a specific subject area or population of students. Instead, it helps schools achieve their improvement goals by building on their strengths, identifying the best ways to make short- and long-term progress, and paring away unnecessary activities. Success in Sight is intended to teach schools how to set priorities in their efforts for school improvement and optimize the resources available in the school and broader school community.

Over a two-year period, school leadership teams are taught how to balance the science of effective schooling with the art of continuous improvement. Teams attend 6 two-day face-to-face professional development sessions with other participating schools and, between those sessions, meet with the two change facilitators assigned to their school during 10 onsite mentoring sessions.

Success in Sight is the result of years of research by the Mid-continent Research for Education and Learning (McREL). It has been presented in the "What Works" books (Marzano 2003; Marzano, Pickering, and Pollock 2001; Marzano, Waters, and McNulty 2005) and pilot tested across grade levels and settings, including low- and high-performing schools and urban and rural schools. Pilot-test data from schools implementing the Success in Sight model, in which McREL staff serve as the change facilitators,1 show 9 of the 11 low-performing elementary and middle schools making improvements during the first two years—either by making adequate yearly progress after failing to do so or by significantly increasing the percentage of students scoring at or above proficiency.

1 The Success in Sight approach, which includes professional development for school leadership teams and regular support from change facilitators, has been implemented in two ways: with McREL acting as the external change facilitators and with McREL training qualified staff at participating schools to be change facilitators. McREL has tested the first model in 12 schools and the second model in 48 schools.

Study design

The impact of Success in Sight is being evaluated through an experimental study in which schools are randomly assigned to either the treatment or control condition. Random assignment at the school level was required because the intervention is a whole-school approach to school improvement.

Fifty-two elementary schools were recruited for the study. The researchers matched these schools on key characteristics to create matched pairs (through a matched-pair block design in which the pair is the blocking factor). Schools were then randomly assigned within the matched pair, with one school assigned to the treatment group and one to the control group. The 52 schools were recruited from eight districts in Minnesota and Missouri and represent a diverse sample of schools—rural, suburban, and urban and small and large. The sample includes 26 treatment and 26 control schools, split nearly evenly between the two states (24 schools in one state and 28 in the other). The sample of 52 schools will provide the statistical power (>.80) to detect an impact of around .20 standard deviation on student achievement and .30 standard deviation on scales denoting engagement in school-level reform practices.
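The matched-pair random assignment described above can be sketched in a few lines. This is an illustrative simplification: the school names, the seed, and the per-pair coin flip are hypothetical, not the study's actual procedure.

```python
import random

def assign_matched_pairs(pairs, seed=2008):
    """Randomly assign one school in each matched pair to treatment
    and the other to control (the pair is the blocking factor)."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    assignment = {}
    for school_a, school_b in pairs:
        if rng.random() < 0.5:
            assignment[school_a], assignment[school_b] = "treatment", "control"
        else:
            assignment[school_a], assignment[school_b] = "control", "treatment"
    return assignment

# 26 hypothetical matched pairs (52 schools), matched on key characteristics
pairs = [(f"school_{2 * i}", f"school_{2 * i + 1}") for i in range(26)]
groups = assign_matched_pairs(pairs)
```

Blocking on the pair guarantees balance by construction: each pair contributes exactly one school to each condition, so the design always yields 26 treatment and 26 control schools.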

The two-year intervention for this study began in summer 2008, with the first two-day professional development session led by the Success in Sight McREL intervention team. Participants were introduced to the Success in Sight approach to comprehensive school improvement and began planning their first fractal experience (small change initiative). Following the four- to six-week fractal experience, the assigned Success in Sight facilitators began meeting with the school leadership team onsite. Each year, schools participate in 3 two-day professional development sessions and five onsite mentoring sessions with their Success in Sight change facilitators. Schools must participate for two years in Success in Sight to produce the hypothesized effects on student learning.

Schools in the control group participate in their regular school improvement activities and continue with business as usual. Control schools do not receive any of the added support provided by Success in Sight.

Because the intervention being investigated was developed by McREL, assessing its effectiveness poses a potential conflict of interest for Regional Educational Laboratory (REL) Central. To mitigate this, REL Central staff are implementing the intervention and have hired an external research organization, ASPEN Associates, to conduct the study. To provide unbiased answers to the research questions, REL Central has built a "firewall" between the researchers conducting the study and the program staff delivering the intervention: policies, structures, and procedures that limit communication between the researchers and the change facilitators and restrict access to the data collected, maintaining the security of that information.

Key outcomes and measures

Success in Sight uses five outcomes to represent how the intervention builds whole-school capacity for change. The primary outcome of this study is student academic achievement. The other four outcomes relate to the use of four school improvement practices: data-based decisionmaking, purposeful community, shared leadership, and effective school practices. These outcomes reflect the contexts, structures, and procedures associated with the organizational changes that the intervention targets. The effects of Success in Sight are examined one to two years after the intervention using scale scores from state assessments in reading and math and from the Teacher Survey of Policies and Practices on the extent to which a school engages in each of the four school improvement practices.

Data-based decisionmaking refers to an understanding of and engagement in activities that use data to support teaching and learning, including monitoring of student progress at various levels within the school. Purposeful community denotes a shared belief in the staff's ability to accomplish goals and access resources, structures, and procedures that support teachers in working together to accomplish them. Shared leadership refers to having a common vision and distributing leadership to attain it. Effective school practices include school improvement practices shown to be related to improved student achievement and targeted by the intervention: differentiated instruction based on data, safe and orderly school climate, and a focus on improving academic achievement.

Data for the four outcome measures on school improvement practices are assessed using the Teacher Survey of Policies and Practices (Apthorp et al. 2004; Mid-continent Research for Education and Learning 2005). This instrument was developed by McREL in 2004 to assess the characteristics of low-performing schools that perform better than expected, not to assess the impact of Success in Sight or the five stages of this intervention. Instead, the survey reflects common school characteristics related to student achievement: instruction, school environment, professional community, and leadership.

Other instruments were identified and reviewed for scales that might be used in the study. But the Teacher Survey of Policies and Practices was selected because it addressed all four outcomes for this study in the most comprehensive manner, was designed for use in low-performing schools, has questions worded appropriately for the school as the unit of analysis, and demonstrated the highest quality technical characteristics (reliability and validity). To fully capture the construct of purposeful community, a Collective Efficacy Scale developed by Goddard (2002) was included in this survey.

Data collection approach

Data collected throughout the study are used to describe the local context of the treatment and control groups, estimate the intervention's impact on student achievement and school-level reform practices, and describe the fidelity of the intervention's implementation.

The researchers conduct baseline site visits to all treatment and control schools to document the local conditions and context through a focus group with the school leadership team, a focus group with a cross-section of staff, and an interview with the school principal. After the intervention the researchers will conduct short interviews with key contacts in schools to determine whether the local conditions at baseline have changed.

The online teacher survey is used to assess the extent to which schools engage in the four key reform practices during the study. All classroom teachers and specialists—treatment and control—complete the survey at baseline and at the end of years 1 and 2. In addition, teachers report general demographic information (such as certification and years of teaching).

Student scores from the statewide NCLB assessments in each state are used to measure student achievement. Scores from reading and math are collected for grades 3, 4, and 5. The researchers request other demographic data on students (such as race/ethnicity, eligibility for free or reduced-price lunch, and disability status) for inclusion in the achievement data files.

The Success in Sight program records and baseline site visits by the researchers capture data on implementation fidelity. The Success in Sight intervention team tracks dates, duration, and participation in the professional development sessions and onsite mentoring sessions.

Analysis plan

Data will be analyzed to determine the effects of Success in Sight on student achievement in participating schools after two years of the intervention. Further analyses will estimate the effects of Success in Sight on school-level reform practices. Outcome data are collected at the student and teacher levels. No subgroup analyses (by race/ethnicity or socioeconomic status) will be conducted. Intervention effects will be estimated at the school level using multilevel modeling to account for the random assignment of schools and for the sources of variability that result from the nested structure of the school environment.

Effects of Success in Sight on student outcomes will be analyzed using a three-level hierarchical model. The level 1 model nests students within schools and includes grade level (3 or 4) as a predictor. This model also includes a student-level covariate to account for baseline achievement. The level 2 model includes an indicator for assignment to intervention or control as a predictor of mean school achievement to estimate the effect of the intervention on student achievement. This model also includes a school-level covariate (school mean on achievement) to explain additional between-school variance not explained in the level 1 model, to control for prior achievement, and to improve the power of the estimation of the intervention's effect. The level 3 model includes indicators for the matched pairs used in random assignment. This model is run on two student samples: students who remained in the school for the entire two years ("stayers") and the complete school population that includes all students ("stayers" and "in-movers"). Because the achievement data come from two different state assessments, they will be analyzed by state, and the estimates will be combined using a meta-analytic approach.
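The final pooling step can be illustrated with a fixed-effect (inverse-variance) combination, one common meta-analytic approach. The per-state effect sizes and standard errors below are hypothetical, and the study's actual weighting scheme may differ.

```python
import math

def combine_estimates(estimates, std_errors):
    """Pool per-state impact estimates with inverse-variance weights
    (a fixed-effect meta-analysis)."""
    weights = [1.0 / se ** 2 for se in std_errors]  # precision of each estimate
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))       # standard error of the pooled effect
    return pooled, pooled_se

# hypothetical per-state impact estimates (effect size, standard error)
pooled, se = combine_estimates([0.25, 0.15], [0.10, 0.08])
```

The more precisely estimated state (the one with the smaller standard error) receives proportionally more weight, so the pooled effect lies closer to its estimate.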

Effects of Success in Sight on the four school-level reform practices are also analyzed using three-level hierarchical models. Four separate models are run, one for each reform practice (data-based decisionmaking, purposeful community, shared leadership, and effective school practices). The level 1 model nests teachers within schools and includes a teacher-level covariate to account for baseline reform practices. The level 2 model includes an indicator for assignment to treatment or control condition; district and school size are treated as predictors of mean engagement in the reform practice to estimate the effect of the intervention on these practices. The level 2 model also includes a school-level covariate (school mean on the reform practice) to explain additional between-school variance not explained in the level 1 model, to control for baseline engagement in the reform practice, and to improve the power of the estimation of the intervention's effect. The level 3 model includes indicators for each of the 26 matched pairs.
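Under the matched-pair design, the school-level treatment effect on a reform-practice scale can be approximated as the mean within-pair difference in school means. This is a minimal sketch that ignores the covariates and variance components of the full three-level model, and the school names, assignment, and scale values are hypothetical.

```python
def pair_difference_estimate(school_means, pairs, assignment):
    """Average (treatment - control) difference in school-level scale
    means across matched pairs; a simplification of the full model."""
    diffs = []
    for a, b in pairs:
        # order each pair as (treatment school, control school)
        treat, ctrl = (a, b) if assignment[a] == "treatment" else (b, a)
        diffs.append(school_means[treat] - school_means[ctrl])
    return sum(diffs) / len(diffs)

# hypothetical school means on a 1-5 reform-practice scale
means = {"s1": 3.5, "s2": 3.0, "s3": 2.8, "s4": 3.2}
effect = pair_difference_estimate(
    means,
    pairs=[("s1", "s2"), ("s3", "s4")],
    assignment={"s1": "treatment", "s2": "control",
                "s3": "control", "s4": "treatment"},
)
```

Differencing within pairs removes the between-pair variation that the blocking was designed to control, which is what the level 3 indicators accomplish in the hierarchical model.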

Analyses are conducted to examine how attrition affects the findings. Results are presented that show the percentage of teachers in each group for whom outcome data could not be obtained and the number of schools in each group that did not complete the study. Analyses compare baseline results for the initial sample in each group with baseline results for the final sample, to determine whether the final sample differs from the initial sample.
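The attrition check above can be sketched as a baseline comparison between the initial sample and whichever units (teachers or schools) complete the study. The identifiers and baseline values below are hypothetical.

```python
def attrition_summary(baseline_means, completed_ids):
    """Report the attrition rate and compare the baseline mean of the
    initial sample with that of the final (non-attrited) sample."""
    final = {k: v for k, v in baseline_means.items() if k in completed_ids}
    mean = lambda d: sum(d.values()) / len(d)
    return {
        "attrition_rate": 1 - len(final) / len(baseline_means),
        "initial_baseline_mean": mean(baseline_means),
        "final_baseline_mean": mean(final),
    }

# hypothetical teacher baseline scores; teacher t4 left before follow-up
report = attrition_summary({"t1": 1.0, "t2": 2.0, "t3": 3.0, "t4": 4.0},
                           {"t1", "t2", "t3"})
```

A large gap between the two baseline means would signal that attrition is nonrandom and may bias the impact estimates.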

Principal investigators

Elisabeth Palmer, PhD
ASPEN Associates, Inc.

Helen Apthorp, PhD
Mid-continent Research for Education and Learning

Contact information

Dr. Helen Apthorp
Mid-continent Research for Education and Learning
4601 DTC Blvd., Suite 500
Denver, CO 80237-2596
Phone: (303) 632-5622
Fax: (303) 337-3005
hapthorp@mcrel.org

Region: Central

References

Apthorp, H., Barley, Z., Englert, K., Gamache, L., Lauer, P., Van Buhler, R., and Martin-Glenn, M. (2004). McREL's study of academic success in high-needs schools: Mid-point progress and measurement viability (Deliverable #2004-11). Aurora, CO: Mid-continent Research for Education and Learning.

Goddard, R. (2002). A theoretical and empirical analysis of the measurement of collective efficacy: The development of a short form. Educational and Psychological Measurement, 62(1), 97–110.

Marzano, R. J. (2003). What works in schools: Translating research into action. Alexandria, VA: Association for Supervision and Curriculum Development.

Marzano, R. J., Pickering, D. J., and Pollock, J. E. (2001). Classroom instruction that works. Alexandria, VA: Association for Supervision and Curriculum Development.

Marzano, R. J., Waters, T., and McNulty, B. A. (2005). School leadership that works: From research to results. Alexandria, VA: Association for Supervision and Curriculum Development.

Mid-continent Research for Education and Learning. (2005). Final report: high-needs schools—what does it take to beat the odds? (Technical Report). Aurora, CO: Mid-continent Research for Education and Learning.
