National Evaluation of the Comprehensive Technical Assistance Centers

NCEE 2011-4031
August 2011

Executive Summary

This final report presents findings from a multi-year evaluation of the Comprehensive Technical Assistance Centers, a federally funded program that provides technical assistance to states in connection with the Elementary and Secondary Education Act, as reauthorized by the No Child Left Behind (NCLB) Act of 2001. The law authorizing the Centers, the Educational Technical Assistance Act of 2002, mandated that a national evaluation of the program be conducted by the Institute of Education Sciences (IES). The legislation indicated that the evaluation should "include an analysis of the services provided . . . [and] the extent to which each of the comprehensive centers meets the objectives of its respective plan, and whether such services meet the educational needs of State educational agencies, local educational agencies, and schools in the region." The program evaluation was conducted by Branch Associates, Inc., Decision Information Resources, Inc., and Policy Studies Associates, Inc.

With the redesign of the Center program, technical assistance was directed primarily to states. To build states' capacity for carrying out NCLB responsibilities, which include assistance to struggling school districts and schools as well as other areas of NCLB program administration, the Center program was designed to supply ongoing technical assistance in using research knowledge and promising practices. There are two types of Centers:

  • Sixteen Regional Comprehensive Centers (RCCs) are responsible for providing ongoing technical assistance to the states assigned to their region, working with one to eight states per Center.
  • Five Content Centers (CCs) are expected to supply knowledge to RCCs and work with RCCs to assist states in each CC's specialty area: Assessment and Accountability, Instruction, Teacher Quality, Innovation and Improvement, or High Schools.

Given this program design, the evaluation provides a description of Center operations. It also reports on assistance delivery and contributions to state capacity as judged by managers in state education agencies (SEAs), on quality as judged by panels of subject-matter experts, and on relevance and usefulness as judged by practitioners who participated in Center activities or received Center products. The evaluation data, collected annually, pertain to the Center program years 2006–07, 2007–08, and 2008–09, covering three of the five program years starting with the second year of program funding.1

  • The operations of the RCCs and CCs were consistent with the Center program design. RCCs and CCs assessed client needs annually to determine their technical assistance plans, with informal communications the most commonly reported mode in 2008–09. The most common activity found in sampled RCC projects2 was "ongoing consultation and follow up" (82, 93, and 91 percent of the sampled RCC projects in 2006–07, 2007–08, and 2008–09, respectively), consistent with the charge to provide frontline assistance to states on an ongoing basis. In CC projects the most common activity was "research collections and synthesis" (more than 70 percent of sampled projects in each year), consistent with the CCs' prescribed focus on synthesizing, translating, and delivering knowledge to RCCs and states. Over the three years studied, RCCs and CCs became increasingly involved in each other's projects. Among sampled RCC projects, the percentage that included direct assistance from CC staff was 18 percent in 2006–07, 22 percent in 2007–08, and 30 percent in 2008–09. The percentage of CC projects that included RCC direct assistance was 11 percent in 2006–07, 12 percent in 2007–08, and 38 percent in 2008–09. In addition, by 2008–09 all 16 RCCs reported receiving knowledge resources from CCs and all 5 CCs reported providing knowledge resources to RCCs.
  • Centers addressed the most frequently cited state priority of "statewide systems of support," and an increasing number of state managers reported each year that Center assistance served their purposes. "Systems of support" consists of an infrastructure for delivering onsite assistance, together with strategies and materials designed to help struggling schools and districts improve student performance. The most widespread NCLB-related priority for state managers was "statewide systems of support or school support teams," identified as a major or moderate priority for technical assistance by more than 90 percent of managers in each year (with each state weighted equally). Of this group of state managers, more than 90 percent reported each year that the Centers delivered assistance related to this responsibility. "Systems of support" was not only the most widely reported state priority but also the topic addressed in more Center projects each year than any other, according to the inventories compiled by the Centers (19 percent of all projects in 2006–07, 25 percent in 2007–08, and 21 percent in 2008–09, compared with 10 percent or fewer projects addressing any other topic). With each state weighted equally in the analysis, the proportion of state agency managers reporting that assistance from the Centers had "served the state's purposes completely" rose from about one-third (36 percent) in 2006–07 to more than half (56 percent) in 2008–09.
  • Center assistance was reported by state managers as having expanded state capacity in "statewide systems of support," which has been a predominant focus of Center assistance. Among state managers who reported statewide systems of support or school support teams as a state priority for technical assistance in 2008–09, 82 percent credited Center assistance with a "great" or "moderate" expansion of state capacity in this area. In other areas of state responsibility that state managers identified as priorities for technical assistance, the percentage reporting a great or moderate expansion of state capacity in 2008–09 ranged from 77 percent (for research-based curriculum, instruction, or professional development in academic subjects) to 39 percent (for NCLB's provisions on supplemental educational services and choice).
  • On average across each of the three years, expert panels rated sampled project materials between "moderate" and "high" for quality, and project participants rated the sampled projects "high" for relevance and usefulness. Program-wide average ratings, on a 5-point scale with 5 at the high end, were 3.34 in 2006–07, 3.51 in 2007–08, and 3.57 in 2008–09 for technical quality; 3.94, 4.08, and 4.15, respectively, for relevance; and 3.69, 3.95, and 3.96, respectively, for usefulness.3 In addition, the average quality rating was consistently higher among CC projects than RCC projects by more than one-half of a standard deviation, although RCC ratings rose each year.4 The average ratings of relevance were higher for RCC than CC projects in 2006–07 and 2007–08, although CC ratings rose each year; there were no consistent differences in the usefulness ratings between RCCs and CCs.


1 Notice Inviting Applications for New Awards for Fiscal Year 2005. Federal Register. (2005, June 3). 70(106), 32583–94. The awards were subsequently extended.
2 For the purposes of this evaluation, the team identified "projects" as a common level of aggregation of Center activities that would constitute units large enough for review and rating, but focused enough for coherence. A "project" was defined as a group of closely related activities and/or deliverables designed to achieve a specific outcome for a specific audience.
3 This averaging procedure across Centers and across projects was designed so that each Center contributed equally to the overall mean for the program (or for its type of Center, where RCC means were compared with CC means), and each project sampled from a Center contributed equally to the Center mean.
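
As an illustration, this two-stage averaging can be sketched as follows; the Center names and ratings below are hypothetical placeholders, not data from the evaluation.

    # Hypothetical sketch of the two-stage averaging described above:
    # each sampled project contributes equally to its Center's mean, and
    # each Center mean contributes equally to the program-wide mean.
    # The Centers and ratings are illustrative placeholders only.
    ratings_by_center = {
        "Center A": [3.2, 3.8, 3.5],
        "Center B": [4.0, 3.6],
        "Center C": [3.9, 3.1, 3.4, 3.7],
    }

    def center_mean(project_ratings):
        # Unweighted mean across a Center's sampled projects.
        return sum(project_ratings) / len(project_ratings)

    def program_mean(ratings_by_center):
        # Unweighted mean of the Center means, so each Center counts equally
        # regardless of how many of its projects were sampled.
        center_means = [center_mean(r) for r in ratings_by_center.values()]
        return sum(center_means) / len(center_means)

    print(round(program_mean(ratings_by_center), 2))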
4 All project-level differences described in this report (e.g., more, higher) reflect a difference of one-half of one pooled standard deviation between groups of projects. Using a metric derived from Cohen (1988), the evaluation team estimated Cohen's d (an effect-size estimate defined as the difference in means divided by the pooled standard deviation) and adopted Cohen's logic for what would be considered a moderate difference. For this study, inferential tests of statistical significance were not conducted to examine project-level differences in these non-probability samples. All participant-level differences described in this report reflect statistical tests of significance with a criterion value of p < .05.
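
Written out, the effect-size calculation referenced here follows the standard formulation of Cohen's d with a pooled standard deviation; the symbols below are generic and not taken from the report.

    % Cohen's d: difference in group means divided by the pooled standard
    % deviation; differences of about d >= 0.5 are treated as moderate here,
    % following Cohen (1988).
    d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
    \qquad
    s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}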