National Evaluation of the Comprehensive Technical Assistance Centers

NCEE 2011-4031
August 2011

Research Questions and Methods

The research priorities for the evaluation were primarily driven by the statute and focused on the following key research questions:

  1. How did the Regional Comprehensive Centers and Content Centers operate as part of the Comprehensive Technical Assistance Center program?
    • How did Centers develop, refine, and carry out their plans for technical assistance? How did they define their clients' educational needs and priorities?
    • What were the objectives of the technical assistance the Centers offered? What kinds of products and services were provided by the Centers?
    • How did the Regional Comprehensive Centers and Content Centers coordinate their work?
  2. What was the performance of the Comprehensive Centers in addressing state needs and priorities? How did their performance change over the period of time studied?
    • How did the Centers' state clients define their needs and priorities?
    • To what extent, as reported by states, did Center assistance expand state capacity to address underlying needs and priorities and meet the goals of NCLB?
    • To what extent did states rely on sources of technical assistance other than the Centers? What other sources of technical assistance did states use? How did the usefulness of Center assistance compare with that of assistance from other sources?
  3. To what extent was the assistance provided by the Centers of high quality, high relevance, and high usefulness?
    • Did the quality, relevance, or usefulness of Center assistance change over the period of time studied?
    • What was the variation in the quality, relevance, and usefulness of Center assistance across types of projects and participants?

To address the research questions above, the evaluation gathered information annually on the Center program for 2006–07, 2007–08, and 2008–09 from six data sources:

  • Management plans. The evaluation reviewed each Center's management plan as a data source on the Center's intended focus at the beginning of the year, drawing from the plans a list of topics on which the Center's objectives focused.
  • Project inventory forms and cover sheets. Each Center completed an inventory of its work in which closely related activities and/or deliverables were grouped into "projects," each designed to achieve a specific outcome for a specific audience. The Centers classified projects as major, moderate, or minor on the basis of the relative level of effort they reflected, and also classified them, according to the topics addressed, into 22 topical categories.7 At each stage, the evaluation team provided written guidance and training for inventory development, reviewed the Centers' drafts, and clarified definitions as needed. For projects sampled for the evaluation, the Centers prepared "cover sheets" providing brief descriptions and contexts for the activities and resources included in each project. The evaluation team used the cover sheets as a data source for coding project activities and resources.
  • Center staff interviews. In interviews using structured response categories, Center staff were asked how they planned their programs of work, how their plans evolved during the program year, and what they offered to clients with respect to the topics addressed, the delivery modes used, and their sources of content expertise.
  • Survey of senior state managers. Senior managers in state education agencies (SEAs) were surveyed about their state's technical assistance needs and about the assistance provided by the Centers, including the state's Regional Comprehensive Center (RCC) and any Content Centers (CCs) with which the state had worked.
  • Expert panel review. The evaluation's purposive sample of major and moderate projects (described under the participant survey below) was reviewed for quality by a panel of experts. Content experts were recruited and trained to use standard criteria to rate the technical quality of the sampled Center projects on the basis of a review of all project materials.
  • Survey of project participants. A representative sample of clients who had participated directly in the evaluation's purposive sample of major and moderate Center projects furnished descriptive information, through surveys, on the activities and resources that the projects had delivered to them. These clients included individuals working at the state level who had participated in RCC or CC projects, as well as RCC employees who were among the clients of CC projects. They also rated the relevance and usefulness of the sampled projects.


7 The 22 topics were: components of effective systems of support for states, districts, and schools; data use or data-driven decision making; formative assessment; reading; adolescent literacy; mathematics; dropout prevention; high school redesign or reform; transition to high school; special education curriculum, instruction, and professional development; special education assessment; English language learners; "highly qualified teacher" provisions of NCLB; teacher preparation and induction; teacher professional development; supplemental educational services; Response to Intervention; migrant education; Indian or Native American education; data management and compliance; assessment design; and parent involvement. In addition, projects that addressed none of these 22 topics were categorized as "other."