Baseline Analyses of SIG Applications and SIG-Eligible and SIG-Awarded Schools
NCEE 2011-4019
May 2011

3.1. Methodology

For the review of SIG applications, three researchers from the American Institutes for Research (AIR) led the process: Kerstin Carlson Le Floch, project director; Susan Bowles Therriault, senior researcher; and Susan Cole, senior researcher. Dr. Therriault developed and facilitated the coding and quality control process. Dr. Le Floch and Ms. Cole provided guidance and feedback, and both participated in the quality control data checks. All three researchers analyzed and synthesized the data.

3.1.1. Step 1: Data collection and capture

The primary data source for the analysis was the set of SIG applications approved by ED for all 51 SEAs.2 Researchers downloaded all but one of the state SIG applications from ED's Web site.3 Tennessee's SIG application was listed on the Web site, but the link was faulty; that application was obtained directly from Tennessee's SEA Web site.

To prepare for data capture, the three lead researchers reviewed ED's Guidance on School Improvement Grants,4 ED's School Improvement Grants Application,5 and the first set of nine approved SEA SIG applications in June 2010. Based on these resources and a review of the state SIG application form released by ED,6 the lead researchers identified four topic areas that cover the key elements of the state applications:

  • SEA definitions and identification of persistently lowest-achieving schools;
  • SEA SIG priorities (e.g., whether all, some, or none of the eligible Tier I, Tier II, and Tier III schools would be served; availability of SIG models; and SEA-elected waivers);
  • LEA requirements (e.g., determining LEA capacity, metrics for measuring progress, reporting requirements); and
  • SEA strategies for building LEA capacity (e.g., use of the five percent reserve funds and mechanisms for supporting SIG implementation).

Data were collected from the following sections of Part I SEA Requirements: A. Eligible Schools, B. Evaluation Criteria, C. Capacity, D. Descriptive Information, F. SEA Reservation and H. Waivers. Sections on Assurances and Consultation with Stakeholders were standard requirements for approval of the SIG application and were excluded because there was no variation in these sections among states. Data from Part II LEA Requirements (specifically A. Schools to be Served, B. Descriptive Information and C. Budget) were used only to supplement or verify information gathered from Part I of the applications, as these sections focus on district rather than state policies.

The lead researchers developed an Excel-based data capture workbook to record the data compiled on these key topics. The data capture tool was divided into worksheets for each topic, with one row for each state. For some data elements, the research team entered text data (cut and pasted from the SIG application). For other elements, the research team inserted numbers, yes or no responses, or short answers. The cells with closed-ended questions had drop-down menus with response options. Because the state SIG applications followed the outline provided by ED, a given piece of information appeared in the same section of the application across states. Thus, each column of the data capture workbook noted the relevant section of the SEA application. For a full list of the elements examined in the data capture workbook, see Appendix A.
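Although the study team worked in Excel rather than in code, the logic of the workbook can be illustrated with a brief sketch: one record per SEA, closed-ended fields restricted to fixed response options (mirroring the drop-down menus), and open-ended fields holding text pasted from the application. The field names and response options below are hypothetical and do not reproduce the study's actual data elements.

    # Minimal sketch (Python) of one row of the data capture workbook and a check
    # that closed-ended fields use only the permitted drop-down options.
    ALLOWED_RESPONSES = {
        "serves_tier_iii_schools": {"Yes", "No"},               # hypothetical closed-ended element
        "intervention_models_available": {"All four", "Subset"},
    }

    def validate_record(record):
        """Return a list of problems found in one SEA's row."""
        problems = []
        for field, allowed in ALLOWED_RESPONSES.items():
            value = record.get(field)
            if value not in allowed:
                problems.append(f"{field}: {value!r} not among {sorted(allowed)}")
        return problems

    example_row = {
        "sea": "Tennessee",
        "application_section": "Part I, Section B",             # relevant section of the SEA application
        "serves_tier_iii_schools": "Yes",
        "intervention_models_available": "All four",
        "monitoring_narrative": "Text cut and pasted from the application ...",
    }

    print(validate_record(example_row))                         # an empty list means the closed-ended fields are valid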

The lead researchers piloted the workbook, reviewing four randomly selected SEA applications and identifying topics in the applications that were not captured. Based on these test cases, the researchers refined the data capture workbook by adding data elements. For example, after testing the data capture tool, the lead researchers added fields to capture whether an SEA has the authority to take over schools and the waivers for which SEAs applied.

Three strategies were used in Step 1 to ensure reliability of the data capture process: training of all researchers, on-going guidance, and continuous data checks.

Training. After the data capture workbook was developed, a total of eight researchers were trained to individually review applications and capture data in the workbook. The training consisted of a review of ED's SIG guidelines and the state application form, discussion and guidance on the data capture workbook, and a group exercise focused on capturing the data from one SEA application. During the process, team members were trained to read their assigned SEA applications at least twice: once to get an overview of the state's approach, and a second time while completing the data capture workbook. For specific sections, the team members were instructed to quote directly from applications (see Appendix A for more details). The lead researchers provided one-on-one guidance and reviewed the initial entries of all researchers.

On-going guidance. To ensure reliable data entry, the team leader provided team members with on-going guidance. Team members participated in ten meetings over three months to discuss specific SEA applications, clarify coding categories, and identify data entry discrepancies. Once discrepancies were resolved, team members returned to earlier applications to add or clarify information as appropriate. For example, the team members found that some SEAs planned to use a rubric for determining LEA capacity, others planned to use a rubric for reviewing LEA applications, and still others planned to use the same rubric for both purposes. Upon reviewing all data entries, the team leader clarified the differences among the three categories, and each member went back to the SEA applications to confirm the accuracy of the data entries.

Continuous data checks. On a weekly basis, the team leader reviewed all entries to ensure consistency across the SEA applications for which data were entered and across data capture categories. Each team member reviewed from two to ten SEA applications. Upon completion of the data capture from all 51 SEA applications, the team leader and another senior team member conducted a final review of all data to cross-check the entries and ensure consistency of the data captured. During this process, at least one entry from each researcher was selected for a second review. The secondary reviewers then added or corrected information directly in the data capture workbook.
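As one illustration of this spot-check logic (the study team performed these reviews by hand, not in code), the sketch below samples at least one entry per researcher for a second review; the record fields, including entered_by, are hypothetical.

    import random

    # Pick at least one data capture entry per researcher for re-review.
    def sample_for_second_review(records, seed=0):
        random.seed(seed)
        by_researcher = {}
        for rec in records:
            by_researcher.setdefault(rec["entered_by"], []).append(rec)
        return [random.choice(entries) for entries in by_researcher.values()]

    records = [
        {"sea": "Ohio", "entered_by": "Researcher A"},
        {"sea": "Iowa", "entered_by": "Researcher B"},
        {"sea": "Utah", "entered_by": "Researcher A"},
    ]

    for rec in sample_for_second_review(records):
        print(rec["sea"], "selected for second review (entered by", rec["entered_by"] + ")")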


3.1.2. Step 2: Data coding and analysis

Once data capture workbooks were completed for all states and the District of Columbia, the three lead researchers developed a coding and analysis plan. First, the researchers reviewed the data capture elements to determine which categories needed further specification (in Appendix A, those designated "short answer" or "cut and paste from application"). For example, the narrative from each SEA application included several strategies for monitoring the implementation of the intervention models in SIG-awarded schools. To determine the prevalence of different strategies, the study team reviewed the application text from all states and identified the three most common categories: use of on-line monitoring tools, informal "check-in" meetings or conference calls, and monitoring site visits to SIG schools.

For all components of the state SIG applications that required a more detailed level of coding, the lead researchers identified a list of potential codes based on a review of the data elements across SEA applications. The researchers then developed a master list of codes with associated definitions. Next, the researchers developed state-by-state tables with one row per state containing the relevant application text and one column per coding category. The text for each state was reviewed and assigned a category (or multiple categories, if the categories were not mutually exclusive); results were then tallied across all state applications.
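The coding-and-tally step can be sketched as follows; the monitoring codes and state assignments are invented for illustration and do not reproduce the study's coding scheme or results.

    from collections import Counter

    # Master list of codes with associated definitions (hypothetical examples).
    monitoring_codes = {
        "ONLINE_TOOL": "SEA plans to use an on-line monitoring tool",
        "CHECK_IN": "SEA plans informal check-in meetings or conference calls",
        "SITE_VISIT": "SEA plans monitoring site visits to SIG schools",
    }

    # One entry per state; codes are not mutually exclusive, so a state may carry several.
    state_codes = {
        "State A": ["ONLINE_TOOL", "SITE_VISIT"],
        "State B": ["SITE_VISIT"],
        "State C": ["CHECK_IN", "SITE_VISIT"],
    }

    # Tally the number of states assigned each code.
    tally = Counter(code for codes in state_codes.values() for code in codes)
    for code, count in tally.most_common():
        print(f"{code}: {count} states -- {monitoring_codes[code]}")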

Reliability and validity. To ensure a reliable and valid coding process, at least two researchers reviewed each data element in the state-by-state coding tables.

First, two researchers coded all of the short answer or text data for each of the states, and their codes were compared to identify discrepancies. Initial inter-rater reliability was calculated from this first round of coding as the proportion of codes on which the two researchers agreed. Across all data elements, average inter-rater agreement was 96 percent, with agreement on individual elements ranging from 93 percent to 100 percent.
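The statistic described here is simple percent agreement; a minimal sketch of the calculation, using fabricated coder data, is shown below. Items on which the coders disagree would be passed to a third coder, as described next.

    # Proportion of items (states, for a given data element) assigned the same code by both coders.
    def percent_agreement(coder1, coder2):
        items = coder1.keys() & coder2.keys()
        agreed = sum(coder1[item] == coder2[item] for item in items)
        return agreed / len(items)

    coder1 = {"State A": "ONLINE_TOOL", "State B": "SITE_VISIT", "State C": "CHECK_IN"}
    coder2 = {"State A": "ONLINE_TOOL", "State B": "SITE_VISIT", "State C": "SITE_VISIT"}

    print(f"Inter-rater agreement: {percent_agreement(coder1, coder2):.0%}")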

For the cases in which the first two researchers disagreed on the code, a third researcher coded the text as well and reconciled the discrepancy. When the codes were finalized, the states in each category were tallied; the results of these coding analyses are presented in the following sections.


2 State education agencies include all 50 states and the District of Columbia.
3 U.S. Department of Education. (2010). School Improvement Fund: Summary of Applicant Information. Retrieved September 25, 2010, from http://www2.ed.gov/programs/sif/summary/index.html#nm
4 U.S. Department of Education. (2010). Guidance on School Improvement Grants Under Section 1003(g) of the Elementary and Secondary Education Act of 1965.
5 Ibid.
6 U.S. Department of Education. (2010). School Improvement Grants Application: Section 1003(g) of the Elementary and Secondary Education Act, CFDA Numbers: 84.377A; 84.388A.