The following formula was used to determine the statistical significance of the differences between independent groups:

(P1 - P2)^2 / (SE1^2 + SE2^2)

For example, this formula could be used to determine whether the difference in the percentages of students who report a particular view among students with learning disabilities and among those with hearing impairments is greater than would be expected to occur by chance. In this formula, P1 and SE1 are the first percentage and its standard error, and P2 and SE2 are the second percentage and its standard error. The squared difference between the two percentages of interest is divided by the sum of the two squared standard errors.
If the result of the calculation is larger than 3.84 (i.e., 1.96 squared), the difference is statistically significant at the .05 level; that is, it would occur by chance fewer than 5 times in 100. If the result of the calculation is at least 6.63, the significance level is .01; results of 10.8 or greater are significant at the .001 level (Owen 1962, pp. 12, 51).
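The test described above can be sketched in a few lines of Python. The function name, the example percentages, and their standard errors are illustrative assumptions, not values from the report; the critical values are those cited in the text.

```python
def significance_level(p1, se1, p2, se2):
    """Test statistic for the difference between two independent
    percentages: (P1 - P2)^2 / (SE1^2 + SE2^2), compared with
    chi-square critical values for 1 degree of freedom."""
    stat = (p1 - p2) ** 2 / (se1 ** 2 + se2 ** 2)
    # Critical values from the text: 3.84 (.05), 6.63 (.01), 10.8 (.001)
    if stat >= 10.8:
        return stat, 0.001
    if stat >= 6.63:
        return stat, 0.01
    if stat >= 3.84:
        return stat, 0.05
    return stat, None  # not significant at the .05 level

# Hypothetical example: 64% (SE 2.1) vs. 55% (SE 2.4)
stat, level = significance_level(64.0, 2.1, 55.0, 2.4)
```

In this hypothetical case the statistic is (64 - 55)^2 / (2.1^2 + 2.4^2) = 81 / 10.17, which exceeds 6.63 but not 10.8, so the difference would be significant at the .01 level.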
Testing for the significance of differences in responses to two survey items for the same individuals involves identifying, for each youth, the pattern of response to the two items. Responses to each item (e.g., the youth reported relying "a lot" on parents for support, yes or no, and reported relying "a lot" on friends for support, yes or no) are scored as 0 or 1, producing difference values for individual students of +1 (responded affirmatively to the first item but not the second), 0 (responded affirmatively to both or neither item), or -1 (responded affirmatively to the second item but not the first). The test statistic is the square of a ratio, where the numerator of the ratio is the weighted mean change score and the denominator is an estimate of the standard error of that mean. By the Central Limit Theorem, the ratio is approximately normally distributed for samples of the sizes included in the analyses, so this test statistic approximately follows a chi-square distribution with one degree of freedom; i.e., an F(1, infinity) distribution.
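The dependent-samples procedure above can be sketched as follows. This is a simplified illustration with illustrative names: it uses a simple weighted variance estimator, whereas the report's standard errors likely account for the complex survey design.

```python
import math

def paired_difference_test(resp1, resp2, weights):
    """Simplified sketch of the dependent-samples test described above.

    resp1, resp2: 0/1 responses by the same youth to the two items.
    weights: sampling weights for each youth.
    Returns the squared ratio of the weighted mean change score to an
    estimate of its standard error (compare with chi-square, 1 df).
    """
    d = [a - b for a, b in zip(resp1, resp2)]  # change scores: +1, 0, or -1
    w_total = sum(weights)
    mean_d = sum(w * di for w, di in zip(weights, d)) / w_total
    # Simple weighted variance of the change scores; the report's
    # estimator for a complex survey design would differ.
    var_d = sum(w * (di - mean_d) ** 2 for w, di in zip(weights, d)) / w_total
    se_mean = math.sqrt(var_d / len(d))
    return (mean_d / se_mean) ** 2

# Hypothetical data: four youths, equal weights
stat = paired_difference_test([1, 1, 0, 1], [0, 1, 1, 0], [1, 1, 1, 1])
```

With these made-up responses the change scores are +1, 0, -1, +1, giving a weighted mean of 0.25; the resulting statistic would be compared with the chi-square critical values given earlier.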
Regardless of whether comparisons are for independent or dependent samples, a large number of statistical analyses were conducted and are presented in this report. Because no explicit adjustments were made for multiple comparisons, the likelihood of finding at least one statistically significant difference when no difference exists in the population is substantially larger than the Type I error rate for each individual analysis. This may be particularly true when many of the variables on which the groups are being compared are measures of the same or similar constructs, as is the case in this report. To partially compensate for the number of analyses that were conducted, we used a relatively conservative p value of .01. The text mentions only differences that reach a level of statistical significance of at least p < .01. If no level of statistical significance is reported, the group differences described do not attain the p < .01 level of statistical significance. Readers also are cautioned that the meaningfulness of differences reported here cannot be inferred from their statistical significance.