Data for the study's outcome measures used to estimate intervention impacts were collected through student surveys in both intervention and control schools. The student surveys provided the data to address the main impact research questions regarding school violence. Peer nomination measures, in which data about individual students are collected simultaneously from many different peers, were not considered because this type of collection would have been too time-consuming and impractical within a large multiyear effort. For this report, the baseline data collection for students occurred in fall 2006, and the follow-up to estimate impacts after 3 years occurred in spring 2009. Teacher data were collected through a survey administered to a random sample of 24 full-time teachers at each school in the spring of each year to assess program impacts beyond the main outcomes (school climate, victimization, and feelings of safety). In addition to outcome data, the study team collected implementation data through the teacher survey, class records, annual interviews with school prevention coordinators and teachers, and classroom observations. Annual interviews were conducted with the following: the person at each intervention and control school most knowledgeable about existing violence prevention activities; three randomly selected RiPP teachers at each intervention school, who were asked about their experiences teaching RiPP; and three randomly selected members of the school management team at each intervention school, who were asked about their experiences implementing Best Behavior. Finally, the evaluation team observed RiPP sessions in three randomly selected classrooms (one per grade level) in each intervention school.
Student survey administration procedures were designed to address potential problems with the reliability of self-reported data on sensitive topics such as violence and victimization. To safeguard the confidentiality of students' data and encourage candid responses, the following measures were used: (1) before beginning the survey, students were read standard instructions advising them how to complete the survey and how the data would be used and safeguarded; (2) students were told that the survey was voluntary and that they could skip any questions they did not wish to answer; (3) students responded by filling in circles for each item, so no handwritten responses by which students might be identified were required; (4) special labels with peel-away portions left only a bar code, and no name, on the survey booklets; (5) members of the evaluation team administered the survey, and no school staff were allowed to circulate among the students while the survey was in progress; (6) seating was arranged so that students had an empty seat between them, when possible; and (7) completed surveys were placed in large envelopes that were sealed and taken from the school by the evaluation team.
The primary outcomes are student violence and student victimization, both measured through the student surveys. For each of these two outcomes, two additional subindices were created to better understand any differences between intervention and control schools with regard to specific types of violence. A second set of indices was created to examine possible secondary effects of the intervention, beyond the primary effects: (1) student safety concerns; (2) teacher safety concerns; (3) teacher victimization; and (4) student prosocial behaviors. Finally, a third set of indices was created to examine possible intermediate effects of the intervention. The theoretical model for the combined intervention predicts that changes in these areas would precede changes in the primary outcomes; they included (1) student perceptions of behavior expectations; (2) student attitudes toward violence; and (3) student self-reported coping strategies.