
Summer Research Training Institute: Cluster-Randomized Trials: June 21–July 2, 2009

Sunday, June 21, 2009
Sunday afternoon Arrival at Embassy Suites, 1811 Broadway, Nashville
6:30 Meet in lobby; hotel shuttle to Wyatt Center
7:00 Dinner in Wyatt Center Rotunda
Welcome: Lynn Okagaki, IES Commissioner for Education Research, U.S. Department of Education
Monday, June 22, 2009
All instructional and group sessions are held in the Wyatt Center
Session 1 Specifying conceptual and operational models; formulating questions
8:00 – 10:00 Instructor: Mark Lipsey
This session covers: (1) developing the rationale for the importance of the intervention; (2) determining and justifying the type of study (development, efficacy, or scale-up), including a review of what is already known in the area, relevant pilot data and preliminary studies; (3) specifying the “theory of change” underlying the intervention, including a conceptual model specifying key cause-effect constructs and their linkages and an operational model of the processes and activities that affect the outcomes; and (4) framing the question precisely so that a trial can be designed to provide an answer that will be useful.
10:00 – 10:30 Break
Session 2 Describing and quantifying outcomes
10:30 – 12:30 Instructor: Mark Lipsey
This session covers considerations for identifying relevant variables and selecting tests and measures in education trials, including (1) reliability, validity, sensitivity, and relevance of measures; (2) specifying proximal (mediating) and distal outcomes/variables; (3) alignment and overalignment of measures with the intervention; (4) continuity across ages/grades for follow-up measures; (5) developmental appropriateness of measures; (6) feasibility of use; (7) respondent burden; (8) efficiency—minimizing overlap among measures; (9) attention to possible unexpected as well as expected outcomes; (10) issues associated with correlated measures, creating composite measures; and (11) measurement issues associated with special populations.
12:30 – 1:30 Lunch
Session 3 Assessing the cause
1:30 – 3:30 Instructor: David Cordray
This session defines the causal agent in intervention studies. It establishes a framework for articulating intervention models, operationalizing an assessment of implementation fidelity, and indexing the achieved relative strength (ARS) of the intervention contrast. In doing so, it highlights (1) the importance of clear specification of the intervention as a basis for assessing fidelity/implementation, and (2) measuring the relevant experience in the control group.
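For orientation only (the notation here is illustrative, not necessarily the instructor's), the achieved relative strength of the contrast is often expressed as a standardized difference in fidelity between conditions:
\[ \mathrm{ARS} = \frac{\bar{F}_{T} - \bar{F}_{C}}{SD_{pooled}} \]
where \(\bar{F}_{T}\) and \(\bar{F}_{C}\) are mean indices of exposure to the intervention's core components in the treatment and control groups, and \(SD_{pooled}\) is the pooled standard deviation of those indices.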
3:30–4:00 Break
Session 4 Introducing the Group Activity Assignment
4:00–5:30 Instructor: David Cordray
The Group Activity Assignment is designed to provide attendees with an opportunity to gain experience in applying the concepts and strategies that are presented in the various instructional sessions. With the assistance of the instructors, the activity will mimic the process of developing a feasible and technically sound proposal to conduct an RCT. Each group will formulate an intervention topic, decide on the type of RCT that is most appropriate, articulate the theory of action, and specify intervention and control contrasts. As the sessions progress, the group will have an opportunity to apply the measurement, sampling, design and analysis concepts that will be discussed in the technical sessions. Continuous feedback and resources will be made available to assist in the development of the study design. Toward the end of the Training Institute period, the final design from each group will be presented to the full assembly.
6:00–7:00 Dinner, Wyatt Center Rotunda
Tuesday, June 23, 2009
Session 5 Basic experimental design with special considerations for education studies
8:00 – 10:00 Instructor: Larry Hedges
Sessions 5–7 cover the logic of randomized experiments and their advantages for making causal inferences, and the basics of experimental design and analysis of variance with a focus on the two most widely used designs: the hierarchical design and the (generalized) randomized blocks design. Issues that arise because of the hierarchical structure of populations in education (students nested in classes nested in schools) are discussed. Additional topics include: (1) issues of which units to randomize (classrooms, schools, or individual students); (2) how to do the randomization; (3) multiple levels of randomization (students to teachers, then teachers to conditions) and the implications for design and inference; (4) randomization within schools/classrooms (contamination concerns) versus randomization between schools/classrooms (power issues); (5) nature and role of blocking and covariates (including covariates and blocking at the student, classroom, and school levels); (6) crossovers and attrition after randomization; (7) multiple cohorts of treatment and control groups; and (8) how hypothesized aptitude × treatment interactions are included in the design.
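As a point of reference for these sessions, a minimal two-level model for the hierarchical design (students i nested in schools j, with treatment assigned at the school level; notation is ours, not necessarily the instructor's) can be written as
\[ Y_{ij} = \beta_{0j} + e_{ij}, \qquad \beta_{0j} = \gamma_{00} + \gamma_{01} T_{j} + u_{j}, \]
with \(e_{ij} \sim N(0, \sigma^2)\), \(u_{j} \sim N(0, \tau^2)\), \(T_{j}\) a treatment indicator, and \(\gamma_{01}\) the treatment effect; the randomized blocks design adds block effects at the second level.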
10:00 – 10:30 Break
Session 6 Basic experimental design with special considerations for education studies
10:30 – 12:30 Continuation of Session 5.
12:30 – 1:30 Lunch
Session 7 Basic experimental design with special considerations for education studies
1:30 – 3:30 Continuation of Session 6.
3:30–4:00 Break
4:00–5:30 Group Project Meetings
6:00–7:00 Dinner, Wyatt Center Rotunda
7:00 – 8:00 IES Grant Opportunities
Discussion of IES grant opportunities with Lynn Okagaki, IES Commissioner for Education Research
Wednesday, June 24, 2009
Session 8 The analysis of cluster randomized experiments with repeated measures
8:00 – 10:00 Instructor: Larry Hedges
Randomized trials often use repeated measurements of outcomes over time for one of two reasons. Sometimes the outcome is measured repeatedly because individual measurements are unstable and the mean of many measurements is used to increase stability. In other cases, interest centers on the trajectory of response to the treatment over the time course of the treatment and beyond. This session will focus on the design and analysis of field experiments that use repeated measures for either of these purposes using hierarchical linear models.
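One common way to formalize the second case (trajectories of response) is a two-level growth model, sketched here for orientation with illustrative notation:
\[ Y_{ti} = \pi_{0i} + \pi_{1i}\, time_{ti} + e_{ti}, \qquad \pi_{0i} = \beta_{00} + \beta_{01} T_{i} + r_{0i}, \qquad \pi_{1i} = \beta_{10} + \beta_{11} T_{i} + r_{1i}, \]
where \(\beta_{11}\) captures the treatment effect on the rate of change; in a cluster randomized trial a third (cluster) level would be added.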
10:00 – 10:30 Break
Session 9 Statistical analysis (Computer Lab)
10:30 – 12:30 Instructors: Larry Hedges & Spyros Konstantopoulos
An introduction to hierarchical linear modeling and how it relates to analysis of variance and regression approaches to analysis of the same designs. This session will (1) provide an overview of multilevel models for data with dependencies due to repeated measures; (2) introduce the intra-class correlation (ICC) to describe the degree of dependency or correlation in repeated measures within each level of these models; and (3) describe how different types of multilevel models address these dependencies by modeling the ICC. Also included are (4) an overview of two-level models such as individual growth curve models and models with nesting of children in classrooms; and (5) three-level models such as individual growth curve models that describe change over time of children nested in classrooms. Includes hands-on practice using HLM and SAS.
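The lab itself works in HLM and SAS; purely to illustrate the unconditional two-level model and the ICC it introduces, here is a rough Python analogue on simulated data (variable names and values are hypothetical).

```python
# Illustration only; the lab uses HLM and SAS. A rough Python analogue
# (statsmodels) that fits an intercept-only two-level model and computes the
# intraclass correlation. Data and variable names are simulated/hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_schools, n_students = 40, 25
school = np.repeat(np.arange(n_schools), n_students)
u = rng.normal(scale=0.5, size=n_schools)                  # school random effects (tau^2 = 0.25)
score = 50 + u[school] + rng.normal(scale=1.0, size=n_schools * n_students)
df = pd.DataFrame({"score": score, "school": school})

# Unconditional (intercept-only) two-level model: score_ij = gamma_00 + u_j + e_ij
m = smf.mixedlm("score ~ 1", df, groups=df["school"]).fit()
tau2 = m.cov_re.iloc[0, 0]    # estimated between-school variance
sigma2 = m.scale              # estimated within-school (residual) variance
icc = tau2 / (tau2 + sigma2)
print(f"Estimated ICC: {icc:.3f}")  # population value here is 0.25 / 1.25 = 0.20
```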
12:30 – 1:30 Lunch
Session 10 Statistical analysis (Computer lab)
1:30–3:30 Continuation of Session 9.
3:30–4:00 Break
4:00–5:30 Group Project Meetings
6:00–7:00 Dinner, TBA
Thursday, June 25, 2009
Session 11 Sample size and statistical power
8:00 – 10:00 Instructor: Howard Bloom
Sessions 11 and 12 will cover: (1) computing statistical power for cluster randomized trials; (2) the role of between-unit variance components and intraclass correlation; (3) planning sample sizes with adequate power; (4) the effect of blocking (matching) and covariates on power; and (5) how the choice of analysis influences power. The sessions will also include: (6) discussion of effect size; (7) how to determine and justify the minimum effect size to detect; (8) designing around power considerations associated with the minimum detectable effect size; and (9) cost considerations.
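For concreteness, one standard formulation of the minimum detectable effect size (no covariates, equal allocation of J clusters of size n, intraclass correlation \(\rho\); offered here only as a reference point) is
\[ \mathrm{MDES} = M_{J-2} \sqrt{\frac{4}{J}\left(\rho + \frac{1-\rho}{n}\right)}, \]
where \(M_{J-2} \approx 2.8\) for a two-tailed test at \(\alpha = .05\) with power .80 and a moderately large number of clusters; the sessions develop variations such as blocking and covariate adjustment.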
10:00 – 10:30 Break
Session 12 Sample size and statistical power
10:30 – 12:30 Continuation of Session 11.
12:30 – 1:30 Lunch
Session 13 Sampling and external validity
1:30 – 3:30 Instructor: Howard Bloom
Discussion of: (1) the nature of sampling in experiments; (2) how the concept of blocks in experiments corresponds to that of clusters in sampling; (3) the logic of generalization from blocks (clusters) as fixed effects; (4) the logic of generalization from blocks as random effects; (5) considerations and procedures for sampling clusters (districts, schools, classrooms, instructional groups) from a population and units within clusters (classrooms in schools, students in classrooms); (6) sample representativeness and diversity; (7) oversampling; (8) interactions between sample characteristics and treatment response to explore generalizability; and (9) sampling in multi-site studies.
3:30–4:00 Break
4:00–5:30 Group Project Meetings
6:00–7:00 Dinner, Wyatt Center Rotunda
Friday, June 26, 2009
Session 14 Statistical power analysis (Computer lab)
8:00 – 10:00 Instructor: Jessaca Spybrook
Sessions 14 and 15 will provide an introduction to using the Optimal Design (OD) software for conducting a power analysis. The sessions will cover (1) an overview of the OD software; (2) a demonstration of how to use OD to plan two-level cluster randomized trials; (3) three-level cluster randomized trials; and (4) trials that include blocking. The sessions will also include a discussion of how to write up a power analysis. Includes hands-on practice using OD.
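This is not the OD software itself; purely for illustration, here is a short Python sketch of the kind of calculation OD automates, using the no-covariate formula noted under Session 11 (the function name and example inputs are our own).

```python
# Illustrative only; not the Optimal Design (OD) software. A sketch of the
# MDES calculation OD automates for a balanced two-level cluster randomized
# trial with no covariates. Function name and example inputs are hypothetical.
from scipy.stats import t

def mdes_crt(J, n, rho, alpha=0.05, power=0.80):
    """Minimum detectable effect size: J clusters of size n randomized 1:1, ICC rho."""
    df = J - 2                                           # df for the cluster-level test
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    return multiplier * ((4 / J) * (rho + (1 - rho) / n)) ** 0.5

print(round(mdes_crt(J=40, n=25, rho=0.20), 2))          # about 0.44 standard deviations
```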
10:00 – 10:30 Break
Session 15 Statistical power analysis (Computer lab)
10:30 – 12:30 Continuation of Session 14.
12:30 – 1:30 Lunch
Session 16 Mediator analysis
1:30 – 3:30 Instructor: Laura Stapleton
This session covers methods to evaluate possible mediational processes as part of the relation between the manipulation and the distal outcome in cluster randomized trials. The session will build on the material presented in prior sessions on hierarchical linear modeling. It includes a discussion of (1) causal considerations within mediational models; (2) basic mediation models; (3) extension of the simple mediation model when the treatment is administered at the cluster level; (4) models with the mediator at the cluster or individual level; (5) robust methods to test the significance of the mediation (or indirect) effect within each of the models presented; and (6) a brief review of more complex issues: multiple mediators, moderated mediation, random indirect effects, and latent variable models for testing mediation.
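For orientation, the simplest single-level mediation model referenced in item (2) can be written (our notation, intercepts omitted) as
\[ M_i = a\,T_i + e_{Mi}, \qquad Y_i = c'\,T_i + b\,M_i + e_{Yi}, \]
where the indirect (mediated) effect is the product \(a \times b\) and \(c'\) is the direct effect; the session extends this to cluster-level treatments and to mediators measured at either level.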
3:30–4:00 Break
4:00–5:30 Group Project Meetings
6:00–7:00 Dinner, TBA
Monday, June 29, 2009
Session 17 Dealing with missing data
8:00 – 10:00 Instructor: John Graham
Covers the theory of analyzing datasets with missing data and introduces software for handling missing data.
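The specific software is not listed here; as one hedged illustration of the general approach, the sketch below runs multiple imputation by chained equations in Python with statsmodels' MICE on simulated data (dataset, variable names, and model are hypothetical).

```python
# A minimal multiple-imputation sketch using statsmodels' MICE (chained
# equations). The dataset, variable names, and analysis model are hypothetical;
# the session's own software choices may differ.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(0)
n = 500
pretest = rng.normal(size=n)
treat = rng.integers(0, 2, size=n).astype(float)
posttest = 0.3 * treat + 0.6 * pretest + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"posttest": posttest, "pretest": pretest, "treat": treat})

# Make ~20% of outcomes missing (missing at random given observed covariates).
df.loc[rng.random(n) < 0.2, "posttest"] = np.nan

imp = mice.MICEData(df)                                   # chained-equations imputation engine
analysis = mice.MICE("posttest ~ treat + pretest", sm.OLS, imp)
results = analysis.fit(n_burnin=10, n_imputations=20)     # pools estimates across imputations
print(results.summary())
```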
10:00 – 10:30 Break
Session 18 Dealing with missing data
10:30 – 12:30 Continuation of Session 17.
12:30–1:30 Lunch
Session 19 Analyzing fidelity of treatment implementation
1:30 – 2:30 Instructor: David Cordray
This session describes several analytic models that can be used to analyze and summarize evidence about implementation fidelity and the achieved relative strength (ARS) of the intervention contrast. Models relevant to intent-to-treat (ITT), local average treatment effect (LATE), and treatment-on-treated (TOT) analyses will be described and illustrated.
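As one familiar reference point connecting these analyses (the instrumental-variables or "Bloom" adjustment, stated here in simplified form with our notation), the effect for those who actually receive the treatment can be estimated as
\[ \widehat{\mathrm{LATE}} = \frac{\widehat{\mathrm{ITT}}}{\bar{p}_{T} - \bar{p}_{C}}, \]
where \(\widehat{\mathrm{ITT}}\) is the intent-to-treat impact estimate and \(\bar{p}_{T}\) and \(\bar{p}_{C}\) are the rates of treatment receipt in the treatment and control groups.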
3:30–4:00 Break
4:00–5:30 Group Project Meetings
6:00–7:00 Dinner, Wyatt Center Rotunda
Tuesday, June 30, 2009
Session 20 Moderator analysis
8:00–10:00 Instructor: Spyros Konstantopoulos
In addition to assessing the main effects of interventions, researchers are also often interested in gauging whether treatments work better for certain groups of individuals. For example, some interventions in schools have a dual objective: increasing achievement for all students while producing larger gains for lower-achieving subgroups, thereby narrowing the achievement gap between, e.g., males and females, minorities and whites, or disadvantaged and more advantaged students. Differential effects are typically represented in linear regression models as statistical interactions between a moderator variable (e.g., gender) and treatment (e.g., smaller vs. larger classrooms). For instance, male and female students may benefit differently from being in smaller classrooms. This session will use data from Project STAR to illustrate moderator analysis and examine how class size reduction in the early grades affects the achievement of males and females, minorities and whites, and disadvantaged and more advantaged students.
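In its simplest single-level form (illustrative notation, ignoring the nesting addressed elsewhere in the institute), the moderated model for the gender example is
\[ Y_i = \beta_0 + \beta_1 T_i + \beta_2\, Female_i + \beta_3 (T_i \times Female_i) + \epsilon_i, \]
where \(T_i\) indicates assignment to a small class and \(\beta_3\) is the differential (moderated) effect for female students.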
10:00 – 10:30 Break
Session 21 Alternatives to randomized trials
10:30 – 12:30 Instructor: Mark Lipsey
Sessions 21 and 22 will provide an overview of the design alternatives that, under favorable circumstances, have the highest internal validity and that may be considered when a randomized design is not feasible: regression discontinuity designs and nonrandomized comparison groups with statistical controls. The discussion of these designs will focus on their general character and logic, the circumstances in which they are applicable, and their relative advantages and disadvantages.
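For reference, the basic regression discontinuity specification (illustrative notation) estimates the effect at the cutoff \(c\) of an assignment variable \(X\):
\[ Y_i = \beta_0 + \beta_1 T_i + \beta_2 (X_i - c) + \beta_3\, T_i (X_i - c) + \epsilon_i, \qquad T_i = 1\{X_i \ge c\}, \]
where \(\beta_1\) is the treatment effect at the cutoff; validity hinges on correctly modeling the relationship between \(X\) and \(Y\) near the cutoff.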
12:30 – 1:30 Lunch
Session 22 Alternatives to randomized trials
1:30 – 3:30 Continuation of Session 21.
3:30–4:00 Break
4:00–5:30 Group Project Meetings
6:00–7:00 Dinner, TBA
Wednesday, July 1, 2009
8:00 – 10:00 Group Projects—Preparation for presentations
10:00 – 10:30 Break
10:30 – 12:30 Group Projects—Preparation for presentations
12:30 – 1:30 Lunch
1:30 – 3:30 Group Project Presentations (Group 1)
One-hour group presentation followed by discussion
3:30–4:00 Break
3:30–5:30 Group Project Presentations (Group 2)
One-hour group presentation followed by discussion
6:00–7:00 Dinner, Wyatt Center Rotunda
Thursday, July 2, 2009
8:00 – 10:00 Group Project Presentations (Group 3)
One-hour group presentation followed by discussion
10:00 – 10:30 Break
10:30 – 12:30 Group Project Presentations (Group 4)
One-hour group presentation followed by discussion
12:30 – 1:30 Lunch
1:30 – 3:30 Group Project Presentations (Group 5)
One-hour group presentation followed by discussion
3:30 – 4:00 Break
4:00–5:00 Course evaluation and debriefing
Completion of the evaluation forms; discussion about the experience and ways to improve the summer institute
6:30 – 8:00 Graduation dinner and ceremony, Wyatt Center Rotunda
