2007 IES Research Training Institute: Cluster Randomized Trials: Agenda
Sunday, June 17, 2007
Sunday afternoon Arrival and Registration
5:30 - 7:30 p.m. Welcome and Dinner
Sunday evening Changing the Nature of Education Research
Monday, June 18, 2007
SECTION 1 PLANNING THE EVALUATION
8:00 - 10:00 a.m. Session 2: Specifying the conceptual and operational models; formulating precise questions
  This session covers: (1) developing the rationale for the importance of the intervention, including deciding if an intervention is ready for an RCT (randomized controlled trial); (2) determining and justifying the type of study (development, efficacy, or scale-up), including a review of what is already known in the area, relevant pilot data and preliminary studies; (3) specifying the "theory of change" underlying the intervention, including a conceptual model specifying key cause-effect constructs and their linkages and an operational model of the processes and activities that affect the outcomes; and (4) framing the question precisely so that a trial can be designed to provide an answer that will be useful.
  Instructor: Mark Lipsey
  Reading List: Baron, R.M. & Kenny, D.A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173-1182.
  Clements, D.H. (2007). Curriculum research: Toward a framework for research-based curricula. Journal for Research in Mathematics Education, 38(1), 35-70.
10:00 - 10:30 a.m. Break
10:30 a.m. - 12:30 p.m. Session 3: Describing and quantifying outcomes
  This session covers considerations for identifying relevant variables and selecting tests and measures in education trials, including (1) reliability, validity, sensitivity, and relevance of measures; (2) specifying proximal (mediating) and distal outcomes/variables; (3) alignment and overalignment of measures with the intervention; (4) continuity across ages/grades for follow-up measures; (5) developmental appropriateness of measures; (6) feasibility of use; (7) respondent burden; (8) efficiency (minimizing overlap among measures); (9) attention to possible unexpected as well as expected outcomes; (10) issues associated with correlated measures, including creating composite measures; and (11) measurement issues associated with special populations.
  Instructor: Mark Lipsey
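  The reliability considerations above can be previewed with a minimal sketch of one common statistic, Cronbach's alpha, which summarizes the internal consistency of a composite measure. All data below are simulated for illustration; the function and variable names are hypothetical, not part of the Institute materials.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the composite
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated illustration: 200 students, 5 items sharing a common true score.
rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
items = true_score + 0.8 * rng.normal(size=(200, 5))  # item = signal + noise
alpha = cronbach_alpha(items)                          # should land near 0.89
```

  With these simulated variance components the theoretical alpha is about 0.89; lower values would signal that the items hang together too loosely to justify a composite.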
1:30 - 3:15 p.m. Session 4: Assessment of treatment implementation/assessment of control condition
  This session covers strategies used to assess instruction, process, and treatment fidelity, including systematic observation, logs and diaries, questionnaires, and interviews. It includes discussion of (1) the concepts of implementation versus fidelity; (2) the importance of clear specification of the intervention as a basis for assessing fidelity/implementation; and (3) measuring the relevant experience in the control group.
  Instructor: David Cordray
  Reading List: Cordray, D.S. & Pion, G.M. (1993). Psychosocial rehabilitation assessment: A broader perspective. In R.L. Glueckauf, L.B. Sechrest, G.R. Bond, & E.C. McDonel (Eds.), Improving assessment in rehabilitation and health (pp. 215-240). Newbury Park, CA: Sage Publications.
  Cordray, D.S. & Pion, G.M. (2006). Treatment strength and integrity: Models and methods. In R.R. Bootzin and P.E. McKnight (Eds.), Strengthening research methodology: Psychological measurement and evaluation (pp. 103-124). Washington, D.C.: American Psychological Association.
  Cordray, D.S. (2007). Common threats to the validity of causal inferences.
  Holland, P.W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81 (396), 945-960.
3:15 - 3:45 p.m. Break
3:45 - 5:45 p.m. Session 5: Introducing the Group Activity Assignment
  The Group Activity Assignment is designed to provide attendees with an opportunity to gain experience in applying the concepts and strategies that are presented in the various technical sessions to a specific RCT. With the assistance of the instructors, the activity will mimic the process of developing a feasible and technically sound RCT. Each group will formulate an intervention topic, decide on the type of RCT that is most appropriate, articulate the theory of action, and specify intervention and control contrasts. As the sessions progress, the group will have an opportunity to apply the measurement, sampling, design and analysis issues that have been discussed in the technical sessions. Continuous feedback and resources (e.g., extant data, normative conventions, and decision frameworks) will be made available to assist in the development of the overall RCT design. Toward the end of the Training Institute, the final design from each group will be presented to the full group.
  Instructor: David Cordray
  Reading List: National Center for Education Research, FY 2008 Request for Applications. REQUEST FOR APPLICATIONS NUMBER: IES-NCER-2008-01 Website: http://ies.ed.gov/ncer/funding/
Tuesday, June 19, 2007
8:00 - 10:00 a.m. Session 6: Basic experimental design with special considerations for education studies
  Sessions 6 to 8 will cover the logic of randomized experiments and their advantages for making causal inferences and a review of the basics of experimental design and analysis of variance focusing on the two most widely used designs: the hierarchical design and the (generalized) randomized blocks design. Issues that arise because of the hierarchical structure of populations in education (students nested in classes nested in schools) are discussed. Additional topics include: (1) issues of which units to randomize (classrooms, schools, or individual students); (2) how to do the randomization; (3) multiple levels of randomization (students to teachers, then teachers to conditions) and what the implications are for design and inference; (4) randomization within schools/classrooms (contamination concerns) versus randomization between schools/classrooms (power issues); (5) nature and role of blocking and covariates (including covariate and blocking considerations at student, classroom, and school levels); (6) crossovers and attrition after randomization; (7) handling multiple cohorts of treatment and control groups; and (8) how hypothesized aptitude x treatment interactions are included in design.
  Instructor: Larry Hedges
  Reading List: Hedges, L.V. & Hedberg, E.C. (2007). Intraclass correlations for planning group randomized experiments in education. Educational Evaluation and Policy Analysis, 29, 60-87.
  Kirk, R.E. (1995). Experimental design: Procedures for the behavioral sciences (3rd Edition). Pacific Grove, CA: Brooks Cole.
Chapter 7: Randomized Block Designs
Chapter 11: Hierarchical Designs
  Raudenbush, S.W. (1993). Hierarchical linear models and experimental design. In L. K. Edwards (Ed.) Applied analysis of variance in behavioral science (pp. 459-496). New York: Marcel Dekker, Inc.
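  The unit-of-randomization and blocking issues listed for Sessions 6 to 8 can be illustrated with a short sketch: schools (the clusters) are randomized to conditions separately within blocks (here, districts), which keeps the arms balanced on whatever the blocks capture. School and district names are made up for the example.

```python
import random

def randomize_within_blocks(clusters, blocks, seed=2007):
    """Assign clusters (e.g., schools) to treatment/control at random,
    separately within each block, keeping the arms balanced per block."""
    rng = random.Random(seed)
    assignment = {}
    for block in set(blocks.values()):
        members = sorted(c for c in clusters if blocks[c] == block)
        rng.shuffle(members)
        half = len(members) // 2
        for school in members[:half]:
            assignment[school] = "treatment"
        for school in members[half:]:
            assignment[school] = "control"
    return assignment

# Hypothetical example: 8 schools blocked by district.
schools = [f"school_{i}" for i in range(8)]
blocks = {s: ("district_A" if i < 4 else "district_B")
          for i, s in enumerate(schools)}
assignment = randomize_within_blocks(schools, blocks)
```

  Randomizing within blocks in this way removes between-district differences from the treatment-control comparison, at the cost of the degrees-of-freedom issues discussed in the session.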
10:00 - 10:30 a.m. Break
10:30 a.m. - 12:30 p.m. Session 7: Basic experimental design with special considerations for education studies
  Continuation of Session 6.
  Instructor: Larry Hedges
1:30 - 3:15 p.m. Session 8: Basic experimental design with special considerations for education studies
  Continuation of Session 7.
  Instructor: Larry Hedges
3:15 - 3:30 p.m. Break
3:30 - 5:30 p.m. Group Projects
Wednesday, June 20, 2007
8:00 - 10:00 a.m. Session 9: Statistical analysis overview I
  This session will provide an introduction to hierarchical linear model analysis and its relation to analysis of variance and regression approaches to the same designs. The session will (1) give an overview of multilevel models as a means of analyzing data with dependencies due to repeated measures; (2) introduce the intraclass correlation (ICC) to describe the degree of dependency or correlation in repeated measures within each level of these models; (3) describe how different types of multilevel models address these dependencies by modeling the ICC; (4) survey two-level models such as individual growth curve models and models that account for the nesting of children in classrooms; and (5) cover three-level models such as individual growth curve models that describe change over time for children nested in classrooms.
  Instructor: Margaret Burchinal
  Reading List: Raudenbush, S.W. (1997). Statistical analysis and optimal design for cluster randomized trials. Psychological Methods, 2(2), 173-185.
  Bloom, H.S. (2005). Randomizing groups to evaluate place-based programs. In H.S. Bloom (Ed.), Learning More from Social Experiments: Evolving Analytic Approaches (pp. 115-172). New York: Russell Sage Foundation.
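  The ICC introduced in this session has a simple estimator worth seeing concretely: for a balanced design, a one-way ANOVA on cluster membership yields ICC = (MSB - MSW) / (MSB + (n - 1)·MSW). The sketch below simulates classroom-clustered scores with a known ICC of 0.10 and recovers it; all data and names are illustrative, not Institute materials.

```python
import numpy as np

def anova_icc(data: np.ndarray) -> float:
    """One-way ANOVA estimate of the intraclass correlation for a
    balanced (n_clusters, n_per_cluster) outcome matrix."""
    J, n = data.shape
    cluster_means = data.mean(axis=1)
    grand_mean = data.mean()
    msb = n * ((cluster_means - grand_mean) ** 2).sum() / (J - 1)
    msw = ((data - cluster_means[:, None]) ** 2).sum() / (J * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Simulate 100 classrooms of 20 students with a true ICC of 0.10
# (between-classroom variance 0.10, within-classroom variance 0.90).
rng = np.random.default_rng(1)
classroom_effect = rng.normal(0, np.sqrt(0.10), size=(100, 1))
scores = classroom_effect + rng.normal(0, np.sqrt(0.90), size=(100, 20))
icc = anova_icc(scores)  # estimate should be close to 0.10
```

  Even a modest ICC like 0.10 substantially inflates the variance of cluster-level comparisons, which is why the design and power sessions return to it repeatedly.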
10:00 - 10:30 a.m. Break
10:30 a.m. - 12:30 p.m. Session 10: Statistical analysis overview II
  Continuation of morning session (including concepts of ICC).
  Instructor: Margaret Burchinal
1:30 - 3:15 p.m. Session 11: Modeling growth in trials
  This session will provide an introduction to growth models and longitudinal analyses, and applications of hierarchical models to the analysis of trials with longitudinal components. The session will include examples of how to use the HLM and SAS Proc Mixed programs to analyze two-level models in which children are nested in classrooms and three-level models in which there are repeated assessments of children nested in classrooms.
  Instructor: Margaret Burchinal
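  The logic of an individual growth-curve model can be previewed with a simple two-stage sketch: fit an OLS growth line per child, then summarize the child-level slopes. Multilevel software such as HLM or SAS Proc Mixed, as covered in the session, estimates both stages jointly and more efficiently; the data and parameter values below are simulated for illustration only.

```python
import numpy as np

# Simulate 4 yearly assessments for 50 children: each child has a
# random starting level and a growth rate drawn around a mean of 2.0.
rng = np.random.default_rng(7)
times = np.arange(4)                              # occasions 0..3
intercepts = rng.normal(50, 5, size=50)
slopes = rng.normal(2.0, 0.5, size=50)
scores = (intercepts[:, None] + slopes[:, None] * times
          + rng.normal(0, 1, size=(50, 4)))

# Stage 1: an OLS growth line per child (np.polyfit returns [slope, intercept]).
child_slopes = np.array([np.polyfit(times, y, deg=1)[0] for y in scores])
# Stage 2: summarize the individual growth rates.
mean_growth = child_slopes.mean()                 # should be near 2.0
```

  The variance of the child-level slopes, not just their mean, is itself a quantity of interest in the three-level models discussed in the session.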
3:15 - 3:30 p.m. Break
3:30 - 5:30 p.m. Group Projects
5:30 - 6:30 p.m. Break
7:30 - 8:30 p.m. Session 12: IES Grant Opportunities
Thursday, June 21, 2007
8:00 - 10:00 a.m. Session 13: Sample size and statistical power I
  Sessions 13 and 14 will cover: (1) computing statistical power for cluster randomized trials; (2) the role of between-unit variance components and intraclass correlation; (3) planning sample sizes with adequate power; (4) the effect of blocking (matching) and covariates on power; (5) and how choice of analysis influences power. The sessions will also include: (6) discussion of effect size; (7) how to determine and justify the minimum effect size to detect; (8) designing around power considerations associated with the minimum detectable effect size; and (9) cost considerations.
  Instructor: Howard Bloom
  Reading List: Bloom, H.S., Richburg-Hayes, L., & Black, A.R. (2007). Using covariates to improve precision for studies that randomize schools to evaluate educational interventions. Educational Evaluation and Policy Analysis, 29(1), 30-59.
  Bloom, H.S. (2005). Randomizing groups to evaluate place-based programs. In H.S. Bloom (Ed.), Learning More from Social Experiments: Evolving Analytic Approaches (pp. 115-172). New York: Russell Sage Foundation.
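  The power topics for Sessions 13 and 14 can be made concrete with a sketch of the standard minimum-detectable-effect-size (MDES) approximation for a balanced two-arm cluster randomized trial with no covariates, using the normal-approximation multiplier M = z(1 - alpha/2) + z(power), about 2.8 for the usual defaults. This is a simplification under stated assumptions (a more careful version uses t rather than normal critical values and accounts for blocking and covariates); function and parameter names are mine, not from the readings.

```python
from math import sqrt
from statistics import NormalDist

def mdes_cluster_rct(J, n, rho, alpha=0.05, power=0.80):
    """Approximate MDES (in standard deviation units) for a two-arm
    cluster randomized trial: J clusters in total split evenly between
    arms, n students per cluster, intraclass correlation rho."""
    z = NormalDist().inv_cdf
    multiplier = z(1 - alpha / 2) + z(power)      # about 2.8 for defaults
    se = sqrt(4 * (rho + (1 - rho) / n) / J)      # SE of the impact estimate
    return multiplier * se

# Example: 40 schools, 25 students each, ICC = 0.20 -> MDES around 0.43 SD.
mdes = mdes_cluster_rct(J=40, n=25, rho=0.20)
```

  The formula makes the session's central point visible: because rho enters the standard error directly while n enters only through (1 - rho)/n, adding clusters buys far more power than adding students within clusters.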
10:00 - 10:30 a.m. Break
10:30 a.m. - 12:30 p.m. Session 14: Sample size and statistical power II
  Continuation of Session 13.
  Instructor: Howard Bloom
1:30 - 3:15 p.m. Session 15: Sampling and external validity
  This session includes discussion of: (1) the nature of sampling in experiments; (2) the parallel between blocks in experiments and clusters in sampling; (3) the logic of generalization from blocks (clusters) treated as fixed effects; (4) the logic of generalization from blocks treated as random effects; (5) considerations and procedures for sampling clusters (districts, schools, classrooms, instructional small groups) from a population and units within clusters (classrooms within schools, students within classrooms); (6) sample representativeness and sample diversity; (7) oversampling; (8) testing interactions between sample characteristics and treatment response to explore generalizability; and (9) sampling issues in multi-site studies.
  Instructor: Howard Bloom
3:15 - 3:30 p.m. Break
3:30 - 5:30 p.m. Group Projects
Friday, June 22, 2007
8:00 - 10:00 a.m. Session 16: Alternatives to randomized trials I
  Sessions 16 and 17 will provide an overview of the design alternatives that have the highest internal validity under favorable circumstances and may be considered when a randomized design is not feasible (regression discontinuity, nonrandomized comparison groups with statistical controls, and time series). The discussion of these designs will focus on their general character and logic, the circumstances in which they are applicable, and their relative advantages and disadvantages.
  Instructor: Mark Lipsey
  Reading List: Gormley, W.T., Gayer, T., Phillips, D., & Dawson, B. (2005). The effects of universal pre-k on cognitive development. Developmental Psychology, 41(6), 872-884.
10:00 - 10:30 a.m. Break
10:30 a.m. - 12:30 p.m. Session 17: Alternatives to randomized trials II
  Continuation of Session 16.
  Instructor: Mark Lipsey
1:30 - 4:00 p.m. Group Projects
4:00 - 4:30 p.m. Break
4:30 - 6:00 p.m. Networking Session
Monday, June 25, 2007
SECTION 2 IMPLEMENTING THE EVALUATION
8:00 - 10:00 a.m. Session 19: Recruitment of sites and participants
  This session covers strategies for (1) recruiting and retaining schools into trials; (2) encouraging school personnel and parents to participate (e.g., incentives, benefits); (3) tracking participants; (4) encouraging participation in posttest assessments; and (5) dealing with instability in sample (e.g., students and teachers transferring; schools merging, dividing, or closing). In addition, the session will include (6) discussion of ethical concerns of schools and parents regarding participating in randomized trials.
  Instructor: Fred Doolittle
  Reading List: Gueron, J.M. (2000). The politics of random assignment: Implementing studies and impacting policy. MDRC Working Paper.
10:00 - 10:30 a.m. Break
10:30 a.m. - 12:30 p.m. Session 20: Data collection in the field
  This session covers practical aspects of data gathering designed to minimize error, bias, and loss of cases, including (1) training research staff; (2) pilot-testing data gathering procedures; (3) providing for ongoing quality assurance monitoring; (4) use of state-of-the-art data gathering methods (e.g., computer-assisted telephone interviewing, or CATI) designed to reduce errors and bias by enhancing the completeness and consistency of responses and automatically recording participants' data; and (5) strategies for conducting small-scale validation studies to assess data quality.
  Instructor: Ina Wallace
  Reading List: Fowler, F.J. (2002) Survey Research Methods, Third Edition. Thousand Oaks, CA: Sage Publications.
Chapter 4: Methods of Data Collection
Chapter 7: Survey Interviewing
Chapter 9: Ethical Issues in Survey Research
  Sattler, J.M. (2001). Assessment of Children: Cognitive Applications, Fourth Edition. San Diego, CA: Jerome M. Sattler, Publisher, Inc.
Chapter 7: Administering Tests to Children
1:30 - 2:45 p.m. Session 21: Recruiting participants and collecting data from the trenches
  This session is an informal discussion with Vanderbilt researchers regarding recruiting and maintaining participants and collecting data.
2:45 - 3:00 p.m. Break
3:00 - 5:00 p.m. Group Projects
Tuesday, June 26, 2007
SECTION 3 DATA ANALYSIS
8:00 - 10:00 a.m. Session 22: Analyzing intervention effects I
  Sessions 22 to 24 constitute a hands-on introduction to the analysis of randomized experiments in educational research. Session 22 is an introduction to the use of HLM to conduct analyses of experiments using multilevel and mixed model analysis of variance approaches.
  Instructor: Larry Hedges
  Reading List: Singer, J.D. (1998). Using SAS PROC MIXED to fit multilevel models, hierarchical models, and individual growth models. Journal of Educational and Behavioral Statistics, 23(4), 323-355.
10:00 - 10:30 a.m. Break
10:30 a.m. - 12:30 p.m. Session 23: Analyzing intervention effects II
  Continuation of morning session.
  Instructor: Larry Hedges
1:30 - 3:15 p.m. Session 24: Analyzing intervention effects III
  Continuation of morning session.
  Instructor: Larry Hedges
3:15 - 3:30 p.m. Break
3:30 - 5:30 p.m. Group Projects
Wednesday, June 27, 2007
8:00 - 10:00 a.m. Session 25: Handling missing data in the analysis of trials
  Sessions 25 and 26 will focus on a discussion of analysis with missing data and on an efficient (planned missing data) design for measurement called the 3-form design. Session 25 will cover (1) missing data theory, (2) analysis with multiple imputation (with Joe Schafer's NORM program and SAS Proc MI), and (3) analysis with Full Information Maximum Likelihood (FIML) procedures (in the context of structural equation modeling). This session will also address material related to participant attrition.
  Instructor: John Graham
  Reading List: Graham, J.W., Cumsille, P.E., & Elek-Fisk, E. (2003). Methods for handling missing data. In J.A. Schinka & W.F. Velicer (Eds.), Research Methods in Psychology (pp. 87-114). Volume 2 of the Handbook of Psychology (I.B. Weiner, Editor-in-Chief). New York: John Wiley & Sons.
  Graham, J.W., Taylor, B.J., Olchowski, A.E., & Cumsille, P.E. (2006). Planned missing data designs in psychological research. Psychological Methods, 11, 323-343.
  Schafer, J.L., & Graham, J.W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7, 147-177.
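  After analyzing each of the m imputed data sets, multiple-imputation results are combined with Rubin's rules: the pooled estimate is the mean of the m estimates, and the total variance adds the within-imputation variance W to (1 + 1/m) times the between-imputation variance B. A minimal sketch, with hypothetical treatment-effect estimates standing in for real analysis output:

```python
from statistics import mean, variance

def pool_rubin(estimates, variances):
    """Pool point estimates and their squared standard errors from m
    imputed data sets using Rubin's rules."""
    m = len(estimates)
    q_bar = mean(estimates)                  # pooled point estimate
    w = mean(variances)                      # within-imputation variance
    b = variance(estimates)                  # between-imputation variance
    total_var = w + (1 + 1 / m) * b          # total variance of q_bar
    return q_bar, total_var

# Hypothetical effect estimates and squared SEs from m = 5 imputations.
q_bar, total_var = pool_rubin(
    estimates=[0.32, 0.28, 0.35, 0.30, 0.25],
    variances=[0.010, 0.011, 0.009, 0.010, 0.012],
)
```

  The between-imputation component B is what single imputation ignores; dropping it understates the standard error and overstates significance, which is the core argument of the Schafer and Graham (2002) reading.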
10:00 - 10:30 a.m. Break
10:30 a.m. - 12:30 p.m. Session 26: Missing data design
  Session 26 will include a discussion of 3-form design, and related measurement designs. The 3-form design, a kind of matrix sampling, allows researchers to leverage limited resources to collect data for 33% more survey questions than can be answered by any one respondent. This session will cover implementation strategies, provide examples, and provide strategies for estimating the benefit of the design compared to other possible measurement designs. Specific advantages for group randomized trials will be discussed.
  Instructor: John Graham
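  The 3-form design described above has a simple structure that can be checked in code: items are split into a common block X (given to everyone) plus three blocks A, B, and C, and each form carries X with two of the three remaining blocks. Every pair of blocks then co-occurs on at least one form, which is what keeps all inter-item covariances estimable; with equal block sizes the full item pool is one-third larger than any single form. Item names below are illustrative only.

```python
from itertools import combinations

# Illustrative item pool: four equal blocks of three items each.
blocks = {
    "X": ["x1", "x2", "x3"],   # common block, on every form
    "A": ["a1", "a2", "a3"],
    "B": ["b1", "b2", "b3"],
    "C": ["c1", "c2", "c3"],
}
# Each form = X plus two of the three rotating blocks.
forms = {
    "form1": blocks["X"] + blocks["A"] + blocks["B"],
    "form2": blocks["X"] + blocks["A"] + blocks["C"],
    "form3": blocks["X"] + blocks["B"] + blocks["C"],
}

def blocks_co_occur(forms, blocks):
    """True if every pair of item blocks appears together on some form."""
    for b1, b2 in combinations(blocks, 2):
        if not any(set(blocks[b1]) <= f and set(blocks[b2]) <= f
                   for f in map(set, forms.values())):
            return False
    return True
```

  Here the pool holds 12 items while any one respondent answers only 9, matching the session's point that the design fields about 33% more questions than any single respondent sees.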
1:30 - 2:45 p.m. Session 27: Reporting guidelines
  This short session covers reporting guidelines for field trials (e.g., CONSORT), including (1) describing the source (e.g., initial selection, refusal at assignment, attrition) and magnitude of participant loss; (2) practices for describing the attributes of participants in the achieved samples; (3) adequate description of the treatment; and (4) implementation data.
  Instructor: David Cordray
  Reading List: Campbell, M.K., Elbourne, D.R., & Altman, D.G. (2004). CONSORT statement: Extension to cluster randomised trials. British Medical Journal, 328, 702-708.
  What Works Clearinghouse: Evidence standards for reviewing studies, 2006.
  What Works Clearinghouse Improvement Index
2:45 - 3:00 p.m. Break
3:00 - 5:30 p.m. Group Projects
Thursday, June 28, 2007
8:00 - 10:00 a.m. Session 28: Reports of Student Work Groups: Designing Randomized Control Trials
10:00 - 10:30 a.m. Break
10:30 a.m. - 12:30 p.m. Session 29: Reports of Student Work Groups: Designing Randomized Control Trials
1:30 - 3:30 p.m. Session 30: Reports of Student Work Groups: Designing Randomized Control Trials
3:30 - 3:45 p.m. Break
3:45 - 4:45 p.m. Session 31: Reports of Student Work Groups: Designing Randomized Control Trials
5:30 - 7:30 p.m. Graduation Dinner and Ceremony
Friday, June 29, 2007
8:00 - 10:00 a.m. Session 32: Final Review and Evaluation
10:00 - 11:00 a.m. Check-out and Leave
