IES Grant

Title: Project VIABLE: Validation of Instruments for Assessing Behavior Longitudinally and Efficiently
Center: NCSER
Year: 2006
Principal Investigator: Chafouleas, Sandra
Awardee: University of Connecticut
Program: Social, Emotional, and Behavioral Competence
Award Period: 6/1/2006 to 5/31/2010
Award Amount: $1,496,507
Type: Measurement
Award Number: R324B060014
Description:

Purpose: Empirical attention to the development and validation of viable formative measures of social behavior is essential for effectively evaluating the success of positive behavior interventions put in place to address challenging student behavior. The purpose of this project is to develop and validate the Direct Behavior Rating (DBR) as a student progress monitoring tool that is also feasible for use in applied settings. The goal of Project VIABLE is thus to develop the DBR for use in progress monitoring through three phases of investigation: (1) foundations of measurement, (2) decision making and validity, and (3) feasibility.

Project Activities: The researchers will develop and validate the DBR progress monitoring tool. In the first phase, issues surrounding foundations of measurement (i.e., instrumentation and procedures) will be examined, including questions regarding scale composition, wording of items, frequency and duration of measurement, and length of the observation rating period. In the second phase, questions related to both summative and formative application will be addressed. Finally, the third phase will analyze the feasibility of the DBR for use by educators, addressing questions pertaining to training, use, and perceived usability.

Products: The expected outcomes from this study include:

  1. A developed and validated DBR tool to measure the social behavior of children with behavioral or emotional problems,
  2. Technical papers reporting the results of each phase of the project,
  3. Published reports on the results of analyses examining the validity and feasibility of the DBR.

Setting: Participants will be recruited from universities in Connecticut and North Carolina (e.g., pre-service teachers) and from school districts near those universities (e.g., teachers and their students).

Population: Participants will be undergraduates (pre-service teachers) and classroom teacher-student dyads. Each teacher-student dyad will include a student in grades one through eight and his or her teacher. Phase 1 will involve approximately 200-300 undergraduate students. Phases 2 and 3 will involve approximately 100-200 undergraduate students and 30-60 teacher-student dyads. Each teacher will identify students in his or her class who exhibit one of the target behaviors defined during Phase 1. No other inclusion/exclusion criteria will be used; as such, it is anticipated that a wide range of student ages, grades, disabilities, behavior problems, and classroom characteristics will be included.

Research Design and Methods: The study will be conducted in three phases: foundations of measurement, decision-making and validity, and feasibility. In the foundations of measurement phase, issues surrounding measurement (i.e., instrumentation and procedures) will be examined, including scale composition, wording of items, frequency and duration of measurement, and length of the observation rating period. In the decision-making and validity phase, questions related to both summative and formative application will be addressed, such as criterion-related validity. In addition, sensitivity to change (i.e., response to intervention) will be investigated. In the third phase, feasibility, educators' use of the DBR will be examined through questions pertaining to training, use, and perceived usability.

Key Measures: The DBR is a measure completed at least once per day, and its results can be shared to guide decisions regarding behavior management and/or intervention. Using such measures to collect information about student behavior may be considered a type of performance-based behavioral rating, in which behaviors are identified and described and a Likert-type scale is then developed that corresponds to specific dimensions of behavior. The researchers will focus on behaviors related to externalizing problems (e.g., aggression, hyperactivity, conduct problems) and school problems (e.g., attention problems, learning problems) given their relevance to a population of students exhibiting serious behavior disorders.
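
To make the description above concrete, the sketch below shows one way a single DBR rating could be represented as a data record: a target behavior that has been identified and described, a Likert-type gradient, and one rating per observation period. All field names, the 0-10 gradient, and the example values are hypothetical illustrations under these assumptions, not the project's actual instrument.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DBRRating:
        """One hypothetical DBR record: a single behavior rated once per observation period."""
        student_id: str
        rater_id: str
        rating_date: date
        behavior: str             # target behavior identified and described beforehand
        rating: int               # position on the Likert-type gradient
        scale_points: int = 10    # gradient length is itself a Phase 1 design question

        def __post_init__(self):
            # Guard against ratings that fall outside the defined gradient
            if not 0 <= self.rating <= self.scale_points:
                raise ValueError("rating must fall on the defined gradient")

    # A teacher's end-of-day rating of attention problems (illustrative values only)
    example = DBRRating(student_id="S07", rater_id="T03",
                        rating_date=date(2006, 10, 2),
                        behavior="attention problems", rating=3)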

Data Analytic Strategy: During Phase 1, data analysis will be conducted to guide development and optimize the psychometric characteristics of the DBR for three levels of data interpretation: norm-referenced, criterion-referenced, and self-referenced. Norm-referenced interpretations of DBR outcomes will be examined using traditional psychometric procedures for rank-order test-retest reliability and inter-rater reliability. Researchers will also explore psychometric properties in a series of Generalizability Studies (G-studies) and Dependability Studies (D-studies), which allow investigation of the influence of sources of measurement error (e.g., rater, items, duration) across both relative, norm-referenced interpretations and absolute, criterion- and self-referenced interpretations.

The purpose of Phase 2 is to develop and evaluate criterion and concurrent validity evidence for the DBR procedures and instrumentation. Standard procedures will be used to calculate and report demographic information describing the participant sample and to calculate validity and reliability coefficients. Data from the Behavior Assessment System for Children, Second Edition (BASC-2) Student Observation System (BASC SOS) and Teacher Report Form (BASC TRF), the State-Event Classroom Observation System (SECOS), and the DBR will be examined for convergent and divergent validity evidence. Data will also be analyzed to examine the generalizability and dependability of DBR outcomes across raters, observations, forms, and observation sessions. The sensitivity of the DBR to intervention effects will be examined by comparing teachers' and pre-service teachers' evaluations of level and trend for the DBR against direct observation data. Descriptive statistics will be derived within and between raters across the DBR and direct observations to determine the consistency of decision-making across methods. Descriptive statistics will also be derived for baseline and intervention phases and disaggregated to examine the average effect sizes that correspond with sensitivity to intervention.

Phase 3 examinations of feasibility will yield descriptive statistics on the demographics of the sample, the duration of training, the intensity and duration of ongoing support, and DBR use. Both quantitative and qualitative data will summarize the needs for training and use of the DBR, along with the perceived utility of the procedures as reported by teachers.
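
As one concrete illustration of the generalizability and dependability analyses described above, the sketch below estimates a generalizability (G) coefficient for relative, norm-referenced interpretations and a dependability (phi) coefficient for absolute, criterion-referenced interpretations from a fully crossed persons-by-raters design. The rating data, the single rater facet, and the function name are hypothetical simplifications; the project's actual G- and D-studies involve additional facets (items, duration, observation sessions).

    import numpy as np

    def g_and_phi(scores, n_raters_d=None):
        # scores: persons (rows) x raters (columns), one rating per cell
        n_p, n_r = scores.shape
        if n_raters_d is None:
            n_raters_d = n_r                 # D-study: number of raters averaged over

        grand = scores.mean()
        person_means = scores.mean(axis=1)
        rater_means = scores.mean(axis=0)

        # Mean squares from the two-way crossed ANOVA (one observation per cell)
        ms_p = n_r * np.sum((person_means - grand) ** 2) / (n_p - 1)
        ms_r = n_p * np.sum((rater_means - grand) ** 2) / (n_r - 1)
        resid = scores - person_means[:, None] - rater_means[None, :] + grand
        ms_pr = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1))

        # Estimated variance components
        var_pr = ms_pr                           # person x rater interaction + error
        var_p = max((ms_p - ms_pr) / n_r, 0.0)   # persons (object of measurement)
        var_r = max((ms_r - ms_pr) / n_p, 0.0)   # raters

        # Relative error (norm-referenced) vs. absolute error (criterion-referenced)
        rel_error = var_pr / n_raters_d
        abs_error = (var_r + var_pr) / n_raters_d

        g = var_p / (var_p + rel_error)          # generalizability coefficient
        phi = var_p / (var_p + abs_error)        # dependability coefficient
        return g, phi

    # Hypothetical DBR scores: 6 students each rated by the same 3 raters
    ratings = np.array([[8., 7., 8.],
                        [3., 4., 2.],
                        [6., 6., 7.],
                        [9., 8., 9.],
                        [2., 3., 3.],
                        [5., 6., 5.]])

    print(g_and_phi(ratings))       # coefficients when averaging over 3 raters
    print(g_and_phi(ratings, 1))    # coefficients for a single rater

Under these assumptions, raising the number of raters in the D-study shrinks both error terms, which is the sense in which such analyses inform decisions about how many raters and observations the DBR requires.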

Intervention: Due to the nature of the research, there is no intervention.

Control Condition: Due to the nature of the research, there is no control condition.

Related IES Projects: Enhancing Ci3T: Building Professional Capacity for High Fidelity Implementation to Support Students' Educational Outcomes (Project ENHANCE) (R324N190002); Exploring the Status and Impact of School-Based Behavior Screening Practices in a National Sample: Implications for Systems, Policy, and Research (R305A140543); Project VIABLE-II: Unified validation of Direct Behavior Rating (DBR) in a problem-solving model (R324A110017); Project EASS-E: Expanding Approaches to School Screening With Equity (R305A220249).

Products and Publications

Journal article, monograph, or newsletter

Briesch, A.M., Chafouleas, S.M., and Riley-Tillman, T.C. (2010). Generalizability and Dependability of Behavior Assessment Methods to Estimate Academic Engagement: A Comparison of Systematic Direct Observation and Direct Behavior Rating. School Psychology Review, 39(3): 408–421.

Briesch, A.M., Kilgus, S.P., Chafouleas, S.M., Riley-Tillman, T.C., and Christ, T.J. (2013). The Influence of Alternative Scale Formats on the Generalizability of Data Obtained From Direct Behavior Rating Single-Item Scales (DBR-SIS). Assessment for Effective Intervention, 38(2): 127–133. doi:10.1177/1534508412441966

Chafouleas, S.M. (2011). Direct Behavior Rating: A Review of the Issues and Research in Its Development. Education and Treatment of Children, 34(4): 575–591.

Chafouleas, S.M., Briesch, A.M., Riley-Tillman, T.C., Christ, T.J., Black, A.C., and Kilgus, S.P. (2010). An Investigation of the Generalizability and Dependability of Direct Behavior Rating Single Item Scales (DBR-SIS) to Measure Academic Engagement and Disruptive Behavior of Middle School Students. Journal of School Psychology, 48(3): 219–246. doi:10.1016/j.jsp.2010.02.001

Chafouleas, S.M., Christ, T.J., and Riley-Tillman, T.C. (2009). Generalizability of Scaling Gradients on Direct Behavior Ratings. Educational and Psychological Measurement, 69(1): 157–173. doi:10.1177/0013164408322005

Chafouleas, S.M., Jaffery, R., Riley-Tillman, T.C., Christ, T.J., and Sen, R. (2013). The Impact of Target, Wording, and Duration on Rating Accuracy for Direct Behavior Rating. Assessment for Effective Intervention, 39(1): 39–53. doi:10.1177/1534508413489335

Chafouleas, S.M., Kilgus, S.P., and Hernandez, P. (2009). Using Direct Behavior Rating (DBR) to Screen for School Social Risk: A Preliminary Comparison of Methods in a Kindergarten Sample. Assessment for Effective Intervention, 34(4): 214–223. doi:10.1177/1534508409333547

Chafouleas, S.M., Kilgus, S.P., Jaffery, R., Riley-Tillman, T.C., Welsh, M.E., and Christ, T.J. (2013). Direct Behavior Rating as a School-Based Behavior Screener for Elementary and Middle Grades. Journal of School Psychology, 51(3): 367–385. doi:10.1016/j.jsp.2013.04.002

Chafouleas, S.M., Kilgus, S.P., Riley-Tillman, T.C., Jaffery, R., and Harrison, S. (2012). Preliminary Evaluation of Various Training Components on Accuracy of Direct Behavior Ratings. Journal of School Psychology, 50(3): 317–334. doi:10.1016/j.jsp.2011.11.007

Chafouleas, S.M., Riley-Tillman, T.C., and Christ, T.J. (2009). Direct Behavior Rating (DBR): An Emerging Method for Assessing Social Behavior Within a Tiered Intervention System. Assessment for Effective Intervention, 34: 195–200. doi:10.1177/1534508409340391

Chafouleas, S.M., Riley-Tillman, T.C., Jaffery, R., Miller, F.G., and Harrison, S.E. (2015). Preliminary Investigation of the Impact of a Web-Based Module on Direct Behavior Rating Accuracy. School Mental Health, 7(2): 92–104. doi:10.1007/s12310-014-9130-z

Chafouleas, S.M., Sanetti, L.M.H., Jaffery, R., and Fallon, L. (2012). An Evaluation of a Classwide Intervention Package Involving Self-Management and a Group Contingency on Classroom Behavior of Middle School Students. Journal of Behavioral Education, 21(1): 34–57. doi:10.1007/s10864-011-9135-8

Chafouleas, S.M., Sanetti, L.M.H., Kilgus, S.P., and Maggin, D.M. (2012). Evaluating Sensitivity to Behavioral Change Using Direct Behavior Rating Single-Item Scales. Exceptional Children, 78(4): 491–505.

Christ, T.J., and Boice, C. (2009). Rating Scale Items: A Brief Review of Nomenclature, Components, and Formatting to Inform the Development of Direct Behavior Rating (DBR). Assessment for Effective Intervention, 34(4): 242–250. doi:10.1177/1534508409336182

Christ, T.J., Riley-Tillman, T.C., and Chafouleas, S.M. (2009). Foundation for the Development and Use of Direct Behavior Rating (DBR) to Assess and Evaluate Student Behavior. Assessment for Effective Intervention, 34(4): 201–213. doi:10.1177/1534508409340390

Christ, T.J., Riley-Tillman, T.C., Chafouleas, S.M., and Boice, C.H. (2010). Direct Behavior Rating (DBR): Generalizability and Dependability Across Raters and Observations. Educational and Psychological Measurement, 70(5): 825–843. doi:10.1177/0013164410366695

Christ, T.J., Riley-Tillman, T.C., Chafouleas, S.M., and Jaffery, R. (2011). Direct Behavior Rating: An Evaluation of Alternate Definitions to Assess Classroom Behaviors. School Psychology Review, 40(2): 181–199.

Harrison, S.E., Riley-Tillman, T.C., and Chafouleas, S.M. (2014). Direct Behavior Rating: Considerations for Rater Accuracy. Canadian Journal of School Psychology, 29(1): 3–20. doi:10.1177/0829573513515424

Jaffery, R., Johnson, A.H., Bowler, M.C., Chafouleas, S.M., and Riley-Tillman, T.C. (2011). Options in Agreement Indices for Establishing Expert Consensus on Behavioral Ratings Within School and Industrial/Organizational Psychology.

Jaffery, R., Johnson, A.J., Bowler, M.C., Riley-Tillman, T.C., Chafouleas, S.M., and Harrison, S.E. (2015). Using Consensus Building Procedures With Expert Raters to Establish Comparison Scores of Behavior for Direct Behavior Rating. Assessment for Effective Intervention, 40(4): 195–204. doi:10.1177/1534508415569527

Kilgus, S.P., Chafouleas, S.M., Riley-Tillman, T.C., and Welsh, M.E. (2012). Direct Behavior Rating Scales as Screeners: A Preliminary Investigation of Diagnostic Accuracy in Elementary School. School Psychology Quarterly, 27(1): 41–50. doi:10.1037/a0027150

Kilgus, S.P., Riley-Tillman, T.C., Chafouleas, S.M., Christ, T. J., and Welsh, M. (2014). Direct Behavior Rating as a School-Based Behavior Universal Screener: Replication Across Sites. Journal of School Psychology, 52(1): 63–82. doi:10.1016/j.jsp.2013.11.002

LeBel, T.J., Kilgus, S.P., Briesch, A.M., and Chafouleas, S.M. (2010). The Impact of Training on the Accuracy of Teacher-Completed Direct Behavior Ratings (DBRs). Journal of Positive Behavior Interventions, 12(1): 55–63. doi:10.1177/1098300708325265

Miller, F.G., Riley-Tillman, T.C., Chafouleas, S.M., and Schardt, A.A. (2016). Direct Behavior Rating Instrumentation: Evaluating the Impact of Scale Formats. Assessment for Effective Intervention. doi:10.1177/1534508416658007

Riley-Tillman, T.C., Chafouleas, S.M., Christ, T.J., Briesch, A. M., and LeBel, T.J. (2009). The Impact of Item Wording and Behavioral Specificity on the Accuracy of Direct Behavior Ratings (DBRs). School Psychology Quarterly, 24(1): 1–12. doi:10.1037/a0015248

Riley-Tillman, T.C., Christ, T.J., Chafouleas, S.M., Boice, C.H., and Briesch, A.M. (2011). The Impact of Observation Duration on the Accuracy of Data Obtained From Direct Behavior Rating (DBR). Journal of Positive Behavior Interventions, 13(2): 119–128. doi:10.1177/1098300710361954

Sanetti, L.M.H., Chafouleas, S.M., Berggren, M.L., Faggella-Luby, M., and Byron, J.R. (2016). Implementing Modeling and Self-Monitoring with DBRC in a Tier 2 Reading Group: A Pilot Study of Feasibility. Journal of Evidence-Based Practices for Schools, 15(1): 8–40.

Sanetti, L.M.H., Chafouleas, S.M., Fallon, L.M., and Jaffery, R. (2014). Increasing Teachers' Adherence to a Classwide Intervention Through Performance Feedback Provided by a School-Based Consultant: A Case Study. Journal of Educational and Psychological Consultation, 24(3): 239–260. doi:10.1080/10474412.2014.923734

Sanetti, L.M.H., Chafouleas, S.M., O'Keeffe, B.V., and Kilgus, S.P. (2013). Treatment Integrity Assessment of a Daily Report Card Intervention: A Preliminary Evaluation of Two Methods and Frequencies. Canadian Journal of School Psychology, 28(3): 261–276. doi:10.1177/0829573513493244

Schlientz, M.D., Riley-Tillman, T.C., Briesch, A.M., Walcott, C.M., and Chafouleas, S.M. (2009). The Impact of Training on the Accuracy of Direct Behavior Ratings (DBR). School Psychology Quarterly, 24(2): 73–83. doi:10.1037/a0016255

Welsh, M.E., Miller, F.G., Kooken, J.W., Chafouleas, S.M., and McCoach, D.B. (2016). The Kindergarten Transition: Behavioral Trajectories in the First Formal Year of School. Journal of Research in Childhood Education, 30(4): 456–473. doi:10.1080/02568543.2016.1214935

