
IES Grant

Title: Development and Validation of a Web-Based System for Monitoring Social Behavior
Center: NCSER
Year: 2015
Principal Investigator: Volpe, Robert
Awardee: Northeastern University
Program: Social, Emotional, and Behavioral Competence
Award Period: 4 years (8/1/2015-7/31/2019)
Award Amount: $1,599,252
Type: Measurement
Award Number: R324A150071

Co-Principal Investigators: Amy Briesch (Northeastern University) and Julie Sarno Owens (Ohio University)

Purpose: The purpose of this project is to develop tools for progress monitoring of social behavior, particularly for use in monitoring the impact of Tier 2 and 3 school-based interventions targeting academic enablers (study skills, interpersonal skills, motivation, and academic engagement) and externalizing behavior (disruptive and oppositional behavior) for students in kindergarten through third grade. Direct Behavior Rating (DBR) is one promising method for assessing social behavior. DBR has been described as a hybrid assessment tool, melding the characteristics of both behavior rating scales and systematic direct observation. Although the reliability and validity evidence for DBR has grown in recent years, research has focused almost exclusively on the evaluation of three single-item behavior scales (i.e., engagement, disruptive behavior, non-compliance) completed using traditional paper measures. This research team will improve upon existing DBR research by moving beyond single-item DBR scales to develop multi-item scales and expand the types of behaviors measured. In addition, they will develop mobile-enabled, web-based tools for rating and summarizing student behavior using the newly developed DBR scales.

Project Activities: In the assessment development phase, the research team will develop and refine the pool of DBR items for the multi-item scales. Next, in the assessment evaluation phase, the team will evaluate the reliability, validity, and treatment sensitivity of the DBR scales. In the technology development phase, the team will develop mobile-enabled, web-based tools for rating and summarizing student behaviors using the newly developed DBR scales, and refine the system based on user feedback to maximize feasibility, procedural integrity, and instrument utility.

Products: The products of this project will include validated web-based DBR scales and tools for progress monitoring of social behavior for students in Grades K–3. They will also include peer-reviewed publications and presentations.

Structured Abstract

Setting: The research will take place in elementary schools in Massachusetts and Ohio.

Sample: Overall, approximately 475 teachers in Grades K–3 and their students (including those with and without behavior problems) will participate in this study.

Assessment: Direct Behavior Rating (DBR) scales combine the strength of a rating scale (e.g., efficient data recording) and the benefits of systematic direct observation (e.g., data recording that occurs at the time and place of the behavior). In brief, a DBR is completed by a rater (e.g., teacher) who quantifies perception of a well-defined behavior of a student through rating that behavior in close proximity to the time and place of behavior observation. The DBRs developed in this grant will include multi-item DBR scales measuring academic enablers (study skills, interpersonal skills, motivation, and academic engagement) and externalizing behaviors (disruptive behavior and oppositional behavior) for use in the formative assessment of student behavior in Grades K–3.

Research Design and Methods: In year 1, an initial pool of items will be identified based largely on a national teacher survey of common referral concerns. This pool of items will be expanded, refined, and verified based on the following four methods: (1) a review of extant measures, (2) analysis of large datasets of daily report card target behaviors, (3) focus groups with teachers, parents, and school administrators (Consumer Advisory Panel), and (4) feedback from a panel of experts (Scientific Advisory Panel). In years 2–4, Generalizability (G) and Decision (D) studies will be used to generate information to further refine the multi-item scales (e.g., internal consistency reliability, inter-rater reliability, temporal stability). In addition, a series of multitrait-multimethod studies will investigate the convergent and discriminant validity and temporal stability of the scales. Treatment sensitivity of the multi-item scales will be investigated through a series of multiple-baseline single-case design intervention studies. A series of parallel G and D studies will compare the dependability of single-item and multi-item scales. In years 3 and 4, to develop mobile-enabled, web-based tools, the team will use focus groups with teachers and a second panel of experts (Consumer Advisory Panel) to solicit feedback regarding acceptability, usability, utility, and perceived barriers to use.
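To illustrate the logic of a G study and D study for readers unfamiliar with generalizability theory, the sketch below estimates variance components for a fully crossed persons-by-raters design (one rating per cell) and projects the absolute dependability (phi) coefficient for a hypothetical number of raters. The design, function names, and example data are illustrative assumptions, not the project's actual analyses or data.

```python
import numpy as np

def g_study(scores):
    """G study for a crossed persons x raters design (one observation
    per cell): estimate variance components from expected mean squares.
    `scores` is a (persons x raters) array of ratings."""
    n_p, n_r = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)            # each person's mean rating
    r_means = scores.mean(axis=0)            # each rater's mean rating
    ss_p = n_r * ((p_means - grand) ** 2).sum()
    ss_r = n_p * ((r_means - grand) ** 2).sum()
    ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_r
    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))
    var_res = ms_res                          # rater x person interaction + error
    var_p = max((ms_p - ms_res) / n_r, 0.0)   # true person (universe score) variance
    var_r = max((ms_r - ms_res) / n_p, 0.0)   # systematic rater severity variance
    return var_p, var_r, var_res

def d_study_phi(var_p, var_r, var_res, n_raters):
    """D study: absolute dependability (phi) if scores were averaged
    over `n_raters` raters -- all non-person variance counts as error."""
    return var_p / (var_p + (var_r + var_res) / n_raters)
```

A D study simply re-evaluates `d_study_phi` across candidate numbers of raters (or rating occasions) to find the cheapest design that reaches an acceptable dependability threshold.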

Control Condition: Due to the nature of the research design, there is no control condition.

Key Measures: Key measures are the DBR scales to be developed and behavior rating scales for determining validity and utility. The behavior rating scales for determining validity include the Integrated Screening and Intervention System Teacher Rating Form, Daily Report Card, the Social Skills Improvement System (SSIS), the Teacher Report Form (TRF Externalizing and Internalizing Scales), and the Academic Competence Evaluation Scale (ACES).

Data Analytic Strategy: Multiple statistical and measurement techniques will be used to develop and assess the DBR scales, including exploratory factor analysis, Generalizability studies, and Decision studies. Visual analysis and non-overlap effect size techniques will be used to evaluate the multiple-baseline single-case design studies. Multitrait-multimethod analyses will be conducted using an analysis of variance approach. Focus group data will be transcribed, and content analysis will be used to identify emergent themes.
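As a concrete example of a non-overlap effect size for single-case designs, the sketch below computes Non-overlap of All Pairs (NAP): the proportion of all baseline-phase/intervention-phase data pairs in which the intervention observation shows improvement, with ties counted as half. The abstract does not name which non-overlap index the team will use, so NAP here is an illustrative choice; the function assumes higher scores indicate improvement (for behaviors expected to decrease, such as disruptive behavior, the phases' roles would be reversed or the scores inverted).

```python
def nap(baseline, treatment):
    """Non-overlap of All Pairs: compare every baseline observation
    against every treatment observation; return the proportion of
    pairs where treatment > baseline, counting ties as 0.5.
    Values near 1.0 indicate strong, consistent improvement."""
    pairs = [(a, b) for a in baseline for b in treatment]
    wins = sum(1.0 for a, b in pairs if b > a)
    ties = sum(1.0 for a, b in pairs if b == a)
    return (wins + 0.5 * ties) / len(pairs)
```

With complete separation between phases (e.g., baseline ratings of 2, 3, 4 and intervention ratings of 5, 6, 7), NAP equals 1.0; chance-level overlap yields values near 0.5. Such indices complement, rather than replace, the visual analysis mentioned above.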