
Publications & Products

Search Results: (16-23 of 23 records)

 Pub Number  Title  Date
REL 2015046 A Primer for Analyzing Nested Data: Multilevel Modeling in SPSS Using an Example from a REL Study
Analyzing data with some form of nesting is often challenging for applied researchers and district staff who conduct or oversee data analyses. This report describes those challenges and offers a primer on how multilevel regression modeling can resolve them. An illustration from the companion report, The correlates of academic performance for English language learner students in a New England district (REL 2014–020), shows how multilevel modeling procedures are applied and how the results are interpreted.
12/23/2014
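The report demonstrates multilevel modeling in SPSS; the same idea can be sketched in Python with statsmodels as a minimal random-intercept model for students nested in schools. The data and variable names below (score, hours, school) are synthetic and purely illustrative, not drawn from the REL study.

```python
# Minimal sketch of multilevel (mixed-effects) regression for nested data.
# All data are synthetic; the goal is only to show the modeling pattern.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_students = 20, 30
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0, 2.0, n_schools)[school]      # random intercept per school
hours = rng.uniform(0, 10, n_schools * n_students)         # student-level predictor
score = 50 + 1.5 * hours + school_effect + rng.normal(0, 3.0, n_schools * n_students)
df = pd.DataFrame({"score": score, "hours": hours, "school": school})

# Random-intercept model: students (level 1) nested in schools (level 2).
model = smf.mixedlm("score ~ hours", df, groups=df["school"])
result = model.fit()
print(result.summary())   # fixed-effect slope should land near the true value of 1.5
```

Ignoring the nesting (an ordinary single-level regression) would understate the standard errors here, which is the core problem the report's primer addresses.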
REL 2015049 Using Administrative Data for Research: A Companion Guide to A Descriptive Analysis of the Principal Workforce in Florida Schools
This report outlines the processes and procedures used to analyze data from the Florida education staffing database. The report also provides directions and examples for conducting similar work using administrative databases.
12/23/2014
REL 2014064 Reporting What Readers Need to Know about Education Research Measures: A Guide
This brief provides five checklists to help researchers provide complete information describing (1) their study's measures; (2) data collection training and quality; (3) the study's reference population, study sample, and measurement timing; (4) evidence of the reliability and construct validity of the measures; and (5) missing data and descriptive statistics. The brief includes an example of parts of a report's methods and results section illustrating how the checklists can be used to check the completeness of reporting.
9/9/2014
REL 2014051 Going public: Writing About Research in Everyday Language
This brief describes approaches that writers can use to make impact research more accessible to policy audiences. It emphasizes three techniques: making concepts as simple as possible, focusing on what readers need to know, and reducing possible misinterpretations. A glossary shows these approaches applied to terms common to impact research, such as ‘regression models’ and ‘effect sizes.’
6/24/2014
REL 2014036 Using evidence-based decision trees instead of formulas to identify at-risk readers
The purpose of this study was to examine whether the early identification of students at risk of reading comprehension difficulties is improved using logistic regression or classification and regression tree (CART) analysis. This research question was motivated by state education leaders’ interest in maintaining high classification accuracy while improving practitioner understanding of the rules by which students are identified as at-risk or not-at-risk readers. Logistic regression and CART were compared using data on a sample of grade 1 and 2 Florida public school students who participated in both interim assessments and an end-of-year summative assessment during the 2012/13 academic year. Grade-level analyses were conducted, and the methods were compared on traditional measures of diagnostic accuracy: sensitivity (the proportion of true positives), specificity (the proportion of true negatives), positive and negative predictive power, overall correct classification, and the area under the receiver operating characteristic curve. Results indicate that CART is comparable to logistic regression, with both methods yielding negative predictive power greater than the recommended standard of .90. The comparability of results suggests that CART should be preferred because practitioners can interpret its decision rules more easily. In addition, CART holds several technical advantages over logistic regression.
6/17/2014
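The comparison the abstract describes can be sketched in miniature with scikit-learn: fit both a logistic regression and a shallow classification tree to the same screening data, then score each with the diagnostic-accuracy measures listed above. The data here are synthetic, so the numbers are illustrative only and make no claim about the study's .90 standard.

```python
# Rough sketch: logistic regression vs. CART on synthetic screening data,
# scored with sensitivity, specificity, and negative predictive power.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
n = 2000
interim = rng.normal(0, 1, (n, 2))                    # two interim assessment scores
p_at_risk = 1 / (1 + np.exp(2.0 + 1.5 * interim.sum(axis=1)))
at_risk = rng.binomial(1, p_at_risk)                  # 1 = at-risk reader

def diagnostics(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),                # proportion of true positives found
        "specificity": tn / (tn + fp),                # proportion of true negatives found
        "npp": tn / (tn + fn),                        # negative predictive power
    }

logit = LogisticRegression().fit(interim, at_risk)
cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(interim, at_risk)

for name, model in [("logistic", logit), ("CART", cart)]:
    print(name, diagnostics(at_risk, model.predict(interim)))
```

The tree's advantage in practice is that its fitted decision rules can be printed as a handful of threshold comparisons, which is the interpretability argument the study makes.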
REL 2014037 Recognizing and Conducting Opportunistic Experiments in Education: A Guide for Policymakers and Researchers
Opportunistic experiments are a type of randomized controlled trial that studies the effects of a planned intervention or policy change with minimal added disruption and cost. This guide defines opportunistic experiments and provides examples, discusses issues to consider when identifying potential opportunistic experiments, and outlines the critical steps to complete them. It concludes with a discussion of the potentially low cost of conducting opportunistic experiments and the potentially high cost of not conducting them. Readers will also find a checklist of key questions to consider when conducting opportunistic experiments.
5/6/2014
REL 2014006 Testing the Importance of Individual Growth Curves in Predicting Performance on a High-Stakes Reading Comprehension Test in Florida
REL Southeast at Florida State University evaluated student growth in reading comprehension over the school year and compared that growth to performance on the end-of-year Florida Comprehensive Assessment Test (FCAT). Using archival data for 2009/10, the study analyzes a stratified random sample of 800,000 Florida students in grades 3-10: their fall, winter, and spring reading comprehension scores on the Florida Assessments for Instruction in Reading (FAIR) and their reading comprehension scores on the FCAT. The study examines the relationship among descriptive and inferential measures of growth for students in grades 3-10 and considers how well such measures statistically explain differences in end-of-year reading comprehension after controlling for student performance on a mid-year status assessment. How much of the variation in student reading comprehension performance the four growth estimates explained (measured by the coefficient of determination, R²) differed by the status variable used (performance on the fall, winter, or spring FAIR reading comprehension screen).
1/22/2014
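The analytic question — does a within-year growth estimate explain end-of-year performance beyond a single status score? — can be sketched with plain NumPy: fit each student's slope across three screening occasions, then compare R² for a model with the mid-year score alone against one that adds the slope. Everything below is synthetic and hypothetical; it is not the FAIR/FCAT data or the study's actual estimation procedure.

```python
# Illustrative sketch: per-student growth slopes and incremental R^2.
import numpy as np

rng = np.random.default_rng(7)
n = 500
times = np.array([0.0, 0.5, 1.0])                  # fall, winter, spring
ability = rng.normal(50, 10, n)
growth = rng.normal(5, 2, n)                       # true per-year growth
screens = ability[:, None] + growth[:, None] * times + rng.normal(0, 2, (n, 3))
end_of_year = ability + growth + rng.normal(0, 3, n)   # outcome depends on both

# Per-student OLS slope across the three time points.
slope = np.polyfit(times, screens.T, 1)[0]
winter = screens[:, 1]                             # mid-year "status" score

def r_squared(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

r2_status = r_squared(winter, end_of_year)
r2_both = r_squared(np.column_stack([winter, slope]), end_of_year)
print(round(r2_status, 3), round(r2_both, 3))      # adding the slope cannot lower in-sample R^2
```

Because the models are nested, in-sample R² can only rise when the slope is added; the substantive question, as in the study, is by how much, and how that increment depends on which status score is controlled for.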
REL 2013008 Evaluating the screening accuracy of the Florida Assessments for Instruction in Reading (FAIR)
This report analyzes student performance on the FAIR reading comprehension screen and the Florida Comprehensive Assessment Test (FCAT) 2.0 across grades 4-10 to determine how well FAIR and 2011 FCAT 2.0 scores predicted 2012 FCAT 2.0 performance. The first key finding was that the FAIR reading comprehension screen was more accurate than 2011 FCAT 2.0 scores in correctly identifying students as not at risk of failing to meet grade-level standards on the 2012 FCAT 2.0. The second key finding was that using both the FAIR screen and the 2011 FCAT 2.0 lowered the underidentification rate of at-risk students by 12–20 percentage points compared with using the 2011 FCAT 2.0 score alone.
9/10/2013
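The second finding has a simple mechanical core: flagging students when either of two screens trips can only reduce the share of truly at-risk students who go unflagged. A hedged sketch with synthetic scores (cutoffs, noise levels, and variable names are all assumptions, not the FAIR/FCAT data):

```python
# Sketch: underidentification (false-negative) rate of one screen vs. two.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
ability = rng.normal(0, 1, n)
prior = ability + rng.normal(0, 0.8, n)        # last year's summative score
screen = ability + rng.normal(0, 0.8, n)       # this year's screening score
truly_at_risk = ability + rng.normal(0, 0.5, n) < -1.0   # fails next year's test

def underidentification(flagged):
    # Share of truly at-risk students the rule fails to flag.
    return np.mean(~flagged[truly_at_risk])

cut = -0.8
single = prior < cut                           # prior-year score alone
combined = (prior < cut) | (screen < cut)      # either screen flags the student
print(underidentification(single), underidentification(combined))
```

The union rule flags a superset of students, so its underidentification rate is never higher; the trade-off, which the report's accuracy measures capture, is that it also flags more students who were never at risk.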