
REL Pacific Ask A REL Response

Educator Effectiveness

December 2016

Question

Which student growth models could meet federal accountability regulations and are suitable for small populations and alternative assessments?

Response

This document responds to an Ask A REL inquiry from the University of Hawai'i, in collaboration with the Hawai'i State Department of Education's Hawaiian Language Immersion Program (Ka Papahana Kaiapuni Hawai'i). The program is developing and piloting Hawaiian-language assessments in math and reading as an approved alternative to the statewide Smarter Balanced assessments. These stakeholders want to understand how to interpret and use scores from the new assessments and, in particular, have asked the REL to identify student growth models they could consider for accountability purposes. They are interested in models for small populations that have been used in other settings and/or are likely to meet state or federal assessment standards. Although state regulations may change somewhat under the new federal Every Student Succeeds Act (ESSA), the group wants to ensure that its assessment system can demonstrate student growth for this small population of Hawaiian-language immersion students (21 schools/programs and 2,000 students).

In response to this inquiry, REL Pacific has gathered literature and online resources.

REL Pacific reviewed information in the QuestionPoint database—a database of existing Ask A REL responses across all ten REL regions—regarding student growth models. Additional sources were identified through a web-based search. Relevant research articles, regulations, models, and practitioner guides were included in the search. Search terms and selection criteria for the resources are included in Appendix A. The resulting sources have been organized into these categories:

  • Overview of growth model types
  • Federal and state regulations
  • Small-scale assessment and adequate yearly progress
  • Student learning objectives
  • Curriculum-based measurement

Descriptions of the resources are quoted from the publication abstract (Abstract) or the publication itself (Introduction or Excerpt). An abstract is used when available. However, if additional text in the resource provides important information not contained in the author's abstract, the additional information is provided.

Research References

Overview of growth model types

Castellano, K., & Ho, A. D. (2013, February). A practitioner's guide to growth models. Washington, DC: Council of Chief State School Officers. Retrieved from https://eric.ed.gov/?id=ED551292.

From the excerpt, pp. 11–12:
... statistical models and accountability systems have become increasingly varied and complex, resulting in growth models with interpretations that do not always align with intuition. This guide does not promote one type of interpretation over another. Rather, it describes growth models in terms of the interpretations they best support and, in turn, the questions they are best designed to answer. The goal of this guide is thus to increase alignment between user interpretations and model function in order for models to best serve their desired purposes: increasing student achievement, decreasing achievement gaps, and improving the effectiveness of educators and schools.

A Practitioner's Guide to Growth Models begins by overviewing the growth model landscape, establishing naming conventions for models and grouping them by similarities and contrasts. It continues by listing a series of critical questions or analytical lenses that should be applied to any growth model in current or proposed use. The remainder of the guide delves systematically into each growth model, viewing it through these lenses. This guide is structured like a guidebook to a foreign country. Like a guidebook, it begins with an overview of central features and a presentation of the landscape before proceeding to specific regions and destinations. Although it can be read from beginning to end, a typical user may flip to a model that he or she is using or considering for future use. Although the guide is structured to support this use, readers are encouraged to peruse the beginning sections so that, following the analogy, they can appreciate the full expanse of this landscape.

Statewide Longitudinal Data Systems Grant Program. (2012, July). Growth models: Issues and advice from the states. A guide of the Statewide Longitudinal Data Systems Grant Program. Washington, DC: Institute of Education Sciences. Retrieved from https://eric.ed.gov/?id=ED551302.

From the excerpt, p. 1:
The Statewide Longitudinal Data Systems (SLDS) Grant Program was asked by several states to review current growth models with the goal of determining the impact of different models on longitudinal data systems and capturing some best practices that states are using in the implementation process. From July 2011 through February 2012, representatives from Colorado, Arkansas, Ohio, Iowa, Pennsylvania, Delaware, and Florida participated in a working group session and follow-up discussions facilitated by members of the State Support Team (SST). This working group allowed states to more easily discuss and share strategies, best practices, and challenges related to the use of growth models. Specifically, these states have provided the following information to the SST in response to questions about their specific growth model(s) related to:
  • types and purposes of growth model(s) used;
  • description of model(s) used;
  • data elements required for each model; and
  • issues and barriers experienced during development, implementation, or use.

Reform Support Network. (2015, August). Emerging approaches to measuring student growth [Brief]. Washington, DC: U.S. Department of Education. Retrieved from http://www2.ed.gov/about/inits/ed/implementation-support-unit/tech-assist/emergapprotomeasurstudgrowth.pdf.

From the introduction, pp. 1–2:
This publication aims to help State education agencies, school districts and their partners consider innovative and emerging approaches to measuring student learning in the context of educator evaluations. In particular, States and districts can use this publication as a resource as they work to improve their systems over time and contemplate new approaches to measuring growth.

For the Reform Support Network (RSN), a technical support group for Race to the Top States, there has been a long arc of work focused on non-tested grades and subjects (NTGS) and how to include teachers of NTGS into evaluation systems requiring measures of student growth. The RSN focused mainly on student learning objectives (SLOs) because it was the solution of choice for the majority of States implementing new evaluation systems. Meanwhile, for teachers of tested grades and subjects, many States and districts use quantitative models of various forms, such as student growth percentiles or prediction models.

In February 2015, the RSN convened a group of experts and practitioners to consider emerging measures of student learning, such as portfolios and prediction models to grapple with the following questions:
  1. What innovative and emerging approaches to measuring student growth are available for States and districts? What promising approaches are being implemented on a small scale?
  2. How can States and districts evaluate and improve the way they incorporate existing measures of student growth into educator evaluations?
  3. How might States and districts measure student growth in five years?
This publication captures key points that emerged from discussion of these questions during the convening.

Hoffer, T. B., Hedberg, C. E., Brown, K. L., Halverson, M. L., Reid-Brossard, P., Ho, A. D., & Furgol, K. (2011). Final report on the evaluation of the growth model pilot project. Jessup, MD: U.S. Department of Education. Retrieved from https://eric.ed.gov/?id=ED515310.

From the executive summary:
The U.S. Department of Education (ED) initiated the Growth Model Pilot Project (GMPP) in November 2005 with the goal of approving up to ten states to incorporate growth models in school adequate yearly progress (AYP) determinations under the Elementary and Secondary Education Act (ESEA). After extensive reviews, nine states were fully approved for the initial phase of the pilot project by the 2007–08 school year: Alaska, Arizona, Arkansas, Delaware, Florida, Iowa, North Carolina, Ohio, and Tennessee. Based on analyses of data provided by the U.S. Department of Education and by the pilot grantee states, this report describes the progress these states made in implementing the GMPP in the 2007–08 school year.

Federal and state regulations
ESSA

National Conference of State Legislatures. (2016). Summary of the Every Student Succeeds Act, legislation reauthorizing the Elementary and Secondary Education Act. Washington, DC: Author. Retrieved from http://www.ncsl.org/documents/capitolforum/2015/onlineresources/summary_12_10.pdf.

From the excerpt, “Academic assessments,” p. 3:
States are required to implement a set of high-quality student academic assessments in math, reading/language arts, and science, and may implement assessments in other subjects. These assessments (with exceptions regarding alternative assessments for certain students) must be administered to all elementary and secondary students and must measure the achievement of all students. Assessments must be aligned with challenging state academic standards.

The bill keeps the current schedule of federally required statewide assessments. Math and reading/language arts have to be assessed yearly in grades three through eight, and once in grades nine through 12. Science must be assessed at least once in grades three through five, grades six through nine, and once in grades 10 through 12. States may assess other subjects.

These assessments must involve multiple measures of student achievement, including measures that assess higher-order thinking skills and understanding, which may include measures of student growth and may be partially delivered in the form of portfolios, projects, or extended performance tasks. They must provide appropriate accommodations for children with disabilities. The assessments can be administered through a single summative assessment or through multiple assessments during the course of the academic year. Results must be disaggregated within each state, local education agency, and school by:
  • Racial and ethnic group;
  • Economically disadvantaged students compared to students who are not economically disadvantaged;
  • Children with disabilities as compared to children without disabilities;
  • English proficiency status;
  • Gender; and
  • Migrant status
Alternate assessments are to be aligned with alternative academic standards and achievement goals. Only one percent of the total number of all students in the state can be assessed using these alternate assessments. LEAs may administer a nationally-recognized high school academic assessment approved by the state in place of a required statewide assessment.

NOTE: Other provisions regarding assessments are contained in Part B of Title I of the bill, including new flexibility to develop innovative assessments, and are described below.

From the excerpt, “Statewide accountability system,” p. 4:
Each state must have a statewide accountability system that is based on the challenging state academic standards for reading/language arts and math to improve student academic achievement and school success. States shall establish ambitious state-designed long-term goals for all students and for each subgroup of students in the state for improved:
  • Academic achievement as measured by proficiency on the annual assessments
  • High school graduation rates including the four-year adjusted cohort graduation rate and at the state's discretion the extended-year adjusted cohort graduation rate
  • Percent of English learners making progress in achieving English language proficiency
The indicators of the system, for all students and separately for each subgroup, must include:
  • Academic achievement as measured by proficiency on annual assessments
  • Another indicator of academic achievement
  • For high schools, a measure of the graduation rate.
  • Progress of English learners in achieving English language proficiency
  • An indicator of school quality and student success such as student engagement, educator engagement, student access to advanced coursework, postsecondary readiness, school climate and safety, or other measure.
States must also incorporate test participation in some way in their accountability systems, and academic factors must count more heavily than the other indicators. A state must use this system to meaningfully differentiate all public schools in the state based on all indicators for all students and subgroups of students, putting substantial weight on each indicator. The system must differentiate any school in which any subgroup of students is consistently underperforming. Those subgroups are:
  • Economically disadvantaged students
  • Students from major racial and ethnic groups
  • Children with disabilities
  • English learners

Hawaii Department of Education

Hawaii Department of Education. Growth model [Website]. Retrieved from http://www.hawaiipublicschools.org/VisionForSuccess/SchoolDataAndReports/Growth-Model/Pages/home.aspx#.

From the excerpt, “Frequently asked questions,” p. 5:
Is the Hawaii State Assessment (HSA) the only test used to calculate student growth percentiles? What about other assessments used in our school, complex area, or district? What about alternate assessments?
The Hawaii Growth Model only calculates student growth percentile (SGP) scores using data from the Hawaii State Assessment (HSA) in Reading and Mathematics. Assessments that are only used within a school, complex area, or district cannot be used because the model requires data from students in the entire state. While it is possible to calculate growth percentiles for other assessments that are administered statewide, Hawaii has not yet decided to incorporate any additional assessments. Student scores from the Hawaii State Alternate Assessment (HSA-ALT), Hawaiian Aligned Portfolio Assessment (HAPA), End of Course Exams, ACT Tests, etc. are not currently used to calculate growth scores.

Hawaii Department of Education. Strive HI performance system [Website]. Retrieved from http://www.hawaiipublicschools.org/VisionForSuccess/AdvancingEducation/StriveHIPerformanceSystem/Pages/home.aspx.

Hawaii Department of Education. Every Student Succeeds Act—FAQ [Website]. Retrieved from http://www.hawaiipublicschools.org/VisionForSuccess/AdvancingEducation/StriveHIPerformanceSystem/Pages/ESSA.aspx.

Small-scale assessment and adequate yearly progress

Arizona State Board for Charter Schools. (2015, August). Academic performance framework and guidance. Phoenix, AZ: Author. Retrieved from https://asbcs.az.gov/sites/default/files/documents/files/Final%20Combined%20Guidance%20Document%20FINAL%20revised%2011.16.15.pdf.

From Appendix G: Traditional and small school methodology, p. 75:
The Arizona State Board of Education adopted the Arizona Growth Model, based on the Student Growth Percentile Methodology first used in Colorado. This method provides an effective way to measure peer referenced student growth. A student growth percentile (SGP) calculates a student's progress in comparison with his or her academic peers—students with similar performance on previous assessments. Each individual student's growth in assessment results is ranked against the growth for all students with the same test result on the baseline assessment. A student with an SGP of 50 demonstrated higher growth than half of his academic peers across the state with similar performance in current and past years. A school median SGP of 50 indicates that at least half of the students in the school showed more growth than half of their academic peers with similar performance across the state in past years.

In the state A–F School Accountability Letter Grade System, a three-year pooled SGP is calculated for small schools with fewer than 30 test records in the current year. By aggregating three years' worth of growth data, variability due to the very small number of students is reduced. The academic framework uses a similar method for small charter schools with fewer than 30 test records in either of the evaluated subjects (math or reading).
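To illustrate the peer-referenced ranking idea described above, the sketch below (in Python with pandas) ranks each student's current-year score against students who earned the same prior-year score and converts that rank to a percentile. It is a simplified illustration only, not the Colorado or Arizona implementation: operational SGP systems condition on multiple prior years of scores and use quantile regression rather than exact-score peer groups, and the column names and data here are hypothetical.

```python
import pandas as pd

def simple_growth_percentiles(df, prior_col="prior_score", current_col="current_score"):
    """Illustrative peer-referenced growth percentiles.

    For each student, rank the current-year score against all students with
    the same prior-year score (the 'academic peer' group) and express that
    rank as a percentile from 0 to 100.
    """
    def percentile_within_group(scores):
        # rank(pct=True) returns the fraction of peers scoring at or below each score
        return (scores.rank(pct=True) * 100).round(0)

    out = df.copy()
    out["sgp"] = out.groupby(prior_col)[current_col].transform(percentile_within_group)
    return out

# Hypothetical example: four students who all scored 300 on last year's test
students = pd.DataFrame({
    "student": ["A", "B", "C", "D"],
    "prior_score": [300, 300, 300, 300],
    "current_score": [310, 325, 305, 340],
})
print(simple_growth_percentiles(students))
```

A school-level summary in this framework is the median of the student-level percentiles; for small schools, pooling three years of records before taking the median, as Arizona does, reduces the volatility that comes from very small student counts.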

Coladarci, T. (2005). Adequate yearly progress, small schools, and students with disabilities: The importance of confidence intervals when making judgments about AYP. Rural Special Education Quarterly, 24(1), 40–47.

From the abstract:
Indicators of school-level achievement, such as the percentage of students who are proficient in a particular content area, are subject to random year-to-year variation in much the same way that the results of an opinion poll will vary from one random sample to another. This random variation, which is more pronounced for a small school, should be taken into account by education officials when evaluating school progress in a policy climate of high stakes. To do otherwise is to risk the false identification of a failing school, whether for all students combined or for the subgroup of students with disabilities. In this article, I describe the application of confidence intervals to the evaluation of “adequate yearly progress” for No Child Left Behind (NCLB). Throughout, I demonstrate the particular relevance of confidence intervals for small schools in general and, more specifically, for the (smaller still) subgroup of students with disabilities.

REL Pacific at McREL was unable to locate a free link to the full-text version of this resource. Although REL Pacific at McREL tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest. It may be found through university or public library systems.
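As a rough, assumption-laden illustration of Coladarci's point (not the article's own method, which readers should consult directly), the sketch below computes a simple normal-approximation confidence interval around a school's percent-proficient figure. With the same observed 60 percent proficiency, a 15-student school's interval spans roughly 50 percentage points, while a 300-student school's interval spans about 11.

```python
import math

def proficiency_confidence_interval(n_proficient, n_tested, z=1.96):
    """Approximate 95% confidence interval (normal approximation) for the
    percentage of tested students who scored proficient."""
    p = n_proficient / n_tested
    margin = z * math.sqrt(p * (1 - p) / n_tested)
    lower = max(0.0, p - margin)
    upper = min(1.0, p + margin)
    return (round(lower * 100, 1), round(upper * 100, 1))

# Hypothetical figures: the same 60% proficiency rate in a small school
# (15 tested students) and a much larger one (300 tested students).
print(proficiency_confidence_interval(9, 15))    # roughly (35.2, 84.8)
print(proficiency_confidence_interval(180, 300)) # roughly (54.5, 65.5)
```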

Student learning objectives

Lachlan-Haché, L., Cushing, E., & Bivona, L. (2012, November). Student learning objectives as measures of educator effectiveness: The basics. Washington, DC: AIR. Retrieved from https://files.eric.ed.gov/fulltext/ED565765.pdf.

From the introduction:
AIR is working with states and districts across the country to improve teacher evaluation and feedback. Our work is focused on designing systems of educator evaluation and compensation that incorporate multiple measures of performance and, in particular, measures of student growth. In this work, student learning objectives (SLOs) have emerged as a novel approach to measuring student growth, particularly for the majority of educators not covered by a state standardized assessment (Prince et al., 2009). In this paper, we offer some ideas for states and districts that are considering the use of SLOs to measure student growth, including a basic description of SLOs, highlights of the SLO development process, and a discussion of their function within the evaluation cycle. For more detailed discussions of SLO implementation, benefits, challenges, and potential solutions, see the other papers in this series: Implementing Student Learning Objectives: Core Elements for Sustainability and Student Learning Objectives: Benefits, Challenges, and Solutions.

Lacireno-Paquet, N., Morgan, C., & Mello, D. (2014). How states use student learning objectives in teacher evaluation systems: a review of state websites (REL 2014–013). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northeast & Islands. Retrieved from https://eric.ed.gov/?id=ED544769.

From the summary:
Student learning objectives (SLOs) are one way to measure individual teachers' contributions to their students' learning growth. The SLO process is a “participatory method of setting measurable goals, or objectives, based on the specific assignment or class, such as the students taught, the subject matter taught, the baseline performance of the students, and the measurable gain in student performance during the course of instruction.” This method is an alternative to the more generally used value-added modeling with standardized test scores, which may not be available or appropriate for all teachers and subjects. This report presents information on the use of SLOs in teacher evaluation systems in 30 states. It aims to inform state and local policymakers engaged in creating or supporting the development of teacher evaluation systems that include SLOs.

Reform Support Network. (2012, December). Targeting growth: Using student learning objectives as a measure of educator effectiveness [Brief]. Washington, DC: U.S. Department of Education. Retrieved from https://www2.ed.gov/about/inits/ed/implementation-support-unit/tech-assist/targeting-growth.pdf.

From the introduction:
As States and districts implement educator evaluation systems that include measures of student growth, one of the challenges they face is identifying measures for nontested grades and subjects. Using student learning objectives (SLOs) is one promising approach to addressing this challenge.

SLOs have their origins in the experience of Denver Public Schools, which in 1999 began using them to link teacher pay to student outcomes. Districts like Austin Independent School District and Charlotte-Mecklenburg Schools, as well as States that won Race to the Top grants—including Rhode Island, Georgia, New York and several others—are building on the experience of Denver Public Schools and developing methods for using SLOs as a tool to incorporate measures of student growth for non-tested grades and subjects (NTGS) in their evaluation systems.

[Note: Overviews of and links to extensive SLO resources in New York, Denver, and Rhode Island are provided.]

Curriculum-based measurement

Deno, S. L. (2003). Developments in curriculum-based measurement. The Journal of Special Education, 37(3), 184–192. Retrieved from http://eric.ed.gov/?id=EJ785942.

From the abstract:
Curriculum-based measurement (CBM) is an approach for assessing the growth of students in basic skills that originated uniquely in special education. A substantial research literature has developed to demonstrate that CBM can be used effectively to gather student performance data to support a wide range of educational decisions. Those decisions include screening to identify, evaluating prereferral interventions, determining eligibility for and placement in remedial and special education programs, formatively evaluating instruction, and evaluating reintegration and inclusion of students in mainstream programs. Beyond those fundamental uses of CBM, recent research has been conducted on using CBM to predict success in high-stakes assessment, to measure growth in content areas in secondary school programs, and to assess growth in early childhood programs. In this article, best practices in CBM are described and empirical support for those practices is identified. Illustrations of the successful uses of CBM to improve educational decision making are provided.

From the excerpt: Measuring Growth in Secondary School Programs and Content Areas
CBM was developed initially to help teachers at the elementary school level increase the achievement of students struggling to learn basic skills in reading, writing, and arithmetic. As development in those areas has proceeded, teachers in secondary school programs have become interested in the application of similar formative evaluation approaches with their students. For that reason, technical work has proceeded on establishing CBM progress monitoring methods for assessing student growth both in advanced academic skills and in content area learning (Espin, Scierka, Skare, & Halvorson, 1999; Espin & Tindal, 1998). The technical developments in using CBM methods to assess growth in reading and writing at the secondary level have generated outcomes that appear both promising and tentative. In general, attempts to establish the criterion validity of the same reading and writing measures that have been used at the elementary level have revealed that those measures do correlate with important criteria (e.g., test scores, grade point average, teacher judgment), but the correlations are not as strong as for elementary students. One exception involves a recent study conducted by Muyskens and Marston (2002) in which correlations were high for students in eighth grade. That research was conducted with middle school students, rather than high school students, so it is possible that further studies will identify those upper levels of competence for which ordinary CBMs will be effective.
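CBM progress monitoring typically summarizes a student's growth as the slope of scores on repeated brief probes, for example words read correctly per minute measured weekly. The minimal Python sketch below illustrates that common convention; it is not a procedure taken from Deno's article, and the probe data and function name are hypothetical.

```python
def weekly_growth_rate(weeks, scores):
    """Ordinary least-squares slope of probe scores over time, e.g. the average
    gain in words read correctly per minute for each week of monitoring."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    numerator = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    denominator = sum((w - mean_w) ** 2 for w in weeks)
    return numerator / denominator

# Hypothetical oral reading fluency probes collected once a week
weeks = [1, 2, 3, 4, 5, 6]
wcpm = [52, 55, 54, 58, 61, 63]  # words read correctly per minute
print(round(weekly_growth_rate(weeks, wcpm), 1))  # 2.2 words per week
```

Comparing a slope like this against a target rate of improvement is a common way CBM data are used to judge whether an instructional program is working for a student.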

Fuchs, L. S., & Fuchs, D. (2004). Determining adequate yearly progress from kindergarten through grade 6 with curriculum-based measurement. Assessment for Effective Intervention, 24(4), 25–37. Retrieved from http://eric.ed.gov/?id=EJ793279.

From the abstract:
Curriculum-based measurement (CBM) bridges traditional psychometric and classroom-based observational assessment paradigms to forge an innovative approach to measurement, with several advantages over traditional and other forms of classroom assessment. This article provides a framework for extending CBM in two ways. First, the authors explain how CBM may be used effectively and efficiently to fulfill the Adequate Yearly Progress (AYP) accountability requirement of No Child Left Behind and how such an approach may be linked to special education accountability. Second, with the goal of tracking AYP across the elementary school grades, the authors introduce a system of CBM indicators that extends the CBM passage reading fluency task down to the beginning of kindergarten and up through the end of sixth grade. Across these two extensions to CBM, the goal is to provide a seamless approach to progress monitoring in reading across the elementary grades and across general and special education.

REL Pacific at McREL was unable to locate a free link to the full-text version of this resource. Although REL Pacific at McREL tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest. It may be found through university or public library systems.

Hosp, M. K., & Hosp, J. L. (2003, Fall). Curriculum-based measurement for reading, spelling, and math: How to do it and why. Preventing School Failure, 48(1), 10–17. Retrieved from http://eric.ed.gov/?id=EJ770008.

From the abstract:
The purpose of this article is to provide a rationale for collecting and using curriculum-based measurement (CBM) data as well as providing specific guidelines for how to collect CBM data in reading, spelling, and math. Relying on the research conducted on CBM over the past 25 years, we define what CBM is and how it is different from curriculum-based assessment (CBA). We describe in detail how to monitor student growth within an instructional program using CBM data in reading, spelling, and math. Last, we discuss reasons teachers should collect and use CBM data.

REL Pacific at McREL was unable to locate a free link to the full-text version of this resource. Although REL Pacific at McREL tries to provide publicly available resources whenever possible, it was determined that this resource may be of interest. It may be found through university or public library systems.

Methods

Keywords and Search Terms Used in the Search

The following keywords and search strings were used to search the reference databases and other sources:

  • “growth model” NOT “Dissertations & Theses”
  • “growth model” AND “AYP” NOT “Dissertations & Theses”
  • “growth model” AND “rural” NOT “Dissertations & Theses”
  • “growth model” AND “small” NOT “Dissertations & Theses”
  • “AYP” AND “small” NOT “Dissertations & Theses”
  • “charter” AND “growth model” NOT “Dissertations & Theses”
  • “AYP” AND “charter” NOT “Dissertations & Theses”
  • “accountability” AND “rural” NOT “Dissertations & Theses”
  • “accountability” AND “small” NOT “Dissertations & Theses”
  • “accountability” AND “charter” NOT “Dissertations & Theses”
  • “accountability” AND “native language” NOT “Dissertations & Theses”
  • “AYP” AND “native language” NOT “Dissertations & Theses”
  • “indigenous” AND “growth model” NOT “Dissertations & Theses”
  • “native” AND “growth model” NOT “Dissertations & Theses”
  • “indigenous” AND “measure” AND “education” NOT “Dissertations & Theses”
  • “native” AND “measure” AND “education” NOT “Dissertations & Theses”

Databases and Resources

Google/Google Scholar, ERIC, ProQuest Education Journals, QuestionPoint

Reference Search and Selection Criteria

The web search sought research studies, regulations, models, and practitioner guides published within the last 15 years. REL Pacific searched for documents that are freely available online, though several sources are included that may only be accessed online by purchase or through a library system. Resources included also had to be in English. Resources included in this document were last accessed in February 2016. URLs, descriptions, and content included in this document were current at that time.


This memorandum is one in a series of quick-turnaround responses to specific questions posed by educational stakeholders in the Pacific Region (American Samoa, the Commonwealth of the Northern Mariana Islands, the Federated States of Micronesia, Guam, Hawai'i, the Republic of the Marshall Islands, and the Republic of Palau), which is served by the Regional Educational Laboratory (REL Pacific) at McREL International. This memorandum was prepared by REL Pacific under a contract with the U.S. Department of Education's Institute of Education Sciences (IES), Contract ED-IES-17-C-0010, administered by McREL International. Its content does not necessarily reflect the views or policies of IES or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.