Research References
Conaway, C., Keesler, V., & Schwartz, N. (2015). What research
do state education agencies really need? The promise and
limitations of state longitudinal data systems.
Educational Evaluation and Policy Analysis, 37(1S),
16S–28S.
https://eric.ed.gov/?id=EJ1058609
Retrieved from https://journals.sagepub.com/doi/pdf/10.3102/0162373715576073
From the ERIC abstract: “State longitudinal data
systems (SLDSs) have created more opportunities than ever
before for rigorous research to influence education policy
decisions. As state practitioners who play central roles in
building and using our states’ longitudinal data systems, we
are excited about their promise for supporting policymaking
and research. Yet, we also recognize that the data in SLDSs
will not answer many of our most pressing research questions,
nor will the presence of these systems create the meaningful
collaboration between researchers and practitioners that we
feel is needed to inform our states’ policy landscapes. The
barriers to the kinds of research we need are mostly unrelated
to the promises of SLDSs. We outline the challenges we have
experienced in developing research agendas, building our
internal capacity for research, and working with external
partners, and we identify the research questions we need to
answer that are not easily addressed with SLDS data.”
Gottfried, M. A., Ikemoto, G. S., Orr, N., & Lemke, C. (2011).
What four states are doing to support local data-driven
decisionmaking: Policies, practices, and programs
(Issues & Answers Report, REL 2012-No. 118). Washington, D.C.:
U.S. Department of Education, Institute of Education Sciences,
National Center for Education Evaluation and Regional
Assistance, Regional Educational Laboratory Mid-Atlantic.
https://eric.ed.gov/?id=ED526134
From the ERIC abstract: “This report documents how four
state education agencies are supporting local data-driven
decisionmaking through their policies, practices, and programs
for creating data systems, improving data access and use, and
building district and school capacity to use data.
Specifically, this report, responding to District of Columbia
Office of State Superintendent of Education and Pennsylvania
Department of Education requests, describes how Arkansas,
Florida, Texas, and Virginia are supporting local data-driven
decisionmaking. Two questions guide this study: (1) What
policies or practices in Arkansas, Florida, Texas, and
Virginia support local use of data for education purposes; and
(2) How do Arkansas, Florida, Texas, and Virginia support
local use of data in selected state education
agency-administered programs? This study found that the four
states have implemented a range of policies and practices in
all three categories of the study’s analytical framework: (1)
Creating, expanding, and linking data systems. The four states
have created and built state repositories and are expanding
the types of data collected and warehoused to better equip
districts and schools to rigorously assess whether students,
schools, and districts are meeting state college readiness
requirements and career readiness expectations. (2) Ensuring
data access and use. The four states have implemented policies
and practices to help local educators and administrators
access, understand, and use data effectively. In doing so,
they are making data and analyses timely, readily available,
and easy to understand for parents, educators, and
policymakers. (3) Building district and school capacity to use
data. The four states have focused on strengthening local
human resource capability, mainly through partnerships and
professional development. By building local capacity to access
and analyze data stored in state longitudinal data systems,
the states intend to help local policymakers and practitioners
use data [to] inform key policy questions on performance and
improvement. In addition to state policies, the study also
identified five state programs supporting district and school
use of data (one in Florida, two in Texas, and two in
Virginia). Appended are: (1) Summaries of studies with strong
findings on state education agency support for local
data-driven decisionmaking; and (2) Study methods.”
LaPointe, M. A., Brett, J., Kagle, M., Midouhas, E., Sanchez,
M. T., Oh, Y., et al. (2009).
How state education agencies in the Northeast and Islands
Region support data-driven decisionmaking in districts and
schools
(Issues & Answers Report, REL 2009-No. 072). Washington, D.C.:
U.S. Department of Education, Institute of Education Sciences,
National Center for Education Evaluation and Regional
Assistance, Regional Educational Laboratory Northeast &
Islands.
https://eric.ed.gov/?id=ED505288
From the ERIC abstract: “The report examines the
initiatives of state education agencies in the Northeast and
Islands Region to support data-driven decisionmaking in
districts and schools and describes the service providers
hired to support this work. Four components of data-driven
decisionmaking initiatives are identified: (1) Centralized
data system/warehouse; (2) Tools for data analysis and
reporting; (3) Training on data systems/warehouses and tools;
and (4) Professional development in using data for
decisionmaking. Analysis of the four components across the
state education agency initiatives revealed that not all
initiatives include all four components, and implementation is
affected in part by available funding and capacity. The study
outlines considerations for education decisionmakers and
researchers on the potential benefits of implementing
additional components of a data-driven decisionmaking system,
sources of funding, and strategies to enhance their capacity
to support teachers and administrators. Ideas are proposed for
further research, including examining how state education
agencies scale up their data-driven decisionmaking
initiatives; exploring how state education agencies, schools,
and districts implement data-driven decisionmaking; and
analyzing the impacts of data-driven decisionmaking on student
and school outcomes. [This report was written with Young Oh
and Charlotte North. For summary report, see ED505289.]”
Opalka, A., Jochim, A., & DeArmond, M. (2019).
A middle way for states in the ESSA era: Lessons from Texas.
Seattle, WA: Center on Reinventing Public Education.
https://eric.ed.gov/?id=ED600606
From the ERIC abstract: “In 2017 the Texas Education
Agency (TEA) launched the Systems of Great Schools (SGS)
initiative. With a combination of incentives and capacity
building, SGS attempts to transform how school districts
approach school improvement. It calls on districts to manage
school performance in new ways, expand access to school choice
options, and take a dynamic approach to managing their supply
of schools. As one of TEA’s partners said, SGS is ‘basically
changing the operating system of the district.’ Unlike other
recent improvement efforts, SGS has set out not to change
individual schools, but entire systems. The promise of this
approach rests on the hope that districts, in turn, will
reinvent themselves in ways that enable them to eliminate
low-performing schools and foster higher-performing schools to
take their place. The policy environment in Texas has created
conditions that may help realize those hopes. The combination
of reprieve from potent state accountability, incentives to
partner with external organizations to improve low-performing
schools, additional capacity support and grant opportunities,
and a strong but flexible framework for locally designed
accountability systems help make SGS more appealing, and more
feasible, for districts. These policy tools are not new, but
the coordinated use of them to create meaningful incentives
for districts to voluntarily make system-level changes should
be of interest to state leaders elsewhere. Texas’ initiative
suggests several important lessons for other state leaders
interested in adopting ‘middle-way’ programs in other state
agencies, which we list in this report: (1) New programs don’t
necessarily require large new departments, but benefit from
creative reorganization and realignment of existing programs
and resources toward new strategic goals; (2) While it’s
important to attend to the organizational and human side of
change inside the state agency by finding ways to align with
existing work and strategies, it’s also crucial to secure
political support from the top and outside to make and protect
organizational and resource changes; and (3) Successful change
efforts require clear communication about the shifts the state
expects to make, what success looks like, and how they will
support districts to get there. While sustained improvement in
participating districts is not guaranteed with SGS, this
account of TEA’s early experience reimagining state-led change
can inform efforts in other states in the post-No Child Left
Behind era.”
Tanenbaum, C., Boyle, A., Graczewski, C., James-Burdumy, S.,
Dragoset, L., & Hallgren, K. (2015).
State capacity to support school turnaround
(NCEE Evaluation Brief, NCEE 2015-4012). Washington, D.C.:
U.S. Department of Education, Institute of Education Sciences,
National Center for Education Evaluation and Regional
Assistance.
https://eric.ed.gov/?id=ED556118
From the ERIC abstract: “One objective of the U.S.
Department of Education’s (ED) School Improvement Grants (SIG)
and Race to the Top (RTT) program is to help states enhance
their capacity to support the turnaround of low-performing
schools. This capacity may be important, given how difficult
it is to produce substantial and sustained achievement gains
in low-performing schools. There is limited existing research
on the extent to which states have the capacity to support
school turnaround and are pursuing strategies to enhance that
capacity. This brief documents states’ capacity to support
school turnaround as of spring 2012 and spring 2013. It
examines capacity issues for all states and for those that
reported both prioritizing turnaround and having significant
gaps in expertise to support it. Key findings, based on
interviews with administrators from 49 states and the District
of Columbia, include the following: (1) More than 80 percent
of states made turning around low-performing schools a high
priority, but at least 50 percent found it very difficult to
turn around low-performing schools; (2) 38 states (76 percent)
reported significant gaps in expertise for supporting school
turnaround in 2012, and that number increased to 40 (80
percent) in 2013; (3) More than 85 percent of states reported
using strategies to enhance their capacity to support school
turnaround, with the use of intermediaries decreasing over
time and the use of organizational or administrative
structures increasing over time; and (4) States that reported
both prioritizing school turnaround and having significant
gaps in expertise to support it were no more likely to report
using intermediaries than other states, but all 21 of these
states reported having at least one organizational or
administrative structure compared with 86 percent (25 of 29)
of all other states. Appended are: (1) Race to the Top and
School Improvement Grant Intervention Models as Described by
the U.S. Department of Education SIG Guidance (2012); (2)
State Interview Questions Used for Analyses in this Brief; and
(3) Analysis of State Capacity to Support School Turnaround by
RTT Status.”
Weinstock, P., Gulemetova, M., Sanchez, R., Silver, D., &
Barach, I. (2019).
National evaluation of the comprehensive centers program
final report
(NCEE 2020-001). Washington, D.C.: U.S. Department of
Education, Institute of Education Sciences, National Center
for Education Evaluation and Regional Assistance.
https://eric.ed.gov/?id=ED599103
From the ERIC abstract: “Between 2012 and 2018, the
U.S. Department of Education invested nearly $350 million in
22 Comprehensive Technical Assistance (TA) Centers operating
across the nation. These Centers were charged with delivering
TA that builds the capacity of state education agencies (SEAs)
to support local educational agencies (LEAs) in improving
student outcomes. Centers were given broad discretion in
interpreting and enacting this mandate. This evaluation sought
to address the open questions about how the Centers designed
and implemented the TA, what challenges they encountered, and
what outcomes they achieved. With thorough documentation of
how this process played out, stakeholders will be in a better
position to inform future program improvement. Key takeaways
from the study include: (1) Overall, Centers and their TA
recipients reported that the Centers’ TA improved the capacity
of SEAs to meet their goals; (2) Centers shared similar
approaches to the design and implementation of their TA. Those
Center practices perceived to be instrumental to building
capacity included: engaging a broad array of stakeholders to
provide input on policy; providing products and tools for SEA
staff to use as they took greater ownership of policy design
and implementation; imparting organizational practices and
structures resilient to SEA turnover and policy shifts; and
flexibly adapting TA in response to changing priorities and
needs. (3) Centers and their TA recipients pointed to a few
areas for program improvement, including clarification of the
Centers’ role and expected outcomes related to their work with
LEAs, and further guidance for SEAs about how best to use the
Centers. Overall, the evaluation found that Centers shared
similarities in their approaches to the design and
implementation of their work, and Centers and key TA
recipients reported that the work generally helped build SEA
capacity. These two projects are not necessarily
representative of all Center projects, but were selected to
bring the overall findings in this report more to life while
also recognizing the unique combinations of needs, strategies,
challenges, and outcomes that may make up each project. This
evaluation’s findings are consistent with the findings of the
prior national evaluation of the Centers, published in 2011.”