Institute of Education Sciences (IES) Board Room
80 F Street NW
Washington, DC 20001
National Board for Education Sciences (NBES) Members Present
David Chard, Ph.D. (Chair)
Susanna Loeb, Ph.D. (Vice Chair)
Anthony S. Bryk, Ed.D.
Adam Gamoran, Ph.D.
Robert Granger, Ed.D.
Kris D. Gutierrez, Ph.D. (by phone)
Larry V. Hedges, Ph.D.
Margaret R. (Peggy) McLeod, Ed.D.
Judith Singer, Ph.D.
Hirokazu Yoshikawa, Ph.D.
NBES Members Absent
Darryl J. Ford, Ph.D.
Bridget Terry Long, Ph.D.
Robert A. Underwood, Ed.D.
Ex-Officio Members Present
Sue Betka, Acting Director, IES, U.S. Department of Education (ED)
Thomas Brock, Ph.D., Commissioner, National Center for Education Research (NCER)
Peggy Carr, Ph.D., Acting Commissioner, National Center for Education Statistics (NCES)
Joan McLaughlin, Ph.D., Commissioner, National Center for Special Education Research (NCSER)
Ruth Curran Neild, Ph.D., Commissioner, National Center for Education Evaluation and Regional Assistance (NCEE)
Brett Miller, Ph.D., Health Scientist Administrator, Child Development & Behavior Branch, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health (NIH)
Chuck Pierret, Bureau of Labor Statistics
Ellie Pelaez, Designated Federal Official (DFO)
Laurie Miller Brotman, Ph.D., Bezos Family Foundation Professor of Early Childhood Development, Center for Early Childhood Health and Development, Department of Population Health, New York University Langone Medical Center
Holly K. Craig, Ph.D., University of Michigan (by phone)
Robert Gordon, Senior Advisor to Secretary Duncan, ED
Jonathan Guryan, Ph.D., Associate Professor of Human Development and Social Policy, Institute for Policy Research, Northwestern University, University of Chicago Crime Lab/Urban Education Lab
Rucker C. Johnson, Ph.D., Associate Professor, Goldman School of Public Policy, University of California-Berkeley
Ted Mitchell, Ph.D., Under Secretary for Higher Education, ED
Call to Order
David Chard, Ph.D., NBES Chair
Dr. Chard called the meeting to order at 9:00 a.m., and Ellie Pelaez, DFO, called the roll. Board members unanimously approved the agenda for the meeting.
Sue Betka, Acting IES Director
Regarding the budget, Ms. Betka said the agency is operating under a continuing resolution that remains in effect until mid-December at the same funding level as fiscal year 2014. The President requested increases for ED research, statistics, the statewide longitudinal data systems (SLDS), and special education studies, but those increases have not been approved. However, IES is receiving applications for special education and other research proposals and hopes to announce the SLDS competition soon. Ms. Betka said it is still possible that the Education Sciences Reform Act (ESRA) could be reauthorized during the last Congressional session before the end of the year.
The Department has received some input and candidate recommendations for a new IES director. Informal conversations about potential nominees are underway, and the Department continues to welcome input and recommendations. The White House vetting process and confirmation take a long time, said Ms. Betka; while the Department hopes the process will move forward quickly, she did not think there would be a new, permanent director for some time.
Appointments to the Board no longer require Congressional confirmation. The White House recently announced that Judith Singer, Ph.D., would be reappointed to the Board, and two new members, Deborah Phillips, Ph.D., and Michael Feuer, Ph.D., will be joining. One vacancy remains. Adam Gamoran, Ph.D., asked why appointment of the new members had not yet been finalized, and Ms. Betka responded that the required paperwork results in some delay.
Ms. Betka said the IES commissioners' reports would demonstrate some of the ways IES seeks to improve its activities. A well-developed performance review process is in place to ensure IES is moving forward.
Dr. Chard announced that this would be the final meeting for Board member Robert Granger, Ed.D., and thanked Dr. Granger for his service. He also welcomed Dr. Peggy Carr, the Acting NCES commissioner.
IES Commissioners' Reports
National Center for Education Evaluation and Regional Assistance
Ruth Curran Neild, Ph.D., NCEE Commissioner
Dr. Neild pointed out that just 2 years ago, the Board challenged the What Works Clearinghouse (WWC) to better disseminate and target products to different audiences. Since then, the WWC has come a long way, and Dr. Neild shared some examples of the "new normal."
So far this year, the WWC has had six topical campaigns highlighting WWC resources. Each campaign has its own landing page on the WWC website that links to previously released material, showing the connections to a specific topic. Sometimes the campaigns introduce new products, as with the second annual back-to-school campaign. For a campaign to help schools and districts select math instructional materials, the website offers an animated white board video. The video emphasizes that decisionmaking is a complex process and encourages users to compare WWC practice guides with materials of interest and supporting evidence.
In July, the WWC hosted a webinar on designing stronger studies to meet WWC standards. A webinar in October focused on resources for faculty of principal and teacher preparation programs in an effort to reach out to a new audience. Dr. Chard opened the webinar and addressed the importance of using evidence in preparation programs. Outreach materials about the October webinar went to the deans of every university school of education in the United States. Dr. Neild asked for input on other ways to encourage use of WWC products in teacher/leader preparation.
The Regional Education Laboratories (RELs) released 13 reports in the past quarter for a total of 29 reports in 2014. ED Week blogs and other outlets featured the reports. For example, REL West, as part of the Silicon Valley Research Alliance, helped school districts use their own data to assess placement and success in middle school math classes. The resulting report found that the Mathematics Diagnostic Testing Project assessment was as good a predictor of success in eighth-grade algebra as the standardized seventh-grade assessment test but took less time and produced results more quickly.
The REL program has introduced a new system of color-coded icons to categorize materials (e.g., causal analyses, policy issues, tools), which is intended to help audiences key in to the purpose of the report. The icons are being used more frequently in REL news flashes as well, said Dr. Neild.
Across its programs, NCEE is trying to focus on simplicity and clarity. An animated whiteboard video helps users navigate the new, more streamlined search mechanism for the Education Resources Information Center (ERIC). A guide addresses writing about research in everyday language so that findings are more understandable. Dr. Neild asked for input on other ways IES can support or encourage researchers to write more plainly.
In response to a Congressional mandate to evaluate the 2010 Teacher Incentive Fund grants, which provided performance-based awards, NCEE released its first report in September, along with a short summary of the findings. The report found a lot of confusion about whether teachers would receive a payout and when, as well as discrepancies between what teachers and districts knew about the maximum bonus amounts.
Hirokazu Yoshikawa, Ph.D., suggested that the WWC conduct extremely targeted campaigns for policy initiatives—for example, highlighting evidence about early childhood education, an issue under discussion now by the White House and some states. Researchers and advocates direct policymakers to the WWC as the best source for rigorous data, but more outreach by the WWC at conferences and other events around particular issues would be very helpful, he said. Dr. Neild agreed; she asked that Board members work with her on timing campaigns.
Dr. Granger praised the WWC for the tremendous progress made in the past year. He asked what efforts are made to collect information so that lessons learned are retained over time. Dr. Neild said that IES has brought on strong federal staff, and the WWC uses multiple prime contractors with the aim of encouraging a marketplace of ideas. Dr. Granger also asked whether the WWC partners with other intermediary organizations to get the word out about its products. Dr. Neild said that it does and gave an example of how the WWC worked with the Association for Supervision and Curriculum Development (ASCD) on choosing a math curriculum.
Dr. Singer also applauded the progress of the WWC. Now that the WWC has more deliverables to offer, she suggested developing strategic partnerships aimed at building specific audiences with professional organizations, such as the Council of the Great City Schools, and with other ED offices. Dr. Neild said the webinars were a first step toward reaching out to broader audiences, but a full-fledged strategy is needed.
Dr. Chard asked whether, through strategic partnerships, organizations like the National Council of Teachers of Mathematics will include prominent links to the WWC on their websites. Dr. Neild confirmed that such efforts are underway. Dr. Chard asked Board members to suggest other organizations as potential partners.
Anthony S. Bryk, Ed.D., said there are not many policy issues (as opposed to programs or practices) for which the WWC has evidence that can inform decision makers. He suggested that IES—though not necessarily the WWC—assess state initiatives for emerging issues, then gather and disseminate high-quality education research around those issues. For example, Ohio is considering mandatory retention at third grade; there is deep research on this issue, and IES could be informative, said Dr. Bryk.
Dr. Granger questioned whether the WWC is equipped to assess, for example, the long-term economic benefits of early childhood education, the best way to evaluate teachers, or the impact of teacher incentive programs. Dr. Neild said the WWC has tried to develop new products that address policy issues and hoped that a current attempt would be successful.
National Center for Education Research and National Center for Special Education Research
Thomas Brock, Ph.D., NCER Commissioner, and Joan McLaughlin, Ph.D., NCSER Commissioner
Dr. McLaughlin said NCER and NCSER have convened two technical working groups (TWGs) to solicit input from stakeholders. The first brought together a range of practitioners to discuss emerging needs in education and how to improve the relevance and dissemination of education research. The second technical working group brought together researchers and focused on strengthening IES' research and training grant programs. Additionally, in August NCER and NCSER posted a letter on the IES website requesting public comment on research and training programs; the letter posed several questions for the field.
Dr. McLaughlin said she and Dr. Brock would present the responses to the letter and more detail from the TWGs to the Board at its first 2015 meeting. She also highlighted several recommendations made by the researcher TWG.
Dr. Gamoran said some researchers believe that larger grants yield more powerful research activities, while others think smaller grants result in more researchers working independently. Dr. McLaughlin said the topic came up among the TWG members, and IES sees both as important.
Susanna Loeb, Ph.D., asked whether any of the recommendations of the research TWG were surprising. Dr. McLaughlin said the suggestion to focus on early-career researchers stood out; she added that the suggestion came from people who are well established but feel that IES should think more about the future of the field in concrete ways. Dr. Brock said the research TWG encouraged IES to be more assertive in identifying under-researched issues, and naming these as priorities in its Requests for Applications. An example of such an issue was understanding the components of effective teaching and what it takes to help teachers acquire these pedagogical skills.
In response to a request for clarification from Larry V. Hedges, Ph.D., Dr. McLaughlin said IES already offers workshops that provide training on methodology, including a 2-week session in randomized, controlled trials (RCTs); a 1-week session on single case design; and, for the first time starting in 2015, training in cost-benefit analysis. The research TWG suggested training in other types of methodology, such as adaptive treatment design. The same group suggested targeting some training to mid-career researchers so they could train others.
Dr. Bryk suggested IES look for examples where good analytic research transformed practice. For example, in the mid-1990s, schools started using transcripts to track students and provide guidance throughout their school careers. This approach evolved into a rapid analytics method now in use by the New York City school system. Dr. Bryk said other such examples should be identified; they are consistent with the work of the RELs and the goals of researcher-practitioner partnerships.
Dr. Granger said it would be helpful to compare the two TWGs, especially because the practitioner discussion sometimes gets overlooked. He noted that some of the recommendations would be expensive to carry out. He asked how IES could build a constituency that allows it to either get more funding or stop doing things that are no longer useful. Dr. Granger thought that perhaps the interest in evaluation could build Congress' appetite for increasing IES funding. Dr. McLaughlin agreed that IES does not have enough money to implement all the recommendations.
Dr. Singer pointed out that the TWGs may not be representative of the field. That is, the needs of people who work in contract research organizations differ from those of people at research universities, or teacher education programs, or RELs. Also, NIH has been experimenting with different outreach strategies; it recently granted resources to institutions to create massive open online courses (MOOCs) for professional development. Dr. Singer suggested thinking creatively about how to scale up programs. Competition for limited space in training and professional development programs does not necessarily mean those programs reach the best audiences. Other mechanisms could have a larger impact. Dr. Singer noted that IES could follow NIH's lead and offer competitive funding for institutions with established online learning platforms.
Brett Miller, Ph.D., said NIH tries to align its efforts with its goals, and the funding for MOOCs seeks to disseminate education broadly. In-person training is still offered as a complement; technology without human interaction is not always helpful. Dr. McLaughlin agreed that small group training is useful, especially for early-career researchers. Dr. Singer said MOOCs or other approaches are an alternative to in-person training, not a replacement. Blended approaches could achieve even more, she added.
Dr. Bryk pointed out that in the past NCER and NCSER have created knowledge networks—panels of experts gathered to provide a range of perspectives around politically contested policy issues for which a lot of research exists. The networks created consensus reports, which were well received, and the efforts were not expensive. Dr. Bryk said this approach can help clarify what is known when it appears there is no consensus, because advocates are looking at various sources independently. He suggested IES consider the knowledge network approach again.
National Center for Education Research and National Center for Special Education Research (continued)
Dr. Brock described another joint effort to gather stakeholder input: online surveys of applicants and grantees. This past summer, NCER and NCSER took part in a Department-led customer satisfaction survey of grantees. While the survey provided an opportunity for NCER and NCSER to learn how they compare with other grant-making units, Dr. Brock cautioned that the response rate was low (about 40 percent for the Department as a whole), so the findings may not be highly reliable. NCER and NCSER grantees had a higher response rate (about 60 percent) and generally indicated satisfaction with IES's grant programs and particularly with their relationships with program officers.
Grantees gave NCER and NCSER staff high scores for their knowledge, responsiveness, and collaborative assistance. Dr. Brock pointed out that the program officer function at IES is unique within the Department; IES hires content experts and research scientists and separates them from the grant review process so they can engage with grantees directly. Conversely, grantees gave the Department and IES low scores for their online resources. Respondents said they had difficulty submitting information online and navigating the websites. Efforts to improve the website are underway and will launch in spring of 2015.
The first survey of IES grant applicants in 2013 focused on improving the requests for applications (RFAs). In response to the results, NCER and NCSER dramatically revised the RFAs—combining the submission guide with the RFA, reformatting, and clarifying the minimum requirements and distinguishing them from recommendations. The 2014 survey gathered feedback on the changes. Over half of those with experience applying to IES thought the revised RFAs were better, and almost no one thought they were worse.
The applicant survey revealed that some confusion persists among IES applicants about the dissemination requirements in RFAs. Staff have also observed this in their initial screening of proposals for responsiveness. Some of the dissemination plans submitted are simplistic, and few demonstrate much creative thinking about audiences or new methods of communicating findings. Dr. Brock asked the Board to consider whether IES should provide more guidance or tools to help applicants craft better dissemination plans. Also, he asked whether IES should reconsider how the dissemination plan is scored. He also asked whether it should be separated from the research plan so that applicants get more feedback.
National Center for Education Statistics
Peggy Carr, Ph.D., Acting NCES Commissioner
Dr. Carr announced that the National Assessment of Educational Progress (NAEP) Achievement Gaps Report, which addresses gaps by race, will be released soon. Also, the National Teacher and Principal Survey will replace the Schools and Staffing Survey, decreasing the burden on respondents and increasing efficiency. The change will also incorporate the Civil Rights Data Collection, EDFacts, and the Common Core of Data program, all in an effort to do more with less funding.
Dr. Carr said that by analyzing the purpose of all of NCES' assessment activities, IES could further improve efficiencies in cost, response burden, data quality, and administration, among other areas. She proposed the NCES Integrated Assessment Systems (NIAS), which would look at all the components of the large-scale assessments—including the NAEP, the Trends in International Mathematics and Science Study (TIMSS), the Programme for International Student Assessment (PISA), the Progress in International Reading Literacy Study (PIRLS), and the Programme for the International Assessment of Adult Competencies (PIAAC)—as well as middle school, high school, and some primary school longitudinal surveys. Many of these focus on common areas of content. As an example, Dr. Carr asked whether the algebra focus in the NAEP differs significantly from that in the middle school assessments.
Dr. Carr said NCES continues to explore new item types for assessments, such as scenario-based tasks. These approaches require a lot of resources to develop, so it is necessary to increase efficiency. Dr. Carr pointed out that the NAEP and TIMSS jointly assessed math and science in 2011; the assessments were administered in the usual manner, and equivalent groups took one or both tests. The engagement of the state NAEP coordinators resulted in significantly improved response rates for TIMSS, saving over $1 million in recruitment costs for TIMSS.
The NIAS approach seeks to gain such efficiencies. Some government bodies would have to agree to changes in the schedule of assessments. Even spiraling some assessments together within the same school would conserve resources and reduce the burden on schools.
In addition, TIMSS and PIRLS are adding technology-based assessments, and the information gained from these efforts could be used to support technical design decisions for both. Boston College is field testing proof of concept for e-TIMSS. The NAEP has already provided lessons learned regarding which tablets are viable for the assessments and how to adapt paper-and-pencil items to digital media so that results can be tracked consistently over time.
Dr. Carr said the NIAS approach has many implications and will take a long time to fully realize, but small steps will make a difference. She requested Board input on the concept.
Dr. Loeb supported the NIAS concept and the notion of improving efficiency, but there are also benefits to having a variety of sampling groups. She asked whether Dr. Carr had considered the balance between combining assessments and maintaining diverse samples. Dr. Carr said NCES does a lot of alignment studies to reveal the similarities and differences among samples; she said the samples are surprisingly similar. Early iterations of the NIAS approach would not combine samples, said Dr. Carr, but NCES should seek opportunities to link assessments in some schools.
Dr. Granger worried about constituencies losing resources in light of decreased IES funding. He asked what mechanisms NCES has for getting input from audiences. For example, are consumers of NCES reports getting the kind of international and longitudinal data they need? Dr. Carr responded that efforts to link assessments would address such gaps. For example, a lot of civil rights data could easily be linked at the school level, but NCES has yet to accomplish such links. Linking such data would provide researchers more context for their investigations.
Dr. Carr added that NCES has launched a website of lessons learned to share more information. For example, state testing directors and policymakers considering technology-based assessments need to know what has been learned so far. All of the IES centers are focusing on determining what information needs to be disseminated quickly.
Dr. Hedges said sharing lessons learned about technology-based assessments is an example of the kind of steps NCES can take to leverage what it has. He said NAEP once again has the opportunity to lead the field in how assessments are done.
Dr. Bryk suggested that the next time IES receives a substantial funding increase, it should focus on two areas. First, grant funding could help districts and states build the capacity to create large data networks and improve the quality and commonality of data collected. The cost of collection is already shouldered by the states. Combining and coordinating data collection is a natural role for NCES that opens up the possibilities afforded by big data.
Dr. Bryk said the NAEP is an ideal mechanism for innovation in assessment. The second area for future funding consideration should be close evaluation of what is measured—and thus valued—now and how to expand measures to collect the kind of data that can better assess the quality of schools.
Dr. Gamoran asked whether NCES has made any progress toward linking SLDS with other federal data sets. Dr. Carr said the NAEP is doing so in six states, focusing on preparedness in addition to entrance examination performance (because the selected states follow students' postsecondary school pursuits).
Dr. Yoshikawa said the United States could lead the way beyond reading, math, and science to assessing other important domains, such as socio-emotional learning. The global conversation is shifting to a broader vision of outcomes. At present, the major data sets do not capture even the most basic aspects of socio-emotional learning.
Research on Supporting the Development of Boys and Young Men Growing Up in Areas of Concentrated Poverty
Opening remarks by Susanna Loeb, Ph.D., NBES Vice Chair, and Thomas Brock, Ph.D., NCER Commissioner
Dr. Loeb said the Board has discussed how to target research to meet the needs of special populations, such as English-language learners and students with special needs. Boys and young men growing up in impoverished areas are another important population. The presenters were invited to share their research as a basis for Board discussion about how to target or encourage new research to better understand this population.
Dr. Brock noted that the development of boys and young men in poverty comes up frequently in grants funded by NCER. For example, NCER has supported RCTs on stereotype threat and work on sociobehavioral interventions, particularly disciplinary issues (for which young men of color are disproportionately called out). Dr. Brock also noted that NCER is thinking about how to foster more diversity among researchers, which could lead to more focus on such issues in future studies.
Laurie Miller Brotman, Ph.D., Bezos Family Foundation Professor of Early Childhood Development, Center for Early Childhood Health and Development, Department of Population Health, New York University Langone Medical Center
Dr. Brotman said nearly half of American children under age 5 live in poor or "near-poor" families. Poverty is associated with negative outcomes across all five key domains of childhood development: behavioral, health, learning, social, and emotional. Dr. Brotman gave several examples of disparities in early education—particularly among the poor, minorities, and boys—that influence achievement later in life.
Dr. Brotman explained that in the research population of over 1,000 low-income Black and Latino pre-K students, boys were at higher risk for academic problems and were more likely to have problems with self-regulation than girls, even after a year of pre-K. There is clear evidence that children who live in adversity have problems with self-regulation, which kicks off a cascade of disruption and dysfunction with long-term implications for health and productivity, she said.
The family-centered, school-based intervention ParentCorps focuses on children deemed not ready for school but is appropriate for all populations. ParentCorps engages and supports communities of parents and early childhood teachers in promoting high-quality home and classroom experiences for young children, thus strengthening learning, behavior, and health. The program involves professional development for school staff, a program for pre-K students, and a program for parents and siblings. The programs increase effective behavior management, nurturing relationships at home and in school, and family engagement, leading to improved self-regulation and early learning.
Early trials demonstrated that ParentCorps can interrupt the cascade of dysfunction that contributes to poor outcomes. Participants improved their stress response, motor skills, and social learning; by the end of second grade, they demonstrated improvements in reading, math, and writing skills. The program also had indirect positive effects on obesity and physical activity levels.
Dr. Brotman and colleagues continue to gather data on ParentCorps from multiple trials with cohorts from pre-K through the transition to middle school. ParentCorps provides professional development during the school day and hires pre-K teachers to implement the afterschool family programs. The family program consists of 13 early evening sessions, 2 hours each, during which pre-K children and their siblings participate in an arts group while the parents receive education. Meals are provided, which, along with childcare, serves as an incentive to participate. Of families eligible, 58 percent participated. On average, parents came to 10 of the 13 sessions; 39 percent of families took part in at least five sessions, which was considered a "full dose" for the purpose of the study.
ParentCorps has had a positive effect on parenting, support, involvement, and behavior management, Dr. Brotman reported. The more sessions attended, the better parents became at these behaviors. Students demonstrated improvements in behavior management and productivity over time; they showed significantly higher reading and math achievement scores. The longer they took part in the program, the greater the impact.
All students benefited from participation in ParentCorps, but those who took part for 4 years and whose families also took part showed high rates of reading achievement. The findings of the research so far are the basis for a full effectiveness trial and expansion of ParentCorps to 100 New York City schools over the next several years.
The Link Between Dialect and Literacy Achievement
Holly K. Craig, Ph.D., University of Michigan
Dr. Craig reiterated some of the links between poverty, health, and learning, noting that children in poor communities may start school with problems in cognitive and language development. For example, in first grade, low-income students score about one standard deviation below middle-income students on reading achievement tests, and that gap persists at least throughout elementary school.
Dialects play a role in reading achievement. Teachers and educators overall tend not to know much about dialect. Every language has dialects, and they are determined by regional and sociocultural factors. Academic and professional discourse and written English rely on Standard American English (SAE). African American English (AAE), like all dialects, has its own grammar; it is as complicated as SAE but judged negatively, said Dr. Craig.
Children who speak AAE begin school with about 40 features of speech that differ from those who speak SAE. Dr. Craig outlined some of the grammar of AAE, which is as rule-governed as SAE. She and her colleagues conducted a pilot study of Toggle Talk, a curriculum to help African American kindergarten and first-grade students learn SAE in a constructive way (not just repetition and correction from teachers). Toggle Talk uses the terms "formal" and "informal" to describe SAE and AAE, respectively. Teachers appreciate having a tool for addressing AAE as a separate dialect, and children appear to be comfortable, too. After 8 weeks of the curriculum, students showed better decoding skills on a standardized reading test. Dr. Craig said she encourages other researchers to study AAE in their schools.
Not Too Late: Reducing Disparities in Academic Outcomes Among Youth
Jonathan Guryan, Ph.D., Associate Professor of Human Development and Social Policy, Institute for Policy Research, Northwestern University, University of Chicago Crime Lab/Urban Education Lab
High school graduation rates for students of color have not improved much over the past 30 years. While much research focuses on early intervention, Dr. Guryan's research demonstrates that intervention in the adolescent years is not too late, nor is it more costly. In fact, because it is easier to identify those students at highest risk for dropping out when they reach adolescence than in early childhood, targeted interventions in later years may be less costly, he noted. Targeting older students also means the effects of the intervention are less likely to fade out over time.
One approach is cognitive behavioral therapy (CBT), which Dr. Guryan and colleagues tested among students at the Cook County (Illinois) Juvenile Temporary Detention Center, which has its own public high school branch. Instead of focusing on the root causes of delinquency and dropout rates, which are difficult and expensive to address (e.g., poverty, family structure, exposure to violence, and systemic racism), CBT encourages an individual to recognize and eventually change negative thinking and behavior. Among those students who took part in the CBT intervention for about 6 hours each week, recidivism dropped dramatically after about 1 year.
Another program, Becoming a Man (BAM), incorporates CBT along with discussions about values, integrity, accountability, expressing anger, and making decisions in a weekly 1-hour group session during school. Dr. Guryan described an exercise that serves as a springboard for discussion about the complex concept of hostile attribution bias. The program had positive effects on rates of violence and arrest while the participants were in the intervention, but those effects faded out completely after 1 year. However, increased school engagement, as measured by attendance and grades, persisted for 1 year.
Dr. Guryan also described a purely academic approach, highly intensive math tutoring, developed by Match Education. Each tutor works with two students with similar skill levels, providing direct instruction on specific problems and progressing at whatever speed is appropriate for the students. A pilot study of 100 students who took part in BAM and the Match Education tutoring in a community with high rates of crime and violence found significant positive effects, narrowing the test score gap between Black and White students. The participants also had fewer course failures, better attendance, and higher math scores.
Dr. Guryan pointed out that the cost of such tutoring is between $2,500 and $3,300 per student per year, and one tutor can work with many students over the year. In addition, many tutors take on the job as part of a commitment to service and agree to low pay.
Taken together, Dr. Guryan concluded, these efforts show that adolescence is not too late for intervention for low-income students. He proposed that such interventions target academic mismatches in the classroom or nonacademic barriers to learning.
Effects of School Spending on Adult Outcomes: Evidence from School Finance Reforms
Rucker C. Johnson, Ph.D., Associate Professor, Goldman School of Public Policy, University of California-Berkeley
Dr. Johnson explained that his research looks at the impact of education spending reform and policies to improve equity and access to education for poor children. The research combines evidence from the rollout of school desegregation policies, financial reforms, and pre-K investments, among other efforts. Dr. Johnson emphasized that schools hold a magnifying glass to social problems; for example, they demonstrate how low-income, minority children are largely confined to schools in neighborhoods with high concentrations of poverty and crime. Segregation by race/ethnicity is most concentrated among school-aged children.
While education spending overall began to increase in the late 1960s, differences in the distribution of spending persisted. Inequities in school funding (mostly from disparities in local property taxes) led to a reform movement, which resulted in dramatic changes in spending in K–12 education, some mandated by courts, some by legislation. Dr. Johnson and colleagues compiled all the data on the timing of school financing formula changes from 1962 through 2010, at the school district level, and compared them with outcomes data to determine the effect of the changes on the distribution and level of spending and the degree to which the changes affected student outcomes.
Dr. Johnson's sample includes 15,000 children born between 1955 and 1985 and followed through 2011, more than half of whom are from low-income families. The age of the subjects allows Dr. Johnson and colleagues to compare students who were or were not affected by funding reforms. They also looked at the amount of increases in spending, allowing them to assess both duration and intensity of the effects.
The data clearly show that as education spending increases in a given district, the rates of education attainment, high school graduation, and adult earnings all increase for low-income students, while rates of adult poverty decrease. Across the findings, Dr. Johnson emphasized, the relationships correlate significantly with spending increases and with dose response (i.e., more spending over a longer time leads to better outcomes). Increased spending had less effect on students who were not poor.
To illustrate, Dr. Johnson described the outcomes for low-income students who experienced a 10-percent increase in education spending each year across K–12.
Notably, the effects of a 20-percent increase in school spending are large enough to reduce disparities in outcomes between children born to poor and non-poor families by at least two thirds, which would eliminate the education attainment gap.
Dr. Johnson concluded that school resources and money matter. Increasing school spending leads to higher teacher salaries, smaller class sizes, and longer school years, among other things, all of which contribute to better outcomes for students over time. Improved access to school resources can profoundly shape the lives of disadvantaged children, reducing the intergenerational transmission of poverty.
Since the Great Recession, schools have faced tighter budgets, and the impacts of state budget cuts on schools will affect student outcomes for many years to come, Dr. Johnson added. Also, the long-term productivity of education spending should take into account measures beyond traditional education outcomes, such as racial disparities in incarceration rates. There is evidence that investing in school quality can reduce crime and other negative social outcomes.
Dr. Chard asked each presenter to discuss how IES can stimulate more research and have a greater impact. Dr. Johnson said that, in terms of effectiveness, how interventions or policies are implemented matters more than the intervention or policy itself; a better understanding of the tools of implementation is needed. Qualitative research should inform quantitative research, said Dr. Johnson.
Dr. Guryan suggested IES continue to support RCT evaluations underway in schools. People tend to think that RCTs are expensive, but that is not always true. The main costs are related to supporting the intervention and collecting data. However, much can be learned about effectiveness using existing data sets. In addition, IES should help build up administrative data sets and facilitate their use for research, said Dr. Guryan.
Dr. Brotman said that for complex interventions, mechanisms for evaluating the effectiveness are underresourced. The burden on investigators is enormous. Linking evaluation of effectiveness to other lines of research, such as cost-benefit analysis, and linking intervention data with administrative data sets would be helpful.
Dr. Craig said she has learned the importance of ensuring that grants are transparent to the stakeholders. The theory, rationale, and potential impact of the intervention should be clear to the schools and the teachers. Dr. Craig said she hoped IES' new focus on disseminating results would lead to more meaningful dissemination.
The Board adjourned for lunch at approximately 12:30 p.m. The public meeting resumed at 1:05 p.m.
Higher Education Rating System
Ted Mitchell, Ph.D., Under Secretary for Higher Education
Dr. Mitchell pointed out that President Obama's education priorities focus on increasing the percentage of young Americans who attend and complete postsecondary education. To that end, the Administration has expanded Pell Grants and loan programs and decreased interest rates on student loans. By converting bank-operated student loans to direct loans, millions of dollars have gone back into federal financial aid. Dr. Mitchell said these and other steps are sound investments that demonstrate a commitment to youth.
Among the measures of access to higher education are the rates of college acceptance and attendance by African American and Latino students. Research funded by IES has been key in helping students navigate the transition between twelfth grade and postsecondary education. However, half of those enrolled do not finish, often because they lack the academic skills required. Developmental or remedial education is a priority for educators and researchers that requires more attention from all stakeholders. Dr. Mitchell said the need for remedial education is a problem of theory, execution, and policy; the Administration wants to help from the policy side once there is settled theory and evidence to support new policies.
Current research on the value of a college degree is compelling, said Dr. Mitchell. Completing college or getting college credits pays off economically for students and contributes to life skills; it also leads to reduced rates of incarceration and lower health care costs. Thus, higher education is a sound investment for society and for individuals.
To address affordability, the Administration increased federal student aid, but the effect has not been as strong as it had hoped, in part because new investments have mostly been used to offset the gap between the price of postsecondary education and diminishing resources invested by the states. More costs have been passed on to individuals and families. Therefore, the Administration continues to be concerned about the increasing costs of higher education, which are already high.
The Administration has targeted four areas for improvement: access, affordability, outcomes, and value. It is clear that the consuming public (parents and students) and the observing public do not see how the costs of postsecondary education relate to these four key areas. Therefore, President Obama tasked ED with creating a rating system to improve understanding and transparency of the price of postsecondary education. The consuming public should have access to information for decisionmaking that comes from a neutral, independent source, and the observing public should be able to see what taxpayer investments ($150 billion in student aid alone) are achieving.
Dr. Mitchell said ED has engaged in public meetings, sought input, and begun working with researchers and methodologists to build a reliable, valid set of metrics that rest on data and from which conclusions can be drawn. In that effort, IES has helped identify and interpret existing data. Dr. Mitchell hoped IES would continue to reveal any data issues or methodological flaws, which is critical to ensuring a reliable product.
One approach to ratings is being tested now using metrics for affordability, access, and outcomes. Staff is looking at ways to measure key indicators and testing them against current data to see whether there is variance that is impossible to bridge or whether methods can be simplified.
Among the overarching principles for creating a rating system are 1) remain humble about the limits of existing data and 2) do not create perverse incentives for institutions. For example, commenters pointed out that a rating system could discourage institutions from recruiting and retaining first-generation, low-income students. The rating system should also be transparent in its methodology. It is intended to evolve over time. The version being studied now is not perfect, but it will help elucidate what should be measured and inform decisions about getting better data. Ideally, the rating system will spur development of increasingly robust data that will be shared. Dr. Mitchell said the recent technical working group (TWG) report on the Integrated Postsecondary Education Data System (IPEDS) represents a conversation between program and policy advocates that benefits everyone.
A model of the ratings system will likely be released this fall, and ED will accept comments. Comments will be addressed, and an initial working version will be released, but staff will begin gathering information for the next iteration immediately.
Dr. Mitchell said IES is a strong conceptual partner that has helped navigate IPEDS and other data sources around ratings. The rating system is intended to change institutional behavior and individual student decision making, and ED will have to gather evidence on that hypothesis. Many questions have been raised about the potential impact, if any, of the rating system. For example, if the ratings provide a clearer picture of the costs, will students and families become more price-sensitive? If they do, will institutions respond? There is already some evidence that simply starting the conversation about ratings and price has had some impact, as some states have capped or fixed tuition rates.
In addition to providing raw data, IES has assisted with development of consumer tools, such as the College Navigator and the Financial Aid Shopping Sheet. Once the ratings are in place, Dr. Mitchell suggested talking with IES and others about how to create more consumer-friendly tools.
Dr. Mitchell asked for suggestions from the Board about large collaborative projects between IES and the Office of Postsecondary Education. For example, the proliferation of models for teacher preparation would benefit from serious evaluation research. In November, ED will be disseminating regulations on teacher preparation and will ask states to begin collecting much more detailed data from institutions and school districts about teachers. Such data collection is made possible by the investment in statewide longitudinal data systems (SLDS). What works in teacher preparation is a big, interesting question, said Dr. Mitchell, and measures are needed to guide program and system improvement.
Dr. Granger asked about the biggest political and methodological barriers facing the rating system. Dr. Mitchell responded that there has been great willingness across political divides to come together for student growth. The biggest methodological challenge has been measuring learning outcomes, in part because it is hard to measure the contribution of institutions to the outcomes and because of historic and cultural problems. Expectations for learning outcomes differ across settings and so are hard to compare. While there is no easy answer, Dr. Mitchell said that having a national comprehensive test on a given subject is probably a bad idea. Dr. Gamoran pointed out that no one has yet identified a valid indicator of learning outcomes for postsecondary education.
Dr. Gamoran said the growth in postsecondary attendance is largely occurring at 2-year institutions, and he asked how the ratings system would address them. Dr. Mitchell replied that the new normal for postsecondary learning involves students who are older, do not live on campus, and consume education piecemeal over a long time. He acknowledged that ED's mindset and some data construction are based on the old model of young students attending 4-year, residential colleges. Two-year community colleges are the home of the new normal, and ED wants to support those institutions and their students, said Dr. Mitchell. The rating system aims to be comprehensive and should not create perverse incentives that get in the way of what community colleges do, he added.
Dr. Singer said that one drawback of rating institutions is the difficulty of ascribing value added to teachers. Considering the heterogeneity of institutions, both 4-year and 2-year, in terms of goals and programs is complicated. The fact that students neither apply nor are selected randomly skews the results, as do geographic factors, given that many students attend college within 50 miles of home. Dr. Singer said existing rating systems imply that all institutions share the same values, which is problematic.
Dr. Mitchell agreed with Dr. Singer, noting that the points raised are among the root problems being addressed. He added that ED seeks to measure similar institutions against each other, if possible. He emphasized the difference between ratings and rankings, saying that minute factors that differentiate one school from another are less relevant than categorical measures compared across peer institutions. He also noted the difficulty of describing student characteristics.
Dr. Singer responded that using broader groups has the advantage of not pitting schools against each other. On the other hand, studies of rating systems used by Amazon and Yelp, for example, indicate that a rating on either side of a cut point has no intrinsic meaning but has clear effects on success. Dr. Mitchell said staff has discussed how institutions tend to focus resources on the narrow population at the cutoff points.
Dr. Bryk expressed concern that society values what it measures, which affects institutions. By definition, you get more of what you measure and less of what you do not measure, he said. He suggested IES support research on understanding the perverse effects of mechanisms applied to complex systems. It is important to understand the consequences of the rating system. Dr. Bryk said he is less concerned about access to and cost of postsecondary education than he is about the outcomes.
Dr. Mitchell said there are two approaches: measure a few, discrete things that the government and the public should care about or measure more things to get a more complete picture. He asked the Board for input on the tradeoffs of each approach.
Dr. Loeb said that if we could figure out the selection side of the equation, we would not have to worry about the outcomes. In K–12, causes can be measured. In higher education, selections are made on the basis of a lot of unobservable factors and are not subsidized enough. Dr. Loeb suggested looking at high school records in state databases. Dr. Mitchell said ED is testing some metrics in state databases to see whether they can be tracked accurately backward and forward. He added that now is a great time to talk about student unit record systems again.
Dr. Mitchell said there should be a group of scholars (other than the NBES) to advise ED on the rating system on an ongoing basis. The group should consist of people who work closely enough with the data to ask the important research questions. Dr. Hedges suggested an approach similar to that of the NAEP, in which a group of scholars is designated to monitor the rating system to understand the consequences continuously. Dr. Mitchell agreed that having a monitoring body is a good idea.
Dr. Chard returned to the concern about institutions that have different missions. How, for example, would the rating system compare a regional liberal arts institution with a liberal arts institution that has a Tier 1 research facility? Dr. Mitchell said he is open to suggestions. He said commenters have already raised concerns that the rating system should not be reductionist, for example, by measuring only graduation rates, postgraduate experiences, or earnings. It is also important to figure out what the question of interest really is and whether there is a valid way to measure it. The question can become very complicated quickly, said Dr. Mitchell.
Dr. Yoshikawa asked whether any thought had been given to reverting to a more multidimensional approach and starting with an area where more is already known. He asked whether the first iteration of the rating system is required to measure access, outcomes, and affordability. Dr. Mitchell said ED intends to put forth a model or at least some questions this fall and is open to modification based on feedback. Dr. Yoshikawa pointed out that the research agenda at each institution is different and the criteria for assessing outcomes already vary a great deal. Dr. Mitchell said that over time, the rating system would use more sophisticated metrics and the system would evolve, but Dr. Yoshikawa stressed that recalibrating measures after changing the methods is problematic.
Dr. Granger noted that the rating system is an attempt to improve on the current chaotic, dysfunctional, and perverse system. The goal is to devise a system that improves decisionmaking among families and students, particularly around public institutions serving a lot of low-income students. The highest ranking, most selective institutions are not a major concern, said Dr. Granger. He urged focus on understanding and improving the current decisionmaking processes. Any boundaries between IES and the rest of ED are permeable, he continued; ED can ask IES for information, and if it is not available, ED can dedicate funds to get the information. The rating system is a perfect opportunity for the policy and research sides of the Department to work together more closely, he concluded, and Dr. Mitchell strongly agreed.
Dr. Mitchell said there are serious methodological and institutional issues to think through so that the rating system is credible and useful, which is why ED wants to put something out soon that will elicit reactions from experts on theory and practice.
Dr. Singer pointed out that because so much information is concealed, existing rating systems are useless. Moreover, institutions are not monolithic: an individual's experience may differ more from that of other students at the same school than from that of students at another school. Users will look at the numbers in the rating system, but there is no way to address the heterogeneity among and within schools, said Dr. Singer.
Dr. Bryk said a rating system will likely cover a lot but not inform any particular problem very well. It would be better to apply a laser-like focus on questions of interest, such as access for low-income students or the retention rate of students whose parents had no college education. Dr. Bryk suggested targeting research on particular questions, because the rating system is unlikely to accomplish its stated goals. Dr. Mitchell countered that ED should do both, establishing the rating system to provide an overall picture of the sector that is informative about individual institutions among a subset of categories and directing research resources on the basis of what is learned from the ratings or otherwise.
Dr. McLeod questioned the need for the rating system. She did not think results would be meaningful to the typical user. What might be helpful is a guide that indicates, for example, the likelihood that a Latino student with a certain grade point average and family income would graduate from a given institution, said Dr. McLeod. She asked whether the rating system would provide enough information beyond costs for parents, counselors, or students to make decisions about their investment. The rating system is not being created for upper- and middle-class students but for low-income students. Latino high school graduation rates are increasing, Dr. McLeod added, and Latinos are the largest group of students going to college but not graduating.
Dr. Mitchell said the example Dr. McLeod provided defines what ED hopes to create with the rating system. He emphasized that ED is not ranking institutions but rating them across different factors. Dr. Bryk said someone will translate the ratings into a ranking.
Dr. Hedges noted that there is important information in many dimensions, and that information will be of varying importance to different user groups. Having a lot of information may make things worse, not better. Therefore, attention should be paid to helping people understand how to create their own meaning out of the ratings, which is challenging, but otherwise the product will not be useful.
Dr. Mitchell said the College Navigator tool does a good job of helping users sift information according to their own interests. He hoped that as the rating system evolved, users could interact with it in more sophisticated ways. Dr. Hedges stressed the importance of testing the tools to learn how useful they are to populations of interest, because the utility of tools is not always obvious. Dr. Mitchell agreed and hoped that researchers would help address some issues while building and testing the prototype.
In closing, Dr. Mitchell assured the Board that discussions are underway and the model is not already built. He looked forward to more in-depth conversations once the model is created. Dr. Chard invited Dr. Mitchell to reach out to individual Board members for input, if desired, and Dr. Mitchell invited Board members to contact him at any time.
Elementary and Secondary Education Act (ESEA) Pooled Evaluation Authority
Opening Remarks by Ruth Curran Neild, Ph.D., NCEE Commissioner
Dr. Neild explained that the Department's evaluation division and program offices work together to develop ideas for evaluation and see those efforts through to publication. Typically, evaluations are conducted by the evaluation division using program dollars. On balance, said Dr. Neild, it is appropriate that programs invest in building the evidence base and bear the cost of doing so.
Dr. Neild pointed out that, because program dollars fund the efforts, identifying the topics to be researched is a policy decision. However, once a research topic is defined, IES takes over, refining the question, determining the study design, conducting the study, and releasing the findings, all independently.
Pooled Evaluation Authority
Robert Gordon, Senior Advisor to Secretary Duncan
Mr. Gordon said ED believes in the power of evaluation to improve practice and, ultimately, the lives of children. He hoped to learn more from experts and those in the field on how to maximize resources to benefit students. Mr. Gordon said both researchers and practitioners should do a better job of communicating and translating information. Some efforts to address the barriers include the Investing in Innovation Fund (i3), in which funding varies depending on the quality of evidence to support the initial proposal. It is hoped that such efforts are creating a pipeline for the future.
Pooled evaluation authority offers another opportunity to learn. Until recently, programs had a fixed percentage of funds set aside for evaluation—sometimes insufficient, sometimes excessive. In 2014, Congress allowed ED to pool program funds to evaluate any ESEA program. An Evidence Planning Group was created to assess evaluation needs, identify priorities, and develop an agenda. Mr. Gordon said ED expects to continue the process in 2015. Reauthorization of ESRA could provide even more funding.
In general, ED seeks to support a handful of evaluations that will have a substantial impact on the ground, addressing the most pressing practical questions facing educators from pre-K through twelfth grade. This year, ED aims to create an open, inclusive process for selecting evaluation projects that engages the public. It also hopes to demonstrate the power of well-designed evaluations to contribute to public education.
Through its blog post, ED is seeking input about what should be evaluated and whether the stated goals and process are appropriate. It is also asking the public to submit specific ideas for evaluation. Mr. Gordon welcomed input from the Board.
Dr. Bryk questioned the validity of the concept that evaluation can have a "game-changing" effect on education. The federal government does not run local schools, and Dr. Bryk could not think of any evaluation efforts that had a strong impact. Rather, researchers should ask what works for whom and in what circumstances.
Dr. Granger said IES could undertake many evaluations around variations or changes in policy issues, similar to welfare reform efforts that led to reorienting welfare to make work pay. Mr. Gordon said he usually thinks about evaluating policies, but he could envision looking at different interventions.
Dr. Granger said evaluations are more likely to succeed when they assess an intervention that can be reliably implemented and that is noticeably different from the status quo. Interventions such as coaching for better performance are difficult to measure, because it is never entirely clear how or whether the interventions were implemented. He asked what sort of studies would likely be useful under pooled evaluation authority. Mr. Gordon responded that many interventions are open to evaluation, such as assessing which pre-K curricula contribute to strong performance and success. Evaluations can be designed to be sensitive to concerns about fidelity of implementation, when needed.
Dr. Chard asked whether the intent was to get away from a blanket dissemination type of evaluation and move toward targeted evaluation. Mr. Gordon said he is leaning toward targeted evaluations at the moment but is gathering input from others. Dr. Loeb pointed out that the capacity to test interventions exists; however, unless pooled evaluation authority provides more power to ensure that policies are implemented in meaningful ways, there is no way to test policies using RCTs.
Dr. Gamoran and Dr. Loeb both said that pooled evaluation authority provides a unique opportunity. That is, it allows ED to look broadly at the rollout of policies and their aftermath. Dr. Gamoran suggested moving from single-district to state-specific accountability to better understand how proximal and long-term outcomes at the student, teacher, and school level change as accountability changes. He pointed out that the effects of welfare reform became apparent after states were allowed to vary their practices. He added that states are implementing the same policies in different ways and on different timetables, so there is opportunity to compare outcomes across states.
Mr. Gordon asked about the federal role in such evaluations, and Dr. Gamoran responded that federal entities can coordinate data and facilitate access to data across states. In fact, Dr. Gamoran said, federal involvement could facilitate access to data that no individual research team has yet been able to achieve.
Dr. Bryk pointed out that research methods are tied to the research question. Welfare reform is an imperfect analogy, because it involves discrete interventions with government controls. When introducing a complex program into a complex system, it is expected that performance will vary. Also, RCTs describe average differences; they do not provide the kind of detailed findings that allow individual programs to improve.
Dr. Granger said that, as with welfare reform, the Secretary granted waivers for policy experimentation, but the waivers were not contingent on evaluation that would demonstrate reliable effects of the interventions proposed—in part because of lack of agreement about who would pay for the evaluation. He said that pooled evaluation authority seems to allow ED to provide funds for evaluation along with waivers for innovative programs.
Dr. Singer agreed that pooled evaluation authority is an opportunity to use resources to figure out how to evaluate policies likely to have a big impact. She hoped the evaluation would be paired with strategic questions that provide useful information. Dr. Granger said the Board should help figure out how IES can be an agent of this vision.
Dr. Yoshikawa pointed out that the evaluation of welfare reform failed to provide observations of interactions—that is, evidence about specific practices within interventions that made a difference. With technological advances, there is an opportunity to create a public data set that can be analyzed by various parties asking different questions. Such data could be tied to ongoing data collection efforts to improve sampling and allow researchers to mix and match data. Dr. Yoshikawa encouraged ED to think creatively about how to make evaluation data more useful.
Dr. Loeb noted that she was thinking about the broad goal of testing policy changes, but policies can have different goals in different contexts. She stressed the need for governance and for understanding who is making decisions and what constraints they face. In addition, as Dr. Yoshikawa said, the pooled evaluation authority provides an opportunity to develop data and knowledge. Ideally, ED will consider how pooled evaluation authority can address all these factors.
Dr. Granger asked for clarification about the proposed process. Mr. Gordon responded that ED is gathering input from the public, stakeholders, and experts. Ultimately, the process will enable ED to explain its goals clearly when it allocates evaluation funding. There will be some flexibility of funding across fiscal years, said Mr. Gordon. Dr. Neild stressed that IES and NCEE are already involved, conducting impact evaluations, for example. Dr. Granger said the Board can weigh in on setting priorities.
Dr. Granger noted that with welfare reform, states reported data as they implemented programs and learned from each other throughout the process instead of waiting for final results. That approach is important in setting priorities, he noted.
Dr. Neild said that the possibility of access to a large amount of funding allows ED to consider whether there is a big, compelling question that requires substantial evaluation funding to address. Dr. Chard said that building a public database would be an important use of funds, but Dr. Neild said such an effort is more aligned with basic research than evaluation. Dr. Chard said a database could identify more questions for evaluation.
Mr. Gordon pointed out that there is a perception that impact evaluation is a hostile, top-down imposition on people who are doing good work in difficult circumstances. He hoped evaluations could be better structured to emphasize improvement and learning opportunities. He welcomed suggestions. Dr. Granger said there may be lessons learned from the i3 program as the first impact evaluations are completed.
Dr. Gamoran refined his original suggestion, saying evaluation could focus on state responses to teacher quality regulations. Under No Child Left Behind, all states implemented the same provisions for improving quality—placing teachers with certain characteristics in classrooms. Problems arose because the correlation between those characteristics and improvement was weak. Now, each state is designing its own regulations for teacher quality (e.g., emphasizing student test scores or providing more resources for teacher professional development). Because of the variation across states and the variation in timing of changes since No Child Left Behind and the subsequent waiver system, there is an opportunity to evaluate the differences in programs. Furthermore, said Dr. Gamoran, additional waivers could be provided to enable states to test other approaches in exchange for conducting RCTs to evaluate those approaches.
In closing, Mr. Gordon encouraged Board members and others to share ED's blog inviting comment on the pooled evaluation authority approach. He also invited Board members to contact him or Dr. Neild directly.
Sue Betka, Acting IES Director, and David Chard, Ph.D., NBES Chair
Dr. Chard asked for feedback and suggestions on the content and format of the agenda. Dr. Singer suggested allotting more time for discussion following presentations.
Dr. Gamoran felt the Board should make more time to address challenges facing IES. For example, given federal budget cuts, the Board should discuss how it can convey that the need for education research is far greater than the resources allotted. In addition, Dr. Gamoran said the Board should talk more in depth at the next meeting about the criteria and process for selecting the next IES director. The position of NBES executive director also remains unfilled, and it is not clear where that issue stands. Finally, Dr. Gamoran advocated for more discussion about the relationship between IES and other ED divisions and beyond ED to other agencies, including those that have NBES representatives.
Dr. Singer suggested a discussion session with members of advisory boards from other agencies (e.g., NIH and the National Science Foundation) to learn how they function. One topic overlooked at this meeting is the fact that the number of grant applications has declined, which is problematic for an agency that is seeking more funding. The downward trend in applications is worrisome, said Dr. Singer, and it is not clear who applies for funding. Dr. Singer also suggested a conversation about the criteria in RFAs for disseminating findings to determine whether new approaches are needed. As presenters demonstrated, long-term follow-up can reveal helpful information. For dissemination, Dr. Bryk suggested looking at a new methodology in health science known as rapid learning, designed to combat the slow translation of findings into practice.
Dr. Yoshikawa suggested that one person take on the role of synthesizing the presentations from a given Board session as a basis for a targeted briefing for policymakers or decisionmakers. He said the Board should think more about its role in synthesizing information for the larger community. Also, the Board should work more closely with other agencies, such as the Department of Health and Human Services, to find links between research and implications in the field. Dr. Loeb said the Board could begin with a session devoted to the topic of synthesis, and Dr. Yoshikawa agreed.
Dr. Hedges pointed out that NCES is the third largest government agency collecting statistical data, but it is mostly ignored and has gone a long time without a permanent commissioner. He asked what the Board can do to strengthen the position of NCES. Honest numbers are in everyone's interest, he said. Current legislation would weaken the independent status of NCES, Dr. Hedges noted.
Dr. Chard said Board members are still encouraged individually to submit suggestions for a new IES director, but the Board's idea to create a task force (raised at the June meeting) fell outside of federal guidelines for an advisory body. Ms. Betka said Board members can write directly to the Secretary or pass suggestions on to her. An audience member noted that recommendations cannot be submitted on behalf of the Board as a whole, but individual members of the Board can jointly make recommendations in a letter.
Dr. Granger said the Board should talk about the skills and criteria it believes are needed for the IES director position. That discussion began at the June meeting but was cut short; Dr. Singer suggested looking at the minutes from June as a starting point for a discussion at the next Board meeting. Dr. Granger worried that the White House already has some candidates, and the next Board meeting (tentatively scheduled for February 2015) would be too late.
Dr. Chard will draft a document describing the criteria for IES leadership mentioned at the June 2014 Board meeting and circulate it among Board members. Dr. Chard will talk with IES staff about setting up a special meeting to discuss the criteria.
Dr. Chard asked why there were fewer grant applications for 2015. Dr. Brock said some applicants may have been discouraged by the budget situation, which for the past two years has meant that some applications were highly rated but not funded. Dr. Miller said NIH also had some policy changes that affected the number of applications in some areas.
Dr. Miller will seek NIH data on recent policy changes that affected the number of applications submitted. He will share his findings with the Board as part of further discussion about the declining number of applications.
Dr. Gamoran said the deadlines for grant applications are too close together. Some institutions lack the capacity to prepare multiple applications at once. Dr. Gamoran suggested NCER and NCSER assess their data to see whether the deadlines may have affected submissions. In response to Dr. Granger, Dr. McLaughlin and Dr. Brock said that they did not think the proportion of applications that qualify for funding has changed. A change in "fundable" applications would represent a significant problem, Dr. Granger said. Dr. Brock reminded the Board that in the past two years, there has not been enough money to fund all the proposals that are rated Outstanding or Excellent. With resubmissions, the number of proposals with fundable scores may increase, he added.
Dr. Gamoran suggested further discussion about the number of applications at a future Board meeting. Dr. Singer hoped NCER and NCSER would provide data as well as other information to help understand the situation. Dr. McLaughlin added that some people may have been discouraged from applying when NCSER warned that its funding might still be very limited.
Dr. Chard adjourned the meeting at 3:48 p.m.
Report prepared for NBES by Dana Trevas, Shea & Trevas, Inc.
The National Board for Education Sciences is a Federal advisory committee chartered by Congress, operating under the Federal Advisory Committee Act (FACA; 5 U.S.C., App. 2). The Board provides advice to the Director on the policies of the Institute of Education Sciences. The findings and recommendations of the Board do not represent the views of the Agency, and this document does not represent information approved or disseminated by the Department of Education.