National Board for Education Sciences
January 13–14, 2009 Minutes of Meeting

Location:
Institute of Education Sciences Board Room
80 F Street NW
Washington, DC

January 13th

Board Members Present:
Dr. Eric Hanushek, Chairman
Mr. Jonathan Baron, Vice Chairman
Dr. David Geary [by teleconference]
Mr. Philip Handy
Dr. Sally Shaywitz

IES Staff Present:
Ms. Sue Betka, Acting Director
Dr. Phoebe Cottingham, Ex Officio, Commissioner, NCEE
Ms. Norma Garza, Executive Director, NBES
Dr. Dean Gerdeman, Research Scientist, NCEE
Dr. Stuart Kerachsky, Ex Officio, Acting Commissioner, NCES
Ms. Mary Grace Lucier, Designated Federal Official
Ms. Ellie McCutcheon
Dr. Lynn Okagaki, Ex Officio, Commissioner, NCER, Acting Commissioner, NCSER
Dr. Audrey Pendleton
Dr. Anne Ricciuti, Deputy Director for Science, IES
Dr. Marsha Silverberg

Other Federal Staff Present:
Dr. Duane Alexander, Director, National Institute of Child Health and Human Development
Ms. Dixie Sommers, Delegate, Bureau of Labor Statistics
Dr. Robert Kominski, Ex Officio, U.S. Census Bureau

Also Present:
Dr. Michael Garet, American Institutes for Research
Ms. Sarah Mandell, Society for Research in Child Development
Dr. Paco Martorell, RAND
Dr. Isaac McFarlin, University of Texas-Dallas
Dr. Nicole McNeil, Department of Psychology, University of Notre Dame
Mr. Jerry Sroufe, American Educational Research Association

MORNING SESSION

10:00 – 10:15 a.m. Call to Order, Approval of Agenda, Chair Remarks, and Remarks of Executive Director

Dr. Eric Hanushek, Chairman, National Board for Education Sciences (NBES)
Ms. Norma Garza, Executive Director, NBES

Chairman Hanushek called the meeting to order at 10:00 a.m. Upon roll call and approval of the agenda, Dr. Hanushek confirmed the presence of a quorum to conduct the meeting.

He initiated the proceedings with a review of the Board's membership status. The NBES currently consists of six members who have been appointed by the President and confirmed by the U.S. Senate; several more nominations were submitted in August 2008, but they expired at the end of the last Congress. Any immediate increase in the membership will depend on the pace at which the White House nominates and the Senate confirms new members.

Dr. Hanushek then introduced Ms. Sue Betka, Acting Director of the Institute of Education Sciences (IES), who assumed this position upon the departure of the former Director, Dr. Grover Whitehurst. Ms. Garza informed the Board of the dates for two regularly scheduled meetings, which are to take place on May 20–21 (changed from May 19–20) and September 9–10, 2009.

Visitor introductions followed.

10:15 – 10:30 a.m. Update on Institute of Education Sciences
Sue Betka, Acting Director, IES

Ms. Betka said that since the Board's last meeting, Dr. Mark Schneider had resigned, and Dr. Stuart Kerachsky became the Acting Commissioner of the National Center for Education Statistics. Dr. Whitehurst's term as IES Director ended on November 21, 2008. The terms of a number of Board members ended on November 28, and none of the nominations made by the Bush Administration were acted on by the Senate.

She suggested that Board members might want to review Dr. Whitehurst's third and final biennial report, which he completed prior to the expiration of his term. The report contains his reflections and recommendations on the functioning of the Institute. In addition, Ms. Betka said that IES had released numerous reports, including two that received wide attention, the Trends in International Mathematics and Science Study (TIMSS) report, from NCES, and the Reading First evaluation final report, produced by NCEE.

Work is continuing on grant competitions for 2009. The second round of research applications will be reviewed in February 2009.

Funding is still not final. The continuing resolution, which authorizes spending at the prior fiscal year level, allows an expenditure level of 43 percent; this allocation expires on March 3, 2009. Recent attention has been focused on approval of the national "stimulus package."

As reported by former U.S. Department of Education (ED) Secretary Margaret Spellings, transition activities have been very smooth. Transition materials were prepared, and most IES managerial staff have met and had conversations with agency review team members who contributed to transition team reports. Confirmation hearings for the new ED Secretary [Mr. Arne Duncan] are underway as the Board meets; he is expected to be confirmed by January 20, 2009.

10:30 a.m. – 12:00 p.m. Update on IES Center Activity—IES Commissioners and Staff
Dr. Phoebe Cottingham, Commissioner, NCEE

Dr. Cottingham informed the panel and visitors that NCEE is the division within IES that conducts evaluations of federal programs, both those that have been congressionally mandated and others that are requested by program offices and funded by monies transferred from program offices to the Center. None of the studies under IES's Evaluation Division receives funds through the IES general fund; funding for these reports is derived from ED sources, either as congressionally mandated evaluations or as evaluations of interest to program offices.

NCEE's other major division is Knowledge Utilization, which sponsors the Education Resources Information Center (ERIC), the Regional Educational Laboratory Program (REL), and the What Works Clearinghouse (WWC). Dr. Cottingham said that her report would focus on the Evaluation Division and four primary issues that have emerged from mid-stream assessment of NCEE evaluations.

Dr. Cottingham referred the Board to a series of handouts beginning with Table 1, an overview of five evaluation reports released since September 1, 2008—Professional Development in Early Reading, Literacy in Even Start, Teacher Induction, Reading First, and Adolescent Literacy. Thirteen additional reports are anticipated in 2009.

Table 2, Summary: Findings From Released NCEE Impact Studies, portrays NCEE evaluations of Reading First, Early Reading First, Even Start, and interventions or strategies that have been highly pertinent to current school improvement and reform interests. These include special interventions to help students in low-performing schools, and examinations of popular curricula or strategies to improve teacher quality.

Table 3, Experts Bring Evidence to Practitioners, presents NCEE dissemination events featuring IES experts. Table 4, WWC Reports, illustrates the release of 18 reports over the past 4 months, culminating in the Procedures and Standards Handbook as well as Using the WWC: A Practitioner's Guide, and efforts to broaden attention to systematic reviews of evidence.

Dr. Cottingham summarized the four primary issues and related questions covered in her memo:

Issue 1—Interventions in NCEE studies that focused on improving teacher practices (as advocated by experts in professional development in reading or in intensive induction services for new teachers) did not produce evidence of student achievement or learning gains resulting from these strategies. The studies did detect that desired changes in teaching practices took place. The changes were in the right direction, but did not produce student impacts.

Dr. Cottingham referred to the Reading First and Professional Development in Early Reading Instruction studies, the second of which would be discussed later in the meeting agenda, as well as the Comprehensive Teacher Induction Study. Results from the latter two studies indicate that teachers who received intensive coaching or mentoring had no greater impact on student achievement than teachers in the other treatment group, who received professional development without individual attention, or than teachers in the control groups.

Issue 2—Reliance on "effect sizes" for designing studies and reporting findings. Aside from standardized national tests and general outcomes, such as high school graduation, education research lacks a solid set of outcome measures, so most outcomes in education research are presented in effect size metrics. The problem is that this obscures underlying differences in the "value" of outcomes: a sizeable effect size on one outcome does not necessarily have the same importance as the same effect size on another. Whether an effect size is socially relevant should be justified.
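For concreteness, the following is a minimal Python sketch of one common effect size metric, Hedges' g (a standardized mean difference with a small-sample correction); the scores below are hypothetical. The issue raised above is that an identical numeric effect size on two different outcomes need not carry the same educational or social value.

```python
# Minimal sketch: computing a standardized effect size (Hedges' g).
# All values are hypothetical, for illustration only.
import math

def hedges_g(treat, control):
    """Standardized mean difference with a small-sample correction."""
    n1, n2 = len(treat), len(control)
    m1, m2 = sum(treat) / n1, sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treat) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    return d * (1 - 3 / (4 * (n1 + n2) - 9))  # small-sample correction

# The same g on a reading-comprehension test and on a brief word-list quiz
# would be reported identically, even though their "value" may differ.
print(round(hedges_g([52, 55, 49, 61, 58], [50, 53, 47, 57, 54]), 2))
```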

Issue 3—The extent to which NCEE should invest, in evaluation studies, in developing program implementation measures and collecting data for analysis needs attention. Often such measures are unique to a particular intervention's attributes and are not replicable or transferable to other interventions. Aside from assuring fidelity of implementation to a developer's protocol, such measures may have little predictive relationship to the "true" outcomes measured in the experiment.

Issue 4—Determining how long to track outcomes from interventions in NCEE studies. Some interventions last for one year; some extend over two or more years. The question is how long after an intervention is completed studies should collect outcome data, either to see whether effects are sustained or to detect late-emerging effects in cases where the first set of outcomes is null. There are significant budgetary impacts, and following samples may or may not be possible. NCEE is currently fielding studies with both short- and longer-term outcomes that will help illuminate the options.

There was extended discussion of Issue 1. Dr. Cottingham commented on the "disconnect" between what are considered good professional practices and the lack of evidence of true effects on student outcomes. This is an issue of tremendous significance for education researchers.

Dr. Hanushek questioned whether research strategies are being formulated that can move the field to the next level.

Dr. Cottingham responded that the mid-stream evaluation team felt strongly that IES had developed a solid set of platforms that create a natural transition between the Research Grant Program and Evaluation Center activities. The released studies followed the recommendations of experts whose work predated IES; their methods were limited to very small-scale trials, and there tended to be expert "group think" that may have had little evidence in support. She cautioned against a too-hasty dismissal of prior research, and recommended moving ahead with large-scale trials of the most popular interventions.

Mr. Baron suggested the need for fresh thinking about interventions that go beyond the conventional wisdom and focus on things that are very different and might produce sizable effects. Dr. Cottingham agreed, but commented on the difficulty of independent initiatives without reference to other stakeholders within ED. Going forward, she suggested greater reliance on the WWC standards of evidence in deciding whether large-scale testing of an idea is warranted.

Dr. Okagaki commented that although most of the teacher professional development projects evidenced positive effects only for teacher practice and not for student outcomes, a few notable exceptions—primarily coming from the Reading and Writing Research Program—did result in positive gains in both areas, as a consequence of identifying specific strategies for teachers. When these techniques are implemented in the classroom, they have positive impacts on student outcomes as well. She agreed, however, that relying on "conventional wisdom" about what is required to improve teacher effectiveness is no longer practical.

Dr. Cottingham noted that stakeholders in professional development express strong criticisms of the recent studies because the conclusions challenge their assumptions. Others, however, expressed very different views at recent meetings on the federal studies of professional development, seeing the findings as confirming their perspective. In her view, the major problem is not the quality of the studies; rather, it is the challenge of winning real respect for, and understanding of, what is good science in education, especially among those who have to make decisions about programs such as professional development.

She further remarked that education stakeholders often claim that the intervention in an NCEE study was not "fully" implemented. In response, NCEE notes that effectiveness studies — of interventions as they are implemented on average — are expected to have underlying differences in "dosage," but that study designs do not permit judgments about dosage or about how program implementation affects outcomes. NCEE studies find significant dispersion in both outcomes and implementation across sites, but exploratory correlational analysis does not support conclusions about dosage "effects": implementation differences are natural variation, not controlled dosage evidence.

Dr. Cottingham elaborated on the remaining three major issues of her report, continuing on to the second issue—the reliance on effect sizes as outcome measures. An effect size on one outcome may not have the same meaning in terms of social and educational relevance as an effect size on another outcome. There is a desire among education professionals to create alternative metrics, without eliminating effect size measurements, to help policymakers understand the significance of particular interventions. Dr. Cottingham noted that there are significant effect sizes in learning growth through second grade, but that the trend diminishes in later grades.

Her third issue covered the question of whether sufficient value is being gleaned from investing substantial resources in measuring the practices associated with particular interventions, and whether these measures are transferable, particularly when researchers are using study-specific measures for which parallels do not exist.

Finally, Dr. Cottingham covered a fourth issue, the longevity of studies. Short-term, one-year studies cannot detect late-emerging effects. The recent studies in professional development for teachers did conduct follow-ups to see whether one year of training had more lasting effects on teacher practice, but it did not. She suggested the advisability of conducting such longer-term studies, provided research funding can accommodate them, to learn more.

Dr. Hanushek invited comments from David Geary, who participated via web teleconferencing. Dr. Geary expressed his agreement with Mr. Baron about the equal importance of negative and positive results in determining funding priorities.

Dr. Hanushek then raised the question of the feasibility of NCEE's having input into the development of projects that are determined by the priorities of the program offices or by Congressional mandates external to IES. Dr. Cottingham responded that it is essential that rigorous studies be used early to test new initiatives being proposed before a nationwide program is created. Dr. Hanushek replied that the Board and IES have a responsibility to encourage program offices to continue or consider regular evaluation.

Dr. Lynn Okagaki, Ex Officio, Commissioner, National Center for Education Research (NCER)

Dr. Okagaki started by mentioning that Dr. Nicole McNeil of the University of Notre Dame, a meeting participant, is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE). She also informed the Board of the advanced studies seminars sponsored by the National Center for Special Education Research.

She then addressed the issue of relevance, suggesting that in addition to basic questions about which curricula and instructional practices improve student outcomes in reading, writing, mathematics, and science, there are other issues that are important to districts and states. Although individual research projects, as well as NCER's research program on the evaluation of state and local education programs and policies, include partnerships with state or local education agencies, NCER is interested in obtaining input from states and districts on the types of research questions they would like researchers to address.

Mr. Handy recommended reaching out to the superintendents of the six states that have the most well-developed data elements with regard to teacher-student correlations, student identifiers, work, and other criteria.

In response to Dr. Alexander's question about whether solicitations include designs for more specific rather than primarily general research protocols, Dr. Okagaki said that the funding announcements tend to be more broadly framed, although methodological designs may be more detailed than National Science Foundation (NSF) or National Institutes of Health solicitations, for example. Some of the Research & Development (R&D) Center topics, however, are designed to tackle very specific education challenges.

Dr. Alexander commented that although Program Announcements (PAs) also tend to address broader topic areas and outnumber Requests for Applications (RFAs), the RFAs, used very selectively, do invite competition among applicants who wish to address more specific problems. He also said that the National Institute of Child Health and Human Development puts out both PAs and RFAs.

Dr. Stuart Kerachsky, Acting Commissioner, National Center for Education Statistics (NCES)

Dr. Kerachsky started by expressing his concern about "creeping" misuse of effect size statistics as a tool, and his hope that this trend would be reversed. He then informed the Board that he would bring three primary issues to their attention.

First, Dr. Kerachsky reported on the release of the 2007 Mathematics Assessment for Puerto Rico, which is sponsored by the National Assessment of Educational Progress (NAEP). Puerto Rico receives Title I funding. The issue, which has become political, is that assessment results are significantly worse for students in Puerto Rico, in spite of efforts to provide a Spanish-language version. The Puerto Rico Secretary of Education repudiated the NAEP's validity for Puerto Rico and requested permanent exemption from it. His request that NCES refrain from presenting the results there was honored, although results were reported to the media. The new Governor of Puerto Rico, however, believes that the assessment is meaningful for the residents of the island.

Beyond this situation, calibration problems remain with the assessment instrument, and the decision has been made not to administer the NAEP in Puerto Rico in 2009, but rather to perform a set of diagnostics and get back on track for the next assessment cycle. The problem has underscored the role of politics in education policymaking.

The second issue raised by Dr. Kerachsky is the attempt to pilot a national 12th grade NAEP assessment. Thus far, only 11 states have expressed interest in participating, which is the minimum necessary to go forward. Interest appeared to be greater during the period that NAEP was a voluntary rather than a mandatory assessment. One challenge is the relative lack of motivation among students to take the time for the test.

The National Assessment Governing Board (NAGB) is interested in performing preparedness assessments; it would like to evaluate whether the nation's students are academically prepared rather than emotionally ready to go to the next educational level. However, states are somewhat "leery" and are currently lacking the motivation to proceed in this direction. Discussions among state personnel—teachers and superintendents—tend to center around concerns about national curricula and expectations for testing of achievement levels. NAGB and NCES have not taken sides in these discussions, but realize that if testing is to have any value at this level, it may need to derive from a system that is not currently functioning within the United States.

Dr. Hanushek raised the question of the population of students who are no longer in school by 12th grade—approximately 25 percent. Dr. Kerachsky reflected that the NAEP is intended primarily to measure how successful school systems are, rather than to gauge what the broader population of school-age students knows. Dr. Hanushek raised the possibility of the NAEP assessing the 10 percent of students who receive GED credentials in an attempt to measure relative levels of achievement.

The third issue Dr. Kerachsky raised concerns international assessments—the need to determine how students in the United States are doing relative to their counterparts in other countries. The 2007 TIMSS is similar to the NAEP approach but is applied internationally; comparable tests are sponsored by the Organisation for Economic Cooperation and Development (OECD). PISA (Programme for International Student Assessment) is the most notable of the OECD tests, and it focuses on science, math, and reading literacy from a life-skills basis; it is given at age 15, the end of mandatory schooling in many countries.

These two tests, along with the PIRLS (Progress in International Reading Literacy Study), reveal that in comparison to advanced Asian economies, Britain, the Russian Federation, and Estonia, the United States does not score well. Although views tend to vary about the meaning and value of the tests, the "bottom line" is determining what there is to learn from these assessments, how to obtain this information, and evaluating how to go forward. NCES is in a position to provide leadership in determining education policy that will improve the international standing of the United States, and Dr. Kerachsky invited the guidance of the Board in this regard.

Discussion ensued about the relative merits of these international testing formats and claims made by PISA and TIMSS, as well as PIAAC (Programme for the International Assessment of Adult Competencies), to be administered in 2011. Dr. Geary raised the question in this context about the meaning of "real world" content with respect to the PISA. Dr. Kerachsky responded that the phrase refers to an accumulation of knowledge—what students are able to integrate as a function of experience gained outside of academic environments—and that its importance educationally should be acknowledged.

Responding to Dr. Geary's follow-up question about what international assessments actually predict, Dr. Kerachsky said that this requires further study, particularly in light of consistent underperformance in the United States relative to Hong Kong, for example. Dr. Hanushek agreed that scores are definitely predictive of performance in school, of continuation into college, and of economic performance at the national level.

12:10 – 1:40 p.m. Lunch Break

AFTERNOON SESSION

The afternoon session began with the introduction by Dr. Okagaki of Dr. Nicole McNeil, assistant professor of psychology at the University of Notre Dame and winner of the PECASE.

1:30 – 2:15 p.m. Presentation of Recently Released IES Studies—Arithmetic Practice That Promotes Conceptual Understanding and Computational Fluency
Dr. Nicole McNeil, Assistant Professor, Department of Psychology, University of Notre Dame

Dr. McNeil explained that her research focuses on the development of mathematical thinking and symbolic understanding, especially misconceptions that children exhibit on some seemingly straightforward math problems, particularly those with operations on both sides of the equal sign, or "mathematical equivalence" problems (for example, 3 + 4 + 5 = 3 + __).

These well-defined problems can be used to test general hypotheses about mechanisms of cognitive change. Results can then be applied to investigate the transition between one knowledge state and another, or the role of gesture in the thinking and learning process, which in turn can provide insight into elementary school-aged children's early algebraic understanding and the reasons for serious misconceptions in solving these math problems. Studies reveal that between 75 and 85 percent of U.S. children between 7 and 11 years of age cannot solve such math problems.

In essence, Dr. McNeil's findings have indicated that children's understanding of the equal sign predicts and facilitates the transition to algebraic problem solving. She concluded her presentation by reflecting on implications of these findings, primarily: (1) the importance of focusing on what children have rather than exclusively on what they lack; (2) certain types of practice with basic sub-skills do not necessarily improve performance; (3) the findings create an impetus for conceptual reorganization in approaches to teaching arithmetic in elementary school education; and (4) early math teaching should include variation in the types of math problems that children encounter.

2:15 – 3:00 p.m. Presentation of Recently Released IES Studies—The Effects of College Remediation on Students' Academic and Labor Market Outcomes
Dr. Isaac McFarlin, Principal Investigator, University of Texas-Dallas
Dr. Paco Martorell, Principal Investigator, RAND

Dr. McFarlin began his presentation by commenting on the increased interest in college remediation among researchers and in student success studies over the past decade, particularly given steady increases in college attendance rates since 1965.

The outcomes studied were attainment of a college degree and earnings; additional analyses were performed on credits attempted in the first year and on transfer behaviors. The estimates were based on a regression discontinuity design that compared students just above and just below the cut-point for being required to take remediation courses. The researchers found that being in remediation decreases the likelihood of graduation by 8.2 percentage points for those who initially entered a 2-year college; these estimates are likely biased downward.
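To illustrate the comparison logic of a regression discontinuity design in miniature, here is a Python sketch on simulated data; the cutoff, bandwidth, and outcome model are hypothetical, and the study's actual estimation on administrative records was considerably more sophisticated.

```python
# Minimal sketch of a sharp regression discontinuity: students scoring
# below `cutoff` on a placement test are assigned to remediation, and we
# compare graduation rates just below vs. just above the cutoff.
# All data are simulated; this is not the authors' estimator.
import numpy as np

rng = np.random.default_rng(0)
score = rng.uniform(0, 100, 20_000)      # placement test (running variable)
cutoff = 60.0
remediated = score < cutoff              # sharp assignment rule
# Simulated graduation probability, with a negative jump for remediation:
p_grad = 0.30 + 0.004 * score - 0.08 * remediated
graduated = rng.uniform(size=score.size) < p_grad

bandwidth = 5.0                          # keep only students near the cutoff
near = np.abs(score - cutoff) < bandwidth
below = graduated[near & remediated].mean()
above = graduated[near & ~remediated].mean()
print(f"estimated effect of remediation: {below - above:+.3f}")  # ~ -0.08
```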

3:15 – 4:45 p.m. Presentation of Recently Released IES Studies—The Impact of Professional Development Models and Strategies on Teacher Practice and Student Achievement in Early Reading
Dr. Michael Garet, Principal Investigator, American Institutes for Research (AIR)

Dr. Marsha Silverberg, NCEE project officer for this study, informed the Board that the focus of this report, published in September 2008, is an examination of professional development in reading, which was viewed in 2003, at the study's inception, as an important strategy for improving reading achievement in elementary school children. The study examined this set of interventions with a rigorous school-level random-assignment design (90 schools).

Dr. Garet described the study, which tested two intensive inservice professional development strategies against a control group. He noted that professional development relies on "craft knowledge" and some correlational research, but that few researchers have conducted rigorous impact studies examining effects on either teacher or student outcomes. The prevailing belief is that such professional development needs to be sustained and intensive, which is rare, and that it should focus on the content teachers are supposed to teach and on how to teach it. This study tested best practices in second grade reading; the research team sought to examine the impact on student achievement, as well as on teacher knowledge and instructional practice. The first treatment (Treatment A) was a content-focused 3-day summer institute plus five 1-day seminars during the school year, following Louisa Moats' Language Essentials for Teachers of Reading and Spelling, a well-known intervention. The second treatment (Treatment B) added in-school coaching by coaches selected and hired by the districts and trained by the Consortium on Reading Excellence (CORE). The professional development was delivered as planned, with very high attendance at the sessions.

Both treatments resulted in a statistically significant improvement in teachers' word-level knowledge; but the results for the meaning-level knowledge were not large enough to be statistically significant.

The study also gathered measures of classroom instruction using observations and coding of the entire reading instructional block. Researchers then created three scales designed to measure three of the primary focus areas of the professional development: (1) explicit instruction; (2) independent student activity—students doing what the teachers have modeled; and (3) differentiated instruction—identifying individual student needs and tailoring appropriate instruction. Findings showed no differences between Treatments A and B in any of the three areas. Only the explicit instruction measure produced an effect (relative to the control group) that was statistically significant. Differences between the two treatments and the control group in explicit instruction at the end of the treatment year were not sustained at the end of the second year, during which no further treatment was available.

Drs. Hanushek and Garet discussed the plausibility of the hypothesis that teachers simply forget what they've learned in professional development settings and need to be continually reinforced. The study raises questions about why the intervention produced an impact on teacher knowledge and some practice outcomes, but not on student achievement. Another hypothesis is that the teacher knowledge and practice outcomes that were measured are not related to student achievements.

A final theory is that the student achievement measures were partial, not sufficiently accounting for phonics, phonemic awareness, fluency, and other focal areas of the research; the policy-relevant outcome in this scenario would be comprehension.

Dr. Louisa Moats has raised questions about the study's design and its focus on only 1 year of schooling (second grade), and she favors a more district-centered approach toward conducting professional development that includes principals, supervisors, and coaches as well.

Dr. Shaywitz suggested that while the intervention may not have been long or intense enough, fresher approaches to resolving these questions should be considered, as the same discussion is being repeated across other dimensions of education research.

In addition, Dr. Hanushek expressed concern about whether there was "bleed-over" between treatment and control groups. Dr. Garet replied that teachers in treatment groups participated in substantially more professional development, institutes, and seminars than teachers in control groups. The differences were particularly striking with regard to coaching, with the control group receiving 6 hours on average and the treatment group receiving 71 hours.

In response to a question from Mr. Baron about general responses to the study, Dr. Garet said that researchers feel "chastened," having anticipated more significant effects from what was considered a well-implemented study, and are thus tending to question the achievement measure. Dr. Geary added that the Math Panel reviews on professional development came to the same conclusion—that current programs were not effective in improving student achievement.

Dr. Cottingham commented on the "dead silence" that followed a presentation on the other NCEE study that also used coaches (the teacher induction study) and had similar findings at AIR's Scientific Evidence in Education Forum (SEE).

Dr. Hanushek underscored the fact that these current findings on professional development run counter to current policy discussions, which often assume that professional development is a global solution to the problem of improving student achievement. Dr. Cottingham added that policies concerning coaching have been very popular; however, some state policymakers tacitly concurred that the policy has no evidence to support it. Mr. Handy agreed, citing a similar study in Florida on National Certification that found no correlation between student achievement and professional development. Legislators are reluctant to admit the ineffectiveness of these programs, given that millions of dollars in state revenues are being allocated to support them.

Dr. Hanushek then raised the question of appropriate directions for IES and NBES in terms of following up on these findings. Dr. Garet replied that the report was designed to offer conclusions about impacts, but could not give answers about what to try next. Some exploratory analysis of professional development impacts could be helpful, and perhaps re-examination of the nature of the coaching process in terms of selection, training, coaching topics, and performance.

Dr. Shaywitz said that although it may not be within the Board's mandate, findings on professional development are profound and cannot be rationalized away; instead, they should be combined and disseminated to avoid propagating misconceptions about professional development among researchers, educators, and policymakers. She said that it is the responsibility of the NBES to provide guidance regarding next steps.

Mr. Baron agreed, urging IES to recommend that new studies include condensed executive summaries that persuasively and clearly explain what makes each study definitive. Dr. Silverberg said in response that IES is required by the peer-review process to include substantial amounts of information in executive summaries so that they can serve as stand-alone documents. Dr. Cottingham confirmed that this approach is also intended as a platform for cross-agency discussion. She also said that a one-page WWC review for policymakers about the essential findings of professional development studies could be helpful, although it would not in itself serve to bring attention to the study.

Dr. Shaywitz stressed the urgency of disseminating current findings, as this study and others, including the Reading First results, are pieces that are beginning to form a picture. She hoped that someone would combine all the findings and disseminate them widely. The findings cannot simply be rationalized away: there are too many, they are all consistent, and schools are using these professional development strategies. IES and NBES have a responsibility to determine what can be done as the next step.

Mr. Handy commented that as long as there is money available for professional development, practitioners will basically just follow the money. So alternatives are needed.

Dr. Shaywitz emphasized that while more research, and probably newer hypotheses, are always needed, she believed the priority should be the urgency of bringing out what has been learned. The Board, working with IES staff, needs to determine what can be done right now to carry out its responsibility of conveying the consistency of findings on professional development. To do nothing would be irresponsible. Dr. Cottingham noted that an issue to resolve is whether IES should try to do bigger and better professional development interventions at this point, or shift gears and try other things.

Dr. Hanushek reviewed three interlocking questions for the Board's consideration. First, should evaluation research continue to focus on the professional development strategy (do more, replicate, and so on) or go in other directions, and how do these studies inform the research program? Second, what else can IES do to disseminate and interpret the studies? And third, what should NBES and outside researchers contribute, especially along the lines of follow-on research designs, dissemination, and interpretation?

Mr. Handy reflected that the challenge of elevating education research and policy to achieve desired outcomes can be met to some extent through improved formats (i.e., executive summaries), but that stakeholders have been more reticent about the deeper issue of "marketing" in determining research priorities. He stressed that the Board needs to be bolder in taking risks or the study findings will remain hidden under a bushel; Dr. Shaywitz agreed, suggesting that the greater risk lies in not moving forward to deepen awareness among legislators, educators, and the general public.

Day 1 of the meeting was adjourned at 4:40 p.m.

January 14th

Board Members Present:
Dr. Eric Hanushek, Board Chairman
Mr. Jonathan Baron
Dr. David Geary [by teleconference]
Mr. Philip Handy
Dr. Sally Shaywitz

IES Staff Present:
Ms. Sue Betka
Dr. Phoebe Cottingham, Ex Officio
Ms. Norma Garza, Executive Director, NBES
Dr. Stuart Kerachsky, Ex Officio
Mary Grace Lucier, Designated Federal Official
Ellie McCutcheon
Dr. Lynn Okagaki, Ex Officio
Dr. Anne Ricciuti
Ms. Susan Sanchez, Education Evaluation Specialist, Office of the Commissioner, NCEE

Other Federal Staff Present:
Dr. Arden Bement, Director, National Science Foundation

Also Present:
Dr. Jill Constantine, Deputy Director, What Works Clearinghouse, and Principal Investigator, Beginning Reading; Senior Economist, Education Area Leader, and Associate Director of Research, Mathematica Policy Research, Inc.
Dr. Mark Dynarski, Director, What Works Clearinghouse; Vice President and Director, Center for Improving Research Evidence, Mathematica Policy Research, Inc.
Dr. Jeff Kling, Senior Fellow and Deputy Director, Economic Studies, Brookings Institution; Expert Panel Member, What Works Clearinghouse
Ms. La Tosha Lewis, Assistant Director for Government Relations, Consortium of Social Science Associations

MORNING SESSION

8:30 – 8:45 a.m. Review of Prior Day's Activities and Agenda
Dr. Eric Hanushek, Chairman, NBES

Dr. Hanushek convened the second day of the NBES meeting at 8:40 a.m., after which the roll call was taken and it was determined that a quorum was present to conduct the meeting. He initiated the day's proceedings by describing the WWC, which was created by IES to help provide reliable information to the wider world about what is known from education evaluation research.

Dr. Hanushek then introduced Dr. Jeff Kling, one of six experts who participated in an expert panel formed by former NBES chair Robert Granger and commissioned to review where the WWC stood in constructing an "evidence review process and reports that are scientifically valid—that is provide accurate information about the strength of evidence of meaningful effects of important educational outcomes". The panel had been chaired by Dr. David Card, an economist at the University of California, Berkeley. Drs. Mark Dynarski and Jill Constantine from the Clearinghouse's contractor, Mathematica Policy Research, Inc. (Mathematica), would later contribute their comments.

8:45 – 10:15 a.m. Next Steps for What Works Clearinghouse
Dr. Jeffrey Kling, Senior Fellow and Deputy Director, Economic Studies, Brookings Institution; and WWC Expert Panel Member

Dr. Kling began by referring Board members to the Report of the What Works Clearinghouse Expert Panel so that they could review the panel's recommendations. He cited the other WWC evaluators who worked on the report (Drs. Hendricks Brown, David Card, Kay Dickersin, Joel Greenhouse, and Julia Littell), who represent perspectives ranging from statistics to public health and meta-analysis.

The WWC evaluators were most concerned with methodological issues and with assessing the scientific and statistical validity and usefulness of the procedures used to make recommendations about studies. The first 10 pages of the report go through what the WWC does and conclude at every step that the WWC follows reasonable standards, has made reasonable choices, and produces output that is quite good. Dr. Kling said that questions about the focus of the WWC's mission centered on whether it should reflect more of a Food and Drug Administration model of looking at evidence for the efficacy of certain types of practices, or rather reflect more of a meta-analytical approach and focus on particular curricula, or "sets" of interventions that would cut across specific branded curricula.

Some panelists recommended the latter approach, but the panel decided that the mission question was outside the purpose of its work. Rather, the task was to look at the procedures and standards, manuals, and documentation in order to evaluate the existing mission of the WWC. Panelists devised a five-step process for evaluating how the WWC screens for and finds evidence, interprets that evidence, and comes up with a synthesis and statements about the evidence. Dr. David Card pulled the results from the five-step process into a final report.

Dr. Kling continued his presentation with a discussion of the recommendations found in the panel's report, enumerated below:

Recommendation 1: Full Review—The panel recommended that IES commission a full review of the WWC, including a review of the Clearinghouse's mission and of the WWC Practice Guides. It also recommended that IES consider instituting a regular review process to ensure that the WWC is using the most appropriate standards in its work.

Recommendation 2: Protocol Templates—The panel recommended that the WWC review and update protocol templates, focusing on the following issues:

  1. standards for crossover and assignment noncompliance, and for adjusting intention to treat effects across studies;
  2. standards for documenting the program received in the control arm of randomized control trials, and potentially incorporating this information in making comparisons across studies and/or interventions;
  3. revised standards for multiple comparisons (WWC might review the treatment of multiple comparisons in light of the recent research report by Peter Schochet, entitled Guidelines for Multiple Testing in Impact Evaluations);
  4. attrition standards (WWC should reconsider the current process of setting different attrition standards in different topic areas);
  5. potential conflicts of interest (WWC can establish a new protocol to keep track of potential conflicts of interest, such as cases where a study is funded or conducted by a program developer, and consider making that information available in its reports); and
  6. randomization (WWC should precisely define the standards for "randomization" in a multilevel setting).

Recommendation 3: Documentation of Search Process—The panel recommended that the WWC expand the protocol templates to specify more explicit documentation of the actual search process used in each topic area, and maintain a record of the search process results that can be used to guide decisionmaking on future search process modifications.

Recommendation 4: Reliability of Eligibility Screening—The panel recommended that the WWC conduct regular studies of the reliability of the eligibility screening process, using two independent screeners, and that the WWC use the results from these studies to refine the eligibility screening rules and screening practices.

Recommendation 5: Documentation of Screening Process—The panel recommended that WWC reports include a QUOROM-type flow chart documenting the flow of studies through each review and the number of studies excluded at each point, and a Table of Excluded Studies listing specific reasons for exclusion.

Recommendation 6: Misalignment Adjustment—The panel recommended that in cases where a study analysis is "misaligned," WWC staff request that study authors reanalyze their data correctly (taking into account the unit of randomization and clustering), and that the results from this process be compared to the simple ex-post adjustment procedure currently specified, to develop evidence on the validity of the latter.

Recommendation 7: Combining Evidence Across Multiple Studies—The panel recommended that the WWC reevaluate procedures for combining evidence across studies, with specific attention to the issue of how the rules for combining evidence can be optimally tuned, given the objectives of the WWC review process and the sample sizes in typical studies for a topic area.

Recommendation 8: Reporting—The panel recommended that the WWC's published reports on the website include the Topic Area Protocols, as well as more information on the screening process results that led to the set of eligible studies actually summarized in the Topic Area Reports. It also recommended that the WWC upload its Procedures and Standards Handbook, including appendixes, as well as all other relevant documents that establish and document its policies and procedures.

Recommendation 9: Practice Guides—The panel recommended that the Practice Guides, which contain material that does not meet the high standards of evidence for other WWC products, be clearly separated from the Topic Area and Intervention Reports.

Recommendation 10: Outreach and Collaboration With Other Organizations—The panel recommended that the WWC build and maintain relationships with national and international organizations focusing on systematic reviews, specifically with the goals of having Review Team leaders engaged in the broader scientific community and of bringing the latest standards and practices to the WWC. The panel also recommended that the WWC convene working groups with a mixture of researchers (including specialists in education research and systematic reviews) to address the development of new standards for the review and synthesis of studies.

Dr. Kling went over the recommendations, commenting on and explaining several of them. He said that some specific technical matters deserve more attention, such as how to deal with the fact that not all those assigned to an intervention end up receiving it; setting a basic standard of reporting for this would be useful, as it is a common phenomenon. A second matter is how to be more systematic about what happens in the control group of studies, beyond the "business as usual" description; most studies do not provide enough information to do anything retrospectively, but IES could try to push the field forward by bringing attention to the issue. Third is the issue of multiple comparisons: when a study examines many outcomes, some will likely look statistically significant by chance. Dr. Kling referenced a report on this problem from the NCEE working group on methodology as making recommendations that could be applied. Fourth, he noted attrition standards, on which the WWC has work under way. The panel also considered how to deal with potential conflicts of interest, such as studies done by developers of interventions; it concluded that excluding such studies would exclude much of the evidence, and recommended instead being more forthright about what the potential interests are. On randomization, whether scheduling software is good enough was recognized as an issue. Finally, Dr. Kling commented on statistical issues such as the scope of studies, the reliability of the screening process, and providing summary information on how systematically the WWC works through the exclusion of studies to arrive at those that are used. The misalignment and clustering adjustments may also be verifiable, as may the way the WWC combines evidence from multiple studies that differ greatly in measures and designs. He also recommended making available more of the details regarding procedures and standards; the WWC has already done so in a Handbook on the Web.
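As background on the multiple-comparisons point, the sketch below shows one standard correction, the Benjamini-Hochberg step-up procedure, applied to hypothetical p-values from ten outcomes of a single study. This is offered only as an illustration of the general problem; it is not necessarily the procedure recommended in the Schochet guidelines cited earlier.

```python
# Minimal sketch: with ten outcomes tested at alpha = .05, some may look
# significant by chance. Benjamini-Hochberg controls the false discovery
# rate across the family of tests. All p-values are hypothetical.

def benjamini_hochberg(p_values, alpha=0.05):
    """Return the indices of hypotheses rejected under FDR control."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank          # largest rank passing the step-up test
    return set(order[:k_max])

pvals = [0.003, 0.040, 0.210, 0.049, 0.620, 0.011, 0.330, 0.470, 0.850, 0.070]
print(benjamini_hochberg(pvals))  # only the strongest result(s) survive
```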

Dr. Hanushek noted that combining different studies, with different intensities and designs, is not a very well understood process; he believed nobody had a good answer, and that this was perhaps a basic research issue. Dr. Kling agreed.
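For context on what combining evidence conventionally involves, here is a minimal sketch of fixed-effect (inverse-variance) pooling with hypothetical effect sizes and standard errors; it is not presented as the WWC's procedure. It also illustrates the Board's concern: this weighting assumes the studies estimate a common effect, which is exactly what is in doubt when studies differ in intensity, measures, and design.

```python
# Minimal sketch: fixed-effect meta-analysis weights each study's effect
# estimate by the inverse of its variance. All numbers are hypothetical.
import math

studies = [   # (effect size, standard error) for three hypothetical studies
    (0.12, 0.05),
    (0.30, 0.15),
    (-0.05, 0.10),
]
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```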

Dr. Jill Constantine, Deputy Director, What Works Clearinghouse, and Principal Investigator, Beginning Reading; Senior Economist, Education Area Leader, and Associate Director of Research, Mathematica Policy Research, Inc.

Dr. Constantine began her presentation by highlighting changes since the panel's report and recommendations, which have largely focused on standards in some areas and on improving documentation. She referred to the 2007 Statement of Work for the contract awarded to Mathematica, which requires ongoing assessment of standards and procedures, including expanding and developing standards where they are needed. Mathematica is currently assessing standards for two types of study designs for which standards are not available: regression discontinuity designs and single-subject designs.

Standards are also being assessed for consistency on an ongoing basis in association with a Technical Work Group. This is in addition to recommendations made by the WWC evaluation panel.

Dr. Mark Dynarski, Director, What Works Clearinghouse; Vice President and Director, Center for Improving Research Evidence, Mathematica Policy Research, Inc.

Dr. Dynarski followed up on Dr. Constantine's comments by citing the WWC evaluation panel's work and the hundreds of hours that support many of the reports available on the Clearinghouse's website, as well as the efforts of the WWC to report as thoroughly as possible on unbiased education research. He said that the Procedures and Standards Handbook is intended to address and reduce any remaining areas of inconsistency across topics, and that the WWC is also focusing on differences in the structuring of math and reading research protocols.

Dr. Dynarski continued with a review of the evaluation panel's report, beginning with outreach and collaboration with external organizations that are also exploring promising research practices. Dr. Kay Dickersin of Johns Hopkins University, who was part of the WWC expert panel, was one of a number of attendees at a forum held in December 2008 who stressed the need to schedule such events more often in order to offset the "siloing" effect that can result from researchers focusing only within their own disciplines. A scientific home for these kinds of events would provide a more natural institutional structure for collaboration. An advisory group is being planned to institutionalize the type of expert support provided by the WWC panel, and it may focus on regularly bringing together experts on research methodology, synthesis, and dissemination.

Dr. Constantine provided additional detail on expanding the description of the literature searching process, which has been included in the Procedures and Standards Handbook. The WWC has followed through by putting processes in place to help maintain consistency among librarians and researchers performing initial reviews of education research.

Additional resources will be devoted to reliability tests and, when possible, to assigning two screeners to conduct initial searches, to facilitate the process of finding studies of effectiveness while weeding out bias and errors. An extensive documentation system is in place to monitor the entire review process from the moment a study is identified, and the WWC is exploring methods for reporting on it, possibly through the use of flow charts.

Dr. Hanushek then asked Drs. Dynarski and Constantine whether studies missed in the first phase of establishing the WWC have since come to light. Dr. Constantine replied that, given the volume of work initially found, the improvement of search engines, together with the current practice of adding the names of intervention studies to keyword searches, has resulted in locating the small percentage of studies that had not been initially discovered. She confirmed that as a result of these improvements in screening technologies, the WWC has not had to "reinvent" the entire screening approach.

In discussing differences in attrition and methods for establishing equivalence for study designs, Dr. Constantine referred the panel to the Procedures and Standards Handbook Appendix, which includes a conceptual diagram of how study designs are considered and evidence standards are reached by each type of design. She explained that the WWC technical team developed a theoretical model for how both overall and differential attrition could generate bias in randomized control trials, and that data from Mathematica models have been used to calibrate it. Ranges of attrition were diagramed to facilitate selection of an acceptable level, which is a function of both overall and differential attrition; thus, apparently separate sets of standards can be viewed as interactive.

Dr. Constantine described two situations in which a study would have to establish equivalence of the analysis sample: (1) quasi-experimental designs based on matching treatment and comparison groups, and (2) studies initiated as randomized control trials but with rates of attrition high enough to exceed the attrition standard. To offset variations introduced by principal investigator discretion and other variables, revised standards stipulate that the equivalence of the analysis sample must be established at baseline and a covariate must be included; if differences at baseline are larger than .25 standard deviations, then evidence standards have not been met.
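A minimal sketch of how the .25 baseline-equivalence rule described above might be checked follows; the pretest scores are hypothetical, and the function is illustrative rather than the WWC's actual procedure.

```python
# Minimal sketch: express the treatment-control baseline gap in pooled
# within-group standard deviation units and compare it to the .25 rule.
# All data are hypothetical.
import math
import statistics

def baseline_gap_sd(treat, control):
    """Baseline difference in pooled within-group standard deviation units."""
    n_t, n_c = len(treat), len(control)
    pooled_var = ((n_t - 1) * statistics.variance(treat)
                  + (n_c - 1) * statistics.variance(control)) / (n_t + n_c - 2)
    return (statistics.mean(treat) - statistics.mean(control)) / math.sqrt(pooled_var)

treat = [21.0, 24.5, 19.0, 26.0, 23.5]     # hypothetical pretest scores
control = [20.5, 23.0, 18.5, 25.0, 22.0]
gap = baseline_gap_sd(treat, control)
# Per the standard described above: a gap larger than .25 SD fails evidence
# standards; a smaller gap is acceptable with a baseline covariate included.
verdict = "fails standards" if abs(gap) > 0.25 else "acceptable with covariate"
print(f"baseline gap = {gap:+.2f} SD -> {verdict}")
```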

Mr. Handy commented that the emphasis on rigor in conducting education research was reassuring and that, going forward, the NBES should continue focusing on relevance and accessibility of studies to superintendents and policymakers, given the potential of the WWC initiative on a national level. Dr. Dynarski replied that the WWC seeks to improve accessibility for educators who are more attuned to conclusions rather than technical considerations in conducting research, and to this end, the Clearinghouse's website now includes a new feature, "What Works for Practitioners."

Mr. Baron said that the review was excellent in looking at the overall standards and processes. He commented that he supported the panel's recommendation of a more comprehensive review of how standards were applied to particular studies because, in his own review of a number of the underlying studies, he found cases where the review missed key flaws that violate the WWC's own standards. Mr. Baron cited a study of an early reading program that, he believed, violated the intent-to-treat principle; another study reported outcomes only for a subgroup of students formed after random assignment. He said that perhaps a third of the WWC reviews could be in error, based on the faults he found in studies that violated the WWC's own standards. He also urged the WWC to report on all intervention outcomes in a way that provides a more coherent overall picture of an intervention's effectiveness, and to recognize effects that are sustained over time as being more important than effects found immediately post-intervention whose persistence is unknown. Mr. Baron cited the Department of Labor's New Chance Demonstration as a study that the WWC found effective in terms of receipt of a GED, but for which it did not report the lack of effect on employment and earnings, welfare receipt, subsequent childbearing, or children's cognitive development, nor the negative effects found on parent-related stress and children's behavior problems.

In response to Mr. Baron's concerns, Dr. Constantine questioned the estimate that one-third of studies reported by the WWC may be flawed. She provided a review of the WWC's "extensive and very carefully done" quality-control processes, which begin with reconciliation of the views of the two researchers assigned to a given study; their report is then forwarded to Mathematica for an internal review. IES then conducts an external peer review and forwards comments back to the WWC. Results are then posted on the WWC website and are open to comments and questions, which include concerns about inadequate data or misapplication of WWC standards.

Any findings that are challenged are then subjected to review by an internal team that is independent of the review process. When changes are required, the WWC makes them and posts a revised report on the Web. However, it is not always possible to contact the original authors of a flawed study prior to the study's release. When a review error is found through the internal team's audit, the WWC updates the report. Dr. Constantine said that the WWC is confident that its error rate is very low, and that if and when an error is found, procedures are in place for fixing it.

Dr. Dynarski responded to Mr. Baron's concern about why the WWC does not give all outcomes equal reporting. He noted that review protocols are clear about which set of outcomes is of interest; to follow the systematic review procedures, reports focus on those outcomes and do not introduce study outcomes falling outside the protocol.

Dr. Constantine added that the WWC attempts to avoid weighing in on matters of subjective valuation in education research, such as which outcomes are more important. The review procedures therefore prevent picking and choosing what is important based on the value judgments of reviewers. The clarity and transparency of the protocols about which outcomes are of interest for a WWC review serve to control any inclination to vary what counts as important by study or intervention: all studies and interventions are treated the same, so no approach or intervention is systematically advantaged or disadvantaged.

On another topic, Dr. Geary raised the possibility of distinguishing the types of measures included in the reports for the benefit of teaching professionals and policymakers accessing the WWC website. Dr. Constantine replied that it could be difficult to clearly discern the tests and measures used in various studies without reading the entire report. Although the WWC provides a means for readers to sort and synthesize information for particular domains in tables available on the website, a more developed mechanism for sorting results into categories is still needed.

Dr. Dynarski added that in creating the tables, the WWC has been constrained by the need to maintain a consistent table structure across different areas of the Clearinghouse; however, the WWC should acknowledge that this consistency raises issues of interpretation and judgment.

Dr. Kling said that although the WWC evaluation panel did not conduct an in-depth review of many source materials, the probability of a one-third error rate in reviews of studies was very low, because the panel found no substantial quality problems in the 20 studies it reviewed for its report. (If one-third of reviews were in error, the chance that a sample of 20 would show no errors would be on the order of (2/3)^20, roughly 3 in 10,000.) He raised another issue, however: potential distortions built into the WWC protocols, which allow short-term assessment instruments to produce positive findings without examining any ultimate education outcomes.

In responding to Dr. Kling, Dr. Constantine acknowledged that the WWC grapples with this issue, but noted that in spite of researcher feedback urging longer study durations and less attention to studies of one-semester curricula or very short-term supplements or software, principals and superintendents do care about evidence on these kinds of interventions and do not welcome researchers or the WWC telling them what they should care about. A principal may control only certain funds, such as the budget for a computer lab, so studies of these options are perfectly useful to them. Educators are interested in all sorts of things.

Dr. Shaywitz expressed appreciation for the very helpful description of the WWC process and the panel's work, which gave her a much better understanding of both. With regard to Mr. Baron's concern, she suggested it would be helpful to note on the website that the review process looks only at predetermined primary outcomes, and that other outcome measures may exist that are not included.

Dr. Hanushek asked for comment on the concern that the New Chance review indiscriminately mixed together high school graduation and GED receipt. Dr. Dynarski said that the net effect was still that the rate of high school completion, the key outcome for the dropout review, did increase, owing to the higher GED rate, which was a protocol outcome.

Dr. Hanushek commented on the difficulty of striking a balance between formulaic approaches to presenting WWC material and more user-friendly approaches, but suggested including observations that would support a more thorough interpretation and understanding of relevant study outcomes.

Dr. Cottingham informed the Board that the WWC is organizing a comprehensive external advisory group. She also noted that the multiple comparisons report produced by the NCEE methods group was stimulated by the WWC's own efforts to apply multiple-comparison adjustments; there is interplay between the WWC and other NCEE advances on evaluation standards.
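
As background on multiple-comparison adjustments, the following is a minimal sketch of one widely used procedure, the Benjamini-Hochberg false discovery rate correction; it is offered purely as an illustration, since the minutes do not record which adjustment the WWC applies:

    # Illustrative sketch (not WWC code): Benjamini-Hochberg adjustment for
    # testing many outcomes at once. It controls the expected share of false
    # positives among rejected hypotheses (the false discovery rate) at level q.
    def benjamini_hochberg(p_values, q=0.05):
        """Return indices of hypotheses rejected at false discovery rate q."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])  # ascending p-values
        k_max = 0
        for rank, i in enumerate(order, start=1):
            # Find the largest rank k with p_(k) <= (k / m) * q.
            if p_values[i] <= rank / m * q:
                k_max = rank
        return [order[r] for r in range(k_max)]

    # Example: five outcome tests; only the two smallest p-values survive.
    print(benjamini_hochberg([0.001, 0.008, 0.04, 0.20, 0.60]))  # -> [0, 1]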

Dr. Arden Bement, Director of the National Science Foundation (NSF), then drew the panel's attention back to the issue of conflicts of interest and recommended that the protocol of the National Research Council be invoked, requiring an independent, objective review panel to ensure that study findings are not biased.

In response, Dr. Constantine said that the only way to determine a developer's role in a study and the subsequent report is whether the developer is listed as an author. The WWC has a formal process in place for contacting authors to this end; however, author/developers are not always forthcoming in this regard. Congress has wrestled with this issue. Not only would additional resources be required to address it adequately, but the length of the review process would also be affected. The WWC's interim resolution is to evaluate studies against its standards, letting the transparent application of those standards provide the safeguard, rather than automatically assuming unethical behavior on the part of developers.

Dr. Dynarski commented that the more serious conflict-of-interest issue is the number of studies that never come to light and thus cannot be examined by the WWC. Another flag is the underreporting of outcomes. He added that the WWC has been tracking numerous areas of medical research in which conflicts of interest are even more commonplace, but that no mechanisms are available to track this empirically.

Agreeing with this perspective, Dr. Kling suggested formulating a higher standard of evidence, or a "WWC Gold Seal of Approval" level, requiring researchers to preregister trials. Although the evaluation panel did not focus on this in its report, preregistration could help address issues related to conflicts of interest and would also constitute an incentive for developers. Dr. Cottingham said that this task is mandated in the WWC contract and that a trial registry would require details about the intended study sample as well as the analysis and outcomes.
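
To make the registry idea concrete, here is a minimal sketch of what a single preregistration record might capture; every field name and value is hypothetical, since the minutes specify only that a registry would cover the intended sample, analysis, and outcomes:

    # Hypothetical preregistration record; the schema is illustrative only
    # and is not an actual WWC registry format.
    preregistration = {
        "intervention": "Example early reading program",
        "design": "randomized controlled trial",
        "intended_sample": {"n_schools": 40, "grades": [1, 2]},  # planned, not achieved
        "primary_outcomes": ["reading comprehension score"],  # declared in advance
        "secondary_outcomes": ["attendance rate"],
        "analysis_plan": "intent-to-treat comparison of groups as assigned",
    }

Declaring the outcomes and analysis plan before data are collected is what would later allow a reviewer to detect selective reporting.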

Dr. Kling commented that the WWC evaluation panel was particularly concerned about the complicated problem of how to "collapse" and report on studies that have a large and diverse set of outcomes, a problem exacerbated when the topic area itself is large, such as dropout prevention. He said the problem falls into the general research question of how to combine findings, especially when the area of inquiry is itself very broad, and that the panel felt the WWC has done a reasonable job at a fundamentally difficult task.

Dr. Dynarski followed up by suggesting that in some areas the stock of WWC studies now supports statistical meta-analysis, which could answer questions about general strategies or classes of interventions. Once dozens of studies are available, a meta-analyst can look at other questions.
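
As an illustration of what such a synthesis involves, the following is a minimal sketch of a fixed-effect, inverse-variance meta-analysis; the studies and numbers are invented, and the WWC's own synthesis methods may differ:

    # Illustrative sketch (not a WWC tool): pool effect sizes from several
    # studies of a class of interventions using inverse-variance weights,
    # so that more precise studies count for more.
    import math

    def pooled_effect(studies):
        """studies: list of (effect_size, standard_error); returns (mean, SE)."""
        weights = [1.0 / se ** 2 for _, se in studies]  # precision weights
        total = sum(weights)
        mean = sum(w * es for w, (es, _) in zip(weights, studies)) / total
        return mean, math.sqrt(1.0 / total)

    # Example: three hypothetical studies with effect sizes and standard errors.
    effect, se = pooled_effect([(0.25, 0.10), (0.10, 0.08), (0.40, 0.15)])
    print(f"pooled effect = {effect:.3f}, standard error = {se:.3f}")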

The issue of the need for improved quality control was raised again by Mr. Baron, who emphasized that the examples he had referred to earlier in the meeting were illustrative of a number of studies that researchers would agree required further review. Dr. Hanushek urged bringing needed corrections to the attention of WWC. Mr. Baron recommended that the WWC undertake a more comprehensive assessment of how standards are actually applied in cases where, in spite of good standards and process, the outcome does not hold up.

Dr. Constantine said that the WWC website includes an invitation to submit input, but not necessarily to comment on errors, and that a clear statement about the quality-assurance process should be made accessible on the site. She and Dr. Cottingham invited Mr. Baron to put his particular concerns in writing.

Dr. Hanushek closed the session by commenting that he hoped to return to open questions at another Board session, perhaps in May, citing as an example Mr. Handy's suggestion that there is a need to know the costs of different interventions. Dr. Shaywitz said that despite all the difficult and complex questions, she was thankful there is a What Works Clearinghouse, calling it an extraordinarily powerful tool that is really going to make a difference to practitioners, who are the ones who have to make the decisions.

10:30 a.m. – 12:00 p.m. Future of NBES Discussion

Dr. Hanushek then asked the Board to begin considering open questions that were raised during the presentations, particularly within the context of funding priorities for recommended interventions.

Drs. Shaywitz and Cottingham expressed enthusiasm for the potential of the WWC to engage and assist practitioners in determining the value of education research. Dr. Cottingham said that an independent survey will be conducted to obtain a more objective assessment.

The subject of the use and value of the WWC was proposed by Dr. Hanushek as a featured topic for the next meeting of the Board in May 2009. He then reflected on the mission of NBES, which is to provide advice and guidance to IES, and invited Dr. Bement to share his perspective on maintaining the balance between supporting IES functions and conducting independent assessments.

Dr. Bement said that the NSF Board is mandated by legislation to provide policy direction and oversight for the Foundation as well as advice regarding research and education. To this end, it conducts studies with policy implications regarding the role of research, synthesizes data, builds bridges between researchers and decisionmakers, and carries out assessment studies. Joint assessments may also be conducted of programs where policy is at issue.

He said that a constructive tension should be encouraged between the Board and IES so that the Board avoids being "captured" by the agency the NBES should be advising, thereby losing its value in providing oversight. For example, the NSF Board from its inception has taken a "bottom-up" approach, ensuring that merit reviews are conducted initially by the science community, and staff appointed under the Intergovernmental Personnel Act (IPAs) have been placed in key positions to ensure contact with professionals on the cutting edge of scientific research.

Mr. Handy suggested that the Board's more substantive role is in determining research priorities, in addition to providing oversight and advice, and that as the new Director assumes responsibilities over the next 6 to 12 months, it will be critical for the NBES to set the stage for ensuring authentic input in determining new policy directions.

Dr. Hanushek said that the NBES, unlike the NSF, has no statutory authority to consider budgets, but rather is required to approve the priorities of the agency. Mr. Handy replied that, therefore, the only substantial stake the NBES has regarding IES policy is to disapprove the agency's policies, where appropriate.

Dr. Shaywitz then asked at what point the ability to voice concerns and engage in dialogue could be structurally built into the process. Mr. Handy reiterated the need to be proactive as well as to use the more conventional venues of the Board's Five-Year Report and the Director's Report.

Mr. Baron reflected on the desirability of working as closely in concert with IES as possible and utilizing the resources of the Board toward developing fresh approaches to research and policymaking—potentially less emphasis on curricula and more on hiring teachers.

Dr. Hanushek then turned to Ms. Betka for feedback in her capacity as acting director about the timing of research priorities, major new studies, and ways of improving interactivity between the Board and IES. She agreed that the Board could be proactive in suggesting new directions and priorities and how they might be reflected in grant announcements, evaluations, and studies that IES might pursue. Once priorities are approved, they can be used to justify budget requests.

Dr. Geary commented that failure is also instructive in weeding out funding for professional development and other programs that are not producing results, that this is of particular value in a time of deep spending cuts in education, and that the Board should encourage a research culture that accepts that null results sometimes occur. Dr. Shaywitz qualified that comment by suggesting that the Board not generalize these sentiments to all professional development programs.

Dr. Cottingham mentioned the need for a major evaluation of early childhood education and said that there is currently no program vehicle in place for studying kindergarten, pre-K, and related programs.

Dr. Kerachsky suggested that the biggest potential challenge for the Board would be transitioning from the prior to the current administration and new Director, while retaining the same level of independence and objectivity. He cautioned against the NBES's falling prey to bureaucracy and the "culture of government" that can come into play in this regard. Dr. Hanushek affirmed the Board's ongoing commitment to meeting this objective, while commenting that NBES had not had the opportunity to weigh in on important "macro" decisions related to NCES funding priorities around large-scale longitudinal panel studies. This is a particular area in which the Board, by virtue of its experience and lack of bureaucratic enculturation, could more directly participate in shaping IES programs. Dr. Bement agreed that the Board might explore ways of implementing a more natural working relationship with IES.

Dr. Hanushek asked Ms. Betka to provide a list of research areas, including nuances, prior to these announcements, to keep the Board informed, particularly if NBES is not involved in setting research priorities. Ms. Betka replied that information about the programs is currently available on the IES website, although specifics in the RFAs have not been posted to date.

Dr. Okagaki said that most of the R&D centers are tied to the legislation. She remarked that the Urban Education Research Task Force identified high-priority issues for urban superintendents, some of which were translated into R&D competitions.

Following up, Mr. Baron agreed that the Board should establish partnerships that would enable it to provide more regular guidance about priorities and policy directions for curricular and evaluation studies. Dr. Hanushek qualified Mr. Baron's remarks by stating that the Board's function is to act in an advisory capacity.

12:00 p.m. Summary Views and Next Steps

Dr. Hanushek ended the meeting by stating that the Board should have in mind the high-priority recommendations regarding the quality of education in the United States to present to a new IES Director. Advice should cover the full spectrum of the organization and operations of the Institute, including modifications and improvements.

In closing the meeting, he asked participants to forward agenda items for the NBES meeting in May 2009, and said that a number of the very productive discussions the Board had just engaged in would be continued in the next meeting.

The Board meeting was adjourned at 12:10 p.m.

The National Board for Education Sciences is a Federal advisory committee chartered by Congress, operating under the Federal Advisory Committee Act (FACA; 5 U.S.C., App. 2). The Board provides advice to the Director on the policies of the Institute of Education Sciences. The findings and recommendations of the Board do not represent the views of the Agency, and this document does not represent information approved or disseminated by the Department of Education.