National Board for Education Sciences
February 6, 2015 Minutes of Meeting

Location
Institute of Education Sciences (IES) Board Room
80 F Street, NW
Washington, DC 20001

Participants
National Board for Education Sciences (NBES) Members Present
David Chard, Ph.D. (Chair)
Susanna Loeb, Ph.D. (Vice Chair)
Anthony S. Bryk, Ed.D.
Michael Feuer, Ph.D.
Darryl J. Ford, Ph.D.
Kris D. Gutierrez, Ph.D.
Bridget Terry Long, Ph.D.
Judith Singer, Ph.D.
Robert A. Underwood, Ed.D.

NBES Members Absent
Adam Gamoran, Ph.D.
Larry V. Hedges, Ph.D.
Margaret R. (Peggy) McLeod, Ed.D.
Deborah Phillips, Ph.D.
Hirokazu Yoshikawa, Ph.D.

Ex Officio Members Present
Sue Betka, Acting Director, IES, U.S. Department of Education (ED)
Thomas Brock, Ph.D., Commissioner, National Center for Education Research (NCER)
Peggy Carr, Ph.D., Acting Commissioner, National Center for Education Statistics (NCES)
Joan Ferrini-Mundy, Ph.D., Assistant Director, National Science Foundation (NSF), Directorate for Education and Human Resources
Joan McLaughlin, Ph.D., Commissioner, National Center for Special Education Research (NCSER)
Ruth Curran Neild, Ph.D., Commissioner, National Center for Education Evaluation and Regional Assistance (NCEE)
Brett Miller, Ph.D., Health Scientist Administrator, Child Development & Behavior Branch, Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), National Institutes of Health (NIH)

Invited Presenters
Chris Chapman, Sample Surveys Division, NCES
Jessica Ramakis, Deputy Chief of Staff, Office of Planning, Evaluation, and Policy Development, ED
Anne Ricciuti, Ph.D., IES Deputy Director for Science, ED

NBES Staff
Ellie Pelaez, Designated Federal Official (DFO)

Call to Order
David Chard, Ph.D., NBES Chair
Dr. Chard called the meeting to order at approximately 9:00 a.m., and Ellie Pelaez, DFO, called the roll. Board members unanimously approved the agenda for the meeting.

Dr. Chard welcomed a new member, Michael Feuer, Ph.D. He explained that the agenda for the meeting reflects suggestions made by the Board to allot more time for discussing internal IES processes in depth.

Overview of Proposed Budget, Fiscal Year (FY) 2016
Jessica Ramakis, Deputy Chief of Staff, Office of Planning, Evaluation, and Policy Development, U.S. Department of Education
Ms. Ramakis described the budget proposed for FY 2016, noting that President Obama expressed his continued support for ED in his State of the Union address. As demonstrated in past budget requests, the President sees education as a high priority. Although there have been gains (e.g., an historically high graduation rate and narrowing of the achievement gap across ethnicities), many challenges remain.

The proposed budget would fund a comprehensive approach to strengthen education from early childhood through higher education. It provides $70.7 billion in discretionary funding—an increase of $3.6 billion (or 5.4 percent) over FY 2015. Notably, the proposed budget is based on the premise that the caps put in place by the budget sequestration (enacted in 2013) would be eliminated. It focuses on four core themes:

  • Increasing equity and opportunities for all students
  • Expanding high-quality early learning programs
  • Supporting teachers and school leaders
  • Improving access, affordability, and student outcomes in postsecondary education

The budget also represents a commitment to investing in what works that cuts across the core themes.

Proposed investments in equity and opportunity include $1 billion under Title I to help schools, districts, and states meet the challenge of reaching high standards for disadvantaged students. Programs under the Elementary and Secondary Education Act would see a $2.7-billion increase, including $93 million for Promise Neighborhoods, $50 million for a Native Youth initiative, $36 million for English-language learners (ELLs), and new support for streamlining assessments. Equity and opportunity pilot programs offer flexibility for districts to test new approaches in high-poverty schools. Programs under the Individuals with Disabilities Education Act would receive a $175-million increase. An additional $31 million would go to civil rights enforcement, including increasing staff to respond to complaints.

For high-quality early learning, the budget includes $75 billion over 10 years for Preschool for All, $500 million more for the preschool development grants program that launched in 2014, and $130 million more for two programs supporting preschool for children with disabilities.

Among the proposals to better support teachers and school leaders is a bold new initiative, Teaching for Tomorrow, which would devote $5 billion over 5 years to rethinking how teachers are prepared and supported. The Excellent Educators Grant program, a rebranding of the Teacher Incentive Fund, would receive $350 million. Three current programs would be modernized to improve teacher and principal preparation, with support of $139 million. Another $200 million would go to the Education Technology State Grants, which would support teachers' use of technology to improve instruction.

The President proposed America's College Promise, which would provide 2 years of free community college through a $60.3 billion investment in a new federal–state partnership over the next 10 years. The budget also includes funding to increase Pell Grants (and index them to inflation) and simplify the federal student financial aid forms. The American Technical Training Fund, a new partnership with the Department of Labor to improve job training, would receive $200 million.

Demonstrating a commitment to investing in what works, the budget would increase funding for the Investing in Innovation (I3) and First in the World programs. Other programs that offer incentives to use evidence-based approaches would also receive budget increases.

Specifically for IES, the budget would increase funding for statewide longitudinal data systems (SLDS), the What Works Clearinghouse (WWC), statistics, and administration of the National Assessment of Educational Progress (NAEP), among other programs, Ms. Ramakis concluded. Sue Betka added that IES would also receive more funding for early childhood research and special education evaluation. The proposed budget represents a 17-percent increase for IES, she confirmed.

Discussion
Dr. Chard asked why there appeared to be no funding for special education research. Ms. Betka explained that special education has recovered some of the budget it lost in 2010, but she had no further explanation for the lack of funding. In response to Darryl J. Ford, Ph.D., Ms. Betka said that, if the budget were approved, funding for research, development, and dissemination; statistics; and NAEP would be at peak levels. Funding for SLDS would be high, but not as high as in 2009, when the American Recovery and Reinvestment Act provided $250 million for the program.

Ms. Ramakis pointed out that Congress is considering the budget, and ED is responding to requests for information and evaluating the effects of various proposals. She hoped that the new budget would be finalized by October 1, 2015—the beginning of FY 2016. Spending caps will be part of the funding discussion, said Ms. Ramakis, and she believed there is "a lot of room for agreement."

Scientific Review Process
Anne Ricciuti, IES Deputy Director for Science
Dr. Ricciuti gave an overview of the process of scientific peer review and the Standards and Review Office (SRO). The SRO is part of the Office of the Director of IES and separate from the four IES Centers. It is responsible for handling the peer review of Center reports as well as research grant applications. Dr. Ricciuti noted that this session would focus on the peer review of research grant applications.

The Requests for Applications (RFAs) are written by the Centers. The IES director, Dr. Ricciuti (as Deputy Director for Science), and SRO staff give input to the Centers on the RFAs, and the Centers work closely with the SRO on the timing and deadlines in the RFAs. Dr. Ricciuti reminded the group that the Centers had two submission dates each year for the main research competitions until funding constraints led to the decision to have one submission date each year.

Typically, RFAs are released in the spring. Through the summer, SRO staff discuss input from previous review panels, plan the next cycle of review, identify needs for the upcoming review, and recruit reviewers. During this time, the Centers provide technical assistance to potential applicants.

By the deadline for submission, applicants submit their applications via Grants.gov. IES has an online peer review system (PRIMO), which is maintained and managed by a contractor. The contractor also manages logistics and other review activities for the panel meetings. The SRO and the contractor process the applications, which involves screening them for general compliance with and responsiveness to the requirements of the RFAs. The contractor conducts the initial compliance screens, and the Center program officers conduct the initial responsiveness screens. The SRO reviews applications that are flagged for problems with compliance and/or responsiveness and, often working closely with the Centers, makes the final decision on the disposition of every application.

Dr. Ricciuti said the initial screening, sorting, and processing are labor-intensive. Frequently, applications are missing key information or are submitted for the wrong competition. Dr. Ricciuti emphasized that the SRO does not select the best competition for a given application but rather works to ensure it reaches the competition intended by the applicant. Compliance screening ensures that the application meets format requirements (e.g., page length), while responsiveness screening assesses whether substantive requirements are met, such as the type of work eligible for support.

Next, the SRO assigns applications and reviewers to panels or panel sections (as needed). Then the SRO releases the abstracts of the applications and the names of the personnel included in them so that reviewers can identify any conflicts of interest they may have. For most panels, each reviewer is assigned about 8–10 applications, which they receive with instructions about 6–8 weeks before the panels meet. Reviews are due about 4–5 weeks before the panels meet.

Reviewers submit their initial scores and narratives. The SRO staff perform a triage process and obtain third reviews as needed (for applications with widely discrepant initial scores). Most applications have two or three primary reviewers. The triage process is used to send the most competitive applications forward to the full panel for discussion and final scoring. Typically, no more than 25 applications go forward to the full panel. After triage is conducted, the SRO sets the order in which the panel reviews the applications. SRO and Center staff attend the panel meetings as monitors and observers, respectively.
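
To make the triage step concrete, the following minimal sketch ranks applications by mean initial score and flags widely discrepant scores for a third review. The discrepancy cutoff and the assumption that lower scores are better are illustrative; only the 25-application cap comes from the description above.

    # Illustrative sketch of the triage step; the cutoff and the score
    # direction (lower = better) are assumptions, not SRO's actual rules.
    PANEL_CAP = 25            # "no more than 25 applications go forward"
    DISCREPANCY_CUTOFF = 2.0  # hypothetical gap that triggers a third review

    def needs_third_review(scores):
        """Flag an application whose initial scores are widely discrepant."""
        return max(scores) - min(scores) >= DISCREPANCY_CUTOFF

    def triage(applications):
        """Return (panel queue, applications needing a third review).

        `applications` is a list of (app_id, [reviewer_scores]) pairs.
        """
        flagged = [a for a, s in applications if needs_third_review(s)]
        ranked = sorted(applications, key=lambda item: sum(item[1]) / len(item[1]))
        return [a for a, _ in ranked[:PANEL_CAP]], flagged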

The final scores provided by the panels are validated and made available to the SRO and the Centers about 24–48 hours after the panel meeting. The contractor provides top-priority summary statements to IES for consideration by the Centers about 2 weeks after the meeting. SRO staff prepare peer review reports for each competition, and the Centers include the peer review reports in their funding slates. The SRO sends the peer review reports to the Centers about 3 weeks after the conclusion of the meetings. The Centers make their initial funding decisions on the basis of final scores and summary statements and prepare a funding slate memorandum for review by the IES director approximately 4 weeks after the panel meeting.

Finally, the SRO contractor completes all the summary statements within 4–5 weeks of the meetings—which translates to about 7 months after the initial submission. Summary statements are uploaded into the Applicant Notification System (ANS). The Centers and the Grants Administration Office notify Congress of the awards, regret letters and summary statements are released in ANS, and the awards are announced (about 8–10 months after submission).

Dr. Ricciuti said that, despite complaints about the timeline, the time from submission to award is not very different from that of NIH, although the two agencies differ in their release of scores and summary statements. She said IES is looking at ways to speed up the release of that information.

At a recent NBES meeting, Board members expressed concern about the decline in the number of applications received. Dr. Ricciuti said the total number of applications received by the two research centers together peaked in FY 2010, then declined through FY 2014 (NCSER did not run competitions in FY 2014 because of insufficient funds). In FY 2015, with only one submission date and NCSER running competitions again, the number received was the most IES has ever received at one time (in prior years, applications were spread across two submission dates). She presented disaggregated data on the number of applications reviewed for the NCER main education research competition only, which also peaked in FY 2010. The number reviewed for the NCSER main research competition did not spike as dramatically and peaked in FY 2012.

Discussion
Dr. Ricciuti said there is ongoing analysis of the data on resubmissions (e.g., the funding rate for resubmissions compared with initial submissions).

Follow-Up Item
When finalized, SRO will present the results of its analysis of resubmissions and other data.

Brett Miller, Ph.D., said that 2–3 business days after an NIH review, applicants can get their scores and learn whether their applications were discussed by a panel. Dr. Ricciuti said IES could send scores out sooner. However, IES must notify Congress about its awards before it notifies unsuccessful applicants. Dr. Miller pointed out that for NIH, the panel scores are just one part of the funding decision. The NIH's summary statements are typically prepared within 30 days. The time from submission to NIH award, said Dr. Miller, is about 9 months.

Joan Ferrini-Mundy, Ph.D., said the NSF process is very different. There is no separate review office. Rather, the staff who set up and manage the grant programs also make the funding recommendations. The NSF aims to respond to proposals within 6 months, but the process is not centralized. It uses a rating scale rather than a numeric score. Staff consider the ratings and the balance of projects within a portfolio, then make a proposal to the program director. Dr. Ferrini-Mundy acknowledged that the process is very subjective. She added that NSF is moving to virtual panel review, which has proven effective for reviewing very large numbers of applications.

Discussion
Dr. Loeb asked whether resubmissions are handled differently. Dr. Ricciuti responded that if the same reviewer is still empaneled and has no conflict of interest, he or she also reviews the resubmitted application. She noted that efforts to ensure the same reviewer assesses the resubmission are not always successful.

Dr. Long wondered what could be done to mitigate the work of processing incomplete applications. She asked Dr. Ricciuti whether IES spends time addressing issues that applicants should be able to manage on their own. She agreed that screening should be straightforward, adding that an applicant whose submission is declared noncompliant or nonresponsive will typically protest and that the government should not be seen as putting up barriers to getting applications to the expert peer reviewers. Dr. Long said a closer look at the relationship between the quality of the submissions and problems with compliance or responsiveness could identify some areas that IES might deem non-negotiable.

Dr. Long noted that relying heavily on scores is problematic, because scores are “noisy”—that is, reviewers all have their own interpretations of the scale. She said that NIH includes program priority as part of the funding decision. She suggested a mechanism for capturing the enthusiasm of the panels for a given application and identifying proposals likely to have the greatest impact on education if funded.

Dr. Ricciuti responded that the Centers make the funding recommendations. IES used to collect a separate score from reviewers indicating their enthusiasm for a proposal, but that score correlated highly with the overall scores and so was dropped, she added.

Dr. Miller said NIH added an impact score, which prompts reviewers to consider broad impact and innovation. The use of program priorities varies across the NIH Institutes and Centers. Dr. Miller said there is discussion underway about how to make priorities more transparent in advance, so that applicants can see how their proposals align. At NICHD, all programs have areas of emphasis, but those do not necessarily translate into priorities. With resubmissions, staff have an opportunity to talk with applicants whose submissions were deemed low-priority about what would align better with NIH needs.

Dr. Ferrini-Mundy said NSF has two criteria for every program: intellectual merit and impact. Every application should make a case for both areas. Most panels aim to reach consensus on a rating, which provides some calibration and is useful to program staff.

Adding to Dr. Long's point, Judith Singer, Ph.D., said the burden should be on the applicants to ensure they meet the compliance requirements, not the staff. Just as the NIH requires a standardized biosketch, IES could create checklists or other standardized forms that make the application process more transparent and reduce staff processing time.

Dr. Singer added that reviewers' assessments are driven by the criteria provided to them, and she called for a broad discussion of the criteria used by IES and other agencies. She also suggested asking reviewers immediately after the panel review process for input on improving the instructions to reviewers and providing meaningful feedback to applicants.

Dr. Singer also asked for a systematic analysis of the quality of reviews and reviewers. Reviewing proposals is time-consuming, and the quality of review may suffer if the top people in the field are unwilling to commit their time. Furthermore, the reviewers should be as good as or better than the applicants in terms of professional achievements. It may be worth rethinking the rubrics used to make assessments, Dr. Singer concluded. Dr. Miller noted that NIH makes public the rosters of each review panel and the end date of each reviewer's term on the panel. Dr. Singer pointed out that serving as a reviewer for NIH or NSF is a criterion for promotion in academic medicine and science but not in education, which affects the quality of reviewers.

Dr. Ricciuti assured the Board that the SRO has been working on ways to make application processing more efficient, but addressing some of the substantive issues is more challenging. She said the SRO works to ensure reviewers are of high quality. Before reviewers are invited to serve as principal members of a panel, the SRO staff assesses their reviews, scores, and performance in panel meetings. Dr. Ricciuti said the review process requires experts from various areas, and she said she would be happy to talk more about how to get the best reviewers.

Robert A. Underwood, Ed.D., said the NIH and NSF provide reviewers with clear instructions about their priorities, but IES does not seem to. Also, he wondered whether poorly written or poorly understood RFAs are the reason for problems with compliance or high rejection rates. Dr. Ricciuti said the priorities of IES are communicated in the RFAs, and the RFA is the guiding document for reviewers. Over the years, RFAs have been revised and clarified to address common compliance issues. Dr. Ricciuti said IES continues to struggle with responsiveness, and she welcomed insights into the issue.

Dr. Miller agreed that communicating priorities is "an art." NIH aims to make funding announcements as clear as possible, but staff cannot provide more specific direction unless there are special review criteria. Dr. Ferrini-Mundy added that for NSF, the content of the solicitation defines what reviewers should consider.

Thomas Brock, Ph.D., said that NCER and NCSER invite field-initiated proposals under broad topic areas, such as early childhood education or math and science. Special calls for applications, such as for Research and Development Centers, may focus on particular issues, signaling IES priorities.

Kris D. Gutierrez, Ph.D., said that, in her experience with NSF, the ultimate goal of the grant programs is to strengthen the field. The reviewers focus on that goal in their feedback, which is very instructive to applicants. Dr. Ricciuti said the summary statements are intended to give applicants constructive feedback that would be useful for resubmission. Dr. Ferrini-Mundy noted that NSF program officers can include additional comments for applicants with declination letters. If a proposal is being considered for funding, NSF staff often engage with the principal investigator, which offers a chance to address issues raised about the application.

Dr. Feuer pointed out that the pursuit of high-quality reviewers and experts is almost certain to come up against conflicts of interest. Clearly, experts should be involved, but human nature dictates that they will have tacit or explicit preferences for certain other people working in the field. One mechanism for addressing potential conflicts is to facilitate discussion among the reviewers about personal contacts or other biases, said Dr. Feuer. Dr. Ford agreed that it is very difficult to balance the expertise needed to ensure a pipeline of reviewers who promote excellence in research against individuals' inherent biases (e.g., hiring people who "look like us" or come from familiar institutions).

Dr. Ford asked whether IES considers the diversity of panels broadly and over time. Dr. Ricciuti acknowledged that ensuring diversity and a lack of bias is difficult. As a result of discussions about diversity prompted by NBES, the SRO requests demographic information from reviewers (on a voluntary basis) and has been working to increase the diversity of reviewers on the panels. Dr. Ricciuti welcomed further suggestions for increasing diversity.

Dr. Ferrini-Mundy said NSF holds larger reviewer panels and gathers 4–6 reviews of each proposal, which allows for a wider breadth of expertise and experience. Reviewer training includes a presentation from the agency on implicit bias.

In response to Susanna Loeb, Ph.D., Dr. Ricciuti said IES still has difficulty recruiting reviewers with expertise about ELLs and students with disabilities, among others. She encouraged Board members to recommend reviewers, which has been fruitful in the past.

Follow-Up Item
Dr. Ricciuti will provide the Board with a list of topic areas for which expert reviewers are particularly needed.

Dr. Gutierrez suggested a closer look at the topics and problems of practice being addressed by the RFAs to learn whether IES is asking the right questions. Dr. Loeb agreed that IES should consider “areas of need.”

Dr. Chard pointed out that an individual's academic pedigree and experience do not always correlate with good review skills. The hallmarks of high-quality reviewers should be built into the recruitment strategy. Review skills are learned over time, and a 3-year term may not be long enough. Dr. Miller said NIH has standing study sections and also brings in experts as needed, which provides a good opportunity for interaction as well as a chance to broaden the diversity of the section. The NIH term for a standing committee is 4 years (12 meetings), and the transitions are staggered, so the section always includes experienced and novice reviewers.

Dr. Ricciuti said that principal panel members serve 3-year terms and must make up at least 50 percent of the panel. Others may be invited to serve as reviewers for one session or 1 year, and they can be invited to participate on a temporary basis again. At this point (with one submission date per year), a 3-year term involves one panel meeting a year; Dr. Ricciuti said IES could consider longer terms.

Dr. Feuer said that to encourage diversity among institutions and applicants, IES should not be too draconian about enforcing compliance requirements. Applicants who are new to the process may make mistakes in their submissions but will improve over time as they gain technical expertise. Dr. Miller said NIH staff work with specific programs, such as university offices of minority affairs, to encourage more diversity among applicants.

Dr. Ferrini-Mundy said NSF is tracking the number of applications it receives and the quality of the match between its solicitations and the responsiveness of applications. She wondered what kind of capacity-building in the field would improve the match and how NSF could determine if it is attracting the right people to its programs. In response to Dr. Singer, Dr. Ferrini-Mundy said she was sure that some applicants submit the same proposals to multiple agencies, but she could not track it. She said NSF reviews the IES basic awards for math and science education to determine if the grantees received NSF awards earlier, which helps make the case that the NSF awards are foundational.

Dr. Chard asked whether a return to two submission deadlines per year would be better for the IES staff, and Dr. Ricciuti said she would know more after the debriefing from 2014. The single deadline allows IES more time between cycles to make adjustments and improve systems, she said. In addition, staff can focus on other responsibilities in the downtime between submissions. On the other hand, staff are challenged to manage so many applications at once and to recruit enough suitable reviewers. Dr. Ricciuti reiterated that the decision to have one deadline was not made on the basis of the review process.

Dr. Bryk asked whether IES could speed up the process for resubmissions. Waiting nearly 2 years to learn the fate of a proposal that requires resubmission is not optimal for stimulating research. Everything is accelerating, he said, and IES has to be faster. Dr. Long noted that applicants are required to develop partnerships as part of their proposals, then must put their partners on hold while awaiting a funding decision. In that time, school leadership can change and other partners can drop out. She suggested moving the award notification date to February or March, so that grantees can use the summer to prepare for the upcoming school year.

Dr. Chard concluded that he has heard positive feedback about the IES review process from both reviewers and applicants, and the process seems to have improved significantly since IES began.

Adaptive Design for NCES
Peggy Carr, Ph.D., Acting NCES Commissioner, and Chris Chapman, Sample Surveys Division, NCES
Dr. Carr said today's presentation focuses on using adaptive design to improve the response rate for surveys. Ultimately, the goal of adaptive design is to improve outcomes. She noted that IES spends a lot of money on assessments and, in some cases, response rates are so poor that the data cannot be used to draw conclusions. The Trends in International Mathematics and Science Study (TIMSS) has suffered from low response rates for many years. In 2011, NAEP state coordinators working with schools linked the NAEP and the TIMSS, saving ED more than $1 million and improving the response rates.

Using new technology, assessments can monitor students' responses to scenario-based tasks and route them to more or less difficult tasks accordingly, which is a kind of adaptive design. Another example, currently underway in Puerto Rico, is the use of various spirals of assessment items to better identify student capabilities.
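
A minimal sketch of this kind of within-assessment adaptivity appears below; the difficulty bands and thresholds are invented for illustration and are not drawn from any NCES instrument.

    # Hypothetical routing rule for scenario-based tasks: performance so
    # far determines whether the next task is easier or harder. The 0.40
    # and 0.75 thresholds and the three difficulty bands are invented.
    def next_task(task_pools, recent_correct_rate):
        """Pick the next task from difficulty-banded pools (dict of lists)."""
        if recent_correct_rate > 0.75:
            return task_pools["hard"].pop()
        if recent_correct_rate < 0.40:
            return task_pools["easy"].pop()
        return task_pools["medium"].pop()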

Mr. Chapman described the application of adaptive design to two NCES longitudinal studies, which involved altering the sample and the data collection methods during the course of the data collection. He noted that the private sector is also struggling with poor survey response rates.

In the Beginning Postsecondary Students longitudinal study, Mr. Chapman and colleagues simultaneously controlled for response rate improvement and looked at bias. They sampled about 37,000 students using a web-based data collection effort and telephone follow-up. They assessed variables and separated the students into five categories according to the likelihood that they would respond. As a pilot, they then identified 10 percent of the sample across the five categories and offered a range of incentives to complete the survey (from $0 to $50, paid on completion).

The analysts then compared the predicted response rates with the actual response rates. For students with a low propensity to respond, the response rate spiked at the $45 incentive but declined again at $50. There was a marked difference in response from the low-propensity students between the $30 and $45 incentive levels; incentives below $45 did not yield much change in the response rate.

In the next phase, analysts modeled the response propensity along with the potential for bias. From this group, they selected 500 students identified as likely to have the most impact on reducing bias and likely to respond. They offered them incentives of $0, $25, or $45. The analysts found some increase in response for all the incentive levels, although the $25 level was only marginally better than $0. The $45 incentive resulted in significantly better response rates.

On the basis of these results, Mr. Chapman identified 6,000 targets from the entire sample who were likely to respond and likely to contribute to bias, then offered them all the $45 incentive. The final results are not yet available.
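
The propensity-stratified design Mr. Chapman described can be sketched roughly as follows. The predictors, model choice, and synthetic data are stand-ins; the minutes do not describe the actual variables NCES modeled.

    # Rough sketch of the approach: model response propensity, split the
    # sample into five categories, and pilot incentives in each. All
    # variables here are synthetic stand-ins for NCES's frame data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(37_000, 4))                      # frame variables
    responded = (X[:, 0] + rng.normal(size=37_000) > 0).astype(int)

    propensity = LogisticRegression().fit(X, responded).predict_proba(X)[:, 1]
    # Five propensity categories (quintiles), as in the study design.
    category = np.digitize(propensity, np.quantile(propensity, [0.2, 0.4, 0.6, 0.8]))

    # Pilot: 10 percent of the sample, offered $0-$50 on completion, so
    # realized response rates can be compared with predicted rates.
    incentives = np.array([0, 15, 30, 45, 50])
    pilot = rng.random(37_000) < 0.10
    offer = np.where(pilot, rng.choice(incentives, size=37_000), np.nan)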

For the High School Longitudinal Study of 2009, researchers took a slightly different approach, focusing on contribution to bias rather than response rate. After 8 weeks of sampling with no incentive, nonrespondents most likely to contribute to bias were offered a $5 incentive, paid upfront. Analysts then reran the bias model and increased the incentive to $15. The $5 incentive decreased bias, and the $15 incentive reduced it further. Mr. Chapman hoped the final results would eliminate the bias completely.
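
A simple heuristic version of this bias-focused targeting is sketched below: among nonrespondents, those whose frame characteristics sit farthest from the current respondent pool are prioritized for the escalating incentives. This is one plausible reading of the approach, not NCES's documented method.

    # Heuristic sketch: rank nonrespondents by how far a known frame
    # variable sits from the current respondent mean; converting the most
    # distant cases moves estimates toward the full-sample mean.
    import numpy as np

    def rank_by_bias_contribution(frame_value, responded):
        """Return nonrespondent indices, most bias-relevant first."""
        resp_mean = frame_value[responded].mean()
        nonresp = np.flatnonzero(~responded)
        distance = np.abs(frame_value[nonresp] - resp_mean)
        return nonresp[np.argsort(distance)[::-1]]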

In conclusion, Mr. Chapman noted NCES hopes to learn from the U.S. Census Bureau's efforts to reduce the $1.6-billion cost of nonresponse. Adaptive design will also be incorporated in other NCES studies. Dr. Carr added that the National Household Education Survey and the Programme for the International Assessment of Adult Competencies are NCES's two most expensive surveys; improving them could save a lot of money.

Dr. Carr asked Board members for their input on the policy implications of the adaptive design approach. She noted there are issues of face validity when respondents get different incentives (especially if the respondents learn of the variation in incentives).

Dr. Carr noted that in Puerto Rico, a lot of students taking the NAEP assessment leave many questions blank. To ensure that those students are measured on the same scale as mainland U.S. students, the assessments must include more items to which they can respond. Tensions arise around having different standards and different tests for different students. The adaptive design model for Puerto Rico involves a two-stage design that is being piloted now. Schools will receive different spirals of items on the basis of current information about student abilities. The variation in assessments could raise questions.

Another concern is the lack of data on the cost-effectiveness of using adaptive design to improve survey data. Finally, there are implications for weighting results when the process varies over time, said Dr. Carr.

Discussion
Dr. Chard noted that the financial incentive essentially punishes early responders and rewards nonresponders. He asked whether some mechanism could reverse the dynamic—something akin to a fee for paying your electric bill late. Mr. Chapman said no such approach has been contemplated, but it might be possible to offer an incentive that decreases over time. In some cases, researchers offer incentives to all respondents, which can be very expensive with large samples. Anthony S. Bryk, Ed.D., pointed out that if some people are paid to respond, others may see no benefit in responding for free.

Dr. Singer and Dr. Bryk raised questions about the effects of incentives on weighting, and Mr. Chapman said such issues will be assessed in depth with the final data.

Regarding the Puerto Rico tests, Dr. Loeb said every school has a range of high and low achievers. A test geared to get more responses from low achievers could lose important information needed to understand the high achievers in that school. She asked whether it is harder to adapt a design at the student level than the school level. Dr. Carr explained that the NAEP is not designed to provide student-level or even school-level data. The findings are aggregated at the group level.

Dr. Carr said Puerto Rico's state education chief is worried about the perception that some schools will get easier test items. The perception is problematic, but modeling student responses from a small amount of data introduces its own difficulties.

Dr. Chard wondered whether communities would be surprised to learn that their schools are getting a different test; he said most teachers and administrators know if they are in a low-achieving community. Dr. Carr said that those who advocate for uniform standards resist the idea. Dr. Loeb suggested and Dr. Carr agreed that a student-level adaptive test would address the concerns.

Dr. Feuer recommended looking at the economics literature for examples of the long-term cultural effects of incentives, such as paying plasma donors.

Dr. Singer said that if the federal government is moving toward new designs and incentives, there should be a broad discussion of policies and practices, because the opportunity for unintended consequences seems large. She praised the approach described by Mr. Chapman but said the data do not match what students learn about sampling. Apparently, the situation is changing, and the findings do not mesh with current literature.

Dr. Bryk pointed out that the shift to technology-based approaches facilitates large-scale adaptive testing. The approach is promising, because researchers can gather more information from fewer, better-targeted items. He suggested IES look at adaptive design experiments across fields to determine whether some conditions introduce significant unanticipated biases. Dr. Singer said it would be helpful to find ways to engage people with psychometrics expertise in this effort.

Mr. Chapman agreed that some of the results from the longitudinal studies contradict the earlier literature, such as findings on the use of incentives. Dr. Feuer suggested paying close attention to the optics and politics of incentives, especially offering money to high school students to do better on the NAEP. Dr. Carr said one study provided incentives for getting a certain number of items correct on a 12th-grade math assessment, but those who lacked the ability did not perform better; some students who have the ability but lack motivation may perform better in response to an incentive. Dr. Feuer said the sample might represent only those who were likely to perform better anyway.

Dr. Ford questioned the resistance to incentives. He pointed out that many people do not like their jobs but persist because they like getting paid. He asked how to incentivize students to engage in work that is not meaningful. He also wondered whether efforts to gather baseline data would improve if the questions were more engaging. The work students get in school is boring, and they do not like it, but people will spend a lot of time on meaningless pursuits if they are engaging, said Dr. Ford.

Dr. Carr said she is talking with representatives of the gaming community about how to better capture individuals' attention. They noted that tasks need to be commensurate with ability. Dr. Carr said that scenario-based assessment tasks will incorporate some gaming components.

Lunch
The Board adjourned for lunch at approximately 12:00 p.m. During the lunch break, NBES members participated in ethics training, delivered by Marcia Sprague of the Ethics Division of ED's Office of the General Counsel. The public meeting resumed at 1:05 p.m.

IES Commissioners' Reports

National Center for Education Evaluation and Regional Assistance
Ruth Curran Neild, Ph.D., NCEE Commissioner
Dr. Neild described the holdings and services of the National Library of Education, an IES partner. The library is located in ED headquarters and houses a collection of nearly 26,000 rare books, as well as current volumes. It is open to employees and to others by appointment. Efforts are underway to digitize valuable pieces of the collection that are not available elsewhere.

Dr. Singer pointed out that housing a library is expensive and involves a lot of work; she asked whether the Department should keep a library of rare books given that the Library of Congress has much more capacity to maintain a collection. Dr. Neild responded that it is difficult to find a home for a complete collection of books. She noted that a survey of federal libraries found surprisingly little overlap among them.

Dr. Neild summarized some efforts to improve the Education Resources Information Center (ERIC)—for example, including the grant number with published work funded by IES. NCEE is encouraging IES grantees to submit their published work online. Staff are talking with representatives of PubMed about making the two databases interoperable.

Among the numerous reports released recently by NCEE and its regional educational laboratories was a descriptive summary of teaching residency programs, which examined the programs' approaches and characteristics rather than their effectiveness. Dr. Neild said the report demonstrates that NCEE's evaluation role extends beyond measuring the impact of funded programs; NCEE evaluations also examine the effectiveness of interventions that are eligible for federal funding. The report is part of a portfolio addressing teacher preparation and professional development, one of NCEE's largest and best-funded portfolios.

The WWC will host a webinar in March on designing strong quasi-experimental studies. The impetus for the webinar came from conversations with program officers who work on tiered-evidence grant competitions, such as I3. The webinar will explain that, to meet WWC standards, randomized controlled trials (RCTs) are preferred, but quasi-experimental studies can meet standards with reservations. NCEE also is producing a fact sheet about quasi-experimental study designs.

Increasingly, IES is being asked to provide more technical assistance to program offices and others doing evaluation, said Dr. Neild. She raised the issue of how the role of IES might differ when research and evaluation are not "off to the side" but rather incorporated into programs that are not housed at IES. Dr. Neild said NCEE currently manages a contract that provides technical assistance to Investing in Innovation grantees to increase the probability that their evaluations will meet What Works Clearinghouse standards. These efforts, and the direct assistance that NCEE provides to program offices on incorporating evidence into their competitions, require staff time. NCEE gradually has been taking on this expanded role in the Department without a formal directive or additional staffing.

National Center for Education Statistics
Peggy Carr, Ph.D., Acting NCES Commissioner
Dr. Carr said the TIMSS for math and science now includes more diagnostic rubrics in the scoring protocols that allow analysts to better understand what students cannot do (i.e., what their mistakes and errors reveal), not just what they can do. The new method will provide more feedback about skills and processing issues. The United States is the only country supporting the transition of the TIMSS to a technology-based assessment. Dr. Carr hoped that in the next round of the TIMSS, NCES would pilot the transition to tablet-based tests, as the NAEP is doing now.

By conducting pre-assessment visits online instead of sending individuals to schools to prepare for the NAEP, ED has saved millions of dollars, said Dr. Carr. NCES is conducting some bridge studies as the NAEP transitions from paper to tablets. In 2017, the NAEP for reading and math in grades 4, 8, and 12 will be completely digital. Because the NAEP is used as an independent indicator across sites, it is important to ensure a smooth transition throughout which the trend lines remain viable and defensible.

Dr. Carr explained that NCES has requested increased funding to validate bridge studies at the state level. Currently, bridge studies are validated at the national level. Some are nervous that the move to technology-based assessments will ruin the data set, but Dr. Carr contends that NCES must make the switch to the digital world before paper becomes completely outdated. She added that funds have also been requested to increase the number of districts included in the Trial Urban District Assessments.

Among the findings of upcoming NCES reports, Dr. Carr cited a small increase in high school graduation rates from 80 percent to 81 percent. Another report includes the first nationally representative assessment of vocabulary of students of all ages.

National Center for Education Research and National Center for Special Education Research
Thomas Brock, Ph.D., NCER Commissioner, and Joan McLaughlin, Ph.D., NCSER Commissioner
Dr. Brock said the two Centers work closely together and prepared a joint report. The proposals for FY 2015 are currently under review, and funding decisions will be made in the spring. The RFAs for FY 2016 grants will be released by the end of March, with applications due in the summer.

Dr. Brock and Dr. McLaughlin sought input from the Board about funding for low-cost, quick-turnaround evaluations. The notion has been gaining momentum for three reasons:

  • Growing acceptance of using evidence to make decisions
  • Major advances in state and local data systems, which give researchers more opportunities for evaluation
  • The perception that rigorous traditional evaluation approaches, especially RCTs, are costly and slow to produce evidence on interventions

Along these lines, NCER and NCSER are considering a funding opportunity that would target areas for which state and local decision-makers need information rapidly to determine whether policies are working (e.g., within 6 months of implementation). The studies could be RCTs or use regression discontinuity design or other quasi-experimental designs. The studies would rely chiefly on administrative records data and would not include an in-depth implementation study. Dr. Brock emphasized that these studies would be in addition to, and not replace, the comprehensive evaluations that the Institute supports through its Education Research Grants and Special Education Research Grants programs.
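
As an illustration of what such a study might compute, the sketch below estimates a program effect by regression discontinuity from administrative records. The local-linear specification and the names are illustrative assumptions, not a prescribed IES method.

    # Illustrative local-linear regression discontinuity estimate from
    # administrative data, where a cutoff on a running variable (e.g., a
    # placement test score) assigns the program. The specification is a
    # common default, not an IES requirement.
    import numpy as np
    import statsmodels.api as sm

    def rdd_effect(running, outcome, cutoff, bandwidth):
        """Estimate the program's effect at the cutoff (and its std. error)."""
        keep = np.abs(running - cutoff) <= bandwidth
        centered = running[keep] - cutoff
        treated = (centered >= 0).astype(float)
        X = sm.add_constant(np.column_stack([treated, centered, treated * centered]))
        fit = sm.OLS(outcome[keep], X).fit()
        return fit.params[1], fit.bse[1]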

Discussion
Dr. Bryk applauded the direction of the proposal but suggested thinking more critically about how findings will be used. School districts are unlikely to end a program on the basis of a short study, but they may use the findings to tweak the program. He did not think the tradeoffs were significant. Some student-level evaluation can be conducted rapidly and inexpensively, Dr. Bryk noted.

Dr. Brock said the initiative is designed for state and local policymakers who are introducing new programs and who expect to see results relatively quickly, such as within a semester or a single school year. Dr. Bryk cautioned that no one wants to see data showing that the first implementation of an intervention did not work. In some cases, it takes time to modify the intervention to make it effective.

Dr. Loeb agreed that there are possibilities for collecting implementation data in less costly ways. Quick-evaluation studies are opportunistic in nature; the key to success will be the ability to process the grants rapidly so that investigators can use the findings to learn as they go. Dr. Brock agreed the review process would need to move more quickly than usual. He said the Institute would likely accept applications on a rolling basis and make decisions relatively quickly (for example, within 3 months of receiving the proposal).

Dr. Ferrini-Mundy said the NSF offers Early-concept Grants for Exploratory Research (EAGER) awards, which involve minimal external review and are decided quickly. The awards allow for time-sensitive situations, such as research following a natural disaster. Dr. Miller said NIH has a few options for rapid funding, including one program intended to gather data related to policy changes. He said there are challenges in speeding up the review process.

Dr. Singer pointed out the disconnect between the current timeline for reviews and the notion that IES can be nimble in response to a small number of proposals. She noted that programs that accept applications on a rolling basis can run out of money early in the process, leaving nothing for good research proposals submitted later on. Dr. Brock said the funding would focus on opportunistic research, and he did not expect an overwhelming response. Just to be safe, the Institute would introduce the program on a pilot basis and limit the number of awards during the first year.

Dr. Feuer said the new approach would speak to the question of whether some policy decisions can be made with less research data than traditionally expected. He recommended that IES develop a theory of use to guide its decisions about funding. There is a difference between evidence that clinches an argument and evidence that vouches for the argument, said Dr. Feuer, and this initiative would be a model for gathering evidence that vouches. He appreciated that it invites approaches other than RCTs, because they may be overvalued. Dr. Feuer also suggested IES evaluate the impact of the initiative to better understand what policymakers and others do with the results.

Dr. Ferrini-Mundy praised the proposal as useful. She said it was not clear whether grants would support, for example, descriptive studies of implementation that would be helpful in rolling out an intervention (even before it is assessed).

Dr. Long supported the idea and advocated for a pilot to demonstrate proof of concept. If the program seems to be effective, IES would have two tiers of funding competitions: one longer process with fixed deadlines for large programs and one short process with no deadline for smaller efforts. This approach would allow investigators to pursue some research as needed, instead of trying to pack as many research goals as possible into one large grant proposal, said Dr. Long.

Dr. Gutierrez added that decision-makers often incorporate local knowledge into their decisions, but those decisions are harder to sell without collecting some data. Reaching out to local decision-makers engages them in the research findings.

Improving IES' Research and Training Grant Programs
Thomas Brock, Ph.D., NCER Commissioner, and Joan McLaughlin, Ph.D., NCSER Commissioner
Dr. McLaughlin said she and Dr. Brock have been taking stock of NCER and NCSER research programs in light of changes in the education landscape and the Government Accountability Office report suggesting IES do more to ensure the timeliness and relevance of research to practitioners. They have gathered input through public comments and stakeholder discussions, including recent technical working group (TWG) meetings with practitioners and researchers. The Board was asked to weigh in on three questions:

  • What research studies have had the most influence on education policy and practice in the past 10 years? What lessons can we draw from these studies to inform IES's future work?
  • What are the critical problems or issues in education today on which new research is needed? 
  • How can IES target its resources to do the most good for the field?

Regarding influential research, the researcher TWG concluded that cumulative evidence (e.g., meta-analyses, syntheses) usually has more impact than single studies. Rigorous studies with long-term follow-up can also be influential, even if they are not large studies. Dr. McLaughlin cited the HighScope Perry Preschool Study that began in the 1960s with 123 subjects and continues to follow them today. The researcher TWG also identified the following factors related to influential research:

  • Appeal extends beyond the field of study
  • Can include well-done descriptive studies, not just RCTs
  • Practical importance of findings demonstrated by cost-benefit analysis
  • Easily translated to local context
  • Effectively disseminated (e.g., early intervention and special education research disseminated through ED Office of Special Education Programs' technical assistance centers)

Dr. Brock said both the researcher and practitioner TWGs identified critical issues for new research, and he summarized the common areas identified by both:

  • Implementation and implementation science: Explore why programs do or do not work, variation across programs and populations, and practical aspects of modifying interventions to different contexts
  • Collaboration between researchers and practitioners: Encourage more researcher/practitioner partnerships as a strategy to make research more relevant to the needs of school personnel and more likely to be used in making improvements.
  • Quick-turnaround grants: Take advantage of natural experiments (e.g., new policies implemented in phases) and target issues of key interest to policymakers and practitioners.
  • Longitudinal research: Use administrative data sets to follow students across schools, and link data sets across fields (e.g., labor, criminal justice) to understand the relationships between education and other outcomes (a minimal linking sketch follows this list)
  • Research gaps: Support more research on the craft behind good teaching and good teacher preparation, meeting the needs of ELLs and special education students, strengthening early childhood programs, and increasing college readiness.
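
As a minimal illustration of the linking idea in the longitudinal-research item above, the sketch below joins a student-level education file to wage records on a shared identifier; the file and column names are hypothetical.

    # Hypothetical linkage of education and labor records; file and
    # column names are invented for illustration.
    import pandas as pd

    students = pd.read_csv("k12_records.csv")      # student_id, grad_year, ...
    wages = pd.read_csv("ui_wage_records.csv")     # student_id, year, wages

    linked = students.merge(wages, on="student_id", how="left")
    earnings_by_cohort = linked.groupby("grad_year")["wages"].mean()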

In light of limited resources, the researcher TWG suggested IES identify some bold research priorities and concentrate funding around them. It also suggested investing in a long-term research agenda around big problems, such as raising math scores or closing achievement gaps, which will not be solved with short-term research funding. In doing so, IES must make a strong case to policymakers about the need for a longer-term investment. The researcher TWG also called for more collaborative research (among researchers and between researchers and practitioners) and for earmarking funds for early-career researchers.

Dr. Brock said some of the suggestions are being incorporated into the 2016 RFAs. The Centers are considering some new grant competitions, such as low-cost, quick-turnaround evaluations. The Centers are analyzing the public comments and will share the findings later this year.

Discussion
Question 1: What research studies have had the most influence on education policy and practice in the past 10 years? What lessons can we draw from these studies to inform IES's future work?

Dr. Ford asked for more recent examples of influential research, and Dr. McLaughlin cited the Tennessee Student-Teacher Achievement Ratio (STAR) study of class sizes, work by Sean Reardon (who studies the effects of poverty and inequality in education), and findings of the National Academy of Sciences (NAS).

Dr. Singer said the studies cited by the researcher TWG were all promoted by a champion (not necessarily the principal investigator) who called attention to the findings and pushed to disseminate them. Funders are always challenged with deciding whether to support projects or the people who do them. When it comes to influential research, the importance of aggressively promoting findings and the role of the individual champion cannot be ignored. Dr. Singer added that the influential studies cited addressed small, narrow populations, so it may be that generalizability is not as significant a factor as many would suggest. Dr. McLaughlin welcomed more input on how to identify or develop champions of research.

Dr. Singer suggested IES evaluate the impact of its funding on the field by looking at data such as citations, publications in high-profile journals, and additional awards. Dr. Neild said the National Library of Education can analyze citations. Dr. Singer said an evaluation should also look at the impact of funded research on practice. Dr. Feuer pointed out that some important advances in research have a significant impact on policy but not on the field directly. (For example, the NAS' work on revising how poverty is measured was significant for policymaking but did not eradicate poverty.)

Dr. Feuer said data from early-career awards would be helpful. Dr. McLaughlin said NCSER funds early-career awards, and Dr. Brock added that NCER funds early-career awards in its statistics and methods competition. Data from NCSER postdoctoral fellows indicate that they are all working in higher education, and many are involved in education or special education research. These data suggest the awards are working as intended. Dr. Chard added that IES funded response-to-intervention studies in special education, which evolved as a response to 40 years of failed policies for learning disabilities. Most states now use the response-to-intervention model, and the impact has been enormous.

Dr. Brock pointed out that the TWGs gave only a few examples of influential studies; more often, in their view, it takes a body of work to move a field in a different direction. Dr. Chard said IES must find the right balance between funding replication studies and facilitating innovation.

Dr. Loeb said research on charter schools offers a good example of a range of studies (from single schools to large samples) that have provided a lot of information over the past 10 years. Dr. Gutierrez noted that the practitioner TWG did not include representatives of national organizations, who could have provided insight comparing different states and districts.

Dr. Long pointed out that the influential research cited shared some key components: they yielded data about a public good, and they looked at a variety of outcomes over time. Current research tends to focus on isolated issues in an effort to get precise results, which limits the applicability of the findings. Individual researchers have an incentive not to share data so that they can build up their own portfolios. In other cases, research may include administrative data that cannot be shared because of legal agreements. These structural issues may prevent current research from having broad influence.

Dr. Bryk added that the Perry Preschool Project and Tennessee STAR findings were disseminated by advocates promoting preschool education and small class size. He also noted that in the past 10 years, research documenting the wide variability of teacher effects in classrooms and schools has had a tremendous impact, focusing attention on the problem identified by researchers. Research on practice is hard to trace, Dr. Bryk said, because practice represents the cumulative effects of work over long periods. When the research goals are broad, it is difficult to build a body of work that moves the field.

Dr. Ford said the recommendations of the practitioner TWG are locally driven and do not seem to respond to the question about the role of IES. He suggested that the researcher TWG is the best group to identify influential research, while the practitioner TWG can address specific areas where practitioners need help. For example, the finding that the more a teacher teaches, the more students learn is transferrable, but it has no policy implication unless efforts are focused on ensuring the quality of the teacher. Dr. Loeb added that it is difficult to trace what teachers do as a function of what they know.

Question 2: What are the critical problems or issues in education today on which new research is needed? 

Dr. Chard said there is little good evidence comparing the impact of various teacher preparation models. Dr. Singer suggested IES establish some long-term projects around important questions that are broad enough to incorporate a lot of research but targeted enough to have an impact.

Dr. Bryk was surprised that the TWGs did not mention the use of technology, the cost of education, or how to make systems more efficient. Technology is changing education, he said. Ideally, these changes would be based on a thoughtful, empirically informed approach. A lot of special interests are pushing for more technology in education, so a lot of change could happen without evidence to support it, he warned.

Dr. Loeb said the field does not need more “bad” research but rather new opportunities to promote good research. She said IES should figure out what it needs and test out different approaches.

Dr. Long called for more research focused on specific populations—that is, what works for whom and under what conditions. A lot of rigorous research focuses on finding average effects, but with so much diversity in and across schools, the average does not matter. Research should look at heterogeneous effects and build testable hypotheses.

Dr. Long also said questions are often raised about scaling up effective interventions, but little time is spent helping teachers change their practices. She asked how to convince individuals to change when they have no incentives or fear the results.

Dr. Ferrini-Mundy said the NSF also needs to ask itself, “In 10 years, what will we wish we had done now?” For example, as technology plays an increasing role in society and the workplace, how should students be prepared now for the data-intensive careers of the future?

Dr. Chard asked whether researchers or practitioners in the TWGs felt that research moves too slowly in light of technological advances (i.e., are study findings obsolete once they are published?). Dr. Brock responded that the researcher TWG had some discussion of how to take advantage of “big data,” but that there was not much discussion in either TWG of how new technologies are changing classroom practices or creating new research opportunities.

Dr. Chard returned to Dr. Bryk's point, noting that the Gates Foundation is investing heavily in personalized learning, and a lot of school districts will pursue that funding, even though there is no evidence supporting the approach. Dr. Singer pointed out that the massive open online courses (MOOCs) increasingly used for postsecondary education will filter down and are already being used for Advanced Placement courses. The effects of MOOCs on classrooms and the role of teachers in MOOCs are just some areas for which research is needed. In addition, there is no education science to address how to interpret big data gathered from, for example, recording the keystrokes of students taking assessments.

Dr. Singer advocated for more partnerships across agencies. Involving IES in studies funded by NSF and NIH could be transformational and would be an efficient investment. Agencies should look for opportunities to piggyback on each other's studies and apply an interdisciplinary approach to research questions.

Dr. Singer also noted that the questions posed to the Board do not seem to be connected with the President's priorities for education as outlined in the proposed budget. The IES research priorities should be responsive to the Administration. To address broader priorities, IES should leverage its dollars and focus on strategies for improvement. Dr. Singer suggested IES ask where the agency should be in 5–10 years and how to get there.

Dr. Underwood pointed out that the Department of Defense runs its own school system, through which it collects a lot of data and responds rapidly to findings from those data. He also noted that a lot of attention is paid to teachers and environments but very little to the learners.

Dr. Ford said brain research is beginning to influence learning. He would like to see more investigation of how MOOCs or other technology can help close achievement gaps. He added that a lot of schools are not waiting for researchers to figure out how to make things better; they are synthesizing data and moving forward. The AIM Academy, for example, focuses on smart students who learn differently. Its approach demonstrates some best practices in action.

Question 3: How can IES target its resources to do the most good for the field?

Dr. Chard suggested starting with Dr. Singer's question: Where does IES want to be in 5–10 years? Dr. Long said IES has taken on all the issues in education research; there do not seem to be clear priorities. IES should consider awarding smaller, more frequent, targeted grants and leveraging partnerships with other funding organizations to improve impact.

Dr. Bryk countered that it is difficult for IES not to think broadly, but he agreed that it should set some targets, such as improving teacher quality quickly. Setting a target could serve as an exercise in quality improvement for IES. By focusing on an important problem in the field and funding individual projects, IES could build a learning community around solving that problem.

Dr. Chard said that IES' upcoming webinar on quasi-experimental design could improve the evaluation tools used by investigators. He suggested harnessing the energy of top evaluation scientists to educate more people.

Dr. Loeb liked the idea of a portfolio approach, such as NIH's. She called for more innovative research on measuring development over time—for example, how to classify different experiences or better understand how dimensions vary. One way IES could make a difference, said Dr. Loeb, may be to fund research in ways that are easier to evaluate. She added that improving teacher education means better understanding the complex interactions of individual behaviors and choices.

Dr. Gutierrez suggested paying close attention to lessons learned from IES-funded researcher-practitioner collaborations; those lessons could have an immediate effect.

In response to Dr. Singer, Dr. Ferrini-Mundy said that partnering across agencies is feasible, and NSF and IES have a history of collaboration. Such efforts bring together not only different disciplines and perspectives but also new venues and materials. For example, NSF has teacher education programs around big telescopes. In addition, said Dr. Ferrini-Mundy, agencies could come together around the issue of improving teacher quality quickly. The agencies could work together to map out what needs to be done, and each agency could take on some pieces. That approach would obviate the need to pool money across agencies, which is complicated.

Dr. Miller added that NIH has worked closely with IES and NSF. He noted that NIH has high-fidelity health data but low-fidelity education data, and more linkages could improve the situation. Dr. Singer noted that the NIH's Clinical and Translational Science Awards promote multidisciplinary teams, but education researchers are not included.

Dr. Long said it may be hard for IES to set priorities, but it can send signals to the field about areas in which it would like to see more research. She hoped that, where all else is equal, IES would focus on projects that produce a public good.

Dr. Feuer noted that nongovernmental education researchers struggle with the same questions. Some philanthropic efforts seek to target core areas for investment strategically while not deterring exploratory research or broader approaches. He suggested IES periodically select some themes and dedicate the largest funding opportunities to research on those themes (e.g., reading research), while maintaining other funding for a broad range of topics.

Dr. Feuer wondered whether IES could invest in research that leads to a better understanding of how research in education is used. Education research is snubbed because it has not “fixed” education, he noted. Efforts to bridge the cognitive, neurological, and sociological aspects of research would be helpful, said Dr. Feuer. He added that public interest around brain science is growing.

Dr. Loeb cautioned against discontinuing the open call approach, because the field has blossomed, and such efforts have generated benefits.

Dr. Gutierrez said that on her campus, she is seeing more multidisciplinary teams working together. She agreed that partnering across agencies could lead to a better understanding of issues of significant consequence and provide a mechanism for tackling the complexity of problems.

Closing Remarks & Adjournment
David Chard, Ph.D., NBES Chair
Dr. Chard said there is no new information about the selection of the next IES director. He and Dr. Loeb have been authorized to search for a new executive director of NBES, and they hoped to hire that individual by the end of March. Dr. Chard encouraged Board members to communicate with him and Dr. Loeb about agenda items for future meetings. Dr. Chard adjourned the meeting at 3:50 p.m.

Report prepared for NBES by Dana Trevas, Shea & Trevas, Inc.

The National Board for Education Sciences is a Federal advisory committee chartered by Congress, operating under the Federal Advisory Committee Act (FACA; 5 U.S.C., App. 2). The Board provides advice to the Director on the policies of the Institute of Education Sciences. The findings and recommendations of the Board do not represent the views of the Agency, and this document does not represent information approved or disseminated by the Department of Education.