National Board for Education Sciences
June 16, 2014 Minutes of Meeting

Location
Institute of Education Sciences (IES) Board Room
80 F Street NW
Washington, DC 20001

Participants

National Board for Education Sciences (NBES) Members Present
David Chard, Ph.D. (Chair)
Susanna Loeb, Ph.D. (Vice Chair)
Anthony S. Bryk, Ed.D.
Darryl J. Ford, Ph.D.
Adam Gamoran, Ph.D.
Robert Granger, Ed.D.
Larry V. Hedges, Ph.D.
Bridget Terry Long, Ph.D.
Margaret R. (Peggy) McLeod, Ed.D.
Judith Singer, Ph.D.
Robert A. Underwood, Ed.D.

NBES Members Absent
Kris D. Gutierrez, Ph.D.
Hirokazu Yoshikawa, Ph.D.

Ex-Officio Members Present
John Q. Easton, Ph.D., Director, IES, U.S. Department of Education (ED)
Thomas Brock, Ph.D., Commissioner, National Center for Education Research (NCER)
Joan McLaughlin, Ph.D., Commissioner, National Center for Special Education Research (NCSER)
Brett Miller, Ph.D., Health Scientist Administrator, Child Development & Behavior Branch, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health (NIH)
Sarah Kay McDonald, Ph.D., National Science Foundation (NSF), Directorate for Education and Human Resources
Ruth Curran Neild, Ph.D., Commissioner, National Center for Education Evaluation and Regional Assistance (NCEE)

NBES Staff
Teresa Cahalan, Designated Federal Official

ED Staff
Karen Akins, Office of the Secretary
Elizabeth Albro, Ph.D., NCER
Corinne Alfeld, Ph.D., NCER
Leah Anderson, NCER
Lauren Angelo, Ph.D., NCEE
Sue Betka, IES
Monica Herk, Ph.D., IES
Jonathan Jacobson, Ph.D., NCEE
Meredith Larson, Ph.D., NCER
Robert Ochsendorf, Ph.D., NCSER
Kristen Rhoads, Ph.D., NCSER
Anne Ricciuti, Ph.D., IES
Wendy Rodgers, NCSER
Allen Ruby, Ph.D., NCER
Katina Stapleton, Ph.D., NCER
Amy Sussman, Ph.D., NCSER
Wendy Wei, NCSER
Rebecca McGill-Wilkinson, Ph.D., NCER

Invited Presenters
Sandra M. Chafouleas, Ph.D., Neag School of Education, University of Connecticut
Lynn Fuchs, Ph.D., Department of Special Education, Vanderbilt University
Rob Horner, Ph.D., College of Education, University of Oregon
Deborah Simmons, Ph.D., College of Education and Human Development, Texas A&M University

Members of the Public
Craig Fisher, American Psychological Association
Jean Gossman, ED Daily
Carla Jacobs, Lewis Burke Associates, LLC
Jim Kohlmoos, Edge Consulting, LLC
Sarah Mancoll, Society for Research in Child Development
Myrna Mandlawitz, SEDL & Learning Disabilities Association of America
Josh McCrain, Consortium of Social Science Associations (COSSA)
Michele McLaughlin, Knowledge Alliance
Wendy Naus, COSSA
Trisha O'Connell, Education Development Center, Inc.
Irelene Ricks, American Association of State Colleges and Universities
Paula Skedsvold, American Educational Research Association (AERA)
Sarah Sparks, Education Week
Gerald Sroufe, AERA
Greg White, National Academy of Education

Call to Order
David Chard, Ph.D., NBES Chair

Dr. Chard called the meeting to order at 9:00 a.m. (and called the roll later in the meeting). Board members unanimously approved the agenda for the meeting.

Update: Recent Developments at IES
John Q. Easton, Ph.D., IES Director

Dr. Easton announced that he would be leaving IES in the fall to become a Distinguished Senior Fellow at the Spencer Foundation. On behalf of the Board, Dr. Chard acknowledged Dr. Easton's wonderful service to schools and educators, as well as his dedication to maintaining the integrity of IES, promoting innovation, and seeking ways to make research more applicable to schools and classrooms. He invited comments from other members of the Board.

Robert Granger, Ed.D., praised Dr. Easton for building on the foundation of the previous IES director and seeking to make IES' work more relevant to policymakers and practitioners. Bridget Terry Long, Ph.D., said Dr. Easton's efforts have helped improve the quality of education research. She also appreciated his emphasis on partnerships and two-way communication between researchers and practitioners. Speaking as a practitioner, Peggy McLeod, Ed.D., thanked Dr. Easton for his leadership. Adam Gamoran, Ph.D., appreciated how Dr. Easton preserved the best efforts of the previous IES director, engaged the education research community, and recruited talented people to staff IES.

Dr. Easton summarized IES grant funding for 2014. At the beginning of the calendar year, IES had estimated that in addition to funding ongoing grants and other commitments, it had $40.1 million for new grants. However, one funded project was terminated, freeing up another $1 million. Therefore, in fiscal year (FY) 2014, IES provided $41.1 million in first-year funding for 88 awards spread across five major grant programs:

  • 49 education research grants, totaling $27.3 million
  • 8 research training grants, worth $2.4 million
  • 2 grants worth a total of $3 million to support knowledge use and developmental education projects at education research and development (R&D) centers
  • 11 statistical and research methodology grants, worth $2.4 million in the first year, including three early-career projects
  • 18 grants worth a total of $6 million to support state, local, and other research partnerships

Reviewers gave high scores to another 32 proposals, but IES was unable to fund them. Those proposals, if funded, would have totaled another $19.4 million. Dr. Easton pointed out that in FY 2014, IES funded 73% of the highest-quality proposals it received, compared with 60% in FY 2013.

Discussion

In response to Dr. Granger, Dr. Easton said that if IES receives the same level of funding for FY 2015 as for 2014, it will have about $80 million for new grants, because several large funded projects are ending. Dr. Gamoran reminded the group that the current funding level for IES is lower than in previous years, and Dr. Easton added that IES funding has not recovered from sequestration (i.e., across-the-board federal spending cuts that took place in 2013 and are scheduled to continue through 2023).

Asked about the process for selecting the next IES director, Dr. Easton pointed out that the Board may make recommendations to the President. The Education Sciences Reform Act of 2002 states:

The Director shall be selected from individuals who are highly qualified authorities in the fields of scientifically valid research, statistics, or evaluation in education, as well as management within such areas, and have a demonstrated capacity for sustained productivity and leadership in these areas.

Dr. Easton suggested the Board create a subcommittee to flesh out the criteria to be considered in selecting the next IES director, such as the type of experience and expertise that are important to the mission. Karen Akins of the Office of the ED Secretary confirmed that the NBES can create a subcommittee; she pointed out that any discussions about individual candidates could be closed to the public. However, the deliberations of the subcommittee on selection criteria should be formally presented to the Board in a public meeting. Ms. Akins will contact the Office of the General Counsel about the regulations governing a Board subcommittee charged with recommending selection criteria for the next IES director.

Dr. Chard noted that the subcommittee must move quickly and thus may need to meet via teleconference. Several members agreed that the subcommittee should also have leeway to discuss and recommend individual candidates for director. Such recommendations would go to ED's White House liaison.

To provide a foundation for the new subcommittee, Dr. Chard asked Board members to identify some qualities they believe the next IES director should have. Members suggested that the next IES director should be able to:

  • make the case for education research to the Secretary, the President, and Congress by demonstrating how IES efforts are paying off;
  • ensure that strong research is conducted in the service of practice;
  • establish more partnerships across the Department and with other federal agencies to ensure that education research is not marginalized; and
  • demonstrate a history of competence managing a large organization with many moving parts.

Dr. Gamoran, Dr. Long, Judith Singer, Ph.D., and Anthony S. Bryk, Ed.D., volunteered to serve on the Subcommittee on IES Director Selection Criteria; Dr. Singer will chair the subcommittee. The subcommittee will also consider specific candidates. Dr. Easton and Ms. Akins will provide direction on how to proceed.

In response to Board members' questions, Dr. Easton said that there is a succession plan for naming an acting director (typically a deputy director), and it includes a provision for the President to appoint an acting director.

IES Commissioners' Reports

Dr. Chard noted that commissioners of the IES centers sent written updates to Board members in advance to allow more time and preparation for discussion during the meeting.

National Center for Education Statistics
John Q. Easton, Ph.D., IES Director, NCES Acting Commissioner

Dr. Easton said he volunteered to take on the role of acting commissioner because NCES recently underwent a major reorganization, completed in March, that involved adding a new unit to collect EDFacts data. The goal was to integrate the EDFacts effort into NCES and create parallel processes for data gathering across the Center. EDFacts collects a great deal of important data that are used for sample frames for NCES' large studies. Typically, data collection and release are slow, but new efforts at NCES are speeding up data release. In addition, NCES took the lead on the ED Data Inventory, which combines data from 33 major data collection efforts.

NCES is prolific, said Dr. Easton; data are released frequently and refreshed online as soon as possible. NCES recently put out timely data on preschool enrollment and on school crime and safety (including postsecondary data), both relevant to current issues. Recent data on high school graduation and dropout rates presented 2 years of data on both the averaged freshman graduation rate and a new adjusted cohort graduation rate; the release received a fair amount of press because it showed an 80% graduation rate.

The National Assessment of Educational Progress (NAEP) is transitioning to technology-based assessments, with the goal of having NAEP computer-administered by 2017. In support of a Presidential initiative to improve opportunities for boys and young men of color, My Brother's Keeper, NCES played a large role in finding high-quality data on race and gender in a very short timeframe.

Upcoming releases include data from NCES' first-time participation in both the Programme for International Student Assessment (PISA) Financial Literacy study and the Organization for Economic Cooperation and Development's (OECD's) Teaching and Learning International Survey (TALIS). For TALIS, NCES did not get a sufficient response rate to be included in the OECD average, but the U.S. results will be included in the report.

Discussion

Dr. Gamoran asked whether NAEP has succeeded in convening an advisory committee of principals. Dr. Easton reported that NAEP has had much improved participation rates for the past 4–6 years, including very high participation in the 12th-grade NAEP. Under the No Child Left Behind legislation, states are required to participate in some NAEP assessments. Historically, overall participation in 12th-grade NAEP reached a low of about 55% but is now up to 70%, reflecting better planning and activities around participation.

Dr. Granger said that practitioners are not always confident that the research data they receive reflect the contexts in which they work. He asked whether NCES data will move toward describing not just how well kids are doing but the correlates of how well they are doing (such as the communities where they live). He also asked whether data about student context are likely to be incorporated into data collection, and, if so, whether they will be transparent enough for analysts to use. Dr. Easton said NAEP has been rethinking "context variables"—information provided by students via a background questionnaire—and the information is publicly available using an excellent tool provided by NAEP. He confirmed that users could search the data to find, for example, the percentage of students eligible for the free and reduced-price lunch program in Rochester, NY.

Larry V. Hedges, Ph.D., emphasized the importance of coordinating contextual data from individual studies with population-based data so that the same variables are measured in the same way.

National Center for Education Evaluation and Regional Assistance
Ruth Curran Neild, Ph.D., NCEE Commissioner

Dr. Neild reported that IES is still reworking its website. She is very pleased with the drafts presented so far, and the project is on track for completion in early 2015. The website includes more than 30,000 pages overall.

NCEE recently released a guide for researchers and policymakers (and a smaller piece intended for school districts) on identifying and conducting opportunistic experiments in education. The guide uses plain language to describe processes and answer common questions about experiments. NCEE ran an opportunistic experiment of its own, creating four different news flashes about the same event targeted to different groups of randomly assigned recipients of the NCEE news list. NCEE will evaluate how the responses to the news flashes vary depending on their placement, use of graphics, etc. Later, NCEE will let participants know that they were involved in the opportunistic experiment and direct them to the resources developed for conducting such experiments.

Dr. Neild explained that the Open Data initiative is an international effort to make more public information freely available and accessible. The Education Resources Information Center (ERIC) already has data back to 1965 that are freely available for download. Over the years, there have been discussions about creating a bridge between the What Works Clearinghouse (WWC) and other agencies' clearinghouses, but those discussions are limited, because each Department's clearinghouse serves different audiences with different purposes.

The Open Data initiative may be a way to harmonize the information while preserving the differences. Dr. Neild said the approach would involve a common organizational framework and description of the contents. The recent Presidential initiative My Brother's Keeper uses third parties that are combining information from multiple clearinghouses for a given audience. These models suggest ways that the WWC can open up its data to the public, allow users to drill down, and possibly encourage the field to take new approaches to using and evaluating the data (e.g., meta-analysis).

The Open Data initiative is on a fairly fast track, noted Dr. Neild. She asked for Board input on whether NCEE should pay special attention to particular types of variables or issues as it seeks to create a common organizational framework. Dr. Neild pointed out that the framework would include all the elements that NCEE and others hope to see, but she did not expect the framework to be fully fleshed out right away. She also asked the Board for feedback on potential consequences of this new approach.

Discussion

Dr. Long said the term "open data" can be confusing, because the Open Data initiative would not allow access to the actual data (e.g., from schools) but to citation data. Only certain kinds of data would be open. She cautioned NCEE to consider terminology in its policies. Dr. Long asked whether NCEE is working with major journals, which are also considering how to make their data available and searchable. Dr. Neild agreed that the terminology can be confusing; she said that "open data" applies to any information the government holds. The current approach allows some process data to be available in WWC, for example, while other data are "behind a wall."

Dr. Singer suggested using focus groups to better understand how and what people might find useful, because the current discussion reflects the lack of clarity about "open data." Before building a new framework, NCEE and others should talk with the potential clients about their needs. Dr. Neild said she envisions outside parties coming up with new ways to use the data. Dr. Granger said this approach would be very useful to policymakers and practitioners seeking specific data about whether an intervention would work in a given setting. A better framework would allow analysis of factors at the individual, organizational, and community levels; that requires information from government records, which is not often provided now.

Dr. Hedges noted the recurring theme of contextual data and the importance of measuring such variables. The framework could be a catalyst for change in the collection of data, especially population data. He added that the WWC coding process accounts for most of its cost, and making coding data available would be a great idea. He asked whether Dr. Neild is proposing to make coding data available for studies that do and do not meet standards for inclusion. To IES' credit, the WWC is the model of a high-quality clearinghouse, said Dr. Hedges, and the issue of harmonizing standards across Departments is a complicated one. Dr. Neild said that the issue had not yet been addressed, although her first instinct would be to make all information available, regardless of whether it meets standards. Of 10,000 studies reviewed by the WWC, only about 3,000 qualified for ratings, said Dr. Neild. She added that the clearinghouses have some features in common, and the focus would be on creating a common framework about the contents.

Dr. Long pointed out that the WWC is expanding its categorization, and coding is complicated. Dr. Neild responded that NCEE created a new guide for study authors that uses a two-page template to describe the necessary components for including a study in the WWC, and it is available online.

Dr. Hedges suggested NCEE consider harmonizing with major international investments, such as the Consolidated Standards of Reporting Trials (CONSORT) Statement. Dr. Bryk said the data in the current format for the WWC would not be useful for identifying the level of specificity that has been requested. Dr. Neild said some information can be pulled out of studies and put into a database while the WWC is reviewing the studies. Dr. Granger said he believed that researchers would begin to provide more context if journals and clearinghouses asked for the information (e.g., details about research subjects), if funders required it, and if mechanisms were available to make it easier to do. Dr. Hedges added that the availability of a checklist can influence how people collect data.

National Center for Education Research and National Center for Special Education Research
Thomas Brock, Ph.D., NCER Commissioner, and Joan McLaughlin, Ph.D., NCSER Commissioner

Dr. Brock and Dr. McLaughlin presented updates on their respective Centers jointly because the two work so closely together. Dr. Brock said that NCER will formally announce all of its funded grants for 2014 by July 1. NCER is now funding an additional competition for FY 2014 to create an R&D center focused on programs for gifted and talented children and youth. The effort addresses the lack of information about the effectiveness of such programs. The new center, with help from IES staff at the outset, will identify gifted and talented programs that reach out to the underserved and will look for indications that participants do better than their peers. If possible, in a few years, there will be an in-depth evaluation of a program.

Dr. Brock expressed optimism that NCER would have more grant funding available for FY 2015 because some of its programs are ending. Among the 2015 competitions are three new R&D centers:

  • Knowledge Utilization, which would address how to make research more useful to practitioners
  • Standards in Schools, which would look at how new college- and career-readiness standards are implemented and how they affect students
  • Virtual Learning, which would focus on technology, such as online tools for rapid experimentation and working with big data

Dr. McLaughlin reminded the Board that NCSER made no awards in 2014 because the budget only allowed for funding continuing grants. In FY 2015, NCSER will fund two research competitions, one on primary research in special education and one focused on training, early career development, and mentoring. If NCSER receives the same level of funding for 2015 as for 2014, it will not be able to fund all of the high-quality applications it receives, and many excellent proposals will go unfunded.

Every year, NCER and NCSER revise their requests for applications (RFAs) on the basis of feedback from reviewers and others. For 2015, NCER and NCSER are undertaking a more significant redesign of RFAs. For example, the RFA and submission guide will be merged, so that potential applicants can more easily search the document and find what they need. Also, all requirements are now clearly labeled and distinguished from items that represent advice from program officers. Dr. McLaughlin clarified that NCER and NCSER work together on most of the RFA and guide language.

NCER and NCSER are developing a customer satisfaction survey of current grantees. It will solicit feedback on the quality and usefulness of technical assistance and information provided by IES staff and on the website as well as perceptions about the content and format of the annual principal investigators' meeting. If possible, Dr. McLaughlin will present preliminary findings from the survey of current grantees to the Board at the next NBES meeting.

Dr. Brock pointed out that NCER and NCSER are always looking for ways to strengthen the RFAs. For example, the discussion of moderating and mediating variables, previously buried in dense language, will be clarified and highlighted. Most researchers do consider moderating variables such as student characteristics, said Dr. Brock. The new RFAs will also encourage applicants to consider and discuss the contextual factors that influence their hypotheses and findings. Specifically, the R&D center on statistical research methodology calls out the need to clarify variables and context. Dr. Brock believes the field needs more time and more focus on methodology to understand what variables are most important.

Future of IES Training Grants
Opening Remarks by David Chard, Ph.D., NBES Chair

Dr. Chard said that in his discussions with IES Commissioners, the topic of IES investments in research training came up repeatedly. The Board was asked to provide advice on future areas of investment and the relationship of current training support to research grant awards.

Overview by Thomas Brock, Ph.D., NCER Commissioner, and Joan McLaughlin, Ph.D., NCSER Commissioner

Dr. Brock emphasized that the goal of the discussion is to assess whether research training needs have changed since NCER began its grant program 10 years ago (and NCSER 6 years ago). IES training programs grew out of a 2002 National Research Council study calling for more support to train education researchers on quantitative methods, causal inferences, and study design. By nature, education research is an interdisciplinary field and involves working in schools with students. Thus, researchers must understand ethical concerns and know how to build relationships with teachers, parents, principals, students, and others.

Dr. Brock gave an overview of NCER and NCSER training programs, briefly summarized here:

  • NCER predoctoral training: Supports institutions with 5-year grants; 31 grants awarded to 18 universities since 2004; more than 700 fellows trained.
  • NCER and NCSER postdoctoral training: Supports institutions with multiyear grants and individuals with stipends; 51 grants awarded and 148 fellows trained (combined).
  • NCER early career development and mentoring: Supports individuals with 4-year grants; three grants awarded since 2013.

Since 2008, NCER and NCSER have been surveying their predoctoral and postdoctoral fellows. In early 2014, they reached out to 786 fellows, of whom 56% responded. Overall, satisfaction with the training programs is high, and fellows especially like the opportunities to attend and present at conferences. They also rated mentoring and opportunities for meaningful independent research highly. Satisfaction ratings were slightly lower in the areas of grantwriting and collaboration (with education policymakers, practitioners, and other stakeholders).

The indicators of productivity and employment are good, Dr. Brock reported. Most fellows had submitted research grant applications in the past year. Over 80% had at least one research publication in the past year (and many had two or more). Most reported being employed at a college or university—including 100% of NCSER fellows who responded. Some NCER fellows were working at nonprofit research organizations and the like. Dr. Brock said it is encouraging to see that fellows are mostly working in their field of training.

Dr. Brock and Dr. McLaughlin presented six questions for Board consideration:

  • Have the needs of the education field changed since the training programs were established?
  • Are there other approaches to training that IES should consider?
  • Should IES consider new approaches to assessing the outcomes of the training programs?
  • Should IES seek to expand the number of universities that offer predoctoral training?
  • What strategies might be adopted to increase diversity among the fellows?
  • Should the training programs try to direct fellows to underresearched topics?

Discussion

Dr. McLeod asked what percentage of fellows are Hispanic, African American, or of other ethnicities. Meredith Larson, Ph.D., of IES reported that the percentage of minorities in the training programs has not changed much over time. About 30% of postdoctoral fellows are minorities; 18% come from historically underrepresented groups, and 70% are women. There are few current data for comparison, said Dr. Larson, but the NSF reports that about 12% of its postdoctoral fellows come from historically underrepresented groups.

Among predoctoral fellows, Dr. Larson continued, about 64% are women (below the national average of 75% for education students), and more than 20% are minorities (compared with a national average of 28% for all education students). When education is combined with other social and behavioral sciences, about 32% of students come from ethnic minorities. About 14% of predoctoral fellows come from historically underrepresented groups, compared with a national average of 25% for all social and behavioral sciences and 24% of all education students.

Dr. McLeod said the demographics of the country are changing, and recruitment efforts should be increased to ensure that the education research profession adequately reflects the populations studied. She encouraged more careful examination of recruitment efforts and consideration of how to incentivize colleges to increase the diversity of students in the pipeline. Dr. McLaughlin agreed and welcomed suggestions on recruitment.

Dr. Gamoran gave an assessment of the current funding approach. By providing grants to institutions, IES does not have the opportunity to judge the quality of the trainees, because faculty select the individual participants. However, competition for IES training grants results in a dramatic increase in the quality of participants when compared with programs in which the funder selects the individual participants. To win an award and run a program successfully, faculty must be cohesive and cooperative. Other training programs do not foster an interdisciplinary approach, as IES programs do.

Dr. Gamoran pointed to some other strengths of the IES training programs. For example, IES training programs transform the entire university education research enterprise by pushing faculty to train students and conduct research in ways consistent with IES goals. In addition to recruiting students, programs must provide training opportunities by doing research. Thus, the IES training programs shape ongoing research efforts.

Dr. Gamoran commended IES for demonstrating patience with the programs over time, recognizing that the payoffs for training can take decades to realize. It is likely that the impact of training will be more visible 20 years after the fact, so Dr. Gamoran recommended continuing to fund training for at least another 5 years.

IES could do more, Dr. Gamoran continued, by thinking about the programs as a means of developing a cadre of well-trained young researchers who will lead the next generation of education research. IES used to bring all of the trainees together at a national conference. That approach was extremely valuable, providing an opportunity to foster discussion and collaboration across programs.

Dr. Gamoran said the question has been raised of whether IES should require universities to provide matching funds for training. He argued that universities already supply substantial funds to support training because of the cap on tuition and fee payments (limited to $10,500 per student from grant funds). Therefore, universities invest at least an additional $10,000 to $20,000 per student per year.

Dr. Long offered some suggestions for recruiting minority and underrepresented students. First, recruitment planning should take into account that if students are selected from among those currently enrolled, the pool is smaller. Recruitment efforts should be targeted to reach students earlier in the pipeline. For example, the American Economic Association (AEA) has sponsored a summer program to provide undergraduates with research methods courses, access to mentors, and support for some small research projects as a glimpse into the graduate school experience and the research field. AEA's Committee on the Status of Women in the Economics Profession (CSWEP) fosters networking and mentoring. Dr. Long said such networks are vital; IES should consider how to support information sharing, informal networking, and mentoring, especially for scholars of color. Dr. Singer added that a study of CSWEP provided good evidence that mentoring is effective.

Dr. Long said it is not clear how fellows are evaluated or what criteria are applied to assess the quality of their training. When the IES training programs began, students clearly needed more training on causal methods. Now, they also need to understand how to collaborate, both with internal and external partners, and how to communicate within the field.

Dr. Long called for consideration of the broader goals of funding training. That is, should awards go toward sustaining those schools that have already built their programs or toward drawing in new schools that could not create such programs otherwise?

Brett Miller, Ph.D., requested more data on the number of fellows who come from historically underrepresented groups or who have a disability, because those individuals often get lost in the data. He asked why predoctoral and postdoctoral training programs are separate. He also asked whether IES has strategies for capturing the long-term outcomes of its fellows, because NIH struggles to find data on the trajectory of a trainee's career. Dr. Larson replied that the IES survey now asks fellows to self-identify if they have a disability; programs are often uncomfortable or unable to collect data on disabilities.

Dr. Singer said she is dismayed by the survey response rates, especially if viewed through the lens of the standards that IES applies to research. Although research training represents one of the largest ED investments in education intervention, the evidence of impact is almost nonexistent. She noted the following areas for consideration:

  • The logic and criteria for awards: Conflicts of interest abound, and it is not clear that programs are selected objectively on the basis of high quality.
  • Impact of program: There is no counterfactual (i.e., comparison with programs not funded) to demonstrate whether the IES-funded training programs produce higher quality researchers.
  • Participation in surveys: Given that the fellows are education researchers, a 56% response rate is "appalling." Fellows should be obligated to participate.
  • Participant satisfaction: A mean satisfaction rating of 4.0 on a 5-point scale suggests considerable dissatisfaction; it is not clear why so many fellows did not respond to the survey or why those who did are not happier with the program.
  • Labor market: Are IES-funded training programs preparing students for jobs that will be available in 5–10 years, or are they contributing to an oversupply, given the future of the labor market?

Dr. Singer concluded that IES should apply the same criteria to its own training programs that it does for funding grant proposals, and it should require built-in mechanisms to evaluate programs. Dr. Hedges added that evaluation of such programs is possible.

Dr. Bryk noted that IES training programs have transformed the field, elevating the level of statistical expertise and expertise in conducting clinical trials. However, that success has caused a problem: new researchers now believe that the goal of research is to conduct randomized, controlled trials (RCTs). In the future, training should focus on addressing important problems, not just finding the best place to conduct an RCT. There should be more focus on drawing evidence from practice and how to move from empirical evidence to practical use, said Dr. Bryk. At present, research studies do not provide much information on the utility of the findings in any setting other than that of the study population.

Dr. Gamoran suggested and Dr. Hedges confirmed that IES could conduct a regression discontinuity analysis to assess the effects of IES training programs. Such an analysis could compare effectiveness across programs. The natural time to collect data is after the end of a program, said Dr. Hedges. Such a study would be easy to implement, and IES should do so, he added.

Dr. Chard disagreed with Dr. Bryk's characterization of trainees. He has hired several fellows, and they have a sophisticated grasp of methodology. Dr. Chard reiterated the concern raised by Dr. Singer that the number of grants funded has been diminishing and more researchers are finding it difficult to secure grants. He wondered whether IES is funding training at the expense of research. Dr. Hedges emphasized the importance of paying close attention to the allocation of finite resources.

Dr. Bryk modified his earlier comments to say that the intensive focus on causal inference takes the field only halfway toward the goal of moving from empirical evidence to practical utility and thus, by itself, does not help the field. Evidence that can help practitioners on a day-to-day basis is not the central focus of research, and it should be, Dr. Bryk concluded. Susanna Loeb, Ph.D., countered that she sees more emphasis on collaboration and involvement with practitioners in recent grant cycles.

To assess outcomes, Dr. Loeb suggested looking at fellows' dissertations to determine whether they address areas in which IES would like to promote development. In addition, it is difficult to assess the influence of training programs on other students, such as non-U.S. students who do not receive IES funding but participate on their own.

Dr. Loeb agreed with the need to focus on diversity earlier in the pipeline, at the undergraduate and master's degree level. However, such efforts usually focus on the "public good," or the broad benefits of developing awareness about the field; as such, they are more difficult for IES to fund than targeted research training.

Finally, Dr. Loeb believed that IES training programs have changed the field. They have created mechanisms within universities to support the work that IES does, which would not have occurred otherwise.

Dr. Granger said that if one could anticipate the negative results of a regression discontinuity analysis, it would make more sense to address those issues than it would to spend the time and resources to conduct the study. Rather, IES could create incentives within programs to do rigorous continuous quality improvement. In addition, IES could enable other organizations, such as nonprofit research firms, to host predoctoral fellows in affiliation with university training programs. Dr. Granger encouraged IES to create incentives to encourage organizations to do what they already do well.

Dr. Brock said he heard from the discussion a strong emphasis on diversity and some good ideas about how to improve diversity; he believes NCER and NCSER can follow up by considering different types of grant opportunities or amending programs to tackle diversity. He also heard strong views about the evidence on effectiveness of IES training programs. Dr. Brock said IES could push harder to increase survey response rates; it could also consider whether annual surveys are too frequent, resulting in survey fatigue. Dr. McLaughlin noted that NCSER had a higher response rate, but it has only made three survey requests.

Finally, Dr. Brock appreciated the Board's continued attention to practical applications of research. Last year's RFA included more such emphasis, and reviewers took it to heart in their assessment of applications. He agreed that IES can further strengthen its training programs by identifying more ideas for collaboration and partnering. Dr. Easton added that 5-year grants awarded in 2014 include attention to building new relationships with educational and nonprofit organizations.

Lunch

The Board adjourned for lunch at approximately noon. The public meeting resumed at 1:03 p.m.

Multi-Tiered Systems of Support in the Context of College- and Career-Readiness Standards
Opening Remarks by David Chard, Ph.D., NBES Chair, and Joan McLaughlin, Ph.D., NCSER Commissioner

Dr. Chard explained that laws around special education passed in the 1970s led to a dual system of education in which children with disabilities were identified and pulled out of mainstream classrooms into other systems of support. Over the past 15 years, educators have been developing multi-tiered systems of support (MTSS) that focus on assessment, interventions that prevent failure or increase the likelihood of success, and reducing the risk of long-term negative consequences for those who struggle in school. MTSS focus more on systems of support than on an individual's disability. They have the potential to redefine the relationship between special and general education.

  • Tier 1 is a general education classroom that offers content for all children and incorporates screening to identify those at risk for failure without more support.
  • Tier 2 represents a clearly specified, supplementary intervention that has been validated for the at-risk population of learners and includes frequent progress monitoring.
  • Tier 3 is intensified intervention that has been validated for the persistently unresponsive population of students with severe academic and/or behavior difficulty.

Dr. McLaughlin said MTSS have already made a great impact and have great potential for special education. More research is needed to understand:

  • how best to implement MTSS,
  • how to support and train schools and teachers in implementation,
  • what constitutes good Tier 1 instruction,
  • what Tier 2 and 3 interventions should look like as the intensity increases,
  • how interventions should be delivered,
  • how children should be monitored to facilitate easy transitions through tiers, and
  • whether children should move through tiers in a stepwise fashion.

Since 2006, NCSER has invested $136 million in research via 63 awards, mostly on assessment and training around MTSS, in the fields of reading, math, social skills, and behavior. Four grantees were invited to present to the Board their work in the following areas:

  • Developing and validating screening and progress monitoring tools
  • Supporting the development and testing of multiple tiers of instruction
  • Providing tools for school personnel to help make decisions on tiered instruction

Assessing Core Behavioral Competencies within Multi-Tiered Systems of Support
Sandra M. Chafouleas, Ph.D., Neag School of Education, University of Connecticut

Dr. Chafouleas pointed out that behavioral assessment follows a medical model that focuses on diagnosis, particularly early identification, while MTSS seek quick assessments through brief screening and progress monitoring. Behavioral assessment traditionally takes a long time, involves multiple perspectives, varies depending on the construct of interest, and lacks a "gold standard" for comparison. Ideally, a tool for behavior assessment within MTSS would be defensible (i.e., valid), efficient (and easy to use), flexible (i.e., can be modified for individual children), and repeatable (frequently, for progress monitoring).

Dr. Chafouleas and colleagues modeled their Direct Behavior Rating (DBR) scale on the visual pain scale created by NIH. It allows anyone—teachers, parents, students—to assess academic engagement at a given time by pinpointing how the student appears in relation to a scale that uses sad, neutral, and happy faces. The DBR targets three core behaviors that every student should display to access instruction: academic engagement, respectfulness, and the absence of disruptive behavior.

Research findings indicate that the DBR tool is useful for monitoring classwide and individual behavior and assessing the effects of an intervention. Additional studies will evaluate the utility of students' monitoring their own behavior and compare the results of assessment by a teacher, a student, and an external observer. More research is underway to determine thresholds of behavior assessment scores (and the appropriate frequency of assessment) to identify those at risk early. Dr. Chafouleas said her group believes that because scores vary across time and grade, combined scores appear to function the best for screening.

Dr. Chafouleas and colleagues are now looking at the broader landscape of behavioral assessment to assess the following:

  • What is happening? That is, are schools and districts assessing behavior? What do practices look like?
  • Does it matter? Does behavior screening affect student outcomes? If so, do practices serve as a partial mediator and moderator for district characteristics, perceived usability, and behavior curricula practices?
  • What is the perceived purpose and value of behavior assessment? For those who implement practices, what is the perceived effectiveness?

Tier 2 Early Reading Intervention: What To Do When We Have Them in Tiers?
Deborah Simmons, Ph.D., Texas A&M University

Dr. Simmons described studies assessing Tier 2 reading instruction for students who have not made adequate progress under Tier 1 (general) instruction. Specifically, she and her colleagues evaluated the Early Reading Intervention (ERI) for kindergarten students at risk of reading difficulties or disabilities. Dr. Simmons emphasized that the studies aim to determine not just whether ERI works but how to intervene effectively when students are sorted into instructional tiers.

Dr. Simmons said it is still not clear who should deliver Tier 2 instruction, but there is strong evidence that it can prevent early difficulties with reading from becoming intractable barriers. The ERI is a supplemental reading program designed to teach children how to read by the end of kindergarten. She presented the following findings from two studies comparing ERI with a school-designed intervention that aimed to reveal whether ERI works in real-world settings, whether the effects last, and whether data can be used to improve the intervention:

  • Intervention effects did not replicate across settings because the strength of the comparison interventions (the school-designed intervention) varied across settings.
  • Compared with prior research, the effect sizes were generally lower than in studies with less rigorous comparison groups.
  • The interventions were not sufficiently effective for students with lower entry level scores.

Reading instruction using a response-to-intervention (RTI) approach uses data to adjust the intervention in response to student performance, which is difficult in practice, said Dr. Simmons. Therefore, a third study evaluated the ERI, with and without periodic adjustments (experimental and conventional groups, respectively), among kindergarten students who demonstrated low reading performance. The experimental group performed better than the conventional group on all measures, and the gains lasted into the first grade.

Dr. Simmons said the findings of the third study support the effectiveness of an essential component of RTI models—individually tailored adjustment. She believes that early reading research for students at risk of reading difficulty or disability supports the promise of prevention. She noted the following:

  • Results have not been fully realized.
  • Some districts have strong supports in place, while others require significant support to realize the benefits of MTSS.
  • Effects were stronger when interventions were delivered by interventionists in pullout settings (but then the student's primary teacher may not know what kind of instruction the student is getting).

Enhancing Fraction Performance of At-Risk Fourth Graders: A Series of Randomized, Controlled Trials
Lynn Fuchs, Ph.D., Department of Special Education, Vanderbilt University

Dr. Fuchs explained that competence with fractions is foundational for learning more advanced math and that U.S. children are falling behind those in other countries in math performance. Typical teaching relies on understanding fractions as parts of a whole, but later math learning also relies on measurement interpretation of fractions, which is less intuitive and depends heavily on formal instruction. Dr. Fuchs and colleagues compared the conventional approach with an innovative method for teaching measurement interpretation of fractions to fourth graders in a Tier 2 setting.

Dr. Fuchs described in detail the design of five RCTs comparing the two approaches, which yielded the following findings:

  • The mean effect size of the intervention involving measurement interpretation of fractions is one standard deviation over the conventional approach.
  • For adding and subtracting fractions, an area of more focus in the conventional group than the intervention group, the effect size was 1.5 standard deviations higher in the intervention group.
  • In terms of posttest achievement gap (the effect size for at-risk versus not-at-risk classmates), the mean gap for intervention students was +0.74, while the mean gap for the conventional students was -0.77.
  • In terms of NAEP outcomes on 19 measures involving fractions, the mean effect size of the intervention group was 0.66. At-risk students in both the intervention and conventional groups performed below their not-at-risk classmates, but among at-risk students, the intervention group performed better than the conventional group.
  • Each year, improvement in measurement interpretation (but not improvement in part-whole understanding) mediated effects of the intervention versus the conventional approach on NAEP outcomes.

Dr. Fuchs said these findings support the hypothesis that improvement in measurement interpretation is a key mechanism explaining fraction learning, and instruction should move in that direction. In addition, she noted, students' initial working memory capacity moderated the effects of that year's program component contrast: 5 minutes of practice on conceptual activities versus fluency-building activities (both focused on the same measurement interpretation tasks). Students with very low working memory learned better with conceptual practice activities, while students with adequate working memory learned better with fluency practice activities.

One year of the study focused on word problems, finding that instruction on arithmetic-thinking word problems improves outcomes on arithmetic word problems but not on multiplicative word problems. In contrast, instruction on multiplicative-thinking word problems improves performance on both types. Another year focused on instruction on formulating sound explanations, which produced better explanations than multiplicative word problem instruction, and both types of instruction produced better explanations than conventional approaches.

Dr. Fuchs concluded that the research shows positive effects of the intervention across several outcome measures and provides some insight for understanding both mediating and moderating factors. The research also contributes to identifying program components that maximize effects.

Team-Initiated Problem Solving
Rob Horner, Ph.D., College of Education, University of Oregon

Dr. Horner said that NCSER's role goes beyond influencing special education. MTSS represent a giant shift in the approach to teaching that ties together all the tiers of instruction. We are no longer seeking one intervention that works but rather multiple interventions at various levels of intensity, said Dr. Horner. Ideally, every school would have a series of programs.

MTSS require teachers and administrators to assess whether they are providing Tier 1 instruction well enough to form the foundation for future learning. They also require them to make decisions in real time about which students should be in which groups receiving which interventions—something not typically done at the school level. Schools need tools to make such real-time decisions effectively, said Dr. Horner. Therefore, schools need not only interventions at all tiers but also ways to measure content in all contexts.

Dr. Horner's group focuses on how teams in schools make decisions using the logic and data systems that other researchers are creating. The group seeks functional examples of iterative interventions. Often, school teams engage in a dysfunctional process of identifying a problem without actually taking steps to solve it as a team, said Dr. Horner. From direct observation of school teams (and input from other sectors, such as business, which have models for team problem-solving), Dr. Horner and colleagues determined that teaching people about problem-solving is not sufficient, but coaching them through the process is.

The research focused on teams in four schools, all of which improved in measures of understanding meeting foundations (i.e., the components that support team problem-solving) and problem-solving after some coaching. Dr. Horner and colleagues are refining the training and coaching process for their intervention, Team-Initiated Problem-Solving (TIPS); they have found that those who receive the intervention are not only more likely to use the process with their teams but also are more likely to implement the plans that result from the team problem-solving effort and to document changes in student outcomes.

The next step is revising the training materials so that personnel at the state and district levels can use them to train school staff in a variety of contexts. Dr. Horner said the research does not follow as linear an implementation process as IES usually likes to see, but it is more truly iterative. As other researchers learn more about interventions that work and how to measure their impact, Dr. Horner and colleagues hope to revise training approaches and materials so that schools can apply the new knowledge effectively.

Dr. Horner offered some recommendations to IES on behalf of all the presenters:

  • NCSER funding is focused on topics of high relevance. IES can tell Congress that 500 schools are behaving differently as a result of NCSER funding, and if that funding continues, the number will rise to 1,000.
  • IES should continue to embrace the iterative development process, which is showing evidence of collaboration by design.
  • IES should be proud of its grantees' use of a range of research methods. Single-case research methods can be critical to technology development in special education and to moving to larger scale research.
  • Defining what should be done in schools is necessary but not sufficient; people need protocols that allow them to implement interventions.

Discussion

Dr. McLaughlin said the presentations demonstrate how IES is trying to address education at all levels and all tiers of instruction, as well as provide support at the system level to allow schools to implement interventions broadly.

Dr. Bryk said that Dr. Horner's presentation in particular can be helpful in explaining to policymakers and the public what IES is doing. He would like to see even more intense focus on iterative research; that is, when an intervention fails, why did it fail and how can it be modified? Dr. Horner agreed and said his team can now describe the most common reasons for failure and incorporate them into training.

Dr. Loeb asked how researchers determined where a student falls on the continuum of learning when assigning the student to an intervention. Dr. Fuchs said her group looked at incoming competence in certain areas that are predictive of future outcomes. She said that fractions provide an unusual opportunity in the development of math skills, because they belong to a domain of understanding that is very different from whole-number skills. Notably, Dr. Fuchs pointed out that there are breakthrough moments in both math and reading where prior levels of competence can be overcome if the student is not constantly presented with tasks that rely on skills in which the student is not competent. Dr. Fuchs said researchers wish there were more criterion-referenced markers to identify levels of skill for word-level competence, for example; if such markers existed, we might have better MTSS, she noted.

Dr. McLeod pointed out that Tiers 2 and 3 are only as effective as the intervention and Tier 1 instruction combined. She asked about the results of RTI in schools with weak Tier 1 instruction, in which everyone is in Tier 2 and many students are in Tier 3, which defeats the purpose. Dr. Horner said that for schools, Tier 3 instruction creates the most pain, and systems want to reduce that pain. Other models demonstrate that with enough support to decrease the pain caused by the worst problems, systems can work backward, building a protocol and implementing it in stages to address less-severe problems.

Dr. Simmons said that successful schools know where they want to be and can map backward to get there. For example, they look at their reading programs and know how to evaluate them in terms of goals. They also have grade-level and across-grade teams and professionals to implement interventions. It is important for schools and districts to figure out where they want to be at the end of the year and how to get there, Dr. Simmons observed.

Darryl J. Ford, Ph.D., asked how to maintain the effects of interventions, such as coaching around problem-solving. Dr. Horner said that sustainability requires that new practices be implemented only when the systems necessary to maintain those practices are also implemented, and those systems involve policies, funding, team organization, and data structures. It is exciting to see interventions that produce change, said Dr. Horner, but sustainability depends on putting systems in place.

Dr. Bryk agreed on the need to consider systems—in particular, how to ensure the quality of interventions when they increase in scale. Lack of implementation fidelity and variable teacher effects are issues raised in other research. Dr. Bryk asked how those issues can be addressed. Dr. Fuchs replied that her team aims to keep interventions simple, having learned from previous efforts that complicated interventions with lots of moving parts did not take hold. The most successful interventions are ones that can be implemented without much support. It is important to design with users in mind, Dr. Fuchs noted. Dr. Bryk concurred, saying that he hopes to incorporate reduced variability as a design concept.

Dr. Chard pointed out that 20 years ago, efforts in special education focused on disabling conditions, such as developmental and learning disabilities. He wondered what kind of impact the MTSS approach should have on the preparation of researchers in both general and special education. Dr. Simmons replied that her group first worked with general education teachers, but not a lot of students were identified as needing interventions. She said that, ultimately, the label of "disability" does not matter if MTSS can help teachers address their students' problems.

Dr. Fuchs added that over the past 20 years, research has become more focused on those at risk rather than those with identified learning disabilities. She emphasized the need to look closely at those students who do not respond to interventions, because she believes many of them have both reading and behavioral problems. Dr. Chafouleas advocated for focusing on the problem instead of labeling the person. Dr. Chard said that position is revolutionary in that it requires attention to acceptable behavior in context—that is, defining a child not by a deficit but by the child's relationship to the knowledge and skills desired.

Dr. Granger said the conversation tied in with the morning discussion about contextual and system-level factors that IES should pay attention to. The research presented seeks to pinpoint what does and does not work at every level, from the system to the teacher to the student. He asked whether special education researchers had any data for their general education counterparts about characteristics (of schools, students, or classrooms) that would inform their work. Dr. Horner said that George Sugai identified preconditions in which interventions are more likely to be effective and recommended testing interventions in those conditions. In addition, organizational variables can predict the fidelity of implementation and the likelihood of sustainability. Dr. Granger said it is important to study interventions in places that need help and to identify the constructs that define where an intervention can make a difference.

Dr. Fuchs said it is hard to study fidelity and to identify which features of an intervention are crucial to success, but such information would be helpful. She noted that teachers always put their own spin on interventions. In fact, one study compared two approaches to an intervention, said Dr. Fuchs. The top-down approach aimed for fidelity, while the bottom-up approach allowed teachers to change the intervention as long as they maintained a set of arbitrarily designated essential features. The teachers' methods included some that were silly, some that were brilliant, and some that were unexpected, and the bottom-up approach beat the top-down approach in the final analysis. Dr. Fuchs concluded that researchers have much to learn.

Closing Remarks
John Q. Easton, Ph.D., IES Director

Dr. Easton said the search for a new NCES commissioner is ongoing, but he did not believe that IES would have a new commissioner in place by the time he leaves in the fall. If Board members have candidates to suggest, they should let Dr. Easton know. Dr. Chard will send Board members a description of the usual process for appointing an acting director.

Dr. Easton thanked the Board for being supportive and constructive, pushing and prodding as needed. He encouraged the Board to continue thinking about how people use research and evidence and to push IES to keep both research and practice in mind in its efforts. Dr. Easton said it has been a pleasure serving as IES Director.

David Chard, Ph.D., NBES Chair

Dr. Chard summarized the follow-up and action items from the meeting. He asked for feedback on the revised format of the meeting. Dr. Hedges said the presentations on MTSS demonstrate how IES makes a difference in the lives of kids in schools, something that should be communicated to those who question why IES exists. Dr. Bryk added that the research presented is problem-centered; in that sense, it is easier to explain than some of IES' other work.

Dr. Gamoran suggested a future meeting focus on getting the word out about how IES has transformed educational research. It is within the mandate of the Board to foster greater understanding of IES among decision-makers.

Dr. Long suggested the annual report showcase the cycle of continuous improvement, in which the Board and IES staff discuss topics of concern and offer suggestions, then IES evaluates concerns, implements changes, and reports back to the Board. Dr. Chard noted that the Board's annual report is due July 1. He will send a draft to the Board before submitting the final report to IES.

Dr. Granger appreciated that commissioners sent their updates in writing in advance of the meeting and that they included links to materials available online. The approach allows for more open discussion time. Dr. Long also praised the commissioners' inclusion of discussion questions.

Adjournment

Dr. Chard adjourned the meeting at 3:15 p.m.

The National Board for Education Sciences is a Federal advisory committee chartered by Congress, operating under the Federal Advisory Committee Act (FACA; 5 U.S.C., App. 2). The Board provides advice to the Director on the policies of the Institute of Education Sciences. The findings and recommendations of the Board do not represent the views of the Agency, and this document does not represent information approved or disseminated by the Department of Education.