National Board for Education Sciences
November 1, 2010 Minutes of Meeting

Location:
Institute of Education Sciences Board Room
80 F Street NW
Washington, DC

Board Members Present:
Deborah Ball
Jonathan Baron, Vice Chair
Adam Gamoran
Philip Handy
Eric Hanushek, Chair
Bridget Terry Long
Margaret R. McLeod

Ex Officio Members Present:
John Q. Easton, Director, Institute of Education Sciences (IES)
Stuart Kerachsky, Commissioner, National Center for Education Statistics (NCES)
Robert Kominski, U.S. Census Bureau
Cora Marrett, National Science Foundation
Rebecca Maynard, Commissioner, National Center for Education Evaluation and Regional Assistance (NCEE)
Peggy McCardle, National Institute of Child Health and Human Development
Lynn Okagaki, Commissioner, National Center for Education Research (NCER)
Dixie Sommers, U.S. Bureau of Labor Statistics

NBES Staff:
Designated Federal Official:
Mary Grace Lucier

IES Staff
Sue Betka, IES
Matt Devine, IES
Linda Marshall, IES
Audrey Pendleton, NCEE
Anne Ricciuti, IES
Susan Sanchez, NCEE
Marsha Silverberg, NCEE

Invited Presenters
Emily Anthony, NCES
Howard Bloom, MDRC
Rebecca Maynard, NCEE
Lynn Okagaki, NCER
Marilyn Seastrom, NCES

Members of the Public
Judith Anderson, Department of Education
Allison Cole, Office of Management and Budget
Kris Gutierrez, American Educational Research Association (AERA)
Monica Herk
Carla Jacobs, Lewis-Burke Associates
Jim Kohlmoos, Knowledge Alliance
Marissa Latte, Society for Research in Child Development
Augustus Mays, Knowledge Alliance
Beena Patel, B&D Consulting
Gail Ribalta, REL Southwest at Edvance Research
Arthur Sheekey, CNA Corporation
Sarah D. Sparks, Education Week
Trevor Sparks, Washington Partners, LLC
Gerald Sroufe, AERA
Ryoko Yamaguchi, SERVE Center, UNC Greensboro
Charles Youman, U.S. Government Accountability Office

Contractor Staff
Robbyn Harris, AFYA, Inc.
Sheritta Cooper Porter, AFYA, Inc.
Maryellen Thirolf, Note Taker

Call to Order, Approval of Minutes, Chair Remarks
Dr. Eric Hanushek, Chair, NBES

The morning session began when Dr. Hanushek called the meeting to order at 8:30 a.m. and asked for a roll call of board members and ex officio members. Other meeting participants and attendees also introduced themselves. Dr. Hanushek explained that board members serve a 4-year term of office and that he and two other members will rotate off the board at the end of the month, to be replaced by four newly nominated members.

Dr. Hanushek reviewed the day's agenda, beginning with the completion of the discussion from the previous meeting on the priorities of IES, on which the board is required to vote. He also announced that the final 2010 annual report of NBES is available online and in hard copy. After the board approved the minutes of the September 29 meeting, Dr. Hanushek introduced Dr. John Q. Easton to present the "Director's Final Proposed Priorities for the Institute of Education Sciences."

Board Discussion with Institute of Education Sciences Director and Voting on the Priorities of IES
Dr. John Q. Easton, Director, IES

Dr. Easton explained that IES statutes require that the board approve research priorities submitted by the Director. During the April meeting, the board discussed six possible new directions for the priorities. After feedback from that meeting, Dr. Easton prepared a draft set of proposed priorities, which were posted for public comment and then revised. At the September 29 meeting, Dr. Easton brought the set of proposed priorities to the board for discussion and incorporated the board's advice through minor additions and modifications that clarified issues raised by members. He offered the final document for discussion and approval by the board.

Dr. Easton pointed out the modifications that were made in the document (e.g., the final sentence in the first paragraph, changes in the first full paragraph on the second page, clarifications in grammar, usage, and diction). The board members thanked Dr. Easton for his work on the document and his receptiveness to incorporating their revisions. Dr. Hanushek announced that he would like to delay the vote on the document until Dr. Bridget Long's arrival from the airport.

Preliminary Discussion of Proposed Agenda for the Regional Educational Laboratories (RELs)
Dr. John Q. Easton, Director, IES

Dr. Easton presented some preliminary ideas about the reauthorization of the RELs. He began with background information about the 10 RELs, which operate on 5-year contracts that will expire between January and March of 2011. Since a new statement of work (SOW) and request for proposal (RFP) are needed to award new contracts, the 2011 budget request included a request for a 1-year extension of the RELs to allow some time for writing a careful SOW and RFP.

Dr. Easton solicited comments from the board regarding what the labs should do and, in particular, how the labs can distinguish themselves from other support centers. The broad idea is to create RELs that are much more supportive of building analytic capacity at the state and local levels. IES can work with the RELs to guide and support them in conducting research and evaluation and can help them develop the capacity to do higher quality work in states and districts. A subsidiary theme concerns the state longitudinal data systems. In addition, the role of an REL is to forge partnerships in states and districts.

Dr. Rebecca Maynard added that the RELs in place now are quite different and much stronger in their focused work than during the previous 5-year contract. To give a sense of the history of the RELs, Dr. Maynard explained that what the labs were tasked with doing in the last 5-year contract was different from what they were tasked with before. The current contract holds the RELs to a much stronger commitment to serious needs assessment and to developing research, analytic support, and programs tailored to local needs. The RELs now serve the entire region instead of focusing on a few client groups, with a greater emphasis on analytical capacity and responsiveness. In addition, IES has more of an oversight role in the products coming out of the RELs and is more in tune with what is going on across them. A goal has been to provide technical assistance and support that encourages sharing across the RELs, sharing with other parts of IES in terms of capacity building, and a better vehicle for disseminating products. The RELs are much stronger in terms of capacity because of the partnerships they have built, the staff they have hired, and the experience they have gained, all of which is a significant change from the previous contract. The RELs now have quick-response product lines tailored to local needs, with interest extending across RELs. IES has played an important role in helping the RELs through pushback and oversight, and there has been tremendous growth in the RELs' capacity to negotiate their usefulness with clients. Impact evaluations of the RELs are in the pipeline, and one benefit of the extension is that 25 randomized controlled trials (RCTs) will be completed over the next year.

Mr. Jonathan Baron expressed his hope for the RELs to continue progress in building their capacity to develop good evaluations and research through collaborative work with school districts and other partnerships.

In response to a question from Mr. Philip Handy, Dr. Easton stated that 10 RELs are mandated in law, with a fair amount of discretion regarding their activities. He also stated that IES provides technical assistance to the labs.

Dr. Margaret McLeod asked whether the idea is to have the IES research priorities affect the work of the RELs. Dr. Easton replied that the answer is yes when the priorities are viewed as principles.

Dr. Adam Gamoran made a number of suggestions regarding the RELs:

  • there is a need to think strategically about the labs as a network;
  • there is a need for coherence within the labs so that it is obvious how the projects fit together;
  • it will be useful to build analytic capacity in states and localities;
  • a better thought-out strategy and a more coherent approach are needed to cement the relationship between labs and their clients; and
  • IES's 8-year history shows that it is necessary to be more deliberate and thoughtful before beginning an RCT. Theory and conception must be carefully specified regarding outcome measures, and careful attention must be given to high-quality implementation.

Dr. Deborah Ball called for more discussion about improving the capacity for better research design. Weak or poorly specified designs result in failure to show impact.

Dr. Hanushek interjected that the labs can be thought of as being positioned between the research community and the states and local districts. The 10 labs have different capacities, needs, and demands in different areas. They try to build capacity to use information effectively in a particular area. Since the last authorization of the RELs, they have been at the front line of generating new information that might be useful for a locality.

Dr. Peggy McCardle asked for information about the role of the RELs. Dr. Maynard responded that the role of the RELs is not to provide technical assistance on the day-to-day job of the schools or the administration; rather, it concerns using data, building evidence, doing research, and building research capacity.

Mr. Baron noted that the labs have a local or regional focus. When practitioners, policymakers, or researchers think of an RCT or a rigorous impact evaluation, they think in terms of an extensive, expensive enterprise, but RCTs do not have to be that way. One role the labs might play is working with state and local educational agencies to design and implement low-cost evaluations. For example, administrative data can be used instead of interviews to reduce costs. Good impact evaluations have been done in conjunction with school districts at low cost. One example is in Seminole County, FL, where three different remedial reading programs for ninth graders were evaluated with a 2-year follow-up. The labs can work with school districts or state officials to address important questions in low-cost impact evaluations.

Dr. Bridget Long expressed some confusion about the mission and goal of the RELs. Their regional nature and their relevance to a national audience raise questions about the need for networks and dissemination. Much of the information produced by RELs is not specific to a particular state or region; it might have national implications. Research capacity must be knit together with national dissemination. On the other hand, if the RELs are to engage in specialized studies that IES would not undertake at a national level, then the mission of the RELs must be rethought. Dr. Maynard commented that a contract for coordination across the labs has to be made more useful. In addition, dissemination coordination strategies must be improved.

Dr. Gamoran remarked that administrative data should be available to researchers. In 2008, the board resolved that longitudinal data system grants be given to states that agreed to make data available to researchers. Dr. Gamoran asked whether there has been any progress on that front. Dr. Kerachsky stated that making the data available to researchers was not an explicit condition of the grants. He pointed out that the purpose of the datasets is to improve education in the districts, in the states, and nationally. The grant programs are not prescriptive in how that is accomplished. Some states want to do their research in their own controlled centers; others emphasize collaboration with university-based researchers. No simple answer exists. Dr. Hanushek remarked that IES was given the responsibility for running a grant competition with the power to state stipulations regarding awards. Dr. Kerachsky remarked that these grants to states are to build state programs, not federal data systems. Mr. Baron pointed to the possibility of a middle ground—namely, a competitive preference priority for grant applicants that agree to make the data available (rather than a requirement). A number of grant programs include a requirement or precedent for participation in evaluation.

On the topic of using local capacity to obtain low-cost information that can be used elsewhere, Dr. Hanushek mentioned that the RELs can help facilitate this, but many local districts are suffering from budget problems that forestall their ability to pursue analytical capacity. Dr. Long remarked on the capacity to do research and the fact that universities offer free labor. Dr. Maynard called for doing a better job of helping folks think proactively about their decisions.

Dr. McLeod expressed concern about discussing capacity building when states have taken huge cuts to their budgets. Mr. Baron acknowledged the immediate crisis in state and local budgets, but he stated that the longer-term issue is whether states will commit money to analysis and evaluation once the economy recovers. One answer goes back to relevance: do practitioners and school districts see that evaluations can answer practical questions (for example, whether one remedial reading program works better than another) and produce actionable information that will show the value of the research and build support? Dr. Long remarked on the importance of partnerships in capacity building during difficult times. Dr. McCardle mentioned that capacity building should be broadened to include building capacity in educational research at colleges and universities as well as state educational agencies.

Mr. Baron asked about partnerships and their role in evaluation. If a school district is working with an REL to carry out an evaluation, must the school district have an analytic capacity or does it merely need to make the commitment? Dr. Gamoran answered that to get an evaluation carried out by the REL, interest is needed; on the other hand, human capital (i.e., people who have some preparation to engage in that type of work) is needed to build capacity. In other words, structure is needed to build capacity.

Dr. Hanushek presented two points for future thinking.

  • The emphasis on state longitudinal data is important but too narrow because in many cases local districts have better data than the state longitudinal data system; therefore, districts must be convinced to work with RELs to use their data to provide analysis.
  • One of the biggest costs of NCEE doing these impact studies is putting together a reasonable sample of districts to participate in the experiment. This problem can be solved by convincing local districts that they need the information to enhance their decision-making.

Dr. Lynn Okagaki remarked on the capacity needed in state and local education agencies. Rather than buy-in from superintendents and district leaders, what is needed is an education workforce that understands the importance of using data to make decisions and the distinctions among types of evidence that can serve as the foundation for decisions. The education community, broadly speaking, must buy into the idea of using data and understanding the evidence that leads to conclusions. Building capacity goes beyond teaching individuals how to perform evaluations. RELs cannot answer all the questions, but they can help educate the education workforce in using data.

Mr. Handy mentioned that new thinking about education reform is going on in certain states, and the RELs could be helpful in that effort. It takes relatively little money to drive behavior; the capacity exists to encourage and enhance behavior in places where change is already occurring. Florida's ARM, the Division of Accountability, Research, and Measurement within the Florida Department of Education, espouses that accountability, research, and measurement all go together.

Dr. Easton stated that a public document will be available for discussion at the next board meeting.

Dr. Hanushek reopened the motion to approve the priorities. Dr. Long stated that she supports the priorities and is grateful that Dr. Easton responded to the board members' feedback.

Privacy Technical Assistance Center (PTAC)
Stuart Kerachsky, Commissioner, NCES
Marilyn Seastrom, Chief Statistician and Program Director, NCES
Emily Anthony, Research Scientist and PTAC Program Officer, NCES

Dr. Kerachsky presented some information about the history of PTAC. In the 1990s there was talk of creating a national student record system, but legislation barred such a system because of concerns about privacy and confidentiality. An attempt was made to turn responsibility back to the states through federal funding and direction to help states build statewide systems. Grant solicitations encourage states to integrate their data systems with consortia of researchers, to build systems that reach across states, and to build systems that reach out to postsecondary institutions and the workforce. Research and privacy concerns must be balanced because both are legitimate and powerful interests. Dr. Kerachsky introduced the two presenters.

Dr. Seastrom presented some background information on the Statewide Longitudinal Data Systems (SLDS) grant program. Title II of the Education Sciences Reform Act (ESRA) and Title VIII of the American Recovery and Reinvestment Act (ARRA) provide funds to states to develop SLDS. The purpose of SLDS is to support data-driven decisions to improve student learning and facilitate research to increase student achievement and to close achievement gaps. The Family Educational Rights and Privacy Act (FERPA) requires educational agencies and institutions receiving funds from the U.S. Department of Education (ED) to protect the privacy of personally identifiable information in students' education records. The role of NCES is to share with the states the information they need to be the best data stewards possible so that they know how to protect students' personal information while making the data available to researchers.

Dr. Seastrom referred to the list of SLDS data requirements under ARRA and referred to SLDS data "customers," including local and state educational agencies and institutions; parents; ED; local, state, and national policy-makers; education researchers; and the general public.

ED's response to the concerns of the customers is a three-pronged strategy to strike a balance between researchers' need for data and students' privacy: (1) announce an intent to issue a Notice of Proposed Rulemaking to amend the 2008 FERPA regulations; (2) publish a set of technical briefs to give "good practice" guidance on privacy-related topics to help states, districts, and schools to protect personally identifiable information in SLDS; and (3) fund PTAC to provide centralized privacy resources for educational agencies and institutions with SLDS. PTAC will make available the information in the technical briefs and from other fields that have dealt with these issues and develop training to get the information into the hands of the individuals who need it.

Dr. Seastrom described the six SLDS technical briefs on privacy, the first three of which will be released at the end of November 2010.

  • The first brief (15 pages) contains basic concepts and definitions of privacy and confidentiality in student education records. For example, deidentification and anonymization are two different concepts defined in the first brief, which also discusses a privacy framework tied to Fair Information Practice Principles (FIPP).
  • The second brief (30 pages) concerns data stewardship, that is, how to manage personally identifiable information (PII) in electronic student education records.
  • The third brief (30 pages) involves statistical methods for protecting PII and contains a set of rules for reporting aggregate data (see the sketch following this list).
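
One well-known rule of this kind is minimum-cell-size suppression for published tables. The brief's actual rules are not reproduced in these minutes; the following Python sketch is purely illustrative, and the threshold of 10 is an assumption, not NCES guidance.

    # Hypothetical small-cell suppression for aggregate reporting tables.
    # MIN_CELL = 10 is an illustrative threshold, not a rule from the brief.
    MIN_CELL = 10

    def report_cell(count: int, total: int) -> str:
        """Suppress counts (and rates) when a cell or its complement is small."""
        if count < MIN_CELL or (total - count) < MIN_CELL:
            return "<suppressed>"  # protects the small cell and its complement
        return f"{count} ({100 * count / total:.0f}%)"

    # Example: graduates vs. non-graduates in a small subgroup
    print(report_cell(4, 25))    # suppressed: the cell itself is below threshold
    print(report_cell(18, 25))   # suppressed: the complement (7) is below threshold
    print(report_cell(40, 60))   # reported: 40 (67%)

Suppressing the complementary cell matters because otherwise a reader could subtract the reported cell from the published total and recover the protected count.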

The next three briefs will cover data sharing, electronic data security, and privacy training.

Emily Anthony presented information about the Department of Education's PTAC, which is a resource for state, local, and higher education agencies to obtain information about data security, privacy, and confidentiality. PTAC will work with representatives from various ED offices to develop guidance and materials related to privacy, confidentiality, and security and will provide a central location for all of these materials. It also will communicate updated knowledge to the field, share best practices, and provide technical assistance.

Products and services that PTAC will offer include FAQs on privacy, confidentiality, and security; a privacy toolkit; training materials; site visits; a support center; a technical assistance tracking database; regional meetings; and presentations at national conferences.

Referring to the report on data sharing, Dr. Gamoran asked for some elaboration on what the brief will constitute, whether it will state clearly that sharing data for research purposes need not be inconsistent with FERPA, and whether it will provide examples of existing cases of data sharing to show how data-sharing practices have benefited the states and localities that have shared their data. Dr. Seastrom responded that including examples is an excellent idea, but doing so involves a delicate balance because it singles out states and reports on their positive or negative results. With regard to FERPA regulations, all written materials are reviewed by the Family Policy Compliance Office and the General Counsel's Office. The issue is helping people understand the fuller definitions, such as those of deidentification and anonymization.

Dr. Long stated that the language in the technical briefs will determine individuals' understanding of the privacy regulations. The data-sharing report should use clarifying language, take into account the difference between theory and practice in the field, and account for the problems that well-intentioned people encounter in data-sharing negotiations. Dr. Seastrom responded that the agreement should be structured properly, so that the researcher starts with a statement of what he or she needs, depending on what data are available.

Dr. McLeod asked whether there is a way to speed up the process of getting information from PTAC. Dr. Seastrom stated that ED is creating a position for a Chief Privacy Officer whose primary responsibility will be to streamline processes and ensure quick turnaround. Ms. Anthony pointed out PTAC's awareness of the need to provide proactive assistance to individuals in the field and that over time the process will gain efficiency.

Dr. Cora Marrett posed two questions: (1) Since the ARRA provides the impetus for much of the change, is ED expanding the grants that are funded through that act, or does the expansion apply to everything that involves privacy and confidentiality? and (2) Will there be an emphasis on expanding the research and knowledge base? Dr. Seastrom responded that the underlying privacy principles are the same and that the privacy information applies across the board. The technical briefs will be updated, and additional briefs will be added when new methods for data protection surface. In addition, every year PTAC will produce a set of white papers on these topics. Dr. Marrett mentioned that the Office of Science and Technology Policy and the National Academies have an interest in this topic. Dr. Seastrom responded that two committees sponsored by the Office of Management and Budget (OMB) meet with agency heads up to four times a year to exchange information. Dr. Hanushek added that the agencies emphasize a number of technical ways of dealing with privacy and confidentiality. He expressed his concern about technical solutions involving statistics and software and stated that a balance has to be struck in the way that the protections are conveyed to the states. Dr. Seastrom remarked that for purposes of data sharing, it is important to have Social Security numbers (SSNs) without using them on a daily basis; she recommended identifying sensitive items such as SSNs and then setting them aside. Dr. Seastrom emphasized that one technical brief is on protections for reporting aggregate data and another is on sharing data; the distinction is important to both the federal government and researchers.

Mr. Baron expressed his hope that the technical assistance and briefs are simple and concise and include concrete examples or templates of language to be used. Ms. Anthony agreed that the states respond to concrete examples and that PTAC will work with the state advisory boards to vet the products to ensure their usefulness. Dr. Seastrom stated that the briefs attempt to minimize the use of acronyms, include definitions, avoid jargon, and summarize information.

In response to a question from Dr. Robert Kominski, Dr. Seastrom stated that the Family Policy Compliance Office and the General Counsel's Office are in charge of enforcement regarding technical assistance. Regarding enforcement of practices that maintain privacy versus enforcement of practices that encourage data sharing, Dr. Gamoran reiterated the suggestion to require states to participate in data-sharing practices. He inquired whether recommendations from PTAC could provide states with guidelines for carrying out data-sharing practices while maintaining the privacy of individuals whose data are being shared; grants should stipulate this provision. Dr. Kerachsky stated that the goal of data sharing is to use the data to improve education across the country, and data sharing is only one method of doing so. Prescribing that states must share data is probably not easily justified. Dr. Seastrom pointed out that some states' laws preclude data sharing; those states would therefore be precluded from receiving grants if the grants were overly prescriptive on this point. Working with states cooperatively is the way to reach the goal; the cooperative model results in progress. Dr. Gamoran stated that the Race to the Top legislation has resulted in substantial changes to education systems, including changes that involve the use of data, and the same kind of leverage can be used regarding data sharing. Dr. Hanushek stated that a national strategy is needed to make better decisions in education by broadening our knowledge through the data and tools we have.

Dr. Long added that the greater concern is that states do not share their data primarily because of the fear factor. She hopes that the PTAC briefs can assure the states that the General Counsel's Office has cleared the information and guarantees that it meets the requirements of FERPA. Ms. Anthony reiterated that PTAC will be a strong ED voice in interpreting federal privacy laws. After Dr. Kerachsky reiterated the importance of the balance between sharing and protection, Dr. Hanushek raised the question of developing a dialogue about possible alterations in FERPA, which is a 1974 statute. It might be in ED's interest to have a discussion about balancing absolute privacy with other purposes. Mr. Baron noted the common theme in this discussion about encouraging research and evaluation at the state level and the previous discussion about the RELs. NCES runs a grant program about evaluating state and local policies. Mr. Baron called for a future discussion of NBES on IES's role in promoting research and evaluation in the states and localities for federal policy analysis and decision-making. How can IES as a whole, across its many units, create a holistic response to this issue?

When all of the attending board members were present, Dr. Hanushek asked to revisit the motion to approve the "Director's Final Proposed Priorities for the Institute of Education Sciences, November 1, 2010." After a brief discussion about when the priorities would be revisited and how their effect would be measured, the motion passed unanimously.

Presentation and Discussion of State-of-the-Art Approaches to Research on Implementation
Howard Bloom, Chief Social Scientist, MDRC
Lynn Okagaki, Commissioner, NCER

The afternoon session began with Dr. Bloom's presentation, which he stated would cover learning from natural variation in estimated impacts and learning about the relationship between implementation and impacts. A distinction should be drawn between natural variation and planned variation, and there is a possibility of capitalizing on natural variation in estimated intervention effects. Dr. Bloom stated that his goal is to help build a framework for future research on the magnitudes, causes, and consequences of variation in intervention effects. His presentation focused on two elements: (1) key questions to address variation, and (2) an example of how to learn from this variation.

Dr. Bloom raised four key questions regarding variation in intervention effects. The first question was: What predicts observed variation in intervention effects? The answer involved the components of the intervention, its implementation, its clients, its context, and its evaluation. In response to a question from Dr. Ball about components, Dr. Bloom explained that distinctions among components must be made in any application. His example of Welfare-to-Work described three major service components.

The second question was: What are some important consequences of variation in intervention effects? The consequences for policy and practice are the generality and robustness of effects, the predictability of effects, the need or ability to target the intervention, and the equity of the intervention. Consequences for research include fixed-effects versus random-effects analysis, subgroup analysis, quantile regression analysis, and instrumental variables analysis. In response to a question from Dr. Hanushek, Dr. Bloom explained the difference between fixed effects and random effects and described the trade-off between the two analyses: fixed-effects analysis offers a smaller standard error and more statistical power but supports only limited generalization, whereas random-effects analysis supports broader generalization at the cost of a bigger standard error.
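
To make this trade-off concrete, the following Python sketch uses invented site-level impact estimates (not figures from the presentation) to contrast precision-weighted fixed-effects pooling with DerSimonian-Laird random-effects pooling:

    # Illustrative only: hypothetical impact estimates (effect sizes) and
    # standard errors for five sites; not data from Dr. Bloom's talk.
    ests = [0.12, 0.30, -0.05, 0.22, 0.08]
    ses  = [0.08, 0.10, 0.09, 0.12, 0.07]

    # Fixed effects: weight each site by its precision (1/SE^2).
    w_fe = [1 / se**2 for se in ses]
    fe = sum(w * e for w, e in zip(w_fe, ests)) / sum(w_fe)
    se_fe = (1 / sum(w_fe)) ** 0.5

    # DerSimonian-Laird estimate of the between-site variance (tau^2).
    q = sum(w * (e - fe)**2 for w, e in zip(w_fe, ests))
    c = sum(w_fe) - sum(w**2 for w in w_fe) / sum(w_fe)
    tau2 = max(0.0, (q - (len(ests) - 1)) / c)

    # Random effects: add tau^2 to each site's sampling variance.
    w_re = [1 / (se**2 + tau2) for se in ses]
    re = sum(w * e for w, e in zip(w_re, ests)) / sum(w_re)
    se_re = (1 / sum(w_re)) ** 0.5

    print(f"fixed effects:  {fe:.3f} (SE {se_fe:.3f})")  # smaller SE, these sites only
    print(f"random effects: {re:.3f} (SE {se_re:.3f})")  # larger SE, broader population

The fixed-effects estimate speaks only to the sites in hand; the random-effects estimate treats them as draws from a broader population of sites, which is why its standard error is larger.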

The third question was: What are the best ways to study variation in intervention effects? Dr. Bloom identified two: (1) across sites within a study, and (2) across studies. If variation in intervention effects is studied across sites within a study, a number of questions must be answered, including how many sites are needed, how many subjects per site are needed, what should be measured, how much variation in effects is needed, and how much variation in implementation is needed. Variation in intervention effects across studies can be examined through meta-analysis and, most important, through secondary analysis of primary data. Dr. Bloom also commented on variation in compliance and the important role of circumstantial evidence.

The fourth question regarding variation in intervention effects was: What are some current opportunities for conducting such research? Current opportunities for research include ED's Investment in Innovation grants, the White House Social Innovation Fund, and the Department of Health and Human Services' programs to replicate evidence-based innovations in home visiting, teen pregnancy, and fatherhood support.

Dr. Bloom gave the example of mandatory Welfare-to-Work programs, which used secondary analysis of primary data from three multisite randomized trials. The research question was: How does implementation influence the effectiveness of mandatory Welfare-to-Work programs? After describing the study design, study sample, and statistical model, Dr. Bloom listed the key findings: (1) a mean program-induced earnings gain of $879 per client per year or 18 percent; (2) other things being equal, program effects increase with a stronger employment message to clients, greater staff emphasis on personal client attention, smaller staff caseloads, less reliance on basic education services, and lower unemployment rates; and (3) other things being equal, program effects do not vary substantially with differences in client characteristics. Dr. Bloom displayed two histograms showing the distributions of unconditional impact estimates and empirical Bayesian shrunken impact estimates to point out the difference between noise and true variation in effect, respectively.
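
The idea behind the shrunken histogram can be illustrated with a minimal Python sketch; the numbers are hypothetical (not the study's data), and in practice the grand mean and the between-site variance tau^2 would be estimated from the data rather than assumed:

    # Illustrative empirical Bayes shrinkage of site-level impact estimates.
    ests = [1200.0, 400.0, 950.0, -150.0, 1800.0]  # site impacts ($ per client per year)
    ses  = [300.0, 250.0, 500.0, 400.0, 600.0]     # their standard errors
    mu   = sum(ests) / len(ests)                   # grand mean (simplified)
    tau2 = 250.0**2                                # assumed true between-site variance

    for est, se in zip(ests, ses):
        shrink = tau2 / (tau2 + se**2)  # reliability: 1 keeps the estimate, 0 uses the mean
        eb = mu + shrink * (est - mu)
        print(f"raw {est:8.1f} -> shrunken {eb:8.1f} (weight {shrink:.2f})")

Noisy site estimates (those with large standard errors) are pulled strongly toward the grand mean, so the distribution of shrunken estimates reflects true cross-site variation rather than estimation noise, which is the contrast the two histograms were meant to show.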

Dr. Okagaki began her presentation by noting that for the past 8 years people have accused education research of lagging behind many other fields, such as medicine, the health-related sciences, prevention science, and agriculture. Policymakers and practitioners need immediate answers, but research takes time. Prevention science researchers speak of evaluating the efficacy of an intervention in rigorous experimental trials conducted in a hothouse environment. The next stage is effectiveness evaluations, which are done under the conditions of routine practice and use more diverse samples. If an intervention is shown to be effective in an effectiveness trial, it produces the desired impact under routine conditions. Interventions typically perform better under hothouse conditions than under routine practice. In the implementation research phase, researchers look at the supports that should be embedded in the delivery system to make implementation under routine practice as good as it can be.

A two-pronged approach is used to accelerate the process: (1) a development phase, and (2) an implementation phase. The development phase takes into account the capacity of schools and school districts when planning the implementation of an intervention. Dr. Okagaki used the example of a computer tutor to explain efficacy trials (teachers engaged in problem solving during a common planning period as a scaffolding procedure) and effectiveness, or scale-up, trials (testing implementation supports under conditions of routine practice, with an independent evaluation). The second approach focuses on organization and management in schools and districts; the question is what makes schools good implementers.

In response to a question from Dr. McLeod about the characterization of some schools as good implementers, Dr. Okagaki explained that research shows that schools with strong leadership and a strong focus on instruction are associated with better outcomes for students. The question is whether average principals can be trained to help average schools become better.

In response to a question from Dr. Hanushek about the research agenda on leadership practices and professional development, Dr. Okagaki noted the descriptive research on good leaders and stated that the strategies identified in that research must be operationalized. Dr. Ball remarked that the question researchers are interested in is whether implementation effects can be transferred to other settings, that is, identifying the boundaries of an intervention. She advocated for more concentration on "scaling in," or concentrating on the details of the intervention itself and using them to study its effects.

Mr. Baron sought to clarify the notion of looking across different interventions at different sites to determine factors that produce the largest impacts. Are there variations that can either improve implementation or improve impact?

Dr. Gamoran noted that (1) a discussion of variation in impacts that may be due to differences in implementation should also be sensitive to the possibility of differences across sites, (2) a parallel investigation should be carried out, and (3) in the distinction between efficacy (implementation under ideal conditions) and effectiveness (implementation under routine conditions) studies, effectiveness research should be characterized as implementation under "replicable" conditions instead of routine conditions.

Discussion of Recent IES Reports and Dissemination
Rebecca Maynard, Commissioner, NCEE

Dr. Maynard stated that it is time to focus on how information can be disseminated to ensure that it is translated to the field in an inviting and relevant way. Three brief presentations followed on the (1) DC Choice final report, (2) the infrastructure of the What Works Clearinghouse (WWC), and (3) ways to improve the dissemination of findings.

Dr. Marsha Silverberg presented information about the DC Choice study, also known as the DC Opportunity Scholarship Program (OSP). She began by explaining that NCEE develops study ideas in three ways: (1) congressional mandate, (2) program office request, and (3) staff-generated studies, which result from staff scans of evidence gaps or promising strategies that have not yet been evaluated.

The DC Choice evaluation was a legislative mandate that (1) called for the Secretary to conduct an independent evaluation with a strong research design to determine the program's effectiveness, (2) specified the outcomes and reporting requirements, and (3) designated the achievement test to use in the evaluation. The legislation shaped the study design. OSP was the first federally funded voucher program; it provided up to $7,500 per year for low-income DC students to attend private schools. Lotteries were used in oversubscribed grades to determine which students would receive the offer of a scholarship.

Dr. Silverberg reported on the key findings from the final report: (1) there was no conclusive evidence (no statistically significant effects) that DC Choice affected achievement over the longer term, although there were positive impacts on reading scores; (2) regarding high school graduation rates, based on parent report, DC Choice significantly raised the likelihood that students graduated; and (3) parents of DC Choice students reported significantly higher ratings of school safety and satisfaction than parents in the control group, but there were no differences between treatment and control students on satisfaction. In response to a question from Dr. Long, Dr. Silverberg stated that response rates on both the achievement testing and the graduation rates were the same for treatment and control students.

In terms of dissemination, Dr. Silverberg stated that NCEE produced a standard, comprehensive report. In addition, NCEE produced a four-page "study snapshot" and a technical executive summary so that people had more than one way of learning about the study. There was also a presentation at the IES conference during the week of the report's release in June 2010.

Dr. Susan Sanchez presented information about the WWC. Dr. Gamoran recused himself from the WWC discussion. The clearinghouse has been producing reports for 3 years to build a substantial research evidence base. The dissemination vehicle for WWC is the IES website where products can be viewed, downloaded, and printed. Dr. Sanchez briefly reviewed the content of the three different report products: (1) intervention reports (162), which assess all studies of a specific intervention within a topic area; (2) practice guides (14), which assess all studies of practices and strategies in a particular area; and (3) quick review reports (55), which assess the quality of the research evidence from a study reported on by the media.

The quick review reports cover topics related to school organization and governance, curriculum and instruction, teacher programs, student behavior and student incentives, and supplemental academic programs. All WWC products apply a common set of standards, include a complete study coding guide, and provide full documentation of how decisions are made and full transparency about how the clearinghouse arrived at its rating. Another example of the rich database of research evidence accumulated by the clearinghouse is the RCT registry, which includes more than 100 study abstracts. Dr. Sanchez stated that as the evidence base continues to grow, there is a great need for (1) ideas about how it can be better synthesized, (2) interpretation of results by content experts, and (3) expansion of the growing body of researchers.

Dr. Audrey Pendleton presented information about ways in which to situate the evaluations in the context of evidence from WWC. The growing body of evidence can inform the work early in an evaluation, and more work is needed to improve the presentation of findings from evaluations relative to existing evidence from WWC and other areas. The idea is to be able to say more about what worked and why and what did not work and why, and to generate hypotheses for additional studies.

Dr. Pendleton described the Evaluation of Reading Comprehension Strategies for Fifth Graders study. The rationale for the evaluation was to gather evidence regarding strategies combined into curricula that are sold and marketed. What is the evidence for those curricula? When a report is released and the finding is "no impact," a need exists to defend the work in terms of implementation and evaluation. It would be useful to have an independent group of people on the back end to think about what has been learned, what did not work, what did work, and where to go next. WWC can be an integral part of this effort.

Mr. Baron stated that a few interventions in WWC stand out in the preliminary studies as having sizeable effects or important outcomes, for example, the Check and Connect retention program. Mr. Baron asked whether, as an alternative to weaving together what the experts say from all the different strategies, an effective intervention could be used as the basis for larger evaluations. Dr. Pendleton responded that that would be a more fruitful way of doing it.

Dr. Long described a general dissemination strategy as including translations, summaries, and a website. The strategy calls for purposeful placement of information in terms of quality and impact and a clear determination of audience. What time, effort, and resources should be devoted to proactive outreach such as presenting at conferences, sending products to people, and participating in local districts' professional development? How can dissemination efforts be evaluated to reveal whether they are working? For example, short online surveys could provide constant feedback from the people using the products. Finally, how do all the pieces work together? Busy people need coordination or networking to know how to improve practice or policy. Dr. Maynard noted that RELs play a role in disseminating practice guides and some of the other products.

Dr. Hanushek pointed to two very different topics: (1) what we learned, and (2) how we translate what we learned into practice. Mr. Handy commented on the second topic. He pointed out that every school district has a contract with work rules, policies, and procedures, and every state has school rules and regulations. The presence of research findings in those two places would indicate a degree of success in dissemination. The idea is to measure what gets done (i.e., what is included in rules and school codes) or what is excluded. Both school boards and unions must be persuaded to implement research findings. Dr. Maynard suggested that the body of evidence might not be rich enough to do this. Mr. Baron pointed out that one of the NCEE evaluations found a sizeable difference between two math curricula and, in WWC, many small studies do not use standardized outcome measures. Dr. Hanushek added that negative examples are also important.

Closed Session—Nomination and Election of Board Chair

New Officer Responses and Meeting Wrap-Up

Dr. Hanushek announced the election of Mr. Baron as chair and Dr. Long as vice-chair, both for one-year terms beginning on November 29, 2010.

Mr. Baron offered some words of appreciation to Carol D'Amico, David Geary, and Eric Hanushek, who are leaving the board, and thanked them for the different perspectives they brought to NBES discussions.

A brief discussion followed regarding the need to select an executive director.

The meeting adjourned at 4:48 p.m.

The National Board for Education Sciences is a Federal advisory committee chartered by Congress, operating under the Federal Advisory Committee Act (FACA; 5 U.S.C., App. 2). The Board provides advice to the Director on the policies of the Institute of Education Sciences. The findings and recommendations of the Board do not represent the views of the Agency, and this document does not represent information approved or disseminated by the Department of Education.