Institute of Education Sciences Board Room
80 F Street NW
Board Members Present:
Jonathan Baron, Vice Chairman
Carol D'Amico (present only April 7)
Philip Handy (present only April 7)
Eric Hanushek, Chairman
Ex Officio Members Present:
John Q. Easton, Director, Institute of Education Sciences (IES)
Stuart Kerachsky, National Center for Education Statistics (NCES)
Robert Kominski, U.S. Census Bureau, delegate representing Robert Groves
James Griffin, National Institute of Child Health and Human Development, delegate representing Susan Shurin
Lynn Okagaki, Institute of Education Sciences
Dixie Sommers, U.S. Department of Labor, delegate representing Keith Hall
Norma Garza, Executive Director
Designated Federal Official:
Mary Grace Lucier
Mike S. Garet
Thomas J. Kane
Bridget T. Long
Members of the Public:
Fred Altman, citizen, formerly NIMH
Denise Borders, AED
Mike Brodie, Education Daily
Lauren Victoria Burne
Jady Johnson, Reading Recovery Council
Wes Huffman, Washington Partners
Sarah Hutcheon, Society for Research in Child Development
Jim Kohlmoos, Knowledge Alliance
LaTasha Lewis, COSSA
Augusta Mays, Knowledge Alliance
Beena Patel, Learn Coalition
Sarah Spreitzer, Lewis-Burke Associates
Debbie Viadero, Education Week
Call to Order, Approval of Agenda, Chair Remarks, and Remarks by the Executive Director
Dr. Eric Hanushek called the meeting to order at 1:08 p.m. He reviewed the day's agenda, which included (1) a brief presentation on Institute of Education Sciences (IES) research priorities by IES Director Dr. John Easton; (2) a report on discussions with U.S. Department of Education (ED) agencies about related activities that have research and evaluation components; and (3) reviews by ex officio members of the Board about ongoing work in their agencies that overlaps with IES efforts. He also briefly reviewed the agenda for the next day's meeting on April 8.
Dr. Hanushek said that nominations of four new Board members had been sent to the U.S. Senate for confirmation, which is pending. He suggested that the Administration and Congress understand the process of making nominations and subsequent confirmations as a kind of "conveyor belt" that will eventually lead to the seating of all 15 authorized members of the National Board for Education Sciences (NBES).
Priorities of the Institute of Education Sciences
Dr. John Q. Easton introduced a two-page synopsis of "Proposed New Directions for IES Research Priorities." The proposal is a distillation of discussions with Dr. Hanushek and is a public document that will be distributed and presented within the next 6 to 8 months for Board discussion, public comment, and Board approval. He then reviewed the primary outcomes of interest to IES, which can be grouped according to three overarching objectives.
Dr. Easton then reviewed six additional proposal goals that are designed to accomplish these objectives.
IES Priorities—Board Discussion
Dr. Hanushek commented that although NBES is tasked with passing judgment on research priorities, this meeting was intended to offer Board members the opportunity for preliminary review with Dr. Easton. As new members are appointed, a second meeting of the Board will be held to oversee the full development of the IES priorities, with a view toward prompt approval.
Dr. Easton said that a list of concepts, constructs, and variables that can be integrated into an educational curriculum (e.g., conscientiousness, perseverance, the ability to work with others) will be compiled from education literature.
Referring to the work of Dr. James Heckman, Dr. Hanushek commented on linking noncognitive skills, which lie outside achievement measures, with life outcomes. Heckman's work draws on national surveys that can be mined for available data on somewhat amorphous research questions, such as self-perception of social interaction and related variables, as they bear on outcomes. Dr. Hanushek then asked how a research agenda can be developed to refine these types of measures and include them in longitudinal surveys.
Dr. Lynn Okagaki commented that a review could be conducted to help refine questions about other types of cognitive outcomes. Dr. Shaywitz agreed that current knowledge about such attributes should be evaluated to determine which should be the focus of future research efforts.
Mr. Baron strongly agreed that measurements of conscientiousness, engagement, and other noncognitive attributes should be developed. He asked whether research and evaluation funding might be dedicated to assessing attendance, graduation and dropout rates, employment after graduation, and similar variables. He then referred to a large, randomized ED study of career academies that found no effects on traditional educational achievement outcomes but that demonstrated a significant difference in earnings levels 8 years after high school graduation—a major determinant of quality of life. By contrast, a study called the New Chance Demonstration indicated improvement in receipt of General Educational Development (GED) credentials but no effects on any other outcomes (e.g., employment, earnings).
Dr. Hanushek observed that relatively little information is available about achievement and cognitive skills. Most economics literature focuses on the value of another year of school on attainment and finds a 10 percent return for every year of school. Over time, a much larger overlap of earnings of people with different schooling levels has been indicated, and the overlap has been spreading out, possibly indicating a higher reward for cognitive skills and a huge distribution of skills for any schooling level. Although attainment is important, it is not yet clear why, nor do we understand its relative value versus achievement for future life outcomes.
To Dr. Carol D'Amico's question about whether IES research priorities will address the role of schooling in determining criminal behavior and different life outcomes of people in the same families or in similar circumstances, Dr. Easton suggested that studies on resilience may be relevant.
Mr. Handy reflected on whether IES research priorities would support production of timely, relevant, and practical results. He urged that IES have a voice in the national debate about assessments, and he raised a question about whether quality teaching should be distinguished from effective teaching. Dr. Hanushek responded that the question was premature with regard to the meeting agenda. Mr. Handy responded that his question concerned the issue of certification and the training of teachers, 80 percent of whom are considered "alternatively certified."
Dr. Hanushek asked whether a research program is under consideration for measuring performance beyond NAEP and assessment of achievement to evaluate higher-order skills.
Ex Officio Member Agency Overview and Remarks
Mr. Baron suggested developing research on the mistakes made in subgroup and mediational analyses and, as an example, referred to an ED-sponsored randomized evaluation of the Upward Bound high school program. No improvement in outcomes was found for students in the program, with the exception of the subgroup of students who entered the program with low academic expectations. The pattern of subgroup findings suggests that this result was likely spurious, an artifact of a skewed focus on showing effects for that subgroup. He added that IES-sponsored research involving multiple subgroup and mediational comparisons has been extremely useful.
Dr. Hanushek called for an open discussion of Dr. Easton's research proposal.
Dr. Easton reflected that program evaluations, particularly of mathematics teaching methods and materials, tend to lack focus on the actual mechanisms that constitute the programs. Mr. Baron responded that once program mechanisms are identified, their validity must be tested prospectively. As an example, Dr. Hanushek referred to evaluations of funding streams for Title I and Reading First programs that did not seek to understand specific program activities.
Implementation and Sustainability
Dr. Hanushek asked how the IES research agenda can be broadened to focus on implementation, processes, and sustainability, and whether the proposal will address methodology. Dr. Easton responded in the affirmative, although the process going forward is not yet clear. IES's handling of corrections for multiple comparisons is evolving, as is the agency's learning curve on power analysis, which has been a subject of research grants since the agency's inception. Careful study is needed to determine how technical expertise can be applied to ensure that these questions are being vigorously addressed.
Mr. Baron commented that a number of initiatives are underway, such as efforts to scale up evidence-based program models with fidelity to the original model. There is a strong need for practical, low-cost measures to determine whether programs are being implemented with fidelity without having to conduct site visits. Dr. Hanushek added that an important characteristic of any program is whether it lends itself to being faithfully implemented.
Dr. Easton said that the Gates Foundation is investing in this type of research and can be used as a resource. More thorough testing is needed of new and better theories of teaching that are emerging. Dr. Shaywitz added that factors that determine improved teacher-pupil interactions should also be a focus of study. Dr. Hanushek expressed skepticism about their value, suggesting that preliminary analysis based on reviewing student gains over distributions reveals that good and bad teaching is consistent and does not tend to vary based on individual interactions. He raised the question about how new IES directives will reflect Dr. Easton's particular discipline and perspectives.
Mr. Handy said that compensation and other variables that have remained relatively static are now beginning to change, which will enhance the efficacy of the research.
Dr. Hanushek responded that core issues related to this topic include the need to ensure teacher effectiveness, and he reflected that one approach is an incentive/compensation system that weeds out bad teachers and retains good ones; another approach would be to study classroom interactions. He again expressed reservations about focusing on the second approach, emphasizing the relative merit of research aimed at promoting teacher effectiveness. He added that experiments with alternative compensation systems have been conducted in Denver, Colorado, and Houston, Texas, and in Florida, but that a clear analysis has not yet emerged from the data. This remains an open research question. Texas is developing a body of research in this area, and programs have been initiated in Florida, funded by large grants intended to introduce alternative compensation systems. However, results of the analysis will not be available for several years.
In response to Dr. D'Amico's effort to clarify whether policy levers around compensation and retention, as well as classroom practices, would be examined with respect to teacher effectiveness, Dr. Easton responded that his emphasis would be on classroom practices, although policy aspects could be considered.
He noted that an economic model suggests comparing a system that differentially rewards teachers for effectiveness and a second system that is more stringent in weeding out and retaining teachers but that pays everyone according to the current system. Many countries use the latter system of weeding out ineffective teachers. Very few use graduated compensation based on merit.
Dr. Stuart Kerachsky suggested that two different research agendas were under discussion, the first concerning factors that make for quality teaching and the second looking at compensation. However, fundamental questions remain about the meaning of quality in this context and how to incentivize it.
Higher-Order Skill Measurement
Dr. Hanushek then asked Board members to consider the extent to which research programs should be designed to obtain measurements of higher-order skills or outcomes, beyond national assessments. Dr. Easton responded that more attention should be placed on assessing performance of more complex tasks. Dr. Geary expressed skepticism about the viability of teaching and measuring higher-order skills. Dr. Shaywitz concurred but reflected on the importance of, at a minimum, identifying strengths and skills related to reasoning and abstract thinking to balance experiences of failure in reading and mathematics that children often have and remember as negative aspects of their education.
Dr. Hanushek added that as an economist, he is concerned about ensuring the external validity of skill measurements once students leave school and enter the labor market.
Ms. Dixie Sommers questioned where adult learners fall in the research proposal's framework. Dr. Easton responded that this topic has not yet been sufficiently considered. Dr. Hanushek asked whether the Board should focus on grades pre-K–12 or whether some topics overlap. Dr. Easton said that resources would be the critical factor in determining which populations would be the subjects of proposed new directions of IES research.
Executive Function Skills
Discussion then centered on Dr. James Griffin's statement that data from the Early Childhood Longitudinal Study-Birth Cohort demonstrate that achievement gaps between children are solidly in place by 24 months of age. Traditionally, the field of education has not well understood the preschool arena or the value of preventing the gap before it occurs.
Dr. Griffin added that recent attention in research communities has been focused on the construct of "executive function skills," which include a broad array of self-regulation and self-control mechanisms that link neurobiological mechanisms to behavior and cognition. Early evidence suggests that the inability of children to self-regulate is one limiting factor, among others. A 2007 paper by Duncan et al. examined longitudinal data related to early attention measures; it demonstrated that preschool findings predict school achievement. The study also found that these skills are amenable to intervention.
Dr. Geary commented that executive function measures are, to a large degree, independent of in-class attention.
Dr. Hanushek said that education researchers now have sufficient longitudinal data, including preschool data from Dr. Heckman's study, to trace developmental patterns of the full life cycle, from preschool to graduation, and that research programs can be designed around this objective.
Mr. Baron affirmed the value of measuring such noncognitive interventions as self-control and resiliency.
Capacity-Building and Marketing
Dr. Hanushek raised the topic of building regional educational laboratory (REL) capacity to assist state and local education agencies (SEAs and LEAs) in conducting evaluations and developing a wider evidence base about what works in educational practice. This role should be emphasized in the reauthorization process. Along with this objective, information should be gleaned from a variety of more localized studies, rather than from only a few large national studies.
Mr. Handy asked how IES research can be more effectively marketed. Dr. Easton replied that a team within the Department, under the leadership of the Director of Communications and Outreach, is thinking about how RELs can conduct more active outreach. Mr. Handy reflected that evidence suggests that the RELs might not be the optimal vehicle, and he emphasized that the agency is obligated to create channels for promulgating its work.
Dr. Shaywitz and Dr. Hanushek spoke to the point that too few policymakers, practitioners, and other end users are aware of IES-sponsored research, including even the most groundbreaking studies IES has undertaken—those on professional development in reading.
Mr. Baron added that research findings should also be gathered from agencies external to IES, including private foundations such as Spencer and other federal agencies such as the U.S. Department of Health and Human Services' Administration for Children and Families.
Ex Officio Report
Ms. Dixie Sommers, Ex Officio Delegate, Bureau of Labor Statistics (BLS), U.S. Department of Labor
Ms. Sommers introduced herself in her capacity as an ex officio member of NBES, representing Keith Hall, Commissioner of BLS. She provided an overview of the BLS mission and activities of relevance to IES.
Mission of the Bureau of Labor Statistics
The BLS examines a range of economic statistics related to employment, unemployment, wages, compensation, prices, and productivity. The Bureau employs Ph.D.-level research economists who are tasked with using BLS and Census Bureau (CB) data to examine the performance of the labor market and to understand how the data work and how they can be improved. The Office of Statistical Methods and Research assists with instrument design.
The BLS also produces national longitudinal studies of relevance to IES and longitudinal databases of businesses, which are used as sampling frames and as analytical tools for tracing job creation and destruction occurring in businesses and industries of different sizes and categories. The agency is also interested in the development of data systems taking place in other agencies, such as the NCES-sponsored state longitudinal databases, which are relevant with regard to labor market outcomes.
BLS's sister agency, the Employment and Training Administration (ETA), has explored data sets related to statewide longitudinal data systems (SLDS) and funded a research grant process to enhance state capacity for developing longitudinal data sets tied to other research on wage records and workforce development. It also follows CB research related to state longitudinal data sets based on wage records.
Workforce preparation is another area of interest as BLS explores new avenues of research. Ms. Sommers said that her office produces occupational statistics, career outlook information, and information that is published in career guidance materials tied to assessments used to help individuals make decisions about pursuing subsidized training available through various workforce programs.
BLS seeks to understand the processes by which people prepare for particular types of occupations through formal, special postsecondary, or more informal training venues. BLS draws on information from NCES, CB, and other agencies, and has been working with NCES to develop standard occupational classification subcategories of instructional programs. Finally, BLS studies educational attainment as a demographic characteristic and looks at its effects on labor market performance, growth, and flexibility.
ETA funds evaluation research that focuses on workforce development, unemployment insurance issues, and other labor market outcomes, although on a smaller scale than IES. Opportunities may be available for sharing results. The agency also funds training in skill development related to employment—both basic or "soft skills" (literacy and numeracy, dispositional outcomes) and occupation-specific skills. ETA has a long history of occupational analysis, deriving from the industrial psychology arena.
Dr. Hanushek asked how the respective ETA and IES research agendas could be coordinated. Ms. Sommers said that greater accuracy of data related to career information—specifically regarding the education levels required for entry into specific fields and the type of certification or degree provided—should be stressed. ETA's research agenda should include developing tools to identify the skills performed in different occupations; the degrees and the type and level of education required; and related questions. She added that BLS does a fairly comprehensive, although somewhat crude, job of this with educational attainment data from the census. The American Community Survey (ACS) will include some additional questions about field of degree related to professional categories. In addition, the Baccalaureate and Beyond and high school follow-up surveys are used, although neither is large enough to provide sufficient occupational detail for employment outcomes.
Dr. Shaywitz stressed the value of bridging and integrating educational and occupational data through studies that follow students throughout school into the labor force, starting as early as possible.
A brief discussion ensued about agency empowerment around data collection activities and the establishment of national databases of information about individual students.
Ex Officio Report
Dr. Robert Kominski, Ex Officio, U.S. Census Bureau
Dr. Kominski informed participants that Title 13 of the U.S. Code, the confidentiality law governing the Census Bureau, prohibits the Bureau from sharing its data with any other agency without congressional approval.
He began his presentation by informing participants that a prenotice letter, form, and postcard sent out to familiarize the public with the upcoming 2010 census had resulted in a participation (response) rate, to date, of 62 percent once vacancies were accounted for; in the 2000 census, the mailed participation rate of response was 72 percent. He noted that Board members can visit the census website (http://www.2010.census.gov) to review the updated status.
The census long form was found to have a dampening effect on the response rate. To date, Wisconsin has the best return rate, at 74 percent, followed by Minnesota and Iowa; the lowest participation rate is Alaska's, at 51 percent. Livonia, Michigan, is the top responder in the country, at 81 percent. A Google map on the site shows real-time daily rates, and a Census in the Schools program is also part of the promotional campaign.
Dr. Kominski said that he is representing CB's new director, Dr. Robert Groves, formerly of the University of Michigan. Dr. Groves is an accomplished social scientist renowned for his work in survey operations—the first social scientist with an advanced degree to serve in this capacity at CB. New, improved, and alternative methods of data collection from the 2010 census should ensue during his tenure. By 2015, a synthetic base estimate of the American population should be derivable.
The ACS is a national ongoing survey of 250,000 households that has been produced every year since 2000. In 2009 a new question was implemented to ask for the field of bachelor's degree, which will facilitate postsecondary studies out of ED in conjunction with the National Science Foundation's (NSF) technical and manpower studies. The ACS 1-year data will be released in fall 2010 with routine tabulations of the field of bachelor's degrees or higher; tabulations on advanced degrees are not needed for NSF studies.
Dr. Kominski said that massive coding operations have commenced for fields of degree; the dictionary is closing in on 75,000 entries. Well over 80 percent of all entries are being auto-coded to provide data that will be of great use to the CB. From multiple-year ACS data, the first 5-year file will come out in 2010. A total of 5 years of degree data will be gathered, intended to yield estimates of fields of degree at the tract level for the entire country. From this point, the data will be updated each year.
ED, BLS, CB, and other participating agencies are measuring noncollegiate postsecondary sub-baccalaureate education. The President's American Graduation Initiative is mandated to return the United States to the status of having the highest global rate of bachelor's degree recipients and, by 2020, to enable all Americans to have at least 1 year of college education or the equivalent (i.e., certificates). Questions are being posed about the characteristics of these types of certifications. A small set of questions will allow the CB to home in on these definitions.
A sizeable run of cognitive pilot testing—covering between 10,000 and 20,000 households—will also be conducted in 2010. Its long-term goal is to modify fundamental educational attainment items, such as questions that will enable CB to reliably measure the fields in which people with education beyond high school hold credentials of real market value.
Dr. Kominski responded to several questions about the census form and the process of selecting questions for the long form, explaining that, for English language learners, the census form is geared to a fifth-grade literacy level. Small focus groups and one-on-one interviews are scheduled to determine what information respondents think they are providing when they answer census-related questions. On the basis of ACS coding of approximately 300 languages spoken in the United States, approximately 150 alternate-language forms have been adapted for non-native English speakers, and the ACS will release a report based on that coding.
A discussion ensued, initiated by Mr. Handy, about different legal standards that apply to tracking data in the United States. The integration of independent data systems is not regarded favorably by most Americans.
Presentations from U.S. Department of Education Offices
James Shelton, Assistant Deputy Secretary for the Office of Innovation and Improvement (OII), U.S. Department of Education; Judy Wurtzel, Deputy Assistant Secretary for the Office of Planning, Evaluation, and Policy Development (OPEPD), U.S. Department of Education
Mr. Shelton reviewed a PowerPoint presentation on the topic of innovation and recent OII initiatives, beginning with a definition of innovation within the context of education.
A research product or program can be defined as "innovative" if a process, product, strategy, or approach (1) improves significantly on the status quo and (2) ultimately reaches scale, thus serving a meaningful percentage and large numbers of the population over time. OII is engaged in identifying ways to encourage more innovation and to ensure that investments are targeted to have the highest possibility of success.
IES is familiar with the process of applying research to product development and is developing greater sophistication around conducting field scans of solutions already being developed in a particular topic area. This process involves testing to determine which solutions can be replicated, then packaging the most successful strategies and taking them to scale. Another approach, referred to as "deliberate design," involves consideration of solutions that are just over or on the horizon and that have not yet emerged from the field. The question in these cases focuses on how to identify opportunities for approaching and reaching the horizon in order to bring real capability difference and outcome difference to the work of improving student outcomes.
This framework would translate into different solution and product development pathways and would involve ongoing assessments to inform short-cycle improvement of the product or solution. Intervals of uptake, use, and long-term evaluation would create a continuous improvement cycle, or loop of development, of new solutions over time. However, this cycle is relatively impaired in the field of education; currently, well-established and agreed-upon research priorities and markets for implementing solutions are lacking, as are pools of philanthropic or commercial capital to drive entrepreneurs, whether social or commercial, who have solutions.
Investing in Innovation Fund
Toward this end, an innovation pipeline called the Investing in Innovation Fund (i3) is under development; it is designed to evaluate and award development grants of up to $5 million for hypotheses grounded in research projects with the highest levels of promising evidence. Significant pushback has occurred with regard to whether OII should support relatively open-ended or more discriminating proposals, given the thousands of applications it receives. Currently, validation grant awards are predicated on at least a moderate level of evidence, and scale-up awards on stronger evidence. This framework is distinct from and has a higher evidence standard than any other ED program.
Questions have emerged about how evaluation and evidence frameworks can accommodate the rapid changes in capabilities available in the field. Also, OII administers a little more than 100 formula and discretionary grant programs, most of which have been running for more than 5 years, and fewer than a third of which have any resources above the one-half of one percent allocated by statute for evaluation, codification, or knowledge management and sharing. The implication is that funding is allocated to projects whose efficacy is still not determined. Considering the blueprint for the Elementary and Secondary Education Act (ESEA), a more rational and cost-effective approach to helping practitioners and policymakers implement effective programs would be to foster collaboration among researchers in competitive programs.
Questions for consideration include these: (1) determining types of evidence that are appropriate early in the development cycle; (2) applying standards of rigor in the development phase; and (3) finding approaches to ensure that evaluations yield information about the effectiveness of research projects and to identify the contexts in which they work best and how to replicate them at scale.
"Systemic" vs. "Scalable"
Mr. Handy commented on the advisability of using the term "systemic" versus "scalable," observing that unless funding for innovative projects is part of a line item or provided over a period of time, interventions that are not systemic probably would not be sustainable, and that actual implementation presents another challenge. Mr. Shelton agreed, commenting that actually demonstrating sustainability inherently involves taking a systemic perspective and considering how innovative programs will fit into the core operations of the entity that will be implementing them over time and within budget. In terms of scalability, Race to the Top policies that ensure funding streams and accountability systems to reinforce the implementation of certain types of new practices are systemic, by definition, and have a higher probability of scale-up and sustainability.
In response to Dr. Geary's question about comparison with Defense Advanced Research Projects Agency (DARPA) competitions, Mr. Shelton responded that OII's process is significantly different, as it involves first basic and applied research, then field scans, and finally the process of culling the best strategies from the field through directed development. In contrast, DARPA begins research initiatives with an established solution in mind, which drives the specific solution. By definition, this is a very different kind of competition that requires a different kind of authority. Ongoing conversations with Dr. Easton are addressing the role and application of direct development within an educational context. Both Mr. Shelton and Dr. Easton agree on the relevance of the DARPA approach. Opportunities for interagency collaboration may offset budgetary constraints.
Mr. Baron expressed support for the model, commenting that the i3 design is the first ED-sponsored effort to establish evidence standards of "reasonably credible" and "very credible." He also applauded OII for standing firm in the face of the vast majority of recommendations that attempted to weaken and dilute the standard. This is particularly important because of the small number of interventions in education that are backed by sufficient evidence of effectiveness. Not only are more studies needed in the strongest category, but the current level of evidence needs to be strengthened so that increased funding is appropriated for studies that are backed by rock-solid evidence.
Mr. Shelton said that a study should be required to meet evidence standards for both an eligibility threshold and the judging criteria for points. If a study meets the minimum threshold for the category, the next criterion will concern its quality. Although every application is judged on its own merit, for a scale-up category proposal it would be possible to clearly determine whether the eligibility threshold was met, relative to large-scale studies, based on point distributions.
At the project level, grant funds usually must be applied to conducting an evaluation to qualify for the next level of evidence. Studies in the "moderate" category should be most diligent in progressing to truly controlled studies on a fairly significant scale. The quality of the evaluation and the importance of the questions that a given study will resolve for the field are among the judging criteria. To this end, OII is prioritizing the development of a data system to monitor cross-cutting evaluations. Mr. Shelton said that according to statutory limitations on ARRA appropriations, OII would need to partner with IES or other agencies that are interested in these cross-cutting studies. The item will be a priority for presentation during an upcoming meeting with the Office of Management and Budget.
Judy Wurtzel presented an overview of OPEPD's proposal for the ESEA reauthorization, from the standpoint of current thinking around evidence building and knowledge building, and touched briefly on OPEPD's assessment competition. She reviewed the key themes that cut across the entire proposal.
Legislating High Standards
Legislation that is committed to high standards, particularly at the college and career levels, should be enacted. Within the context of Title I, the proposal asks states to adopt and implement college- and career-ready standards. Given that high-quality standards should be aligned with high-quality assessments, OPEPD has invested $350 million in new assessments.
Designing Accountability Systems
Critical data and measures are required to assess student achievement and growth. Improvements in growth should be evidenced and rewarded at all points along the education spectrum. Greater flexibility and less micromanagement are being emphasized, so as to allow schools to evolve optimal approaches to designing these types of measures. Accountability systems are recommended along a spectrum of (1) high-growth schools that are making significant progress, or "reward" schools; (2) schools in which students are on track for college and career readiness; and (3) challenge schools that need more attention—particularly the bottom 5 percent, which would require intensive intervention and would be given less flexibility around how funds are used. Various models (e.g., transformation, start-up) for improving outcomes are being proposed.
In response to Dr. Hanushek's question about the meaning of "college readiness," Ms. Wurtzel said that states should provide evidence that students can meet proficiency standards at the college- and career-ready level and can handle nonremediated work. The bar is being set higher than the current stipulations of the No Child Left Behind (NCLB) Act.
Reporting a Broad Range of Measures
States and districts should report a much broader range of data than they gather currently, including grade-related benchmarks for determining whether individual students are on track in their educational process. Measures of human capital, school climate, and data on students who transition from high school to college enrollment will be reported.
Recognizing and Rewarding Success
Schools that demonstrate significant achievement gaps should be recognized and rewarded when those gaps are bridged. A commitment to the Center for Excellence entails a greater emphasis on building in incentives and rewards for achieving excellence.
Emphasizing Effective vs. Quality Teaching
States will be asked to develop assessments to ensure that districts have high-quality teacher evaluation systems; significant components of such systems should be measurements of student achievement growth and multiple observations of teacher performance. Each state would set its own parameters for effective and highly effective teaching. Researchers can then examine coherence between effective teaching and student growth.
Tracking the Effectiveness of Professional Development
Title II makes funding available at the district level for building evidence-tracking systems that document the value of professional development and strengthen teacher evaluations.
Focusing on Competitive Grants
The agency is requesting a $4 billion increase for ESEA, the largest request to date. Much of this funding will be devoted to competitive innovation programs. Priority will be given to applications for competitive funding that can bring evidence to the table. Also embedded in some competitive programs are incentives for doing well at the 3-year mark of the grant term; such performance would result in extended funding.
Ms. Wurtzel concluded by mentioning three additional components of the ESEA proposal: (1) promoting SLDS in connection with ESEA requirements; (2) building capacity at the state level; and (3) stepping up collection of larger amounts of data, which is essential in driving academic improvement. Regarding the latter effort, input from the RELs will be particularly constructive in helping states determine the types and relevance of data to be collected.
A more integrated approach to conducting evaluations is being proposed within the framework of the ESEA proposal. Mr. Baron proposed an actual evaluative study on the effectiveness of accountability systems, referencing a Mathematica evaluation of the U.S. Department of Labor's Job Corps accountability system. No correlation was found between the true impact in the randomized trial and sophisticated performance measures used over a long period of time.
Ms. Wurtzel responded that by developing improved measures of student learning, over time, a knowledge base should be built that can have authentic value to researchers. The investment in better assessments should help to realize this objective. Dr. Shaywitz suggested that the "proof of concept" model, as used in the field of medicine, might be applicable to creating better assessment protocols.
A total of $350 million has been allocated through the Race to the Top initiative for competitions among state consortia that are interested in developing the next generation of assessments. The primary competition concerns comprehensive cognitive assessment systems for grades 3 to 8 and at least one high school grade; these would become the tests used by states for accountability under a reauthorized ESEA. Emphasis is on summative assessments, and up to two awards will be funded. Each consortium must be composed of at least 15 states. The agency is also providing $30 million in funding for rigorous high school course assessments in multiple content areas. IES has assisted in developing frameworks for validating data and test results derived from the assessments and relating these to school effectiveness.
Early Learning and School Readiness
Dr. James Griffin, Ex Officio, National Institute of Child Health and Human Development (NICHD)
Dr. Griffin informed participants that his presentation was on behalf of Dr. Alan Guttmacher, the acting director of NICHD and a pediatrician as well as a geneticist. Dr. Griffin reviewed Dr. Guttmacher's exceptional background and qualifications as they relate to the mission of IES and mentioned his strong grasp of the needs of students and teachers. NICHD Branch Chief Dr. Peggy McCardle was out of town and not able to attend. Dr. Griffin mentioned that his own area of expertise is early learning and school readiness.
Dr. Griffin then referred to an article by Joseph Durlak in the journal Early Childhood Research Quarterly, noting that the article is particularly lucid on the topic of implementation research. Dr. Griffin confirmed that he would inform Ms. Garza about reference materials touched on during his presentation. He noted that NICHD is in the final planning stages for a workshop on executive functioning in early childhood in June of 2010. Numerous partners within and outside of NIH have been engaged, including IES and the Merrill Foundation. Topics will range from neurobiological development to preschool classroom interventions that are geared to increase executive functions in at-risk populations.
NICHD conducts a range of research projects, from basic to translational. The current focus is on executive functioning, a buzzword within education research communities across the country that can be understood as a possible rate-limiting factor in increasing school readiness.
An under-researched but emerging area of scientific inquiry concerns human-animal interaction. NICHD, in a public-private partnership with the Mars Corporation, is researching the therapeutic impact of pets on children. Findings correlate early abuse of animals in the home with conduct disorders and with abuse of spouses and children. Of most relevance to the NBES are therapeutic applications such as pairing pets with children to improve reading, introducing therapy dogs in classrooms for special needs populations, and horseback riding for children with cerebral palsy. Research has not been conducted on the safety or efficacy of these programs for either children or animals. The partnership with the Mars Corporation has thus far produced a research conference and two volumes based on conference proceedings. Awards for further research will be made in the fall.
Dr. Griffin added that the United States lags behind other countries in conducting research in this area. A question about pets in the home is not included on the U.S. census, whereas Germany and Australia have been able to demonstrate efficacy based on more incisive survey data than are currently available in the United States. The National Children's Study includes a question on pet ownership, but the question is written to consider pets as disease vectors.
Mr. Baron suggested that an area for interagency discussion may be the effort to identify the most promising candidates for National Center for Education Evaluation and Regional Assistance (NCEE) evaluations. He emphasized that IES is the one entity within government that is equipped with sufficient funding for larger scale research efforts and that it is important to target funding to the best candidates.
Dr. Griffin responded that NICHD typically does not conduct policy research but rather seeks to explore basic cognitive and biological processes and then build them up to the efficacy level; the agency also conducts smaller randomized controlled trials, which are subsequently handed off to IES or ACS. He noted that IES has modeled programs on National Institutes of Health (NIH) research and that this type of pipeline for the sharing and transfer of knowledge exists.
In response to Dr. Hanushek's question about whether NICHD performs "marketing functions," Dr. Griffin replied that typically a special journal issue or edited volume will emerge from a conference or workshop, which is intended to move the field forward. Co-sponsorship of workshops is another approach to involving more of the education community in NICHD's work.
Update on IES Center Activities
IES Commissioners and Staff
Dr. Lynn Okagaki, Commissioner for Education Research and Acting Commissioner for Special Education
Dr. Okagaki began her report on the progress of the National Center for Education Research (NCER) by commenting on the healthy increase in the number of research applications—from 622 in 2009 to approximately 1,000 in 2010. The overall funding rate for the first round of applications was 14 percent, for a total of 62 proposals funded. Recipients of awards from the October deadline will be announced early in June 2010. Dr. Hanushek interjected that the rate of funding was not as meaningful as the quality of the applications under review. Dr. Okagaki responded that overall the proposals have also been getting stronger since 2003.
Dr. Okagaki said that among numerous newly funded projects described in her written report, an example from the first round of fiscal year (FY) 2010 competitions is an evaluation of two Michigan high school programs—Michigan's Merit Curriculum, which requires students to complete more advanced coursework, and the Promise Scholarship Program, which provides financial assistance for postsecondary education. The grant is being funded through the Evaluation of State and Local Education Programs and Policies program.
Of relevance to policymakers, results of a national study of performance incentives undertaken in Nashville, Tennessee, public schools will be published in June. In addition, new deadlines for FY 2011 Race to the Top grant competitions were announced in January and February for phase 1 and phase 2 awards. This program recognizes the importance of understanding schools and districts as organizations and the need to study their functions as coordinated wholes.
Dr. Okagaki continued with an overview of newly funded projects under NCSER, which are intended to address a wide range of issues in special education. For example, Boston University researchers will be developing an assessment of American Sign Language proficiency for use with deaf children from kindergarten through high school. In addition, a team of investigators from the University of Wisconsin will evaluate approaches to peer support for high school students with intellectual disabilities and autism spectrum disorders. These students often have poor transition outcomes related to independent living, employment, leisure activities, community involvement, and related skills.
Improving the Rigor of Single-Case Experimental Design
In 2005 and 2006, NCSER funded two research teams to examine statistical methods for analyzing single-case experimental data and one team to study approaches for strengthening the design of single-case experimental studies. These studies are bearing fruit. Based on the work of these teams, a technical working group convened in January to consider multilevel modeling approaches to analyzing data from single-case designs and use of randomization to strengthen single-case designs. A challenge to researchers will be transitioning to quantitative analyses from the more traditional use of visual analysis. A summer institute on the design and analysis of single-case studies is planned. In addition, through its Statistical and Research Methodology in Education program, NCER is funding a project to develop a d-statistic for single-case designs that is comparable to and in the same metric as the d-statistic from a between-groups experiment.
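The goal of a single-case d-statistic on the same metric as a between-groups d can be illustrated with a deliberately simplified sketch. The function below computes a naive standardized mean difference between the baseline and treatment phases of one AB series; the statistic being developed under the NCER-funded project additionally corrects for autocorrelation and small-sample bias, which this illustration omits, and all data here are hypothetical.

```python
import statistics

def single_case_d(baseline, treatment):
    """Naive standardized mean difference for one AB single-case series.

    Illustration only: the funded work develops a d-statistic that corrects
    for autocorrelation and small-sample bias so it is comparable to a
    between-groups d; this sketch ignores those corrections.
    """
    diff = statistics.mean(treatment) - statistics.mean(baseline)
    # Pool the two phase variances, weighting by degrees of freedom.
    n_b, n_t = len(baseline), len(treatment)
    pooled_var = ((n_b - 1) * statistics.variance(baseline) +
                  (n_t - 1) * statistics.variance(treatment)) / (n_b + n_t - 2)
    return diff / pooled_var ** 0.5

# Hypothetical behavior counts across baseline and intervention sessions.
print(round(single_case_d([2, 3, 2, 4], [6, 7, 5, 8]), 2))  # → 3.3
```

Single-case effects computed this way often look large relative to group designs, which is one reason a carefully calibrated metric matters for meta-analytic comparison.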
Dr. Okagaki agreed to provide summaries of these studies' research results for Dr. Hanushek's and the Board's review.
Dr. Stuart Kerachsky, Deputy Commissioner, National Center for Education Statistics, next reviewed a number of NCES activities in his presentation of the National Assessment of Educational Progress (NAEP) (see Issues and Developments, National Assessment of Educational Progress, handout distributed at the meeting).
Mathematics Trial Urban District Assessment (TUDA)
This study extends the NAEP national study to the level of large urban centers. The 2009 assessment results released in December revealed a small but statistically significant gain between 2007 and 2009 in mathematics scores at both fourth and eighth grades for all large cities, although only 2 of 11 TUDA cities in the study showed gains at each grade. Cleveland, with scores at the bottom of the distribution, was the only TUDA city to show no gains since 2003. Seven new TUDA cities were added in 2009, for a total of 18, thus providing a clearer picture of urban education. Declining manufacturing or "rust belt" cities show lower educational performance than cities in the southern part of the country. For eighth-grade students nationally, 60 percent scored at or above the Basic level of proficiency in mathematics, but in Detroit and Milwaukee, only 23 percent and 37 percent of students, respectively, reached that level.
At issue is the National Assessment Governing Board (NAGB) decision to exclude charter schools from the TUDA assessments, as districts are not responsible for them and regard them as performing more poorly than urban schools. However, federal policy promotes charter schools, and there is a need to determine the optimal method for obtaining accurate measures of TUDA performance and trends.
Recently released 2009 reading results indicate the same pattern as for mathematics—no gain since the last fourth-grade assessment in 2007, and modest gains since the last eighth-grade assessment. The trend since the first assessment in 1992 shows gains at both grades, but the gains are not as large as those in mathematics.
All racial/ethnic groups gained significantly; gains in mathematics were more significant than in reading, which is primarily due to demographic shifts in the national population since the early 1990s: the number of Hispanic students has increased threefold while the number of Caucasian students has decreased. Assessment at regular intervals is required to determine the scientific significance of these gains, which currently is not well established. Nationalized standards resulting from these assessments would imply the development of a standardized curriculum.
Exclusion of Students Based on Disability or Language
This topic was not discussed during the meeting but is addressed in the printed handout referred to above.
NAEP-TIMSS Grade 8 Linking Study
The study is being designed to meet international benchmarking objectives by using the state-level coverage of the NAEP to project state scores to the Trends in International Mathematics and Science Study (TIMSS) scale. Although costs of a state-level international assessment are prohibitive, a statistical linkage is being developed between these two curriculum-driven assessments, which both assess fourth and eighth graders. Both grades will be assessed concurrently in 2011.
The NAGB has rearranged its assessment schedule to accommodate eighth-grade science in the 2011 NAEP assessment, so that the linking study will include both mathematics and science for grade 8. All 50 states have agreed to participate; in addition, 29 states agreed to take on state-level TIMSS. Funds for this study were included in the final 2011 Administration budget, which still requires passage by Congress.
Statewide Longitudinal Data Systems
The panel has met to review applications received in response to the ARRA round of SLDS state grant announcements, and it intends to award state grants before the end of May 2010. Grant awards will be doubled to approximately $500 million to fund the expanded scope of data systems, which cover the range from pre-K through postsecondary plus the labor force. The SLDS staff is providing technical assistance to states and to the National Forum on Education Statistics, developing voluntary common data standards in collaboration with states and other stakeholders, investigating issues of privacy and confidentiality, and reviewing best practices in data use to support better education.
Privacy and confidentiality remain key issues and have become increasingly sensitive as states extend their focus beyond the K–12 system administered by state education authorities. States' abilities to extend their data systems beyond K–12 and increase the scope of data used in the educational process seem linked to success in protecting student privacy and confidentiality. NCES is well positioned to provide nonregulatory guidance to support FERPA (Family Educational Rights and Privacy Act) objectives, including (1) data stewardship, (2) electronic data security, (3) statistical methods to protect personally identifiable information (PII) disclosure in aggregate analysis and reporting, and (4) terms for use in written agreements when authorizing studies using education records containing PII.
April 8
Dr. Hanushek called the meeting to order at 8:45 a.m. He announced a change in the agenda, switching the order of Dr. Bridget Long's presentation, as she had not yet arrived at the meeting, and Mr. Scott Cody's presentation on the What Works Clearinghouse (WWC). He then introduced Dr. Thomas Kane for his presentation on a Spencer Foundation study on teacher impacts on student achievement.
Presentation of Recently Released IES Studies
Estimating Teacher Impacts on Student Achievement: An Experimental Evaluation
Dr. Thomas J. Kane, Deputy Director for Research and Data, College Ready, Bill and Melinda Gates Foundation; Professor of Education and Economics, Harvard Graduate School of Education
Dr. Thomas Kane presented research that he has conducted jointly with Douglas Staiger of Dartmouth College, building on four decades of work on the effects of individual teachers on student achievement. He commented that Dr. Hanushek had done much of the earlier work in this area. Dr. Kane's PowerPoint presentation (see handout titled Estimating Teacher Impacts on Student Achievement: An Experimental Evaluation) touched on a variety of topics, which are discussed here.
Research has generated many interesting and strikingly consistent findings of large impacts of individual teachers on student achievement, with estimated student-level impacts of .10 to .25 standard deviations per year. Key findings in the nonexperimental literature show that (1) teacher effects are unrelated to traditional teacher credentials; (2) there is a steep payoff to greater teaching experience in the first 3 years of teaching, but a flat effect thereafter; and (3) there is only a small difference between the impact of Teaching Fellows and Teach For America (TFA) teachers and that of traditional teachers.
This research enterprise has grown dramatically in the last 5 to 6 years due to the greater amount of data available in the aftermath of NCLB.
Dr. Kane continued that, given policymakers' interest in the relevance of these findings, the question has been raised as to whether estimates of teacher effects can be used as proxies for individual teachers' causal effects on student achievement. A study that used random assignment (RA) of teachers to classrooms, based on data from the Tennessee class size experiment, evaluated the effect of class size. A paper on the apparent variation in teacher effects from that study reported findings that would have been on the high end of the nonexperimental teacher-effect literature.
Dr. Kane added that it is still unclear whether estimates from the nonexperimental data can be used to approximate the effect of a given teacher on student achievement. The variance looks the same, but validity checks have not been conducted for bias. A solution may be found by estimating value added, using usual nonexperimental methods that calculate the effects of individual teachers, controlling for student baseline characteristics, and using student achievement as the outcome. This is being proposed for policy purposes. The question is whether outcomes for individual teachers approximate results in the nonexperimental estimates.
What We Do
For 78 pairs of teachers in Los Angeles Unified School District (LAUSD) elementary classrooms, value-added models estimated on pre-experimental data were used to predict experimental impacts.
LAUSD data. For grades 2–5, data were obtained for the years before, during, and after RA. Outcomes were assessed based on the California Achievement Test, the California Standards Test, and the Stanford 9 Tests, and data were standardized by grade and year. Mathematics and reading scores were used as covariates relative to grade, race/ethnicity, Title I, eligibility for free and reduced-price lunch, and related categories. The coefficient score was found to vary by grade level.
Experimental design and empirical methods. Two rosters of 3,500 students each were assigned between 78 pairs of teachers in 156 classrooms. Rosters were drawn up by principals (for example, to avoid assigning students with interpersonal issues to the same classroom) and were then randomly assigned to teachers. Dr. Kane said that if classrooms had been similar, the precision of the study's estimates would have been higher. However, even given differences in the classrooms, the random design of the study ensured that baseline characteristics of teachers and students would not be correlated.
Persistence or fade-out of effects would appear as a beta coefficient higher or lower than 1 for estimated teacher impacts.
Various approaches can be tested to reveal bias in value-added estimates, such as mathematics levels or mathematics gains with no controls, student/peer controls, and student fixed effects. All estimates provided information about how students would perform, an unexpected and important finding of the study. A coefficient of .5 indicates smaller differences than would have been predicted from prior estimates. Bias in this direction could be expected, given the lack of controls for baseline characteristics. Dr. Kane clarified that the "no controls" category signifies that no effort was made to adjust for different classroom composition or other covariates for Teacher A and Teacher B. Reviewing fixed-effect estimates, he also emphasized that the field has been too loose in stipulating what should be considered value-added measures. A surprising result that emerged from the study is that the hypothesis that estimates controlling for baseline performance are unbiased could not be rejected.
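The validation logic described above (regressing outcomes under random assignment on pre-experimental value-added estimates, where a coefficient of 1 would indicate unbiased prediction) can be sketched with simulated data. All magnitudes below are hypothetical; note that using unshrunken estimates, as in this sketch, attenuates the coefficient below 1 because of estimation error, which is why such analyses typically shrink the predictions first.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 500

# Hypothetical true teacher effects, in student-level SD units.
true_effect = rng.normal(0.0, 0.15, n_teachers)

# Pre-experimental value-added estimate: true effect plus estimation error
# (classroom-level sampling noise from a finite number of students).
va_estimate = true_effect + rng.normal(0.0, 0.10, n_teachers)

# Outcome under random assignment: classroom mean achievement, again noisy.
exp_outcome = true_effect + rng.normal(0.0, 0.10, n_teachers)

# Slope of experimental outcomes on prior value-added estimates.
slope = np.polyfit(va_estimate, exp_outcome, 1)[0]

# With unshrunken estimates the slope is attenuated toward
# var(effect) / (var(effect) + var(noise)) = 0.0225 / 0.0325 ≈ 0.69.
print(round(slope, 2))
```

Shrinking the value-added estimates toward the mean (empirical Bayes) before using them as predictors would, in expectation, restore the coefficient to 1 under these assumptions.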
Interpreting fade-out. It is not clear how to interpret fade-out. Possible determinants include forgetting and transitory teaching-to-the-test, lack of use of knowledge, the noncumulative nature of grade-specific test content, subsequent teachers focusing on challenged students, and the mixing of students who had the worst teachers with students who had the best teachers (peer effects).
Reconciling with Rothstein. Estimates derived from richer models with more covariates are correlated above .9. The study's approach was to test whether nonexperimental estimators proposed by Rothstein (2010) validly predict student achievement following random assignment.
Summary. Three conclusions emerged from three study hypotheses: (1) Common value-added methods yield unbiased predictions of causal effects of teachers on short-term student achievement; controlling for baseline score yielded unbiased predictions, and further controlling for peer characteristics yielded the highest explanatory power, explaining more than 50 percent of teacher variation. (2) Relative differences in achievement between students assigned to different teachers fade out in both experimental and nonexperimental data at an annual rate of .4 to .6. (3) Understanding the mechanism behind this fade-out is key to estimating the long-term benefits of using value-added measures.
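The cited annual fade-out rate of .4 to .6 implies geometric decay of a teacher's measured effect: a fraction (1 - fade rate) of the effect survives each year. A small sketch with hypothetical numbers:

```python
def remaining_effect(initial_sd, fade_rate, years):
    """Effect (in student SD units) remaining after a given number of years,
    assuming a constant annual fade-out rate (a simplifying assumption)."""
    return initial_sd * (1 - fade_rate) ** years

# A hypothetical 0.20 SD teacher effect fading at 0.5 per year:
print(remaining_effect(0.20, 0.5, 1))  # → 0.1
print(remaining_effect(0.20, 0.5, 3))  # → 0.025
```

Under this simple model, most of a one-year teacher effect would dissipate within a few years, which is why understanding the mechanism behind fade-out matters for estimating long-term benefits.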
In response to Dr. Shaywitz's suggestion to consider students' self-perceptions as learners in studies of teacher effectiveness, Dr. Kane said that a Gates study and one undertaken by the National Center for Teacher Effectiveness will collect information on such items as student agreement/disagreement about levels of teacher expectations and ability to explain academic content. Dr. Shaywitz pointed out that the focus of these studies is on students' perceptions of teachers rather than on their perceptions of themselves as learners.
Mr. Baron asked whether assigning students to high-value added classrooms for 3 years would produce sustained effects. Dr. Kane responded that this type of study design could sort out the four different explanations for fade-out.
What Works Clearinghouse Update
Scott Cody, Deputy Director of the What Works Clearinghouse (WWC), Associate Director of Research, Mathematica Policy Research, Inc.
Dr. Scott Cody began his presentation with a review of WWC accomplishments over the contract year since July 2009 and of new directions and developments undertaken by the agency. Accomplishments include (1) intervention reports on seven distinct topic areas; (2) practice guides, which are designed by panels of researchers and practitioners to address education challenges, based on levels of evidence; and (3) quick reviews, for studies that receive high-profile media attention.
Dr. Cody commented that counterfactuals should be considered before taking this study to scale.
Dr. Marsha Silverberg added that only one of four recent randomized controlled trials for Read 180 was found to be effective, and it was implemented in a juvenile justice facility. Dr. Cody said that a new, updated report would be published about the intervention, pending new findings.
Dr. Cody briefly reviewed this group of WWC offerings. The WWC has published 12 practice guides, with the most popular including RtI Math and RtI Reading, Reducing Behavior Problems, Adolescent Literacy, Turning Around Low-Performing Schools, and Organizing Instruction. All have been downloaded more than 40,000 times from the Web; the most popular has been Reducing Behavior Problems in the Elementary School Classroom. Six practice guides are under development (see handout titled The What Works Clearinghouse, Accomplishments and New Directions), with Teaching Fractions and Promoting Reading Comprehension scheduled for publication.
Dr. Cody reviewed publication of 14 Quick Reviews, 3 of which are in development and 8 completed. They fall under the following topic areas, with the number of reviews shown in parentheses: School Organization and Governance (5), Curriculum and Instruction (5), Teacher Programs (1), Supplemental Academic (1), Student Incentives (1), and Student Behavior (1).
In mid-March 2010, the WWC Quick Review of the article "Are High-Quality Schools Enough to Close the Achievement Gap? Evidence From a Social Experiment in Harlem" was released. (See handout distributed at meeting.) Quick Review formats include discussion of study content and intervention, the author's report, and whether the study was consistent with WWC standards. This study was found to be consistent with WWC standards; it looked at a cohort of middle school students in a charter school sponsored by the Harlem Children's Zone who were entered into the Promise Academy by lottery. By the eighth grade, the study found relatively large impacts on mathematics.
Dr. Hanushek asked whether the Quick Reviews focus only on positive study components. Dr. Cody responded that the review addresses any results for designs that are eligible with regard to WWC standards; at times, separate reports are issued.
About 80 percent of studies do not meet WWC criteria, often for obvious reasons. The question remains whether a more efficient procedure can be developed that does not affect conclusions. The WWC has designed a new and more efficient process for assigning a review to a single reviewer; if the study fails to pass at that stage, the reviewer will document the reasons and pass the study on for analysis by a deputy principal investigator. If this reviewer confirms that the study has failed, the fail code is documented and a final rating determined. If the deputy investigator does not agree, the study is sent back to the original reviewer, who then sends it to a second reviewer who does not know that the study did not pass on the first round. This reviewer then receives the deputy reviewer's results for comparison. If agreement is obtained at this level, the review is completed; otherwise, the study is submitted to the original principal investigator.
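The streamlined fail-fast sequence described above can be summarized as a decision function. This is only an illustrative sketch of the hand-offs as recorded in these minutes; the function names and return strings are invented for clarity and are not WWC's actual procedure codes.

```python
def review_study(study, first_reviewer, deputy_pi, second_reviewer):
    """Sketch of the streamlined WWC screening flow (illustrative only).

    Each reviewer argument is a function that returns True if, in that
    reviewer's judgment, the study meets WWC evidence standards.
    """
    if first_reviewer(study):
        return "meets standards: proceed to full review"
    # The single reviewer documents the fail; the deputy PI re-checks it.
    deputy_verdict = deputy_pi(study)
    if not deputy_verdict:
        return "fail confirmed: fail code documented, final rating assigned"
    # Deputy PI disagrees: a second reviewer, blind to the first-round fail,
    # reviews the study and is compared against the deputy's result.
    if second_reviewer(study) == deputy_verdict:
        return "review complete"
    return "escalated to principal investigator"

# Hypothetical example: first reviewer fails the study, deputy PI confirms.
verdict = review_study("study-001",
                       first_reviewer=lambda s: False,
                       deputy_pi=lambda s: False,
                       second_reviewer=lambda s: True)
print(verdict)  # → fail confirmed: fail code documented, final rating assigned
```

The design concentrates effort on the roughly 20 percent of studies that survive the first screen, while the blind second review guards against a single reviewer's false negatives.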
The process will be tested by applying both the current and the new method to the next 40 studies and comparing the two approaches with regard to both possible cost savings and accuracy of the reviews.
Practice guide review is also being streamlined. Previously, a panel and panel chair were assigned at the beginning of the process to identify a set of key studies, keywords, and search terms, and to conduct a broad literature search for anything related to the topics. WWC staff then reviewed, screened, and identified eligible studies, as well as any others that were germane, which were summarized and provided to the panel. For some of the guides, this process yielded hundreds or even thousands of studies when only a small number were eligible for WWC review, rendering it inefficient. The new two-stage search will first sort out key effectiveness studies and terms to identify all relevant studies related to the topic, as well as levels of evidence; for other research, the search will be narrower, involving work with panels to identify studies and conduct targeted searches.
Mr. Baron cited recent reports, including the Quick Reviews of the Harlem Children's Zone study and of Experience Corps, as extremely clear and helpful. He shared several suggestions for improving WWC products overall, including (1) stating the length of each study's follow-up period in WWC reports; (2) clearly distinguishing between effects on intermediate outcomes (such as letter identification) and more policy-important, final outcome measures (such as reading comprehension); and (3) distinguishing between findings of effects in small efficacy studies versus large effectiveness trials.
Dr. Cody said that for all intervention studies, the WWC reports on the extent of evidence, applying standards based on internal validity, and then reports on whether results can be generalized. Dr. Hanushek commented that the WWC abides by specified rules, as compared to exercising a level of judgment about a study's validity, which may be difficult to codify. Mr. Baron added that policymakers and practitioners who review WWC studies are not always able to draw appropriate statistical inferences without a review of the study's effectiveness.
Dr. Okagaki suggested the addition of "extent of evidence" under each item of report summary tables.
Dr. Hanushek said that two recommended topics for the next discussion are (1) the process by which practice guides are informed by further research and evidence, and (2) cost considerations with regard to providing research guides to districts.
Middle School Mathematics Professional Development Impact Study: Findings from the First Year of Implementation
Mike S. Garet, Principal Investigator, Vice President at American Institutes for Research (AIR)
Dr. Garet explained that the focus of the middle school mathematics impact study was on seventh-grade mathematics achievement. The study was a sister study to one on reading in second grade that was presented to NBES more than a year ago. Dr. Hanushek praised the previous report as one of the best ever sponsored by IES. Dr. Shaywitz concurred, commenting on its implications for professional development in reading and noting that significant resources are being spent on the assumption that professional development is effective.
Dr. Garet said the study was conducted under the auspices of AIR and MDRC and focused on improving seventh-grade mathematics achievement in the area of rational numbers—fractions, decimals, percentages, ratios, and proportions. Results presented focus on the impact of the professional development delivered during the first year of the study. A final report will focus on the cumulative impact of 2 years of professional development. The underlying theory is that teachers themselves struggle with these concepts and that improvements in teacher knowledge will result in improved student outcomes. The following summarizes Dr. Garet's PowerPoint presentation. (See the Middle School Mathematics Professional Development Impact Study: Findings from the First Year of Implementation handout distributed at the meeting.)
Research questions. Questions concerned the impact of professional development on (1) teacher knowledge, (2) instructional practices, and (3) student achievement in rational number topics.
Specification of the professional development. Professional development training included (1) a 3-day summer institute in each district, (2) five 1-day seminars during the school year, and (3) five 2-day coaching visits at each school. Content emphasized the role of precise definitions and the properties and rationales underlying common procedures such as the division of fractions. Each 2-day coaching visit included a range of coaching activities, including planning, observing, instructing, and debriefing. The program emphasized developing teachers' common content knowledge as well as their specialized content knowledge: what constitutes good explanations, the use of representations (e.g., the number line), and how to address common student misconceptions.
Providers. America's Choice (AC) and Pearson Achievement Solutions (PAS) were selected. Both providers incorporated the same professional development design features already in use in the field. AC asked teachers to work and discuss problem sets during the professional development, based coaching on lessons in district practice guides, and gave more emphasis to individual coaching. PAS emphasized discussion of extended problems, based coaching on lessons introduced for the professional development, and gave more emphasis to group coaching.
Design and methods. The sample included 12 districts, 6 served by AC and 6 by PAS; 6 used one of the two most widely used "traditional" texts—Glencoe or Prentice Hall Math—and 6 used the most widely used "reform" text, Connected Math. The sample spanned all four geographic regions of the country; 77 percent of study schools were in mid- and large-sized cities. The average rate of eligibility for free and reduced-price lunches across the sample was 66 percent. On average, the sample was 34 percent White, 36 percent Black, 25 percent Hispanic, 3 percent Asian, and 1 percent other. Schools were randomly assigned to treatment groups (40) and control groups (37). All regular seventh-grade mathematics teachers were included in treatment groups; seventh-grade honors sections were excluded if they were different from regular classes. In the control group, the same sets of teachers were invited to participate.
Overview of data collection. Data included implementation measures, baseline data from teacher surveys and teacher knowledge tests in the spring of the school year, classroom observation after the third year, customized student achievement tests in the fall and spring of the school year, and district records of state achievement test scores and student demographics in the spring of the school year.
Implementation of professional development: Use of planned professional development time. Professional development was administered separately in each of the 12 districts, during each of the 8 days of institutes and seminars. The study team observed all institute and seminar days, coding time allocation and activities. Providers delivered 45 hours of professional development intended for the institute and seminars. In allocating time to segments, providers omitted 2 percent of planned segments and reduced another 16 percent of planned segments by more than half of the intended duration.
Implementation of coaching: Use of planned professional development time. Coaches completed logs for each 2-day visit, in which they listed all coaching sessions that occurred; logs served as a basis for constructing a record of the coaching experienced by each teacher. For each 2-day coaching visit, each teacher received an average of 4.5 hours of coaching, or 112 percent of the intended hours.
Teacher participation in the professional development program. The study team collected sign-in/sign-out sheets for the institute and seminar days. The average treatment teacher attended 83 percent of the total professional development hours implemented and participated in 76 percent of the institute hours implemented, 81 percent of seminar hours implemented, and 94 percent of coaching hours implemented.
Service contrast. To compare the amount of professional development received by treatment and control teachers, the teacher survey asked teachers for information about mathematics-related professional development they received that lasted at least one-half day and about mathematics-related coaching or mentoring. Relative to the control group, the treatment teachers received 55.4 more hours of professional development, which can be compared with the intended delivery of 68 hours of professional development per teacher.
Impact results. The impact of professional development on teacher knowledge of rational number content was assessed through subscale measurement of common content knowledge and specialized content knowledge. Questions were multiple choice with a few short-answer items; items were sought in the domain at the seventh-grade level that would be neither too difficult nor too easy to answer. Results indicated that treatment group teachers' chance of correctly answering a typical item was 55 percent, versus a rate of 50 percent for the control group; no statistically significant result was found for teacher performance. The test was also administered to 12 facilitators, who performed well; however, standards are lacking for how well teachers ought to do on these tests, and more work is needed in this area. Instructional practice was measured by AIR staff, who received training on coding videos and live classroom interactions. The focus was on the frequency with which teachers engaged in behaviors related to the professional development. Scales were developed to assess eliciting of student thinking, use of representations, and teacher focus on understanding, as well as the impact on student achievement.
Additional results. Results show a significant positive impact of the professional development program for AC on two dimensions of instructional practice (teacher elicits student thinking and teacher uses representations); no impacts on teacher knowledge or student achievement were found. There were no significant impacts of the professional development for PAS. Results should be interpreted with caution. Each provider worked in 6 of the 12 study districts. Providers were not randomly assigned to districts, and thus the results could be due to differences in the districts assigned to each provider.
Exploratory analyses. Analyses based on stable teachers indicate that turnover does not appear to alter the main impact findings. No significant association between teachers' baseline knowledge and treatment-control differences in student outcomes was evident, nor was there a significant association between students' baseline achievement and treatment-control differences in student outcomes.
Teacher knowledge, instructional practice, and achievement. The associations between the study measures of teacher knowledge, instruction, and achievement, controlling for student and teacher covariates, were examined. No significant relationships were found, although most of the coefficients were positive and consistent in magnitude with associations reported in the literature.
Second-grade professional development in reading. Professional development in reading tested three conditions: Treatment A, a content-focused professional development series consisting of 8 institute and follow-up seminar days (48 hours), based on Language Essentials for Teachers of Reading and Spelling; Treatment B, the 8-day institute and seminar series plus coaching throughout the year, with coach training conducted by the Consortium on Reading Excellence (CORE); and "business as usual," the control condition. Although positive impacts occurred on teachers' knowledge and on one of the three instructional practices promoted by the professional development, neither professional development intervention resulted in significantly higher student test scores. The added effect of the coaching intervention was not statistically significant, and there were no statistically significant impacts on the measured teacher or student outcomes in the year following the treatment.
Broader research literature. Overall, literature on the impact of professional development on achievement is quite varied in terms of the types of professional development studied, types of student outcomes measured, and research designs and methods. Some small-scale efficacy studies (RCTs and QEDs), primarily focused on volunteer teachers, show positive effects of professional development on student achievement; others do not. Natural variation studies of the association between teachers' hours of professional development participation and their students' achievement also show mixed results.
Hypothesized features that might be tested in future studies. The following factors may contribute to improved impacts of professional development on student achievement: (1) increased accountability for teachers to learn and implement topics in professional development (e.g., through the teacher evaluation system); (2) increased opportunities for teachers to examine practice and get feedback from homework and exams; (3) more demanding coaches; (4) tighter links to the curriculum or student assessments; and (5) changed balance between focus on content and focus on specific instructional practice.
Overview of the Office for Civil Rights
Russlynn Ali, Assistant Secretary for the Office for Civil Rights (OCR), U.S. Department of Education
Ms. Ali addressed new initiatives under OCR that can expand IES's capacity to improve overall ED activities toward better student outcomes. Ms. Ali said that on March 7, the 45th anniversary of "Bloody Sunday" (the first of three Selma to Montgomery civil rights marches in 1965), the Secretary called for reinvigoration of the OCR, acknowledging that the agency could and would do more to promote student achievement and to close the achievement gap.
She continued that since that time, 17 congressionally mandated compliance reviews have been launched across the country to determine whether students in local jurisdictions are free from discrimination; another 18 reviews are planned before the end of the fiscal year. The reviews are perhaps the "heaviest hammer" used to investigate district practices and are based on anecdotes, media reports, OCR's civil rights data collection, and other sources, but are conducted with the full collaboration of school districts and universities—a relatively new development. Ms. Ali proceeded to review OCR jurisdiction, functions, data collecting activities, and professional development and training agendas.
OCR holds jurisdiction over enforcing Title VI of the Civil Rights Act of 1964, which protects against discrimination based on race, color, or national origin; Title IX of the Education Amendments Act, which protects against sex discrimination; Section 504 of the Rehabilitation Act, which addresses disability discrimination; the Age Discrimination Act, which addresses age discrimination; Title II of the Americans with Disabilities Act; and NCLB provisions. OCR covers any school that is a recipient of federal funds. Coverage does not extend to testing agencies.
OCR receives approximately 6,300 complaints each year. Traditionally, the majority has been under Section 504 (50 percent), with 8 percent from Title VI and 20 percent from Title IX. An increase is already evident this year in Title VI- and Title IX-related complaints.
OCR provides technical assistance and guidance to ensure recipients' awareness of the rights it covers. Seventeen pieces of guidance related to issues under all three statutes will be issued by the end of the fiscal year, on topics ranging from peanut allergies and occupational services to sexual violence on college campuses, elementary and secondary schools' obligations under Title VI to English language learners, charter schools' obligations to comply with civil rights laws, and the use of race in K–12 and higher education admissions—all following Supreme Court jurisprudence and interpreting OCR's regulations. Ms. Ali added that OCR will focus on monitoring agreements with districts across the country.
The congressional appropriations bill has called for an Equity Commission, which will be housed in OCR. Proposals to study disparities and best practices in school funding are under consideration. OCR contributions to the area of the federal government financing of schools should be of relevance to IES and other ED agencies. An investigation is underway into possible discrimination against English language learners in Los Angeles high school districts to determine whether programs exist and whether they can be more effective—and to consider how to improve tracking of student progress, allocation of resources, the most effective teachers with this particular subgroup of students, and factors and assessments used to classify students. OCR will also consider how to improve data collection efforts so that a longitudinal story can be told as students journey from kindergarten through their education.
Ms. Ali referred to recent investigations that fall under OCR jurisdiction, including one of the rape of a young woman at a Richmond, California, high school during a homecoming dance and another related to the suicide of a young girl in South Hadley, Massachusetts. OCR contacted the school superintendent to offer assistance, which was openly accepted. While maintaining a "kinder, gentler" profile, OCR has been going beyond traditional means to resolve cases of school-related discrimination.
OCR Data Activities
Dr. Hanushek asked about OCR data activities and how they integrate with current efforts. Ms. Ali said that OCR has just launched a totally transformed and revamped data collection effort that fixed a number of methodological problems with an older survey with regard to both collecting and reporting data, such as OCR's collection of data on students with disabilities, which had previously been underestimated.
As programs collect data for grant submissions, accountability under ESEA, and many other reasons, OCR has been aligning definitions used in the Civil Rights Data Collection (CRDC) with EDFacts and other principal offices within the Department. Another effort involves the collection of Average Freshman Graduation Rates (AFGR), previously available only through the CRDC, for every school in the country within EDFacts. Although EDFacts is not public, equity portions will be accessible when the CRDC is launched as a public resource. It is not clear whether this will be included in the Common Core of Data. Although more data are being collected than in the past (OCR has increased the sample size this year to 7,000 districts, capturing every district in the country that serves more than 3,000 students), historically civil rights data have been separate from other education-related data.
Many more indicators are being collected to provide new insight into college preparatory curricula and to compile data on seclusion and restraint, and on discipline, including law enforcement referrals, which are increasing for younger ages. Also this year, enrollment data and results data will be collected in two separate reportings rather than one.
Dr. Hanushek asked how OCR determines improvements for the 18 districts in its jurisdiction. Ms. Ali said that if violations are found, the monitoring agreement should stand the test of political tenure. Barriers to student achievement, their removal, and results related to achievement will be reported. Civil rights data collection and relationships with districts, universities, and colleges should lead to greater knowledge about resolution and improvement with regard to civil rights infringements.
She noted that the CRDC is collecting data at the school level on funding for instructional and noninstructional staff for all Title I and non-Title I schools, which will increase knowledge about school funding. Ms. Ali said that although she is reluctant to draw direct causal links between reductions in civil rights abuses and student achievement, the expectation is that access to such things as college preparatory curricula or Advanced Placement would be made available in locales where previously it was not. Supports for students and teachers should be provided to promote student achievement.
OCR Professional Development
OCR is emphasizing internal professional development in investigations and in crafting robust remedies for civil rights violations that, if implemented with fidelity, will result in increased student performance. However, OCR does not intend to script professional development protocols directly for school districts. Rather, the agency's role is to identify barriers to student learning and to help districts comply with the law in removing them.
OCR has a "whole system" view with regard to supporting all the functions of school districts. However, the agency is only one component among many educational policy and programmatic initiatives in a larger effort to close the student achievement gap and meet the President's goal for the United States to be first globally in graduation rates by 2020. Ms. Ali expressed confidence that significant results would continue to be achieved based on united efforts of all stakeholder agencies.
In response to a question about how 6,300 annual complaints are triaged, Ms. Ali said that this is the bulk of what OCR field attorneys do. Cases transition through 12 regional offices within 180 days, by law. Compliance reviews are assigned to more complex cases that require more resources. However, the agency is stretched further as substance is emphasized over process: more resources are required for web-based investigative work, and claims related to civil rights violations tend to involve more multiple-issue cases than disability claims do. Sharing knowledge and calling on outside expertise are strategies that help to expand agency capacity.
Dr. Hanushek then introduced Dr. Bridget Long, one of the four nominees to the NBES who was in attendance at the current meeting.
The Role of Simplification and Information in College Decisions: Results from the H&R Block FAFSA Experiment
Dr. Bridget T. Long, Investigator, Professor of Education and Economics, Harvard University Graduate School of Education
Dr. Long presented results of research conducted with associates Eric Bettinger of Stanford University and Philip Oreopoulos of the University of Toronto. They looked at the impact of reducing the length and complexity of the Free Application for Federal Student Aid (FAFSA) form to simplify the process of applying for student aid. The study was funded by IES, the Gates Foundation, the National Science Foundation (NSF), and the MacArthur Foundation, among other grantors.
At five pages and 128 questions, the FAFSA is lengthier than Forms 1040EZ and 1040A and is comparable to Form 1040. A total of 850,000 students who would have been eligible for aid in 2000 did not complete necessary forms; many more who did not realize that they were eligible for aid may have elected not to attend college at all. Since the study's inception in 2006, significant progress has been made toward simplifying the FAFSA, but research remains to be conducted as to whether related policies will truly improve college access. Dr. Long continued her presentation with a PowerPoint that contained the following information.
Even with simplification, there are additional concerns. These include (1) lack of information among families and students, so that particularly low-income students greatly overestimate the cost of higher education and feel that there is no basis for preparing themselves academically; (2) lack of awareness of the FAFSA; (3) missed deadlines that particularly affect work-study and campus-based programs; and (4) students receiving information only a few months before attending college. The FAFSA is the gatekeeper to both federal and state aid and to aid available directly from colleges and universities. The FAFSA4caster is now available online to help determine student aid eligibility, but Internet access may be limited and knowledge about the tool is not yet widespread.
The H&R Block experiment. The goal of the experiment was to test the effects of helping families, without charge, complete the FAFSA in a simplified manner and providing personalized information. H&R Block was interested in the project and was able to process income information, meet deadlines in a timely fashion, and replicate the components of the experiment in communities and organizations across the country.
The FAFSA experiment. Eligibility for the study was based on the applicant's family having an income of less than $45,000, or Pell grant eligibility, and the applicant being without a college degree and between the ages of 17 and 30 years old. Questions were asked about interest in learning more about higher education, receipt of Pell grants, educational attainment, estimates of the average cost of college, and considerations that affect decisions about applying to college. The sample breaks into three groups: (1) students dependent on parents (under age 18, reliant on parents); (2) independents, or young adults ages 24 years; and (3) older or married individuals, parents, veterans, and orphans. An additional element of the project was geared to provide high school sophomores and juniors with an early estimate of eligibility, thus giving them more time to prepare academically.
Flow of randomized trial. Software screens were used to determine eligibility. Consent and basic background questions were asked, and groups were assigned based on the Social Security number. The software prevented reassigning the treatment. T-statistics and basic regressions verified the randomization.
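The minutes note that group assignment was keyed to the Social Security number and could not be reassigned by the software. The exact rule is not specified; one illustrative way to get a fixed, deterministic three-arm allocation of this kind is to hash the SSN, as in this hypothetical sketch (group names and the modulo-3 rule are assumptions, not the study's actual implementation):

```python
import hashlib

# Three-arm allocation corresponding to the conditions described in the
# minutes: full FAFSA treatment, information-only treatment, and control.
GROUPS = ("fafsa_treatment", "information_only", "control")

def assign_group(ssn: str) -> str:
    """Deterministically map an SSN to one of the three study arms.

    Because the assignment is a pure function of the SSN, re-running the
    software for the same participant always yields the same group, which
    prevents reassignment of the treatment.
    """
    digest = hashlib.sha256(ssn.encode("utf-8")).hexdigest()
    return GROUPS[int(digest, 16) % 3]
```

A scheme like this also makes the randomization easy to verify after the fact, since auditors can recompute every assignment from the stored identifiers.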
Interventions. For students in the FAFSA Treatment Group, relevant tax information already collected was transferred into FAFSA cells (prepopulation). A streamlined and automated interview was conducted to collect remaining information (personal assistance protocol). This step was followed by calculation of individualized estimates of aid eligibility and information on local college options (information), which were submitted to FAFSA with participant consent. Follow-up assistance was provided if needed by a call center. Eligibility was determined for the Information-Only Treatment Group, but FAFSA cells were not prepopulated. Personalized information was then provided on family contribution and amount of financial aid for which the student was eligible. Based on participant consent, forms were submitted by the tax professional directly to ED.
H&R Block FAFSA Project implementation results. The 2007 pilot study was conducted at four colleges in Cleveland and included 3,206 individuals; the 2008 study was larger, including 26,013 individuals in Ohio and North Carolina. FAFSA-eligible individuals were overwhelmingly willing to participate, interested in receiving college information, and able to complete most FAFSA forms in under 10 minutes. Educational funding sources included both grants and loans.
Implementation tax season 2008 sample statistics. Characteristics of study participants included the following: Most participants in the studies had at least one child and held a GED rather than a regular high school diploma. Many received public assistance such as food stamps; Temporary Assistance for Needy Families (TANF); free and reduced-price lunches; and assistance from the Supplemental Nutrition Program for Women, Infants, and Children (WIC). A survey question determined that 42.2 percent thought a primary reason for not attending college was the cost of tuition.
Summary: Impact on FAFSA submission (application for aid). Assistance with FAFSA substantially increased the likelihood of submitting the aid application at the rates of 39 percent for high school seniors, 186 percent for independent students who had never been to college, and 58 percent for independent students who had previously attended college.
Summary: Impact on college enrollment and aid receipt. The FAFSA treatment significantly increased enrollment among graduating high school students. There was a substantial increase of 7 percentage points in college enrollment, even though no direct financial incentives were provided for participating in the study. Among older, independent students who had not previously attended college, the effect was concentrated among those with incomes of less than $22,000. For other independents, there was an effect on aid receipt, addressing the problem of eligible college students not receiving aid.
What is driving the effects (simplification, assistance, and/or information)? For most people in the independent sample, completion of the FAFSA form was accomplished in the H&R Block office. These participants were given the option of having the company submit the FAFSA electronically, without requiring any follow-up; the bulk of study results came from people who chose electronic submission. Focus groups provided insight into the fact that the choice not to file electronically was to some extent driven by a perceived obligation attached to receiving the aid.
Addressing concerns with the current system. The H&R Block intervention mitigated issues of complexity and time by keeping the average FAFSA interview to 8 minutes. In addition, ED reported lower rejection rates, increases in FAFSA filings and receipt of aid, and enrollment effects.
Broader implications (funded by Gates Foundation). Even with good information, the complexity of the FAFSA and the burden of navigating the application process alone are significant barriers; simplification and outreach are important, and they can be efficiently accomplished with information from tax forms. Additional outreach would also greatly improve the current system. Huge effects were obtained when assistance was offered in filling out the form, even if the applicant was aware that funds were available and could be obtained by applying without H&R Block assistance. In addition, beyond simplifying the form, it is also important to simplify the process because momentum is important to maximizing the chance of FAFSA completion.
FAFSA II: Expanding outreach. Moving forward, Intuit, Benefit Bank, and other "open source" software can be used to prepopulate the FAFSA forms, and the software can be distributed to families to guide them through the application process and provide personalized information. Testing will occur to determine whether study results can be replicated in community or online tax preparation sites. With regard to research and evaluation, questions remain about which methods of outreach, information delivery, and FAFSA assistance are best and about the effectiveness of technology in providing FAFSA assistance. The study has implications for the optimal way to bring this project to scale nationally and offers an opportunity for implementing partnerships with ED.
How else could we help students: Possible outreach models. These include notification as early as elementary school years of eligibility to beneficiaries of public assistance, in the same way that individuals are notified of their Social Security status, and high school monitoring of FAFSA submission.
Other coming attractions. Participants will continue to be tracked over time to determine persistence in preparing for college. The state of Ohio has an excellent database of public institutions; researchers can access student transcripts to study course selection, credits earned, grades, and related items. Analysis will also determine the effects of expanded awareness of FAFSA eligibility on early student preparation with regard to course selection and college preparatory exams.
Mr. Baron lauded the study and suggested that national impact could be achieved as early as 2011 through implementation of the study model at H&R Block offices. Dr. Long responded that in 2009, H&R Block ran the study in offices in Atlanta and New York City without a research component. The company is considering national expansion after a hiatus occasioned by the national economic downturn; issues remain to be clarified around the feasibility of providing free services.
Mr. Baron inquired about plans for longer-term follow up. Dr. Long said that enrollment data from the National Student Clearinghouse will be accessed to track course selection and other items mentioned under "coming attractions."
Ms. Sommers suggested that DOL programs such as Jobs Corps might offer opportunities for study application. Dr. Long replied that once software is operational, potential partners can be identified. A number of states have contacted the researchers to express interest in implementing the study. Dr. Hanushek suggested that IES might think more broadly about information blockages that affect behavior related to learning. Dr. Long commented on study implications for applicants who, for reasons related to lack of or insufficient information, file late to take the SAT or ACT college preparatory exams and are thus less likely to be assigned to their first choices of test locales. Questions also arise about which courses count for transfer.
Dr. Shaywitz said that many students who take SAT exams may have learning disabilities that are never identified; outreach may extend information to these disadvantaged children, which could have a major positive impact on their college attendance. Dr. Long agreed that there are a number of nuances related to information accessibility and that if more affluent and educated applicants have difficulty with the FAFSA application, it will be no less challenging for underprivileged students trying to navigate the system. For middle-income students, the study could have a large impact, particularly for those in the $45,000 to $60,000 bracket and those just above Pell eligibility. This group could be informed about state and institutional programs, about programs that offer credit and loan counseling, and about the fact that tuition is not as high as many prospective students and their families think.
Mr. Baron added that additional information about the causal mechanisms of the positive effects of the study should be determined. Dr. Long responded that simplification resolves difficulties for a great majority of applicants, particularly if they have Internet access. If the Internal Revenue Service prepopulates the FAFSA forms, there is potential for students to apply for financial aid in high school without the involvement of families, as the process is even further simplified.
Open Board Discussion and Next Steps
Dr. Hanushek said that Dr. Easton would consider the feedback offered by Board members about IES research priorities shared earlier in the meeting. He said that it is hoped that, pending Senate approval of the next four NBES nominees, a Board meeting can be scheduled for a second hearing of the proposal. It then would be submitted to the Federal Register for comment.
He mentioned that Dr. Arden Bement, ex officio NBES member representing the NSF, will take a new job at Purdue University and will be replaced by another NSF representative.
Mr. Baron commented on his attendance, along with Dr. Easton, at the National Science Board (NSB) meeting; the NSB oversees the research agenda of the NSF, including STEM education research. During the NSB meeting, Dr. Easton shared a preview of some of the priorities in education research laid out at the NBES meeting, while Mr. Baron reviewed a brief history of the NBES and the creation of IES. There was discussion at the NSB meeting of ways the two organizations could collaborate. Dr. Hanushek commented that by involving ex officio members more deeply, the NBES has been able to enhance the interchange between IES and other agencies involved in education research.
Dr. Geary referred to delays in communicating decisions to grantees. He said that in contrast to the NIH process—in which reviews take place at the time scores are received and are followed within a couple of weeks by written remarks, with a decision following at a later date—IES applicants are not informed until final decisions are made, 3 months after applications are submitted and reviews have occurred. As a result, information is not available in time for applicants to revise their applications or to prepare for the next application cycle.
Anne Ricciuti, Deputy Director for Science, leads the Standards and Review Office, which oversees the peer review process. Dr. Ricciuti responded that the Office is exploring approaches to hasten the return of scores and summary statements to applicants before the funding decision. Dr. Long emphasized that knowing about funding decisions earlier is especially important, as many projects are tied to the school year; the new online scoring technology should be able to assist in this process. Dr. Ricciuti agreed and added that funding decisions are not supposed to go public until congressional offices are notified, but IES does not believe that this would preclude providing scores and feedback prior to the funding decision, and it is looking into the issue further.
A conversation ensued, initiated by Mr. Baron, about the feasibility of arranging formal discussions with the WWC about changes and improvements in their review process. Dr. Hanushek expressed optimism about the work of the WWC, adding that some initial changes had been made, although a list of the most pressing concerns might be presented. Dr. Shaywitz agreed that questions about WWC-sponsored research methodologies are not surprising but that outreach is one of IES's most important projects and should constitute a separate agenda item. Dr. Okagaki suggested that this conversation might be postponed until appointment of a new commissioner for evaluation, which was supported by Dr. Easton.
Dr. Hanushek adjourned the meeting at 12:45 p.m.
The National Board for Education Sciences is a Federal advisory committee chartered by Congress, operating under the Federal Advisory Committee Act (FACA; 5 U.S.C., App. 2). The Board provides advice to the Director on the policies of the Institute of Education Sciences. The findings and recommendations of the Board do not represent the views of the Agency, and this document does not represent information approved or disseminated by the Department of Education.