
REL Midwest Ask A REL Response

Data Use

October 2020

Question:

What research or resources are available on metrics that can be used to measure student engagement?



Response:

Following an established Regional Educational Laboratory (REL) Midwest protocol, we conducted a search for research reports, descriptive studies, and literature reviews on metrics that can be used to measure student engagement. In particular, we focused on identifying resources at the secondary and postsecondary education levels. For details on the databases and sources, keywords, and selection criteria used to create this response, please see the Methods section at the end of this memo.

Below, we share a sampling of the publicly accessible resources on this topic. References are listed in alphabetical order, not necessarily in order of relevance. The search conducted is not comprehensive; other relevant references and resources may exist. For each reference, we provide an abstract, excerpt, or summary written by the study’s author or publisher. We have not evaluated the quality of these references, but provide them for your information only.

Research References

Burch, G. F., Heller, N. A., Burch, J. J., Freed, R., & Steed, S. A. (2015). Student engagement: Developing a conceptual framework and survey instrument. Journal of Education for Business, 90(4), 224–229. Retrieved from https://eric.ed.gov/?id=EJ1059425

From the ERIC abstract: “Student engagement is considered to be among the better predictors of learning, yet there is growing concern that there is no consensus on the conceptual foundation. The authors propose a conceptualization of student engagement grounded in A. W. Astin’s (1984) Student Involvement Theory and W. A. Kahn’s (1990) employee engagement research where student engagement is built on four components: emotional engagement, physical engagement, cognitive engagement in class, and cognitive engagement out of class. Using this framework the authors develop and psychometrically test a student engagement survey that can be used by researchers to advance engagement theory and by business schools to monitor continuous improvement.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be found through university or public library systems.

Fredricks, J., McColskey, W., Meli, J., Mordica, J., Montrosse, B., & Mooney, K. (2011). Measuring student engagement in upper elementary through high school: A description of 21 instruments (Issues & Answers, REL 2011-No. 098). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southeast. Retrieved from https://eric.ed.gov/?id=ED514996

From the ERIC abstract: “Researchers, educators, and policymakers are focusing more on student engagement as the key to addressing low achievement, student boredom and alienation, and high dropout rates (Fredricks, Blumenfeld, and Paris 2004). As schools and districts seek to increase engagement, it is important for them to understand how it has been defined and to assess the options for measuring it. One challenge educators and evaluators face in measuring engagement is determining the appropriateness of the available instruments, especially given limited time to review the literature. Instruments for measuring engagement also reflect different disciplinary perspectives and theoretical frameworks and are thus not easily compared. To address the information needs of education professionals, this report describes the 21 instruments for measuring engagement in upper elementary through high school identified through a literature review. The report does not include a technical review of the quality of each measure, nor does it recommend or identify strengths or weaknesses of particular instruments.”

Hart, S. R., Stewart, K., & Jimerson, S. R. (2011). The Student Engagement in Schools Questionnaire (SESQ) and the Teacher Engagement Report Form-New (TERF-N): Examining the preliminary evidence. Contemporary School Psychology, 15(1), 67–79. Retrieved from https://eric.ed.gov/?id=EJ934707

From the ERIC abstract: “Student engagement in school is an important construct that has been associated with student success. For the current study, researchers examined the psychometrics of the Student Engagement in Schools Questionnaire (SESQ) and the Teacher Engagement Report Form (TERF-N) of student engagement. The results revealed that both the SESQ and the TERF-N have good internal consistency. The exploratory factor analysis results for the SESQ demonstrated alignment with the theoretically driven development (five factors: Affective Engagement-Liking for Learning, Affective Engagement-Liking for School, Behavioral Engagement-Effort & Persistence, Behavioral Engagement-Extracurricular, and Cognitive Engagement) whereas the results for the TERF-N were more complicated. The items did not load as conceptualized in a 3-factor model, but instead loaded on one, General Engagement factor. Finally, while it may be that teachers viewed a student’s level of engagement as a global construct, the correlations between the measures indicated that they might be used to provide helpful, convergent information obtained from a variety of sources regarding a student’s levels of engagement. Future directions and implications for school psychologists are discussed.”

Kuh, G. D. (2009). The National Survey of Student Engagement: Conceptual and empirical foundations. New Directions for Institutional Research, 2009(141), 5–20. Retrieved from https://eric.ed.gov/?id=EJ833656

From the ERIC abstract: “When the history of American higher education is rewritten years from now, one of the storylines of the first decade of the twenty-first century likely will be the emergence of student engagement as an organizing construct for institutional assessment, accountability, and improvement efforts. The engagement premise is straightforward and easily understood: the more students study a subject, the more they know about it, and the more students practice and get feedback from faculty and staff members on their writing and collaborative problem solving, the deeper they come to understand what they are learning and the more adept they become at managing complexity, tolerating ambiguity, and working with people from different backgrounds or with different views. Engaging in a variety of educationally productive activities also builds the foundation of skills and dispositions people need to live a productive, satisfying life after college. Said another way, engagement helps to develop habits of the mind and heart that enlarge their capacity for continuous learning and personal development. In this chapter, the author briefly summarizes the history of the engagement concept and the circumstances that led to development of the National Survey of Student Engagement (NSSE). He then reviews the substance and evolution of NSSE and its impact on institutional researchers.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be found through university or public library systems.

Lovelace, M. D., Reschly, A. L., Appleton, J. J., & Lutz, M. E. (2014). Concurrent and predictive validity of the Student Engagement Instrument. Journal of Psychoeducational Assessment, 32(6), 509–520. Retrieved from https://eric.ed.gov/?id=EJ1035535

From the ERIC abstract: “The Student Engagement Instrument (SEI) is a self-report measure of cognitive and affective engagement with school. Prior SEI validation studies have focused primarily on construct validity through analyses of internal consistency, factor analysis, and measurement invariance. Results are presented here from a two-pronged study of the criterion validity of SEI scores. Using a middle school sample (N = 35,900), concurrent validity was assessed through analyses of group differences in SEI scores across student subgroups expected to differ in cognitive and affective engagement levels: behaviorally disengaged versus non-disengaged, high-risk versus low-risk disability status, and high versus low academic achievement. Next, through multiple logistic regression analyses, the 4-year predictive validity of SEI scores for on-time graduation and dropout was assessed in a cohort of first-time ninth graders (N = 11,588). Nearly all SEI factors demonstrated directionally consistent associations with each criterion, including considerable long-term predictive associations with both dropout and on-time graduation.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be found through university or public library systems.

McClenney, K. M. (2007). Research update: The Community College Survey of Student Engagement. Community College Review, 35(2), 137–146. Retrieved from https://eric.ed.gov/?id=EJ774689

From the ERIC abstract: “The Community College Survey of Student Engagement, established in 2001 and administered by the Community College Leadership Program at the University of Texas at Austin, provides systematically collected data on the experiences of community college students. This article describes what has been learned through the survey to date and notes plans for the future, including the development of the Survey of Entering Student Engagement (SENSE) for community colleges.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be found through university or public library systems.

McClenney, K., Marti, C. N., & Adkins, C. (2012). Student engagement and student outcomes: Key findings from “CCSSE” validation research. Austin, TX: Community College Survey of Student Engagement. Retrieved from https://eric.ed.gov/?id=ED529076

From the ERIC abstract: “The findings from 20 years of research on undergraduate education have been unequivocal: The more actively engaged students are—with college faculty and staff, with other students, and with the subject matter they study—the more likely they are to learn, to stick with their studies, and to attain their academic goals. The existing literature, however, focuses almost exclusively on students in four-year colleges and universities. This special report provides summary highlights from a large-scale research project that examined, for the first time, relationships between student engagement and a variety of student outcomes—including academic performance, persistence and attainment—‘in community colleges’. The bottom line for community colleges: ‘Student engagement matters’.”

Moore, K. A., Lippman, L. H., & Ryberg, R. (2015). Improving outcome measures other than achievement. AERA Open, 1(2), 1–25. Retrieved from https://eric.ed.gov/?id=EJ1194865

From the ERIC abstract: “Research indicates that educational, economic, and life success reflect children’s nonacademic as well as academic competencies. Therefore, longitudinal surveys that assess educational progress and success need to incorporate nonacademic measures to avoid omitted variable bias, inform development of new intervention strategies, and support mediating and moderating analyses. Based on a life course model and a whole child perspective, this article suggests constructs in the domains of child health, emotional/psychological development, educational achievement/attainment, social behavior, and social relationships. Four critical constructs are highlighted: self-regulation, agency/motivation, persistence/diligence, and executive functioning. Other constructs that are currently measured need to be retained, including social skills, positive relationships, activities, positive behaviors, academic self-efficacy, educational engagement, and internalizing/emotional well-being. Examples of measures that are substantively and psychometrically robust are provided.”

National Survey of Student Engagement. (2020). Engagement insights: Survey findings on the quality of undergraduate education. Annual results 2019. Bloomington, IN: Indiana University Center for Postsecondary Research. Retrieved from https://eric.ed.gov/?id=ED604974

From the ERIC abstract: “The National Survey of Student Engagement (NSSE) documents dimensions of quality in undergraduate education and provides information and assistance to colleges, universities, and other organizations to improve student learning. Its primary activity is annually surveying college students to assess the extent to which they engage in educational practices associated with high levels of learning and development. The Center for Postsecondary Research at Indiana University’s School of Education administers NSSE, in partnership with the Indiana University Center for Survey Research. This report presents key findings from the 2019 administration of NSSE and its companion survey, the Faculty Survey of Student Engagement (FSSE). NSSE surveyed first-year and senior students attending 531 bachelor’s degree-granting institutions across the United States in spring 2019, while FSSE results came from 120 institutions, almost all of which administered NSSE as well. ‘Annual Results 2019’ also provides findings from a subset of NSSE respondents who answered additional questions about academic advising and from a study of the persistence of first-year students. Engagement Indicators (EIs) and measures of participation in High-Impact Practices (HIPs) summarize key facets of student engagement.”

Reschly, A. L., Betts, J., & Appleton, J. J. (2014). An examination of the validity of two measures of student engagement. International Journal of School & Educational Psychology, 2(2), 106–114. Retrieved from https://eric.ed.gov/?id=EJ1089183

From the ERIC abstract: “This study evaluated the psychometric properties of two measures of student engagement, the Student Engagement Instrument (SEI) and the Motivation-Engagement Scale (MES), with adolescents in the southeastern United States. Confirmatory factor analyses revealed an acceptable fit of the SEI and a relatively poor fit of the MES in this sample. Correlational analyses provided evidence of convergent and divergent validity of SEI and MES factors. In addition, SEI and MES factors were correlated as expected with external measures of academic functioning and school behavior.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be found through university or public library systems.

Waldrop, D., Reschly, A. L., Fraysier, K., & Appleton, J. J. (2019). Measuring the engagement of college students: Administration format, structure, and validity of the Student Engagement Instrument–College. Measurement and Evaluation in Counseling and Development, 52(2), 90–107. Retrieved from https://eric.ed.gov/?id=EJ1211472

From the ERIC abstract: “This study evaluated the psychometric properties of the Student Engagement Instrument–College version (SEI–C) with college students in the southeastern United States. Participants self-selected paper-and-pencil or online administration. Confirmatory factor analysis revealed a modified 5-factor structure. Measurement invariance of the modified 5-factor structure of the SEI–C was assessed across the paper-and-pencil and online samples. Full configural, full metric, partial scalar, and full residual variance invariance were established. The paper-and-pencil and online data were aggregated, and correlational analyses between the 5 SEI–C factors and the 4 higher order factors of the Motivation and Engagement Scale–University/College (MES–UC) provided evidence of convergent and divergent validity. All but 1 of the 20 correlations were statistically significant, and all correlations were in the expected direction. Overall, there is evidence to suggest the appropriateness of extending the SEI upward for use with college students and for collecting data via online or paper-and-pencil administration.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be found through university or public library systems.

Wang, Z., Bergin, C., & Bergin, D. A. (2014). Measuring engagement in fourth to twelfth grade classrooms: The Classroom Engagement Inventory. School Psychology Quarterly, 29(4), 517–535. Retrieved from https://eric.ed.gov/?id=EJ1049459

From the ERIC abstract: “Research on factors that may promote engagement is hampered by the absence of a measure of classroom-level engagement. Literature has suggested that engagement may have 3 dimensions—affective, behavioral, and cognitive. No existing engagement scales measure all 3 dimensions at the classroom level. The Classroom Engagement Inventory (CEI) was developed to fill this gap. In Study 1, exploratory and confirmatory factor analyses were conducted on data from 3,481 students from the 4th to 12th grade. The results suggested a 4-factor model of the CEI. Using these results, in Study 2 several items were revised and data were collected 1 year later from 4th to 12th grade students in the same school district as Study 1. Analyses were conducted on data from 3,560 students after data cleaning. A series of potential models was tested. The final results suggest a 5-factor 24-item CEI: (1) Affective Engagement, (2) Behavioral Engagement—Compliance, (3) Behavioral Engagement—Effortful Class Participation, (4) Cognitive Engagement, and (5) Disengagement. Results advance understanding of the construct of classroom engagement. The CEI fills a significant gap in measurement of engagement. The CEI is classroom level, measures multiple dimensions of engagement, uses self-report, is relatively short, and can be readily administered in classrooms from the 4th to 12th grade.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be found through university or public library systems.

Zhoc, K. C. H., Webster, B. J., King, R. B., Li, J. C. H., & Chung, T. S. H. (2019). Higher Education Student Engagement Scale (HESES): Development and psychometric evidence. Research in Higher Education, 60(2), 219–244. Retrieved from https://eric.ed.gov/?id=EJ1206550

From the ERIC abstract: “This study describes the development and validation of the Higher Education Student Engagement Scale (HESES). The psychometric evaluations of the scale included: (i) factor structure, (ii) internal consistency, and (iii) criterion validity. The HESES was developed based on our proposed five-factor model of student engagement, which was evolved from Finn and Zimmer’s (In: Christenson SL, Reschly AL, Wylie C (eds) Handbook of research on student engagement. Springer, New York, 2012) student engagement model taken into account the distinctive characteristics in higher education. The five main facets of student engagement include: (1) academic engagement, (2) cognitive engagement, (3) social engagement with peers, (4) social engagement with teachers, and (5) affective engagement. The HESES was developed from the 61-item First Year Engagement Scales (FYES). For brevity, it was trimmed into a 28-item scale having regard to the content validity, factor loadings and error variances of the items. The CFA results supported the correlated five-dimensional model with all the dimensions showing high internal consistency based on Cronbach’s alpha coefficients. A multi-group CFA also rendered the structure as gender invariant. Its criterion validity was evidenced by its correlations with different student learning outcomes and more importantly, its predictive power in explaining variances of GPA (15%) and satisfaction of the university experience (29%). Different from the dominant behavioral perspective of student engagement in higher education, the HESES is based on a psychological perspective, streamlining student engagement as students’ level of involvement in the learning process and a multi-faceted construct with academic, cognitive, social and affective dimensions. The implications and merits of the HESES are discussed.”

Note: REL Midwest was unable to locate a link to the full-text version of this resource. Although REL Midwest tries to provide publicly available resources whenever possible, we have included this resource because it may be of interest to you. It may be found through university or public library systems.

Methods

Keywords and Search Strings

The following keywords and search strings were used to search the reference databases and other sources:

  • “Learner engagement” psychometrics

  • “National Survey of Student Engagement”

  • “Student Engagement Instrument”

Databases and Search Engines

We searched ERIC for relevant resources. ERIC, sponsored by the Institute of Education Sciences (IES), is a free online library of more than 1.6 million citations of education research. Additionally, we searched the IES website and Google Scholar.
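
For readers who wish to replicate or extend the ERIC portion of this search programmatically, the sketch below (not part of the original search procedure) shows one way to run the keyword strings listed above against ERIC's public search API. The endpoint, parameter names, and response fields shown are assumptions based on ERIC's API documentation and may change; consult eric.ed.gov for current details before relying on them.

    import requests

    # Assumed public ERIC API endpoint; verify against eric.ed.gov documentation.
    ERIC_API = "https://api.ies.ed.gov/eric/"

    # Keyword strings from the Methods section of this memo.
    SEARCH_STRINGS = [
        '"Learner engagement" psychometrics',
        '"National Survey of Student Engagement"',
        '"Student Engagement Instrument"',
    ]

    def search_eric(query, rows=25):
        """Return basic record information (ERIC ID, title, year) for a query."""
        params = {"search": query, "format": "json", "rows": rows}
        response = requests.get(ERIC_API, params=params, timeout=30)
        response.raise_for_status()
        # The Solr-style structure ("response" -> "docs") is an assumption.
        docs = response.json().get("response", {}).get("docs", [])
        return [
            {
                "id": doc.get("id"),
                "title": doc.get("title"),
                "year": doc.get("publicationdateyear"),
            }
            for doc in docs
        ]

    if __name__ == "__main__":
        for query in SEARCH_STRINGS:
            records = search_eric(query)
            print(f"{query}: {len(records)} records returned")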

Reference Search and Selection Criteria

When searching and reviewing resources, we considered the following criteria:

  • Date of publication: References and resources published in the last 15 years, from 2005 to the present, were included in the search and review.

  • Search priorities of reference sources: Search priority was given to study reports, briefs, and other documents published or reviewed by IES and other federal or federally funded organizations.

  • Methodology: We used the following methodological priorities/considerations in the review and selection of the references: (a) study types (randomized controlled trials, quasi-experiments, surveys, descriptive data analyses, literature reviews, policy briefs, and so forth), generally in this order; (b) target population and samples (e.g., representativeness of the target population, sample size, volunteered or randomly selected), study duration, and so forth; and (c) limitations, generalizability of the findings and conclusions, and so forth.

This memorandum is one in a series of quick-turnaround responses to specific questions posed by educational stakeholders in the Midwest Region (Illinois, Indiana, Iowa, Michigan, Minnesota, Ohio, Wisconsin), which is served by the Regional Educational Laboratory (REL) Midwest at American Institutes for Research. This memorandum was prepared by REL Midwest under a contract with the U.S. Department of Education's Institute of Education Sciences (IES), Contract ED-IES-17-C-0007, administered by American Institutes for Research. Its content does not necessarily reflect the views or policies of IES or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.