IES Blog

Institute of Education Sciences

New Study on U.S. Eighth-Grade Students’ Computer Literacy

In the 21st-century global economy, computer literacy and skills are an important part of an education that prepares students to compete in the workplace. The results of a recent assessment show us how U.S. students compare to some of their international peers in the areas of computer information literacy and computational thinking.

In 2018, the U.S. participated for the first time in the International Computer and Information Literacy Study (ICILS), along with 13 other education systems around the globe. The ICILS is a computer-based international assessment of eighth-grade students that measures outcomes in two domains: computer and information literacy (CIL)[1] and computational thinking (CT).[2] It compares U.S. students’ skills and experiences using technology to those of students in other education systems and provides information on teachers’ experiences, school resources, and other factors that may influence students’ CIL and CT skills.

ICILS is sponsored by the International Association for the Evaluation of Educational Achievement (IEA) and is conducted in the United States by the National Center for Education Statistics (NCES).

The newly released U.S. Results from the 2018 International Computer and Information Literacy Study (ICILS) web report provides information on how U.S. students performed on the assessment compared with students in other education systems and describes students’ and teachers’ experiences with computers.


U.S. Students’ Performance

In 2018, U.S. eighth-grade students’ average score in CIL was higher than the average of participating education systems[3] (figure 1), while the U.S. average score in CT was not measurably different from the average of participating education systems.

 


Figure 1. Average computer and information literacy (CIL) scores of eighth-grade students, by education system: 2018

p < .05. Significantly different from the U.S. estimate at the .05 level of statistical significance.

¹ Met guidelines for sample participation rates only after replacement schools were included.

² National Defined Population covers 90 to 95 percent of National Target Population.

³ Did not meet the guidelines for a sample participation rate of 85 percent and is not included in the international average.

⁴ Nearly met guidelines for sample participation rates after replacement schools were included.

⁵ Data collected at the beginning of the school year.

NOTE: The ICILS computer and information literacy (CIL) scale ranges from 100 to 700. The ICILS 2018 average is the average of all participating education systems meeting international technical standards, with each education system weighted equally. Education systems are ordered by their average CIL scores, from largest to smallest. Italics indicate the benchmarking participants.

SOURCE: International Association for the Evaluation of Educational Achievement (IEA), the International Computer and Information Literacy Study (ICILS), 2018.


 

Given the importance of students’ home environments in developing CIL and CT skills (Fraillon et al. 2019), students were asked how many computers (desktop or laptop) they had at home. In the United States, eighth-grade students with two or more computers at home performed better in both CIL and CT than their U.S. peers with fewer computers (figure 2). This pattern was also observed in all participating countries and education systems.

 


Figure 2. Average computational thinking (CT) scores of eighth-grade students, by student-reported number of computers at home and education system: 2018

p < .05. Significantly different from the U.S. estimate at the .05 level of statistical significance.

¹ Met guidelines for sample participation rates only after replacement schools were included.

² National Defined Population covers 90 to 95 percent of National Target Population.

³ Did not meet the guidelines for a sample participation rate of 85 percent and is not included in the international average.

⁴ Nearly met guidelines for sample participation rates after replacement schools were included.

NOTE: The ICILS computational thinking (CT) scale ranges from 100 to 700. The number of computers at home includes desktop and laptop computers. Students with fewer than two computers include students reporting having “none” or “one” computer. Students with two or more computers include students reporting having “two” or “three or more” computers. The ICILS 2018 average is the average of all participating education systems meeting international technical standards, with each education system weighted equally. Education systems are ordered by their average scores of students with two or more computers at home, from largest to smallest. Italics indicate the benchmarking participants.

SOURCE: International Association for the Evaluation of Educational Achievement (IEA), the International Computer and Information Literacy Study (ICILS), 2018.


 

U.S. Students’ Technology Experiences

Among U.S. eighth-grade students, 72 percent reported using the Internet to do research in 2018, and 56 percent reported completing worksheets or exercises using information and communications technology (ICT)[4] every school day or at least once a week. Both of these percentages were higher than the respective ICILS averages (figure 3). The learning activities least frequently reported by U.S. eighth-grade students were using coding software to complete assignments (15 percent) and making video or audio productions (13 percent).

 


Figure 3. Percentage of eighth-grade students who reported using information and communications technology (ICT) every school day or at least once a week, by activity: 2018

p < .05. Significantly different from the U.S. estimate at the .05 level of statistical significance.

¹ Did not meet the guidelines for a sample participation rate of 85 percent and is not included in the international average.

NOTE: The ICILS 2018 average is the average of all participating education systems meeting international technical standards, with each education system weighted equally. Activities are ordered by the percentages of U.S. students reporting using information and communications technology (ICT) for the activities, from largest to smallest.

SOURCE: International Association for the Evaluation of Educational Achievement (IEA), the International Computer and Information Literacy Study (ICILS), 2018.


 

Browse the full U.S. Results from the 2018 International Computer and Information Literacy Study (ICILS) web report to learn more about how U.S. students compare with their international peers in their computer literacy skills and experiences.

 

By Yan Wang, AIR, and Linda Hamilton, NCES

 

[1] CIL refers to “an individual's ability to use computers to investigate, create, and communicate in order to participate effectively at home, at school, in the workplace, and in society” (Fraillon et al. 2019).

[2] CT refers to “an individual’s ability to recognize aspects of real-world problems which are appropriate for computational formulation and to evaluate and develop algorithmic solutions to those problems so that the solutions could be operationalized with a computer” (Fraillon et al. 2019). CT was an optional component in 2018. Nine out of 14 ICILS countries participated in CT in 2018.

[3] U.S. results are not included in the ICILS international average because the U.S. school-level response rate of 77 percent was below the international requirement of an 85 percent participation rate.

[4] Information and communications technology (ICT) can refer to desktop computers, notebook or laptop computers, netbook computers, tablet devices, or smartphones (except when being used for talking and texting).

 

Reference

Fraillon, J., Ainley, J., Schulz, W., Duckworth, D., and Friedman, T. (2019). IEA International Computer and Information Literacy Study 2018: Assessment Framework. Cham, Switzerland: Springer. Retrieved October 7, 2019, from https://link.springer.com/book/10.1007%2F978-3-030-19389-8.

Equity Through Innovation: New Models, Methods, and Instruments to Measure What Matters for Diverse Learners

In today’s diverse classrooms, it is both challenging and critical to gather accurate and meaningful information about student knowledge and skills. Certain populations present unique challenges in this regard – for example, English learners (ELs) often struggle on assessments delivered in English. On “typical” classroom and state assessments, it can be difficult to parse how much of an EL student’s performance stems from content knowledge, and how much from language learner status. This lack of clarity makes it harder to make informed decisions about what students need instructionally, and often results in ELs being excluded from challenging (or even typical) coursework.

Over the past several years, NCER has funded several grants to design innovative assessments that collect and deliver better information about what ELs know and can do across the PK-12 spectrum. This work is producing some exciting results and products.

  • Jason Anthony and his colleagues at the University of South Florida have developed the School Readiness Curriculum Based Measurement System (SR-CBMS), a collection of measures for English- and Spanish-speaking 3- to 5-year-old children. Over the course of two back-to-back Measurement projects, Dr. Anthony’s team co-developed and co-normed item banks in English and Spanish in 13 different domains covering language, math, and science. The assessments are intended for a variety of uses, including screening, benchmarking, progress monitoring, and evaluation. The team used item development and evaluation procedures designed to ensure that both the English and Spanish tests are sociolinguistically appropriate for both monolingual and bilingual speakers.

 

  • Daryl Greenfield and his team at the University of Miami created Enfoque en Ciencia, a computerized adaptive test (CAT) designed to assess Latino preschoolers’ science knowledge and skills. Enfoque en Ciencia is built on 400 Spanish-language items that cover three science content domains and eight science practices. The items were independently translated into four major Spanish dialects and reviewed by a team of bilingual experts and early childhood researchers to create a consensus translation appropriate for 3- to 5-year-olds. The assessment is delivered via touch screen and is equated with an English-language version of the same test, Lens on Science. (A brief sketch of how adaptive item selection works in general appears after this list.)

  • A University of Houston team led by David Francis is engaged in a project to study the factors that affect assessment of vocabulary knowledge among ELs in unintended ways. Using a variety of psychometric methods, the team is exploring data from the Word Generation Academic Vocabulary Test to identify features that affect item difficulty and to examine whether those features operate similarly for students currently classified as ELs, students formerly classified as ELs, and students who have never been classified as ELs. The team will also produce a set of recommendations for improving the accuracy and reliability of extant vocabulary assessments. (A toy illustration of this kind of check also appears after this list.)

 

  • Researchers led by Rebecca Kopriva at the University of Wisconsin recently completed work on a set of technology-based, classroom-embedded formative assessments intended to support and encourage teachers to teach more complex math and science to ELs. The assessments use multiple methods to reduce the overall language load typically associated with challenging content in middle school math and science. The tools use auto-scoring techniques and are capable of providing immediate feedback to students and teachers in the form of specific, individualized, data-driven guidance to improve instruction for ELs.
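
For readers unfamiliar with how a computerized adaptive test chooses its questions, here is a minimal sketch of the general mechanic referenced in the Enfoque en Ciencia entry above: after each response, the ability estimate moves up or down, and the next item is drawn from the bank to match it. The item bank, difficulty values, and update rule below are all invented for illustration; this is not the study's actual algorithm, which would typically rely on item response theory.

```python
# Minimal sketch of adaptive item selection (illustration only; not the
# actual Enfoque en Ciencia algorithm, which would typically use item
# response theory for ability estimation).

def pick_next_item(bank, ability, answered):
    """Choose the unanswered item whose difficulty is closest to the
    current ability estimate, a common CAT heuristic."""
    candidates = [item for item in bank if item["id"] not in answered]
    return min(candidates, key=lambda item: abs(item["difficulty"] - ability))

def run_cat(bank, respond, n_items=5, ability=0.0, step=0.5):
    """Administer n_items adaptively; respond(item) returns True/False."""
    answered = set()
    for _ in range(n_items):
        item = pick_next_item(bank, ability, answered)
        answered.add(item["id"])
        # Nudge the estimate up after a correct answer and down after an
        # error, shrinking the step so the estimate settles over time.
        ability += step if respond(item) else -step
        step *= 0.7
    return ability

# Hypothetical six-item bank with difficulties on a logit-like scale.
bank = [{"id": k, "difficulty": d}
        for k, d in enumerate([-1.5, -0.8, -0.2, 0.3, 0.9, 1.6])]

# Toy respondent who answers correctly whenever an item sits below 0.4.
estimate = run_cat(bank, respond=lambda item: item["difficulty"] < 0.4)
print(f"Final ability estimate: {estimate:.2f}")
```

Because each child sees items matched to his or her current estimate, an adaptive test can settle on a usable score with far fewer items than a fixed-form test, a practical advantage when assessing 3- to 5-year-olds.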
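
Similarly, the kind of question the University of Houston team is asking can be made concrete with a toy example. The sketch below, with invented data and variable names (correct, cognate, el), fits a logistic regression in which a nonzero interaction term would suggest that an item feature relates to difficulty differently for EL and non-EL students; it is not the team's actual analysis.

```python
# Toy illustration of checking whether an item feature operates similarly
# across groups (invented data; not the Houston team's actual analysis).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical item-response records: correct = 1 if answered correctly,
# cognate = item feature of interest, el = English learner status.
df = pd.DataFrame({
    "correct": [1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0],
    "cognate": [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
    "el":      [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
})

# Logistic regression with an interaction: a sizable cognate:el coefficient
# would suggest the feature affects item difficulty differently by group.
results = smf.logit("correct ~ cognate * el", data=df).fit(disp=False)
print(results.params)
```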

 

By leveraging technology, developing new item formats and scoring models, and expanding the linguistic repertoire students may access, these teams have found ways to allow ELs – and all students – to show what really matters: their academic content knowledge and skills.

 

Written by Molly Faulkner-Bond (former NCER program officer).

 

CAPR: Answers to Pressing Questions in Developmental Education

Since 2014, IES has funded the Center for the Analysis of Postsecondary Readiness (CAPR) to answer questions about the rapidly evolving landscape of developmental education at community colleges and open-access four-year institutions. Through three major studies, CAPR is providing new insights into how colleges are reforming developmental education and how those reforms affect student outcomes:

  • A survey and interviews about developmental education practices and reform initiatives
  • An evaluation of the use of multiple measures for assessing college readiness
  • An evaluation of math pathways

Preliminary results from these studies indicate that some reforms help more students finish their developmental requirements and go on to do well in college-level math and English.

National Study of Developmental Education Policies and Practices

CAPR has documented widespread reform in developmental education at two- and four-year colleges through a national survey and interviews on developmental education practices and reforms. Early results from the survey show that colleges are moving away from relying solely on standardized tests for placing students into developmental courses. Colleges are also using new approaches to delivering developmental education, including shortening developmental sequences by compressing or combining courses, using technology to deliver self-paced instruction, and placing developmental students into college-level courses with extra supports, an approach often called corequisite remediation.

Developmental Math Instructional Methods in Public Two-Year Colleges (Percentages of Colleges Implementing Specific Reform Strategies)

Notes: Percentages among two-year public colleges that reported offering developmental courses. Colleges were counted as using an instructional method if they used it in at least two course sections. Categories are not mutually exclusive.

Evaluation of Developmental Math Pathways and Student Outcomes

CAPR has teamed up with the Charles A. Dana Center at the University of Texas at Austin to evaluate the Dana Center Mathematics Pathways (DCMP) curriculum at four community colleges in Texas. The math pathways model tailors math courses to particular majors, with a statistics pathway for social science majors, a quantitative reasoning pathway for humanities majors, and an algebra-to-calculus pathway for STEM majors. DCMP originally compressed developmental math into one semester, though the Dana Center now recommends corequisite models. Instructors seek to engage students by delving deeply into math concepts, focusing on real-world problems, and having students work together to develop solutions.

Interim results show that larger percentages of students assigned to DCMP (versus the traditional developmental sequence) enrolled in and passed developmental math. More of the DCMP students also took and passed college-level math, fulfilling an important graduation requirement. After three semesters, 25 percent of program group students passed a college-level math course, compared with 17 percent of students assigned to traditional remediation.

Evaluation of Alternative Placement Systems and Student Outcomes (aka Multiple Measures)

CAPR is also studying the impact of using a combination of measures—such as high school GPA, years out of high school, and placement test scores—to predict whether students belong in developmental or college-level courses. Early results from the multiple measures study show that, in English and to a lesser extent in math, the multiple measures algorithms placed more students into college-level courses than a single test score did, and that more of those students passed the courses.
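
As a concrete illustration of what a multiple-measures placement rule can look like, here is a minimal sketch assuming a simple weighted composite of the three measures named above. The weights and cutoff are invented for illustration; CAPR's actual algorithms are derived from colleges' historical student records.

```python
# Illustrative multiple-measures placement rule (hypothetical weights and
# cutoff; not CAPR's actual algorithm). A composite index combines high
# school GPA, years since high school, and a placement test score, and
# students above the cutoff are placed directly into college-level courses.

def placement_index(hs_gpa, years_out, test_score,
                    w_gpa=1.2, w_years=-0.05, w_test=0.01, intercept=-3.0):
    """Weighted composite of the three measures (weights are made up)."""
    return intercept + w_gpa * hs_gpa + w_years * years_out + w_test * test_score

def place(hs_gpa, years_out, test_score, cutoff=0.5):
    """Return 'college-level' or 'developmental' based on the composite."""
    index = placement_index(hs_gpa, years_out, test_score)
    return "college-level" if index >= cutoff else "developmental"

# A student whose test score alone might send them to remediation can clear
# the cutoff once a strong high school record is taken into account.
print(place(hs_gpa=3.4, years_out=1, test_score=45))  # college-level
print(place(hs_gpa=2.1, years_out=8, test_score=45))  # developmental
```

In practice, such weights are estimated from historical data, for example by regressing later course success on the measures, which is why these placement systems are often described as data-driven.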

 

College-Level English Course Placement, Enrollment, and Completion in CAPR’s Multiple Measures Study (Percentages Compared Across Placement Conditions)

 

College-Level Math Course Placement and Completion in CAPR’s Multiple Measures Study

Looking Ahead to the Future of Developmental Education

These early results from CAPR’s evaluations of multiple measures and math pathways suggest that those reforms are likely to be important pieces of future developmental education systems. CAPR will release final results from its three studies in 2019 and 2020.

Guest blog by Nikki Edgecombe and Alexander Mayer

Nikki Edgecombe is the principal investigator of the Center for the Analysis of Postsecondary Readiness, an IES-funded center led by the Community College Research Center (CCRC) and MDRC, and a senior research scientist at CCRC. Alexander Mayer is the co-principal investigator of CAPR and deputy director of postsecondary education at MDRC.

Computerized Preschool Language Assessment Extends to Toddlers

Identifying young children with language delays can improve later outcomes

Language is a core ability that children must master for success both in and out of the classroom. Extensive research shows that performance on many tasks, including math, depends on linguistic skill and that early language skills predict school readiness and academic success. The ability to identify language delays at early ages is crucial for targeting effective interventions.

Enter the QUILS.

In 2011, the National Center for Education Research (NCER) at IES funded a 4-year grant to Dr. Roberta Golinkoff (University of Delaware) and Drs. Kathy Hirsh-Pasek (Temple University) and Jill de Villiers (Smith College) to develop a valid and reliable computer-based language assessment for preschoolers aged 3 to 5. The resulting product was the Quick Interactive Language Screener (QUILS), a computerized tool that measures vocabulary, syntax, and language acquisition skills. The assessment captures both what a child knows about language and how a child learns, and it automatically provides results and reports to the teacher.

The preschool version of QUILS is now being used by early childhood educators, administrators, reading specialists, speech-language pathologists, and other early childhood professionals working with young children to identify language delays. The QUILS is also being used in other learning domains. For example, a new study relied on the QUILS, among other measures, to examine links between approaches to learning and science readiness in over 300 Head Start students aged 3 to 5 years.

QUILS is now being revised for use with toddlers. In 2016, the National Center for Special Education Research (NCSER) funded a 3-year study to adapt the QUILS for children aged 24 to 36 months. The researchers have been testing the tool in both laboratory and natural settings (child care centers, homes, and Early Head Start programs) to determine which assessment items to use in the toddler version of QUILS. Ultimately, these researchers aim to develop a valid and reliable assessment to identify children with language delays so that appropriate interventions can begin early.

By Amanda M. Dettmer, AAAS Science & Technology Policy Fellow Sponsored by the American Psychological Association Executive Branch Science Fellowship