Science, Technology, Engineering, and Mathematics (STEM) Education
The Development of an Intelligent Pedagogical Agent for Physical Science Inquiry Driven by Educational Data Mining
Purpose: In the early 2000s, a key educational priority was to use summative assessment both as a measure of learner progress and as an incentive for school improvement. At the same time, many educators and researchers supported also using formative assessment, or assessment for screening, monitoring and diagnosis to aid ongoing learning throughout the academic year. In this project, the research team created a new educational technology tool intended to improve student writing performance and writing assessments. The Assess-As-You-Go Writing Assistant is an online writing environment that, via a combination of tagging, social networking, and natural language processing technologies, gives learners constant feedback on their writing in the form of on-demand, as-you-go formative assessment. The Writing Assistant tracks individual learner progress, the progress of cohorts of students through the system, and the progress of individuals in relation to cohorts—thus providing summative assessment data intended to meet teacher, school, parent, and community accountability requirements.
Project Activities: During phase 1 of the project, the team focused on software development, building on a foundation of available web-based, open-source writing tools. Throughout this initial phase, researchers designed and improved the Writing Assistant through iterative development, using laboratory trials, beta testing sites, and field sites. In phase 2, researchers pilot tested the Writing Assistant with grade 8 teachers and their students in authentic settings. The researchers planned to train the teachers in an intensive summer institute, followed by supported in-school classroom use during the fall. In phase 3, the researchers proposed to conduct an expanded field test with 30 teachers and their students to assess feasibility of implementation, develop and validate the operational aspects of the Writing Assistant (multiple points of formative assessment by people, plus machine assessment), and explore the potential effects of the intervention on classroom writing instruction, teacher practice, and student outcomes.
THE FOLLOWING CONTENT DESCRIBES THE PROJECT AT THE TIME OF FUNDING
Setting: This project will take place in grade 8 classrooms in Illinois.
Population: Participants include grade 8 teachers and their students. In phase 2, 12 grade 8 teachers and their students will participate in a pilot study. In phase 3, 30 teachers and their students will participate in an expanded field test.
Intervention: The Assess-As-You-Go Writing Assistant will be a web-based learning environment where students can create written texts and receive ongoing feedback from the system itself through natural language processing technologies, as well as from teachers, peers, and experts using social networking and tagging technologies. In addition, the system will include supporting professional development materials and products necessary for implementation of the intervention.
Research Design and Methods: The first phase will be primarily devoted to software development. This process will be grounded in an iterative cycle of research, development, and testing informed by ongoing formative data collection (using laboratory trials, beta testing sites, and field sites). In the second phase of the project, limited pilot testing of the intervention and supporting professional development activities will occur with 12 grade 8 teachers and their students. Information gained from this process will be used to refine the intervention and professional development. In the final year of the project, the team will carry out an evaluation aimed at collecting pilot data on feasibility of implementation in authentic educational delivery settings and on the Writing Assistant's promise to generate intended outcomes, including increased access to useful feedback in the writing process, improved writing performance, and student and teacher satisfaction.
Control Condition: Within each school, classrooms will be randomly assigned to the treatment or control condition in the fall of 2011. Control students will not receive the intervention. Researchers will administer a pre-test and a post-test to both the control and treatment groups.
Key Measures: Researchers will use a standardized persuasive-essay writing prompt. Key outcomes will be assessed with researcher-developed writing measures.
Data Analytic Strategy: Education experts will review student writing products and will code features indicative of specific education outcomes. This information will enable development of a machine learning algorithm that will underpin the Writing Assistant. A successful algorithm will be defined by high correlation between expert classifications and those produced by the Writing Assistant. Data collected during the development phase will be synthesized and reported back to the design team through regularly iterated cycles of planning and refinement. In phase 2, observations and other data collected at the field sites will be used to develop a process for assessing fidelity of implementation and to collect initial assessments of feasibility and potential effectiveness. In phase 3, teacher plans and resulting implementation data will serve as the basis for feasibility assessment, testing and developing the machine algorithm, and further refinement of the software, professional development materials, and process. Student outcome data will be analyzed using hierarchical linear modeling, where level 1 is the student (using student characteristics such as free lunch status, gender, and race as predictors at this level) and level 2 is the classroom (using treatment/control status along with any other measurable characteristics of the class and classroom teacher).
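The expert-versus-machine validation step described above can be sketched in a few lines. This is a hypothetical illustration only: the scores, the number of essays, and the companion exact-agreement statistic are invented for the sketch; the project's actual rubric, sample sizes, and success thresholds are not specified in this abstract.

```python
# Hypothetical sketch: comparing expert holistic ratings with machine scores
# from the Writing Assistant. All values below are invented for illustration.
from math import sqrt

expert_scores  = [4, 2, 5, 3, 6, 1, 4, 5, 2, 3]   # expert ratings (hypothetical)
machine_scores = [4, 3, 5, 3, 5, 1, 4, 4, 2, 3]   # machine ratings (hypothetical)

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(expert_scores, machine_scores)
# Exact-agreement rate, a common companion statistic in essay-scoring studies
exact = sum(e == m for e, m in zip(expert_scores, machine_scores)) / len(expert_scores)

print(f"expert-machine correlation: r = {r:.2f}")   # r = 0.94
print(f"exact agreement: {exact:.0%}")              # 70%
```

In practice a study of this kind would also report chance-corrected agreement (e.g., quadratic-weighted kappa), but the correlation criterion is the one named in the project description.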
ERIC Citations: Available citations for this award are indexed in ERIC.
Additional Online Resources and Information:
Cope, B., Kalantzis, M., & Magee, L. (2011). Towards a Semantic Web: Connecting Knowledge in Academic Research. Cambridge, UK: Woodhead Publishing.
Kalantzis, M., & Cope, B. (2012). Literacies. Cambridge, UK: Cambridge University Press.
Kalantzis, M., & Cope, B. (2012). New Learning: Elements of a Science of Education (2nd ed.). Cambridge, UK: Cambridge University Press.
Cope, B., & Kalantzis, M. (2015). Assessment and pedagogy in the era of machine-mediated learning. Education as Social Construction: Contributions to Theory, Research, and Practice, 350–374.
Cope, B., & Kalantzis, M. (2017). Conceptualizing e-learning. In e-Learning Ecologies (pp. 1–45). Routledge.
Cope, B., Kalantzis, M., & Abrams, S. S. (2017). Meaning-making and learning in the era of digital text. Remixing Multiliteracies: Theory and Practice from New London to New Times, 35–49.
Kalantzis, M., & Cope, B. (2011). The Work of Writing in the Age of Its Digital Reproducibility. In S. S. Abrams & J. Rowsell (Eds.), Rethinking Identity and Literacy Education in the 21st Century (pp. 40–87). New York: Teachers College Press.
Kalantzis, M., & Cope, B. (2015). Learning and new media. The Sage Handbook of Learning, 373–387.
Kalantzis, M., & Cope, B. (2016). New Media and Productive Diversity in Learning. In Diversity in der Lehrerinnenbildung (pp. 310–325).
Smith, A., Cope, B., & Kalantzis, M. (2017). The quantified writer. Handbook of Writing, Literacies, and Education in Digital Cultures, 63.
Smith, A., & Kennett, K. (2017). Multimodal Meaning: Discursive Dimensions of e-Learning. In e-Learning Ecologies (pp. 88–117). Routledge.
Cope, B., & Kalantzis, M. (2013). New Media, New Learning and New Assessments. E-Learning and Digital Media, 10(4), 328–331.
Cope, B., & Kalantzis, M. (2015). Sources of evidence-of-learning: Learning and assessment in the era of big data. Open Review of Educational Research, 2(1), 194–217.
Cope, B., & Kalantzis, M. (2015). Interpreting Evidence-of-Learning: Educational research in the era of big data. Open Review of Educational Research, 2(1), 218–239.
Cope, B., & Kalantzis, M. (2016). Big data comes to school: Implications for learning, assessment, and research. AERA Open, 2(2), 2332858416641907.
Cope, B., & Kalantzis, M. (2019). Education 2.0: Artificial Intelligence and the End of the Test. Beijing International Review of Education, 1(2–3), 528–543.
Cope, B., Kalantzis, M., McCarthey, S., Vojak, C., & Kline, S. (2011). Technology-Mediated Writing Assessments: Principles and Processes. Computers and Composition, 28, 79–96.
Magnifico, A. M., Woodard, R., & McCarthey, S. (2019). Teachers as co-authors of student writing: How teachers' initiating texts influence response and revision in an online space. Computers and Composition, 52, 107–131.
Olmanson, J. D., Kennett, K. S., & Cope, B. (2015). The Techno-Pedagogical Pivot: Designing and Implementing a Digital Writing Tool. World Academy of Science, Engineering and Technology, International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering, 9(7), 2273–2276.
Olmanson, J., Kennett, K., Magnifico, A., McCarthey, S., Searsmith, D., Cope, B., & Kalantzis, M. (2016). Visualizing revision: Leveraging student-generated between-draft diagramming data in support of academic writing development. Technology, Knowledge and Learning, 21(1), 99–123.
Vojak, C., Kline, S., Cope, B., McCarthey, S., & Kalantzis, M. (2011). New Spaces and Old Places: An Analysis of Writing Assessment Software. Computers and Composition, 28(2), 97–111.
Montebello, M., Cope, B., Kalantzis, M., Tzirides, A. O., Haniya, S., Amina, T., ... & Chen, M. (2018, April). Critical thinking through a reflexive platform. In 2018 17th International Conference on Information Technology Based Higher Education and Training (ITHET) (pp. 1–6). IEEE.