Abstract
This study investigated how Preservice Elementary Teachers (PETs) collaborated with ChatGPT on Analyzing and Interpreting Student Responses (AISR) tasks. These tasks required PETs to analyze and interpret K–12 students’ responses to items targeting counterintuitive scientific mechanisms. This study yielded three findings. First, as PETs gained experience, they developed more positive views of ChatGPT as a pedagogical tool. Second, PETs employed prompt-engineering strategies in three categories: contextualization, iteration, and decomposition. Third, based on PETs’ task responses, we developed an AISR proficiency model to evaluate three key dimensions—understanding of disciplinary knowledge, understanding of student ideas, and recognizing the value of students’ intuitive ideas. Analyses of PETs’ task responses and their exchanges with ChatGPT showed the following: for understanding of disciplinary knowledge, PETs often struggled to make sense of ChatGPT’s lengthy outputs, which obscured the central scientific idea with extraneous details; for understanding of student ideas, both PETs and ChatGPT tended to adopt an evaluative orientation, classifying student responses by their degree of correctness and completeness; for recognizing the value of students’ intuitive ideas, designing instruction that productively uses these ideas remained challenging for both PETs and ChatGPT. This study’s implications for teacher education are twofold. First, PETs’ effective use of ChatGPT depends on their knowledge of Large Language Models (LLMs), their disciplinary content knowledge, and their pedagogical content knowledge. Second, LLM-supported, task-based learning shows promise as a personalized approach to developing formative assessment and lesson-planning practices in preservice teacher education.
| Original language | English |
|---|---|
| Article number | 27 |
| Journal | Disciplinary and Interdisciplinary Science Education Research |
| Volume | 7 |
| Issue number | 1 |
| DOIs | |
| State | Published - Nov 25, 2025 |
| Externally published | Yes |
Scopus Subject Areas
- Pharmacology
- Social Sciences (miscellaneous)
Keywords
- Formative assessment
- Large language models
- Scientific mechanisms