ChatGPT can significantly improve short-term performance on writing tasks but may promote "metacognitive laziness," and it does not enhance knowledge transfer or intrinsic motivation.
Objective: The study aimed to investigate how learners interact with different agents (ChatGPT, human experts, and checklist tools) and compare their effects on learning motivation, self-regulated learning processes, and learning performance.
Methods:
- Conducted a randomized experimental study with 117 university students
- Participants were divided into four groups: ChatGPT (AI), human expert (HE), checklist tools (CL), and control (CN)
- Each group completed a two-stage English reading and writing task
- Data collection included learning trace data, motivation surveys, and performance assessments
- Performance was evaluated across three dimensions: essay score improvement, knowledge gain, and knowledge transfer
- Used process mining to analyze self-regulated learning behaviors
- Employed the Intrinsic Motivation Inventory (IMI) to measure motivation
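The paper does not detail its assignment procedure, but the design above (117 participants randomized into four conditions) can be illustrated with a minimal sketch. The group labels follow the abbreviations used here (AI, HE, CL, CN); the fixed seed and round-robin balancing scheme are assumptions for reproducibility, not the authors' method:

```python
import random

CONDITIONS = ["AI", "HE", "CL", "CN"]  # ChatGPT, human expert, checklist, control

def assign_groups(participant_ids, seed=42):
    """Randomly assign participants to the four conditions,
    keeping group sizes as balanced as possible."""
    rng = random.Random(seed)       # seeded RNG so the assignment is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)                # randomize order before splitting
    groups = {c: [] for c in CONDITIONS}
    for i, pid in enumerate(ids):   # round-robin split of the shuffled list
        groups[CONDITIONS[i % len(CONDITIONS)]].append(pid)
    return groups

groups = assign_groups(range(117))
# With n = 117, one group receives 30 participants and the other three receive 29.
```

With a sample of 117, perfectly equal cells are impossible; the round-robin split simply caps the imbalance at one participant.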
Key Findings:
- No significant differences in intrinsic motivation among the four groups
- ChatGPT group showed significantly greater essay score improvement than the other groups
- No significant differences in knowledge gain or transfer among groups
- AI group exhibited fewer metacognitive processes than the human expert and checklist groups
- Evidence of "metacognitive laziness" in the AI group, with students becoming overly reliant on ChatGPT
- Different patterns of self-regulated learning processes emerged among groups
- Checklist tools led to more frequent evaluation processes
Implications:
- Raises concerns about the potential trade-off between short-term performance gains and long-term skill development
- Highlights the importance of maintaining metacognitive engagement when using AI tools
- Suggests the need for balanced integration of AI in educational settings
- Demonstrates the value of different types of support (human, AI, and tools) in learning
- Contributes to understanding the mechanisms of hybrid intelligence in education
Limitations:
- Gender imbalance (70% female participants)
- Focus on a single task type (reading and writing)
- Limited task duration
- Sample size constraints
- Lack of targeted measures for assessing metacognitive laziness
- All participants were second language English speakers
Future Directions:
- Design multi-task and cross-context studies
- Expand sample sizes and achieve more balanced gender distribution
- Conduct long-term follow-up assessments
- Develop targeted measurement protocols for metacognitive laziness
- Investigate how ChatGPT can effectively enhance learners' understanding and knowledge transfer
- Explore the dynamics of cognitive load distribution between humans and AI
Title and Authors: "Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance" by Yizhou Fan, Luzhen Tang, Huixiao Le, Kejie Shen, Shufang Tan, Yueying Zhao, Yuan Shen, Xinyu Li, and Dragan Gašević
Published On: January 19, 2025 (accepted November 10, 2024)
Published By: British Journal of Educational Technology (BJET)