AI-based adaptive feedback in digital simulations improves preservice teachers' diagnostic justification quality relative to static feedback, but not their judgement accuracy.
Title and Authors: "AI-Based Adaptive Feedback in Simulations for Teacher Education: An Experimental Replication in the Field" by Elisabeth Bauer, Michael Sailer, Frank Niklas, Samuel Greiff, Sven Sarbu-Rothsching, Jan M. Zottmann, Jan Kiesewetter, Matthias Stadler, Martin R. Fischer, Tina Seidel, Detlef Urhahne, Maximilian Sailer, Frank Fischer
Published On: January 2025
Published By: Journal of Computer Assisted Learning
Objective: To test whether the findings of a previous laboratory study on the effectiveness of AI-based adaptive versus static feedback in simulations would replicate under field conditions, and to evaluate the effectiveness of single simulation sessions with either feedback type.
Methods:
- Experimental field study with 332 preservice teachers at five German universities
- Three randomly assigned groups: simulation with NLP-based adaptive feedback, simulation with static feedback, and no-simulation control group
- Used the CASUS learning environment with simulated cases about students' learning difficulties
- Measured diagnostic judgement accuracy and justification quality during the learning phase and at post-test
- Employed natural language processing and artificial neural networks to provide automated adaptive feedback (a simplified sketch follows this list)
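The paper does not include implementation code; the following is a minimal sketch of how such a pipeline could work, assuming a supervised setup in which segments of a learner's written justification are classified into reasoning categories and feedback is selected for categories the learner omitted. The category names, training segments, and feedback templates are invented for illustration, and a small scikit-learn MLP stands in for the study's artificial neural networks.

```python
# Minimal sketch of an NLP-based adaptive feedback pipeline (illustrative only).
# Category names, training data, and feedback templates are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical expert-coded training data: justification segment -> reasoning category.
train_segments = [
    "The student reverses letters when reading aloud.",           # evidence
    "This pattern suggests a reading difficulty such as dyslexia.",  # hypothesis
    "A standardized reading test could confirm this.",            # testing
]
train_labels = ["evidence", "hypothesis", "testing"]

# TF-IDF features feeding a small feedforward neural network classifier.
model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(train_segments, train_labels)

# Hypothetical feedback templates keyed by the category a learner omitted.
FEEDBACK = {
    "evidence": "Try citing concrete observations from the case to support your judgement.",
    "hypothesis": "Consider stating an explicit hypothesis about the learning difficulty.",
    "testing": "Think about how you could test your hypothesis, e.g., with a diagnostic instrument.",
}

def adaptive_feedback(justification_segments):
    """Classify each segment, then return feedback for categories the learner missed."""
    covered = set(model.predict(justification_segments))
    return [FEEDBACK[cat] for cat in FEEDBACK if cat not in covered]

print(adaptive_feedback(["The student often skips words while reading."]))
```

In practice the classifier would be trained on a large corpus of expert-coded segments rather than a handful of examples; the point of the sketch is the overall structure of classify-then-select-feedback.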
Key Findings:
- Adaptive feedback enhanced justification quality significantly more than static feedback during both the learning phase and the post-test
- No significant difference between adaptive and static feedback in judgement accuracy
- Compared with the control group, only the simulation with adaptive feedback showed positive effects on justification quality
- Neither feedback type significantly improved judgement accuracy compared with the control group
- The results replicated the pattern of findings from the previous laboratory study
Implications:
- Adaptive feedback appears crucial for effective simulation-based learning in higher education field settings
- Static feedback may provide insufficient guidance for effective learning in simulations
- NLP technology can effectively automate personalized formative feedback
- Single simulation sessions may have limited impact on compiled reasoning outcomes such as judgement accuracy
- Repeated simulation practice may be needed to enhance certain diagnostic skills
Limitations:
- Pre-test and post-test each comprised only a single case, limiting the number of measurement points
- Low internal consistency in the pre-test measure of justification quality (see the Cronbach's alpha sketch after this list)
- Control group lacked pre-test measures
- Study only evaluated short-term effects of single simulation sessions
- Limited generalizability due to the specific context and participant group
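Internal consistency of a multi-item measure is commonly quantified with Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of summed scores). The sketch below computes it for an invented score matrix; the formula is standard, but the data are purely illustrative and not from the study.

```python
# Cronbach's alpha for a score matrix (rows = participants, columns = items).
# The data below are invented; the formula itself is standard.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings of justification quality on three pre-test items.
scores = np.array([
    [2, 3, 2],
    [4, 4, 5],
    [1, 2, 1],
    [3, 3, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```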
Future Directions:
- Investigate effects of repeated simulation sessions over longer periods
- Explore how recent NLP advances, such as transformer models, could enhance feedback (a sketch follows this list)
- Examine cognitive and motivational mechanisms underlying adaptive feedback benefits
- Study potential interactions between learner characteristics and feedback types
- Research ways to better support development of diagnostic judgement accuracy
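The paper mentions transformer models only as a future direction. One possible avenue is zero-shot classification, which could score justification segments against reasoning categories without task-specific training data; the model choice and labels below are assumptions for this sketch, not the authors' setup.

```python
# Illustrative zero-shot classification of a justification segment with a
# transformer model; the labels and model choice are assumptions for this sketch.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

segment = "The student's errors increase under time pressure, which points to test anxiety."
labels = ["cites evidence", "states a hypothesis", "proposes a test"]

result = classifier(segment, candidate_labels=labels)
# result["labels"] is sorted by score; the top label could drive feedback selection.
print(result["labels"][0], round(result["scores"][0], 2))
```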
The study provides important evidence for the value of AI-based adaptive feedback in educational simulations while highlighting areas needing further research to optimize these learning tools.