Objective: The main goal was to explore how effectively a pretrained language model (BERT) could classify segments of preservice physics teachers' written reflections according to elements of a reflection-supporting model.
Methods:
- Analyzed 270 written reflections from 92 preservice physics teachers
- Compared BERT's performance with that of other deep learning architectures (a feedforward neural network, FFNN, and a long short-term memory network, LSTM)
- Used cross-validation strategies to assess predictive performance (see the fine-tuning sketch after this list)
- Applied layer-integrated gradients to interpret classification decisions (see the attribution sketch after this list)
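A minimal sketch of such a classification setup, assuming a Hugging Face Transformers BERT checkpoint, a five-element label scheme, and generic hyperparameters (none of these choices are taken from the paper): fine-tune the model on labelled reflection segments within stratified k-fold cross-validation and report a weighted-average F1 score per fold.

```python
# Illustrative reconstruction, not the authors' code. The checkpoint name,
# label count, and hyperparameters are assumptions.
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, Dataset
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score
from transformers import BertTokenizerFast, BertForSequenceClassification

CHECKPOINT = "bert-base-german-cased"  # assumption: a German-language BERT
NUM_LABELS = 5                         # assumption: number of reflection elements


class SegmentDataset(Dataset):
    """Tokenized reflection segments with their element labels."""

    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item


def cross_validate(texts, labels, n_splits=5, epochs=3, lr=2e-5, batch_size=16):
    """Fine-tune BERT on each training fold and return the weighted F1 per test fold."""
    tokenizer = BertTokenizerFast.from_pretrained(CHECKPOINT)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    fold_scores = []
    for train_idx, test_idx in skf.split(texts, labels):
        model = BertForSequenceClassification.from_pretrained(
            CHECKPOINT, num_labels=NUM_LABELS)
        optimizer = AdamW(model.parameters(), lr=lr)
        train_ds = SegmentDataset([texts[i] for i in train_idx],
                                  [labels[i] for i in train_idx], tokenizer)
        model.train()
        for _ in range(epochs):
            for batch in DataLoader(train_ds, batch_size=batch_size, shuffle=True):
                optimizer.zero_grad()
                loss = model(**batch).loss
                loss.backward()
                optimizer.step()
        # Evaluate on the held-out fold
        test_ds = SegmentDataset([texts[i] for i in test_idx],
                                 [labels[i] for i in test_idx], tokenizer)
        model.eval()
        preds = []
        with torch.no_grad():
            for batch in DataLoader(test_ds, batch_size=batch_size):
                inputs = {k: v for k, v in batch.items() if k != "labels"}
                preds.extend(model(**inputs).logits.argmax(dim=-1).tolist())
        fold_scores.append(f1_score([labels[i] for i in test_idx], preds,
                                    average="weighted"))
    return fold_scores
```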
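For the interpretation step, layer-integrated gradients can be computed with the Captum library; the sketch below assumes the fine-tuned `model` and `tokenizer` from the previous sketch and a [PAD]-token baseline, which may differ from the authors' exact setup.

```python
# Sketch of attributing one classification decision to individual tokens via
# layer-integrated gradients (Captum). Baseline choice and normalisation are
# assumptions, not necessarily the paper's configuration.
import torch
from captum.attr import LayerIntegratedGradients


def token_attributions(model, tokenizer, text, target_class):
    """Return (token, attribution score) pairs for one reflection segment."""
    model.eval()
    enc = tokenizer(text, return_tensors="pt")
    input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]

    # Baseline: same length as the input, but all [PAD] except [CLS]/[SEP].
    baseline = torch.full_like(input_ids, tokenizer.pad_token_id)
    baseline[0, 0] = tokenizer.cls_token_id
    baseline[0, -1] = tokenizer.sep_token_id

    def forward_logits(ids, mask):
        return model(input_ids=ids, attention_mask=mask).logits

    # Attribute the target-class logit to the embedding layer, token by token.
    lig = LayerIntegratedGradients(forward_logits, model.bert.embeddings)
    attrs = lig.attribute(inputs=input_ids, baselines=baseline,
                          additional_forward_args=(attention_mask,),
                          target=target_class)
    scores = attrs.sum(dim=-1).squeeze(0)   # collapse the embedding dimension
    scores = scores / torch.norm(scores)    # normalise for readability
    tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
    return list(zip(tokens, scores.tolist()))
```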
Key Findings:
- BERT outperformed the other deep learning models, reaching a weighted-average F1 score of 0.82
- BERT's advantage already emerged when only 20-30% of the training data was used
- Preserving word order within segments was important for BERT's superior performance (see the ablation sketch after this list)
- The attribution analysis identified key words associated with the different reflection elements
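One way to probe the role of word order, consistent with the finding above though not necessarily the authors' exact procedure, is to shuffle the words within each segment and compare the resulting cross-validated weighted F1 with the score obtained on intact segments; `shuffle_words` below is a hypothetical helper.

```python
# Hypothetical word-order ablation: a clear drop in weighted F1 on shuffled
# segments would indicate that the classifier exploits word order.
import random


def shuffle_words(segment: str, seed: int = 0) -> str:
    """Return the segment with its words in random order."""
    words = segment.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

# Usage with the cross-validation sketch above (placeholder variables):
# shuffled = [shuffle_words(s) for s in segments]
# f1_intact = cross_validate(segments, labels)
# f1_shuffled = cross_validate(shuffled, labels)
```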
Implications:
- Demonstrates potential for automated analysis of written reflections at scale
- Could enable development of reliable feedback tools for teacher education
- Provides analytical tools for understanding reflection patterns
- Supports development of intelligent tutoring systems
Limitations:
- Specific implementation choices might affect generalizability
- Limited vocabulary of the BERT model (roughly 30,000 wordpiece tokens)
- Sentence-based segmentation may miss broader context
- Challenges with implicit knowledge and unstated assumptions
Future Directions:
- Explore including metadata and author-related covariates
- Investigate generative language models like GPT-3
- Study links between reflection quality and classroom performance
- Develop and evaluate feedback tools using pretrained language models
Title and Authors: "Utilizing a Pretrained Language Model (BERT) to Classify Preservice Physics Teachers' Written Reflections" by Peter Wulff, Lukas Mientus, Anna Nowak, and Andreas Borowski
Published On: May 2, 2022 (published online)
Published By: International Journal of Artificial Intelligence in Education