Article Summary
Mar 21, 2025
Educators and edtech providers have different conceptions of LLM harms in education, with educators particularly concerned about broader impacts that are difficult to measure and mitigate.

Objective: This study aimed to identify potential harms arising from the use of Large Language Models (LLMs) in education, highlight gaps between how edtech providers and educators perceive these harms, and provide recommendations to support educator-centered design and development of edtech tools.

Methods: The researchers conducted semi-structured interviews with six edtech providers representing leading companies in the K-12 space and with 23 educators who had varying levels of experience with LLM-based edtech. Through thematic analysis, they explored how each group anticipates, observes, and accounts for potential harms from LLMs in education. The interviews were conducted between November 2023 and February 2024, with participants primarily based in the US, UK, and Canada. The analysis used an inductive-deductive coding approach to identify themes related to potential harms, levels of concern, experiences, and mitigation strategies.

Key Findings:

  • The study identified three categories of potential harms from LLMs in education:
    1. Technical harms: toxic or biased content, privacy violations, and hallucinations
    2. Human-LLM interaction harms: primarily academic dishonesty
    3. Broader impact harms: inhibiting student learning and social development, increasing educator workload, decreasing educator autonomy, and exacerbating systemic inequalities in education
  • Edtech providers primarily focus on mitigating technical harms that can be measured based solely on LLM outputs, while educators are more concerned about broader impact harms that require observation of interactions between students, educators, school systems, and edtech.
  • Educators reported feeling equipped to address technical harms and academic dishonesty through their teaching practices by mediating student interaction with LLMs, but expressed uncertainty about addressing broader impact harms.
  • The researchers found a significant misalignment in priorities: the harms that edtech providers focus on mitigating are those that educators feel most equipped to handle, while the harms educators are most concerned about (broader impacts) are rarely addressed by edtech providers.

Implications: This research contributes to the field by developing an education-specific framework for understanding potential harms from LLMs, highlighting gaps between edtech providers' and educators' perspectives, and providing recommendations to center educators in the design and development process. The study emphasizes the importance of trust-building between edtech providers and educators as a crucial first step in designing education technology. By making educators' concerns more salient to edtech providers, the research aims to facilitate more effective collaboration and co-design practices.

Limitations: The study was limited to participants from English-speaking WEIRD (Western, educated, industrialized, rich, and democratic) countries, primarily the US. The sample size of edtech providers was relatively small (six), making it difficult to generalize about standard practices across the entire edtech market. Additionally, the sample was not representative, with potential overrepresentation of certain backgrounds (white people, men, STEM teachers) and perspectives (educators with prior positive opinions of LLMs).

Future Directions: The researchers recommend several areas for future work:

  1. Designing tools that facilitate educator mediation of LLM harms
  2. Developing centralized, clear, and independent reviews of LLM-based edtech
  3. Exploring how to enable educators to build their own LLM tools that align with curriculum standards
  4. Implementing educator-centered procurement practices that prioritize teacher input

Title and Authors: "'Don't Forget the Teachers': Towards an Educator-Centered Understanding of Harms from Large Language Models in Education" by Emma Harvey, Allison Koenecke, and René F. Kizilcec.

Published On: February 20, 2025

Published By: arXiv (arXiv:2502.14592v1 [cs.CY])

The study provides valuable insights into the disconnect between how edtech providers and educators conceptualize and prioritize potential harms from LLMs in education. While technical harms such as biased content and privacy violations are the focus of providers' mitigation efforts, educators are often more concerned with how LLM tools might affect student learning, social development, and educational equity.

The research emphasizes that educators often feel capable of mediating many technical issues through their teaching practices but struggle with addressing systemic challenges that emerge from widespread LLM adoption. A key recommendation is to design tools that enhance rather than replace educator involvement, allowing teachers to maintain their crucial mediating role between students and technology.

By highlighting these gaps in perception and making specific recommendations for improvement, the study contributes to a more nuanced understanding of how LLM-based educational technologies should be designed, evaluated, and implemented with educators at the center of the process. The authors' call to "not forget the teachers" underscores the importance of moving beyond purely technical considerations to address the broader educational implications of AI technologies in classrooms.
