Article Summary
Apr 02, 2025

Large Language Models (LLMs) exhibit implicit biases that mirror and potentially reinforce the hidden curriculum in education, assigning lower scores to student work associated with marginalized populations and demonstrating different authority patterns based on race and ethnicity.

Objective: The main goal of this study was to explore how the hidden curriculum (the unspoken values and ideologies) within generative AI intersects with the hidden curriculum in educational systems, examining how these technologies might perpetuate societal inequities in educational contexts despite appearing objective.

Methods: The researchers conducted a technology audit using an evocative audit methodology to examine how LLMs score and provide feedback on student writing samples. They used identical writing passages paired with different student descriptions that varied demographic information (race, class, ethnicity, gender, school type) to test for bias. Multiple LLM models were used including ChatGPT 3.5, ChatGPT 4.0, and Google's Gemini. The researchers analyzed both quantitative scoring differences and used linguistic analysis software (LIWC) to examine differences in feedback text, particularly focusing on measures of "clout" or authority in the language used.
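The audit procedure described above can be sketched in a few lines: hold the writing sample constant, vary only the demographic framing, and compare average scores across framings. This is a minimal illustrative sketch, not the study's actual materials or code; the `score_essay` function is a stub standing in for a call to an LLM (the study used ChatGPT 3.5, ChatGPT 4.0, and Gemini), and the prompt wording and school-type descriptions are hypothetical.

```python
from statistics import mean

# Hypothetical scorer: in the actual study this step queried an LLM.
# Here it is a deterministic stub so the audit loop itself is runnable.
def score_essay(prompt: str) -> float:
    return 7.0  # stand-in for a model-assigned rubric score out of 10

# One identical writing passage used for every condition (placeholder text).
ESSAY = "The quick brown fox jumps over the lazy dog."

# Demographic framings paired with the same essay (illustrative wording,
# echoing the school-type contrast the study tested).
DESCRIPTIONS = {
    "elite private school": "a student at an elite private school",
    "inner-city public school": "a student at an inner-city public school",
}

TRIALS = 5  # repeat each condition to average over LLM nondeterminism

def run_audit() -> dict[str, float]:
    """Score the identical essay under each framing and average the results."""
    results = {}
    for label, description in DESCRIPTIONS.items():
        prompt = (
            f"Score this essay written by {description} on a 1-10 scale "
            f"and explain your score:\n\n{ESSAY}"
        )
        results[label] = mean(score_essay(prompt) for _ in range(TRIALS))
    return results

scores = run_audit()
```

With a real model behind `score_essay`, a systematic gap between the two averages, despite the identical essay, is the kind of implicit bias the audit is designed to surface; the study paired this with LIWC analysis of the feedback text itself.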

Key Findings:

  • LLMs exhibited what appeared to be positive bias when race was explicitly mentioned (assigning higher scores to both Black and White student descriptions compared to race-neutral prompts).
  • However, implicit biases appeared when indirect racial indicators were used: students described as attending "inner-city public schools" received significantly lower scores than those at "elite private schools."
  • Students described as preferring rap music (associated with Black culture) received lower scores than those preferring classical music (associated with White and upper-class culture).
  • When analyzing feedback text, ChatGPT 4.0 used language with significantly higher levels of "clout" or authority when responding to students described as Black or Hispanic, mirroring power dynamics often seen in educational settings.
  • More advanced AI models showed stronger implicit bias patterns, suggesting that as models "improve" they may more effectively replicate societal biases.

Implications: This research has significant implications for teacher education, highlighting the need to critically examine AI technologies and their hidden curriculum before implementing them in educational settings. The findings suggest that:

  • Teacher education programs should incorporate technoskepticism and critical analysis of AI biases into their curriculum
  • Technology audits should be used as pedagogical tools to help pre-service teachers uncover hidden biases in AI
  • The hidden curriculum of schools and AI could potentially compound inequities unless educators actively work to identify and counteract these patterns
  • More advanced AI models may actually exhibit more sophisticated forms of bias that are harder to detect

Limitations: The study acknowledges several limitations, including:

  • LLMs are sensitive to small changes in prompts, and different phrasings could alter results
  • Some variables like "upper class" and "lower class" may have skewed results due to the inherent value judgment in the terms
  • The focus on specific forms of bias (particularly racial bias) may have limited the scope of biases explored
  • The evocative audit methodology is inherently qualitative in nature, which brings certain limitations
  • The identities and perspectives of the researchers themselves may influence the interpretation of results

Future Directions: The researchers suggest several areas for future research:

  • Exploring how different approaches may mitigate AI biases in educational settings
  • Examining the broader sociocultural context of AI technologies in education
  • Expanding research on the hidden curriculum embedded in and perpetuated by educational technologies
  • Developing and testing specific practices in teacher education aimed at countering AI-perpetuated oppression
  • Investigating how the impact of these biases may differ across various educational contexts and student populations

Title and Authors: "Uncovering the Hidden Curriculum in Generative AI: A Reflective Technology Audit for Teacher Educators" by Melissa Warr and Marie K. Heath

Published On: 2025

Published By: Journal of Teacher Education (American Association of Colleges for Teacher Education)
