Generative AI systems like ChatGPT fundamentally lack the epistemological responsibility required to serve as genuine educational collaborators or tutors, creating a perverse dynamic where students must assume full responsibility for the accuracy of AI outputs they lack the expertise to evaluate.
Objective: The main goal of this study is to challenge the widespread endorsement of Generative AI (GenAI) systems as "personal tutors" and "learning collaborators" in higher education by examining the philosophical and pedagogical implications of their constitutive epistemological irresponsibility.
Methods: The authors employ philosophical analysis to examine the nature of GenAI outputs, drawing primarily on two theoretical frameworks: (1) Plato's skepticism about writing from the Phaedrus, which focuses on responsibility for discourse, and (2) Harry Frankfurt's analysis of "bullshit" as discourse characterized by indifference to truth. Through these lenses, they analyze the implications of assigning responsibility to students for verifying the accuracy of GenAI outputs in educational contexts.
Key Findings:
- GenAI systems are constitutively "epistemically irresponsible" - they produce outputs without concern for truth (matching Frankfurt's definition of "bullshit") and lack the agency required to take responsibility for their claims.
- The expectation that students assume responsibility for checking and verifying AI-generated content creates a "perverse" pedagogical relationship where students must take sole responsibility for evaluating claims they lack the expertise to properly assess.
- Calling GenAI a "collaborator" misrepresents the nature of collaboration, which requires mutual responsibility between agents capable of holding themselves and others to collective standards.
- Similarly, labeling GenAI a "tutor" undermines the bidirectional responsibility that characterizes proper teacher-student relationships, where both parties can hold each other accountable.
- To the extent that GenAI systems replace human teachers, students will find it increasingly difficult to develop the skills and motivation needed to evaluate GenAI outputs against disciplinary standards, creating a vicious cycle.
- Student engagement is likely to decrease with extensive use of GenAI systems, as engagement is significantly influenced by relationships with human teachers who are prepared and approachable and who set high standards.
Implications: The findings suggest that integrating GenAI into higher education without critical assessment of its fundamental limitations threatens the core principles of university education as a community of mutually responsible scholars. By introducing "words without an author" into academia, GenAI undermines the ability to trace knowledge claims to responsible agents who can defend them. The authors argue that universities should "cut the bullshit" and stop referring to GenAI systems as "collaborators" or "tutors," terms that falsely imply responsible agency. This analysis contributes to the field by moving beyond questions of accuracy or "hallucination" to address deeper questions about epistemological responsibility in educational relationships.
Limitations: The paper is a theoretical argument rather than an empirical study, so it does not provide direct evidence about the impact of GenAI in actual educational settings. Additionally, the authors acknowledge that a complete ban on GenAI in education is neither possible nor desirable, suggesting that their forceful critique is meant to stimulate discussion rather than dictate policy.
Future Directions: The authors suggest that future work should focus on developing appropriate language and conceptualization for GenAI systems in higher education that acknowledges their constitutive limitations, along with regulatory oversight and pedagogically sound integration. While not explicitly stated, the implications of their analysis suggest the need for research on: (1) how to maintain meaningful human pedagogical relationships in educational environments incorporating AI, (2) how to develop students' capacity to assess AI outputs when they increasingly learn from AI systems, and (3) how to design AI-enhanced education that preserves rather than undermines students' sense of responsibility to disciplinary standards.
Title and Authors: "Cut the bullshit: why GenAI systems are neither collaborators nor tutors" by Gene Flenady and Robert Sparrow.
Published On: May 17, 2025
Published By: Teaching in Higher Education: Critical Perspectives
The authors position their work as a philosophical intervention in ongoing debates about GenAI in education, challenging both enthusiastic endorsements of GenAI as revolutionary and more cautious approaches that nevertheless assign students responsibility for verifying AI outputs. The provocative title reflects their concern that current discourse about GenAI in education is itself unconcerned with truth (i.e., "bullshit"), as it disregards both the reality of what GenAI is and what genuine collaboration and teaching involve. They argue that university education fundamentally depends on a community of individuals responsible to one another for their knowledge claims, which GenAI threatens by introducing epistemically irresponsible outputs that cannot be traced to accountable agents.