Article Summary
Nov 09, 2025

Embracing AI imperfections as teaching tools significantly enhances pre-service teachers' critical digital literacy and ethical awareness, with 87% reporting heightened ethical consciousness and 78% perceiving stronger critical thinking skills after structured engagement with flawed AI outputs.

Objective: The primary goal of this study was to examine how deliberately engaging with the imperfections and errors of generative AI tools fosters critical digital literacy and ethical awareness in pre-service teachers. Specifically, the research investigated three key areas: (1) how iterative prompt refinement with AI tools develops problem-solving skills and adaptive expertise; (2) how encountering and analyzing flawed AI outputs promotes critical evaluation of bias, representation, and reliability; and (3) how structured reflection on AI imperfections deepens ethical awareness and promotes responsible digital citizenship among future educators.

Methods: This mixed-methods study was conducted during Spring 2025 with 63 pre-service teachers enrolled in an undergraduate educational technology course at Metropolitan State University of Denver, a regional public university in the western United States. Out of 83 total students enrolled, 63 completed the post-intervention survey, yielding a 76% response rate. Participants were predominantly female (73%), with most either under 25 years old (39.7%) or between 25 and 34 (38.1%). Academic specializations included Elementary Education (31.7%), Early Childhood Education (28.6%), Physical Education (22.2%), Special Education (11.1%), and Secondary English Education (6.3%). Most participants were Sophomores (50.8%) and Juniors (33.3%).

The instructional intervention consisted of a structured four-week module explicitly designed to position AI errors as pedagogical resources rather than obstacles, with each week building progressively toward critical engagement with AI technologies. Week 1 centered on cultural identity, personal identity, and embodiment: students created AI-based self-portraits representing their cultural backgrounds and personal identities through iterative refinement, then reflected critically on how inaccuracies, biases, or misrepresentations influenced their self-concept, prompting discussions of cultural sensitivity, representation, and algorithmic bias. Week 2 involved a digital storytelling and video creation project using AI-driven video tools, with generative text tools supporting brainstorming and script development; participants evaluated AI's creative potential while remaining attentive to its limitations and ethical risks. Week 3 featured collaborative peer review and feedback: completed video projects were shared in discussion forums, where participants critiqued creative execution, representation accuracy, and ethical implications, deepening their understanding of AI's pedagogical possibilities and challenges. Week 4 required reflective narratives and critical analysis in the form of essays that synthesized experiences, ethical insights, and critical understandings, connected these reflections to broader pedagogical and ethical frameworks, and emphasized how flawed AI outputs shaped awareness of critical digital literacy and ethics.

Data collection employed multiple complementary sources to provide comprehensive insights. Qualitative data were gathered from: (1) written reflections submitted after each iterative assignment, (2) detailed discussion posts within structured online forums (approximately 200 entries total), and (3) weekly instructor observation notes documenting participants' reflective processes, emotional responses, and evolving perceptions. Quantitative data were obtained through a post-intervention electronic survey assessing attitudes toward AI tools, confidence in engaging with AI, awareness of ethical implications, and perceptions of critical thinking abilities using Likert scales ranging from strongly disagree to strongly agree.

Qualitative data analysis followed the iterative framework described by Anfara et al. (2002) to ensure transparency and rigor through three stages: (1) open coding, where two researchers independently coded discussion posts, essays, and observation notes to generate preliminary codes; (2) axial coding, where related codes were grouped into broader conceptual categories through collaborative review; and (3) selective coding, where categories were synthesized into overarching themes aligned with research questions and refined through peer debriefings. Validation strategies included triangulation across data sources, member checking with participants, peer debriefing, and maintenance of comprehensive audit trails. Quantitative survey data were analyzed descriptively using frequencies, percentages, and measures of central tendency, with exploratory cross-tabulations examining relationships between demographic variables and survey outcomes.
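For readers who want a concrete sense of what this kind of descriptive analysis looks like in practice, the following Python sketch is purely illustrative: it is not from the study, and the column names ("specialization", "ethical_awareness") and values are hypothetical stand-ins for the actual survey instrument. It computes frequencies, percentages, a measure of central tendency, and an exploratory cross-tabulation for a single Likert-type item.

```python
# Illustrative sketch only: hypothetical data and column names, not the study's dataset.
import pandas as pd

# Hypothetical post-intervention responses on a 5-point Likert scale
# (1 = strongly disagree ... 5 = strongly agree)
df = pd.DataFrame({
    "specialization": ["Elementary Ed", "Early Childhood Ed", "Physical Ed",
                       "Elementary Ed", "Special Ed", "Secondary English Ed"],
    "ethical_awareness": [5, 4, 4, 5, 3, 4],
})

# Descriptive statistics: frequencies, percentages, and central tendency
counts = df["ethical_awareness"].value_counts().sort_index()
percentages = (counts / len(df) * 100).round(1)
print(pd.DataFrame({"n": counts, "%": percentages}))
print("median:", df["ethical_awareness"].median())

# Exploratory cross-tabulation of a demographic variable against agreement,
# treating ratings of 4 or 5 as "agree"
df["agrees"] = df["ethical_awareness"] >= 4
print(pd.crosstab(df["specialization"], df["agrees"], normalize="index").round(2))
```

The cross-tabulation mirrors the exploratory step described above, in which demographic variables were examined against survey outcomes descriptively rather than inferentially.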

Key Findings: The study revealed five major themes confirmed through triangulation of reflective essays, discussion posts, instructor observations, and post-survey data.

First, regarding growth in critical digital literacy, participants demonstrated notable improvement in their ability to critically analyze AI tools, especially after iterative engagement with flawed outputs. Initially, many expressed frustration with inaccuracies and culturally insensitive outputs, but as one participant reflected, "At first, the AI images felt completely wrong—they distorted my cultural background. But after multiple revisions, I started asking why these errors happened, which helped me look at the technology more critically." Survey data reinforced this pattern, with 78% of participants reporting greater confidence and perceived gains in their critical thinking and analytical skills after the four-week module. Instructor observations documented an increasing willingness to interrogate algorithmic processes rather than accepting outputs at face value.

Second, concerning ethical awareness and sensitivity, structured engagement with AI errors heightened participants' ethical sensitivity, especially around bias, data privacy, and authenticity. Reflective essays and discussions revealed growing awareness, as one participant noted: "The assignment on cultural bias really opened my eyes to how AI can reinforce harmful stereotypes. It made me think about what I need to model for my future students." Quantitative data confirmed this trend, with 87% of participants indicating greater ethical awareness by the end of the course. Instructor notes captured frequent discussions on responsible classroom use of AI and the risks of over-reliance.

Third, regarding confidence and problem-solving through iteration, frustration with flawed outputs gradually shifted into productive problem-solving, leading to greater confidence with AI tools. Through iterative prompt refinement, participants reported feeling more empowered to experiment, as one stated: "I used to feel stuck when the AI didn't give me what I wanted. Now I see mistakes as part of the process—I just adjust and try again." Survey results indicated that 82% of participants felt more confident in their ability to use AI tools effectively after completing the assignments. Peer review activities further reinforced this growth as students shared prompting strategies and solutions to common challenges.

Fourth, concerning the importance of collaborative feedback, peer review emerged as a central mechanism for deepening critical and ethical engagement. Participants frequently acknowledged how peer critiques helped them see biases they had overlooked, as one wrote: "When classmates pointed out biases in my video, I realized how easily I had missed them. The feedback pushed me to reflect much more deeply." Survey data supported this, with 81% of participants reporting that peer collaboration was essential to their learning. Instructor notes documented lively online exchanges where students debated representation and ethical considerations.

Fifth, regarding barriers and transfer challenges, despite overall growth, participants struggled to envision how to transfer these practices into K-12 contexts without clearer models. As one participant summarized: "I see the value in critically working with AI, but I still need more examples of how to do this in a real classroom." Participants most commonly described a perceived need for clearer examples, scaffolds, and curricular supports to translate these AI-integrated reflective activities into actual classroom practice.

When asked to provide five words describing AI, participants' responses revealed nuanced attitudes. The most frequently mentioned terms were "Help," "Cheat," "Scary," and "Use," reflecting mixed feelings of optimism, concern, and uncertainty. Interestingly, while "Help" dominated among teachers and administrators, instructional support staff more commonly used "Helpful," suggesting their greater familiarity with AI capabilities. The word "Scary" appeared across all groups, revealing shared concerns about risks and uncertainties. The frequent appearance of "Cheat/Cheating" highlighted widespread anxiety over academic integrity, particularly concerns about AI facilitating academic dishonesty.

Implications: The findings have significant implications for AI integration in teacher education across three key categories.

Regarding curricular supports, the study demonstrates the need for: (1) model lessons and case studies that demonstrate how AI-generated mistakes serve as entry points for ethical reflection and problem-solving; (2) scaffolded practice opportunities such as low-stakes bias-detection or prompt-revision exercises that allow pre-service teachers to build confidence gradually; (3) adaptable curricular toolkits with rubrics, guiding questions, and reflection protocols that provide bridges from higher education coursework to classroom-ready activities; and (4) cross-disciplinary integration with humanities and social studies methods courses to support culturally responsive and ethically grounded classroom applications.

Concerning practical applications, the implications include: (1) mini-models that illustrate enactment, such as elementary social studies lessons using AI-generated images to highlight and critique historical inaccuracies, or secondary English activities involving iterative prompt revision with AI text generators; (2) mentorship and practicum partnerships that allow candidates to observe, co-plan, and co-teach AI-integrated lessons with experienced teachers; and (3) hybrid and online adaptation, as the study was conducted in asynchronous online sections, demonstrating that many of these activities are directly transferable to hybrid and fully online environments, offering flexible pathways for scaling.

Regarding institutional supports, successful implementation requires: (1) faculty professional development to ensure instructors are equipped to model AI-critical pedagogy; (2) cross-course coordination and program-level resources to sustain integration across curricula; and (3) institutional investment necessary for scalability, ensuring these approaches extend beyond isolated courses to become program-wide practice.

The study introduces a "Pedagogy of Imperfection" as a conceptual model that extends constructivist and social constructivist traditions by demonstrating how error-centered practices such as iterative AI prompt refinement can deepen critical engagement, foster adaptive expertise, and build ethical sensitivity. This framework bridges productive failure literature, which demonstrates that grappling with errors strengthens problem-solving and knowledge transfer, with critical digital literacy frameworks, which emphasize the necessity of questioning and critiquing digital artifacts. By positioning flawed AI outputs not as mere setbacks but as catalysts for resilience, creativity, and ethical reflection, this pedagogical approach equips future educators to be not only technically proficient and ethically reflective but also practically prepared to guide K-12 learners in developing responsible, critical, and resilient approaches to AI.

Limitations: Several limitations should be acknowledged. First, the research was conducted within a single undergraduate educational technology course at one regional public university, which may limit transferability of findings to other institutional contexts and subject areas. Second, the study employed a post-intervention survey design relying on participants' self-reported perceptions rather than pre/post measures of change, meaning that reported gains in confidence, critical thinking, and ethical awareness reflect participants' retrospective perceptions rather than verified growth. Third, qualitative findings were based on reflective essays, discussion posts, and instructor observations, which, while rich, may be influenced by social desirability or the instructional context. Fourth, the dual role of the instructor as both teacher and researcher may have introduced bias, despite safeguards such as voluntary participation, IRB approval, anonymization of data, and independent coding procedures. Finally, the study's focus was on pre-service teachers' awareness and perceptions rather than actual implementation practices or measurement of effectiveness of specific AI integration approaches in real K-12 classrooms.

Future Directions: Future research should address these limitations and extend the findings in several ways. First, future studies should incorporate longitudinal pre/post measures, expand to multiple institutions and course contexts, and employ external researchers to reduce instructor-researcher bias; such extensions would strengthen the evidence base for the Pedagogy of Imperfection and its role in cultivating critical digital literacy and ethical awareness. Second, research should examine the long-term transfer of these practices into actual K-12 teaching contexts, investigating how pre-service teachers apply their enhanced critical awareness when they become practicing teachers. Third, comparative studies across different disciplines, grade levels, and institutional settings could identify contextual factors that influence successful implementation. Fourth, research could explore optimal scaffolding approaches, examining which specific types of model lessons, toolkits, and mentorship structures most effectively support transfer from awareness to enactment. Fifth, studies should investigate K-12 learners' perspectives on AI use in classrooms when their teachers employ error-centered pedagogies, examining impacts on student learning, ethical awareness, and attitudes toward technology. Finally, as AI technologies continue to evolve rapidly, ongoing research should examine how the Pedagogy of Imperfection framework adapts to emerging AI capabilities and limitations, ensuring that teacher preparation remains responsive to technological change while maintaining its focus on critical engagement and ethical responsibility.

Title and Authors: "Perfectly Imperfect: How AI Errors Foster Critical Digital Literacy and Ethical Awareness in Pre-Service Teachers" by Miri Kim (Professor, Educational Technology, Metropolitan State University of Denver), Allison G. Chung (Research Assistant, Valor Christian High School), and Hsin-Te Yeh (Professor, Educational Technology, Metropolitan State University of Denver).

Published on: The article was presented at eLearn 2025, October 13-16, 2025, in Bangkok, Thailand.

Published by: eLearn 2025 Conference Proceedings.
