Article Summary
Aug 23, 2025

CGScholar AI Helper, a customized AI feedback tool, successfully improved 11th-grade students' writing skills by providing targeted, rubric-aligned feedback that encouraged revision and enhanced analytical thinking.

Objective: The main goal of this study was to examine the impact of CGScholar AI Helper on the writing development of 11th-grade students in English Language Arts. The research aimed to explore how customized AI-driven feedback, calibrated to align with teacher expectations and curriculum objectives, could support students' writing improvement in a K-12 educational setting. The study focused on testing whether a mediated use of generative AI, as opposed to unregulated "AI in the wild," could effectively enhance student writing through targeted feedback based on teacher-provided materials and rubrics.

Methods: This qualitative case study employed a rapid development methodology with short, focused release cycles as part of a larger pilot project at the University of Illinois Urbana-Champaign. The research was conducted at an underserved urban high school in the Midwest with six 11th-grade students and one English Language Arts teacher. The CGScholar AI Helper was calibrated through two key mechanisms: prompt engineering using retrieval-augmented generation (RAG) with teacher-provided materials, and refinement through incorporation of the teacher's specific rubric. Students completed a 200-word writing assignment comparing two Indigenous texts: "The World on the Turtle's Back" and "Returning 'Three Sisters' to Indigenous Farms Nourishes People, Land, and Cultures." The process involved students submitting initial drafts, receiving AI feedback based on the teacher's six-criterion rubric, revising their work, and receiving a second round of AI feedback before final teacher evaluation. Data collection included researchers' observations, teacher post-survey feedback, student focus group interviews, and analysis of initial versus revised writing assignments. The study employed reflexive thematic analysis following a six-phase analytical process to evaluate the effectiveness of the AI feedback system.
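The calibration pipeline described above can be sketched in miniature: retrieve the teacher-provided passages most relevant to a student draft, then assemble a feedback prompt that combines them with the six-criterion rubric. This is an illustrative assumption, not the CGScholar AI Helper's actual implementation (which is not public); the function names are hypothetical, and the word-overlap retrieval is a toy stand-in for the embedding-based retrieval a production RAG system would use.

```python
# Hypothetical sketch of rubric-calibrated RAG feedback; not the actual
# CGScholar AI Helper code.

# The six rubric criteria reported in the study.
RUBRIC_CRITERIA = [
    "identify", "compose", "analyze",
    "compare and contrast", "introduce and connect", "support evidence",
]

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank teacher materials by naive word overlap with the draft
    (a toy stand-in for real embedding-based retrieval)."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_feedback_prompt(draft: str, materials: list[str]) -> str:
    """Assemble the prompt for the language model: rubric criteria,
    retrieved teacher context, and the student's draft."""
    context = "\n".join(retrieve(draft, materials))
    criteria = "; ".join(RUBRIC_CRITERIA)
    return (
        f"You are a writing tutor. Give feedback on each rubric criterion "
        f"({criteria}) using only the teacher-provided context below.\n\n"
        f"Context:\n{context}\n\nStudent draft:\n{draft}"
    )

# Example: a draft comparing the two assigned Indigenous texts.
materials = [
    "Rubric guidance: a defensible claim compares both texts directly.",
    "Background on 'The World on the Turtle's Back' creation story.",
    "Notes on the 'Three Sisters' companion-planting article.",
]
draft = "Both texts show how Indigenous cultures value the land and community."
prompt = build_feedback_prompt(draft, materials)
```

Grounding the prompt in retrieved teacher materials, rather than the model's general knowledge, is what keeps the feedback aligned with the curriculum; the rubric list constrains the feedback to the criteria the teacher will ultimately grade.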

Key Findings: The research revealed notable improvements in student writing across multiple criteria. Five out of six students demonstrated progress in at least one criterion, with one student improving in three different areas. The most substantial improvements were observed in the "compare and contrast" criterion, where three students showed enhancement, including one student who progressed from a score of 0 to 2. Two students improved in both the "compose" and "analyze" criteria, advancing from scores of 2 to 3, indicating better ability to create defensible claims and analyze Indigenous cultural representation. One student improved in the "introduce and connect" criterion. However, no improvements were observed in the "identify" and "support evidence" criteria. The thematic analysis of student feedback revealed four key themes: the tool was perceived as helpful, direct, encouraging, and fixable (actionable). Students appreciated the specific, targeted feedback that aligned with curriculum goals and provided clear guidance for revision. The teacher's post-survey confirmed that the AI helper effectively motivated students to revise their work, addressing a common challenge in writing instruction. The findings aligned with the Technology Acceptance Model (TAM), demonstrating that students found the tool both useful and easy to use, which contributed to their engagement and willingness to implement suggested revisions.

Implications: This research contributes significantly to the growing field of AI in education by providing empirical evidence of how customized AI feedback can support K-12 writing development. The study demonstrates that when properly calibrated with teacher input, AI tools can serve as effective pedagogical instruments that complement rather than replace traditional instruction. The findings suggest that AI feedback systems can address the challenge of providing individualized feedback in large classrooms while maintaining alignment with curriculum objectives and teacher expectations. The research supports the argument that mediated AI use in education, through prompt engineering and rubric integration, offers a more educationally sound approach than unregulated AI deployment. The study also highlights the potential for AI tools to promote equity in learning opportunities, as evidenced by improvements across students with varying writing abilities from low socioeconomic backgrounds. The integration of AI feedback into the revision process appears to foster critical thinking and analytical skills while maintaining student agency in the writing process.

Limitations: The study acknowledges several limitations that may affect the generalizability of findings. The research involved a small sample size of only six students from a single school, limiting the breadth of conclusions that can be drawn. Some students did not complete pre-survey and post-survey forms, reducing the comprehensiveness of available data sources. The study was conducted in a specific educational context (an underserved urban high school) and focused on a particular writing task, which may not represent the full spectrum of educational settings or writing assignments. Additionally, the research was part of the initial prototype development phase, meaning the tool was still being refined during implementation. The competitive research environment and the presence of researchers during implementation may have influenced student and teacher behavior, potentially introducing bias into the results.

Future Directions: The researchers suggest several avenues for future investigation to build upon these initial findings. Future studies should explore the impact of CGScholar AI Helper across diverse educational settings, including different grade levels, subjects, and cultural contexts, to better understand how various factors influence outcomes. Research involving larger participant groups would strengthen the statistical significance and generalizability of findings. The development team plans to address identified issues, such as the length and complexity of AI feedback, by implementing features like chat boxes for clarification and customizable feedback length settings. Long-term studies examining the sustained impact of AI feedback on student writing development over extended periods would provide valuable insights into the tool's effectiveness. Additionally, research investigating the optimal balance between AI feedback and human instruction, as well as studies examining teacher training and support mechanisms for AI integration, would contribute to more effective implementation strategies. Future research should also explore how structured AI integration can optimize the feedback process and enhance both student writing outcomes and teacher support systems.

Title and Authors: "The impact of AI-driven tools on student writing development: A case study" by Raigul Zheldibayeva, Ana Karina de O. Nascimento, Vania Castro, Mary Kalantzis, and Bill Cope.

Published On: August 2025

Published By: Online Journal of Communication and Media Technologies
