Generative AI tools like ChatGPT can ethically enhance academic writing when used for grammar correction and language translation, but should not replace human critical thinking and original content creation in scholarly work.
Objective: This study aimed to provide clear guidance for healthcare simulation scholars on ethically and responsibly using generative artificial intelligence tools, particularly ChatGPT, in academic writing while maintaining the highest standards of academic integrity and scholarly development.
Methods: The researchers employed a comprehensive literature review methodology combined with direct consultation with ChatGPT to develop their recommendations. They conducted a literature search to understand existing policies from academic journals and publishers regarding AI use in academic writing. The team also directly queried ChatGPT 4.0 with the question "Can you please provide a list of ways ChatGPT can be ethically used to assist authors in writing articles for medical journals?" They then critically analyzed ChatGPT's responses, reviewed original source materials from major academic publishers, and incorporated guidelines from organizations such as the Committee on Publication Ethics (COPE), International Committee of Medical Journal Editors, JAMA Network Journals, and the World Association of Medical Editors. The authors collectively reflected on these recommendations, discussed personal insights, and considered existing research to develop a comprehensive framework for ethical AI use in academic writing.
Key Findings: The study identified three distinct ethical tiers for using generative AI in academic writing:
- Tier 1 (Most Ethically Acceptable): uses in which ChatGPT primarily restructures existing text or ideas, such as grammar and spelling correction, improving readability and flow, and language translation.
- Tier 2 (Ethically Contingent): applications whose acceptability depends on careful human oversight, including generating outlines from existing content, summarizing previously written material, improving the clarity of drafted content, and brainstorming ideas based on original prompts.
- Tier 3 (Ethically Suspect): uses that are not recommended, such as drafting entirely new text without original content input, developing new concepts independently, conducting primary data interpretation, performing literature reviews, and checking for ethical compliance or plagiarism.
The research revealed significant issues with AI-generated content, including a high propensity for plagiarism, AI hallucinations (fabricated but convincing content), and severely inaccurate referencing. Studies cited in the paper showed that ChatGPT generated completely fabricated references in 16% of cases and had wrong DOIs in 38% of generated references. Only 7% of ChatGPT-generated references were found to be completely authentic and accurate, highlighting serious reliability concerns.
Implications: This research provides crucial guidance for the academic community on navigating the integration of generative AI tools while preserving scholarly integrity. The findings contribute to the field of AI in education by establishing clear ethical boundaries and practical frameworks that can help researchers leverage AI's benefits without compromising academic standards. The study emphasizes the importance of transparency in AI use, suggesting that authors should disclose AI usage in their methods sections rather than acknowledgments. The framework supports the evolution of academic writing practices while maintaining the primacy of human intellectual contribution and critical thinking skills essential for scholarly development.
Limitations: The study focuses specifically on ChatGPT and similar LLM-powered generative AI tools, limiting its applicability to other AI technologies. The rapid evolution of AI technology means that some identified issues may become obsolete quickly, requiring regular updates to the recommendations. The research is primarily conceptual and lacks empirical testing of the proposed ethical framework in real academic writing scenarios. Additionally, the study is written from the perspective of healthcare simulation researchers, which may limit its broader applicability across all academic disciplines.
Future Directions: The authors suggest several areas for future research, including exploration of academic-specific generative AI tools such as Scopus AI, whose more reliable and robust data sources may mitigate issues with bias and hallucinations. They recommend studying the long-term effects of dependence on AI tools on scholarly development and critical thinking skills. Future research should also examine how the ethical framework performs across different academic disciplines and investigate the development of AI tools designed specifically for academic writing, with built-in fact-checking and bias mitigation features. The authors emphasize the need for continued discourse within academic communities to ensure that ethical AI practice keeps pace with technological evolution.
Title and Authors: "Artificial intelligence-assisted academic writing: recommendations for ethical use" by Adam Cheng, Aaron Calhoun, and Gabriel Reedy.
Published: 2025
Published in: Advances in Simulation