Higher AI literacy among marketing students strengthens their confidence and positive attitudes toward generative AI but does not directly reduce dishonest behavioral intentions, suggesting that technical competence alone cannot safeguard academic integrity without explicit ethical guidance and clear institutional policies.
Objective
The primary goal of this dissertation was to examine how AI literacy influences undergraduate marketing students' behavioral intentions to engage in academic dishonesty using generative artificial intelligence (GenAI) tools. The research investigated whether students with higher levels of AI literacy would be less likely to intend to use GenAI dishonestly in their academic work. The study addresses a critical gap at a time of explosive GenAI adoption, with 86% of students reporting AI use for schoolwork and reported rates of GenAI-enabled academic dishonesty rising from 48% in 2022-23 to 64% in 2023-24.
Grounded in the extended Theory of Planned Behavior (TPB), the research aimed to determine how AI literacy relates to key psychological constructs that predict behavior: attitudes toward using GenAI dishonestly, subjective norms (perceived social pressure), perceived behavioral control (confidence in one's ability to act dishonestly), and moral obligation. The study focused on marketing education because of the discipline's position at the intersection of technology and business practice, where students are expected to leverage AI tools ethically while being particularly vulnerable to AI-enabled academic misconduct.
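Schematically, the TPB posits that behavioral intention is a weighted function of these belief constructs; in this extended design, AI literacy is modeled as an antecedent of each belief. The sketch below uses our own notation (BI for behavioral intention, AIL for AI literacy, and so on); the dissertation's exact specification may differ.

```latex
% Extended-TPB structural sketch (notation ours, not the dissertation's)
BI  = \beta_1 \, ATT + \beta_2 \, SN + \beta_3 \, PBC + \beta_4 \, MO + \varepsilon
% AI literacy (AIL) modeled as an antecedent of each belief construct:
ATT = \gamma_1 \, AIL + \delta_1, \qquad
SN  = \gamma_2 \, AIL + \delta_2, \qquad
PBC = \gamma_3 \, AIL + \delta_3, \qquad
MO  = \gamma_4 \, AIL + \delta_4
```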
Methods
This quantitative study employed a cross-sectional survey design to collect data from undergraduate marketing students at a single institution. AI literacy was assessed through four core dimensions: technical knowledge (understanding of AI concepts), ethical reasoning (ability to question fairness and social impact), practical proficiency (skills to use AI tools productively), and critical thinking (ability to evaluate AI-generated content). The TPB constructs were measured using adapted scales assessing attitudes, subjective norms, and perceived behavioral control. The study extended the traditional TPB model by incorporating moral obligation. The dependent variable, dishonest behavioral intentions, was measured through items focusing primarily on citation-related dishonesty. Control variables included gender, year of study, and student major.
Data analysis proceeded through multiple stages: descriptive statistics, reliability analysis using Cronbach's alpha, confirmatory factor analysis to evaluate measurement validity, and structural equation modeling (SEM) to test twelve specific hypotheses (H1-H12) about relationships among constructs.
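To make this pipeline concrete, here is a minimal sketch in Python using pandas and the semopy SEM library, which accepts lavaan-style model descriptions. All item and construct names are hypothetical stand-ins; the dissertation does not specify its software, item wording, or coding in this summary.

```python
# Minimal sketch of the analysis pipeline: scale reliability (Cronbach's
# alpha) followed by a CFA measurement model and structural regressions.
# All column and construct names below are hypothetical placeholders.
import pandas as pd
import semopy  # SEM library using lavaan-style model descriptions


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one multi-item scale:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


# Measurement model ("=~") defines latent constructs from survey items;
# structural regressions ("~") encode the extended-TPB paths: AI literacy
# (AIL) -> belief constructs -> dishonest behavioral intentions (BI).
MODEL_DESC = """
AIL =~ ail_1 + ail_2 + ail_3 + ail_4
ATT =~ att_1 + att_2 + att_3
SN =~ sn_1 + sn_2 + sn_3
PBC =~ pbc_1 + pbc_2 + pbc_3
MO =~ mo_1 + mo_2 + mo_3
BI =~ bi_1 + bi_2 + bi_3
ATT ~ AIL
SN ~ AIL
PBC ~ AIL
MO ~ AIL
BI ~ AIL + ATT + SN + PBC + MO
"""

df = pd.read_csv("survey_responses.csv")  # hypothetical data file

# Reliability check for one scale before fitting the SEM.
print("AI literacy alpha:", cronbach_alpha(df[["ail_1", "ail_2", "ail_3", "ail_4"]]))

model = semopy.Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```

In a setup like this, each structural path in the inspect() output corresponds to one hypothesis test: the sign, magnitude, and p-value of a path estimate indicate whether the hypothesized relationship is supported.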
Key Findings
The study revealed several significant and counterintuitive findings. AI literacy significantly predicted all four TPB belief constructs (attitudes, subjective norms, perceived behavioral control, and moral obligation), indicating that more AI-literate students form stronger, more clearly defined beliefs about GenAI. However, AI literacy did not directly predict dishonest behavioral intentions, contradicting the researcher's central hypothesis.
Most surprisingly, students with more favorable attitudes toward GenAI demonstrated higher dishonest intentions, not lower, suggesting that positive perceptions of AI may embolden misuse rather than restrain it. Similarly, higher perceived behavioral control was associated with greater dishonest intentions, indicating that confidence in using AI may translate into confidence in using it inappropriately. These findings suggest that competence with GenAI may lower psychological barriers to misuse.
Subjective norms proved nonsignificant in predicting behavioral intentions, contradicting substantial prior research showing strong peer influence on academic dishonesty. The moral obligation construct yielded mixed results: broad measures did not predict intentions, but context-specific measures (intervention responsibility) did, suggesting that specific moral measures may be more predictive than general ethical commitments.
Analysis of control variables revealed no significant differences by gender or year of study. However, marketing majors reported significantly more favorable attitudes, stronger subjective norms, higher perceived behavioral control, and higher dishonest behavioral intentions than other business students, aligning with prior research documenting elevated dishonesty concerns in business fields.
Implications
For students, the findings suggest that developing technical competence with AI tools must be accompanied by explicit ethical reasoning and clear understanding of appropriate use. Confidence with GenAI may increase vulnerability to misuse if not paired with strong moral grounding and clear institutional guidelines.
For faculty, the study highlights that AI literacy training alone will not promote ethical behavior. The counterintuitive positive relationships between favorable attitudes, high perceived control, and dishonest intentions suggest faculty must explicitly address when, why, and how to use AI tools ethically, integrating academic integrity discussions directly into AI literacy instruction rather than treating them as separate concerns.
For administrators, institutions must invest in comprehensive AI literacy programs balancing skill development with explicit ethical instruction. Institutions need consistent, transparent, well-communicated policies that clearly define acceptable GenAI use. The discipline-specific differences, particularly elevated concerns among marketing students, suggest certain programs may require enhanced attention or specialized integrity initiatives. Marketing programs should consider embedding AI ethics throughout their curriculum given the discipline's position at the technology-business interface and the elevated dishonest intentions found among marketing majors.
Limitations
The research represents a snapshot taken during a period of rapid change in both GenAI capabilities and institutional responses. The cross-sectional design captures associations rather than causal relationships. The study relied exclusively on self-reported data, raising concerns about social desirability bias and limiting conclusions, since behavioral intentions do not always translate into actual behavior.
The sampling frame limits generalizability: data came from marketing students at a single institution, excluding graduate students, online learners, and non-business disciplines. The behavioral intention scale focused narrowly on citation-related dishonesty, potentially missing other important forms of AI-enabled misconduct. The moral obligation construct yielded inconsistent results, suggesting a need for more refined measurement. Finally, the rapid evolution of institutional policies and cultural norms around AI use created an ambiguous environment that may have influenced responses in ways that would differ under clearer conditions.
Future Directions
Longitudinal research tracking students over time would provide insight into how AI literacy develops and whether it eventually translates into more ethical behavior. Multi-institutional studies across different types of colleges and disciplines would illuminate whether findings are context-specific or generalizable. International comparative studies could explore how cultural factors influence the AI literacy-integrity relationship.
The measurement of dishonest behavioral intentions requires substantial refinement, with more comprehensive measures that capture the full range of AI-enabled academic dishonesty beyond citation concerns. Mixed-methods approaches combining surveys with interviews would provide deeper insight into student reasoning. Experimental designs directly testing AI literacy interventions would provide stronger causal evidence and allow comparison of different intervention approaches.
The role of institutional context deserves deeper investigation, examining how different policy approaches influence student AI use and integrity. The counterintuitive findings regarding attitudes and perceived behavioral control warrant focused investigation using qualitative methods to explore why favorable attitudes and high confidence predict greater dishonest intentions.
Title and Authors
Title: "Confidence and Integrity: An Exploration of the Impact of AI Literacy on Student Academic Dishonesty Using Generative Artificial Intelligence in Marketing Education"
Author: C. Edward Heath (Northern Kentucky University)
Committee: Dr. Brandelyn Tosolt (Chair), Dr. LaVette Burnette (Co-Chair), Dr. J. Charlene Davis (Member)
Published On: October 31, 2025
Published By: Northern Kentucky University (dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Education in Educational Leadership)