Summary: AI Use in Schools Is Quickly Increasing but Guidance Lags Behind
Final Statement: AI adoption in K-12 education has surged, with over half of students and teachers now using AI tools for schoolwork, yet the absence of clear policies, training, and guidance has created widespread anxiety about cheating and concern about effects on critical thinking. This gap reveals an urgent need for comprehensive institutional support to ensure that AI complements rather than supplants learning.
Objective: The primary goal of this study was to provide a comprehensive, first-of-its-kind examination of artificial intelligence use in K-12 education by triangulating perspectives from five key stakeholder groups: teachers, school leaders, district leaders, students, and parents. The researchers aimed to assess the extent of AI adoption in schools, evaluate the current state of guidance and policies surrounding AI use, understand perceptions of improper AI use (particularly regarding academic integrity), and identify concerns about AI's potential effects on student learning, specifically critical-thinking skills.
Methods: This research employed a mixed-methods approach combining quantitative survey data with qualitative interviews. The quantitative component drew from eight nationally representative surveys conducted between October 2024 and June 2025, reaching 1,261 middle and high school students, 852 additional students, 984 parents of 12-17-year-olds, 967 K-12 public school teachers, 8,601 English language arts/math/science teachers, 3,668 principals, and 521 school district leaders across two survey waves. All survey responses were weighted to ensure national representativeness based on demographic variables including school urbanicity, enrollment size, free/reduced lunch eligibility, grade level, and respondent characteristics. The qualitative component consisted of semi-structured interviews with ten district leaders (nine from rural districts, one from a large suburban district) conducted in spring 2025, covering topics such as district-provided training, visions for AI literacy, hopes and concerns about AI's educational role, and implementation barriers. These interviews lasted 20-40 minutes and were audio-recorded, transcribed, and coded using deductive thematic analysis.
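The weighting described above can be illustrated with a minimal sketch. This is not the researchers' actual code, and the respondent data and weights below are entirely hypothetical; it only shows the general idea of how survey weights adjust raw response shares toward nationally representative estimates.

```python
# Illustrative sketch (hypothetical data): how survey weights turn raw
# responses into population-representative estimates. A respondent from
# an under-sampled group receives a larger weight so their answers count
# proportionally to their share of the population.

def weighted_proportion(responses, weights):
    """Weighted share of respondents answering 'yes' (True)."""
    total_weight = sum(weights)
    yes_weight = sum(w for r, w in zip(responses, weights) if r)
    return yes_weight / total_weight

# Five hypothetical students: True = "uses AI for schoolwork".
responses = [True, False, True, False, False]
weights   = [1.0,  1.0,   2.5,  0.8,   0.7]

print(f"Unweighted: {sum(responses) / len(responses):.0%}")  # 40%
print(f"Weighted:   {weighted_proportion(responses, weights):.0%}")  # 58%
```

In practice the weights themselves are derived from the demographic variables the study lists (urbanicity, enrollment size, free/reduced lunch eligibility, and so on), typically via raking or post-stratification against known population totals.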
Key Findings: The research revealed several critical findings about AI adoption and its implications in education:
AI Usage Growth: In winter 2025, 54% of middle and high school students reported using AI for schoolwork to some extent, and 21% used it at least weekly; this represents an increase of more than 15 percentage points over surveys from one year earlier. Usage rose substantially with grade level: 41% of middle schoolers versus 61% of high schoolers reported AI use. Similarly, 53% of English language arts, math, and science teachers reported using AI for instructional planning or teaching in spring 2025 (13% weekly), up more than 25 percentage points from the previous year, when only 25% reported any AI use. District leaders' estimates of student AI use (averaging 46% of students) aligned relatively well with student-reported usage.
Perception Disconnect: A stark disconnect emerged between student/parent concerns and district leader perspectives regarding AI's impact on critical thinking. Sixty-one percent of parents, 48% of middle schoolers, and 55% of high schoolers agreed that greater AI use would harm critical-thinking skills, compared to only 22% of district leaders. This suggests district leaders may be focusing more on AI's positive potential while students and parents remain significantly more worried about negative consequences.
Ambiguity Around Cheating: Considerable ambiguity exists regarding which AI use cases constitute academic dishonesty. When asked whether using AI for schoolwork is cheating, 77% of parents responded "it depends," while only 7% said "never" and 17% said "always." This lack of clarity creates operational uncertainty for students. Furthermore, 40% of students were unsure whether teachers use tools to detect AI use in their work, while 35% believed teachers do not use such tools, and only 26% reported definite awareness of AI detection efforts.
Student Anxiety: Fifty-one percent of students reported worrying about being falsely accused of cheating with AI even when they hadn't used it, with 16% either knowing someone falsely accused or having been falsely accused themselves. This anxiety increased with grade level, corresponding to higher AI usage rates among older students.
Policy Gaps: Less than half of principals (45%) reported that their school or district provided any policy or guidance on AI use, with minimal variation across grade levels. Policies specifically addressing AI and academic integrity were even rarer—only 34% of teachers reported such policies existed, and these were nearly twice as common in high schools (49%) compared to elementary schools (22%). Among teachers with these policies, the vast majority described them as "limited" rather than "clear and comprehensive."
Training Deficits: Training on AI use was scarce at all levels. Only 35% of district leaders indicated that they provided students with any AI training, and availability varied dramatically by grade level: 32% in high schools, 17% in middle schools, and just 3% in elementary schools. From the student perspective, over 80% reported that teachers have not explicitly taught them how to use AI for schoolwork; only 19% overall (14% of middle schoolers, 21% of high schoolers) reported receiving such guidance. Teacher training was similarly limited: just over half of teachers (55%) reported receiving any professional development or resources to help adapt their teaching to AI, and among those who did, only 35% found it somewhat or very helpful. Training was more common for high school teachers (65%) than elementary teachers (47%).
Training-Policy Connection: An important finding emerged regarding the relationship between training and policy effectiveness. Among teachers with access to AI training, 37% found their AI policies helpful, whereas among teachers without training access, only 13% found policies helpful and 53% found them unhelpful, suggesting policies alone are insufficient without accompanying professional development.
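The comparison above is a conditional one: helpfulness rates are computed separately within the trained and untrained groups. The sketch below reconstructs that kind of cross-tabulation with invented counts chosen purely to reproduce the reported 37% and 13% rates; it is not the study's data or code.

```python
# Hypothetical cross-tabulation: share of teachers finding AI policies
# helpful, conditioned on whether they had access to AI training.
# Counts are invented solely to match the reported percentages.

# Each tuple is (had_training, found_policy_helpful).
teachers = ([(True, True)] * 37 + [(True, False)] * 63
            + [(False, True)] * 13 + [(False, False)] * 87)

def helpful_rate(rows, trained):
    """Proportion finding policies helpful within one training group."""
    group = [helpful for has_training, helpful in rows
             if has_training == trained]
    return sum(group) / len(group)

print(f"With training:    {helpful_rate(teachers, True):.0%}")   # 37%
print(f"Without training: {helpful_rate(teachers, False):.0%}")  # 13%
```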
Implications: These findings carry significant implications for AI integration in education. First, they demonstrate that AI adoption has become widespread while institutional readiness lags far behind, creating a "fast-moving, real-time social experiment at scale" that demands urgent policy attention. The research reveals a critical implementation gap: while AI tools have rapidly proliferated in educational settings, the institutional infrastructure of policies, training, and guidance has failed to keep pace, leaving students, teachers, and parents to navigate AI's complexities largely on their own.
The stark perception disconnect between district leaders and students/parents regarding AI's impact on critical thinking suggests communication failures and potentially different underlying assumptions about AI's role in education. District leaders interviewed viewed AI primarily as a tool to enhance creativity, streamline workflows, improve instruction, and prepare students for future workforce demands where "people that have AI skills get hired and picked up almost immediately." However, this optimistic vision has not been effectively communicated to students and parents, who remain concerned that AI will be used to "supplant learning"—mechanically solving problems and completing assignments without developing underlying skills and understanding.
The widespread student anxiety about false accusations of AI cheating, combined with the general ambiguity about what constitutes improper AI use, indicates that the current environment may be psychologically harmful to students while simultaneously failing to provide clear ethical frameworks. This situation is particularly concerning given that AI usage increases with grade level, meaning students facing the highest stakes academic decisions (high schoolers) also experience the greatest uncertainty and anxiety.
The research also reveals that the educational system's typical approach of training teachers before training students may be leaving a dangerous gap. Many students are already using AI extensively (61% of high schoolers) while receiving virtually no formal guidance (only 21% of high schoolers report teacher guidance), forcing them to determine proper AI use independently. This ad hoc approach risks students developing poor AI habits, misunderstanding appropriate use cases, or experiencing unnecessary anxiety about academic integrity.
The finding that elementary schools receive dramatically less attention for AI training and policy development despite nearly half of elementary teachers experimenting with AI tools suggests a missed opportunity to establish foundational AI literacy skills and habits during a critical developmental period.
Limitations: The study acknowledges several important limitations. First, all responses are self-reported, introducing potential social desirability bias where respondents might answer according to perceived expectations rather than actual experiences or beliefs. Second, respondents may conceptualize "AI" differently, and while the study specifically mentioned ChatGPT in some questions, the general term "AI" could encompass traditional AI-powered educational tools (like personalized learning platforms) or everyday AI products (like virtual assistants), potentially inflating usage estimates—though the researchers note that most adults are unaware AI powers everyday products, and questions specifically referenced schoolwork applications, likely mitigating this concern.
Third, the American School District Panel sample, while weighted for representativeness, constitutes a very small share of approximately 13,000 U.S. school districts, and districts choosing to participate in research panels likely differ systematically from non-participating districts in unmeasurable ways. Fourth, the qualitative interview component involved only ten district leaders in a convenience sample, predominantly from rural districts with no urban district representation, limiting generalizability of interview insights.
Fifth, comparisons to prior surveys to demonstrate AI usage growth involved different samples, question wording variations, and time periods, preventing precise growth measurements and limiting conclusions to general trends rather than exact adoption rates. Finally, the study focused on broad policy and training questions and general perceptions about cheating and critical thinking, without comprehensively examining the full range of policies districts might need or the complete spectrum of AI effects on student learning and development.
Future Directions: The researchers propose several crucial directions for future research and practice. First, they emphasize that trusted authoritative sources, particularly state education agencies, should develop and disseminate guidance on effective AI policies and training programs. With 26 states currently providing some AI guidance for K-12 schools (though varying significantly in depth and breadth), there is opportunity for states to refine these guidelines as the field advances and to more effectively communicate them to educators while providing implementation support.
Second, all training and guidance materials must explicitly distinguish between AI use that "complements" learning versus AI use that "supplants" learning. The concern that widespread AI use harms critical-thinking skills reflects anxiety that students will use AI tools to mechanically solve problems and complete assignments without developing underlying theoretical knowledge and skills. Policies and training should clearly differentiate these use cases and explain strategies for avoiding substitution while encouraging productive supplementation of learning.
Third, in the short term while comprehensive policies are being developed, schools urgently need clear, concrete communication about what constitutes cheating with AI. Current policies related to academic integrity are rare (only 34% of teachers report them) and often limited rather than comprehensive. Providing specific examples of acceptable and unacceptable AI use could help bridge this gap and reduce student anxiety while more nuanced guidance is developed.
Fourth, elementary schools should not be neglected in AI policy and training initiatives. Currently, high schools receive the most student AI training (32% of district leaders report providing it), followed by middle schools (17%), with elementary schools nearly absent (3%). However, elementary school represents a critical period for teaching foundational skills and forming learning habits. With almost half of elementary teachers already experimenting with AI tools, providing elementary students with coherent AI foundations could prevent problems as students advance and AI capabilities expand.
Fifth, future research should examine the long-term impacts of AI use across diverse educational settings with larger participant samples, investigate how AI tools can be refined to support a broader range of tasks, explore AI's role in specialized educational areas such as security and quality assurance, and conduct longitudinal studies tracking how AI integration affects student outcomes over multiple years. Research should also investigate the effectiveness of different policy and training approaches, examine how to close the perception gap between district leaders and students/parents, and study how AI literacy can be developed systematically from elementary through secondary education.
Title and Authors: "AI Use in Schools Is Quickly Increasing but Guidance Lags Behind: Findings from the RAND Survey Panels" by Christopher Joseph Doss, Robert Bozick, Heather L. Schwartz, Lisa Chu, Lydia R. Rainey, Ashley Woo, Justin Reich, and Jesse Dukes.
Published On: 2025 (specific month not provided in the document; surveys conducted October 2024–June 2025)
Published By: RAND Corporation (Research Report RR-A4180-1), conducted in partnership with the Center on Reinventing Public Education and the MIT Teaching Systems Lab