K-12 school districts in the United States are slowly developing AI policies, primarily focused on academic integrity, with only 14.13% of sampled districts having established policies as of May 2024.
Objective: The main purpose of this study was to examine AI policies in K-12 school districts across the United States through content analysis, aiming to provide strategic guidance for policymakers and educational institutions developing ethical frameworks for AI integration in classrooms.
Methods: The researchers conducted a systematic content analysis of AI policies from 191 randomly selected K-12 school districts across different locales (urban, suburban, rural, and town) in the United States. The sample was limited to districts enrolling at least 500 students. Using Latent Dirichlet Allocation (LDA), the researchers identified underlying patterns in existing AI policies. Data were collected from publicly available documents, including district websites, board meeting minutes, and policy pages, using specific search terms related to AI policies.
Key Findings:
- Only 14.13% (n=27) of the 191 sampled school districts had established AI policies, while 16.23% (n=31) were in the process of developing policies, and 69.10% (n=132) had no AI policy.
- The 27 districts with AI policies were distributed across 20 states and the District of Columbia.
- Five key patterns emerged in existing AI policies: academic integrity, responsible use of AI, teacher guidelines and permissions, data privacy and security, and educational enhancement and support.
- Academic integrity was the dominant concern, with 44.4% of policies mentioning "cheat" and 59% mentioning "plagiarism."
- Only 25.93% of policies explicitly addressed data privacy safeguards, raising concerns about preparedness for potential risks.
- District locale (urban, suburban, rural, town) did not significantly impact whether a district had an AI policy.
- Several policies appeared similar, with 18.52% of the analyzed policies created by Neola, a company providing policy services to educational institutions.
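The keyword-mention rates reported in the findings (e.g., the share of policies mentioning "plagiarism") can be computed with a simple case-insensitive scan. A minimal sketch, using hypothetical policy texts rather than the study's 27 actual documents:

```python
# Sketch of per-keyword mention rates across a set of policy documents.
# The three policy strings are hypothetical examples, not the study's data.
KEYWORDS = ["cheat", "plagiarism"]

policies = [
    "Use of AI to cheat or plagiarize is prohibited.",
    "Plagiarism, including AI-generated work, violates the honor code.",
    "Teachers may approve AI tools for classroom support.",
]

def mention_rates(texts, keywords):
    """Percent of documents containing each keyword (substring, case-insensitive)."""
    rates = {}
    for kw in keywords:
        hits = sum(1 for t in texts if kw in t.lower())
        rates[kw] = round(100 * hits / len(texts), 2)
    return rates

print(mention_rates(policies, KEYWORDS))
```

A substring match is the simplest choice; a production analysis would likely tokenize and stem so that variants such as "plagiarize" and "plagiarism" are counted together.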
Implications: The findings highlight the nascent state of AI policy development in K-12 education and provide guidance for stakeholders developing AI policies. The study suggests a seven-step framework for AI policy development: assessing the current environment, prioritizing ethical considerations, promoting academic integrity, fostering professional development, ensuring accessibility, implementing ongoing monitoring and evaluation, and adopting iterative policy development. The research emphasizes the importance of applying Value Sensitive Design principles to ensure AI technologies align with ethical standards and stakeholder needs.
Limitations: The study acknowledges several limitations, including reliance on publicly accessible sources, exclusion of private and charter schools, temporal limitations (representing a snapshot in time), and potential geographical bias. Additionally, some districts may have internal policies not publicly available or may be in the process of developing policies not captured in the analysis.
Future Directions: The researchers recommend expanding research to analyze AI policies at the state department of education level, exploring international contexts such as the European Union's AI Act, examining AI policies in higher education, and applying ethical frameworks to evaluate emerging AI case studies. They also suggest investigating differences in AI policy development based on factors including race, gender, educational systems, geographical location, and rurality.
Title and Authors: "Artificial intelligence policies in K-12 school districts in the United States: a content analysis shaping education policy" by Lauren Eutsler, Brittany Rivera, Megan Barnes, and Julie Cummings.
Published On: March 19, 2025
Published By: Journal of Research on Technology in Education