Article Summary
Dec 10, 2025

Students with lower AI competence focus on personal learning risks like reduced creativity and critical thinking, while higher-competence students emphasize systemic risks such as bias and cheating, revealing that AI literacy shapes how students perceive both threats and opportunities in educational AI adoption.

Objective

This study examined the relationship between perceived AI competence and risk perceptions among Finnish K-12 upper secondary students (ages 16-19). The research addressed two primary questions: First, which factors do upper secondary students perceive as AI-related risks? Second, how does students' self-reported AI competence shape their perceptions of potential AI-related risks? The study was motivated by the recognition that, as artificial intelligence becomes increasingly integrated into education, understanding how students perceive its risks is essential for supporting responsible and effective adoption. Finland has been an early advocate for digital and AI education solutions, with the Finnish National Agency for Education issuing new AI guidelines for teaching and learning in spring 2025; accordingly, the research targeted general upper secondary students at a school engaged in a two-year AI development project.

Methods

The research employed a cross-sectional survey design using co-occurrence network analysis, a complexity science approach that models AI-related concerns as interconnected structures rather than isolated variables. A total of 163 Finnish upper secondary students participated voluntarily during class time (47% female, 51% male, 2% non-binary; median birth year 2008). Students were recruited from a school with approximately 400 students that had been engaged in a funded AI development project, where students were introduced to AI in academic counseling sessions and the school revised its code of conduct to address AI use.

The study utilized two primary instruments. First, an AI competence instrument assessed students' self-reported ability to use AI tools through four items rated on a 5-point Likert scale, capturing skills in general AI use, everyday applications, personalized learning support, and information management. Confirmatory factor analysis supported a unidimensional structure with excellent reliability (Cronbach's α = 0.89). Students were divided into low- and high-competence groups using a median split for descriptive analysis. Second, an AI-related concerns instrument consisted of 14 binary response items capturing perceptions of risks across three domains: systemic risks (bias, inaccuracy, resource consumption), institutional risks (cheating, unfair advantage, inconsistent teacher/school rules, copyright), and personal risks (reduced critical thinking, creativity, and learning; fears of misuse and addiction; privacy violations).
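To make the scoring concrete, here is a minimal sketch of the two steps described above: Cronbach's α over the four competence items and the median split into low- and high-competence groups. The DataFrame, column names, and simulated responses are hypothetical, since the paper does not publish its analysis code.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses to the four competence items
# (column names are illustrative, not the instrument's wording).
rng = np.random.default_rng(0)
items = pd.DataFrame(
    rng.integers(1, 6, size=(163, 4)),
    columns=["general_use", "everyday_apps", "learning_support", "info_mgmt"],
)

alpha = cronbach_alpha(items)        # the paper reports α = 0.89 on real data
competence = items.mean(axis=1)      # composite competence score per student
# Median split into low/high groups (ties here fall into "low"; the paper
# does not specify its tie-handling rule).
group = np.where(competence > competence.median(), "high", "low")
```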

Data analysis involved transforming binary responses into a co-occurrence matrix, applying per-respondent L2-normalization to control for individual differences in the number of concerns selected, and estimating regression-adjusted edge weights while controlling for gender and year group confounds. Edges below the 75th percentile were pruned to highlight salient co-occurrences. The resulting network was visualized with node sizes scaled by eigenvector centrality, which quantifies each concern's relative importance within the full network structure.
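The network-construction pipeline can be sketched as follows. This is an illustrative reconstruction under stated assumptions (a hypothetical 163 × 14 binary response matrix), not the authors' code; the regression adjustment for gender and year-group confounds is sketched separately under Key Findings.

```python
import numpy as np
import networkx as nx

# Hypothetical 163 x 14 binary matrix: 1 = concern selected by the student.
rng = np.random.default_rng(1)
responses = rng.integers(0, 2, size=(163, 14)).astype(float)

# Per-respondent L2 normalization controls for how many concerns each
# student selected before co-occurrences are accumulated.
norms = np.linalg.norm(responses, axis=1, keepdims=True)
normed = np.divide(responses, norms, out=np.zeros_like(responses), where=norms > 0)

# Concern-by-concern co-occurrence matrix (diagonal zeroed out).
cooc = normed.T @ normed
np.fill_diagonal(cooc, 0.0)

# Prune edges below the 75th percentile to keep only salient co-occurrences.
threshold = np.percentile(cooc[cooc > 0], 75)
adjacency = np.where(cooc >= threshold, cooc, 0.0)

# Eigenvector centrality quantifies each concern's relative importance.
G = nx.from_numpy_array(adjacency)
centrality = nx.eigenvector_centrality_numpy(G, weight="weight")
```

Pruning at the 75th percentile is a sparsification choice: it discards weak, likely noisy co-occurrences so that the visualized network emphasizes the strongest pairings.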

Key Findings

The study revealed that students were most concerned about systemic and personal risks: AI inaccuracy, bias, using AI for cheating, and reduced learning, critical thinking, and creativity. Copyright violations and inconsistent school policies received the least concern, suggesting students prioritize direct impacts on learning quality and fairness over institutional and legal issues.

Students' AI competence significantly shaped their risk perception patterns. Lower-competence students reported more concerns across nearly all categories and emphasized personal and learning-related risks. They showed stronger connections between reduced creativity, reduced critical thinking, reduced learning, and fear-based concerns such as misuse and addiction. These students were also more likely to connect creativity-related concerns with fairness and resource issues. The most central concerns in the low-competence network were reduced creativity (eigenvector centrality, EC = -0.53) and reduced critical thinking (EC = -0.49), along with fear of misuse (EC = -0.38), resource consumption (EC = -0.34), and unfair advantage (EC = -0.26); note that centralities follow the paper's signed reporting convention, with negative values indexing the low-competence network and positive values the high-competence network.

Higher-competence students focused more on systemic and institutional risks. They showed stronger associations between AI cheating and bias, bias and inaccuracy, AI cheating and teacher rules, and connections between bias and fairness-related concerns. The most central concerns in the high-competence network were AI cheating (EC = 0.53), bias (EC = 0.52), teacher rules (EC = 0.36), and inaccuracy (EC = 0.34). These students perceived systemic and institutional issues as tightly interconnected and reported fewer concerns overall compared to their lower-competence peers.

The co-occurrence network analysis revealed that the strongest positive associations (increasing with competence) included AI_cheating—Biased, Biased—Inaccurate, AI_cheating—Teachers_rules, and connections between bias and fairness concerns. The strongest negative associations (decreasing with competence) included Less_creative—Less_critical, Less_creative—Resources, Less_creative—Unfair_adv, and connections involving Fear_misuse with other concerns. These patterns indicate that higher-competence students frame AI risks in terms of system reliability and institutional fairness, while lower-competence students emphasize threats to personal cognitive development and potential for individual harm.
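One way to read these signed associations: for each concern pair, regressing the pairwise co-occurrence on competence, with gender and year group as covariates, yields a coefficient whose sign indicates whether that edge strengthens (positive) or weakens (negative) with competence. A hypothetical sketch of such a per-edge regression, not the authors' exact estimator:

```python
import numpy as np

def edge_slope(pair_cooc, competence, gender, year_group):
    """OLS coefficient of competence for one concern pair, adjusting for
    gender and year group. A positive slope means the edge strengthens
    with AI competence; a negative slope means it weakens."""
    X = np.column_stack([np.ones_like(competence), competence, gender, year_group])
    beta, *_ = np.linalg.lstsq(X, pair_cooc, rcond=None)
    return beta[1]  # the competence coefficient

# Synthetic illustration for a single pair (e.g., AI_cheating and Biased):
rng = np.random.default_rng(2)
n = 163
competence = rng.normal(size=n)
gender = rng.integers(0, 2, size=n).astype(float)
year_group = rng.integers(0, 3, size=n).astype(float)
pair_cooc = 0.3 * competence + rng.normal(scale=0.5, size=n)  # built-in positive trend
print(edge_slope(pair_cooc, competence, gender, year_group))  # ≈ 0.3
```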

Implications

The findings carry important implications for multiple educational stakeholders. At the classroom level, teachers play a central role in mediating student engagement with AI tools. For lower-competence students who connect AI use to fears of misuse, addiction, and reduced learning, teachers should scaffold AI use, create safe spaces for experimentation, and openly discuss both risks and opportunities. For higher-competence students with stronger concerns about cheating, bias, and institutional rules, teachers need clear and transparent guidelines on acceptable AI use alongside development of students' evaluative skills for critically assessing AI outputs. Teachers must move beyond simply allowing or prohibiting AI use and actively integrate it to enhance human creativity and reasoning. Since self-efficacy beliefs mediate AI risk awareness, pedagogical practices should help mitigate negative outcome expectancies while strengthening students' capabilities for responsible AI use.

At the institutional level, the findings highlight the need to explicitly incorporate AI literacy into curricula to support effective implementation of AI in education (AIED). Students' perceptions of their AI-related skills shape how they perceive AI risks, aligning with prior research linking AI self-efficacy and risk awareness. Perception of risk depends on knowledge; without systematic and equitable teaching of AI literacy, disparities in knowledge and skills may widen, leading to unequal opportunities to benefit from AI tools (the "AI divide"). This echoes earlier debates about introducing the internet and search engines into schools: initial fears of plagiarism and misinformation eventually led to the integration of digital and information literacy into curricula. A similar transition is now needed for AI, where AI literacy should be integrated into curricula by considering students' moral development and treating it as a transversal skill applicable across disciplines.

At the policy level, policymakers should advance curriculum reform and teacher agency while promoting AI and data literacies and ethical practices to ensure equitable and effective AIED adoption. The EU Artificial Intelligence Act stresses the importance of ensuring sufficient AI literacy among stakeholders, enabling informed decisions about AI systems. Failing to institutionalize AI literacy may leave education systems unprepared for future technological shifts. Research-based pedagogical interventions and guided AI use across educational contexts are essential for developing competence that enables safe, critical, and creative AI application.

Limitations

Several limitations warrant consideration. First, data were collected from a single Finnish upper secondary school engaged in an AI development project, which may limit generalizability to other educational contexts and countries. The school's involvement in the development project may have provided students with more AI exposure than typical, potentially influencing their competence and risk perceptions. Second, AI competence was measured through self-reports, which may not accurately reflect students' actual skills or AI usage practices. Self-assessed competence can be influenced by confidence, experience, and social comparison rather than objective ability. Third, the cross-sectional design prevents causal interpretations regarding the relationship between competence and risk perceptions; it remains unclear whether competence shapes risk perception or whether pre-existing risk orientations influence competence development. Finally, the 14 risks outlined are not exhaustive, and other risk perceptions may exist or emerge as AI technologies and their educational applications evolve.

Future Directions

Future research should investigate how risk perceptions among diverse stakeholders in education intersect and influence educational experiences. Particular attention could be given to the roles of AI competence and risk perceptions in relation to student agency, as competence may determine whether students approach AI as a supportive tool or perceive it as limiting their autonomy and learning. Examining how students balance risks against potential opportunities can provide insights into conditions under which AI is adopted constructively. Longitudinal studies could track how risk perceptions evolve as students gain AI experience and formal literacy instruction, helping determine whether the competence-risk perception relationship is causal. Multi-national comparative studies would illuminate how cultural factors, educational systems, and policy contexts shape the competence-risk relationship. Research should also examine how teachers' AI competence and risk perceptions influence classroom integration and how institutional policies mediate the relationship between student competence and risk awareness. Finally, intervention studies testing different AI literacy approaches could determine which instructional strategies most effectively balance competence development with appropriate risk awareness.

Title and Authors

Title: "Artificial Intelligence Competence of K-12 Students Shapes Their AI Risk Perception: A Co-occurrence Network Analysis"

Authors: Ville Heilala, Pieta Sikström, Mika Setälä, and Tommi Kärkkäinen (University of Jyväskylä, Finland)

Published On: December 2025 (submitted to conference)

Published By: Submitted to ACM conference proceedings (the listed venue, Conference'17, Washington, DC, USA, is the ACM template placeholder)
