Article Summary
Jun 13, 2025


Teachers' acceptance of educational chatbots depends on both individual characteristics (trust, innovativeness) and sociocultural factors (subjective norms): more innovative teachers perceive chatbots as useful educational tools, and trust in chatbot reliability significantly influences adoption intentions.

Objective: The main goal of this study was to develop and test a comprehensive framework for understanding K-12 teachers' acceptance of educational chatbots by extending the Technology Acceptance Model (TAM) with three additional variables: trust, subjective norms, and perceived innovativeness. The research aimed to reveal key determinants influencing teachers' attitudes toward and intentions to use educational chatbots, addressing a significant gap in understanding teacher acceptance of AI-based technologies in K-12 educational settings.

Methods: The researchers employed a mixed-methods approach combining quantitative structural equation modeling (SEM) with qualitative interviews. Data were collected from 482 K-12 teachers in Turkey who had experience using the EBA Assistant chatbot, a nationally deployed AI-powered conversational agent used during the COVID-19 pandemic for distance education. The quantitative component used a 27-item online survey adapted from established technology acceptance research, measuring seven constructs: perceived ease of use, perceived usefulness, perceived innovativeness, trust, attitude, subjective norm, and intention to use. All items used a five-point Likert scale ranging from "strongly disagree" to "strongly agree." For the qualitative component, 54 volunteer teachers participated in follow-up interviews addressing three open-ended questions about their purposes, expectations, and concerns regarding educational chatbot use. The study employed purposive sampling to target participants with relevant chatbot experience. Statistical analyses were conducted in SPSS 22.0, with the SEM estimated in AMOS 18.0, while qualitative data underwent thematic analysis following Braun and Clarke's framework. Convergent validity was assessed through factor loadings, composite reliability, and average variance extracted (AVE).
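The convergent-validity statistics mentioned above follow standard formulas: composite reliability (CR) is (Σλ)² / [(Σλ)² + Σ(1 − λ²)] for standardized loadings λ, and AVE is the mean of the squared standardized loadings. The minimal Python sketch below shows how both can be computed; the example loadings and the `trust_loadings` name are illustrative assumptions, not values from the study, which ran its analyses in SPSS and AMOS.

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability (CR) from standardized factor loadings,
    treating each item's error variance as 1 - loading**2."""
    lam = np.asarray(loadings, dtype=float)
    sum_sq = lam.sum() ** 2
    error_var = (1.0 - lam ** 2).sum()
    return sum_sq / (sum_sq + error_var)

def average_variance_extracted(loadings):
    """Average variance extracted (AVE): mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return float((lam ** 2).mean())

# Hypothetical standardized loadings for a four-item trust construct
# (illustrative values only, not figures reported in the study).
trust_loadings = [0.78, 0.81, 0.74, 0.69]
print(f"CR  = {composite_reliability(trust_loadings):.2f}")      # >= 0.70 is the usual threshold
print(f"AVE = {average_variance_extracted(trust_loadings):.2f}")  # >= 0.50 is the usual threshold
```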

Key Findings: The SEM analysis revealed several significant relationships supporting the extended TAM framework. Perceived innovativeness positively influenced both perceived ease of use (β = 0.40) and perceived usefulness (β = 0.40), indicating that more innovative teachers find chatbots easier to use and more beneficial. Perceived usefulness significantly predicted both attitude toward chatbots (β = 0.11) and intention to use them (β = 0.17). Perceived ease of use positively affected trust (β = 0.36) and attitude (β = 0.66), suggesting that user-friendly chatbots foster greater confidence and positive feelings. Trust emerged as a crucial factor, positively influencing intention to use chatbots (β = 0.26), while subjective norms also significantly predicted usage intentions (β = 0.11). The model explained substantial variance in key outcomes: 48% of the variance in attitude toward chatbots, 42% in trust, and 44% in intention to use. Qualitative findings revealed six main purposes for chatbot use, with adaptive instruction being most prevalent, followed by promoting student engagement. However, teachers expressed concerns about data privacy (33 responses), plagiarism (27 responses), and potential reduction in critical thinking skills (21 responses). Teachers also highlighted expectations for improved accuracy and more sophisticated responses from chatbots.
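For a compact view of the quantitative results above, the reported standardized path coefficients and explained variance can be restated as the equations below (PI = perceived innovativeness, PEOU = perceived ease of use, PU = perceived usefulness, SN = subjective norm). Error terms and any paths not reported in this summary are omitted, so this is a paraphrase of the reported estimates rather than the authors' full structural model.

```latex
\begin{aligned}
\text{PEOU}      &= 0.40\,\text{PI} \\
\text{PU}        &= 0.40\,\text{PI} \\
\text{Trust}     &= 0.36\,\text{PEOU}                                      & R^2 &= 0.42 \\
\text{Attitude}  &= 0.66\,\text{PEOU} + 0.11\,\text{PU}                    & R^2 &= 0.48 \\
\text{Intention} &= 0.17\,\text{PU} + 0.26\,\text{Trust} + 0.11\,\text{SN} & R^2 &= 0.44
\end{aligned}
```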

Implications: The findings provide crucial insights for educational technology integration by demonstrating that chatbot acceptance involves complex interactions between individual and social factors. The study confirms that innovative teachers are more likely to embrace AI-based educational tools, suggesting that professional development should target teachers' willingness to experiment with new technologies. The central role of trust highlights the need for transparent, reliable chatbot systems that protect student privacy and provide pedagogically appropriate responses. The influence of subjective norms indicates that institutional support and peer encouragement significantly impact adoption decisions. These insights suggest that successful chatbot integration requires multi-level interventions addressing individual teacher characteristics, technological design features, and organizational culture. The research contributes to the broader understanding of AI acceptance in education by providing an empirically validated model that extends beyond traditional technology acceptance frameworks to include context-specific factors relevant to educational settings.

Limitations: Several important limitations affect the study's generalizability and scope. The research was conducted exclusively in Turkey with teachers using a specific national chatbot (EBA Assistant), potentially limiting applicability to other educational contexts, countries, or chatbot types. The self-reported nature of survey responses may introduce social desirability bias, where participants provide expected rather than authentic responses. The study focused solely on teachers' perspectives without considering student experiences and feedback, missing important stakeholder viewpoints that could affect overall implementation success. The research did not deeply explore specific sources of social influence or the nature of subjective norms affecting teacher behavior. Additionally, the study lacked objective measures of actual chatbot usage behavior, relying instead on self-reported intentions and experiences. The cross-sectional design prevents understanding of how acceptance factors might change over time with increased experience or technological improvements.

Future Directions: The researchers recommend several important avenues for continued investigation. Future studies should explore chatbot acceptance across diverse educational systems and cultural contexts to improve generalizability. Research should incorporate objective usage data alongside self-reported measures to better understand the relationship between intentions and actual behavior. Student perspectives and experiences should be investigated to provide a more comprehensive understanding of educational chatbot effectiveness. Longitudinal studies could track how teacher acceptance evolves with extended use and technological improvements. Future research should examine specific sources and types of social influence affecting teacher decisions, including peer networks, institutional policies, and community expectations. Studies should also investigate the integration of various AI-based educational tools beyond chatbots to understand broader patterns of AI acceptance in education. Research could explore the effectiveness of different professional development approaches for enhancing teacher readiness to adopt AI technologies, particularly focusing on building trust and addressing privacy concerns.

Title and Authors: "Teacher – Artificial Intelligence (AI) interaction: The role of trust, subjective norm and innovativeness in Teachers' acceptance of educational chatbots" by Ismail Celik, Hanni Muukkonen, and Signe Siklander.

Published on: 2025

Published by: Policy Futures in Education (Sage Publications)
