Article Summary
Oct 13, 2025

A validated 19-item scale measures K-12 English teachers' ChatGPT-based digital literacy across four dimensions: awareness, competence and application, social responsibilities, and professional development. It provides educators and researchers with the first domain- and context-specific tool to assess English teachers' readiness for AI-assisted instruction, revealing high awareness and ethical consciousness while indicating a need for targeted support in skill-specific competencies and professional growth opportunities.

Objective

This study aimed to develop and validate a comprehensive measurement tool to evaluate K-12 English teachers' ChatGPT-based digital literacy (ETCDL), addressing a critical gap in instruments that consider both the unique features of English instruction at primary and secondary school levels and the characteristics of ChatGPT-assisted instruction. The research sought to create a domain-specific and context-specific scale grounded in the Technological Pedagogical Content Knowledge (TPACK) framework, Theory of Planned Behaviour (TPB), and the Digital Literacy of Teachers framework to assess prospective and in-service English teachers' digital literacy in ChatGPT-enhanced contexts.

Methods

The researchers employed a scale development and validation methodology involving 288 Chinese K-12 English teachers from various provinces, including 165 pre-service teachers (57.3%) and 123 in-service teachers (42.7%). Of the in-service teachers, 44 taught at primary (35.7%), 56 at junior high (45.5%), and 23 at senior high (18.7%) schools.

Scale Development Process: The initial item pool was created by adapting validated questionnaires from Ajzen (1991), Archambault and Crippen (2009), Luo and Zou (2024a), and Liu and Wang (2024), considering ChatGPT characteristics, K-12 English teaching nature, and theoretical frameworks. The process involved:

  1. Initial 32-item pool based on theoretical frameworks
  2. Expert focus group review (two researchers) reducing to 25 items
  3. Semi-structured interviews with eight English teachers for face validity, resulting in final 19 items across four dimensions

Data Collection and Analysis:

  • Online survey distribution via Wenjuanxing platform during Fall 2024
  • Multiple validation approaches:
    • First-order Confirmatory Factor Analysis (CFA) via AMOS 26.0
    • Second-order CFA to confirm composite structure
    • Exploratory Structural Equation Modeling (ESEM) via Mplus 8.4 for cross-validation
  • Model fit evaluated using the chi-square to degrees of freedom ratio, CFI, IFI, TLI, GFI, SRMR, and RMSEA
  • Reliability assessed through Cronbach's α coefficients
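The reliability step above can be sketched in code. The following is a minimal Python illustration of Cronbach's α on a simulated (respondents × items) matrix of 5-point Likert responses; the data, random seed, and effect sizes are invented for illustration and are not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses: 288 respondents, 19 items, one shared latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=(288, 1))
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(288, 19))), 1, 5)
print(round(cronbach_alpha(responses), 2))
```

The same function applied per dimension (on each subset of item columns) would reproduce the subscale reliabilities the study reports.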

The four dimensions were theoretically constructed as:

  • ChatGPT-based digital awareness: Built from TPB's "intention"
  • ChatGPT-based digital competence and application: Built from TPACK's TPK, TCK, and TPCK
  • ChatGPT-based digital social responsibilities: Built from TPB's "subjective norms" and "attitude"
  • ChatGPT-based professional development: Built from TPB's "perceived behavioral control"

Key Findings

Psychometric Properties:

  • First-order CFA: Demonstrated satisfactory model fit (χ²=291.927; CFI=0.90; IFI=0.91; GFI=0.91; SRMR=0.05)
  • Second-order CFA: Confirmed acceptable fit (χ²=281.45; CFI=0.90; IFI=0.91; GFI=0.91; SRMR=0.04)
  • ESEM Model: Showed acceptable fit (χ²=197.83, RMSEA=0.06 [90% CI: 0.046-0.070], CFI=0.95, TLI=0.91, SRMR=0.03)
  • Reliability: Overall Cronbach's α=0.83; dimension-specific α ranged from 0.72 to 0.85
  • Factor loadings: Ranged from 0.53 to 0.86 across all items and dimensions
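The fit indices above follow standard formulas based on the model and null-model chi-square statistics. A minimal Python sketch is shown below; the degrees of freedom (146 and 171) and the null-model chi-square (1500) are hypothetical placeholders, since the summary does not report them:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation (0 when chi2 <= df)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2: float, df: int, chi2_null: float, df_null: int) -> float:
    """Comparative Fit Index relative to the independence (null) model."""
    num = max(chi2 - df, 0.0)
    den = max(chi2_null - df_null, chi2 - df, 0.0)
    return 1.0 - num / den

# Reported first-order CFA chi-square with assumed df and null-model values.
print(round(rmsea(291.927, 146, 288), 3))
print(round(cfi(291.927, 146, 1500.0, 171), 3))
```

Values at or below about 0.08 for RMSEA and at or above about 0.90 for CFI are conventionally read as acceptable fit, which is the standard the study applies.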

Dimensional Structure: The validated 19-item ETCDL scale comprises four dimensions:

  1. ChatGPT-based Digital Awareness (3 items, α=0.74):
    • Mean values: 4.14-4.22 (out of 5)
    • High scores indicate strong enthusiasm and recognition of ChatGPT's value
    • Teachers demonstrated positive attitudes toward AI integration
  2. ChatGPT-based Digital Competence and Application (11 items, α=0.72):
    • Mean values: 3.82-4.23
    • Relatively strong pedagogical understanding with variations
    • Lower confidence in teaching listening/speaking (3.83-3.89) compared to other skills
    • Lower confidence in integrating linguistic concepts with instructional approaches (3.82)
    • Highlights need for domain-specific training
  3. ChatGPT-based Digital Social Responsibilities (3 items, α=0.81):
    • Mean values: 4.00-4.35
    • Consistently high ratings across ethical awareness, legal compliance, and privacy protection
    • Strong awareness of moral responsibilities toward students
    • Reflects sensitivity to data security and sustainable AI use
  4. ChatGPT-based Professional Development (2 items, α=0.85):
    • Mean values: 3.93-4.08
    • Teachers recognize ChatGPT's potential for growth
    • Indicates gap between recognition and actual integration into professional practice
    • Suggests need for institutional support and targeted training

Cross-Dimensional Insights:

  • Teachers showed strong enthusiasm and ethical awareness but uneven competence across language skills
  • While digital awareness is high, actual application varies by teaching domain
  • Social responsibility scores exceeded competence scores, indicating prioritization of ethical considerations
  • Professional development dimension revealed underutilization of ChatGPT for teacher growth

Implications

Theoretical Contributions:

  1. Framework integration: Successfully combined TPACK, TPB, and Digital Literacy of Teachers frameworks to create a multidimensional conceptualization extending beyond technical knowledge to include social responsibility and professional growth.
  2. Domain specificity: First instrument to capture digital literacy specifically for K-12 English teaching with AI tools, recognizing that technology integration varies by discipline and educational context.
  3. Advanced methodology: First study in English language teaching AI contexts to employ ESEM alongside first- and second-order CFA, addressing limitations of traditional factor analysis by accounting for cross-loadings.
  4. Holistic literacy model: Positions AI literacy as multilayered construct encompassing awareness, technical competence, pedagogical application, ethical considerations, and professional development.
  5. Cross-cultural research foundation: Enables international comparison of AI-assisted digital literacy across cultural and institutional contexts.

Practical Applications:

  1. Assessment tool: Provides researchers, educators, and administrators with validated instrument to measure and understand K-12 English teachers' digital literacy in AI settings.
  2. Teacher training design: Informs curriculum design and resource allocation for professional development programs, highlighting need to address:
    • Skill-specific competencies (particularly listening and speaking)
    • Ethical implications and legal compliance
    • Creative uses of ChatGPT for research and instructional innovation
  3. Policy development: Helps inform policymaking related to AI-integrated English teaching at K-12 level, particularly regarding resource allocation and support structures.
  4. Targeted interventions: Identifies specific areas needing support:
    • Domain-specific training for different language skills
    • Guidance on evaluating and cross-checking AI-generated materials
    • Strategies for adapting teaching in AI-assisted contexts
    • Support for conducting educational research with AI tools
  5. Peer collaboration: Highlights importance of peer support systems, especially in under-resourced contexts through mentoring, collaborative workshops, and professional learning communities.
  6. Longitudinal tracking: Enables evaluation of how teachers' AI-assisted digital literacy evolves over time, supporting assessment of training programs and policy interventions.

Limitations

The researchers acknowledge several important limitations:

  • Sample restrictions: Study involved only 288 Chinese K-12 English teachers, limiting generalizability. Future research should include diverse economies, cultural contexts, educational settings, teaching experience levels, academic backgrounds, age groups, and gender representations.
  • Validity scope: While construct validity was established, predictive and concurrent validity remain unexplored. Future studies should investigate whether ETCDL can reliably predict psychological and emotional factors (e.g., psychological demands in ChatGPT-assisted instruction).
  • Methodology limitations: Sole reliance on self-reported questionnaire data may not fully capture true perceptions or behaviors. Qualitative methods (classroom observations, interviews, teaching reports) could provide deeper insights into actual engagement with AI tools.
  • Demographic homogeneity: All participants were native Chinese speakers, and male teachers represented only 33.7% of the sample, limiting generalizability across linguistic and gender groups.
  • Cultural specificity: Study conducted within Chinese educational context may not fully translate to other cultural or institutional settings without adaptation.
  • Temporal constraints: Data collected during single semester (Fall 2024) provides snapshot rather than longitudinal understanding of digital literacy development.
  • Self-report bias: Self-reported data may be subject to social desirability bias or inflated self-assessment, particularly regarding ethical awareness and competence.

Future Directions

The researchers recommend several avenues for continued investigation:

  1. Expanded sampling: Include teachers from broader range of economies, cultural contexts, educational settings, and demographic groups (diverse teaching experience, academic backgrounds, age, gender) to enhance generalizability.
  2. Validity expansion: Investigate predictive and concurrent validity by examining relationships between ETCDL scores and psychological/emotional factors, teaching effectiveness, and student learning outcomes.
  3. Mixed methods approach: Combine quantitative scale administration with qualitative methods (classroom observations, individual interviews, teaching reports) for nuanced understanding of how teachers engage with AI tools.
  4. Longitudinal studies: Track evolution of teachers' digital literacy over extended periods to evaluate effectiveness of training programs and policy interventions.
  5. Comparative research: Conduct cross-national investigations comparing AI-assisted digital literacy across cultural and institutional contexts.
  6. Behavioral observation: Study actual classroom practices rather than just self-reported intentions and competencies to understand implementation gaps.
  7. Intervention studies: Test effectiveness of targeted professional development programs addressing identified weaknesses (skill-specific competencies, professional development applications).
  8. Scale adaptation: Adapt and validate ETCDL for other subject areas, educational technologies, or AI tools beyond ChatGPT.
  9. Outcome research: Examine relationships between teacher digital literacy scores and student learning outcomes in AI-assisted English instruction.

Title and Authors: "K-12 English Teachers' ChatGPT-Based Digital Literacy: Scale Development and Validation" by Shuqiong Luo (Jinan University, China) and Di Zou (The Hong Kong Polytechnic University)

Published On: 2025 (received March 30, 2025; revised September 4, 2025; accepted September 15, 2025)

Published By: European Journal of Education (John Wiley & Sons Ltd.), Volume 60, Issue 4, Special Issue: The Role of Artificial Intelligence in Education
