The main goal of this study was to develop and validate a 25-item multiple-choice AI literacy test for Hong Kong secondary school students (grades 7-9).
Methods:
- Developed 25 multiple-choice test items based on an AI curriculum
- Conducted a pilot test with 144 students from six secondary schools
- Used Item Response Theory (IRT) with a Markov Chain Monte Carlo (MCMC) algorithm to estimate item characteristics (difficulty and discrimination); see the modeling sketch after this list
- Evaluated test reliability using Kuder-Richardson Formula 20 (KR-20)
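For concreteness, here is a minimal sketch of how such an IRT analysis could be set up, assuming a two-parameter logistic (2PL) model fitted with the PyMC library; the summary does not name the exact IRT model or software used in the study, so both choices, and the placeholder data, are assumptions for illustration only.

```python
import numpy as np
import pymc as pm

# Placeholder response matrix: 144 students x 25 items, 1 = correct.
# The study's raw responses are not available here.
rng = np.random.default_rng(0)
responses = rng.binomial(1, 0.44, size=(144, 25))
n_students, n_items = responses.shape

with pm.Model() as irt_2pl:
    # Student ability on a standard-normal scale (for identifiability)
    theta = pm.Normal("theta", mu=0.0, sigma=1.0, shape=n_students)
    # Item discrimination (kept positive) and item difficulty
    a = pm.LogNormal("a", mu=0.0, sigma=0.5, shape=n_items)
    b = pm.Normal("b", mu=0.0, sigma=1.0, shape=n_items)

    # 2PL: P(student i answers item j correctly) = logistic(a_j * (theta_i - b_j))
    p = pm.math.invlogit(a * (theta[:, None] - b))
    pm.Bernoulli("obs", p=p, observed=responses)

    # MCMC sampling via NUTS; draw counts are illustrative settings
    trace = pm.sample(draws=1000, tune=1000, chains=2, random_seed=1)

# Posterior means of the item parameters: a low difficulty (b) alongside an
# adequate discrimination (a) marks an item as easy but still discriminating
a_hat = trace.posterior["a"].mean(dim=("chain", "draw")).values
b_hat = trace.posterior["b"].mean(dim=("chain", "draw")).values
```

In an analysis along these lines, items flagged as "too easy" would show low posterior difficulty (b) estimates while their discrimination (a) estimates remained acceptable.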
Key Findings:
- Overall mean score was 43.97 out of 100
- KR-20 reliability coefficient was 0.68, indicating moderate internal consistency (see the KR-20 computation sketch after this list)
- All items showed satisfactory discrimination, i.e., they distinguish higher-ability from lower-ability students
- Five items were identified as too easy for the student population
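For reference, KR-20 for dichotomously scored items is KR-20 = (k / (k - 1)) * (1 - Σ p_i·q_i / σ²_X), where k is the number of items, p_i is the proportion of students answering item i correctly, q_i = 1 - p_i, and σ²_X is the variance of total scores. A minimal sketch in Python follows; the function name and the placeholder data are illustrative, not from the study.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """KR-20 reliability for a binary (students x items) response matrix."""
    k = responses.shape[1]                         # number of items
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # sample variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Illustrative call on a placeholder 144 x 25 binary matrix,
# mirroring the pilot's dimensions (not its actual data)
rng = np.random.default_rng(0)
responses = rng.binomial(1, 0.44, size=(144, 25))
print(f"KR-20 = {kr20(responses):.2f}")
```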
Implications: The study provides a foundation for developing a standardized AI literacy test for secondary students, which could help assess learning outcomes and improve AI education curricula.
Limitations:
- Small sample size
- Test reliability (KR-20 = 0.68) fell below the commonly cited threshold of 0.8
- Some items too easy for target population
Future Directions:
- Revise overly simple items to increase difficulty while maintaining discrimination
- Conduct a further pilot test with a different sample of students
- Regularly update test content to align with AI advancements
Title and Authors: "A Pilot Study on the Development and Validation of AI Literacy Test Items for Grade 7 to Grade 9 Students" by Yifan Chen, Helen Meng, King Woon Yau, Irwin King, Ching Sing Chai, Savio Wai-Ho Wong, Thomas K. F. Chiu, and Yeung Yam
Published On: 2024 (specific date not provided)
Published By: International Symposium on Educational Technology (ISET)