This collaborative workshop framework engages educators in developing constructionist criteria for evaluating AI educational tools, helping them identify which technologies genuinely support student knowledge construction.
Objective: This workshop engaged educators, researchers, and other education experts in collaboratively developing criteria for evaluating AI educational tools from a constructionist learning perspective. It aimed to help participants determine which AI tools enable constructionist learning for K-12 students and to provide a framework for critically assessing the usefulness of AI educational technology in knowledge construction environments.
Methods: The study employed a hands-on, highly collaborative workshop format that drew upon participants' personal experience with constructionist tools. The methodology included several key components:
- Collaborative development of criteria to define effective constructionist tools based on participants' background knowledge
- Application of these criteria to analyze AI tools in educational contexts
- Hands-on exploration and tinkering with AI tools of participants' choice to evaluate their affordances for constructionist learning
- Peer review and sharing of tool analyses among participants
- Support from MIT curriculum designers and peers throughout the evaluation process
- Preparation for a larger-scale workshop with computer science educators who may be new to constructionism
The workshop provided an overview of AI and constructionism while allowing ample time for practical exploration and collaborative evaluation of various AI educational tools.
Key Findings: Because this is a workshop proceedings paper focused primarily on methodology rather than empirical results, the key outcomes are procedural:
- Successful engagement of teachers, students, researchers, and education experts in critically evaluating AI tools through a constructionist lens
- Development of peer-reviewed assessments of AI educational tools
- Creation of criteria for evaluating AI tools' capacity to support student knowledge construction
- Enhanced understanding among participants of how AI can or cannot be used constructively in educational settings
- Preparation of a framework that can be scaled up for broader implementation with computer science educators
- Generation of practical insights for using AI as a tool for constructionist learning environments
Implications: This work contributes significantly to the field of AI in education by addressing a critical gap in evaluation frameworks for AI educational tools. The constructionist approach offers an alternative to more transmission-based or automation-focused uses of AI in education. By focusing on how AI tools can support students in building their own knowledge rather than simply delivering content, this framework aligns with pedagogical approaches that emphasize active learning, creativity, and student agency. The collaborative development of evaluation criteria ensures that the framework is grounded in practical educator experience while maintaining theoretical rigor. This approach has implications for educational technology selection, curriculum design, and teacher professional development in the age of AI.
Limitations: As a workshop proceedings paper, this document has several inherent limitations:
- Limited empirical data or detailed findings from the workshop implementation
- Brief format that doesn't allow for comprehensive analysis of results or participant feedback
- Lack of specific details about the criteria developed or their effectiveness in practice
- No longitudinal data about the impact of the framework on participants' subsequent AI tool selection or classroom implementation
- Limited information about participant demographics, backgrounds, or representativeness
- Absence of quantitative measures of workshop effectiveness or participant learning outcomes
Future Directions: The paper suggests several areas for future development:
- Piloting the larger-scale workshop directly with computer science educators who may be new to constructionism
- Testing and refining the developed criteria through broader implementation
- Conducting more detailed studies on the effectiveness of the evaluation framework in real classroom settings
- Expanding the workshop model to other educational contexts and subject areas
- Developing additional resources and training materials based on the workshop outcomes
- Investigating the long-term impact of constructionist AI tool evaluation on teaching practices and student learning outcomes
- Creating a repository of AI tool reviews using the developed criteria for broader educator access
Title and Authors: "Generating Constructionist Criteria for AI Educational Technology: How to Evaluate AI Tools to Help Students Build Knowledge" by Lydia Guterman, Sarah Wharton, and Mary Cate Gustafson-Quiett.
Published in: Constructionism Conference Proceedings, August 2025, pp. 557–558
This workshop represents an important step toward developing principled approaches to AI tool evaluation in education, emphasizing the importance of pedagogical frameworks in guiding technology selection and implementation. The constructionist perspective offers valuable insights for ensuring that AI tools support meaningful learning experiences rather than simply automating educational processes.