By Astrid Carolus, Martin J. Koch, Samantha Straka, Marc Erich Latoschik, Carolin Wienrich
Overview
The Meta AI Literacy Scale (MAILS) is a questionnaire designed to measure the ability to interact with, understand, and use AI effectively. Grounded in extensive research, MAILS identifies five key areas of AI literacy, giving students, professionals, and educators alike a structured way to assess and build the skills needed to succeed in an AI-enhanced world.
AI-related job postings in the United States grew from just over 0.50% of all postings in 2014 to 2.05% in 2022 (Maslej et al., 2023).
More AI-based technologies are being developed: AI patent filings grew at an average annual rate of 76.6% between 2015 and 2021 (Zhang, Maslej, et al., 2022).
Understand AI: Gain fundamental knowledge of AI concepts and functions.
Use & Apply AI: Learn how to effectively use AI tools in everyday scenarios.
Detect AI: Develop the skill to identify AI-driven systems and interactions.
AI Ethics: Weigh the societal and ethical implications of AI use.
Create AI: Explore how to design and program AI applications (an advanced skill outside core literacy).
Psychological Competencies in AI Literacy
Self-Efficacy: Confidence in learning and solving problems with AI. This includes two key components:
Problem-Solving: The ability to troubleshoot AI-related issues independently, ensuring smooth interaction with AI systems.
Example: A customer support representative resolving issues with a malfunctioning chatbot by identifying and adjusting its training data.
Mastery of this skill minimizes disruptions and enhances productivity in AI-reliant workflows.
Continuous Learning: A commitment to staying updated on new AI developments and integrating them into practical use.
Example: A software developer regularly exploring the latest AI frameworks to enhance application features.
Continuous learning ensures users remain agile and effective in a rapidly evolving AI landscape.
Self-Competency: The ability to regulate emotions and maintain critical thinking when engaging with AI systems. This includes two key components:
Emotion Regulation: Managing feelings like frustration, anxiety, or overconfidence during AI interactions.
Example: A teacher using a flawed AI grading tool maintains composure while addressing the inaccuracies and seeking better solutions.
This skill fosters a constructive and balanced approach to AI-driven tasks, even under pressure.
AI Persuasion Literacy: Recognizing and mitigating the subtle influences AI systems may have on personal decisions.
Example: Avoiding unnecessary purchases by critically assessing algorithmically curated product recommendations in an online store.
This competency ensures informed and autonomous decision-making, safeguarding users from undue AI influence.
AI literacy parallels Bloom’s taxonomy in its general configuration of skills (Fig. 1).
In the present sample, the average age was 32.13 years (SD = 11.66, range 18 to 72 years). Most participants lived in Germany (77.00%) or Austria (7.00%). 145 participants identified as female (48.33%), 152 as male (50.67%), and 3 as diverse (1.00%). Participants indicated their experience with AI by rating three statements ("I use artificial intelligence at work", "I use artificial intelligence at school/university", and "I use artificial intelligence in my everyday life") on an 11-point Likert scale (0 = "never or only very rarely" to 10 = "very often"). Almost one-fifth of the participants (19.67%) reported never using AI in their everyday lives, at school/university, or at work. The average scores were M = 1.84 (SD = 2.79) for work, M = 1.21 (SD = 2.31) for school/university, and M = 3.73 (SD = 3.03) for everyday life, indicating rather low AI use overall. Participants were also asked to report the AI they use at work (Fig. 2). In total, they reported 158 AI-based systems, ranging from devices with AI, to programs that include AI functions, to the autonomous implementation of machine learning processes. Ten AI frameworks and ten AI providers and environments (e.g., Amazon, Watson) were reported.
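The per-item averages above are ordinary means and standard deviations over the 0–10 Likert responses. As a minimal sketch of that computation (the response values below are invented for illustration, not the study's raw data):

```python
import statistics

# Hypothetical 11-point Likert responses (0 = "never or only very rarely",
# 10 = "very often") for the three AI-use items. Illustrative values only.
responses = {
    "work":     [0, 0, 2, 5, 1, 0, 3, 0, 1, 7],
    "school":   [0, 1, 0, 0, 4, 0, 2, 0, 0, 5],
    "everyday": [2, 5, 0, 7, 3, 4, 1, 6, 2, 8],
}

def describe(scores):
    """Return the mean and sample standard deviation, the M (SD) format
    used in the text."""
    return statistics.mean(scores), statistics.stdev(scores)

for item, scores in responses.items():
    m, sd = describe(scores)
    print(f"{item}: M = {m:.2f} (SD = {sd:.2f})")
```

Note that `statistics.stdev` computes the sample (n − 1) standard deviation, which is the convention survey studies typically report.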
The AI Literacy Questionnaire and its associated core competencies are designed to be modular, making them highly adaptable to various educational, professional, and societal contexts. Each component of the framework—such as understanding AI, applying AI tools, detecting AI systems, evaluating AI’s impact, and creating AI solutions—can be used independently or in combination, depending on the specific needs of the setting.
In Education: Educators can focus on foundational competencies, like understanding and applying AI, to build students’ confidence and knowledge, while advanced learners might engage in evaluating ethical considerations or designing AI projects.
In Professional Training: Organizations can tailor the questionnaire to assess employee readiness for AI integration in the workplace, emphasizing practical applications and critical evaluation skills.
In Community Outreach: Facilitators can use select competencies, such as detecting AI and understanding its societal implications, to foster awareness and responsible use of AI among the general public.
This modular approach ensures that the framework is flexible and scalable, supporting diverse learning goals, skill levels, and contexts while maintaining a structured pathway for building AI literacy.
Note: Snapshot for Reference Only
The questionnaire above is shown as a static snapshot for display and reference purposes only; its embedded links and interactive features are not functional. For the full, interactive version, see the appendix.
The study successfully developed and validated the Meta AI Literacy Scale (MAILS) to measure AI literacy and related psychological competencies essential for the sustained use of AI. Key findings include:
Core Factors Identified: "Use & Apply AI," "Know & Understand AI," "Detect AI," and "AI Ethics" were confirmed as components of AI literacy.
Separate Dimension for Create AI: The ability to create AI emerged as a distinct factor, not inherently part of AI literacy.
Dual Psychological Competencies: Psychological skills were categorized into "AI Self-efficacy" (problem-solving and learning) and "AI Self-competency" (emotion regulation and resistance to persuasion).
Positive Correlations: AI literacy dimensions correlated positively with attitudes toward AI and willingness to use technology, except for "Create AI," which showed lower correlations.
Practical and Theoretical Contributions: The instrument provides a robust measurement tool for researchers, practitioners, and educators, with implications for better understanding and implementing AI literacy.
Future research is recommended to validate the scale against other measures of AI literacy, explore predictive validity, and expand its application across diverse contexts.
AI Literacy Quest
Engage with the Meta-AI Literacy chatbot to explore and apply key AI literacy concepts, meeting the following learning outcomes:
Understand AI: Identify fundamental concepts and real-world applications.
Use & Apply AI: Demonstrate practical uses of AI tools.
Detect AI: Evaluate whether interactions are AI-driven or human-based.
AI Ethics: Analyze the ethical implications of AI in real-world scenarios.
Evaluate AI: Critically assess AI’s strengths, limitations, and trustworthiness.
Create AI: Generate ideas for simple AI solutions to address real-world challenges.
Participants will engage in a 15-minute interactive quest with the Meta-AI Literacy chatbot, working through six stages aligned with the learning outcomes above. Each stage involves solving a problem, answering questions, or reflecting on a scenario, culminating in a group discussion to reinforce learning.
Stage 1: Understand AI (2 minutes)
Chatbot Prompt: "AI is transforming our world. Can you name two examples of how AI is used in daily life and explain their benefits?"
Task: Participants discuss and submit answers (e.g., "Google Translate improves communication, Spotify recommends songs").
Outcome: Build foundational knowledge of AI concepts.
Stage 2: Use & Apply AI (2 minutes)
Chatbot Prompt: "You’re tasked with improving your team's productivity. What AI tool would you choose and why?"
Task: Groups select an AI tool and justify their choice (e.g., "AI project management tool to automate task tracking").
Outcome: Understand practical applications of AI tools.
Stage 3: Detect AI (2 minutes)
Chatbot Prompt: "In which scenarios is AI being used? (A) A customer support chatbot. (B) A handwritten thank-you note. (C) A recommendation on Amazon."
Task: Participants identify AI-driven interactions and justify their reasoning.
Outcome: Sharpen skills in distinguishing AI from non-AI systems.
Stage 4: AI Ethics (2 minutes)
Chatbot Prompt: "Imagine a facial recognition system used for school security. What ethical concerns might arise, and how could they be mitigated?"
Task: Groups brainstorm potential issues (e.g., privacy concerns, bias) and propose solutions.
Outcome: Develop critical thinking about ethical implications.
Stage 5: Evaluate AI (2 minutes)
Chatbot Prompt: "An AI system claims 95% accuracy in predicting student performance. What factors would you evaluate to decide if it’s trustworthy?"
Task: Participants discuss factors like data quality, transparency, and real-world testing.
Outcome: Practice evaluating AI reliability, strengths, and limitations.
Stage 6: Create AI (2 minutes)
Chatbot Prompt: "Design an AI tool to help students learn math. What features would it have, and how would it work?"
Task: Groups propose a basic idea for an AI tool (e.g., "A chatbot that adapts to individual learning styles using gamified quizzes").
Outcome: Encourage creativity and innovation in designing AI solutions.
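The six stages above follow one repeating pattern: announce the stage, pose the chatbot prompt, collect a response. A minimal sketch of that flow as a data-driven loop (the actual Meta-AI Literacy chatbot is out of scope here, so participant input comes through a pluggable `ask` callback; prompts are abbreviated from the handout):

```python
# Each stage: (name, duration in minutes, chatbot prompt).
STAGES = [
    ("Understand AI", 2, "Name two examples of how AI is used in daily life and explain their benefits."),
    ("Use & Apply AI", 2, "What AI tool would you choose to improve your team's productivity, and why?"),
    ("Detect AI", 2, "Which scenarios use AI: a support chatbot, a handwritten note, an Amazon recommendation?"),
    ("AI Ethics", 2, "What ethical concerns might facial recognition for school security raise?"),
    ("Evaluate AI", 2, "What factors decide whether a 95%-accurate AI predictor is trustworthy?"),
    ("Create AI", 2, "Design an AI tool to help students learn math. What features would it have?"),
]

def run_quest(stages=STAGES, ask=input):
    """Walk participants through each stage and collect their answers."""
    answers = {}
    for name, minutes, prompt in stages:
        print(f"--- {name} ({minutes} min) ---")
        answers[name] = ask(prompt + " ")
    return answers
```

The `ask` parameter makes the loop testable and lets a facilitator swap the console `input` for a real chat interface; the 12 stage minutes plus the 3-minute reflection add up to the 15-minute session.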
Group Reflection & Discussion (3 minutes)
How did using the Meta-AI Literacy chatbot enhance your understanding of AI concepts? What was one insight that surprised you?
How could you apply what you learned to your work or studies?
References
Carolus, A., Koch, M., Straka, S., Latoschik, M., & Wienrich, C. (2023). MAILS - Meta AI literacy scale: Development and testing of an AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies. Computers in Human Behavior: Artificial Humans, 1, 100014. https://doi.org/10.1016/j.chbah.2023.100014
Maslej, N., Fattorini, L., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Ngo, H., Niebles, J. C., Parli, V., Shoham, Y., Wald, R., Clark, J., & Perrault, R. (2023). Artificial Intelligence Index Report 2023 (arXiv:2310.03715). arXiv. https://doi.org/10.48550/arXiv.2310.03715
Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
Zhang, D., Mishra, S., Brynjolfsson, E., Etchemendy, J., Ganguli, D., Grosz, B., Lyons, T., Manyika, J., Niebles, J. C., Sellitto, M., Shoham, Y., Clark, J., & Perrault, R. (2021). The AI Index 2021 Annual Report (arXiv:2103.06312). arXiv. https://doi.org/10.48550/arXiv.2103.06312