The use of artificial intelligence (AI) among young people is expanding rapidly. Across Europe, 88% of younger teens (ages 13–15) and 96% of older teens (ages 16–18) reported using AI tools for learning and creative tasks at least a couple of times a week, including for schoolwork, research, and translation.
AI can offer meaningful benefits for young people. It can assist with homework and studying, clarify complex concepts, provide more personalized learning experiences, and reduce stress related to academic workloads. When used with a clear pedagogical purpose, AI can strengthen critical thinking, creativity, and collaboration.
However, widespread adoption brings new challenges. AI enables the creation and sharing of false or misleading information at scale, increasing young people’s exposure to misinformation and disinformation, as well as to potential privacy risks. AI tools can also be misused to create harmful content, such as deepfakes and non-consensual intimate images (NCII), which can enable harassment or cyberbullying.
These opportunities and risks can be especially consequential during adolescence, a critical period for developing reasoning, judgment, and identity. The ways young people engage with AI during this formative stage may shape their long-term social, emotional, and intellectual well-being. Frequent reliance on AI could lead some teens to outsource critical thinking and social interaction to these tools. These dynamics underscore an urgent priority: building young people’s AI literacy so they can use AI in ways that support healthy development.
The Role of AI Literacy
As AI becomes increasingly embedded in daily life, AI literacy is essential. When young people understand how AI systems work, they are better prepared to make informed decisions about when and how to use AI tools. This awareness can strengthen their sense of agency and help them consider how their use affects their peers and communities.
Understanding the limitations of AI is especially important for social and emotional well-being. Although AI systems can generate sophisticated, human-like responses, they do so without comprehension, awareness, or intent. AI tools may support the development of social and emotional skills, but they cannot replace authentic relationships or the role of caring adults in young people’s lives. A clear understanding of AI’s capabilities and limitations reinforces the importance of real-world connections and helps young people maintain healthy boundaries with technology.
AI literacy also supports intellectual well-being. As AI-generated content becomes more prevalent and AI systems increasingly shape everyday decisions, young people must develop the skills to question, evaluate, and make informed judgments. Because these systems can replicate and amplify societal biases, students need to assess credibility, recognize bias and external influence, and evaluate AI outputs. Together, these abilities strengthen independent judgment and support responsible engagement with AI.
The draft AI Literacy Framework translates these priorities into practice. Grounded in ethical principles such as fairness, transparency, explainability, accountability, and respect for privacy, the framework guides how learners engage with AI tools before, during, and after use. Across its competences, learners evaluate the accuracy and relevance of AI systems, recognize their limitations, and consider how design choices shape outcomes for individuals, communities, and institutions. These competences deepen understanding of how AI use influences daily life, learning, relationships, and overall well-being.
In response to global stakeholder feedback, the final framework will place greater emphasis on metacognition, reflection, and responsible decision-making, strengthening AI literacy’s role in supporting young people’s well-being.
A Shared Responsibility
Young people develop AI habits in a broader social context. They are influenced not only by their peers’ use of AI, but also by the behaviors they observe in adults.
Educators, parents, and education leaders play a vital role in modeling thoughtful use and leading conversations about responsible engagement with AI. Developers must also be held accountable for designing transparent, explainable tools that do not exploit young people’s vulnerabilities.
AI will continue to shape how young people live, work, and learn. With deliberate guidance from adults and a strong foundation in AI literacy, young people can use these tools in ways that support their well-being.