AI and Adolescent Well-Being: New APA Health Advisory


Many AI chatbot platforms are designed to simulate human relationships and are marketed as companions or experts. The APA urges safeguards to mitigate harm because 1) adolescents are less likely to question the accuracy and intent of the chatbot, and 2) adolescents' relationships with AI may displace or interfere with the development of healthy, real-world relationships. The APA recommends:

  • Prioritizing the development of features that prevent exploitation, manipulation, and erosion of real-world relationships. For example, providing adolescents with regular reminders that they are interacting with a bot, or offering resources and suggestions that encourage human interaction.

  • Developing regulations to ensure that AI systems designed for adolescents protect mental and emotional health.

  • Parents, caregivers, and educators should discuss AI literacy with adolescents through programs that a) explain that not all AI-generated content is accurate, b) discuss the intent of some AI bots, and c) educate about indicators of misinformation.

AI for adults should differ from AI for adolescents

Adolescents are a particularly vulnerable group, and as such, AI programs designed for adolescents should include stringent safeguards. The APA recommends:

  • Age-appropriate defaults

  • Transparency and explainability

  • Reduced persuasive design

  • Human oversight and support

  • Rigorous testing

Encourage uses of AI that can promote healthy development

AI can assist in brainstorming, creating, organizing, summarizing, and synthesizing information (3). Additionally, AI can provide scaffolding and personalized feedback (4). All of these features can enhance learning and development when used appropriately, that is, when AI encourages further elaboration and exploration of a topic rather than shortcutting it.

“To maximize AI’s benefits, students should actively question and challenge AI-generated content and use AI tools to supplement rather than replace existing strategies and pedagogical approaches.” (1)

As I’ve written about before, I have many doubts and criticisms of the wholesale adoption of AI. One of the aspects that concerns me most is the potential to bypass meaningful and beneficial challenge. For example, when I discuss note-taking strategies with students, I highlight that even the decision about what to take notes on is one of the first steps in an active learning process. Having AI generate a summary of notes deprives you of that initial learning opportunity. However, real-world time constraints and use cases may mean that the initial learning opportunity matters less in some situations. I’m in favor of the APA’s guidelines here because they call for a conversation about the pros and cons, so that educators and learners can make that choice for themselves rather than assuming AI is either all “good” or all “bad”.

Limit access to and engagement with harmful and inaccurate content

Exposure to harmful content is associated with a number of poor mental health outcomes, such as anxiety and depression. The APA recommends:

  • Developing robust protections for AI systems used by adolescents. This includes protections against content that is inappropriate, dangerous, illegal, biased and/or discriminatory, or that may trigger imitative behavior among vulnerable youth.

  • User reporting and feedback systems to customize content restrictions

  • Educational resources to help adolescents and caregivers recognize and avoid harmful content

  • Collaboration with mental health professionals, educators, and psychologists

Accuracy of health information is especially important

Adolescents often seek out health information online (5), and misinformation or incomplete information can lead to harmful behaviors and misdiagnosis, among other negative outcomes. The APA recommends:

  • AI systems that provide health information should ensure the accuracy of the information and/or provide explicit and repeated warnings that there may be inaccuracies.

  • AI systems should provide clear and prominent disclaimers that AI-generated information is not a replacement or substitute for professional health advice.

  • AI systems should provide resources and reminders to contact an educator, school counselor, pediatrician, or other appropriate expert or authority to seek real-world help.

  • Parents, caregivers, and educators should remind adolescents that health information provided by AI may not be accurate and may potentially be harmful.

I want to note that these recommendations come after the APA met with the Federal Trade Commission in February 2025 to discuss the impersonation of mental health professionals by chatbots. There are at least two lawsuits against an AI company after teenagers interacted with AI chatbots claiming to be licensed therapists. In one case, a teenager tragically died by suicide after prolonged interaction with the chatbot.