By: Maggie Mortali, NAMI-NYC CEO, and Jennifer Da Silva, NAMI-NYC Director of Marketing & Communications
Summary: Guidance on the role of artificial intelligence (AI) in mental health remains fragmented and rapidly evolving. Some experts advise against using generative AI tools for emotional support altogether, while others acknowledge that, in the absence of accessible care, individuals are increasingly turning to these tools as a form of informal support. Generative AI platforms such as ChatGPT, Gemini, and Claude are now widely used to ask questions about mental health symptoms, coping strategies, and personal challenges.
At the same time, major health organizations, including the American Psychological Association (APA) and the American Counseling Association (ACA), have emphasized that AI cannot replace licensed behavioral health professionals. Despite these cautions, millions of people are using AI for mental health-related questions, underscoring the need for practical, evidence-informed guidance to reduce harm and promote safe engagement.
This article provides recommendations for individuals who may encounter or use generative AI tools in the context of mental health support.
The Growing Role of AI in Mental Health Contexts
AI is increasingly used within behavioral health systems for functions such as administrative support, clinician training, and early detection of risk patterns. These applications have the potential to improve efficiency and access, but they also raise important ethical, privacy, and clinical safety considerations (American Psychological Association [APA]).
Outside of clinical settings, individuals are independently using AI tools to seek mental health information and emotional support. This trend is occurring alongside a significant unmet need for care. National data indicate that nearly half of U.S. adults with a mental health condition do not receive treatment, due to barriers such as cost, provider shortages, stigma, and system complexity (TIME).
Young people are engaging with AI at notable rates. A recent study published in JAMA Network Open found that approximately 13% of U.S. adolescents and young adults, representing roughly 5.4 million individuals, reported using generative AI tools for mental health advice. Among those users, most (92%) reported perceived benefits such as convenience, anonymity, and immediate availability (McBain et al., 2025).
At the regulatory level, oversight is still developing. The U.S. Food and Drug Administration (FDA) has begun regulating AI-enabled medical devices, and professional organizations and policymakers are actively working to establish clearer standards for safety and use (Olawade, 2024; NAMI, 2025).
Given this rapidly evolving landscape, clear guardrails are essential.
Recommendations for Safe and Responsible Use
AI cannot and will never serve as a replacement for behavioral health treatment. Yet because people continue to use these tools, the following recommendations can help promote safer engagement:
- AI Is Not a Therapist. Use It to Find Support, Not Replace It
Generative AI tools cannot diagnose mental health conditions, provide therapy, or replace individualized clinical care. Mental health diagnoses require comprehensive evaluation by licensed professionals who consider clinical history, symptoms, functioning, cultural context, and risk factors (American Counseling Association [ACA]).
Individuals should avoid using AI to answer diagnostic questions such as “Do I have depression?” or “Does my friend have bipolar disorder?” These tools lack access to clinical history and cannot provide responsible diagnostic determinations.
Instead, AI may be used as a starting point to identify credible mental health resources, such as helplines, support groups, or information about treatment options. Ultimately, connecting with trained professionals and peer support services remains essential for meaningful and safe care.
- AI Should Never Be Used in Place of Crisis Support
AI tools are not equipped to respond safely or effectively during mental health crises, including suicidal thoughts, severe emotional distress, or psychosis. Emerging clinical reports suggest that individuals experiencing serious psychiatric symptoms may misinterpret or over-rely on AI responses, which can contribute to confusion, distress, or worsening symptoms (Valentino-DeVries & Hill, 2026).
In crisis situations, immediate human support is critical. Individuals should contact trained responders through established crisis services, such as calling or texting the 988 Suicide & Crisis Lifeline, reaching out to a licensed therapist or healthcare provider, going to the nearest emergency department, or contacting a trusted friend, family member, or peer support service. Human connection, clinical expertise, and real-time risk assessment are essential components of crisis response and cannot be replicated by AI tools.
- AI Responses Should Always Be Evaluated Critically
Generative AI systems are designed to produce fluent, conversational responses, which can create the impression of authority or emotional understanding. However, these systems generate responses based on patterns in data, not lived experience, clinical judgment, or ethical responsibility.
Research shows that generative AI systems tend to produce affirming and agreeable responses, even when users express distorted thinking or inaccurate assumptions (Harvard Gazette; Columbia Public Good Initiative). This tendency can reinforce existing beliefs rather than challenge harmful thinking patterns in the way trained clinicians are taught to do.
Additionally, AI systems may fabricate plausible-sounding but inaccurate information, a phenomenon known as “hallucination,” or present outdated information, often without clearly signaling uncertainty. Users should approach AI responses as informational, not authoritative, and verify important information with trusted medical or mental health professionals.
- AI Cannot Replace Human Relationships
Human relationships play a central role in mental health, resilience, and recovery. Unlike AI systems, human relationships involve accountability, empathy grounded in shared experience, and reciprocal care.
Psychotherapist Esther Perel notes that AI relationships eliminate key elements that define meaningful human connection: uncertainty, negotiation, accountability, and mutual growth. While AI may simulate conversation, it cannot provide authentic emotional reciprocity or responsibility (Perel, 2026).
Excessive reliance on AI for emotional support may also reduce motivation to seek human connection, potentially increasing isolation rather than alleviating it. Peer support groups, helplines, and community-based programs remain among the most effective and protective mental health interventions.
- Establish Healthy Boundaries Around AI Use
Like social media and other digital technologies, excessive reliance on AI may be associated with poorer mental health outcomes, particularly when it replaces real-world engagement, problem-solving, and human connection (Pedersen et al., 2022). While AI can provide convenient access to information, it should be used intentionally and in moderation.
Maintaining healthy boundaries includes using AI primarily for informational purposes rather than emotional support, avoiding reliance on AI during periods of vulnerability or distress, prioritizing relationships with trusted individuals and mental health professionals, and being mindful of overall screen time. These practices help ensure that AI remains a supplemental tool rather than a substitute for meaningful human connection and evidence-based care.
Conclusion
AI tools are rapidly becoming part of the mental health landscape. Their accessibility and immediacy make them appealing, particularly for individuals who face barriers to traditional care. However, these tools cannot replace trained mental health professionals, peer support, or authentic human relationships. As AI continues to evolve, it is critical to promote informed, responsible use while expanding access to evidence-based mental health services.
For peer support and mental health resources, individuals can contact the NAMI-NYC Helpline at naminyc.org/helpline. Human connection remains the most effective and protective mental health intervention.
Sources:
Delaney, B. (November 2025). Somebody to love: should AI relationships stay taboo or will they become the intelligent choice? The Guardian. https://www.theguardian.com/commentisfree/2025/nov/15/somebody-to-love-should-ai-relationships-stay-taboo-or-will-they-become-the-intelligent-choice.
Experts Caution Against Using AI Chatbots for Emotional Support. (December 2025). Columbia Public Good Initiative. https://www.tc.columbia.edu/articles/2025/december/experts-caution-against-using-ai-chatbots-for-emotional-support/.
Fewer Than Half of Companies Have Policies Governing Employee Use of Generative AI. (October 2024). Corporate Compliance Insights. https://www.corporatecomplianceinsights.com/news-roundup-october-3-2024/.
Finklestein, J. and Rizvi, S. (January 2026). Therapy Should Be Hard. That’s Why AI Can’t Replace It. TIME. https://time.com/7343213/ai-mental-health-therapy-risks/.
Gaines, L. (October 2025). 1 in 5 high schoolers has had a romantic AI relationship or knows someone who has. NPR. https://www.npr.org/2025/10/08/nx-s1-5561981/ai-students-schools-teachers.
McBain et al. (November 2025). Use of Generative AI for Mental Health Advice Among US Adolescents and Young Adults. JAMA Network Open. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2841067.
NAMI Takes the Lead to Push for Clarity and Safety in AI Mental Health Tools. (December 2025). NAMI. https://www.nami.org/press-releases/nami-takes-the-lead-to-push-for-clarity-and-safety-in-ai-mental-health-tools/.
Olawade, D. et al. (August 2024) Enhancing mental health with artificial intelligence: current trends and future prospects. Journal of Medicine, Surgery, and Public Health. Vol. 3. https://www.sciencedirect.com/science/article/pii/S2949916X24000525.
Pedersen, J. et al. (October 2022). Effects of limiting digital screen use on well-being, mood, and biomarkers of stress in adults. Mental Health Research. https://www.nature.com/articles/s44184-022-00015-6.
Perel, E. (January 2026). Esther Perel on the Falsehoods of a Frictionless Relationship. New York Times. https://www.nytimes.com/2026/01/28/opinion/esther-perel-ai-chatbots-romance.html.
Powell, A. (January 2026). Is a chatbot therapist better than nothing? The Harvard Gazette. https://news.harvard.edu/gazette/story/2026/01/is-a-chatbot-therapist-better-than-nothing/.
Recommendations for client use and caution of artificial intelligence. American Counseling Association. https://www.counseling.org/resources/research-reports/artificial-intelligence-counseling/recommendations-for-client-use-and-caution-of-artificial-intelligence.
Reed, J. (July 2025). AI Is Taking Over Your Search Engine. Here’s What It’s Doing and Why It Matters. CNET. https://www.cnet.com/tech/services-and-software/ai-is-taking-over-your-search-engine-heres-what-its-doing-and-why-it-matters/.
Santhanam, L. (August 2025). Using an AI chatbot for therapy or health advice? Experts want you to know these 4 things. PBS. https://www.pbs.org/newshour/health/using-an-ai-chatbot-for-therapy-or-health-advice-experts-want-you-to-know-these-4-things.
UNESCO survey: Two-thirds of higher education institutions have or are developing guidance on AI use. (September 2025). UNESCO. https://www.unesco.org/en/articles/unesco-survey-two-thirds-higher-education-institutions-have-or-are-developing-guidance-ai-use.
Use of generative AI chatbots and wellness applications for mental health. An APA health advisory. American Psychological Association. https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-chatbots-wellness-apps.
Valentino-DeVries, J. and Hill, Kashmir. (January 2026) How Bad Are A.I. Delusions? We Asked People Treating Them. New York Times. https://www.nytimes.com/2026/01/26/us/chatgpt-delusions-psychosis.html.
Williams, R. (September 2025). It’s surprisingly easy to stumble into a relationship with an AI chatbot. MIT Technology Review. https://www.technologyreview.com/2025/09/24/1123915/relationship-ai-without-seeking-it/.
