AI in Mental Health Care: What Employees Expect and Employers Should Ask
New research reveals what workers expect, what science says, and how employers can evaluate AI responsibly.

Artificial intelligence is reshaping how people live, connect, and get support—including when it comes to their mental health. For employers, the question isn’t if AI will play a role in care delivery, but how to ensure it’s safe, equitable, and effective.
In Modern Health’s latest survey of 1,000 Gen Z and Millennial full-time U.S. employees, 81% said they’re interested in AI tools that could help detect early signs of burnout or depression, and nearly as many said they’d welcome personalized recommendations for care and well-being resources.
That interest, however, comes with clear expectations. Employees expressed concern about privacy and data security (87%), poor or inaccurate guidance (84%), and overreliance on technology (86%). In other words, they see the potential—but they’re equally aware of the risks.
“People are open to innovation, but they expect it to be done responsibly, especially when it comes to mental health care,” says Chen Cheng, Modern Health’s Chief Technology Officer. “They want to know that technology is clinically guided, secure, and used to make care more personal rather than less human.”
AI in mental health care isn’t a single tool or experience. When designed within a clinically governed, adaptive care model, it functions as supportive infrastructure—helping care evolve as people’s needs, preferences, and circumstances change.
Within Modern Health’s adaptive care approach, AI can responsibly enhance care in several ways, from surfacing early signs of burnout or depression to personalizing recommendations and supporting timely transitions between forms of care.
But there’s a critical distinction to understand:
“There’s a big difference between general-purpose AI tools built for the public and AI developed within mental health care,” explains Cheng. “At Modern Health, our AI is designed in close partnership with our Clinical Strategy & Research team, with clinical review, governance, and escalation protocols embedded from the start—not added later.”
That distinction separates responsible care technology from tools that may feel helpful but operate without sufficient safeguards. In moments of distress, convenience can draw someone in, but without expert oversight, those interactions risk being inaccurate, ineffective, or unsafe.
Clinically governed AI isn’t only about mitigating risk. It’s also about ensuring and improving quality—using expert input to shape how technology delivers appropriate, effective, and evidence-based support, with humans continuously guiding and reviewing its role in care.
Recent research reinforces both the promise of AI and the importance of clinical governance.
In one such study of a clinically developed tool, outcomes were comparable to those of therapist-supported digital programs, underscoring the level of clinical involvement required to develop and govern such tools responsibly.
At the same time, other research has shown that general-purpose language models not designed for clinical use can struggle to detect risk signals such as suicidal ideation—reinforcing why mental health AI must be built with clear boundaries, escalation pathways, and expert oversight.
“AI isn’t inherently good or bad—it’s how it’s designed and governed that determines its impact,” says Dr. Jessica Watrous, Modern Health’s Chief Clinical Officer. “We embed clinical expertise throughout the development and review of AI-enabled tools so they can support our adaptive care model with a foundation of safety, equity, and measurable value.”
In mental health, adaptive care means providing flexible, personalized support that can readily shift across modalities, including coaching, therapy, digital tools, and resources, along a user’s care journey.
Rather than positioning AI as a standalone form of care, Modern Health approaches AI as enabling infrastructure that helps adaptive care function more seamlessly—supporting continuity, personalization, and timely transitions between human-led modalities.
Within this framework, AI can help support care in practical ways, such as maintaining continuity across a person’s care journey, tailoring recommendations to individual needs and preferences, and supporting timely transitions between coaching, therapy, digital tools, and other resources.
“Responsible AI is built with clinical experts and designed to support—not replace—human care,” says Dr. Watrous. “When used within an adaptive care model, it helps care stay responsive as people’s lives and needs change over time.”
Employers navigating this evolving landscape are interested not just in whether a vendor is using AI, but in how that AI is governed and integrated into care.
Key questions to consider when evaluating AI-enabled mental health solutions across the market include: How is clinical expertise involved in designing and reviewing the AI? How are privacy and data security protected? What escalation pathways route people to human care when risk is detected? And how does the technology support, rather than replace, human-led care?
“For AI to earn trust in mental health care, it has to work in partnership with people,” says Cheng. “That’s where responsible solutions will stand apart.”
Employees are open to AI in mental health care, but only when it’s safe, transparent, and guided by humans. For employers, responsible adoption means prioritizing clinical governance and choosing partners that use AI to strengthen human-led, adaptive care rather than as a shortcut around it.
When embedded thoughtfully, AI can help mental health support become more continuous, responsive, and personalized without losing the empathy, judgment, and trust that only people can provide.
As both employees and employers have made clear, the future of mental health care depends not just on innovation but on integrity.