Canadians are struggling to access mental health services, despite clear evidence of the increasing prevalence and severity of mental health issues. Some people have started seeking alternative mental health care – namely chatbots driven by large language models (LLMs) – because of their low cost and minimal barriers to access.
Despite their accessibility, these “personable” chatbots can contribute to inequity and harm marginalized groups.
In the rush to digitize mental healthcare with LLMs, many critical steps have been skipped, putting health equity at risk.
LLMs are designed to actively build trust with users, not deliver better care.
Emerging research has found that some patients report feeling more comfortable speaking with chatbots than with physicians when seeking medical advice for sensitive issues like mental health. They shared that access to chatbots – which they assumed to be unbiased and judgment-free – reduced their anxiety and facilitated earlier engagement with medical care.
The danger is that companies have trained LLMs to be highly agreeable in order to entice people to generate more data. These “people-pleasing” tendencies encourage longer and more frequent use, which supplies more data, improves the models’ performance and increases their commercial value – ultimately turning people into customers and data sources rather than patients.
This can be dangerous for people seeking mental health support. For instance, news reports have raised concerns about AI chatbots potentially contributing to manic episodes, delusional thinking and suicide. For people who are especially vulnerable and in need of someone to talk to, using LLMs can become addictive and potentially cause additional harm rather than support their mental health.
Evidence must remain the foundation of mental healthcare.
Unlike therapies such as cognitive behavioural therapy and prescription medications, many mental health AI tools have not been validated through clinical trials. They are prone to providing misinformation and to reproducing cognitive biases that discriminate against marginalized groups. For example, research on AI-mediated psychiatric diagnosis and treatment found that LLMs were more likely to recommend lower-quality treatment plans for cases involving racialized patients.
Yet they continue to be branded as therapeutic tools meant to help those who cannot access mental healthcare, even though evidence shows that such access is particularly challenging for equity-seeking populations.
In other words, the very groups that already face barriers end up receiving worse services built on insufficient evidence, while their personal health information is still shared with profit-making entities.
There may still be ways for LLMs to enhance mental healthcare, but their use must be intentional and it must centre equity.
At a minimum, only LLMs that have been specifically trained and validated for mental health tasks – not general-purpose LLMs like ChatGPT – should be used, and only with oversight from mental health professionals. Each tool should also be assessed for equity impacts differently depending on whether it is being used to teach about mental health, to evaluate someone’s mental health, or to support treatment.
Because mental health information is highly sensitive, there must be clear guidelines around its collection, use and access, as well as safeguards for privacy. Equity-seeking populations must be included in the decision-making process when these provisions are created, to ensure they accurately reflect community concerns – for example, by applying the Engagement, Governance, Access and Protections (EGAP) Framework.
Bringing LLMs into mental health must begin with equity, not efficiency.
Even though these tools are promoted for their accessibility, the way mental health LLMs are currently being used can actually deepen inequity. Equity has to be part of every stage of building an AI model, from defining the problem to designing, developing and implementing it.
If not, we risk repeating the same patterns that made mental healthcare so hard to access in the first place. Being skeptical of AI tools in mental healthcare is not overreacting; it’s a commitment to being grounded in evidence.
As more mental health LLMs appear, it is critical to look closely at the evidence and at whether equity is truly being prioritized.