In just a few short years, Artificial Intelligence (AI) tools have become established as a very real option for employers looking to promote and support better mental health among staff. In the first of a two-part series of articles, we look at the rise of AI coaches and chatbots, their benefits and limitations, and how they fit into the wider landscape of worker wellbeing.

Far from being a Covid-induced blip on the radar, workplace mental health has become one of the defining challenges for employers in modern times.

Conditions such as anxiety and depression are now the primary cause of long-term absence for workers in the UK, where ill health results in almost two weeks being lost per worker per year. Overall, this is estimated to equate to an annual cost to employers of £51bn.

While these figures underline that there is no shortage of demand for mental health services, lengthy NHS waiting lists continue to make it difficult for employees struggling with these issues to access the timely care they need. This has a clear negative knock-on effect on both an individual’s personal wellbeing and their ability to perform at work.

Where it is an option, affected employees are increasingly turning to the private healthcare services offered via employee benefits packages to provide rapid-response help with mental health matters. This help can take many forms – Employee Assistance Programmes (EAPs) are, after all, well established as a source of guidance and advice. But in recent years, one technology-driven solution in particular has become more prevalent in workplace wellbeing toolkits: artificial intelligence (AI).

First line of support

In the form of AI coaches and chatbot ‘therapists’, AI is being used to facilitate immediate, private connections between staff and mental-health support on a 24/7 basis.

Examples include Nova from Unmind, which drinks giant Diageo has rolled out globally to all employees; Ebb, an ‘AI companion’ from popular meditation app Headspace; and personalised care platform Wysa.

Such tools offer instant access to a range of confidential services, including guidance on coping techniques, exercises based on cognitive behavioural therapy (CBT), and support for activities such as journaling.

Access is often via a standalone app, although in some cases employees are given seamless access to AI therapy tools within the flow of their work. Platforms such as Unmind and apps such as Simpatico, for example, integrate directly with Microsoft Teams, productivity tools such as Slack, and dedicated HR information systems (HRIS), reducing barriers to help.

Understandably for an emerging technology, there is not yet a strong evidence base for the effectiveness of AI therapy. However, some studies have shown promising results for people experiencing major depressive disorder or generalised anxiety disorder, and for those at high risk of feeding and eating disorders.

The authors of these and similar studies are clear that further research is needed in this area. Even the creators of the tools are keen to emphasise that they should not be regarded as a substitute for the care provided by human therapists, and many incorporate mechanisms to identify those at risk and direct them to appropriate clinical support.

The limitations of algorithms

It is important to note that bespoke AI-powered mental-health tools represent a different proposition from general-purpose AI tools such as ChatGPT, since they are built with a specific purpose in mind. They do, however, share some of the limitations that are common to all forms of artificial intelligence.

Generally speaking, responses generated by AI are underpinned by a large language model (LLM) and not any form of real ‘intelligence’. LLMs are trained on huge amounts of text data, enabling the model to learn patterns and relationships between words. When prompted by a user, the AI generates a response by predicting the next most likely word in a sequence.

Inherently, this means that chatbots cannot truly interpret complex emotional states or adapt dynamically to trauma. They might appear empathetic, but they are effectively simulating a therapeutic relationship.

That’s not to say such tools are ineffective – many users cite the positive impact of their relationship with an AI companion. It does, however, introduce the risk that individuals become overly dependent on this outlet, or overly trusting of AI responses that largely reinforce their own perspective.

Furthermore, where the data used to train a model predominantly reflects majority populations, there is also a risk that its responses are biased against under-represented groups, potentially perpetuating social inequalities.

Issues around privacy and trust also continue to hang over AI tools. And in the context of processing sensitive personal reflections, some employees are likely to have concerns about the robustness of the underlying governance measures designed to prevent data misuse or breaches.

Such issues are well understood by AI developers, many of whom are at pains to underline how their AI therapy agents have been carefully engineered and fine-tuned to safeguard the privacy, security and safety of users.

Another layer of support

For employers, the addition of AI wellbeing tools presents an opportunity to provide an additional, accessible and scalable layer of on-demand help to staff, complementing benefits offerings such as EAPs, private medical insurance and other initiatives.

At another level, such tools have the benefit of generating data on wellbeing trends without breaching individual privacy. This valuable insight can help HR teams adapt support mechanisms accordingly, potentially shifting to a more proactive wellbeing strategy rather than reactively addressing health-related problems as they arise.

For teams, the instant-access nature of digital tools can be a major benefit in the face of weeks-long waiting lists for counselling services, allowing confidential wellbeing conversations to be initiated whenever and wherever they are needed. By lowering the barrier to entry and reducing the associated stigma, these tools have real potential to encourage more people to reach out for help.

In summary, then, AI tools can be seen as exciting and potentially powerful aids for proactive employers looking to address the rising tide of mental health problems among the workforce. It is also clear, however, that technology cannot achieve this objective in isolation. Artificial intelligence must sit within a wider company culture of positive wellbeing and form part of an ecosystem of benefits in which human support remains central if it is to make a real difference to this vitally important issue.

The information contained within this communication does not constitute financial advice and is provided for general information purposes only. Links to related sites have been provided for information only. Their presence on this blog does not mean that the firm endorses any of the information, products or views published on these sites. No warranty, whether express or implied, is given in relation to such information. Vintage Health or any of its associated representatives shall not be liable for any technical, editorial, typographical or other errors or omissions within the content of this communication.