Re-imagining what is possible: The potential for AI in mental health
The long-term plan for the NHS will be the biggest domestic policy question for the rest of this year. The funding settlement offers a huge opportunity to transform the traditionally under-prioritised service of mental health. In her speech on the long-term plan, the Prime Minister called for “true parity of care between mental and physical health”.
The need for a transformed mental health system is growing. Approximately one in four people in the UK experiences a mental health problem each year, and this number is set to rise. Growing demand is straining the NHS and other public services, such as welfare and education, and is significantly affecting workplace productivity.
The Secretary of State for Health and Social Care, Rt Hon Matthew Hancock MP, has recently recognised the major role technology will play in NHS reform, given that it can achieve “the holy trinity of improving outcomes, helping clinicians and saving money”. Artificial Intelligence (AI) has the potential to re-imagine how mental health care is delivered. It can improve service delivery by providing personalised treatment based on an individual’s specific condition, and it can improve clinical pathways by identifying early signs and patterns affecting a person’s mental health. It can also underpin patient-facing applications that support those experiencing mental health problems, broadening access to therapy and tailoring interactions to an individual’s situation.
There are, of course, ethical questions that still need answering about the increased use of AI in mental health. In the case of patient-facing applications, it is unclear what effect AI will have on the relationship between patients and healthcare practitioners: could the move away from face-to-face contact adversely affect the mental health of someone who particularly values their relationship with a clinician? AI is also trained primarily on ‘measurable data’, yet in mental health non-quantifiable information (facial expressions, tone of voice) may be equally important. What will it mean for patient care if we come to rely more heavily on measurable data alone?
Finally, clinicians will increasingly use AI as a decision-support tool. What happens if there are algorithmic errors? Who is accountable for a misdiagnosis based on machine advice?
The roundtable was held under the Chatham House Rule.