AI Therapists Boost Mental Health Support, but Human Empathy Remains Vital

The use of chatbots in mental health support is gaining traction, but concerns remain about their limits as a substitute for genuine human empathy. The issue came to light when the U.S. National Eating Disorders Association (NEDA) replaced its human-staffed helpline with a chatbot named Tessa, a move that resulted in the dismissal of five workers. The bot then provided harmful advice to individuals struggling with mental illness, as reported by The Economic Times. NEDA acknowledged that the bot had produced some positive results, but it is investigating the advice it gave and weighing its next steps.

Mental health chatbots powered by artificial intelligence (AI) are growing in popularity worldwide, including in South Africa, where mental health resources are limited. Despite concerns about data privacy and ethical counselling, more than 40 mental health chatbots are already in use. Individuals such as Jonah, who is coping with obsessive-compulsive disorder, find chatbots like ChatGPT a useful additional support system alongside regular therapy sessions.

The surge in mental health tech startups has attracted substantial venture capital funding, particularly since the COVID-19 pandemic brought mental health into the spotlight. The pandemic's demand for remote medical assistance further underscored the value of these digital tools.

Access to mental health support is a growing challenge worldwide, with anxiety and depression affecting an estimated one billion people globally. The pandemic has made the problem worse, driving a reported 27% rise in mental health conditions. Affordability remains a significant barrier to treatment; while AI therapy can widen access, potential disparities in healthcare must still be addressed.

Privacy concerns also loom large. A study by the Mozilla Foundation highlighted privacy risks in mental health and prayer apps, many of which failed to meet security standards and could share user data with third parties. The possibility that insurance companies, data brokers, and social media platforms could obtain such sensitive data raises serious concerns about data breaches and privacy violations.

In response to these challenges, the South African mental health app Panda is developing an AI-generated “digital companion” that will interact with users, suggest treatments, and, with the user's consent, share insights with traditional therapists. Strong encryption and privacy measures are in place to protect user data.

Regulations and voluntary codes of conduct are being developed to safeguard users and ensure ethical practice in the AI industry, and lawmakers worldwide are pushing for comprehensive legislation. Yet whatever benefits chatbots bring, genuine human empathy remains an irreplaceable part of mental health support. As former NEDA counsellor Nicole Doyle emphasised, technology should be a tool that complements human interaction, not a substitute for it.
