The Promise and Limits of AI in Mental Healthcare


Artificial intelligence (AI) today permeates almost every sphere of modern life, from finance and defence to education and healthcare. While its recent explosion has captured global attention, the roots of AI stretch back several decades. The intellectual groundwork was laid in 1950, when British mathematician Alan Turing posed a revolutionary question in his paper “Computing Machinery and Intelligence”: Can machines think? A few years later, in 1955, computer scientist John McCarthy formally coined the term “artificial intelligence,” giving a name—and direction—to a field that would steadily evolve from theory to application. By the 1960s and 1970s, AI had begun its first practical experiments in healthcare through early expert systems designed to assist clinical decision-making, marking the discipline’s initial steps from laboratories into real-world use.

The formal birth of artificial intelligence is widely traced to 1956, when the term was officially introduced at the Dartmouth Conference, an event that established AI as a distinct field of scientific inquiry. From there, progress came in waves—sometimes slow, sometimes dramatic. Landmark moments such as IBM’s Deep Blue defeating world chess champion Garry Kasparov in 1997, and the rise of machine learning techniques in the 1990s, signalled that machines were no longer limited to rigid programming but could learn, adapt, and outperform humans in narrowly defined tasks.

Today, humanity finds itself living in an age once imagined only in speculative thought. The world shaped by artificial intelligence bears striking resemblance to futures envisioned by writers and thinkers such as Jules Verne, Isaac Asimov, Arthur C. Clarke, and Carl Sagan—where machines extend human capability and challenge our understanding of intelligence itself.

At its core, artificial intelligence is a branch of computer science dedicated to building systems capable of performing tasks that typically require human intelligence. These include learning from experience, reasoning, problem-solving, perception, and decision-making. By analysing vast volumes of data to identify patterns, modern AI systems can act autonomously—or with minimal human intervention—reshaping how societies work, decide, and imagine the future.

The integration of artificial intelligence into mental healthcare has evolved steadily—from early theoretical explorations in the mid-20th century to a wide range of contemporary applications aimed at improving access, efficiency, and diagnostic support. Today, AI is no longer a futuristic add-on; it has become an essential operational tool in mental healthcare systems worldwide.

AI contributes to mental health care in multiple ways. It strengthens early detection of mental health conditions, offers accessible 24/7 support, and enables personalised treatment plans informed by real-time data. By analysing speech patterns, written text, and behavioural cues, AI systems can identify warning signs, suggest coping strategies, and support self-management. Importantly, these tools also help reduce long-standing barriers such as stigma, cost, and limited availability of trained professionals.

As mental health professionals, we are acutely aware that stigma remains one of the greatest obstacles preventing individuals from seeking timely care. Many suffer in silence, deterred by fear of judgement or social consequences. Artificial intelligence holds the potential to soften—and in some cases eliminate—this barrier, offering discreet, non-judgemental pathways to support and fostering a more inclusive and supportive environment for those in need.

Another persistent challenge in mental healthcare, particularly in resource-limited settings, is the inability to maintain accurate and comprehensive patient records. During my 16 years as a medical doctor working in various government hospitals across Sri Lanka, I repeatedly encountered the absence of robust data-management systems. Patient histories, treatment progress, and follow-up information were often fragmented or incomplete, forcing clinicians to depend on paper-based records that were inefficient, vulnerable to loss, and prone to error. Looking back, one cannot help but reflect on how transformative AI-driven data systems could have been—both for clinicians striving to deliver continuity of care and for patients whose outcomes depend on it.

The integration of artificial intelligence in mental health care enhances speed, precision, and overall effectiveness. By utilizing electronic health records, healthcare providers can prioritize individuals at high risk, enabling early detection of conditions such as depression, psychosis, and suicidal ideation.

AI tools help predict patients' behaviour patterns and the risks associated with them, including potential suicide, self-harm, or homicidal tendencies. Here, I recall a particular case. The patient was referred to me by Dr. Neil Fernando for a psychological assessment: a combatant with a traumatic brain injury and drastic personality changes. We found that he had unstable moods and posed a potential risk of violence, so we advised the authorities to place him under observation and to refrain from issuing him any weapons. These recommendations were not taken into consideration. Within eight months, we heard that he had committed several murders; the police eventually arrested him, and while in custody he took his own life in the remand prison. Had we been equipped with AI-based risk-prediction tools, we could have presented the authorities with stronger evidence and perhaps averted a major tragedy.

Some AI systems have been reported to forecast declines in mental health up to a year in advance with accuracy rates as high as 84%. These systems can also offer personalized treatment recommendations while ensuring accessibility and confidentiality for those hesitant to seek traditional in-person care because of stigma.

AI-generated image for illustrative purposes.

As noted earlier, stigma remains one of the most damaging forces in mental healthcare. Fear of judgment often leads to shame, isolation, and discrimination, discouraging individuals from seeking help when they need it most. The consequences are severe: delayed intervention, poor adherence to treatment, worsening symptoms, and ultimately poorer health outcomes. Artificial intelligence offers a way to dismantle this barrier by enabling anonymous, non-judgemental, and easily accessible avenues for support—allowing individuals to seek help without fear or exposure.

I can speak to the promise of this technology from personal experience. Today, I am part of an AI-assisted healthcare monitoring system. My family physician in Toronto uses AI-driven tools to generate more precise and insightful assessments of my health. With secure access to my complete medical history and blood-test data, he is able to identify subtle trends, anticipate emerging risks, and alert me early—often before symptoms become clinically evident.

This is not medicine replacing the human touch; it is medicine strengthened by intelligence. When used responsibly, artificial intelligence empowers clinicians, reassures patients, and shifts healthcare from reactive treatment to proactive prevention—an evolution that mental healthcare, in particular, can no longer afford to ignore.

AI enhances our ability to use psychometrics with greater effectiveness and efficiency. It enables high-precision screening tools, particularly for conditions such as depression, PTSD, ADHD, and schizophrenia, with reported accuracy rates of up to 89%. We know that racial and gender biases in mental health care lead to misdiagnosis, under-treatment, and mistrust. AI can help reduce these biases by standardizing diagnostic processes, analyzing large, diverse datasets to identify and correct disparities, and providing a neutral, non-judgmental digital interface for initial screenings.

AI-driven tele-therapy and mobile applications help dismantle geographical and logistical barriers, allowing mental health services to manage millions of interactions simultaneously. Triage tools powered by AI have been shown to cut wait times by as much as 50% by effectively prioritizing high-risk patients for immediate clinical intervention.

AI has greatly enhanced the efficacy of Virtual Reality (VR) therapy by establishing secure and controlled settings for diverse therapeutic methods. AI-driven VR supports exposure therapy for various phobias, integrates Eye Movement Desensitization and Reprocessing (EMDR) to enhance trauma processing, and offers modules for Cognitive Behavioural Therapy (CBT).

AI-based mindfulness and stress-management apps reduce stress by offering guided practices (breathing, meditation, body scans) that build present-moment awareness, helping users observe thoughts non-judgmentally and shift from reacting to responding. They improve emotional regulation, increase self-awareness of triggers, and foster self-compassion, making it easier to manage challenging situations, sustain focus, and reach calmer states, thereby lowering cortisol and strengthening overall mental resilience. AI can also support aspects of spiritual practice and personal growth.

Despite these advances, many individuals fear that artificial intelligence will replace the human element. This notion is not entirely accurate: AI serves as a co-pilot, with humans retaining leadership. Rather than replacing people, AI is designed to enhance their abilities and support their decision-making. While humans are prone to errors and blind spots in their work, AI can act as a corrective measure, positioning itself as a tool for empowerment. Fears of a dystopian "rise of the machines" may lead us to look for a saviour figure like John Connor, but it is essential to recognize that AI is fundamentally designed to assist, not to dominate; its role is to augment the human factor.

While the benefits of AI are numerous, it is important to recognize that it is not a magic bullet. AI comes with its own set of drawbacks and limitations. Therefore, I want to clarify that I do not idolize AI. It is not a divine or superior entity.

The use of AI in mental health care presents several significant downsides. One major concern is the absence of genuine human empathy: while AI can mimic empathetic responses, it cannot grasp emotional cues or establish the therapeutic rapport that human clinicians naturally develop, which is essential for effective therapy.

Today, many individuals rely on AI-driven virtual assistants like Siri and Alexa for their convenience. But Siri and Alexa cannot offer a human touch. Siri and Alexa do not love you.

Here, I remember an incident from February 2006 in Philadelphia. I was on my way to California when my flight was cancelled due to a snowstorm; the blizzard had grounded all the airplanes. I had to find a way to get to LA and was looking for possible flight options. When I called United Airlines, a young female voice answered. I explained my dire situation, and she gave me several options. While we were talking, however, I realized that I was speaking not to a human but to a robotic machine, and I was disappointed. I wanted a human connection. Despite the heavy snowfall, I went to the Philadelphia airport to seek human assistance. This illustrates how deeply we crave human connection.

In the realm of mental health, the significance of emotional connection and trust is imperative. However, artificial intelligence lacks the capacity for empathy, compassion, and moral responsibility, which are crucial elements in fostering genuine human relationships.

Additionally, there are safety issues associated with AI-powered software that simulates human conversation, as unregulated usage can unintentionally reinforce harmful thoughts or worsen symptoms, particularly in vulnerable populations.

Privacy and data security also pose critical challenges: mental health data is highly sensitive, and the reliance on extensive personal information raises ethical concerns about misuse and breaches. For instance, former Toronto Mayor Rob Ford's health records were breached in 2014, when staff at multiple hospitals, including Mount Sinai, inappropriately accessed his confidential medical information while he was being treated for cancer.

Furthermore, algorithmic bias is a risk, as AI models trained on non-representative data may produce biased outcomes, perpetuating inequalities for marginalized groups. Algorithms trained on Western data may fail to recognize cultural variations in symptom expression. For example, a model might flag outward sadness as the primary indicator for depression while missing "somatic" expressions (like fatigue or pain) more common in non-Western cultures.

The unregulated nature of many AI tools means they often lack clinical validation, leading to potentially inaccurate or unsafe advice. Moreover, AI is ill-equipped to handle critical emergencies, such as suicidal ideation, where immediate human intervention is vital. In response to these issues, some jurisdictions, such as Illinois, have begun to restrict AI use in mental health therapy, emphasizing the need for professional oversight.

There were some instances where AI failed to recognize complex and serious mental health situations. AI cannot intervene in real time, and AI cannot be held morally or legally accountable like humans. AI cannot replace trained professionals. AI can support mental health services, but it cannot replace human judgment, empathy, or responsibility.

The use of AI in the mental health field has its limitations; however, completely discarding it in favour of traditional approaches is not a viable option. We cannot "throw the baby out with the bathwater" and return to the old methods. Embracing a balanced integration of AI and conventional approaches may yield more effective outcomes for mental health care.

Artificial intelligence is still a developing tool: its glitches must be identified and corrected, and the technology refined over time. AI is expected to play an increasingly significant role in the field of mental health, where it will serve as an essential and valuable support tool. The future of AI in mental healthcare lies in transformative advances that enable more personalised, pre-emptive, and accessible care.

