AI is Out-Empathising Doctors because Modern Medicine Has Lost Its Soul, By Mrs Vera West and Mrs (Dr) Abigail Knight (Florida)

Artificial intelligence is making headlines for mastering chess, art, and medical diagnosis. Now, studies suggest it may be beating human doctors at something we long considered uniquely human: empathy.

A recent review in the British Medical Bulletin analysed 15 studies comparing AI-written responses with those from healthcare professionals. Researchers rated the responses for empathy and found AI "more empathic" in 13 out of 15 studies — a startling 87% of the time.

Before we surrender the human touch of medicine to robots, it's worth asking why this is happening. The simple explanation is that our healthcare system has turned doctors into machines. But the deeper issue is more troubling: doctors are not just fatigued by paperwork, many are structurally constrained by the interests of Big Pharma and the wider medical-industrial complex. In some ways, AI may already be better at empathy than humans, because humans are, in effect, compromised.

As Jeremy Howick has pointed out, empathy in medicine has declined over time. Chronic stress, administrative burden, and rigid protocols leave little room for genuine human connection. Doctors spend a third of their time on electronic health records alone. They follow clinical pathways designed to standardise care, not to respond to the unique concerns of each patient.

Burnout exacerbates the problem. Globally, at least a third of general practitioners report burnout, and some specialties see rates exceeding 60%. Emotional reserves are finite; sustained stress depletes the capacity for compassion. In that context, it's no wonder AI, untroubled by fatigue, paperwork, or emotional depletion, seems more empathic on paper.

But empathy is only part of the story. Another, more insidious factor has been largely overlooked: the systemic capture of healthcare by commercial interests, above all the dominance of Big Pharma.

Doctors today do not operate in a neutral vacuum. Their prescribing patterns, clinical decisions, and continuing education are influenced by pharmaceutical companies, medical device manufacturers, and insurance structures. Research funding, drug approval pathways, and even "best practices" are heavily shaped by industry incentives.

This creates a subtle but pervasive conflict: doctors are expected to provide patient-centred care while simultaneously operating within a system that rewards profits over outcomes. The tension erodes professional independence, constrains treatment options, and in many cases limits the holistic, personalised care that fosters real empathy.

Take nutrition and preventative health, for example. Evidence shows that lifestyle interventions — diet, exercise, environmental adjustments — often produce better long-term outcomes than many pharmaceutical solutions. Yet these approaches are undervalued in mainstream practice, largely because they are non-patentable and unprofitable. A doctor trained in a pharma-centric system may nod politely about nutrition, but the structure of incentives discourages such approaches from becoming central to care.

AI's current empathy advantage is partly superficial: it shines in text-based interactions, where tone, body language, and context are stripped away. A machine can craft the perfect empathetic response without distraction, fatigue, or commercial bias.

In some ways, this makes AI less conflicted than a human doctor. It has no financial stake in which treatment is prescribed, no pressure to adhere to industry-favoured protocols, no bias toward profit-driven interventions. AI may "listen" better than many doctors precisely because it doesn't have to play the game of modern healthcare.

Yet this is not a panacea. AI is only as unbiased as its data. If training datasets are dominated by pharmaceutical research and conventional treatment guidelines, algorithmic recommendations could reinforce the very biases that compromise human doctors. What looks like empathy could mask obedience to the same commercial interests that limit holistic care.

The lesson isn't that AI should replace doctors. It's that medicine has lost its moral and operational independence, and AI highlights that failure. The real opportunity lies in reclaiming what humans do best, and letting technology handle the rest. Four reforms point the way:

1. Re-centre empathy in medical training: Empathic communication must be a core focus, not a short module. Doctors need tools, mentorship, and curricula that make human connection central, not optional.

2. Reduce administrative burdens: AI can and should handle electronic health records, scheduling, and other bureaucratic work. This frees physicians to spend more time listening, observing, and connecting with patients.

3. Insulate clinical decision-making from profit motives: Reforms that reduce the influence of pharmaceutical funding, conflicts of interest, and biased guidelines are essential. Doctors must be able to recommend interventions (nutritional, preventative, or pharmacological) based on patient outcomes rather than corporate incentives.

4. Integrate AI as a support, not a substitute: AI can provide decision support, suggest empathetic language, and flag evidence-based interventions outside the mainstream pharmaco-centric paradigm. But it cannot replace the human touch that is crucial for healing, particularly in emotionally charged or ethically complex situations.

The irony is stark: AI seems more empathic than humans not because it has a soul, but because humans have been structurally stripped of theirs. Exhausted, constrained, and compromised, doctors often cannot exercise the empathy they entered the profession to deliver. Meanwhile AI (tireless, obedient, and commercially neutral) is positioned as the new "caregiver."

If we allow this trend to continue unchecked, we risk creating a healthcare system where machines provide the feeling of care while humans do the technical labour. The sick and vulnerable would be treated by algorithms rather than by morally independent professionals, and the profound human dimensions of medicine (compassion, judgment, cultural understanding) could atrophy.

AI's rise in healthcare is a symptom, not the disease. The real problem is a system that turns doctors into bureaucrats, constrains their judgment, and ties their hands to commercial imperatives. The solution is not to replace humans with robots, but to restore human autonomy and moral freedom, while leveraging AI to enhance, not substitute, empathy.

If we succeed, we could see a future where AI handles administrative drudgery and information synthesis, while doctors focus on the art of care: listening, holding a hand, interpreting subtle cues, and providing individualised guidance. In other words, we could finally have healthcare that is both technically excellent and truly human.

The window for making this choice is closing. If medicine continues to sacrifice humanity for efficiency and profit, AI will only make the failure more apparent, and patients, ironically, may receive more empathy from a machine than from the humans who swore to care for them.

https://theconversation.com/ai-is-beating-doctors-at-empathy-because-weve-turned-doctors-into-robots-269108#:~:text=The%20results%20were%20startling%3A%20AI,examine%20what's%20really%20happening%20here

 
