AI and the Future of Doctors and Lawyers: Extinction or Evolution? By Ian Wilson LL.B and Brian Simpson

In a 2025 Business Insider interview, Jad Tarifi, a former Google AI pioneer and founder of Integral AI, issued a stark warning to aspiring doctors and lawyers: AI's rapid advancements could make their advanced degrees obsolete, as generative models increasingly outperform humans in tasks traditionally requiring years of specialised training. Tarifi's claim that pursuing medical or law degrees is akin to "throwing away" years of life, coupled with his scepticism about PhDs outside niche fields like AI for biology, echoes broader fears about AI's labour-displacing potential. Yet, his assertion that AI will soon master these professions overlooks current limitations, ethical barriers, and societal needs. Are human doctors and lawyers truly facing extinction, or is this a case of Silicon Valley hubris worshipping the sacred cow of AI supremacy? This discussion evaluates Tarifi's predictions, balancing AI's capabilities with the enduring value of human expertise.

AI's Capabilities in Medicine and Law

Tarifi's argument hinges on AI's ability to automate tasks central to medicine and law. In medicine, AI has shown promise in diagnostics and data analysis. For instance, Nature Medicine (2024) reported that AI models like Google's Med-PaLM achieved 79.5% accuracy on the UK Royal College of Radiologists exam, close to human radiologists' 84.8%. In diagnostics, AI can outperform humans in specific tasks, such as detecting retinal biomarkers for Alzheimer's (NCBI Bookshelf, 2025) or predicting sepsis a day in advance (PMC, 2020). These tools process vast datasets (genomic, imaging, or clinical) faster than humans, supporting Tarifi's claim that medical education's reliance on memorisation is outdated. Generative AI, like OpenEvidence, aids doctors by summarising medical literature in real time, as noted by Harvard's Adam Rodman (Harvard Gazette, 2025).

In law, AI is transforming routine tasks. Tools like Leya and EvenUp streamline document review and legal research, with Thomson Reuters' 2024 Future of Professionals Report estimating AI could save lawyers 200 hours annually (Legal Blog, 2024). Advanced language models, such as those from Google's DeepMind, can draft contracts or analyse case law with increasing accuracy. Tarifi's warning aligns with predictions like those in NatLawReview (2024), where experts foresee AI replacing entry-level lawyers within five years, especially with quantum computing advancements like Google's Willow chip, which performs calculations in minutes that would take supercomputers 10 septillion years.

These developments suggest AI could disrupt both fields, particularly for tasks involving pattern recognition, data processing, or rote memorisation, skills Tarifi argues are overemphasised in traditional education. If AI continues improving, as he predicts, it could indeed challenge the necessity of lengthy training for certain roles.

The Limits of AI in Medicine and Law

Despite these advances, AI's limitations temper the extinction narrative. In medicine, AI struggles with complex, context-dependent tasks requiring emotional intelligence, ethical judgment, or physical interaction. A 2024 study cited in IEEE Spectrum (2025) found that GPT-4, when used by doctors, did not improve diagnostic accuracy or speed compared to humans alone, and its standalone performance, while superior, was prone to "hallucinations", fabricated outputs that could mislead treatment. AI also lacks the empathy critical for patient trust, as noted in PMC (2020), which emphasises that "no application can replace personal connection." With physician shortages projected to reach 124,000 in the US by 2034 (AAMC, 2024), replacing human doctors entirely seems impractical, especially for hands-on specialties like surgery or psychiatry.

In law, AI's challenges are similar. While it excels at repetitive tasks, it cannot navigate courtroom advocacy, ethical dilemmas, or client relationships requiring nuanced judgment. NatLawReview (2025) quotes Jenna Earnshaw of Wisedocs: "AI can't think critically, navigate ethical dilemmas, or argue a case in court." Hallucinations remain a persistent issue, with no legal tech company fully eliminating them (NatLawReview, 2025). Human oversight is essential to catch biases or errors, as AI's "black box" nature obscures its reasoning (PMC, 2023). Moreover, legal systems value precedent and human accountability, making full automation unlikely in high-stakes cases.

Tarifi's confidence in AI's trajectory also assumes continuous exponential growth, but bottlenecks like data scarcity, energy demands, and regulatory hurdles could stall progress. IEEE Spectrum (2025) notes that websites are increasingly blocking AI data scraping, and AI data centres are projected to increase Canada's electricity demand by 75% by 2050 (NCBI Bookshelf, 2025). The EU's AI Act (2024) and proposed US regulations further complicate deployment, requiring human-in-the-loop oversight (TechTarget, 2024).

The Sacred Cow of Professional Prestige

Tarifi's warning challenges a sacred cow: the belief that advanced degrees in medicine and law guarantee secure, prestigious careers. This dogma, rooted in decades of social and economic reverence for these professions, is now questioned as AI automates tasks once reserved for highly trained humans. However, his Silicon Valley-centric view, prioritising "meditation" and "emotional self-discovery" over professional training, creates a new sacred cow: the overhyping of AI's capabilities. Unchecked reliance on intelligent systems risks subjugating human agency. Tarifi's dismissal of PhDs, except in niche fields, ignores the value of critical thinking and specialisation that degrees cultivate, which AI cannot replicate in dynamic, human-centric contexts.

Evolution, Not Extinction

Rather than extinction, human doctors and lawyers face evolution. AI will likely augment these professions, as suggested by the American Medical Association (Harvard Medical School, 2023), automating repetitive tasks like note-taking or legal drafting to free professionals for higher-level work. PMC (2023) emphasises human-AI collaboration, where doctors using AI outperform those who don't, a sentiment echoed by Chris Williams of Leya: "AI won't replace lawyers, but lawyers using it will" (NatLawReview, 2025). In medicine, AI scribes and diagnostic tools could alleviate burnout, which affects 63% of US primary care physicians (Harvard Gazette, 2025), while lawyers could focus on strategy and advocacy.

However, Tarifi's caution about lengthy education has merit. Medical training, often spanning 8–12 years, and law degrees, costing upwards of $200,000, may need reform to integrate AI literacy and focus on uniquely human skills. Niche fields like AI for biology, as Tarifi suggests, or cancer immunotherapy (PLoS One, 2020), offer opportunities where AI is nascent, requiring human innovation. Ethical and regulatory frameworks must also evolve, ensuring AI serves patients and clients without compromising accountability.

Human doctors and lawyers are not on the brink of extinction, but their roles are transforming. Tarifi's warning about AI's potential to disrupt these professions is grounded in real advancements (AI's diagnostic and legal research capabilities are impressive) but overstates its ability to replace human judgment, empathy, and accountability. The sacred cow of professional degrees must be questioned, but so too must the hype of AI supremacy. By embracing AI as a tool, not a replacement, and reforming education to value critical thinking, doctors and lawyers can evolve to meet future demands.

https://futurism.com/former-google-ai-exec-law-medicine

Friday, 29 August 2025
