The hype, if not outright BS, surrounding artificial intelligence (AI), particularly large language models (LLMs) like ChatGPT, Gemini, and others, has sparked fears of a "white-collar bloodbath," with predictions of millions of jobs lost to automation. A May 2025 Axios article cites Anthropic CEO Dario Amodei warning that AI could eliminate half of entry-level white-collar jobs, potentially spiking unemployment to 10–20% within one to five years. Yet for the anti-AI crowd, and I count myself among those sceptical of AI's ability to replace human workers en masse, the reality is far less apocalyptic. Drawing on real-world examples like investigative journalist Ian Lind's experience and insights from critics like Michael Spencer and Jing Hu, this discussion argues that AI's fundamental limitations, particularly its inability to deliver absolute accuracy in high-value tasks, ensure that human jobs remain secure until AI can overcome these critical flaws. Far from a job-killing juggernaut, AI remains a flawed tool that often creates more work than it saves!

At the heart of the anti-AI argument is a simple truth: AI fails where absolute accuracy is required to create value. Unlike low-stakes tasks such as generating a school essay or a catchy jingle, where "good enough" suffices (you know, like this essay!), high-value work in fields like journalism, law, finance, and cybersecurity demands precision that AI cannot consistently deliver. Ian Lind, an investigative journalist with decades of experience, tested AI tools (NotebookLM, Gemini, ChatGPT) to analyse a massive trove of documents from the most complex federal prosecution case in Hawaii's recent history. His findings, shared in a July 2025 post, reveal AI's critical weaknesses:

Incomplete Processing: AI doesn't "read" entire datasets, stopping once it has enough to generate a plausible response. Lind found that AI missed mentions of a key individual, "Drew," in 150 warrants, despite claiming otherwise.

Digital Dementia: AI lacks memory of prior queries or responses, leading to inconsistent outputs. This makes it unreliable for iterative tasks requiring continuity.

Untrustworthy Outputs: AI produces "good enough" responses riddled with errors or hallucinations, fabricated details that sound convincing but are wrong. Lind noted that even straightforward queries yielded "noticeable errors," rendering AI untrustworthy for consequential work.

These flaws stem from AI's core architecture. LLMs are probabilistic text prediction engines, not reasoning machines. As Jing Hu of 2nd Order Thinkers explains, AI agents lack true agency or understanding: they follow pre-set steps and cannot adapt to unexpected changes, like a new form field or an API error. This echoes philosopher Hubert Dreyfus's 1972 critique in What Computers Can't Do, which argued that AI's reductionist approach cannot replicate human intuition or context. John Searle's Chinese Room thought experiment further illustrates the point: AI mimics understanding without grasping meaning, like a person shuffling Chinese symbols without knowing the language.
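
Hu's form-field example is easy to make concrete. Below is a deliberately simplified sketch (the field names and the "agent" script are hypothetical, not any vendor's actual API): the agent executes its fixed plan flawlessly until the environment changes by a single field, then fails outright instead of adapting.

```python
# A toy, hypothetical scripted "agent": it executes a fixed plan and has no
# understanding of the task, so any change in the environment breaks it.
EXPECTED_FIELDS = ["name", "email", "amount"]  # the plan was written for exactly these

def fill_form(form_fields: list[str], data: dict[str, str]) -> dict[str, str]:
    submission = {}
    for field in form_fields:
        if field not in EXPECTED_FIELDS:
            # A human would read the new label and adapt; the scripted agent cannot.
            raise RuntimeError(f"No step defined for unexpected field '{field}'")
        submission[field] = data[field]
    return submission

data = {"name": "A. Smith", "email": "a@example.com", "amount": "100"}
print(fill_form(["name", "email", "amount"], data))  # the expected form: works fine

try:
    # The form gains one new field and the "agent" fails outright.
    fill_form(["name", "email", "amount", "tax_id"], data)
except RuntimeError as err:
    print(f"Agent failed: {err}")
```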

In high-stakes fields, these limitations are dealbreakers. A 2025 Guardian article recounts the case of a graphic designer laid off after six years because AI supposedly replaced her work, only for the company to face backlash when AI-generated outputs required extensive human correction. Similarly, a CrowdStrike employee noted that generative AI tools produced inaccuracies that embarrassed the company when communicated to customers, requiring human intervention to fix. As a July 2025 ZeroHedge post aptly states, "Nobody's sharing the AI tool's error that forfeited the lawsuit" because such failures undermine the hype.

Far from replacing humans, AI often creates liabilities that demand more human labour. In legal and investigative work, errors like missing a defendant's name in warrants can derail cases, costing time, money, and credibility. Lind's experience highlights that AI's "shoot from the hip" approach, generating superficially credible but flawed outputs, forces professionals to double-check its work, negating any efficiency gains. A 2025 Axios report notes that companies like Axios require managers to justify why AI won't do a job before hiring humans, indicating scepticism about AI's reliability even among adopters.
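
The double-checking itself is worth spelling out: the reliable way to verify a claim like "Drew is not mentioned in these warrants" is a deterministic search, not another prompt. A minimal sketch, assuming the warrants exist as local plain-text files (the folder name and search term are placeholders):

```python
import re
from pathlib import Path

def find_mentions(folder: str, term: str) -> dict[str, int]:
    """Count whole-word, case-insensitive mentions of `term` in every .txt file."""
    pattern = re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
    hits = {}
    for path in sorted(Path(folder).glob("*.txt")):
        count = len(pattern.findall(path.read_text(errors="ignore")))
        if count:
            hits[path.name] = count
    return hits

# Hypothetical usage: check the model's claim that "Drew" never appears.
mentions = find_mentions("warrants/", "Drew")
print(f"'Drew' appears in {len(mentions)} file(s)")
for name, count in mentions.items():
    print(f"  {name}: {count} mention(s)")
```

A script like this either finds the mentions or it doesn't; there is no "good enough," which is exactly the standard the LLM failed to meet.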

This pattern extends to other sectors. A finance worker shared on X that AI analyst agents pitching discounted cash flow (DCF) models required human analysts to redo the work for accuracy, effectively doubling the effort. In cybersecurity, CrowdStrike's 2025 layoffs of 500 employees were justified by AI's "efficiency," yet remaining workers faced increased workloads to correct AI's mistakes, with no additional compensation. These examples show that AI's errors create "shadow work," unpaid or underpaid human labour to fix machine failures, undermining claims of widespread job replacement.
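
To make the DCF example concrete (the cash flows and rates below are illustrative, not figures from the cited post): a valuation is just projected cash flows discounted back to the present, so one quietly wrong input shifts the entire answer, and the analyst has to rebuild the model to find out whether that happened.

```python
def dcf_value(cash_flows: list[float], discount_rate: float) -> float:
    """Present value of annual cash flows: sum of CF_t / (1 + r)**t."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows, start=1))

projected = [100.0, 110.0, 121.0, 133.0, 146.0]  # illustrative five-year projection
print(round(dcf_value(projected, 0.10), 1))      # analyst's discount rate of 10% -> ~454.2
print(round(dcf_value(projected, 0.08), 1))      # a quietly swapped 8% rate -> ~480.1, about 6% higher
```

Every input flows through the same formula, so an unverified figure doesn't just add one error; it silently reprices the whole valuation.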

The AI hype, fuelled by breathless claims of robots dancing or chatbots coding, obscures these limitations. A 2025 Forbes article predicts that AI could dominate the job market by 2050, with up to 60% of jobs requiring adaptation. Yet, projections like Goldman Sachs' estimate of 300 million jobs impacted globally or McKinsey's 30% automation of U.S. work hours rely on assumptions that AI will soon achieve near-human accuracy. Real-world evidence suggests otherwise. A 2024 Business Insider report notes that while AI excels at tasks like drafting cover letters, its errors in high-stakes applications like legal briefs or customer service have led to sanctions and reputational damage.

The anti-AI crowd can take comfort in the fact that AI's current capabilities fall short of the "99.9%" accuracy needed for high-value work, as an X user noted in December 2024. While AI can handle tasks where 90% accuracy suffices, like generating marketing copy or basic customer service responses, it struggles where precision is non-negotiable. This gap ensures that jobs requiring nuanced judgment, contextual understanding, or accountability remain human domains for the foreseeable future.
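
A back-of-the-envelope calculation shows why that last 9.9% matters. Assuming, purely for illustration, that every fact in a deliverable must be independently correct, per-fact accuracy compounds: a 90%-accurate tool almost never produces a fully correct 50-fact work product, while a 99.9%-accurate one usually does.

```python
# Probability that ALL facts in a deliverable are correct, assuming each fact
# is independently right with probability p (a simplifying assumption).
def prob_all_correct(per_fact_accuracy: float, num_facts: int) -> float:
    return per_fact_accuracy ** num_facts

for p in (0.90, 0.999):
    print(f"p = {p}: 50 facts -> {prob_all_correct(p, 50):.1%}, "
          f"500 facts -> {prob_all_correct(p, 500):.1%}")
# 90% per-fact accuracy: roughly a 0.5% chance of a fully correct 50-fact product, ~0% for 500.
# 99.9% per-fact accuracy: roughly 95% for 50 facts, ~61% for 500.
```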

While AI poses risks to certain roles, its limitations protect many others. Entry-level white-collar workers, such as junior software developers, paralegals, and customer service reps, face pressure, as noted in a 2025 New York Times article citing a 5.8% unemployment rate for recent graduates in technical fields. However, roles requiring absolute accuracy, emotional intelligence, or complex problem-solving are less vulnerable. For instance:

Investigative Journalism: Lind's work shows that AI cannot reliably sift through vast datasets without missing critical details, preserving the need for human expertise.

Legal and Financial Analysis: Errors in AI-generated legal briefs or financial models, as seen in real-world cases, necessitate human oversight.

Creative and Strategic Roles: AI's lack of genuine creativity or goal-oriented reasoning limits its ability to replace strategists, consultants, or artists who navigate ambiguity.

Human-Centric Fields: A 2024 Vic.ai article notes that banking and consulting rely on personal relationships and emotional intelligence, areas where AI falls short.

The World Economic Forum's 2025 Future of Jobs forecast of 92 million jobs displaced by 2030 is offset by a projected 170 million new jobs, suggesting a net gain of roughly 78 million. Roles like AI trainers, ethicists, and oversight specialists are emerging, requiring uniquely human skills. Even in automation-heavy sectors, humans are needed to maintain and verify AI systems, as seen in X's use of human moderators to ensure content accuracy.

Pro-AI advocates argue that AI's capabilities are rapidly improving, citing advancements like Anthropic's Claude 4, which can code at near-human levels. They claim that training on larger datasets will close the accuracy gap, making mass job replacement inevitable. However, this ignores AI's structural flaws. LLMs rely on statistical associations, not reasoning or causality, and no amount of data can fix their inability to understand context or detect consequential errors. As Dreyfus argued, human intuition, rooted in lived experience, cannot be replicated by rule-based systems, a point reinforced by the Chinese Room analogy.

Another counterargument is that AI's economic benefits, like a projected $15.7 trillion impact by 2030, justify job displacement. Yet, this assumes equitable distribution of gains, which critics like Jing Hu doubt, noting that AI may exacerbate inequality by favouring tech giants over workers. The anti-AI crowd can argue that human labour remains essential to ensure fairness and accountability, qualities AI cannot provide.

For those worried about AI taking their jobs, the strategy is clear: leverage AI's limitations to stay indispensable. Focus on skills where accuracy, context, and human judgment are paramount: fields like investigative reporting, legal analysis, or strategic consulting. Invest in "AI-resilient" sectors like healthcare or education, where empathy and complex decision-making are critical. Upskilling in areas like AI oversight or ethics, as suggested by a 2025 New York Times article, can also position workers to benefit from AI's growth without being replaced.

Moreover, the anti-AI crowd should advocate for transparency about AI's failures. Companies often hide errors, as seen in CrowdStrike's AI-driven layoffs, to maintain the hype. Publicising cases like Lind's or the graphic designer's redundancy can shift the narrative, emphasising AI's role as a tool, not a replacement. Governments and unions, as seen in the EU's AI Act or SAG-AFTRA's 2023 strike, can push for regulations ensuring human oversight.

AI's limitations, its inability to deliver absolute accuracy, understand context, or adapt to unexpected changes, ensure that human workers remain essential in high-value roles. Real-world examples, from Ian Lind's investigative journalism to financial analysts redoing AI's faulty work, demonstrate that AI often creates more problems than it solves. While the hype predicts mass layoffs, with estimates like 300 million jobs impacted globally, the reality is that AI's errors and lack of agency keep humans in the driver's seat. For the anti-AI crowd, the message is clear: hone skills that AI can't replicate, expose its failures, and advocate for policies that prioritise human accountability. Until AI overcomes its fundamental flaws, your job is safer than the doomsayers claim!

https://www.zerohedge.com/ai/maybe-ai-isnt-going-replace-you-work-after-all

"AI fails at tasks where accuracy must be absolute to create value.

In reviewing the on-going discussions about how many people will be replaced by AI, I find a severe lack of real-world examples. I'm remedying this deficiency with an example of AI's failure in the kind of high-value work that many anticipate will soon be performed by AI.

Few things in life are more pervasively screechy than hype, which brings us to the current feeding-frenzy of AI hype. Since we all read the same breathless claims and have seen the videos of robots dancing, I'll cut to the chase: Nobody posts videos of their robot falling off a ladder and crushing the roses because, well, the optics aren't very warm and fuzzy.

For the same reason, nobody's sharing the AI tool's error that forfeited the lawsuit. The only way to really grasp the limits of these tools is to deploy them in the kinds of high-level, high-value work that they're supposed to be able to do with ease, speed and accuracy, because nobody's paying real money to watch robots dance or read a copycat AI-generated essay on Yeats that's tossed moments after being submitted to the professor.

In the real world of value creation, optics don't count, accuracy counts. Nobody cares if the AI chatbot that churned out the Yeats homework hallucinated mid-stream because nobody's paying for AI output that has zero scarcity value: an AI-generated class paper, song or video joins 10 million similar copycat papers / songs / videos that nobody pays attention to because they can create their own in 30 seconds.

So let's examine an actual example of AI being deployed to do the sort of high-level, high-value work that it's going to need to nail perfectly to replace us all at work. My friend Ian Lind, whom I've known for 50 years, is an investigative reporter with an enviably lengthy record of the kind of journalism few have the experience or resources to do. (His blog is www.iLind.net.)

The judge's letter recommending Ian for the award he received from the American Judges Association for distinguished reporting about the Judiciary ran for 18 pages, and that was just a summary of his work.

Ian's reporting/blogging in the early 2000s inspired me to try my hand at it in 2005.

Ian has spent the last few years helping the public understand the most complex federal prosecution case in Hawaii's recent history, and so the number of documents that have piled up is enormous. He's been experimenting with AI tools (NotebookLM, Gemini, ChatGPT) for months on various projects, and he recently shared this account with me:

"My experience has definitely been mixed. On the one hand, sort of high level requests like 'identify the major issues raised in the documents and sort by importance' produced interesting and suggestive results. But attempts to find and pull together details on a person or topic almost always had noticeable errors or hallucinations. I would never be able to trust responses to even what I consider straightforward instructions. Too many errors. Looking for mentions of 'drew' in 150 warrants said he wasn't mentioned. But he was, I've gone back and found those mentions. I think the bots read enough to give an answer and don't keep incorporating data to the end. The shoot from the hip and, in my experience, have often produced mistakes. Sometimes it's 25 answers and one glaring mistake, sometimes more basic."

Let's start with the context. This is similar to the kind of work performed by legal services. Ours is a rule-of-law advocacy system, so legal proceedings are consequential. They aren't a ditty or a class paper, and Ian's experience is mirrored by many other professionals.

Let's summarize AI's fundamental weaknesses:

1. AI doesn't actually "read" the entire collection of texts. In human terms, it gets "bored" and stops once it has enough to generate a credible response.

2. AI has digital dementia. It doesn't necessarily remember what you asked for in the past nor does it necessarily remember its previous responses to the same queries.

3. AI is fundamentally, irrevocably untrustworthy. It makes errors that it doesn't detect (because it didn't actually "read" the entire trove of text) and it generates responses that are "good enough," meaning they're not 100% accurate, but they have the superficial appearance of being comprehensive and therefore acceptable. This is the "shoot from the hip" response Ian described.

In other words, 90% is good enough, as who cares about the other 10% in a college paper, copycat song or cutesy video.

But in real work, the 10% of errors and hallucinations actually matter, because the entire value creation of the work depends on that 10% being right, not half-assed.

In the realm of LLM AI, getting Yeats' date of birth wrong--an error without consequence--is the same as missing the defendant's name in 150 warrants. These programs are text / content prediction engines; they don't actually "know" or "understand" anything. They can't tell the difference between a consequential error and a "who cares" error.

This goes back to the classic AI thought experiment The Chinese Room, which posits a person who doesn't know the Chinese language in a sealed room shuffling symbols around that translate English words to Chinese characters.

From the outside, it appears that the black box (the sealed room) "knows Chinese" because it's translating English to Chinese. But the person--or AI agent--doesn't actually "know Chinese", or understand any of what's been translated. It has no awareness of languages, meanings or knowledge.

This describes AI agents in a nutshell.

4. AI agents will claim their response is accurate when it is obviously lacking, they will lie to cover their failure, and then lie about lying. If pressed, they will apologize and then lie again. Read this account to the end: Diabolus Ex Machina.

In summary: AI fails at tasks where accuracy must be absolute to create value. Lacking this, it's not just worthless, it's counter-productive and even harmful, creating liabilities far more consequential than the initial errors.

"But they're getting better." No, they're not--not in what matters. AI agents are probabilistic text / content prediction machines; they're trained parrots in the Chinese Room. They don't actually "know" anything or "understand" anything, and adding another gazillion pages to their "training" won't change this.

The Responsible Lie: How AI Sells Conviction Without Truth:

"The widespread excitement around generative AI, particularly large language models (LLMs) like ChatGPT, Gemini, Grok, and DeepSeek, is built on a fundamental misunderstanding. While these systems impress users with articulate responses and seemingly reasoned arguments, the truth is that what appears to be 'reasoning' is nothing more than a sophisticated form of mimicry.

These models aren't searching for truth through facts and logical arguments--they're predicting text based on patterns in the vast datasets they're 'trained' on. That's not intelligence--and it isn't reasoning. And if their 'training' data is itself biased, then we've got real problems.

I'm sure it will surprise eager AI users to learn that the architecture at the core of LLMs is fuzzy--and incompatible with structured logic or causality. The thinking isn't real, it's simulated, and is not even sequential. What people mistake for understanding is actually statistical association."

AI Has a Critical Flaw -- And it's Unfixable

"AI isn't intelligent in the way we think it is. It's a probability machine. It doesn't think. It predicts. It doesn't reason. It associates patterns. It doesn't create. It remixes. Large Language Models (LLMs) don't understand meaning -- they predict the next word in a sentence based on training data."

Let's return now to the larger context of AI replacing human workers en masse. This post by Michael Spencer of AI Supremacy and Jing Hu of 2nd Order Thinkers offers a highly informed and highly skeptical critique of the hype that AI will unleash a tsunami of layoffs that will soon reach the tens of millions. Will AI Agents really Automate Jobs at Scale?

Jing Hu explains the fundamental weaknesses in all these agents: it's well worth reading her explanations and real-world examples in the link above. Here is an excerpt:

"Today's agents have minimal true agency.

Their 'initiative' is largely an illusion; behind the scenes, they follow (or are trying to) tightly choreographed steps that a developer or prompt writer set up.

If you ask an agent to do Task X, it will do X, then stop. Ask for Y, and it does Y. But if halfway through X something unexpected happens, say a form has a new field, or an API call returns an error, the agent breaks down.

Because it has zero understanding of the task.

Change the environment slightly (e.g., update an interface or move a button), and the poor thing can't adapt on the fly.

AI agents today lack a genuine concept of overarching goals or the common-sense context that humans use.

They're essentially text prediction engines."

I've shared my own abysmal experiences with "customer service" AI bots:

Digital Service Dumpster Fires and Shadow Work

Here's my exploration of the kinds of experiential real-world skills AI won't master with total capital and operational costs that are lower than the cost of human labor: oops, that hallucination just sawed through a 220V electrical line that wasn't visible but the human knew was there.