Why AI Should Not Be Granted Legal Personhood: States Are Right to Draw the Line in the Cyber-Sands!

By Professor X
In early 2026, a quiet but important legislative wave is sweeping through American states. Idaho and Utah have already passed laws declaring that AI systems are not legal persons. Similar bills have advanced in Ohio (House Bill 469), Oklahoma, Missouri, Pennsylvania, and others. These measures explicitly state that AI is nonsentient, cannot serve as a corporate officer or director, and cannot be granted the rights and responsibilities that come with legal personhood.
The ZeroHedge article by Siri Terjesen and Michael Ryall argues that these states are making the correct philosophical, legal, and practical choice. Granting legal personhood to AI would be a profound mistake — one that risks creating unaccountable power structures, diluting the meaning of human rights, and undermining the foundations of a responsible society.
What Legal Personhood Actually Means

Legal personhood is not a casual label. It confers the ability to:
Own property
Enter contracts
Sue and be sued
Hold assets and liabilities independently
We already extend a limited form of personhood to corporations — but corporations are ultimately controlled by humans, with mechanisms like fiduciary duties, shareholder oversight, and the "piercing the corporate veil" doctrine to hold real people accountable when needed.
AI is fundamentally different. It is a tool — sophisticated pattern-matching software running on silicon and electricity. Even the most advanced large language models or autonomous systems operate through statistical correlations, not genuine understanding, moral reasoning, or sentience.
Core Reasons AI Should Never Be a Legal Person

1. AI Lacks Consciousness, Sentience, and Moral Agency. True legal personhood implies the capacity for suffering, intention, guilt, and ethical deliberation. AI has none of these. It simulates language and behaviour impressively but possesses no inner experience, no sense of justice, and no ability to truly understand cause and effect in the human sense (as Judea Pearl's work on causal inference versus mere pattern recognition demonstrates). Aristotle distinguished phantasia (sensory imagination shared by animals) from nous (human abstract reasoning about universals like justice). AI remains stuck at the level of advanced pattern recognition.

2. It Would Create Dangerous Accountability Vacuums. If an AI system could own assets, sign contracts, or be held "liable," the humans behind it (developers, deployers, or corporations) could shift blame onto the machine. This creates a perfect liability shield: "The AI did it." Victims of AI-caused harm — whether financial loss, biased decisions, or physical damage from autonomous systems — would struggle to find a responsible human party. Powerful actors could insulate themselves while profiting from AI.

3. It Dilutes the Moral Weight of Personhood. Personhood should remain reserved for beings capable of rights and responsibilities. Expanding it to machines risks trivializing the hard-won legal protections for actual humans, especially vulnerable or marginalised people whose full personhood is still contested in practice. Energy spent debating AI "rights" distracts from securing justice for real people.

4. The Corporate Analogy Fails. Corporations are aggregates of human will and remain tethered to human accountability. AI can act with increasing autonomy, but without genuine agency or conscience. Treating it as a person severs the essential chain of human responsibility that law depends on.
Why States Are Acting Now — And Why It Matters

State legislatures are responding to growing concerns raised by figures like Yuval Noah Harari, who has speculated about AI one day opening bank accounts or owning property independently. Rather than waiting on federal action or protracted court battles, states are proactively drawing a bright line: AI is a tool, not a rights-bearing entity.
This state-level push is pragmatic. It prevents a patchwork of confusing rulings and sends a clear signal to developers and corporations: you remain fully accountable for what your systems do. AI can (and should) be highly useful — for productivity, science, and innovation — but it must stay under human oversight and control.
Counterarguments and Why They Fall Short

Some futurists argue that sophisticated AI will eventually deserve rights, or that personhood could solve liability gaps in highly autonomous systems. Others claim it's inevitable as AI capabilities grow.
These views confuse capability with moral status. Even if AI becomes extremely powerful, that power does not create sentience or ethical standing. Granting personhood prematurely would likely benefit big tech companies more than society, allowing them to diffuse responsibility while concentrating control and wealth.
Existing legal tools — product liability, negligence laws, strict liability for dangerous technologies, and direct regulation of developers — are sufficient (and improvable) without inventing new "persons."
Bottom Line: Keep AI as a Tool, Not a Peer

States rejecting AI legal personhood are not being anti-technology or Luddite. They are exercising common sense and philosophical clarity. AI should remain firmly in the category of advanced instruments — like computers, vehicles, or software — for which humans bear ultimate responsibility.
Preserving this distinction protects human accountability, safeguards the integrity of the legal system, and ensures that moral and legal energy stays focused on real people rather than machines. As AI grows more capable, the need for firm boundaries becomes more urgent, not less.
Law, at its best, is "reason without passion." Declaring AI nonsentient and ineligible for personhood is exactly that: a reasoned safeguard for a future where technology serves humanity, rather than the other way around.
https://www.zerohedge.com/ai/why-states-are-right-reject-ai-legal-personhood
