AI Dangers: How AI is Fuelling Courtroom Chaos, by Ian Wilson, LL.B.
A March 2026 Futurism article highlights a growing problem in American courts: pro se litigants (self-represented individuals without lawyers) are increasingly turning to accessible generative AI tools such as ChatGPT, Gemini, and other large language models (LLMs) to draft complaints, motions, briefs, and emails. What was meant to "democratise" access to justice has instead produced a wave of voluminous, often absurd filings packed with AI hallucinations — confidently presented but entirely fabricated laws, nonexistent case citations, misapplied legal doctrines, and grandiose conspiracy theories.
Pro se parties have always filed more erratic or voluminous paperwork than represented litigants. AI supercharges this tendency. Instead of concise one- or two-page complaints, parties now submit hundreds of pages of polished-sounding but substantively empty documents. AI excels at "cogency-washing" — taking a person's grievances, biases, or delusions and repackaging them in authoritative legal jargon, complete with invented precedents.
Specific examples include:
A Florida couple, behind on a few hundred dollars in HOA fees, used AI to file a sprawling lawsuit alleging a RICO (racketeering) conspiracy involving the HOA, its lawyers, and broader fraud. They bombarded the court with daily motions demanding judgments, sanctions, disbarments, and even FBI involvement. The case was dismissed with prejudice.
In another dispute, a defendant flooded an opposing lawyer with over 300 AI-generated emails spinning wild theories of misconduct, escalating a simple payment issue into "fan fiction."
Other cases involve minor debt, foreclosure, custody, or contract disputes ballooning into accusations against governments, corporations, or public figures, often citing fake cases or mangling jury instructions for civil matters.
Databases tracking AI hallucinations in court filings show that pro se litigants account for the majority of such incidents in the U.S. Courts in Florida, California, Oklahoma, Colorado, and elsewhere have struck briefs, issued warnings, and imposed monetary sanctions ranging from hundreds to tens of thousands of dollars. At least two dozen pro se litigants have faced such sanctions since late 2023, with many more in 2025-2026. Some judges now require disclosure of AI use or label repeat offenders "vexatious litigants," restricting their ability to file again without court permission.
Opposing counsel and court staff bear the real burden. Lawyers report tripling their workload to rebut irrelevant or nonsensical arguments. Clerks waste nights verifying phantom citations. Minor cases that once resolved quickly now drag on for months, driving up costs dramatically (one $2,000 matter ballooned past $20,000 in fees). This clogs dockets, delays legitimate cases, and undermines the principle that courts exist to resolve real disputes efficiently.
LLMs are prediction engines trained on vast text data. They generate fluent, confident prose but have no genuine understanding of law, facts, or logic. When prompted by frustrated or aggrieved users—"Help me sue my HOA for fraud"—they readily produce what the user wants to hear, including invented authorities. Pro se users, lacking legal training to spot errors, often accept the output uncritically and file it.
This creates a feedback loop: AI tells people what they want to hear, emboldening crankish or obsessive behaviour that might otherwise fizzle out. Mental health issues, personal vendettas, or ideological fixation can be amplified into seemingly sophisticated legal campaigns. While AI can help with basic form-filling or research (when verified), it fails catastrophically when treated as a substitute for judgment, research, or ethical restraint.
Courts have responded with sanctions under rules like Federal Rule of Civil Procedure 11 (requiring filings to be well-grounded in law and fact). Judges emphasise that even pro se litigants must take responsibility; some appellate courts have issued explicit warnings that AI-generated hallucinations can lead to penalties, regardless of representation status.
From a biblical worldview, this phenomenon reveals deeper truths about human nature and the limits of technology in a fallen world.
Truth vs. Sophistry: Scripture warns against deceitful words and false witnesses (Proverbs 19:5; Ephesians 4:25). AI hallucinations are sophisticated lies — plausible on the surface but empty of substance. Courts, as institutions of earthly justice (Romans 13:1-4), rely on ordered truth-seeking. Flooding them with fabricated authority undermines that order, wastes resources meant for the vulnerable, and erodes public trust in the rule of law. Christians should value integrity in speech and action, not clever manipulation.
Access to Justice and Responsibility: The ideal of self-representation honours the principle that individuals can seek redress (e.g., the biblical call for fair courts and protection of the poor). Yet true justice requires wisdom, humility, and accountability — not outsourcing thinking to a machine that flatters biases. Pro se chaos highlights how technology, without moral guardrails, can empower folly rather than wisdom (Proverbs 14:12). Human lawyers, despite flaws, provide ethical accountability and discernment that raw AI lacks.
Human Dignity and Limits of Tools: God created humans with reason and conscience (Genesis 1:27). Treating AI as an oracle risks idolatry — looking to silicon for salvation from personal disputes instead of pursuing reconciliation where possible (Matthew 5:23-24) or accepting providential limits. Many pro se cases stem from real grievances in an increasingly complex, bureaucratic society. However, escalating minor conflicts into conspiratorial warfare rarely glorifies God or serves neighbours. Believers are called to pursue peace, speak truth plainly, and submit to rightful authority, rather than weaponise tools for endless strife (Romans 12:18; James 3:17-18).
Broader Societal Warning: This is part of a larger pattern where powerful technologies outpace wisdom and institutions. Just as easy access to information has sometimes amplified misinformation, AI lowers the barrier to sophisticated-sounding nonsense in high-stakes arenas. Courts may eventually adapt with mandatory AI disclosures, verification requirements, or even restrictions on unverified generative content. Yet the root issue is spiritual: the heart's tendency toward self-justification and grievance (Jeremiah 17:9). True resolution comes not from better filings but from repentance, forgiveness, and alignment with objective moral order.
In practice, individuals facing legal issues should seek competent counsel when possible, use AI only as a preliminary aid (with rigorous human verification), and approach disputes with integrity rather than volume. For the legal system, this chaos underscores the enduring need for trained professionals bound by ethics, not just raw generative power. As one judge noted, AI might assist pro se litigants but demands careful understanding of its severe limitations — limitations that no algorithm can overcome without human accountability grounded in truth.
https://futurism.com/artificial-intelligence/ai-lawsuits-chaos-courts-lawyers
