Over in the sleek, glass-walled headquarters of Europol in The Hague, a quiet rebellion is brewing, not against criminals, but against the very laws designed to restrain power. Jürgen Ebner, the agency's deputy executive director, has issued a public plea that should chill every citizen who values privacy over promises: Relax the rules on AI. Let police bypass the GDPR, sidestep the EU AI Act, and deploy cutting-edge surveillance tools in "emergency" situations before legal reviews are complete. His justification? Criminals are "having the time of their life" with malicious AI, while Europol waits up to eight months for compliance checks.
This is not a technocratic footnote. It is a declaration of intent: When the state labels something an emergency, democratic safeguards become negotiable. And once that door is cracked open, it rarely closes. Ebner's rhetoric, framed in the language of urgency and innovation, masks a deeper shift: the transformation of law enforcement from a rule-bound institution into a pre-emptive, predictive, and increasingly unaccountable machine. This blog post traces the arc of Europol's ambition, the dangers of its proposed "fast track," and the eerie parallel with Christine Lagarde's impatience with democracy itself. What begins as a plea for speed ends as a blueprint for control.
Ebner's grievance is specific but revealing. Under current EU law, any new AI system used in law enforcement must undergo rigorous assessment under the GDPR (for data protection) and the EU AI Act (for risk classification and transparency). These aren't bureaucratic hurdles; they're constitutional firewalls. They mandate:
Data minimization: Only collect what's strictly necessary.
Purpose limitation: No repurposing data without consent or legal basis.
Human oversight: No fully automated decisions with legal or similarly significant effects (a minimal sketch of this safeguard follows the list).
Transparency: Citizens must know when AI is profiling them.
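To make the human-oversight requirement concrete, here is a minimal sketch in Python of what such a gate might look like inside an investigative pipeline. It is purely illustrative: the names (`Lead`, `act_on_lead`) and the workflow are hypothetical, not Europol's architecture or anything the GDPR prescribes verbatim. The point is simply that the model may flag, but only a documented human decision may produce legal effect.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    """A hypothetical AI-generated investigative lead."""
    subject_id: str
    model_score: float  # risk score produced by the model
    rationale: str      # the model's stated basis for the flag

def act_on_lead(lead: Lead, human_approved: bool) -> str:
    """Illustrative human-oversight gate: no decision with legal or
    similarly significant effect is taken on a model score alone."""
    if not human_approved:
        # The AI may surface a lead, but without documented human
        # sign-off it remains a lead: no arrest, no watchlist entry,
        # no reuse of the data for another purpose.
        return f"lead {lead.subject_id}: held pending human review"
    return f"lead {lead.subject_id}: escalated by a human decision"

# Even a maximum-confidence score cannot act on its own.
print(act_on_lead(Lead("X-042", 0.99, "pattern match"), human_approved=False))
```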
For Europol, these rules mean delay. A facial recognition system, a predictive policing algorithm, or a mass-decryption platform can't be rolled out until independent experts verify it won't misidentify innocents, discriminate by ethnicity, or vacuum up unrelated personal data.
Ebner calls this an eight-month "burden." He claims lives are at stake: child-trafficking networks, terrorist cells, and ransomware gangs are all moving faster than the law allows. But here's the sleight of hand: The delay isn't in the technology; it's in the accountability. The AI tools already exist. What takes time is proving they won't be abused.
And abuse is not hypothetical. Europol already operates the Traveler Information System, a database of 2.5 million suspicious travel records; the Europol Analysis System, ingesting terabytes of telecom and financial data; and decryption platforms that crack encrypted communications en masse. These systems were built under existing oversight, and still, privacy advocates have documented overreach: bulk data retention, weak deletion policies, and opaque cooperation with third-country intelligence agencies.
Now imagine those systems on steroids: AI that predicts crime, profiles populations, and automates arrests, deployed before anyone checks whether it works fairly or lawfully. That's the "emergency" lane Ebner wants.
History is unambiguous: Emergency powers expand. They do not contract.
Post-9/11, the U.S. Patriot Act's "temporary" surveillance measures became permanent.
During COVID-19, contact-tracing apps in France and Germany were promised as short-term tools; some still collect data in 2025.
Pegasus spyware, sold only to vetted governments for "counter-terrorism," was later found targeting journalists and dissidents.
Ebner's "emergency" bypass would work like this: Europol identifies a "high-risk" scenario, say, a new deepfake scam or a quantum-powered encryption break. It petitions an internal committee (or perhaps just its own leadership) for a waiver. The AI tool is deployed. Only after the operation, months or years later, does full review occur.
But by then, the precedent is set. The tool is in use. Victims of false positives have already been detained. Data has been retained. And the definition of "emergency" stretches: from imminent terrorist attacks… to organised crypto fraud… to "disinformation campaigns" during elections.
This is mission creep encoded into law.
The mindset is eerily familiar. At the Bank of Finland's monetary conference in October 2025, Christine Lagarde, President of the European Central Bank, openly lamented that democratic processes had delayed the Digital Euro rollout past her term. The "drag of parliamentary scrutiny," as she reportedly called it, has prevented the ECB from delivering "financial sovereignty" on schedule.
Both Ebner and Lagarde operate from the same premise: Institutional goals (security, monetary control) justify suspending democratic friction. Both frame delay as danger. Both position themselves as stewards of a higher necessity, whether it's stopping child exploitation or preventing financial fragmentation.
But the public isn't the enemy of efficiency; it's the purpose of restraint. The GDPR, the AI Act, and parliamentary oversight aren't bugs. They're features. They exist because unchecked power, whether in policing or central banking, tends toward abuse.
The timing is no accident. In 2026, the European Commission, led by Ursula von der Leyen, will propose legislation to double Europol's workforce and transform it into a "central hub" for combating organised crime across physical and digital domains.
This isn't organic growth. It's a deliberate escalation:
More staff = more data processing capacity.
Digital-physical fusion = justification for total-spectrum surveillance.
Central hub status = direct pipelines into national police databases, telecom providers, and tech giants.
Ebner himself oversees governance and data protection at Europol, yet he's the one pushing to weaken those protections. The irony is staggering.
He also calls for deeper public-private partnerships, noting that "artificial intelligence is extremely costly." Translation: Europol wants Big Tech to bankroll its AI arsenal, in exchange for data access, regulatory immunity, or both. We've seen this film before: Palantir, NSO Group, and Clearview AI all started as "law enforcement partners."
Without pre-deployment review, these tools don't just catch criminals; they create suspects. A false positive isn't a glitch; it's a ruined life. And in a world where the GDPR has been bypassed, there's no meaningful recourse.
This isn't a call to cripple law enforcement. It's a demand for smarter, not faster, systems. Solutions exist:
1. Pre-approved AI sandboxes: Let Europol test tools in controlled environments with oversight, not without.
2. Red-team mandates: Require independent hackers and civil society to probe systems before deployment.
3. Sunset clauses: Any "emergency" AI use expires after 90 days unless renewed by public parliamentary vote.
4. Transparency dashboards: Real-time public logs of what data is collected, from whom, and why (a minimal sketch of points 3 and 4 follows this list).
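As a thought experiment, here is a minimal Python sketch of how points 3 and 4 might compose: a public transparency log that refuses to record, and therefore to operate, once an emergency authorization passes its 90-day sunset. Everything here is hypothetical; the function, the field names, and the legal-basis string are illustrative, not a real Europol or EU interface.

```python
from datetime import datetime, timedelta, timezone
import json

EMERGENCY_SUNSET = timedelta(days=90)  # sunset clause, point 3

def log_collection(tool: str, data_category: str, legal_basis: str,
                   authorized_on: datetime) -> str:
    """Append a public dashboard entry (point 4); refuse once the
    emergency authorization has expired (point 3)."""
    now = datetime.now(timezone.utc)
    if now > authorized_on + EMERGENCY_SUNSET:
        raise PermissionError(
            f"{tool}: emergency authorization expired; "
            "renewal requires a public parliamentary vote")
    return json.dumps({
        "timestamp": now.isoformat(),
        "tool": tool,
        "data_category": data_category,  # what is collected
        "legal_basis": legal_basis,      # why it is collected
        "authorized_on": authorized_on.isoformat(),
    })

# Logging succeeds inside the 90-day window; on day 91 the same call
# raises PermissionError, and with it the deployment halts.
entry = log_collection("deepfake-triage", "public social media posts",
                       "Art. 6(1)(e) GDPR (illustrative)",
                       datetime.now(timezone.utc) - timedelta(days=10))
```

The design choice worth noticing: the expiry check lives inside the logging path, so a tool that stops being logged stops being usable. Accountability and operation are coupled by construction rather than by policy memo.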
Criminals are using AI. But the answer isn't to become the mirror image of the enemy. It's to stay the adult in the room.
Ebner wants to save time. He may lose liberty.
Lagarde wants to beat the clock. She may break the social contract.
Europe stands at a fork:
Path A: AI deployed in haste, reviewed in regret.
Path B: AI deployed with rigour, restrained by law.
The choice is not between security and freedom. It is between accountable power and absolute power.
And absolute power, as always, corrupts — absolutely.
Europol's emergency lane isn't a shortcut to safety. It's an on-ramp to surveillance. The question is: Will Europe hit the brakes before it's too late?
https://reclaimthenet.org/europes-ai-surveillance-race-against-the-rules-that-protect-privacy