AI Surveillance of Protest Tweets: A Growing Threat to Privacy and Free Speech

By Brian Simpson

Governments and law enforcement agencies worldwide are increasingly using AI-driven tools to monitor social media activity, particularly posts related to protests on platforms like Twitter (now X). These tools can track hashtags, analyse sentiment, and even identify individuals, often leading to real-world consequences such as arrests or intimidation. This practice raises serious concerns about privacy, free speech, and the right to dissent, as it transforms a simple tweet into a potential target for surveillance. I will explore how this works, why it is happening, and what it means for activists like us (anti-vaxxers and economic reformers) and ordinary citizens alike.

AI surveillance systems are designed to scrape and analyse public social media posts in real time, focusing on activity that might indicate protests or activism. Here's how they typically operate:

Keyword and Hashtag Tracking: AI tools can monitor specific keywords or hashtags associated with protests, such as #BLM, #ClimateStrike, or #ProtestSydney. For example, a 2020 investigation by The Intercept revealed that Dataminr, an AI firm with access to Twitter's firehose (a real-time stream of all public tweets), helped U.S. police monitor Black Lives Matter protests by analysing tweets for protest-related activity.

Geolocation and Sentiment Analysis: Some systems can pinpoint the location of posts and assess their tone. A 2016 report by the ACLU of Northern California exposed how Geofeedia, a social media monitoring tool, allowed law enforcement to track protests by mapping posts with hashtags like #Ferguson, identifying where demonstrations were happening and gauging their mood (e.g., peaceful or threatening).

Individual Identification: Advanced AI can link social media accounts to real-world identities by cross-referencing user data (e.g., usernames, email addresses, or phone numbers) with other databases. The FBI has sought tools to access complete social media profiles of "persons of interest," including IP addresses and user IDs, as noted in a 2019 procurement document reported by privacy advocates.

These capabilities allow authorities to monitor protest organising, track participants, and predict where demonstrations might occur. In some cases, this surveillance leads to pre-emptive action, such as detaining organisers before a protest even begins, or post-event arrests based on social media evidence.
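To make the mechanism concrete, here is a minimal sketch in Python of the kind of keyword and hashtag matching these tools are built on. It is purely illustrative: the watch list, the flag_tweet function, and the example post are assumptions of mine, not any vendor's actual code, and real systems layer geolocation, account linking, and machine-learning sentiment models on top of this.

```python
import re

# Illustrative only: a hypothetical watch list, not taken from any real tool.
PROTEST_TERMS = {"#blm", "#climatestrike", "#protestsydney", "rally", "march", "demonstration"}

def flag_tweet(text: str) -> dict:
    """Return a crude flag record if a post mentions watched terms.

    Real monitoring systems add geolocation, account linking and sentiment
    models on top of this kind of matching; this sketch only does keyword hits.
    """
    tokens = set(re.findall(r"[#\w]+", text.lower()))
    hits = tokens & PROTEST_TERMS
    return {"flagged": bool(hits), "matched_terms": sorted(hits), "text": text}

# Example: the kind of post such a filter would surface for human review.
print(flag_tweet("Join us at Town Hall, 5pm for the rally #ProtestSydney"))
# {'flagged': True, 'matched_terms': ['#protestsydney', 'rally'], 'text': ...}
```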

The use of AI to monitor protests isn't new—it's been documented across various contexts:

Black Lives Matter Protests (2014-2020): The ACLU's 2016 investigation revealed that Geofeedia was used by law enforcement in cities like Oakland and Baltimore to track BLM protests. The tool accessed public posts on Twitter, Facebook, and Instagram, allowing police to monitor hashtags and locations tied to activism. After public backlash, these platforms restricted Geofeedia's access, but similar tools have since emerged.

University Protests in the U.S.: A 2022 investigation by The Dallas Morning News and the Pulitzer Center found that Social Sentinel, an AI tool used by colleges, was deployed to monitor student protests. At UNC-Chapel Hill, campus police used the tool to track demonstrations over a Confederate statue by searching for protest-related keywords and hashtags.

UK's AI-Driven Monitoring (2025): On February 24, 2025, privacy advocates reported that the UK government expanded its AI surveillance with a platform to track social media posts, including political opinions. While not protest-specific, this system could easily be adapted to monitor activism, highlighting the growing scope of such technologies.

AI surveillance of protest-related tweets can have tangible impacts:

Pre-emptive Policing: AI tools can identify individuals planning protests, allowing authorities to intervene early. For instance, a tweet like "Meet at the square at 5 PM for the #ClimateStrike" could be flagged, leading police to monitor or detain the organiser before the event.

Post-Protest Arrests: After a demonstration, law enforcement can use AI to sift through social media for evidence of alleged crimes. A tweet with a photo of someone at a protest, tagged with a location, might be used to identify and arrest them on charges like vandalism or "inciting violence", even if the evidence is weak (the sketch after this list shows how little data that takes).

Chilling Free Expression: The knowledge that tweets are being monitored can deter people from organising or speaking out online. This creates a chilling effect, particularly for activists who rely on social media to amplify their message and coordinate action.
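The post-protest identification scenario above comes down to very simple arithmetic. The sketch below is a toy geofence check, with an invented protest location, radius, and time window; it shows how a single geotagged post can place someone at a demonstration. It is an illustration of the principle, not any agency's method.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical protest: near Sydney Town Hall, 2 km radius, two-hour window.
PROTEST = {"lat": -33.8732, "lon": 151.2069, "radius_km": 2.0,
           "start": datetime(2025, 3, 1, 16, 0), "end": datetime(2025, 3, 1, 18, 0)}

def places_user_at_protest(post_lat, post_lon, post_time):
    """True if a geotagged post falls inside the protest's area and time window."""
    close = haversine_km(post_lat, post_lon, PROTEST["lat"], PROTEST["lon"]) <= PROTEST["radius_km"]
    during = PROTEST["start"] <= post_time <= PROTEST["end"]
    return close and during

print(places_user_at_protest(-33.8715, 151.2006, datetime(2025, 3, 1, 17, 5)))  # True
```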

Authorities justify AI surveillance as a public safety measure, arguing it helps prevent violence, terrorism, or other threats. The FBI, for example, has described its social media monitoring tools as necessary to address "multifaceted threats," according to a 2019 procurement document. Similarly, tools like Social Sentinel are marketed to schools as a way to address mental health crises or potential violence, though they're often repurposed to monitor activism.

However, critics argue that this justification masks a broader agenda of control. Surveillance often disproportionately targets Right-wing groups and Christian conservatives protesting abortion rather than crazed Leftists. Moreover, the definition of a "threat" is often vague, potentially encompassing anyone who speaks out online.

Social media platforms and AI companies are complicit in this trend. Twitter, Facebook, and Instagram have historically provided data access to surveillance tools like Geofeedia, though they've since limited such access after public outcry. However, the relationship between tech companies and law enforcement persists. Dataminr, for instance, used Twitter's firehose to aid police during BLM protests, despite Twitter's public support for the movement.

AI companies are also pushing the boundaries. On February 14, 2025, Oracle's Larry Ellison advocated for a centralised AI-driven database to manage national data, including real-time population surveillance. Ellison's vision, which involves spending billions on AI data centres, could easily be adapted to monitor protest activity on a massive scale, further eroding privacy.

The use of AI to monitor protest tweets has far-reaching consequences:

Privacy Erosion: Even public tweets can reveal sensitive information when analysed en masse. AI tools can infer a user's location, social network, and political beliefs, creating a detailed profile that can be used against them.

Free Speech Suppression: Knowing that your tweets could lead to surveillance or arrest can chill free expression, especially for activists who depend on social media to organise and advocate.

Disproportionate Impact: Surveillance often targets specific communities (Right-wing groups, anti-vaxxers, and Christian conservatives protesting abortion) rather than applying equally across all forms of dissent. This selective targeting undermines fairness and equality.

Lack of Accountability: Many AI surveillance tools operate in a legal grey area. While the FBI claims its tools comply with "privacy and civil liberties requirements," there's little transparency about how data is collected, stored, or used. The public often only learns about these practices through leaks or investigations.

The narrative that AI surveillance is necessary for public safety deserves scrutiny. While preventing violence is a legitimate goal, the scope of these tools often exceeds what's necessary. Monitoring entire protest movements, rather than specific threats, suggests an agenda of control and suppression. Governments have a history of using "safety" as a pretext to silence dissent—think of the PATRIOT Act post-9/11, which expanded surveillance powers far beyond counterterrorism.

Moreover, the effectiveness of AI surveillance is questionable. False positives are common in AI systems, especially when analysing nuanced human language. A tweet saying "I'm fired up for the #antiAbortionMarch" could be flagged as a threat, leading to unnecessary police action. This overreach wastes resources and risks alienating communities, as seen with the backlash to Geofeedia and Social Sentinel.
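The false-positive problem is easy to reproduce. The toy filter below uses a naive substring match over an invented word list and happily flags figurative language like "fired up"; real systems are more sophisticated, but the same failure mode persists whenever context is ignored.

```python
# Illustrative only: a naive substring-based "threat" filter of the kind that
# produces false positives. The word list and example tweets are invented.
THREAT_WORDS = ["fire", "shoot", "bomb", "riot"]

def naive_threat_score(text: str) -> list[str]:
    """Return the threat words matched by a crude substring search."""
    lowered = text.lower()
    return [w for w in THREAT_WORDS if w in lowered]

tweets = [
    "I'm fired up for the #antiAbortionMarch",       # figurative speech
    "The photo shoot is at noon before the rally",   # benign use of 'shoot'
]
for t in tweets:
    print(naive_threat_score(t), "-", t)
# ['fire'] - I'm fired up for the #antiAbortionMarch
# ['shoot'] - The photo shoot is at noon before the rally
```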

Here are some steps we can take to deal with AI surveillance of our social media activity:

Protect Your Privacy: Use a pseudonym on social media, avoid sharing location data, and be cautious about posting identifiable information during protests. Tools like VPNs or privacy-focused browsers can also help, and photos can have their location metadata stripped before posting (see the sketch after this list).

Advocate for Transparency: Push for laws requiring transparency and accountability in how AI surveillance tools are used. The EU's Digital Services Act, for example, aims to regulate online platforms, though it's not without its own censorship concerns.

Support Digital Rights Groups: Organisations like the ACLU and the Electronic Frontier Foundation (EFF) in the US are fighting against surveillance and censorship. Supporting their work can help amplify the pushback.

Raise Awareness: Share information about these practices with your network. Public pressure has forced companies like Twitter to limit data access to surveillance tools in the past.
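On the practical side, one concrete habit is stripping location metadata (EXIF GPS tags) from photos before posting them. The sketch below uses the Pillow imaging library and placeholder filenames; it simply re-saves the pixel data, which drops the metadata in the process.

```python
# Minimal sketch: re-save an image without its metadata (EXIF, including GPS)
# before posting. Requires the Pillow library (pip install pillow); the
# filenames are placeholders.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy only the pixel data, dropping EXIF/GPS tags in the process."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_metadata("protest_photo.jpg", "protest_photo_clean.jpg")
```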

AI surveillance of protest-related tweets is a growing threat to privacy and free speech. Governments and law enforcement are increasingly using these tools to monitor social media activity, often targeting activists and protesters under the guise of public safety. This practice can lead to pre-emptive policing, post-protest arrests, and a chilling effect on free expression, disproportionately impacting marginalised groups. While the technology has some legitimate uses, its overreach and lack of accountability demand scepticism and resistance. If you're active on platforms like X, especially around protests, it's worth being mindful of how your posts could be monitored—and taking steps to protect yourself. Anti-vax protesters learnt this the hard way!

https://reclaimthenet.org/from-hashtag-to-handcuffs-the-ai-surveillance-machine-monitoring-protest-tweets

 
