The UK government's development of a "murder prediction" programme, initially dubbed the "homicide prediction project" and later rebranded as "sharing data to improve risk assessment," represents a bold but deeply troubling attempt to use artificial intelligence to identify individuals most likely to commit serious violent crimes, including murder. By analysing personal data from police, probation services, and other official sources, the Ministry of Justice aims to enhance public safety through what it describes as innovative risk assessment techniques. The programme, uncovered through Freedom of Information requests by the civil liberties group Statewatch, has sparked fierce criticism for its potential to erode privacy, entrench discrimination, and undermine fundamental democratic principles. While the government insists the initiative is a research project focused on convicted offenders, the dangers it poses—ranging from biased algorithms to the chilling prospect of pre-crime profiling—demand scrutiny. This blog piece explores why the programme's risks outweigh its purported benefits, arguing that its reliance on flawed data, lack of transparency, and ethical shortcomings threaten both individual rights and societal trust.

At the heart of the programme lies a profound risk of algorithmic bias, a problem that has plagued predictive policing efforts worldwide. The system draws on data from institutions like the police and the Home Office, which critics, including Statewatch researcher Sofia Lyall, argue are steeped in "institutional racism" and socioeconomic disparities. Historical patterns of over-policing in racialised and low-income communities mean these groups are overrepresented in criminal justice records, skewing algorithms toward falsely identifying them as potential threats. This perpetuates a cycle of discrimination, where marginalised individuals face heightened scrutiny not for their actions but for their demographic profiles. The programme's inclusion of sensitive data—such as mental health records, addiction histories, and self-harm incidents—further exacerbates this issue. By assuming these "health markers" have significant predictive power, the system risks stigmatising vulnerable people, conflating social stressors with criminal intent. Without transparent mechanisms to audit and correct biases, the programme could codify structural inequalities, producing outcomes that are neither fair nor accurate.
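
To make this feedback loop concrete, here is a minimal, purely illustrative sketch in Python. The numbers are invented for the example and have no connection to the MoJ project; it simply shows how, when one group is policed more intensively than another, official records overstate that group's relative risk, and any model trained on those records inherits the distortion.

```python
# Illustrative only: invented numbers, not data from the MoJ project.
# Two groups are assumed to have the SAME underlying rate of serious
# offending, but group B is policed twice as intensively, so twice as
# many of its incidents end up in official records.

true_offending_rate = 0.001             # assumed identical for both groups
detection_rate = {"A": 0.3, "B": 0.6}   # group B is over-policed

population = 100_000
for group in ("A", "B"):
    recorded = population * true_offending_rate * detection_rate[group]
    recorded_rate = recorded / population
    print(f"Group {group}: recorded 'risk' rate = {recorded_rate:.4%}")

# Group B appears twice as "risky" as group A, even though the underlying
# behaviour is identical. A model trained on these records learns group
# membership (or proxies for it, such as postcode) as a predictor, and
# policing guided by that model then widens the recorded gap further.
```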

Privacy violations represent another grave concern. The programme processes highly personal information, including names, ethnicity, and health details, often without the knowledge or consent of those involved. Statewatch alleges that data from individuals without convictions—such as victims of domestic abuse or those who sought police assistance—is included, based on a data-sharing agreement with Greater Manchester Police. Although the Ministry of Justice denies this, claiming only convicted offenders' data is used, the mere possibility of non-offenders being profiled is alarming. Victims reporting crimes or individuals seeking help for mental health issues could find themselves flagged as risks, eroding trust in public institutions. Even for those with convictions, the breadth of data collected raises questions about proportionality. Processing records of self-harm or disability as predictors of violence is not only intrusive but also speculative, assuming correlations that may lack empirical grounding. The programme's secretive development, revealed only through external pressure, compounds these concerns, suggesting a lack of accountability that undermines public confidence.

The ethical implications of predicting murder before it occurs evoke dystopian fears of pre-crime surveillance, reminiscent of science fiction narratives like Minority Report. By labelling individuals as potential killers based on probabilistic models, the programme challenges the presumption of innocence, a cornerstone of democratic justice systems. Such profiling could lead to pre-emptive interventions—ranging from increased monitoring to potential detention—that punish people for crimes they have not committed. Beyond legal concerns, the knowledge of such a system could create a chilling effect, deterring individuals from seeking help for mental health issues, domestic abuse, or other vulnerabilities out of fear that their data might mark them as suspects. While the government emphasises that the project is currently for research, references to "future operationalisation" in documents suggest a possible expansion into real-world use. Without stringent safeguards, this could pave the way for mass surveillance, where entire communities are subjected to algorithmic scrutiny based on flawed assumptions.

The technical limitations of predictive models further undermine the programme's credibility. Decades of research on predictive policing have shown that forecasting complex human behaviours, especially rare and extreme acts like murder, is fraught with error. Algorithms often produce false positives, misidentifying individuals as threats, and false negatives, missing actual risks. These inaccuracies stem from reliance on historical data, which reflects enforcement biases rather than true crime patterns. For example, police records emphasise areas with heavy patrols, not necessarily where crime is highest, skewing predictions toward over-policed neighbourhoods. The programme's use of health data adds another layer of unreliability, as issues like addiction or self-harm may correlate with social deprivation rather than violent tendencies. Misinterpreting these factors could lead to unjust outcomes, where individuals face scrutiny or stigma based on erroneous risk scores. The stakes are high: a single false positive could ruin lives, while a false negative could fail to prevent harm, negating the programme's stated goal of protecting the public.
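
The rarity of homicide makes this problem acute. The sketch below works through the base-rate arithmetic with invented figures (the accuracy rates and population size are assumptions for illustration, not parameters of the MoJ system): even a hypothetical model far more accurate than anything predictive policing has demonstrated would flag overwhelmingly innocent people.

```python
# Illustrative only: invented numbers to show the base-rate problem,
# not figures from the MoJ project or any real risk-assessment tool.

base_rate = 1 / 100_000   # assumed annual rate of homicide offending in the screened group
sensitivity = 0.99        # assumed: model correctly flags 99% of future offenders
specificity = 0.99        # assumed: model correctly clears 99% of non-offenders

population = 1_000_000
offenders = population * base_rate                    # 10 people
non_offenders = population - offenders

true_positives = offenders * sensitivity              # ~10 correct flags
false_positives = non_offenders * (1 - specificity)   # ~10,000 wrongful flags

precision = true_positives / (true_positives + false_positives)
print(f"People flagged: {true_positives + false_positives:,.0f}")
print(f"Chance a flagged person is a genuine future offender: {precision:.2%}")

# Even with this implausibly accurate model, roughly 99.9% of flagged
# individuals would be false positives, because the predicted event is so rare.
```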

Public trust, already strained by years of contentious policing practices, stands to suffer further. The programme's covert origins—commissioned under former Prime Minister Rishi Sunak and rebranded to soften its image—suggest an attempt to obscure its implications. Renaming it from "homicide prediction" to "sharing data to improve risk assessment" does little to address the underlying concerns when the system's mechanics remain opaque. Communities, particularly those disproportionately targeted by law enforcement, may feel alienated by a tool that appears to codify their marginalisation. This risks deepening divisions at a time when social cohesion is vital. Moreover, the programme's techno-solutionist approach diverts resources from proven interventions—like addressing poverty, mental health, and domestic abuse—that tackle the root causes of violence. Critics argue that investing in welfare and community support would do more to prevent crime than speculative algorithms, which offer false promises of precision while ignoring systemic issues.

Finally, the programme raises profound legal and ethical questions that challenge its legitimacy. Profiling based on ethnicity, health, or socioeconomic status could violate anti-discrimination laws and human rights principles, particularly if non-offenders are included. The European Convention on Human Rights emphasises privacy and fairness, both of which are jeopardised by a system that processes sensitive data without clear justification. Unintended consequences loom large: false positives could lead to wrongful scrutiny, while false negatives could undermine public safety. The absence of robust oversight—such as independent audits or public consultation—heightens these risks, allowing the programme to operate in a governance vacuum. Even as a research project, its potential to set precedents for broader predictive policing demands caution. The government's assurances that it aims only to improve existing risk assessments ring hollow when the system's design invites abuse and error.

In conclusion, the UK's murder prediction programme, despite its aim to enhance public safety, is a perilous step toward a surveillance state that sacrifices fairness for algorithmic efficiency. Its reliance on biased data, erosion of privacy, and speculative profiling threaten individual rights and societal trust. By prioritising technology over human-centred solutions, the government risks exacerbating the very harms it seeks to prevent. The programme's secretive development and lack of accountability only deepen these concerns, suggesting a disconnect between its intentions and its impact. Rather than forging ahead with flawed predictions, policymakers should invest in addressing the social conditions that drive violence, ensuring justice remains grounded in fairness, transparency, and respect for all.

https://www.theguardian.com/uk-news/2025/apr/08/uk-creating-prediction-tool-to-identify-people-most-likely-to-kill

"The UK government is developing a "murder prediction" programme which it hopes can use personal data of those known to the authorities to identify the people most likely to become killers.

Researchers are alleged to be using algorithms to analyse the information of thousands of people, including victims of crime, as they try to identify those at greatest risk of committing serious violent offences.

The scheme was originally called the "homicide prediction project", but its name has been changed to "sharing data to improve risk assessment". The Ministry of Justice hopes the project will help boost public safety but campaigners have called it "chilling and dystopian".

The existence of the project was discovered by the pressure group Statewatch, and some of its workings uncovered through documents obtained by Freedom of Information requests.

Statewatch says data from people not convicted of any criminal offence will be used as part of the project, including personal information about self-harm and details relating to domestic abuse. Officials strongly deny this, insisting only data about people with at least one criminal conviction has been used.

The government says the project is at this stage for research only, but campaigners claim the data used would build bias into the predictions against minority-ethnic and poor people.

The MoJ says the scheme will "review offender characteristics that increase the risk of committing homicide" and "explore alternative and innovative data science techniques to risk assessment of homicide".

The project would "provide evidence towards improving risk assessment of serious crime, and ultimately contribute to protecting the public via better analysis", a spokesperson added.

The project, which was commissioned by the prime minister's office when Rishi Sunak was in power, is using data about crime from various official sources including the Probation Service and data from Greater Manchester police before 2015.

The types of information processed includes names, dates of birth, gender and ethnicity, and a number that identifies people on the police national computer.

Statewatch's claim that data from innocent people and those who have gone to the police for help will be used is based on a part of the data-sharing agreement between the MoJ and GMP.

A section marked: "type of personal data to be shared" by police with the government includes various types of criminal convictions, but also listed is the age a person first appeared as a victim, including for domestic violence, and the age a person was when they first had contact with police.

Also to be shared – and listed under "special categories of personal data" - are "health markers which are expected to have significant predictive power", such as data relating to mental health, addiction, suicide and vulnerability, and self-harm, as well as disability.

Sofia Lyall, a researcher for Statewatch, said: "The Ministry of Justice's attempt to build this murder prediction system is the latest chilling and dystopian example of the government's intent to develop so-called crime 'prediction' systems.

"Time and again, research shows that algorithmic systems for 'predicting' crime are inherently flawed.

"This latest model, which uses data from our institutionally racist police and Home Office, will reinforce and magnify the structural discrimination underpinning the criminal legal system.

"Like other systems of its kind, it will code in bias towards racialised and low-income communities. Building an automated tool to profile people as violent criminals is deeply wrong, and using such sensitive data on mental health, addiction and disability is highly intrusive and alarming."

A Ministry of Justice spokesperson said: "This project is being conducted for research purposes only. It has been designed using existing data held by HM Prison and Probation Service and police forces on convicted offenders to help us better understand the risk of people on probation going on to commit serious violence. A report will be published in due course."

Officials say the prison and probation service already use risk assessment tools, and this project will see if adding in new data sources, from police and custody data, would improve risk assessment."