
On the Seductions of Technology and the Racialised Harms of the Criminal Legal System of England and Wales

Written by Dr Patrick Williams. The views expressed in this essay are the author’s own and not necessarily those of Amnesty International UK.
On 15 September 2017, 11 young people from the central area of Greater Manchester, UK, were sentenced for the murder or manslaughter of 18-year-old Abdul Hafidah, in what was described by Greater Manchester Police (GMP) as a ‘gang-related’ murder. The prosecution’s narrative was that the young people had conspired to attack and kill Abdul – who was also described by the police and prosecution as a ‘gang member’ – after he ventured into ‘their territory’. Under the highly controversial legal doctrine of joint enterprise, the young Black and mixed-race defendants, the youngest of whom was 14 years of age, were collectively sentenced to 168 years in prison; only one person was acquitted. The majority had no previous convictions, no antecedents of violent behaviour and no prior relationship with the victim. Crucially, most of the children and young people convicted were not ‘at the scene’ of, or in proximity to, the violence that caused Hafidah’s death. Some of the sentenced boys did not know one another, and all denied being gang members or having any association with ‘gangs’.
Across England and Wales, the increasing use of collective forms of punishment such as joint enterprise and conspiracy charges disproportionately affects and harms Black and racially minoritised people. In Manchester, between 2009 and 2016, 54 young people were collectively sentenced to a combined 749 years in prison for seven violent offences. More recently, as documented by the Manchester-based youth organisation Kids of Colour, 29 young people across six court cases since 2021 have been collectively sentenced to a combined 450 years in prison.
It is this reality which prompted the journalist Harry Stopes to ask, in 2017, ‘How do 11 people go to jail for one murder?’ The question remains pertinent today, demanding an answer to the following: what ‘intelligence’ do the police rely upon to co-locate and connect non-criminal people to offences that they did not commit? In response, this essay considers the encroachment of technology into policing and law enforcement practices, arguing that, on top of the historical and contemporary realities of racialised over-policing, criminalisation and disproportionate rates of punishment and imprisonment, Black and racially marginalised communities are now also contending with (and resisting) technologies that compound injustice as they intersect with the criminal legal system.
While the adoption of assistive technologies into policing, law enforcement and wider criminal legal system practices is not new, the central point made below concerns data-driven and predictive policing capabilities as a dangerous development: one that adds to an ever-expanding ‘hostile environment’ designed to pre-emptively harm those who have always been viewed with suspicion and as crime-prone – those who have been failed by the state and mediated as undeserving of care and support, and those who are presented by the police and wider criminal legal system (CLS) as unassailable risks to be managed and contained in order to ‘protect the public’.
From this position, the racial, social, cultural and economic composition of non-white communities drives inscriptions of risk, threat and disorder – with such communities viewed as criminogenic and therefore deserving of intensive forms of surveillance, regulation and control. To this end, the seductive and illusory logic of predictive policing relies upon a delivery model of criminal justice that unashamedly removes the guardrails of due process, forsakes the presumption of innocence, and supplants the rule of law with what Neocleous has defined as the ‘rule of police’.
‘Incessant chatter’: the seductions and (false) promises of AI and technology
“We choose fully to embrace the opportunity that AI presents to build a better future for all our citizens”
The UK government has publicly declared ‘the arrival of AI’, posturing a commitment to back artificial intelligence across the UK with significant investment and opportunistically proclaiming a vision in which the UK is a welcoming, ‘pro-innovation’ space for the development and growth of AI to drive economic efficiency and effectiveness across the public sector. AI and data-driven technologies are uncritically valorised as the long-sought solution to a plethora of stubborn political, social and economic problems. And yet, amid the noise and the chatter, there is a remarkable silence about the (data, digital and legal) harms that are inevitably tied to the ‘AI revolution’.
Announcing a Sentencing Review in response to the crisis of overcrowding in prisons across England and Wales, the UK government chillingly states that ‘the [review] will examine the tough alternatives to custody, such as using technology to place criminals in a prison outside prison’. Indeed, within the same proposal document, the Lord Chancellor and Justice Secretary Shabana Mahmood declares: “I believe in punishment. I believe in prison, but I also believe that we must increase the range of punishments we use.” Here we can detect tech-speak as a political strategy that conceals, and proposes remedies for, the perennial failures of the criminal legal system – from the overcrowding of prisons, caused by an insatiable desire for ever more punitive and longer custodial sentences, to the preoccupation of youth justice, probation and prison practitioners with assessments of ‘offender risk’ rather than the needs of people who encounter the CLS. It is such seemingly intractable criminal justice problems that have heralded the expansion of the ‘tech architecture’ thoroughly documented by Amnesty International (2025): facial recognition and automatic number plate recognition (ANPR) cameras, mobile biometric fingerprint scanners, and the algorithmic determination of risk status ascribed to people in prison, which follows them ‘through the gate’. It is within this context that the seductive claims of tech-solutionism respond to the following questions.
The seductions of technology
- What do we know? This represents the fusion (merging) of previously separate and distinct government and public-sector datasets, with the result that state agencies, government departments, private entities and corporations, and non-governmental agencies now have access to an incredible volume of personal, familial, health, education and employment data (for instance, see Administrative Data Research UK). Buoyed by tech vendors, academic institutions and government researchers, there is a fast-growing trend toward combining databases for population oversight and control, and to assist in future planning and service provision.
- Who is this? This offers the promise of harnessing powerful algorithmic technology to quickly confirm the identity of a person (e.g. facial recognition technology and (mobile) biometric fingerprint scanners), including the interoperability of such tools with available public and private reservoirs of data. This seduction encourages the unregulated trampling of our data-privacy rights as a route into the virtual and social-media worlds of unsuspecting members of the public through ‘fishing’ expeditions – highly speculative trawls of social media sites and data by the police and other law enforcement agencies to inform ‘intelligence’ (see Operation Alpha and the trawling of thousands of rap/drill music videos online).
- Who knows who? This promises the capability to build and affirm (data and digital) associations and relationships across groups of previously unconnected individuals. Building methodologically on social network analysis, AI technologies facilitate the interrogation of the data gathered by public and private institutions and organisations. Such analytics are drawn upon by police and law enforcement agencies to establish ‘associations’ between and across police-determined nominals or ‘suspects’ (critical for the legal regulation of lawful protestors), enhanced by the capability to ‘scrape’ online social media platforms such as Facebook, WhatsApp and Telegram (a minimal sketch of this kind of association-building follows this list).
- What will happen? The ultimate seduction of tech is to predict what will happen in the future. Through the manipulation and interrogation of historical and place-based data, such technologies may calculate where a crime may take place and/or who is most likely to commit an offence (‘risk of offending’ scores). Predictive policing facilitates the development of pre-emptive strategies and approaches, targeted and trained upon individuals and/or ‘hot spot’ areas and communities assessed as presenting ‘high risk’ or ‘high harm’, with the aim of averting crime before it occurs.
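To make the ‘Who knows who?’ seduction concrete, the sketch below illustrates, in deliberately simple terms, the kind of association-building that social network analytics enable. Everything in it is invented – the names, the records, the hop count – and real police systems are proprietary and far more elaborate. The point is only how thin the input can be: appearing together in a photograph, a music video or a group chat is enough to create an edge.

```python
# Illustrative sketch only: how co-appearance records become 'associations'.
# All names and records are invented.
from collections import defaultdict
from itertools import combinations

# Each record is a set of people appearing together in one artefact:
# a photograph, a music video, a group chat. None of this is criminal.
co_appearances = [
    {"A", "B", "C"},   # e.g. posing together for a photograph
    {"B", "D"},        # e.g. appearing in the same music video
    {"D", "E"},        # e.g. members of the same group chat
]

# Any co-appearance creates an edge in an undirected 'association' graph.
graph = defaultdict(set)
for record in co_appearances:
    for a, b in combinations(sorted(record), 2):
        graph[a].add(b)
        graph[b].add(a)

def associates(person, hops):
    """Everyone reachable within `hops` steps: how a 'network' is inferred."""
    frontier, seen = {person}, {person}
    for _ in range(hops):
        frontier = {n for p in frontier for n in graph[p]} - seen
        seen |= frontier
    return seen - {person}

# 'A' and 'E' have never met, yet a three-hop traversal links them anyway.
print(associates("A", hops=3))   # {'B', 'C', 'D', 'E'}
```

Nothing in the input records any wrongdoing, yet the output reads like a ‘network’ – and, crucially, it links people who have never met.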
From this position, it is hardly surprising that state institutions, government organisations and private corporations are eager to harness the power of technology with the capability to algorithmically identify and digitally trans-carcerate (‘prison outside prison’) those who have always been mediated as posing a threat to the imagined [white] public.
Yet the veneer of technology as neutral, free of bias, progressive and scientifically reliable for the purposes of crime identification, prediction and resolution is false and in need of urgent critical intervention. In the same way that the discretionary powers of the police render policing discriminatory, so too will predictive policing hardwire discrimination that pre-emptively targets the ‘usual suspects’. Similarly, tech cannot be debiased, because it is trained using biased data. We cannot build non-discriminating, pre-emptive models of policing because the tech is procured by the state to ‘smash the gangs’, to ‘stop the boats’, to detect (benefit) fraud (see the Post Office scandal), and to monitor, track and ‘neutralise’ the racially, socially and economically marginalised in an endless pursuit to ‘end gangs and youth violence’ (EGYV). Narratives of harm such as these are integral to the expansion of tech and are intended to legitimise the monitoring, surveillance and policing of marginalised communities.
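The claim that tech cannot be debiased can be shown in miniature. The sketch below is entirely hypothetical – two areas are assumed to have identical underlying offending, and every figure is invented – but it captures the feedback loop at the heart of predictive deployment: patrols are allocated in proportion to historical records, and records are generated wherever patrols are sent.

```python
# Hypothetical feedback-loop sketch: identical underlying offending,
# different policing histories. All figures are invented.
import random

random.seed(0)
TRUE_OFFENDING_RATE = 0.05                 # the same in both areas, by assumption

# Historical recorded arrests reflect past patrol intensity, not crime.
recorded = {"Area X": 120, "Area Y": 30}   # Area X was historically over-policed

for year in range(1, 6):
    total = sum(recorded.values())
    # 'Predictive' allocation: patrols proportional to past records.
    patrols = {area: round(100 * n / total) for area, n in recorded.items()}
    # New records scale with patrols present, not with offending,
    # because police can only record what they are there to see.
    for area, p in patrols.items():
        recorded[area] += sum(random.random() < TRUE_OFFENDING_RATE
                              for _ in range(p * 10))
    print(year, patrols)
# The patrol share barely moves from the inherited 80/20 split:
# the model 'confirms' the bias it was trained on.
```

Because recorded arrests measure where police have been rather than where crime is, no tuning of the algorithm removes a bias that lives in the data itself.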
So, how do 11 people go to prison for one murder? The (re)production of guilt in ‘suspect communities’
“They think they know me. They don’t know me. That’s what these police officers go off. They think they know you because they see things on paper and they think they can make a judgement. It’s like, ‘No, you can’t search me. I am not going to bow down to you because you found out I have been in trouble with the police. I haven’t got any drugs’”
[Paul, cited in StopWatch 2018]
“The thing [that] pisses me off is that they have the power to do stuff, extra stuff, and their power derives from intelligence. You can ask them, ‘What’s the intelligence?’ They’ll say they’re not allowed to tell you. ‘We’re not allowed to tell you.’ Now, your intelligence is not a proven piece of information. Intelligence that you might have got from a grass, you might have got from someone that just dislikes other people, they’re just chatting shit. You could have got it from anywhere. It’s not proven in court. So, why is it then allowing you the powers to come to oppress me with, you know what I mean? You’re oppressing me with power that you shouldn’t even have”
[Garry, cited in StopWatch 2018]
Chilokoa-Mullen contests the notion of guilt and innocence as legal categories, arguing that such constructs are arbitrary, socially produced categories defined by the police, politicians and other law enforcement practitioners. Given the cultural and racialised determinants of guilt and innocence across the UK, the reproduction of guilt extends violently beyond the individual, transmuted by the police onto whole marginalised communities. For Paul and Garry, cited above, it is not guilt that drives their experiences of being policed, but their presence within areas and communities labelled and defined as ‘gang-affected’. To build on this: of the 32 boroughs of Greater London, 18 have a dedicated ‘gangs’ unit, of which 16 were allocated grants of approximately £3.9M through the initial EGYV strategy of 2012/13. Of these, nine boroughs were also constructed as ‘at risk’ of radical extremism, to which £8.3M was allocated. All of these boroughs have dedicated police officers in schools, significantly increasing the surveillance and ‘intelligence-gathering’ capacity trained upon school children – who in turn become objects to be policed.
What connects these communities is not the incidence of crime, but the size of the non-white population, alongside comparatively higher levels of income deprivation and poverty. As noted by Kundnani, it was the size of the Muslim population, rather than extremism (however defined), that determined the areas eligible for PREVENT funding and attention. In 1982, Gordon likened such communities to local police states, in which the rule of police dominates and community members are policed with impunity – areas sacrificed to extreme exposure to unregulated and unfettered police surveillance and monitoring under the rubric of public protection and crime control. Of concern, the discriminatory effects of such policing are defended by the state as ‘objectively justified’, resting on the claim that racially minoritised people offend at a higher rate than their white counterparts – a claim that is simply untrue.
In ‘Dangerous Associations: Joint Enterprise, Gangs and Racism’, an often overlooked but significant finding is that white people convicted of joint enterprise (JE) violent offences who were ‘not at the scene’ of the offence were comparatively less likely to have material presented to the court as evidence of gang membership. Conversely, for the Black cohort, prosecution strategies relied upon tech solutions – police ‘intelligence’, CCTV footage, cell-site experts and facial mapping technologies – to infer guilt by digitally co-locating children and young people who were ‘not present’ at the scene of the offence. This finding points toward the embeddedness of tech infrastructures and ecosystems within marginalised communities as a ready-made repository (mis)used to confer and (re)produce guilt from the non-criminal behaviour of community members.
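To see how coarse digital ‘co-location’ can be, consider the deliberately simplified sketch below. It is not the method used by cell-site experts – their analyses are more elaborate, and every detail here (the mast, the timestamps, the 30-minute window) is invented – but it captures the underlying inference: phones that connect to the same mast within a time window are flagged as ‘together’, even though a single urban mast may serve hundreds of unrelated people at once.

```python
# Simplified, hypothetical sketch of cell-site 'co-location'.
# A real mast in a dense urban area serves many unrelated people at once.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)   # invented threshold

# (person, mast id, time the phone connected) - all records are invented
pings = [
    ("P1", "mast_41", datetime(2017, 5, 12, 18, 5)),
    ("P2", "mast_41", datetime(2017, 5, 12, 18, 20)),  # passing on a bus?
    ("P3", "mast_41", datetime(2017, 5, 12, 23, 50)),
]

def co_located(a, b):
    """Flag two people as 'together' if any pings share a mast in-window."""
    return any(m1 == m2 and abs(t1 - t2) <= WINDOW
               for p, m1, t1 in pings if p == a
               for q, m2, t2 in pings if q == b)

print(co_located("P1", "P2"))   # True - yet they may never have met
print(co_located("P1", "P3"))   # False - same mast, outside the window
```

On logic this thin, being near the wrong mast at the wrong time becomes ‘proximity to the scene’.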
Within racially, socially and economically marginalised communities, the non-criminal behaviours of racialised children and young people become framed through a presumption of criminality. This is a remarkable feat – appearing in a music video, the genre of music you listen to, or posing for a photograph becomes admissible as evidence of a propensity toward violence and criminality. Within such communities, family artefacts evidencing relationships and friendships are interwoven into police social network analyses of supposed crime networks and enterprises. Within sacrifice zones, the clothes you wear and your exposure to adverse conditions and (financial and economic) vulnerability are repurposed by the police as ‘risks to be managed’ through monitoring, curfews and exclusion conditions. Indeed, the politically sanctioned rule of police as a means toward order maintenance empowers the police to bracket out the ‘rule of law’ and substitute a regime of managerial direction and police organisational priorities.
This better explains how Child Q was physically violated by the police while at school, and how a young Black British boy could be placed in an immigration detention unit because he was not identified by a biometric fingerprint scanner. It was the notification from an ANPR camera, together with police intelligence, that precipitated the police use of lethal force against an unidentified Black man driving a car in London. And finally, it was the rule of police that drove the collective punishment of the ‘Manchester 10’, another group of children and young people, collectively sentenced to 131 years in prison for a violent offence that did not happen – the evidence against them being their expressions of grief and loss in a Telegram chat.
The conclusion… predictable policing
The social production of guilt within ‘suspect’ communities results in the criminalisation of non-criminal behaviours among racialised children and young people. The encroachment of technology compounds racial injustice by (digitally) increasing the pervasiveness of police encounters, arrests, charges, convictions and punishment for offences they did not commit.
Sadly, ten years after the publication of Dangerous Associations, Black and racially minoritised people remain more likely to have their everyday, non-criminal lives technologically biographed by a hostile tech-infrastructure that assumes criminality and compounds the pre-existing harms of the criminal legal system apparatus. The seductions of technology, amid the promises of pre-emptive and predictive policing, undermine the rights of racially, socially and economically marginalised people under the guise of managing risks and protecting the public. One need look no further than the most recent ‘statistics on ethnicity in the criminal justice system of England and Wales’, where Black children (those aged under 18) make up 5% of the population of England and Wales yet comprise 30% of those languishing in prisons. Again, differential crime rates cannot account for this ‘disparity’. Rather, it is exposure to an expansive, tech-enabled criminal legal system, trained toward the maximisation of convictions and the parsimonious goal of improved policing (in)effectiveness, that produces the predictable outcomes – the surveillance, monitoring, regulation and harmful control of marginalised groups and communities across the criminal legal system of England and Wales.