The Dark Side of Facial Recognition Technology

Image Credit: Jan Canty

The introduction of Facial Recognition Technology (FRT) in policing in Ireland has sparked a debate on its accuracy, bias, and potential privacy infringements.

Every day, Artificial Intelligence (AI) makes headlines with its incredible advancements and practical applications worldwide. Klarna, a Swedish fintech company, plans to replace 700 jobs with AI chatbots, while OpenAI's Sora model showcases mind-blowingly realistic AI-generated video from text prompts. AI has taken over the news and our daily lives. It has even become indispensable for students, who rely on ChatGPT for assignments, meal plans, and countless other tasks, resulting in over a billion visits to the site in December 2023. Now, AI sets its sights on yet another domain: policing. However, concerns about accuracy and bias raise the question: is this a step too far?

In May of 2022, the Minister for Justice, Helen McEntee, suggested an amendment to the Garda Síochána (Recording Devices) Bill, which had been proposed to introduce body cameras and expand the use of CCTV and automatic number plate recognition (ANPR), among other measures. This amendment would have allowed the Gardaí to use Facial Recognition Technology (FRT) in investigations, letting them compare images of a suspect against other pictures of that person, or of people resembling them, captured in public places – with the intention of speeding up investigations.

The Government and Gardaí initially advocated implementing body camera legislation alongside FRT. One source in the Gardaí stated, "We can't effectively operate body cameras without facial recognition technology. There's no point in having body cams without FRT." However, the inclusion of FRT in the bill was postponed due to opposition from experts and the Green Party. The current proposed legislation is a separate amendment to the Garda Síochána (Recording Devices) Bill 2023. This bill had previously passed without including the initially suggested FRT clause.

The introduction of FRT must comply with the General Data Protection Regulation (GDPR) and undergo consultation on data protection and human rights. It must also consider the implications of upcoming EU legislation. The European Union (EU) has proposed an Artificial Intelligence Act (AIA) to establish a unified framework for the use of AI. Under the AIA, technologies like FRT used in policing would be classified as high risk. Permission from all member states may be required before FRT can be introduced, and significant restrictions, such as limits on the types of cases in which FRT can be used, may be imposed. Individual uses of FRT would need approval from a separate judicial authority, except in duly justified cases of urgency. The EU has already proposed a ban on the real-time use of FRT as part of this legislation.

During a November Oireachtas committee meeting, the Garda commissioner defended the use of FRT, emphasising that it would only be employed retrospectively to scan footage relating to serious crimes. The Garda commissioner also clarified that “it is not our intention to run images against a database” and that the proposed aim is to use FRT to identify people captured in footage rather than relying solely on manual review by individual Gardaí.

The proposed legislation initially focused on serious crimes like murder, rape, child sexual abuse, kidnapping or abduction, and national security offences. However, it has since been expanded to include riots and violent disorder following the Dublin riots of November 23rd.

The government argues that introducing FRT would expedite investigations, enable quicker identification of suspects and crimes, and aid in time-sensitive cases. Minister McEntee supported FRT as a means to enhance the effectiveness and strategic capabilities of the Gardaí, aiming to modernise Gardaí through technology.

In a significant case from the previous year, AI played a pivotal role in solving a catfishing incident involving child sexual abuse. The use of AI technology was crucial in apprehending the perpetrators and highlighting the potential benefits of AI in combating such crimes. Presenting FRT in a positive light as a tool to catch more criminals and prevent more crimes might seem logical. Despite these advancements, experts, including those from UCD, have raised concerns about the dangers associated with FRT.

Critics of the Gardaí’s decision to pursue this legislation raise concerns about FRT that go beyond the rights to data protection and privacy. They argue that it infringes upon fundamental rights such as freedom of expression, information, assembly, and association, as outlined in the EU Charter of Fundamental Rights. FRT can collect extensive data on individuals, including information irrelevant to basic policing, effectively turning it into a tool for mass surveillance rather than mere crime prevention or law enforcement.

Speaking to The University Observer, Dr Elizabeth Farries, the co-director of the UCD Centre for Digital Policy and a leading expert battling the introduction of this legislation in Ireland, expressed that she and her colleagues are against any introduction of FRT, stating “the government aren’t listening to our expertise and is blatantly disregarding the ‘red line’ we’ve suggested [in relation to FRT]; as a democratic society, we are not taking the appropriate steps, unfortunately.” In the view of experts, the government is refusing to engage in dialogue or take steps to test the viability of FRT in an Irish context.

FRT also does not limit everyone's rights equally; it disproportionately limits the rights of those already disadvantaged by race or gender. Specifically, FRT has been measured as less likely to correctly recognise people of colour, especially Black people. Since AI can only learn from the data it is given, this bias can be built into a system when an FRT model is trained on more pictures of white people than of other races. It can also simply reflect existing systemic biases: training on historical data reproduces existing and long-standing discrimination.
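
To make the mechanism concrete, the toy simulation below (a sketch, not real FRT) trains a simple classifier on data where one group supplies 95% of the examples. The setup, the scikit-learn model, and all figures are illustrative assumptions rather than measurements of any actual system, but the pattern matches the one researchers report: the under-represented group ends up with a far higher error rate.

```python
# Toy demonstration of training-data imbalance producing biased error rates.
# Entirely synthetic: not a real face-recognition model or dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
d = 16                          # dimensionality of the toy "face embedding"
w_major = rng.normal(size=d)    # pattern the majority group's labels follow
w_minor = rng.normal(size=d)    # a different pattern for the minority group

def sample(n, w):
    """Generate n synthetic embeddings whose labels follow pattern w."""
    X = rng.normal(size=(n, d))
    y = (X @ w > 0).astype(int)
    return X, y

# Heavily imbalanced training set: 9,500 majority vs 500 minority examples.
X_maj, y_maj = sample(9_500, w_major)
X_min, y_min = sample(500, w_minor)
model = LogisticRegression(max_iter=1_000).fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min]))

# On fresh, equal-sized test sets, the model has mostly learned the
# majority group's pattern, so the minority error rate is far higher.
for name, w in [("majority", w_major), ("minority", w_minor)]:
    X_test, y_test = sample(5_000, w)
    print(f"{name} error rate: {1 - model.score(X_test, y_test):.1%}")
```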

FRT error rates have been shown to be highest for Black women, at 35%, compared to less than 1% for white men. Moreover, even models that appear free of bias when tested in a lab often prove discriminatory once released for real-world use. The issue of bias is so rampant that many companies, such as Amazon and Microsoft, have refused to sell their FRT programmes to police.

As Dr Farries pointed out, at the most recent Oireachtas Committee on Justice, the government cited a report from the National Institute of Standards and Technology (NIST) in America, picking out an algorithm used by a Chinese company focused on mass surveillance of the Uyghur Muslim minority, a population the Chinese government has been accused of subjecting to genocide. The government then used this misleading data to claim 99% accuracy for FRT as a supporting argument for the bill's introduction, even though, as Dr Farries points out, the “specific algorithm is proprietary – there is no way to evaluate it as we don’t have access to that algorithm as it is privately owned. The government does not have access to it.” In reality, real-world deployments of FRT have fared far worse: in Wales, over 500,000 faces were scanned, leading to only 3 arrests and around 3,000 false identifications.
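
Those Welsh figures illustrate a base-rate problem that a headline accuracy number hides. The back-of-the-envelope calculation below uses assumed inputs (the watchlist size, sensitivity, and specificity are illustrative guesses, not published figures): even a system that is “99% accurate” in both directions flags thousands of innocent people when half a million faces are scanned.

```python
# Base-rate arithmetic with assumed, illustrative inputs.
scanned = 500_000      # faces scanned (order of magnitude of the Welsh case)
on_watchlist = 50      # assumed number of genuine targets in the crowd
sensitivity = 0.99     # assumed chance a true target is flagged
specificity = 0.99     # assumed chance an innocent person is NOT flagged

true_hits = on_watchlist * sensitivity
false_hits = (scanned - on_watchlist) * (1 - specificity)
precision = true_hits / (true_hits + false_hits)

print(f"true matches:  {true_hits:,.0f}")    # ~50 genuine matches
print(f"false matches: {false_hits:,.0f}")   # ~5,000 innocent people flagged
print(f"precision:     {precision:.1%}")     # ~1% of flags are real
```

Under these assumptions, roughly 99 out of every 100 matches are false – broadly consistent with the Welsh outcome of thousands of false identifications for a handful of arrests.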

Dr Farries also criticised the suggestion that the Gardaí would not be running images against a database, stating, “I don't see how they could do that [run images] without a database”, viewing it as ‘technocratic idealism’ by Irish officials who do not understand how FRT technologies work.
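
A minimal sketch of how retrospective face identification is typically structured makes her point concrete: matching a probe image means searching a gallery of stored embeddings, and that gallery is, in effect, a database. Everything below is an assumption-laden illustration, not a description of any Garda system; embed() is a toy stand-in for a real face-embedding model.

```python
# Structural sketch of retrospective face identification (hypothetical).
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a real face-embedding model; a real system would
    run a neural network mapping a face image to e.g. a 512-d vector."""
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    return rng.normal(size=128)

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.6):
    """Match one probe image against every stored embedding."""
    q = embed(probe)
    q /= np.linalg.norm(q)
    best_id, best_score = None, -1.0
    for person_id, vec in gallery.items():
        score = float(q @ (vec / np.linalg.norm(vec)))  # cosine similarity
        if score > best_score:
            best_id, best_score = person_id, score
    # With an empty gallery there is nothing to compare against:
    # identification without some stored collection of faces is meaningless.
    return best_id if best_score >= threshold else None
```

Whether that gallery holds ten suspects or millions of photographs is a policy choice, but some stored collection of identified faces is structurally unavoidable – which is precisely the point the experts are making.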

The government and Gardaí propose using retrospective FRT as an alternative to real-time FRT, which is likely to be banned at the EU level, arguing that retrospective FRT would have a lesser impact on Irish citizens. However, experts from Irish universities, such as UCD, and non-governmental organisations (NGOs) have raised significant concerns. They point out that the abundance of retrospective images creates a higher risk of extracting more information about individuals, potentially resulting in greater violations of privacy and of other fundamental rights.

As the world of artificial intelligence advances, it has turned its attention to policing, specifically by incorporating Facial Recognition Technology (FRT). However, as this technology gains traction, it brings a host of concerns related to accuracy, bias, and the protection of privacy rights.

One of the primary concerns surrounding FRT is its accuracy and reliability. Studies have shown that FRT systems are prone to biases, especially against people of colour. Privacy rights also come into play when discussing the use of FRT in policing. The ability to capture and analyse vast amounts of facial data raises concerns about mass surveillance and potential infringements on individual privacy. Retrospective use of FRT, even if intended for investigative purposes, can still lead to significant privacy violations as it enables the extraction of information from many individuals.

Furthermore, the government's reliance on accuracy rates from a Chinese mass-surveillance algorithm has raised eyebrows among critics. Dr Farries states that “When the state is an autocratic state, like China, surveilling an oppressed and vulnerable population, ‘it’s fine’, but it shouldn’t be a reality, the government shouldn’t make biased claims that don’t have a solid argument”. Without access to the algorithm to evaluate its accuracy or suitability, doubts linger about the effectiveness and appropriateness of such technology in an Irish context.

Experts like Dr Farries feel they are speaking into a black hole, repeating the same warnings that FRT will not be a magic solution to solve complex policing problems. Statements made by the government and Gardaí “don't reflect a clear understanding of the technical uses or consequences, yet nonetheless, they [government and Gardaí] are pressing for it [FRT] despite significant expert feedback from leading computer scientists, legal experts and human rights experts and this is perplexing.”

Balancing the need for effective law enforcement with the protection of fundamental rights is a critical challenge. While proponents highlight the potential benefits of FRT in solving crimes more quickly, opponents argue that the risks associated with accuracy, bias, and privacy infringement – such as breaching GDPR – outweigh the advantages.

As society grapples with these complex issues, it becomes essential to engage in meaningful discussions and evaluate the potential consequences of implementing FRT in policing. Striking the right balance between technological advancements and safeguarding individual rights is paramount as we navigate this uncharted territory of AI in law enforcement.