A recent post by Barry Scannell highlighted the unique position Ireland holds regarding the adoption of the AI Act: Ireland does not need to implement the same safeguards as other EU countries when regulating AI technologies. Now a new Irish government has been formed, including Fine Gael, which has pushed for the introduction of facial recognition technology (FRT), and the Programme for Government explicitly states the intention to roll out live FRT in a limited set of cases (national security, missing persons, terrorism). So where should the line be drawn between keeping people safe and keeping an eye on everyone? And do we trust An Garda Síochána to watch us?

How does this relate to the new Irish government?

It was confirmed on the 15th of January 2025 that a new Irish government would be formed through a coalition of three factions: Fine Gael, Fianna Fáil, and a gaggle of independents, many of whom can be categorised as centrist to right-wing. The three groups published their agreed aspirations for the term of government in a 162-page Programme for Government (PfG). Its commitments include growing the number of data centres, continuing to reduce the student contribution fee for third-level education, and so on. On page 117, the PfG explicitly states that:

This government will:

  • Expand the number of cameras using Automatic Number Plate Recognition (ANPR) to fight serious and organised crime.
  • Support the Gardaí to use artificial intelligence in criminal investigations.
  • Deploy facial recognition technology (FRT) for serious crimes and missing persons with strict safeguards.
  • Introduce live FRT in cases of terrorism, national security, and missing persons, with strict safeguards.

Some of these things on their own don’t sound too bad, and I would be in support! I think we should improve the efficiency of fining those who use bus lanes illegally or break red lights. BUT consider the context: a more conservative government, with a party that pushes for greater surveillance of individuals, and a police force that has a record of over-collecting data and has created files on infants (this is not hyperbole; see this amazing article in The Journal for more info).

Okay, what does this mean?

The timing of this government formation aligns with the implementation period of the AI Act, and given some of the Act’s requirements, I expect it will be supplemented and clarified by national legislation. Pair this with a unique position granted to the UK (while it was an EU member) and Ireland under Protocol 21 of the TEU (12016E/PRO/21), which allows both countries to opt out of EU laws relating to the area of freedom, security, and justice. This opt-out applies to certain provisions of the AI Act, including bans and restrictions on high-risk AI systems such as real-time facial recognition technology (FRT) for law enforcement.

While other member states are bound by the stricter limitations of the AI Act, Ireland has room to wiggle out of these requirements. Suddenly, the goals of this new government align with the chance to break away from the strict rules on AI safety and bans. This could give Ireland an advantage in the tech industry: as the current European hub for big tech, it could entice companies to stay and develop (and possibly deploy) their AI systems here even if they are banned on the European mainland. It would keep that sweet corporation tax Ireland relies on so heavily flowing.

In contrast, it would mean breaking unspoken norms regarding privacy in Ireland, norms that have been pushed to the brink several times in the past: from the overreach of the Public Services Card, whose data processing was found unlawful by the Data Protection Commission, to Digital Rights Ireland Ltd v Minister for Communications, which saw the invalidation of the Data Retention Directive (a requirement for EU member states to store all citizens’ telecommunications data for six months to two years).

So what is the issue with Facial Recognition Technology (FRT)?

A success rate that wouldn’t pass an exam

Facial Recognition Technology (FRT) is not the most reliable technology out there. The “Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology” by Professor Pete Fussey and Dr Daragh Murray outlined the failures of live FRT, finding an accuracy rate of just 36.36%: of the people stopped on the back of a computer-generated match, only around 36% were correctly identified. That result wouldn’t see you pass a college module, so deploying a surveillance technology that is more wrong than right in an effort to improve policing would statistically lead to more people being wrongfully accused of crimes.

To make matters worse, the report highlighted how these inaccuracies disproportionately affect certain groups, compounding issues of bias and discrimination. The London Metropolitan Police’s trials revealed that officers deemed 26 computer-generated matches credible enough to warrant intervention; of the 22 individuals who were actually stopped, 14 were ultimately proven incorrect after identity checks. This means that individuals were unnecessarily stopped and subjected to scrutiny in 63.64% of cases: more often than not, the technology failed.
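For anyone who wants to check the maths, here is a minimal sketch of the arithmetic behind those percentages, using the engagement figures reported by Fussey and Murray (26 matches deemed credible, 22 stops, 8 of which were verified correct):

```python
# Arithmetic behind the Met Police live FRT trial percentages,
# using the engagement figures from the Fussey & Murray report.
credible_matches = 26  # computer matches officers judged credible
stops = 22             # individuals actually stopped for an identity check
verified_correct = 8   # stops where the match turned out to be right
proven_incorrect = stops - verified_correct  # 14 wrongful stops

accuracy = verified_correct / stops    # 8 / 22
error_rate = proven_incorrect / stops  # 14 / 22

print(f"Verified correct: {accuracy:.2%}")    # 36.36%
print(f"Proven incorrect: {error_rate:.2%}")  # 63.64%
```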

Consent or coercion? When opting out becomes a red flag

The same report highlighted that, during police briefings, actions like looking away or covering one’s face were initially not seen as suspicious; instead, they were treated as people exercising their right to privacy. But over time this changed. Avoiding FRT came to be read as a sign of possible wrongdoing, giving police a reason to be suspicious and step in. By shifting how we think about voluntary consent, the line between protecting privacy and effectively criminalising it became blurred.

Interventions against individuals who attempted to circumvent the cameras increased considerably over the course of the trial and across trial locations. This showed how erratic discretionary policing can become, which may worsen bias and result in unequal enforcement. The report also raised concerns about “surveillance creep”: technologies authorised for one purpose ended up being used for other, unchecked purposes, leading to arrests that had nothing to do with the original goals of the FRT rollout.

Who watches the watchmen?

  • In 2014, the European Court of Justice invalidated the EU Data Retention Directive. The directive required telecommunications companies to store metadata for six months to two years for law enforcement purposes. An Garda Síochána had made use of the collected telecommunications data while the directive was in force.
  • In 2017, nine gardaí faced official disciplinary action over misuse of the PULSE computer system, having accessed it for personal or inappropriate reasons.
  • In 2017, Dara Quigley took her own life after members of An Garda Síochána (AGS) recorded CCTV footage of her while she was naked and shared it on WhatsApp.
  • In 2019, the Data Protection Commission found that AGS had violated the Data Protection Act 2018 by failing to ensure transparency, implement appropriate contracts with data processors, and conduct proper assessments regarding the use of Automatic Number Plate Recognition cameras.
  • In 2024, it was discovered that thousands of unlawful Garda surveillance dossiers had been created about children under 12, including 587 intelligence records on children under the age of 3.

This is just a short list of violations by An Garda Síochána involving the misuse of collected data over the past decade. It is clear that AGS does not have a strong culture of privacy, data protection, and transparency. Yet the next government plans to bring some of the most invasive, and at the same time inaccurate, technologies into our lives. Incorporating Automatic Number Plate Recognition cameras further into law enforcement practice is troubling when the organisation has a history of misuse. The revelation of thousands of unlawful surveillance dossiers created about children, including infants, raises serious concerns about the protection of sensitive data. Introducing inaccurate and inconsistent Facial Recognition Technology will lead to more false arrests and potentially violate the rights of innocent individuals. This goes against the principles of privacy and data protection that our society values, principles enshrined in Articles 7 and 8 of the EU Charter of Fundamental Rights.

As we navigate the delicate balance between security and privacy, it is vital to question how much we are willing to sacrifice in the name of safety. Are we comfortable with the idea of our children being monitored without our knowledge? How can we ensure that law enforcement agencies are held accountable for their actions when it comes to handling sensitive data?

By Daniel Whooley

I am just a guy interested in data protection, cybersecurity, politics, environmentalism, urban design, public transport, and history (I have too many hobbies).