Facial Recognition technologies and their threats to fundamental human rights and security

Leaked EU documents were obtained by Euractiv journalist Samuel Stolton, who published an article, “LEAK: Commission considers facial recognition ban in AI ‘white paper’”.

The EU Commission’s white paper suggests a temporary ban of three to five years on facial recognition in public spaces. This could provoke a serious conflict between the Commission and certain member states, such as France and Germany, which have already taken different positions.

In fact, the French CNIL published a document in November 2019 (with an English version in December), “Facial recognition: for a debate living up to the challenges”, calling for an in-depth political debate whose purpose should be ‘to determine in which cases facial recognition is necessary in a democratic society, and in which cases it is not.’ It suggests a balanced overview of the different existing biometric technologies in order to assess the risks and determine which technologies are not acceptable in a democratic society and which ones can be adopted with appropriate safeguards. Within the framework of the GDPR, the CNIL has already had the opportunity to allow certain uses in principle, while regulating them in practical terms (border control at airports), and to refuse others (controlling student access to schools).

The Swedish data protection authority recently imposed a fine on a school for testing facial recognition technology to track its students’ attendance. On the other hand, the Swedish DPA authorised the use of facial recognition by police forces.

Meanwhile, Iberia has launched a facial recognition app as biometric adoption in the airline industry advances.

San Francisco recently became the first US city to ban police and other agencies from using automated facial recognition, following widespread condemnation of China’s use of the technology to impose control over millions of Uighur Muslims in the western region of Xinjiang. Somerville, Massachusetts, and then Oakland have since banned government use of facial recognition.

After a London council used facial recognition technology on its streets without consulting residents, the UK Information Commissioner’s Office published Opinion 2019/01 on the use of live facial recognition technology (LFR) by law enforcement in public places, dated 31 October 2019. In summary:

(1) The use of LFR involves the processing of personal data and therefore data protection law applies, whether it is for a trial or routine operational deployment.

(2) The processing of personal data by ‘competent authorities’ for ‘the law enforcement purposes’ is covered by UK law.

(3) The use of LFR for the law enforcement purposes constitutes ‘sensitive processing’ as it involves the processing of biometric data for the purpose of uniquely identifying a data subject.

(4) Such sensitive processing relates to all facial images captured and analysed by the software. A data protection impact assessment and an ‘appropriate policy document’ must be in place.

(5) Sensitive processing occurs irrespective of whether that image yields a match to a person on a watchlist or the biometric data of unmatched persons is subsequently deleted within a short space of time.

(6) Data protection law applies to the whole process of LFR, from consideration of the necessity and proportionality of deployment and the compilation of watchlists, through the processing of the biometric data, to its retention and deletion.

Facial images are biometric data, defined as data ‘relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person’ (GDPR Art. 4(14)). The GDPR generally forbids the processing of biometric data for uniquely identifying purposes unless relying on one of the ten exemptions listed in Art. 9(2).

For such technology to be deployed, given that it relies on the large-scale processing of sensitive data, consent would need to be explicit as well as freely given, informed and specific. Additionally, when it is deployed in public spaces, how could anyone opt out?

The UK ICO asked how far we, as a society, should consent to police forces reducing our privacy in order to keep us safe. It found that ‘the current combination of laws, codes and practices relating to LFR will not drive the ethical and legal approach that’s needed to truly manage the risk that this technology presents.’

Statement from Elizabeth Denham, Information Commissioner, on the use of live facial recognition technology in King’s Cross, London:

“Scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all. That is especially the case if it is done without people’s knowledge or understanding.

“I remain deeply concerned about the growing use of facial recognition technology in public spaces, not only by law enforcement agencies but also increasingly by the private sector. My office and the judiciary are both independently considering the legal issues and whether the current framework has kept pace with emerging technologies and people’s expectations about how their most sensitive personal data is used.

“Facial recognition technology is a priority area for the ICO and when necessary, we will not hesitate to use our investigative and enforcement powers to protect people’s legal rights.

“We have launched an investigation following concerns reported in the media regarding the use of live facial recognition in the King’s Cross area of central London, which thousands of people pass through every day.

“As well as requiring detailed information from the relevant organisations about how the technology is used, we will also inspect the system and its operation on-site to assess whether or not it complies with data protection law.

“Put simply, any organisations wanting to use facial recognition technology must comply with the law – and they must do so in a fair, transparent and accountable way. They must have documented how and why they believe their use of the technology is legal, proportionate and justified.

“We support keeping people safe but new technologies and new uses of sensitive personal data must always be balanced against people’s legal rights.”

In September, the High Court ruled that South Wales Police had acted lawfully after a shopper complained that his human rights were breached when he was photographed. Civil rights group Liberty said the practice was akin to the unregulated taking of DNA or fingerprints without consent and is campaigning for an outright ban, describing facial recognition as ‘arsenic in the water of democracy’. Martha Spurrier, a human rights lawyer, said the technology had such fundamental problems that, despite police enthusiasm for the equipment, its use on the streets should not be permitted.

Ms Denham said the use of facial recognition tools represents a “step change” in policing techniques. Her investigation raised “serious concerns” over the use of the technology, and she called on the government to introduce a statutory code of practice.

Several civil liberties organisations, such as the Electronic Frontier Foundation, EPIC and EDRi, have expressed concerns. As the CNIL pointed out, facial recognition has the potential to become a particularly ubiquitous and intrusive tool. https://www.eff.org/deeplinks/2019/12/year-fight-against-government-face-surveillance

The use of facial recognition in Hong Kong and at London’s King’s Cross has been described by Owen Hopkins as “a fundamental threat to society”.

The European Data Protection Supervisor, Wojciech Wiewiórowski, asked: “Facial recognition: a solution in search of a problem?”

Facial recognition is based on a probability, rather than the absolute certainty, of a match between the faces being compared and the baseline “template”. Variations in performance can therefore have far-reaching consequences for individuals who are misidentified. Over-reliance on the technology can result in serious misjudgements. The lack of accuracy has serious consequences for individuals who are falsely identified, whether as criminals or otherwise. Its accuracy has been challenged by several researchers. There are serious potential dangers associated with current racial biases in face recognition, and misidentification could lead to wrongful conviction, or far worse.
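
To make the probabilistic nature of matching concrete, here is a minimal, hypothetical sketch in Python of how a threshold-based match decision typically works. The embedding vectors, the `match` function, the watchlist and the 0.6 threshold are all invented for illustration and do not describe any specific deployed system; real systems derive embeddings from trained face-recognition models.

```python
import numpy as np

# Sketch of a live facial recognition (LFR) match decision: each face is
# reduced to an embedding vector, and a "match" is declared whenever the
# similarity to a watchlist template exceeds a tuned threshold.
# All numbers below are illustrative, not taken from any real system.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score in [-1, 1]; higher means 'more alike'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(probe: np.ndarray, watchlist: dict, threshold: float = 0.6):
    """Return the best-scoring watchlist identity if it clears the threshold.

    The decision is probabilistic: a lookalike passer-by can score above the
    threshold (a false positive), and the listed person can score below it
    (a false negative).
    """
    best_name, best_score = None, -1.0
    for name, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy example with random vectors standing in for model-produced embeddings.
rng = np.random.default_rng(0)
watchlist = {"person_on_watchlist": rng.normal(size=128)}
# A passer-by who merely resembles the listed person (template plus noise)...
passer_by = watchlist["person_on_watchlist"] + rng.normal(scale=0.9, size=128)
# ...can still clear the threshold, i.e. a possible misidentification.
print(match(passer_by, watchlist))
```

The sketch shows why the choice of threshold matters so much: lower it and more innocent passers-by are flagged; raise it and more people on the watchlist slip through. Either way, the output is a score, not a certainty.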

To resist the rise of this intrusive technology, designers have created clothing and accessories that help conceal people’s identities from A.I.

Whatever the EU decides or regulates, the technology comes from China or the US, where Amazon reportedly plans a Ring facial recognition-based ‘watch list’.

This article by privacy journalist Kashmir Hill is thought-provoking on the use of children’s pictures posted online by parents on websites such as Flickr.

This is a technology with many caveats and major societal impacts. You are invited to read more HERE from various sources to form your own opinion.