A Hermes Center study on Identification, Facial Recognition, and European Union Funding
Migration flows towards the European Union have been a constant part of current events for years, with the southern member states usually serving as the main points of entry. Italy experiences regular arrivals both from dangerous sea crossings from North Africa across the Mediterranean Sea and from the Western Balkan Route, an overland transit corridor leading from the coasts of Greece and Bulgaria to central Europe. For many people Italy is just a stop along the way, with their journey continuing northwards, but others hope to stay.
Even though the number of people travelling through Italy has trended downward, migration remains ever-present in the Italian public debate. Some politicians characterise it as an invasion by foreigners who choose to break the law instead of going through the proper channels. Under current legislation, however, there is no way to enter Italy on a sponsorship basis or for job-seeking purposes. Non-citizens wanting to work must request a rarely granted special visa through their home country’s embassy, or else be considered “illegal migrants” on arrival. Only those requesting international protection as refugees and asylum-seekers have the right to access reception measures, leading to an influx of such claims.
Ever since the height of the Syrian refugee crisis in 2015, both national and EU authorities have searched for more efficient ways to identify and document incoming migrants, many of whom carry no valid form of identification. As is often the case in modern society, the chosen solution is tech-driven, relying on the collection of vast amounts of biometric data, including fingerprints and facial images. An extensive report by the Hermes Center for Transparency and Digital Human Rights, released last December, warns about the dangers of such rampant datafication and its lack of transparency.
According to current operating procedures set by the Italian Ministry of the Interior, all new arrivals are subject to identification practices before being moved into either reception and integration centres for asylum-seekers (CPA/CAS) or holding structures (CPR), where “illegals” are detained awaiting repatriation. They should be provided with a form outlining their rights as investigated individuals and granted the chance to confirm or amend any data requested during the procedure, but no independent legal professional or oversight body representative is present to guarantee this, casting doubt on the integrity of their informed consent.
Article 4 of the European Union’s General Data Protection Regulation (GDPR) states that “‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”
Researchers of the Hermes Center argue that migrants aren’t able to exercise this right. “Even if they had the chance to provide informed consent and they could fully comprehend the motivations behind the treatment of their biometric data, their condition of vulnerability and marginalization would not allow them to protest, deny consent or ask for any modification to said treatment, an option that is open to all European and Italian citizens.”
Not only are migrants asked to barter away their personal data in exchange for access to basic aid, but European regulations also authorise the use of coercive force and temporary detention against those who try to resist.
Once their names, likenesses and medical data have been collected, it’s unclear where the data will be stored and how it may be used in the future. While the central European Asylum Dactyloscopy Database (EURODAC) has specified data retention times, the Automated Fingerprint Identification System (AFIS) used by the Italian police does not. EURODAC is also still in the process of implementing a facial recognition function, which could be used on minors as young as six years old; the Italian system, on the other hand, has been up and running for years.
Proof comes in the form of a YouTube video dating back to 2018: a brief interview with a police officer operating the new SARI Enterprise facial recognition system shows the presence of migrant profiles among the results drawn from AFIS. One of the photographs returned as a potential match clearly shows an identification number of the kind used during disembarkation operations of migrant vessels in coastal hotspots. This was later officially confirmed by the Parliamentary Investigation Committee on Reception, Identification and Expulsion Systems.
Facial recognition technology in general has been widely criticised for its lack of transparency and significant biases, particularly when dealing with people of colour. In an attempt to limit such objections, the SARI system only provides a list of individuals who could be a match to a suspect, leaving it to trained officers and experts to confirm the result. There is, however, no clarity about the nature and depth of that training, or about the number of forensic experts currently authorised to perform such evaluations. As of now, matches yielded by this system cannot be considered conclusive proof, only circumstantial evidence; they can nevertheless direct a disproportionate amount of attention towards already vulnerable segments of the population, largely without criminal records.
Technology affects different people in different ways, and it is no accident that new, still legally grey applications of experimental tech are being tested on those who have the least recourse against abuse. It doesn’t stop there, though. Starting in 2023, the European Travel Information and Authorization System (ETIAS) will be fully implemented to pre-screen travellers who do not require a visa to enter the Schengen area, profiling them against opaque “risk indicators”.
While the issue of techno-solutionism is thankfully becoming more familiar to the wider public, this awareness has not translated into a slowdown of its practical enactment. Surveillance systems like those discussed here are becoming pervasive and remain almost entirely beyond the oversight of civil society, both in practice and by design. The promise is that they will keep us safe from dangerous outsiders, so long as we sacrifice a small part of our privacy.
It’s important to reject this line of thinking and oppose the normalisation of procedures that gradually reverse the burden of proof, creating a world in which individuals are constantly examined in anticipation of future wrongdoing.
To learn more, the full report is available here.