EU and national authorities are increasingly deploying high-tech surveillance systems at the bloc’s external borders, a trend that could accelerate as some countries seek to block refugees arriving from Afghanistan. The expansion is raising questions about the ethical implications of these technologies.

Greece last week announced the completion of a 40-kilometre fence and new automated surveillance system along the border it shares with Turkey, installed with the express aim of preventing people fleeing the Taliban’s takeover of Afghanistan from seeking asylum in Europe.

Frontex, the EU border agency, also announced the trial of high-level surveillance equipment as the Afghan crisis escalated earlier this month.

These moves are part of a much broader trend of AI-powered automation increasingly shaping the experiences of people on the move, from well before they reach any border to long after they may have crossed it.

As the use of this technology increases, many working in the field have voiced concerns over its impact on the safety and fundamental rights of people seeking asylum, as well as the implications of increased biometric monitoring and processing for data privacy and protection.

Preemptive technology

Governments’ use of AI technology in relation to migration has been on the rise, notably through tracking technologies such as autonomous surveillance drones and thermal cameras, deployed beyond borders to spot people travelling across expanses of land and sea.

Data collected by researchers for the purpose of modelling migration patterns has also been co-opted by state agencies to preempt and intercept the arrival of refugees before they can reach borders.

Chloé Berthélémy, a policy advisor at European Digital Rights (EDRi), told EURACTIV that this trend could be seen in recent reforms to the European asylum dactyloscopy (Eurodac) database, which stores the fingerprints of anyone seeking asylum in the EU, proposed in September 2020 as part of the Commission’s New Pact on Migration and Asylum.

“Eurodac is currently being transformed to facilitate this wider surveillance apparatus: the 2020 legislative proposal puts an emphasis on collecting more data to produce statistics about movements to and in Europe,” she said.

AI at the border

A number of EU-funded projects have sought to develop systems such as lie- or emotion-detectors for use as part of asylum or immigration application processes, in spite of existing concerns over the reliability of and potential for discriminatory outcomes from AI systems. 

One such programme, the controversial iBorderCtrl project, became the subject of a legal battle earlier this year after German MEP Patrick Breyer took the EU’s Research Executive Agency to court.

The project, which ran from 2016 to 2019, claimed to have developed AI-powered “lie detector” technology that could analyse facial micro-expressions to “determine” whether someone was telling the truth. The tech was trialled at a number of EU borders but drew criticism over its accuracy.

Petra Molnar, a lawyer specialising in migration and technology, told EURACTIV that iBorderCtrl constituted the “most egregious example of the techno-solutionism making its way into so many immigration processes,” and failed to account for factors such as cross-cultural differences and the impact of trauma on memory.

“While we are very concerned that many of these dubious technologies are based on pseudo-science,” EDRi’s Berthélémy said, “we object to the very uses of these systems as they are grounded in xenophobic and racist policy goals.”

Frontex did not respond to EURACTIV’s request for comment by the time of publication.

Biometric privacy concerns

An increasing amount of biometric data is also being collected from people entering the EU, with iris scanning, facial imaging and fingerprint records now constituting standard parts of many processing systems. 

The Eurodac proposals also include a provision that would lower the age from which fingerprints are collected, already low at 14, to just six years old, a development that Molnar describes as hugely concerning.

“The collection of data from a child can follow them their entire life and if discriminatory decision-making is baked into that, the likelihood is that that will also follow them”, she said. 

The proposal, Berthélémy said, is “a truly invasive and unjustified infringement on the rights of the child, right to privacy and data protection.”

Data misuse

With this level of personal data collection also come concerns over the consequences of this information being accessed by or shared with certain governments or groups, including those to which failed asylum seekers might be returned by EU member states. 

In recent weeks, the potential dangers of this kind of data collection have been seen in Afghanistan, where the Taliban is reported to have seized biometric devices belonging to the US military containing sensitive identifying data, such as iris scans, collected from thousands of Afghans. 

“Collecting biometric data enables a permanent and irreversible identification of persons leading to greater tracking and monitoring of their lives and movements”, Berthélémy stated, defining the practice as a “general criminalisation of all migrants in the EU.”

[Edited by Luca Bertuzzi and Josie Le Blond]