Artificial intelligence and human rights – but not for everyone.

[Image: The eye of a male-read face looks at us through glasses; green numbers reminiscent of computer code are projected onto the face and the neutral grey background.]

The EU's AI regulation deliberately violates the fundamental rights of refugees

The European Union's AI Regulation should actually have protected people on the move, since it follows a risk-based approach: the riskier the use of an AI system is for fundamental and human rights, the more strictly it is regulated. Instead, the regulation even provides a lower level of protection for refugees than for people who do not have to flee. It creates a parallel legal framework that systematically disadvantages refugees by granting migration and security authorities exemptions for the use of high-risk and non-transparent technologies.

This is a break with the logic of the regulation. People on the move are known to be particularly vulnerable, and the authorities responsible for them are already authorized to interfere deeply with human rights – from surveillance to detention. In addition, AI often exacerbates existing social injustices, for two main reasons. Firstly, AI reproduces discrimination that it “learns” from the data used to train it. Secondly, AI is often used with the aim of making processes faster and more “effective”. Because the EU's migration policy as a whole is characterized by the dismantling of the right to asylum and by violence against refugees, there is a considerable risk that, for refugees, the use of AI will lead to faster and more “effective” human rights violations.
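
The first of these mechanisms – discrimination learned from data – can be illustrated in a few lines of Python. The groups, data and decision rule below are entirely invented; this is a minimal sketch of the principle, not of any real system.

from collections import Counter

# Invented training data: past decisions in which group "B" was
# approved far less often than group "A" - the bias sits in the data.
history = (
    [("A", "approve")] * 80 + [("A", "reject")] * 20
    + [("B", "approve")] * 30 + [("B", "reject")] * 70
)

def train(data):
    # "Learns" nothing more than the majority decision per group.
    counts = {}
    for group, decision in data:
        counts.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = train(history)
print(model)  # {'A': 'approve', 'B': 'reject'} - the bias, now automated

A model trained this way never “decided” to discriminate; it simply optimizes for agreement with a discriminatory past.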

Illegal emotion recognition – used on refugees

AI systems whose use is prohibited in other areas may now be officially used by migration authorities. This applies, for example, to so-called “emotion recognition”, which has already been tested in Greece, Latvia and Hungary with the “iBorderCtrl” system, among others – in other words, these are technologies that the EU has a concrete interest in using.

Emotion recognition systems are highly error-prone and discriminatory. According to one study, they rated Black NBA basketball players in photos as more aggressive than white athletes. The wording of the EU regulation itself rightly states that emotion recognition technologies “may lead to discriminatory results and constitute an interference with the rights and freedoms of data subjects”. Their use is therefore prohibited “in view of the imbalance of power in the employment or education context”. Migration and security authorities, however, are now permitted to use them.

Technologies that further undermine the right to asylum are also permitted, including automated “risk assessments” of people who have had to flee their countries. One problem is illustrated by an example from the USA: to implement Donald Trump's “zero tolerance” migration policy during his presidency, US Immigration and Customs Enforcement (ICE) changed an algorithm for assessing the risk posed by undocumented migrants. Afterwards, it always indicated a risk high enough to justify detaining the person. Other examples, such as risk-assessment AI in the US justice system, show frequent discrimination against Black people. Racist policies are thus implemented with the help of a seemingly “objective” technology, and those responsible hide behind it.
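
How little it takes to turn such an “assessment” into a foregone conclusion can be sketched in Python. The thresholds and categories here are invented – the actual ICE tool is not public – but the mechanism is the same: remove the favourable branches of the decision rule, and the input scores become irrelevant.

def assess(flight_risk, safety_risk):
    # Original (invented) rule: low scores can still lead to release.
    if flight_risk < 0.3 and safety_risk < 0.3:
        return "release"
    if flight_risk < 0.7 and safety_risk < 0.7:
        return "supervision"
    return "detain"

def assess_zero_tolerance(flight_risk, safety_risk):
    # Changed rule: the "release" and "supervision" branches are gone,
    # so the scores no longer matter - every assessment justifies detention.
    return "detain"

for scores in [(0.1, 0.05), (0.5, 0.4), (0.9, 0.8)]:
    print(scores, assess(*scores), "->", assess_zero_tolerance(*scores))

The output keeps the appearance of an individual risk assessment, while the assessment itself has been abolished.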

Why borders are not considered public spaces

The AI Regulation explicitly does not consider border checkpoints to be public spaces. As a consequence, the restrictions that apply to the use of AI in public spaces do not apply there. This matters in particular for biometric mass surveillance systems: surveillance of people based on, for example, their faces is considered a disproportionate interference with the right to privacy in public spaces and therefore contrary to human rights. It is also a tool that can be used to monitor any movement and suppress any form of protest.

Facial recognition AI discriminates in particular against women and People of Color, for whom the systems have higher error rates. The regulation of such biometric systems was the central point of contention in the negotiations. The regulation now contains a fundamental ban on the use of biometric surveillance in public spaces which, owing to numerous exceptions, amounts more to a permission with restrictions. These restrictions do not apply to border controls, because of the explicit exception from the definition of public space. And the exceptions for other places are likewise designed in such a way that they allow extensive surveillance of people on the move.

As studies show, the use of surveillance technology in border areas leads refugees to choose even more dangerous routes – for fear of illegal pullbacks and pushbacks or violent attacks, for instance – and often to pay with their lives. A study published in the Journal of Borderlands Studies found a “significant correlation between the locations of border surveillance technology, migrant routes and the locations of mortal remains.”

Migration authorities are under no obligation to make their use of high-risk AI transparent 

Furthermore, migration authorities are exempt from transparency requirements. The AI Regulation provides for a database in which public authorities must register the high-risk AI systems they use; the “information shall be accessible and publicly available in a user-friendly manner” (Art. 71 para. 4). Anyone dealing with a public authority should thus be able to find out whether AI was involved in deciding on, say, an application they submitted. Migration and security authorities, however, are exempt from this obligation.

This exception makes it almost impossible for refugees, human rights organizations and journalists to verify the use of high-risk AI in the area of migration – which in turn makes it more difficult to take legal action against wrong decisions and discrimination. How important this can be is shown by the case of a visa streaming algorithm in the UK, which was used to automatically screen all visa applications from 2015 to 2020 and was shut down following a complaint by NGOs. Due to “feedback loops” – the algorithm based its decisions on, among other things, a list of “problematic” nationalities, and its decisions in turn influenced that list – members of certain nationalities almost never received a visa, regardless of their individual case.
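
Such a feedback loop is easy to reproduce. The nationalities, thresholds and refusal rates in the following sketch are all invented – the Home Office system itself was never published – but the dynamic is the one described above: the list of “problematic” nationalities is updated from the algorithm's own refusals and therefore confirms itself.

from collections import defaultdict
import random

random.seed(0)

# Invented starting point: share of past refusals per nationality.
refusal_history = defaultdict(lambda: 0.1)
refusal_history["X"] = 0.4  # nationality "X" starts with a worse record

def risk_rating(nationality):
    # Nationalities above the (invented) threshold are flagged "red".
    return "red" if refusal_history[nationality] > 0.3 else "green"

def decide(nationality):
    # "Red" applications face extra scrutiny and are mostly refused,
    # regardless of individual merit; "green" ones are mostly granted.
    if risk_rating(nationality) == "red":
        return random.random() < 0.1
    return random.random() < 0.8

for year in range(5):
    for nat in ("X", "Y"):
        granted = [decide(nat) for _ in range(1000)]
        # The algorithm's own refusals update the history it relies on:
        refusal_history[nat] = 1 - sum(granted) / len(granted)
    print(year, {n: round(refusal_history[n], 2) for n in ("X", "Y")})

Nationality “X” starts with a slightly worse record, is flagged “red”, is therefore almost always refused – and those refusals keep it “red” indefinitely, no matter the merits of any individual application.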

When national security becomes a blank check for human rights violations

A Europe-wide NGO alliance, the #ProtectNotSurveil Coalition, has repeatedly drawn attention to these double standards and called for the protection of refugees. In the negotiations, however, a discourse of securitization prevailed, one that now dominates the migration and domestic policy of many European countries. It is characterized by the fact that refugees are seen less as people entitled to rights than as potential “security problems”.

This attitude runs through the regulation and is also reflected in a fourth, particularly sweeping exception, added only in the final stages of the negotiations: the regulation generally does not apply to AI systems developed or used for national security purposes. This is a blank check for abuse, because the concept of national security is not clearly defined; every EU country and every (future) government will interpret it differently. In Hungary, for example, a legislative package to suppress civil society was justified in 2018 with reference to national security. The most recent elections to the European Parliament massively strengthened right-wing populist and far-right parties that portray migration and flight as a “threat to national security”. There is a danger that these forces, should they come to power, will use prohibited or strictly regulated AI in a non-transparent and unregulated manner, with “national security” as their wild card.

Conclusion

With the AI Regulation, the European Union is leaving almost every door open to automatically monitor, categorize and fend off people on the move. Of course, digital technologies could also be used to protect refugees. The sea rescue organization “Sea-Watch”, for example, installs cameras on its ships to document misconduct by the Libyan coast guard or by Frontex vessels, and it films pushbacks and pullbacks from the air. This, in turn, leads to the further criminalization of the rescuers: Italy, under post-fascist Prime Minister Giorgia Meloni, is trying to prevent such civil-society “counter-surveillance” by decree. “Sea-Watch” has announced that it will take legal action against this in the Italian administrative courts.

People on the move also use digital technologies themselves to coordinate and navigate their difficult journeys. In the upcoming national implementations of the AI Regulation, civil society organizations across Europe are campaigning for a stricter ban on facial recognition technology and for more transparency from migration and security authorities. The regulation also introduces a right of appeal for natural and legal persons, which refugees and NGOs will certainly use to hold those responsible accountable for violations of the regulation. The AI Regulation is not the good news it should have been for refugees – but there are many small pieces of good news that give hope for improvement.