A shift in digital borders

Mobile phone checks, spying on refugees and the unemployed on social networks, automated monitoring systems: the broadening of surveillance is massively impacting vulnerable sections of the population, pushing them – and eventually all of us – to the margins.

Mobile phones are being read out and WhatsApp histories trawled in an attempt to establish refugees' identities. State benefits often serve as a legitimation for digital surveillance: welfare recipients, too, are forced to accept invasive intrusions into their privacy. Their financial circumstances and living conditions, relationships, and contacts are all subjected to thorough screening.

Case managers are even spying on welfare recipients on social networks such as Twitter and Facebook, as came to light in 2015. Although the Federal Employment Agency officially condemns online spying and blocks social network sites on its case managers’ PCs, the practice is an everyday occurrence in numerous local offices. During counseling appointments, welfare recipients have been confronted with digital evidence of casual earnings made through eBay transactions, for example, with case managers citing their online communications.

More and more data is being collected, databases are being merged, and the rights to privacy and data protection are increasingly hollowed out – with the expanded surveillance first hitting the most vulnerable sections of the population, such as the unemployed or refugees.

Intrusions into privacy are against the law

“International human rights treaties and the Basic Law guarantee everyone the right to have their privacy respected and the right to determine how their information is used, irrespective of their nationality or residency status,” says Eric Töpfer of the German Institute for Human Rights in Berlin. “The state has a right to know who it is dealing with. However, this does not justify placing every asylum seeker or other benefit recipient under blanket suspicion. If they can credibly prove their identity or entitlement to benefits, there is no need to encroach on their right to privacy.”

Nevertheless, security concerns are being instrumentalized to justify far-reaching incursions into digital data, and new laws are authorizing digital surveillance. “The core issue is ignorance of refugees’ right to privacy and disregard for the risks of their growing ‘datafication’,” according to Töpfer. “This ignorance is not new. It merely manifested itself dramatically in the statutory provisions on the evaluation of data carriers in the summer of 2017.”

The “Law for the better enforcement of the obligation to leave the country” (“Gesetz zur besseren Durchsetzung der Ausreisepflicht”), which came into force in 2017, tightens Germany’s asylum law and expands the authorities’ access to smartphones and other devices belonging to refugees. Smartphones are becoming a substitute for ID cards: if refugees cannot verify their identity and origin through identification documents, the Federal Office for Migration and Refugees (BAMF) is allowed to read out and evaluate data from devices such as smartphones, laptops, or USB sticks without a court order.

The refugee organization Pro Asyl has criticized the “mass readout of data even before a hearing,” arguing that it places refugees under blanket suspicion. “The law creates the legal basis for the transparent refugee: there is a legitimate fear that there will be no way to check whether private data such as contacts with lawyers, doctors, or facilitators is also being tapped,” according to the organization. In “dangerous situations,” BAMF is even permitted to forward sensitive information, such as health data, to other government agencies.

Minimal success rates with smartphone checks

The success of smartphone checks has so far been minimal, however: in response to a question from the left-wing Die Linke party, BAMF stated that between January and July 2018, only two percent of the evaluations uncovered false statements. Digital data was read out in 7,000 cases, of which 2,000 were evaluated – meaning false statements were detected in only around 40 cases. Two-thirds of the data checks led to no findings at all.

Justice activist Harsha Walia, who advises refugees in Canada through her organization “No One is Illegal,” takes a critical view of such smartphone checks. “Telephone records, call lists, and other digital information play a crucial role in asylum applications when it comes to credibility and the recognition of a refugee,” says Walia. “Mobile data is frequently accessed – and though this is considered voluntary in Canada, a great deal of pressure and force is exerted on refugees to release their data.”

As an asylum expert, she has observed that the recognition process disadvantages women in general – and digital surveillance amplifies the effect, according to Walia. “Men tend to be taken more seriously, and they’re more readily believed,” she says. “And women use technologies differently, depending on where they come from. They tend to share less information with their families about how bad their situation is, because they want to protect their children and family from these circumstances – and this is often used against them.” Officials then confront asylum seekers with, for example, a text message in which they told their family they were doing well.

Refugee policy has become a digital testing ground in which border control systems such as drones and radar, along with other new digital technologies, are being deployed.

Artificial intelligence does not help either

The Canadian government is planning a pilot project to automate the asylum process. “Canada is working on a massive AI system to handle the refugee process, in which recognition will rely increasingly on artificial intelligence,” says Walia. Software, as a supposedly objective instrument, is intended to disarm critics and make the process fairer.

To date, the recognition of refugees in Canada has been shaped by numerous local factors. “It depends entirely on the individual case: some offices have a two percent recognition rate, others 70 percent, and preferences vary from country to country,” Walia criticizes. “Instead of tackling this form of discrimination systemically, artificial intelligence is now supposed to solve the problem.”

Cases in other countries have shown that automation tends to perpetuate rather than resolve discrimination and social inequality: software designed to support predictive policing, or to calculate the probability of offenders reoffending, has in fact reinforced racism within police forces and the justice system. In the USA, for example, a misconfigured program for calculating care benefits resulted in benefits being unjustly cut for people in need of care.

Wrong decisions based on digital evaluations have serious consequences, and the underlying decision-making processes are often opaque and difficult for outsiders to comprehend. Researchers at the Citizen Lab of the University of Toronto regard migration and asylum law as a “high-risk laboratory” for automation: “This context is particularly important because vulnerable and under-resourced communities such as non-citizens often have access to less robust human rights protections and fewer resources with which to defend those rights,” their report warns. The shift in digital borders hits vulnerable sections of the population hardest – before the surveillance then extends to other areas and social groups.