Who actually owns the data on social networks?

Status messages, photos, texts, comments; information about my favorite movies, bands, and books; links from articles I’ve read; events I’ve attended; likes and reactions; activities in groups; which sites I like and look at; who I’m friends with – and that’s just a fraction of the data that accumulates when I use a social network like Facebook.

Such data is often public – many users put their content online for all to see. This means that not only other private users but also secret services and law enforcement agencies can directly exploit this data – either by observing individual users online or, collectively, by analyzing large data volumes through text mining. Seemingly private data is also exploited, however – not just by governments and government bodies but equally and, above all, by companies.

On top of the data that users contribute themselves comes data about where I log in from, how often I’m on the site, which friends I chat with and what we write in those chats (Facebook chats are not end-to-end encrypted anyway), and how I’ve grouped my friends (family, close friends, acquaintances). All of this happens on my smartphone as well, so the operating company also knows where I currently am and how I’m using the device.

Not to forget that the so-called social sharing buttons embedded below articles on numerous websites – allowing me to like or share them instantly – are used by Facebook and other providers to track which websites I visit. I don’t even need to be logged in for this to happen, nor do I need a Facebook account: unique advertising IDs make it possible to record user movements and assign them to a specific profile.
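To make the mechanism concrete, here is a deliberately simplified Python sketch of what a tracking endpoint behind such a button conceptually does: every page that embeds the widget triggers a request carrying the page being visited and an identifying cookie or advertising ID, and the provider only has to append that pair to a profile. All names and data here are hypothetical, not Facebook’s actual infrastructure.

```python
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical, simplified model of a tracking endpoint behind a "share" button.
# profile_id could be a cookie, a login session, or an advertising ID.
browsing_profiles = defaultdict(list)

def record_widget_hit(profile_id: str, referrer_url: str) -> None:
    """Log that the browser identified by profile_id loaded the widget on referrer_url."""
    browsing_profiles[profile_id].append(
        {"url": referrer_url, "time": datetime.now(timezone.utc).isoformat()}
    )

# Every page that embeds the button triggers one such request, logged in or not.
record_widget_hit("ad-id-4711", "https://news.example/politics/article-123")
record_widget_hit("ad-id-4711", "https://dictionary.example/word/privacy")
record_widget_hit("ad-id-4711", "https://shoes.example/sneakers/model-x")

print(browsing_profiles["ad-id-4711"])  # the provider's view of one user's browsing
```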

Online advertising is everywhere

In principle, surveillance by government agencies is regulated by law, and operators of social media platforms are bound by rules on which data they must provide to whom, and when (in practice, it’s a different story). The use of data by companies is, firstly, far more sweeping and, secondly, harder to regulate and, above all, harder to control.

Countless companies engage in user tracking. The major players among them are Facebook and Google, with the latter a relatively late adopter of web tracking. But why do all this? It’s quite simple: by tracking user movements, they can show each user customized advertising. This is fully in line with the diagnosis sociologist Zeynep Tufekci delivered in her TED Talk in late 2017: “We’re building a dystopia just to make people click on ads”.

What happens with this data?

Here’s an example: after I’ve looked at a specific type of shoe in an online store, I’m repeatedly shown the same or similar products on all kinds of websites – news platforms, online magazines, online dictionaries. To do this, advertising networks increasingly employ finely tuned segmentation. This makes it possible, for example, to target women aged between 35 and 45 who earn a monthly income of between 2,500 and 3,000 euros, live in a major city and have no children. The targeting can even be narrowed down by level of education and areas of interest.
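Such a segment can be pictured as nothing more than a filter over profile attributes. The following sketch uses invented profile records and the criteria from the example above (women aged 35 to 45, 2,500 to 3,000 euros per month, major city, no children); the field names and data are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    user_id: str
    gender: str
    age: int
    monthly_income_eur: int
    city_size: str        # e.g. "major", "medium", "rural"
    has_children: bool
    interests: list[str]

def in_segment(p: Profile) -> bool:
    """Matches the example segment described in the text."""
    return (
        p.gender == "female"
        and 35 <= p.age <= 45
        and 2500 <= p.monthly_income_eur <= 3000
        and p.city_size == "major"
        and not p.has_children
    )

profiles = [
    Profile("u1", "female", 38, 2700, "major", False, ["running", "travel"]),
    Profile("u2", "male", 41, 2900, "major", False, ["cars"]),
    Profile("u3", "female", 44, 2600, "rural", True, ["gardening"]),
]

targets = [p.user_id for p in profiles if in_segment(p)]
print(targets)  # ['u1'] -> the users who would be shown this campaign
```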

Facebook’s competitive edge over other providers is that, through the data users contribute to the platform more or less of their own free will, it holds very precise information about who we are. Big data analyses thus make it possible to deduce personal information, such as sexual orientation, from seemingly harmless data – for example, which films I like to watch and who I’m friends with. Of even greater interest to companies is information about consumers’ purchase decisions.

As far back as 2012, the New York Times reported on a practice used by the US supermarket chain Target, which linked customer data from multiple sources to create profiles and target customers more precisely. Companies combine online and offline behavior to generate predictions – how a customer will most likely behave – based on the behavior of other customers with a similar profile. Facebook is also said to be buying in offline purchase data, in addition to the data that accrues through use of the platform (online and mobile), in order to refine its profiles.
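A minimal sketch of the “customers with a similar profile” idea, under very simplified assumptions: represent each customer as a vector of purchase counts, find the most similar known customers, and predict what the new customer is likely to buy next. Real systems use far richer features and models; the categories and data here are invented.

```python
from math import sqrt

# Purchase counts per product category (invented data).
known_customers = {
    "c1": {"diapers": 4, "baby_lotion": 2, "vitamins": 1},
    "c2": {"beer": 3, "chips": 5},
    "c3": {"diapers": 5, "vitamins": 2, "baby_lotion": 1},
}

def cosine_similarity(a: dict, b: dict) -> float:
    """Similarity between two sparse purchase-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in common)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def predict_next_purchases(new_customer: dict, k: int = 2) -> list[str]:
    """Suggest categories bought by the k most similar known customers."""
    ranked = sorted(
        known_customers.items(),
        key=lambda item: cosine_similarity(new_customer, item[1]),
        reverse=True,
    )
    suggestions: dict[str, float] = {}
    for _, purchases in ranked[:k]:
        weight = cosine_similarity(new_customer, purchases)
        for category in purchases:
            if category not in new_customer:
                suggestions[category] = suggestions.get(category, 0.0) + weight
    return sorted(suggestions, key=suggestions.get, reverse=True)

print(predict_next_purchases({"diapers": 2, "vitamins": 1}))
# ['baby_lotion'] -- inferred from customers with a similar profile
```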

Is this even allowed?

In Germany – and the European Union – the collection of personal data is subject to rules and regulations. Exactly how meaningful these rules are in terms of profiling and personalized advertising is a source of contention. The EU General Data Protection Regulation (GDPR), which came into effect at the end of May 2018 (and has, thus far, primarily come to the attention of users because they’ve been asked to re-confirm their newsletter subscriptions), in principle allows companies to generate anonymized profiles. 

Such profiles are assigned to a single individual, but without a name or identity being linked to them. The conventional form of data protection, which protects personal data, therefore does not apply. Personal data is data that can be directly associated with an individual: their name, date of birth, address. In addition, there is data that is personally identifiable: it makes no direct reference to an individual but can very easily be associated with one – an e-mail address, for example, or a telephone number or IP address. In combination with other data, it quickly leads to the identification of an individual.
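The distinction matters in practice because such indirect identifiers link records just as reliably as a name does. A small sketch, assuming two invented data sets that both contain an e-mail address: even if the address is replaced by a hash, the same hash appears in both sets and the records can be joined back together.

```python
import hashlib

def pseudonymize(email: str) -> str:
    """Replace an e-mail address with a stable hash (a common but weak 'anonymization')."""
    return hashlib.sha256(email.lower().encode()).hexdigest()[:12]

# Two invented data sets from different contexts.
newsletter_signups = [{"email": "anna@example.org", "topic": "fitness"}]
shop_orders = [{"email": "anna@example.org", "item": "running shoes"}]

pseudo_signups = {pseudonymize(r["email"]): r["topic"] for r in newsletter_signups}
pseudo_orders = {pseudonymize(r["email"]): r["item"] for r in shop_orders}

# The name is gone, but the records still join on the identical pseudonym.
for pid, topic in pseudo_signups.items():
    if pid in pseudo_orders:
        print(pid, topic, pseudo_orders[pid])  # one person, two contexts, linked
```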

Specially protected categories include health data, information on ethnic origin, political, religious, and trade union affiliation, as well as sexual orientation. Such data may only be stored and processed in exceptional cases. Studies have shown, however, that even seemingly neutral data, such as buying habits or preferences, can be used to deduce very intimate personal circumstances. Target, for example, was able to predict with a good degree of certainty which of its customers were pregnant and then target them specifically, as young parents are a prized target group.

A study jointly conducted by researchers at the MIT Media Lab, Rutgers University and Aarhus University in Denmark found in 2015 that individuals could be precisely identified from as few as four credit card transactions, even if their personal data had been removed. Anonymization is, therefore, only effective to a certain degree.
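The intuition behind the study can be reproduced with a toy data set: count how many people in a hypothetical transaction log match any given combination of shop and day, and the candidate set shrinks to a single person after only a few known points, even though no names are stored. This is an illustrative simplification, not the study’s actual method or data.

```python
# Toy illustration of reidentification: each transaction is (person, shop, day).
# Only pseudonymous IDs are stored -- yet a handful of known points
# (from a receipt, a geotagged photo, ...) can single someone out.
transactions = [
    ("p1", "bakery", "Mon"), ("p1", "bookshop", "Tue"), ("p1", "cafe", "Wed"), ("p1", "cinema", "Fri"),
    ("p2", "bakery", "Mon"), ("p2", "bookshop", "Tue"), ("p2", "gym", "Wed"),
    ("p3", "bakery", "Mon"), ("p3", "cafe", "Wed"), ("p3", "cinema", "Fri"),
]

def candidates(known_points: list[tuple[str, str]]) -> set[str]:
    """People whose history contains every known (shop, day) point."""
    people = {person for person, _, _ in transactions}
    for shop, day in known_points:
        people &= {person for person, s, d in transactions if (s, d) == (shop, day)}
    return people

print(candidates([("bakery", "Mon")]))                                        # {'p1', 'p2', 'p3'}
print(candidates([("bakery", "Mon"), ("bookshop", "Tue")]))                   # {'p1', 'p2'}
print(candidates([("bakery", "Mon"), ("bookshop", "Tue"), ("cafe", "Wed")]))  # {'p1'}
```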

Who uses this data apart from companies?

The biggest data collectors online – Facebook, Google, Amazon, and Co. – don’t pass the profile data they collect directly on to third parties; doing so would undermine their business model. But user tracking by companies seeking to sell more products is not the only danger, as the Cambridge Analytica scandal shows – wherever data is to be had, abuse is not far away. In 2014, using a psychological quiz app (along the lines of “What type of personality do I have?”) that people could open on Facebook, the data analytics company got 320,000 users to unlock access to their friend lists.

Through a loophole in Facebook’s programming interface, Cambridge Analytica was able to acquire data on some 80 million Facebook profiles, above all in the United States. These are reported to have been used to influence, in particular, the US presidential election. Facebook has since closed the interface in question, but the almost daily reports of security gaps, hacks, and leaks are not exactly cause for optimism that something similar will never happen again.

What can I do about this as a user? The answer, unfortunately, is: not much. For most of us, doing without Facebook or Google, or selling our smartphone and conducting our lives purely offline, is not an option – and it would presumably only help to a limited degree, because digital technologies have long since become interwoven with our everyday lives. A clean separation no longer seems possible.

And the classic instrument of data protection is itself contested: on the one hand, it can only intervene to a limited extent against companies’ omnipresent collecting mania; on the other, civil rights activists criticize that excessively tight regulation restricts freedom of expression.

We are still in the middle of a digital upheaval, and debates within society remain as necessary as ever if we are, at some point, to arrive at workable regulation. Campaigns such as “Europe vs. Facebook” by the Austrian online activist Max Schrems – who in 2015 succeeded in having the Safe Harbor Agreement, which governed the transfer of data between the EU and the US, struck down – show that civil society involvement is essential and potentially of greater benefit than government regulation alone.

But who monitors the government bodies themselves? The work of parliamentary oversight committees has been a source of repeated criticism. The Parliamentary Oversight Panel (PKGr), whose task is to oversee the intelligence services in Germany, is regularly described as “toothless”. Above all, the Panel has no means of imposing sanctions if – as has occurred in the past – the intelligence services lie to the members of parliament (for reasons of state security, of course).

More info online

Valie Djordjevic, David Pachali, Alexander Wragge: Who owns my data?, 13.12.2018, iRights.info, https://irights.info/artikel/wem-gehoren-meine-daten/14308

Ingo Dachwitz, Tomas Rudl, Simon Rebiger: FAQs: What we know about the scandal surrounding Facebook and Cambridge Analytica, 21.03.2018, netzpolitik.org, https://netzpolitik.org/2018/cambridge-analytica-was-wir-ueber-das-groesste-datenleck-in-der-geschichte-von-facebook-wissen/

Charles Duhigg: How Companies Learn Your Secrets, 16.02.2012, New York Times, https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?pagewanted=all&_r=0

Yves-Alexandre de Montjoye, Laura Radaelli, Vivek K. Singh, Alex Pentland: Unique in the shopping mall: On the reidentifiability of credit card metadata, January 2015, Science 347(6221): 536-539, https://www.researchgate.net/publication/271591449_Unique_in_the_shopping_mall_On_the_reidentifiability_of_credit_card_metadata