Gendered disinformation: the US can’t be content with content solutions

Commentary

In regulating online spaces, if we treat the problems of harmful content as separate from the problems of harmful systems, we risk solving neither. Addressing disinformation and online violence against women requires a holistic regulatory response.
“It should be revoked because it is not merely an internet company. It is propagating falsehoods they know to be false.”

This was Joe Biden in 2020, talking about Section 230 of the United States Communications Decency Act. Section 230 currently protects companies in the United States from liability for user-generated content hosted on their platforms. It became a target under the previous administration after Trump falsely accused platforms of ‘censoring’ conservative views and pushed for increased platform accountability for (what he saw as) excessive content moderation. Biden, on the other hand, wants increased accountability for inadequate content moderation, particularly to tackle harms like disinformation. Combine this with the promises of Biden’s Plan to End Violence Against Women - including a taskforce to look into how extremism, online abuse and violence against women are linked - and you might think that things look promising for tackling gendered disinformation in the United States.

But you know what they say about good intentions. 

Content liability is a high-risk, likely ineffective move against disinformation

Gendered disinformation content is highly varied. Making platforms liable for every piece of it - from a mean post to a false news article from an obscure blogger to a hashtag targeting a politician to a derogatory meme - would incentivise serious over-moderation. If you have to catch every piece of ‘bad’ content, you will inevitably take down innocuous content along with it - and because moderation algorithms are trained on biased data, that collateral damage often falls on content posted by members of marginalised groups. Not to mention that these campaigns are notoriously good at evolving to evade algorithmic detection, the only kind of moderation possible at that scale.
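
To make the over-moderation trade-off concrete, here is a toy sketch in Python - invented numbers and a made-up classifier, not any platform’s actual moderation model. Because the scores a classifier gives harmful and benign posts always overlap, pushing the threshold low enough to catch nearly all harmful content necessarily sweeps up a large share of innocuous content too.

    # Toy illustration only: hypothetical classifier scores, not real platform data.
    import random

    random.seed(0)

    # Harmful posts tend to score higher than benign ones, but the two
    # distributions overlap - as they do for real moderation models.
    harmful = [random.gauss(0.7, 0.15) for _ in range(10_000)]
    benign = [random.gauss(0.4, 0.15) for _ in range(90_000)]

    def rates(threshold):
        recall = sum(s >= threshold for s in harmful) / len(harmful)
        false_positives = sum(s >= threshold for s in benign) / len(benign)
        return recall, false_positives

    for t in (0.6, 0.5, 0.4, 0.3):
        recall, fp = rates(t)
        print(f"threshold {t:.1f}: catches {recall:.0%} of harmful posts, "
              f"wrongly flags {fp:.0%} of benign posts")

Demanding near-total recall, as blanket liability effectively does, forces the threshold towards the bottom of that table - and the wrongly flagged posts are not evenly distributed, for the reasons above.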

Treating ‘social media’ as a monolith of bad content to be regulated like any other bad thing - bad food, bad medicines, bad behaviour - sets up the state as the arbiter of what is good online, what is safe, what is correct, what is true, with the power to police and enforce it. This is an image of the state that Biden is likely to be keen to pursue, especially having won the election that his campaign termed the ‘battle for the soul of the nation’.

But the United States (along with many other countries) has a very recent history of its leaders engaging in gendered disinformation, and of state-aligned gendered disinformation campaigns escalating online. From Trump’s racist and misogynist attacks on four Democratic congresswomen (‘the Squad’), to the campaigns that weaponised misogyny, racism and transphobia against Vice-President Harris during the election, not to mention the ‘Pizzagate’ campaign waged against Hillary Clinton: gendered disinformation is baked into how the democratic process operates in the United States and around the world. It is not a problem that exists outside of political institutions: it is something that policymakers and voters alike engage in.

In this context, a government simply doubling down on content-focused solutions - rules about how much of a certain type of content is allowed - is a risky strategy.

Why we shouldn’t give up on legislation altogether

Legislation certainly can’t be the whole answer to gendered disinformation, but that’s not to say it can’t help. 

Gendered disinformation campaigns require scale and engagement. One offensive tweet or one obscure false story is not a campaign. A campaign is the story that gets picked up, shared and distorted even further; the offensive comment that spawns a series of memes; the clicks and likes and shares. It is interaction, suggestion, promotion and ad revenue that keep the disinformation ecosystem churning along.

And these are systems that increasingly rely on artificial intelligence and algorithmic analysis. Rather than treating AI regulation as an essentially separate conversation from reducing harmful online content such as gendered disinformation, we should more fully explore how regulating the former could support the latter. And so there is perhaps some hope on offer - not through Section 230, but through the Algorithmic Accountability Act.

The potential for algorithmic regulation

Originally put forward in 2019, the Algorithmic Accountability Act now looks to be making a comeback. The proposal failed to advance in either the Senate or the House, but is expected to be reintroduced this year. The Act would require large companies to conduct risk assessments for ‘high-risk’ algorithms: those which use personal data to make decisions that could be biased or discriminatory. Its focus is on systems such as facial recognition used in policing; content curation, recommender and search algorithms are not the primary target.

There is growing governmental interest in AI, from Biden’s establishment of the National Artificial Intelligence Research Resource Task Force to, across the Atlantic, the draft EU AI regulation. The European bill looks to ban ‘unacceptably’ risky AI systems and introduces graduated transparency and oversight requirements for ‘high-risk’ and ‘low-risk’ systems, with a broad remit across areas such as the use of AI in education, employment, identification and scoring.

For both regimes, it seems there is room for content curation or recommendation algorithms to be brought into the scope of regulation. In the US proposals, this is because decisions about content curation could be made on the basis of information collected about users (for instance, not promoting content shared by users who have been algorithmically determined to be likely part of a disinformation campaign). In the EU, some algorithms could fall within the definition of prohibited manipulative or exploitative practices; but which ones, and how, is not yet clear.

This is an opportunity. The backbones of the AI regulation proposals are risk assessments, transparency requirements and, in some cases, independent testing and algorithmic auditing. As they stand, both proposals have been criticised for having limited transparency requirements, particularly the lack of any requirement to make risk assessments (US) or conformity assessments (EU) public. Strengthening the transparency requirements on platforms for AI systems writ large, not only for a few subgroups of systems, would help increase both our understanding of, and platform accountability for, systems that lead to exclusion, suppression and violence online. If we are interested in democratising power over the online world without making the state the arbiter of speech online, then requiring more open systems, whose effects can be openly measured and whose impact can be analysed, is a better route than demanding content takedowns.
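
As a hedged illustration of what ‘openly measured’ could mean in practice, the sketch below (in Python) imagines an entirely hypothetical transparency dataset - the file name, columns and labels are invented for this example, and no platform publishes exactly this data today. It shows how independent researchers could compare how heavily a recommender system amplifies abusive content targeting women in public life versus everything else.

    # Hypothetical audit sketch: 'reach_log.csv' and its columns are invented
    # for illustration; no platform currently publishes exactly this data.
    import csv
    from collections import defaultdict

    totals = defaultdict(lambda: {"posts": 0, "recommended_reach": 0})

    with open("reach_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            # Each row: one post, whether independent researchers labelled it as
            # abuse targeting a woman in public life, and how many of its
            # impressions came from the recommender rather than from followers.
            group = "abusive, targeting women" if row["targets_woman_abusively"] == "1" else "other"
            totals[group]["posts"] += 1
            totals[group]["recommended_reach"] += int(row["recommended_impressions"])

    for group, t in totals.items():
        average = t["recommended_reach"] / max(t["posts"], 1)
        print(f"{group}: {average:.0f} recommender-driven impressions per post")

Numbers like these would not remove a single post, but they would make the systemic question - what does the platform choose to amplify, and at whose expense? - answerable and contestable in public.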

And we needn’t stop at transparency: many have set out more substantial requirements for effective AI regulation - from putting civil rights at its heart to creating an algorithmic bill of rights that enshrines principles such as consent and redress in AI development.

This isn’t a panacea by any means. ‘Fix the algorithm’ can’t solve a problem rooted in social attitudes, nor prevent every attack by malign actors, but it is part of the solution. It would represent a shift in focus from seeing ‘online problems’ as problems of individual people saying bad things online, to recognising the role of technology and corporations in driving what happens online. Taking steps to redress these power imbalances would be a start.

 

This article was first published by Heinrich Böll Stiftung Brussels