2020 has proven that the human experience of technology from society's margins is in dire need of transformation, yet that transformation is not happening by itself.
If a tumultuous 2020 has taught us anything, it is that the digital world is no distant universe apart from our everyday reality. Still less is it a magical frontier where human differences fall away, rendered equal by the common need for connection. For many people, inequities in the human condition have been worsened by technology. The multi-ethnic, multi-racial, gender-equal fantasy of the old "Star Trek" television series has given way to an era in which the majority of meaningful life activities take place online, but the risks and consequences of participation vary dramatically based on one's skin color, gender, and ethnic, racial or sexual identity. The custom-built, computer-facilitated lifestyle of the old "Jetsons" cartoon is not the digital universe that lucky online denizens now inhabit. Instead, the digital landscape of the twenty-first century has become a place where marginalised people and communities struggle to gain access, struggle to gain visibility, and struggle to avoid harm and exploitation through their digital engagement.
Who is at home in the digital realm?
Are women at ease in cyberspace, where "deep fake" technology, "revenge porn" and a range of digitally driven harassment (e.g., "doxxing" and "dog-piling" abuse) disproportionately target them? Are ethnic minorities at ease, such as the Uyghur and Rohingya Muslims whose online interactions have been surveilled and sabotaged by social-networking platforms and text-messaging apps that police their activities? How comfortable are older digital navigators with being tracked and targeted online for hearing aids, adult diapers and life-insurance policies?
The digital world has not been built for all of us. If anything, it mirrors all of the problems of the physical world, and more, when you consider who bears the brunt of its challenges, online and offline, and why. In every area that counts in the human lifecycle, from health, education, finance and employment to physical safety and security, people suffer the effects of racism, sexism, classism, ableism, queerphobia and xenophobia. Given the parallels between the real-world erasure and disempowerment of women and minorities and the same phenomenon online, we should question who benefits from engagement with digital tools, and how destructive those tools can be for those who do not benefit. For example, digital systems can become self-referential: when "verified" social media users of "public interest" are confirmed on the basis of open-sourced references such as Wikipedia, an overwhelmingly male editor base consistently fails to support the creation of public information pages about women's accomplishments. The same is true of other minoritized groups. We are increasingly faced with human vetting, curation and decision-making processes controlled by one narrow group of racially, economically, geographically and gender-conforming people. Those excluded from the "in" group of digital architects are, too often, also excluded from equal participation in the digital world.
Because digital products and services are no longer limited to the realm of video games, digital exclusion is no longer limited to whether "Pac-Man" and "Space Invaders" are marketed primarily to boys rather than girls. The architects of software-powered games have become the principal architects of our software-powered lives, and exclusion from their ranks means more than video games lacking fictional avatars from different racial backgrounds. Digital exclusion today means British digital passports being introduced with facial-recognition software that fails to recognise Black and brown faces. It means test-proctoring software, rolled out at thousands of schools, that obliges Black students to take online tests while shining bright light directly at their faces, lest the proctoring algorithm write them off as "absent." It also means the repeated detention or arrest of Black civilians (sometimes under legal age), based on inaccurate facial-recognition software that is widely used by local and international law enforcement.
A More Inclusive Digital Sphere?
Decolonization of the digital sphere means untethering the digital world from the views, preferences and prejudices of a tiny minority, so that those most likely to experience exclusion or harm from digital tools and systems are not also excluded from the tech industry's design processes. An incremental approach to change (token inclusion here and there) will do little more than slightly improve the bad optics of a deeply segregated industry. To be clear, increased inclusion of women in the digital world, in terms of access, privacy, protection from online abuse and so on, is urgently needed. However, a feminist push for women's digital rights will typically embody a racialized, often geographically specific depiction of "normal" female experience: a white, straight, cisgender person of Western origin. This homogeneity can be traced to the absence of ethics, diversity and harm-reduction training amongst digital advocates and technologists alike. Decolonization is therefore also about reducing the impact of well-intentioned but homogenous women advocates, whose experience of feminism often does not reflect the lived experiences of Black, Muslim, queer, poor, disabled or Global South-resident women, let alone women who claim several of these identities at once.
We are too far gone to simply "lessen" the harms of technology that enables the stereotyping and punishment of individuals based on group associations. Construction of a more inclusive digital sphere requires a class-, culture- and gender-inclusive approach to systemic change, i.e., the centering of those who have been rendered most marginalised and vulnerable in the digital environment to date. What is needed is a wholesale rethinking of how to bridge the gap between the digital haves and have-nots, whether those groups are conversant with technology or not. We need to decolonize the concept of the digital world as it exists today because, unchallenged and unaltered, it will remain an exclusionary simulacrum of the real-world imbalances of power that created it.
That a racialized, often gender- and geographically specific depiction of "normal" human experience is embedded in digital products and services is why the Apple health app once failed to consider menstruating women; why most pulse oximeters perform worse on dark-skinned people; why Zoom's background-masking feature did not initially work for Black users; and why Facebook's "safety check" for areas around security incidents was unavailable to Kenyans and Lebanese in proximity to bomb blasts, but available to Parisians and Americans. Intentionally or not, the "depiction" of humanity by tech developers in search of "test personas" often amounts to a filtering exercise. Absent a broadly intersectional and feminist evaluation of these personas (and of the products and services they are used to produce), the digital world will remain as hostile to women and marginalised persons as the biggest tech companies already are.
The Digital Universe and The World We Know
The digital world does not exist in a vacuum, even if its architects behave as if it does. One mechanism that has long propped up digital innovators in most, if not all, major private tech companies is the venture capital industry, which remains as white, as male, as wealthy and as limited in its network reach as a group of country-club denizens in search of a fourth player for bridge. That is the real-world model for a digital world in which women and minorities have no equal say. The truth is that tech innovation now rarely comes from state- and university-research lab spin-offs. Instead of the IBMs, Bell Labs and Siemenses of the world developing cutting-edge tech within unionized, labor-law-protected hubs, the twenty-first century has given rise to an army of start-up businesses funded by privately held trusts that are, in turn, run by venture and private-equity financiers with little love for, or inclination to respect, labor laws.
That the capital behind the tech industry is mostly male should not shock anyone. What is shocking is government and private-sector acceptance of products and services developed and funded by a clone-like cabal of experts, without any standard process for evaluating the negative impact of these digital innovations on their constituents. Who is served by technology is a good question to ask, but who is victimised by it is a better one, starting with the women and marginalised persons who are least well represented within the tech vacuum.
A Digital Realm of Possibility?
If we are to remake the digital world in ways that make it a realm of possibility for all, we must demand broad representation of marginalised persons in the testing and implementation of new technologies. As feminists tired of watching the big tech giants diminish, disrespect and discard the most diverse women on their staffs, we must insist that inclusion be the yardstick by which digital products are evaluated. Much more can be achieved by demanding that local and national agencies fund ethical-tech teams that operate independently of big tech. These teams should be composed of experts from diverse educational, racial, gender and social backgrounds, and tasked with ensuring that the digital definition of "human" is not limited to a singular "norm." At the very least, normalizing the existence of a non-commercial, intersectional tech-ethics function will require state and individual actors to recognise that overlapping intersections of human identity exist, and require protection, both in and outside the digital realm.