During the IMPAKT festival 2023, lawyer Jan Fermon and artist Jonas Staal presented their collective action lawsuit Collectivize Facebook. The lawsuit aims to force legal recognition of Facebook as a public domain, owned and controlled by its users. During their panel talk they were asked the following question: »Do your Facebook posts providing updates on the developments of the lawsuit ever get downgraded in the feed or effectively censored by Facebook?« Their striking answer was that they had never witnessed downgrading on the level applied to their Palestine-related posts.
Festivals such as IMPAKT, unfolding in Utrecht, NL, every autumn since 1988, »critically monitoring the influence of the media from the perspective of art«, are an essential node in uncovering the vast power structures behind the media we consume on an industrial scale. As we transition into a futuristic and often toxically codependent relationship with the media we use, its critical monitoring becomes increasingly urgent. The discussions taking place during IMPAKT touch large swaths of the global citizenry, and the panel talks refreshingly host artists, activists, researchers, and designers alongside policymakers, politicians, and the occasional corporate executive.
In light of the most recent humanitarian tragedy of the Israeli bombardments of the Gaza Strip (acknowledging victims on both sides alongside acknowledging the unquestionable genocide of the Palestinian people), there has been a wider awareness of the algorithms that influence the visibility of specific content. Around the world, cyber-activism has become one of the main ways of spreading otherwise hard-to-access viewpoints and information, and an important way for communities to take a stance and exercise agency. I recently published an article on the commercially oriented public and private digital sphere; however, what we are now witnessing is the ranking of content that adheres not merely to profit-oriented marketing principles but to very clear political and ideological fault lines. The ideological fault lines should be emphasised, because the shadow banning is unprecedented and for the most part unaddressed.
We discussed shadow banning during the panel Queer Visibility On Social Media, curated by Dunja Nešović in response to reports of shadow banning of LGBTQI+ content and hashtags. The head of TikTok Netherlands, Alexander Jansen, one of the panelists, predictably denied the existence of such schemes. Asked why the content-sorting algorithms are not made public, which would resolve many of the questions of transparency and trust, Jansen pointed to the commercial nature of the advertisement model that sustains the company, maintaining that revealing the algorithms would be detrimental to their enterprise, a logical stance to take. That conversation didn't go far.
As of yet, we do not have a legal framework equipped to investigate harmful content ranking. Attempts such as Collectivize Facebook, artist Julia Janssens's lawsuit against Amazon (regarding the illegal tracking of users' internet activity), or whistleblower Daniel Motaung's lawsuit against Meta (for violations including forced labour, human trafficking and union busting) all aim to 'retool' the law and create legal instruments of accountability, demanding responsibility for unlawful conduct from the commercial data-processing plants of our networks.
According to Chelsea Manning, one of the OG whistle-blowers and the activist who opened the festival, it would be naive to expect governmental bodies to fully regulate these commercial conglomerations. The Deputy Director General on Digitalisation, NL, Roz Roozendaal, who gave a talk earlier in the evening, claimed the same, though for different reasons, I assume. He made it seem as if the disputes around privacy, censorship, and education online are an apolitical responsibility to be shared by many different bodies.
Chelsea Manning, on the other hand, would prefer to focus on the lobbying mechanisms that managed to significantly water down the much-celebrated General Data Protection Regulation (in force since 2018) and the Digital Services Act (in effect since 2023).
The GDPR is a set of regulations understood as one of the first attempts to institutionally guarantee the right to data anonymity and the right to erasure, focusing on minimising the amount of collected data and making companies specify its usage. The DSA curbs targeted advertisements and makes sure that one's identity markers, such as sexual orientation, religion, and political positioning, do not affect the personalisation of advertisement and sorting algorithms. Under the DSA, content moderation should be transparent, as should the notification of accounts that have been banned. So what are the lobbying mechanisms that water down these regulations from the onset, to the extent that they are not fully implemented and enforced?
A term that aptly describes a pervasive aspect of lobbying is the revolving door (its practical application is explained in detail in the documentary The Crime of the Century, part 2, using the example of the accelerated fentanyl crisis in the USA).
»In politics, a revolving door is a situation in which personnel move between roles as legislators and regulators, on one hand, and employees or lobbyists of the industries affected by the legislation and regulation, on the other. It is analogous to the movement of people in a physical revolving door. Political analysts claim that an unhealthy relationship can develop between the private sector and government, based on the granting of reciprocated privileges to the detriment of the nation, and can lead to regulatory capture.«
A telling example of the revolving door in the Palestine case is that of Emi Palmor. After her lifelong career as a lawyer and civil servant in Israel, most recently a five-year tenure as Director General at the Ministry of Justice of Israel ending in 2019, she was immediately selected as one of the first members of the Oversight Board at Facebook in 2020. The Oversight Board is responsible for content moderation and for handling appeals over blocked and removed content, which predictably contributes to the shadow banning and content ranking that favour the Israeli narrative.
Reciprocal privileges between the private advertising corporations on the one hand and the public/state sector on the other allude to industries and beneficiaries that are not associated with social networks but reach far beyond them. Not in the sense of Let's actively facilitate this candidate winning this election (although that too), but rather: Which industries stand to benefit from the prolonging of the armed conflict? How might this situation lead to mutually beneficial gain across various interest groups, and how can content distribution encourage this monetary gain? Specific ideological positioning seemingly necessitates expanding commercial interests.
Chelsea Manning provided a more politically correct explanation for the downplaying of essential content. In her opening speech, she mentioned that because of the advertising business model and increased EU scrutiny, these (social) media corporations want to downplay radicalised identity niches and political content in general, to cultivate a homogeneous audience providing a neutral plane for advertisers. As she put it: »They really just want you to watch a lot of Kim Kardashian videos«; kind of banal, harmless, absurd and advertisement-oriented.
(Now we are veering into major side note territory, but there is some weight to this. The Kardashian mention rang a bell because, I must admit, I do watch quite a few Kardashian offspring videos, and I kind of love them. I love all my useless reels, tbh. The rise of the mid, this very neutral corporate chic and shallowly memefied aesthetic plane, has been noticed by many media analysts. The theory can be further demonstrated by the collapse of good TV, with platforms such as Netflix and HBO churning out tons of really quite remarkably boring series. The only shows garnering excitement are comebacks like The Sopranos, Sex and the City and The Wire. The popular contemporary shows are trashy reality TV competitions or the occasional series that (seemingly ironically) fetishises the mega-rich (The Crown, White Lotus, Succession). A reaction to all this prepackaged shit we deal with on a daily basis is the rising trend of entropy posting, or blurry deep-fried images with miscellaneous and often confusing imagery, grey and domestic and random. End of side note, thank you for indulging.)
Let us get back to the presumed increase in regulation witnessed in the last three years (the GDPR and the DSA). Various types of lobbies affect the decision-making processes as far up as the European data protection authorities. Regarding regulatory policy, as Roz Roozendaal pointed out, it is difficult to navigate between the trifecta of privacy, freedom, and control, indeed a conflict that kept recurring throughout the conversations at IMPAKT in different forms.
One example of what gets lost in this negotiation is demonstrated in the case of Thomas Le Bonniec, who spoke on the panel Whistleblowers, Meet the People Who Made a Difference. Thomas worked in Cork for a short time at an Apple subcontractor responsible for transcribing Siri's recorded conversations, gathered while the app was activated, although much of the transcribing work was dedicated to millions of conversations recorded while Siri was not being actively used. In 2019, Thomas broke the NDA he had signed and quit with a selection of screenshots confirming the long-suspected sonic spying. He released all of this information to various journalists, NGOs, and media outlets, and even wrote an open letter, because what happened with this information in the end was absolutely nothing. I spoke with Thomas during IMPAKT, where he explained in more detail the institutional bodies responsible for this legal lethargy. Watch the video below (3:31). Watch it now, please; it's short and important, and an integral part of this report:
As for where network users can find agency, other than joining class action lawsuits, there is the option of becoming a member of a consumer union. Consumer unions are organisations that can be found in all the EU member states and do everything possible to prioritise consumers. These unions do not necessarily have to deal with tech and digital market dynamics, but often do, since they provide »comprehensive sources for unbiased advice about products and services, personal finance, health and nutrition, and other consumer concerns.« The most basic consumer union in the Netherlands is the Consumentenbond. A great example of consumer union action is Right to Repair, in which unionised consumers successfully lobbied the EU Parliament and tech organisations for the right to freely repair electronic products. In the case of Apple products, for instance, the motherboards and often the batteries are hardwired to the body of the object, making them difficult and expensive to repair at home. The sole incentive for this is to accelerate the purchasing and production cycle. Essentially, consumer unions and trade unions exist to protect the anonymous masses of consumers and invisible workers.
Thomas Le Bonniec is working hard on exposing the lopsided power dynamic of the big tech industries and all the forms of invisible human labour that go into maintaining them. And there are so many forms of invisible labour: the Siri transcribers like Thomas and his colleagues in Ireland and other countries, or the human content moderators such as Daniel Motaung, working in the equivalent of sweatshops in Kenya and reviewing incredibly violent content for $2.20 an hour, against whom Facebook requested a gagging order, with an explicitly racist plea for 'his lordship to crack the whip'. There are also the annotation stations across the Global South, built on neocolonial exploitation, where invisible data workers label images and objects in videos to create training data sets for the self-driving car industry.
Video piece Unknown Label (2023) by Nicolas Gourault and Lucas Azemar, displayed in the IMPAKT project space.
Why do you think you must tick the image boxes containing a motorcycle, bus, bike, or sidewalk to prove you are not a robot before entering a website? Every such click is an additional chunk of data, to be carefully processed and filed for the purpose of more efficient self-driving cars. So, for instance, when Elon Musk tweets that not only the phrase »from the river to the sea, Palestine will be free« but also the term »decolonisation« is antisemitic and to be censored from his censorship-free platform X, it is easy to make sense of his stance. Elon, alongside many of his fellow owners of large business conglomerations, has a business incentive to be against decolonisation, since he needs to keep these traditional colonial labour markets in the Global South operational to have a cheap way of fine-tuning his Tesla Autopilot system, which is for now still illegal in Europe due to its lack of safety.
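To make the mechanism concrete, here is a minimal sketch of how such clicks can be turned into training data: each tick is treated as one crowd-sourced vote on an image tile's label, and a majority vote over many users yields a labelled example. All names, thresholds, and tile IDs here are my own illustrative assumptions, not any real CAPTCHA provider's pipeline.

```python
from collections import Counter

def aggregate_labels(votes: dict[str, list[str]], min_votes: int = 3) -> dict[str, str]:
    """Reduce per-tile click votes to one majority label per tile.

    Tiles with too few votes, or without a clear majority,
    are left unlabelled until more humans have clicked on them.
    """
    labels = {}
    for tile_id, tile_votes in votes.items():
        if len(tile_votes) < min_votes:
            continue  # not enough human consensus yet
        label, count = Counter(tile_votes).most_common(1)[0]
        if count / len(tile_votes) > 0.5:  # simple majority threshold
            labels[tile_id] = label
    return labels

# Hypothetical example: four users clicked on tile_1, two on tile_2.
votes = {
    "tile_1": ["motorcycle", "motorcycle", "motorcycle", "bicycle"],
    "tile_2": ["bus", "sidewalk"],
}
print(aggregate_labels(votes))  # {'tile_1': 'motorcycle'}
```

Only tile_1 receives a label; tile_2 stays ambiguous until more clicks arrive. Multiplied by millions of users a day, this is how free human micro-labour quietly becomes a labelled data set.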
The invisible AI workers are perfecting data sets for large language text and image models, forming the human backbone that sifts through trillions of data points. Sadly, and again not too surprisingly, this human backbone is predominantly located in areas with less access to legal consultation and support and an abundance of cheap labour. And we haven't even gotten to the actual physical workers in the mines of the material and mineral resources necessary to produce the technologies that the data models feed into. The EU is absolutely not exempt, as it recently passed the Critical Raw Materials Act, which through regulation and financial support secures critical minerals, predominantly from the Global South, in the name of climate action, for a variety of industries including competitive tech development.
So this digital neocolonialism is reflected in very traditional forms of material and human exploitation targeting the Global South, but also in contemporary forms of self-colonisation applied to all users through the attention and reputation economy of our revamped technocracy. Thus we are entering a new period in which we, the users, are being extracted. It is not about spying on individuals per se, although that is an important side effect, but rather the pure extraction of anonymous data. It is not about me, Klara, talking about maybe shoplifting or buying drugs in a little WhatsApp chat; no one really cares about that. It is not even about me personally sharing Palestine-related content, although that certainly adds to my data profile, but rather the stifling of a unified data profile, a unified narrative. For now, it seems to be about this behavioural data being extracted in bizarre quantities and unknown qualities, to be sold to industries that, in the end, are financially beneficial to the new digital colonisers. Who, by coincidence, happen to be mostly in the West. The reason I say unknown qualities is that there are data points we are not even aware of. For instance, how fast we scroll, how long we hover, how we move our fingers and even our walking gait are just as predictive as our Google searches and shopping patterns in determining our mood and identity. So it is indeed important to opt out of cookies, but essentially useless in the face of alternative data point collection.
Altogether, these nodes of production (the users who produce the data, the data itself, the technologies that use the data, the data that produces the technologies, the actual raw material needed, and the raw labour that goes into all of these nodes in different capacities) form hyper-dimensional systems that are extremely difficult to trace and link.
Vladan Joler is an artist and researcher, a critical and counter cartographer. He makes intricate maps to investigate the black boxes of supply chains, the geology of media, and the philosophical dimensions of these new extractivist systems. I spoke with Vladan briefly during IMPAKT, where he explained in more detail his cyber-forensic work, the fantasy of transparency we like to imagine, and the way he thinks of AI. Watch the video below (9:34). Watch it now, please; even though this one is a bit longer than the previous one, it is very important:
The embedding of particular ethical and political stances in AI has also been clear in the case of Palestine and Israel. When ChatGPT was asked whether Israelis deserve justice, the first sentence of the answer was: Justice is a fundamental principle that applies to all individuals and groups regardless of their nationality or background. When the same question was asked about Palestinians, the first sentence was: The question of justice for Palestinians is a complex and highly debated issue. This says enough about the subtle ways that programmed biases and political stances are projected onto the world and ingrained in the psyche of those who fail to diversify their sources of information. Fully avoiding false and biased information, however, is a tough task, especially in the post-truth era, where not only has the concept of a reliable information source become utterly slippery and subjective, but the window of predominantly human-generated information is slowly closing as well.
Chelsea Manning, the OG whistle-blower previously mentioned, who had access to all the military data she revealed because of her position as a data analyst, remains in close contact with software and the interior functioning of progressive technologies. She outlined one novel solution, linked to the huge amount of energy each ChatGPT query consumes, which Vladan Joler also mentioned, and to the general intense energy toll of producing deepfakes and other AI malinformation. She spoke of metadata watermarking, or cryptographic signatures, which differentiate AI-produced metadata from human-generated data. AI can detect AI.
AI systems cannot, however, replicate the necessary and specific metadata watermarking, precisely because they consume such a huge amount of energy and, according to Chelsea, do not have the capability of faking the metadata verification. So it is not all gloom and doom.
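For readers curious what a cryptographic signature over metadata even looks like, here is a toy sketch of the general idea: the generator signs a hash of the content together with its metadata, and anyone holding the key can later verify both. This is my own simplified illustration using a shared HMAC key; real provenance schemes such as the one Manning gestures at would use public-key signatures, and all names here are assumptions.

```python
import hashlib
import hmac
import json

# Assumption for the sketch: a key held by the model operator.
SIGNING_KEY = b"model-provenance-key"

def sign_metadata(content: str, generator: str) -> dict:
    """Attach a keyed signature to a piece of generated content."""
    meta = {
        "generator": generator,
        "sha256": hashlib.sha256(content.encode()).hexdigest(),
    }
    payload = json.dumps(meta, sort_keys=True).encode()
    meta["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return meta

def verify_metadata(content: str, meta: dict) -> bool:
    """Check the content hash and the signature over the metadata."""
    claimed = {k: v for k, v in meta.items() if k != "signature"}
    if claimed["sha256"] != hashlib.sha256(content.encode()).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, meta["signature"])

meta = sign_metadata("an AI-generated paragraph", "example-model-v1")
print(verify_metadata("an AI-generated paragraph", meta))  # True
print(verify_metadata("a tampered paragraph", meta))       # False
```

The point of the sketch is the asymmetry Manning describes: producing a valid signature requires the key, so content whose metadata does not verify can be flagged as unattributed or tampered with.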
A panel talk at IMPAKT called Anonymity Dilemma 2: Information Warfare addressed the topics of misinformation, disinformation and AI propaganda through the lens of cross-platform analysis and activism. Using the examples of Southeast Asia and Chinese operations, as well as Ukraine and Russian information operations, the panelists dissected different methods of tackling propaganda in a time when cyber warfare almost always runs parallel to ground warfare and is often considered equally important.
(The principles of FIMI (Foreign Information Manipulation and Interference) are clearly explained in this recent EU document, using the example of campaigns targeting LGBTIQ+ communities.)
I spoke briefly with the curator of the panel on cyber warfare, Emily Hsiang Yun Huang, during IMPAKT, where she gave a few ideas and examples of how to deal with the current forms of propaganda. Watch the video below (4:38). Watch it now, please; it's another short one, and essential:
The term Anonymity Dilemma, titling the panel, deals with the problematic trifecta of privacy, freedom and control that I mentioned earlier in the context of regulatory policy. There is a dissonance between freedom of speech and censorship, yet a type of censorship is often requested in the name of progressive freedom, for censoring human right-wing trolls, and is required in the case of AI misinformation. An even sharper ethical dilemma is the one that contrasts digital privacy and freedom with security and justice for victims in cases of image-based sexual abuse and coordinated harassment. Clearly, there must be some type of judicial oversight of aggressive online activity.
In comparison, the question of targeted algorithms for shopping and for swaying political stances seems banal. But of course it is not, and the difficulty of this discourse comes from the tension between having privacy and benefiting from the polarising yet still sweetly personalised algorithms. Because the sweet personalisation algorithm lovingly curates your feed, your online community and source of information, your online shopping experience, and content that fits your momentary needs incredibly well. For example, in the panel talk Entangled Networks 2: Empires, curated by Miranda Mungain, artist Lukas Engelhardt and policy maker Lootje Beek considered ways to come together to create alternative networked infrastructures. After their discussion, three people separately stood up and publicly maintained that they love their personalisation algorithms, so what now? A slightly idiotic line of inquiry, since you have the option to opt in or out of the personalisation algorithm. (Although let us keep in mind that even if we opt out of personalisation algorithms, no one at IMPAKT could answer the question of whether the data is still being collected and simply not applied to the algorithm.)
When I later asked Lukas what he thought of those questions, he laughed them off, but I do think there is something to this idea of platform and tech Stockholm syndrome. The core of the contradiction is that a lot of applications and technologies are the result of damaging, neocolonial, (digital) extractivist processes but are simply so nice to use. I mean, the Mac interface is just so smooth that we cannot deny ourselves the pleasure of its silvery surface. Siri is so practical and friendly. Zoom is smooth and quick, and Insta reels and TikTok are unfortunately quite entertaining. And for good reason, since these algorithms and technologies have been developed alongside behavioural psychologists and expert designers. Everybody at IMPAKT had a Mac computer.
Ultimately, it comes down to our mainstream communication and media landscape being politicised and polluted, and our digital worlds increasingly gentrifying, just like all the cities we used to love that are now slowly yet surely becoming uninhabitable cesspits of neoliberal, smooth-interface, mid-aesthetic, corporate meme advertisement fields, with apartments available only to the mega-rich or those with accumulated intergenerational wealth. Ew.
I also spoke with Lukas Engelhardt at his studio in Amsterdam a few days after the IMPAKT festival, where he explained his work researching autonomous digital zones, self-managed servers, and independent digital toolsets. Watch the video below (7:06). Watch it now, please. This one is the last interview, plus it's a good one, so let's be consistent:
As Vladan Joler said, these are not issues that are solvable by any one thing; there needs to be a diversity of different things happening for change to come. And change will come; I mean, it is coming already. Even this obvious shadow banning of the Palestinian narrative is somehow a change, as is the Western financing of the conflict. Processes that are not necessarily new, indeed hundreds of years old, and have been hiding behind the beautiful ideas of the European Union, democracy and love and peace and money for all (ideas that academics of a specific generation still indulge in), are now coming to the surface and sitting there for all to see, glistening in their toxic poisons.
So we can see them now, and that is good. But what can be done, what can be done? There is plenty, so stay tuned. For a start, detach from the networks that hegemonize us. Because there is indeed a revolution coming and its first and most essential frontier is definitely going to be on the internet. And if I wanted to be even cornier than I am, I would insert something about the revolution not being televised.
Here is the manual Lukas spoke about: https://self-hosting.guide/
And here is a video that encapsulates all of these ideas we have discussed: https://www.dailymotion.com/video/x2se7mq