Published in
Digital Narrations: Fails and Errors
10.37198/APRIA.04.05.a4

Democratise the Cyberspace!

Storytelling in the Digital Era

Abstract: ‘Democratise the Cyberspace!’ is a personal reflection on the larger technological developments that fundamentally shape how we tell stories. It draws on discourses from media theory, art, and activism. I describe three phenomena that I identify as failures and errors, reflecting on how they influence storytelling. The first is the invisibility of subjects and their lives that accompanies these technological developments and is reinforced along imperial and postcolonial axes. The second error is the public’s widespread digital illiteracy, due both to the lack of a broader discourse and to the micro-temporal processing that takes place inside digital devices. The final error is the potential distortion of realities through artificial intelligence and an overreliance on the accuracy of numbers. The essay ends with a suggestion of how cultural workers can face these shortcomings.

Keywords: storytelling, democracy, memory, language, numbers


Introduction

As is the case with many people, my first direct lessons about failure and errors happened at kindergarten and school. In my case, it was primarily due to my (then undiagnosed) ADHD and my precarious German skills, since German wasn’t my mother tongue. I remember attending a special language class, available to only a select few, while the other children with normal language skills, normal snacks, and normal clothes were laughing and playing outside.

Normal for me meant: no jumpers, jackets, caps, or shoes that were emblazoned with glitter or rhinestones. Normal for me meant clothes that weren’t washed out, had no traces of wear and tear, and didn’t smell of paprika and other foreign spices. I was sitting in the class trying to understand German’s opaque set of rules. I hated it. I felt excluded and stupid. I felt punished for my poor language skills while the other children were rewarded for their ‘normalcy’ by playing outside. Our parents were not informed about these school activities because, as some teachers often snidely said, they would not be able to understand things here anyway.

Time passed, and I learned the language through failure, after many tears and dictation exercises covered with red correction marks. I spent a huge amount of time reading novels and short stories, watching German TV, and meeting people from a non-migrant background. I eventually got into secondary school to get my diploma (called ‘Matura’ in Switzerland), which was (and is) needed to attend higher education.

This was in a small city in Switzerland in the 1990s and early 2000s, where educational streaming into different classes starts by the age of 12 at the very latest. The local language and mathematics were and are the only decisive subjects. At that point, only one-fifth of pupils were selected to attend secondary school (called ‘Gymnasium’ in German-speaking Switzerland). My peers from a migrant background, especially those from working-class homes, were much more likely to end up doing the poorly paid jobs that (most) Swiss people don’t want to do.

I was mostly lucky and had some resources that helped me attend secondary school (such as great curiosity for some school subjects and a motivating sister). However, that is not how those selection processes were explained in school, in the neighbourhood or in the media. And no one mentioned that some children did not have anybody at home to help them with homework or money for private lessons. We were told a different story that had nothing to do with money. Educators and even peers would say in different words: ‘Look, some people are intelligent and talented, so they are allowed to go on to university or other kinds of higher education and develop their higher abilities.’

I internalised that story, just as I internalised the story that our foreign parents couldn’t understand things, which caused pain and anxiety. These senses of failure made me intuitively understand how the promotion and selection procedures of public institutions could operate and how they could violently objectify people and feed them into an economic production system in which some abilities were more valued than others. I understood that in Switzerland, the degree of mastery of the national language(s) could determine whether one could potentially devote oneself to more valued employment or would be ‘condemned to do the dirty work.’ And it became apparent to me that the degree of mastery could determine whether one was taken seriously or valued at all. That’s why I was keen on learning that difficult language.

After several years of education and expensive therapies, I now see more clearly that those failures and errors were judgements that were often based on certain values. They didn’t necessarily reflect something objective or universal (as speaking a language poorly does not necessarily indicate ‘low’ intelligence or the lack of any other kind of ability).

 

Reconsidering Failures and Errors

In their book Failure, Arjun Appadurai and Neta Alexander take this idea even further. They say failure ‘is not a self-evident property or quality of projects, institutions, technologies, or lives. Rather, it is a product of judgments that reflect various arrangements of power, competence, and equity in different places and times. As such, failure produces and sustains cultural fantasies and regimes of expectations. And by reading failure as a judgment, it reveals its relation to memory, storytelling, and capital.’1

Their book critically reflects on the discourse around failure today, examining the culture of Wall Street and Silicon Valley that promotes the illusion that scarcity can and should be eliminated in the age of ‘seamless flow.’ They refer to different schools of thought, such as queer theory, that offer another understanding of failure. For instance, look at the shining success story that is Netflix. If we didn’t consider revenue or growth the only factors that determine success but considered others, such as sustainability or health, Netflix’s bright success story would quickly darken. That’s because it consumes minds and bodies to maintain its high revenues. Employees have to perform at a high level, regardless of whether this has a bad effect on their health or social behaviour at work.2

Appadurai and Alexander offer productive and appealing ways of thinking about the stories around the school selection processes that I went through. On the one hand, they give another sense of how, for instance, precarious language skills could have been re-evaluated back then. Other abilities, such as multilingualism and intercultural flexibility, could have been made visible and valuable.

On the other hand, the book emphasises yet again the attractive force of storytelling. Back in school, I didn’t have the words to explain it, but I understood the power of those stories. Remembering the realities of my migrant peers at our school over the years, I started to wonder: what would happen if we told stories about their lives? Wouldn’t they impact the school system and maybe even question the economic system of Switzerland? That is to say, wouldn’t some stories have the capacity to challenge some power arrangements and some of those ‘cultural fantasies and regimes of expectations’?3

Adopting a ‘queer’ notion of failure, I consider this invisibility a failure or error. That is to say, I am seeking an understanding of failure beyond the usual categories by which success is measured, perhaps even an understanding that lies beyond a clear binary of success/failure.

From both repressive regimes and nation-state democracies like Switzerland, we know that stories are often well curated, such as national legends or the belief in equal opportunities in the public school system. Those stories can sometimes be more insidious and are also carried by political parties or other political actors, for instance, in questions of border policy. A well-known story is the ‘flood of strangers’ that ‘threatens’ a supposedly fixed thing like the nation and its population.

Now, considering the global impact of digital technologies and connectivity, who or what else curates stories like these? Do machines curate or even tell stories too? And, in keeping with the ‘queer’ notion of failure suggested by Appadurai and Alexander, what kind of failures and errors occur?

The Emergence of the Internet and a New Kind of Storytelling

Figure 1. Scan of printed drawing on Microsoft Paint by Maria-Cecilia Quadri, date unknown, Baar, Switzerland.

The time frame of those special language classes coincides with the commercialisation and widespread use of personal computers and the internet. I remember the first computer my father brought home, an AT&T device that didn’t seem particularly special—it looked like a grey, matt monolith. However, the way my mother and sisters behaved when using it gave it an enigmatic aura. Quite quickly, I was sitting in front of it, painting my first digital art pieces with Microsoft Paint.

Like many other children, I started my artistic practice in those early years, although most of my peers stopped because they thought they weren’t skilled enough. Back then, skilled meant painting gracefully, without spilling over the edges, or painting something realistically. Working in Paint, however, I started to understand that this only applied to the non-digital world. In the software, you could simply go back a step, jump back in time, and correct your errors. It was like magic.

The digital world for me also meant going to the basement. I would start the PC, which took several minutes to access the internet via the telephone cable, and listen to the beeping, raspy sound of the modem. Eventually, I would be able to start the browser and spend a precious and very expensive ten minutes chatting with random people. It was mind-blowing. Some years later, I would begin customising my Myspace account, which was my first introduction to programming and allowed me to connect with people from all over the globe (and with friends and family abroad). This was even more mind-blowing.

Like many of my millennial and older peers, I saw platforms like YouTube, Facebook, Airbnb, Spotify, Netflix, Zalando, and so on emerge. In the following years, I watched cute cats and influencers’ morning routines. When I asked my boomer parents if they could have imagined those developments, they shook their heads. They thought computers, the internet, and the world wide web would stay in the military and academia.

The internet also influenced storytelling. Modes of storytelling that differ from traditional media emerged, such as tweets, social media posts, hashtags, and hypertext (or hypertext fiction). I saw how 3D renderings, virtual/augmented reality, and gaming construct immersive and interactive story worlds that display new and (sometimes) breath-taking aesthetics. Story worlds were and are carried forward through adaptations and fan fiction shared in communities. The possibilities seem endless: there are unlimited forms of image/text combinations, such as memes or the constant sampling, quoting, collaging, and sharing of content. In the persistent practice of copy-paste, authorship became blurred, with unmanageable legal consequences. And now, even artificial intelligence (AI) technologies operate as authors or co-authors,4 reporting and creating stories for news services,5 novels, or screenplays.

So, scratching the surface of my shiny displays, I somehow understand that some of those modes are due to the structural and technical conditions of the apparatuses that we use every day. It is clear that, even if some formats seem like traditional media, they often just mimic them, like the coloured pencil in Microsoft Paint, while displaying new modes and possibilities. That is to say, many stories told today unfold within an environment that has fundamentally changed.

For instance, Netflix went from shipping DVDs ‘offline’ to streaming online in 2007. This structural shift not only made Netflix ‘fly to the moon’6 but also made the company far more powerful through its ability to collect data on a vast scale and use AI technology. Other platforms and ‘curators’ of stories like TikTok and Facebook also use AI. In a broader sense, we could consider a search engine like Google Search a curator (of information) that uses AI.

Does that have an impact on the stories we tell? For instance, does it influence which stories are told? If so, how? When thinking about storytelling and what kind of ‘errors’ or ‘failures’ influence it, I sense three kinds of errors that I will try to illustrate in the following sections.

Error 01: Invisible Lives

I was influenced by Hispanic and Latin American literature and culture before encountering German-speaking literature and culture during my high school and later professional years in the performing arts. So, I’ve experienced how easy it is to take some narratives for granted and be ignorant of others (I am guilty of that, too). For instance, even in contemporary German theatre, there is a limited body of literary work that is used as a reference point. This is no big surprise for an art discipline that is mostly based on language. However, considering that Germany and Switzerland are post-migrant societies,7 I found this lack of awareness and interest to be a lost opportunity. Just like back in my kindergarten years, I felt once again that the richness of our experiences and knowledge wasn’t valued.

In that sense, when post-colonial studies became better known and more popular in universities and cultural institutions in Germany, and later in Switzerland during my late art school years, I was relieved and even hopeful. I started to understand, with more depth and in a broader context, things I had personally experienced because of my family background.

Feminist activism and theories also brought some critical insight into power dynamics and storytelling. As someone socialised and educated in Europe, Virginia Woolf’s essay ‘A Room of One’s Own’8 quickly comes to my mind. This feminist essay shows the obstacles and conditions that influence who will write and who won’t. It’s a great piece for understanding the act of writing (or making any other work of art) and for challenging the romantic idea that a great mind and talent are all one needs to do so.

In her fiction, female characters often pass by public institutions to which they don’t have access. The authorities and the thick walls of those monuments keep those spaces separate from the public. Women remain in the private sphere of home and domestic work, contemplating the world from this narrow position. From there, as a woman, you could not have written an enthralling piece of adventure about sailors, pirates, and other worldly figures—you just wouldn’t have seen enough of the (public) world to do so.

Woolf uses her tools of literary fiction in the essay. Her narrator also walks across a university campus called ‘Oxbridge’ (a portmanteau of Oxford and Cambridge) to which she does not have access. Although this picture is primarily understood as a metaphor for women not having access to education and the techniques, tools, and other knowledge that are required for the praxis of writing, it also represents a real physical and architectural organisation of space that regulates the access and exclusion of certain bodies to resources.9

Woolf’s essay was written at the height of the nation-state, with its regulatory logic of public and private institutions—such as universities, archives, and libraries—but also private enterprises, such as publishing companies and news services. She lived in a time when, if you were a woman or, say, a person speaking in dialect or with broken language skills, you would have found it incredibly difficult to publish any piece of work. You also would not have had access to universities, editorial offices, or any other decision-making entity. In that sense, I still find the essay helpful when considering the digital milieu we are in. For instance, we can see that digital technologies can (technically) overcome nation-states and their regulatory logic, making other perspectives visible.

The internet indeed triggered a revolutionary shift in power relations through decentralisation, the empowerment of new groups, and new and flat forms of organisation in its first phase in the 1990s. New voices could arise, be heard, and gain more access to knowledge and other people. Let’s put it this way: new voices could overcome the thick walls of institutions like ‘Oxbridge’ and their gatekeepers. However, the second phase of the internet in the 2000s is characterised by increased communication and interaction in closed systems or platforms. Users face more closed networks that force them to communicate (and consume) within a specific platform, restricting the kind of open communication that is possible with email protocols.10 For instance, we can write an email from a Gmail account to a user of another email provider like GMX or Outlook. However, from my Instagram or Twitter account, I can only write to users within that platform.

It’s true, again, that new voices arose during this time period. People around the globe can participate in online communities and share their stories on the internet. These technical possibilities bring a more diverse representation and visibility of people and lives. Furthermore, critical individuals, groups, and communities can (self-)organise and share information through digital tools and display different values, such as other notions of beauty or sexuality.

People can expose existing norms as idealistic, racialised or misogynistic and can offer inspiring and fruitful alternatives. Powerful hashtag campaigns like #MeToo and #BlackLivesMatter or performances such as ‘un violador en tu camino’ (‘a rapist in your path’) by the Chilean feminist collective La Tesis and even social movements like Occupy Wall Street or the Arab Spring wouldn’t have had the visibility they received without the global connectivity through digital technologies.

Looking into the realities of participation in the digital space today, however, is a sobering experience. Mark Graham at the Oxford Internet Institute and Anasuya Sengupta, co-founder and coordinator of the global campaign ‘Whose Knowledge?’,11 give great insight into the geographies of the internet.12 They collect and analyse the data of the internet, exploring it like a territorial space to display its existing inequities and invisibilities. For instance, they collect information about how many articles on Wikipedia are published in English or other languages; how many domain names are registered in all of sub-Saharan Africa combined (0.7%); or where its authors are writing from (e.g., the people writing about African countries are mostly men in Europe and North America). They show that 20% of the world or less, mainly in the Global North, shapes our understanding of 80% of the world.

The dominance of English on the internet is common knowledge since it’s the lingua franca, and there are plenty of memes making fun of people excusing themselves for their ‘bad English’ because it’s not their first language (the language with the most native speakers by far is Mandarin). The dominance of English and other powerful languages of the Global North on the internet is the product of colonialism, imperialism, nation-state building, and globalisation in the pre-digital age. In everyday digital interaction, however, we seem to suffer some collective memory loss: we don’t address this fact critically, and if we do, we just say how important it is to learn English for business or travel.

Postcolonial discourses have questioned this, though, and helped us understand the social impact of the dominance of English. The authors of Why English? Confronting the Hydra, for instance, analyse and display the specific strategies and language policies of different places and regions in the world that face the dominance of English, the displacement of language diversity, memory loss, and an extraordinary waste of minds and creative energies. So, it seems that these power relations somehow found their way into the digital space.13

In that sense, what are the authority architectures of the digital space? They don’t seem any different from, or any less hard than, those masses of stone at Oxbridge in Woolf’s essay that separated women, the poor, the racialised, the ill, the differently abled, and all those marginalised bodies from public and decision-making spaces. Some national institutions—such as the language academies regulating language policies—that are involved in organising social classes have certainly lost some power. However, as we can see in the data of Mark Graham’s research work,14 (in)accessibility to the internet (and thus to knowledge) spans along (post)colonial axes, dividing the Global South and the East (yes, including eastern Europe) from the Global North yet again. This I consider to be Error 01.

So, did things stay somehow the same? Or did other elements come into play? Which power structures are operating here, if not those of nation-states and international institutions of the nineteenth and twentieth century?

As mentioned above, some of the influential ‘curators’ of stories and information are Big Tech corporations. They have great power over what is shown and told within their platforms. They govern through their terms and conditions (that no one ever reads). They determine whether a video on YouTube gets demonetised or not. They decide what will be told and what will not, without offering any insight into how those decisions are made. Mexican journalist and activist Antonio Martinez Velasquez15 once told me at a conference in 2014 that one of his political aims is to make Google a public entity, a demand that stresses the deeply undemocratic nature of Big Tech corporations’ impactful decision-making. He wanted to make it clear that powerful decision-makers in upper management would have to be elected by the public, like politicians working for a government. I found the idea of dispossessing Google very intriguing.

The ‘sovereign’ power of some Big Tech companies is indeed concerning. And I am writing from a particular perspective: as someone who is not only inspired by the liberal ideologies of some internet pioneers16 but who also lives and experiences concrete democratic instruments and processes in which the public can have a say in and insight into decision-making. Keeping my school memories in mind, I’m not saying that European democracies like Switzerland are fairy tale places in which everyone experiences full access to their rights as a citizen, equality, and fairness. Who is considered to count among ‘the public’ or as a citizen reflects power arrangements and ideologies, which can and do exclude others.

Switzerland, where I mostly live and work, celebrates and markets its image as a nation with a functioning direct democracy. Still, one-quarter of the people living and working (some even born!) in Switzerland don’t have fundamental political rights like the right to vote or a secure residence status. No other country in Europe has more difficult naturalisation processes. In addition, one of its business models is linked to a global economy with its exploitative and deadly practices, in which many people do not have fundamental human rights. Talking about democracy is insidious. I want to be careful.

In the following two sections, I will explain two other phenomena around storytelling in the digital era that I also consider failures and errors. I will illustrate them less extensively, but I think they are potent and probably related to Error 01 and each other.

Error 02: Digital Illiteracy

Figure 2. Scan of printed drawing on Microsoft Paint by Maria-Cecilia Quadri, 1995, Baar, Switzerland.

In ‘How a Machine Learns and Fails: A Grammar of Error for Artificial Intelligence,’ Matteo Pasquinelli also focuses on the notion of error. He gives illuminating insight into the limitations of AI. Ironically, AI is trained through errors: the programmer specifies what the algorithm should achieve and then lets the machine work out how to accomplish that particular thing (the what is prescribed, not the how). Every time it performs incorrectly, the programmer informs the machine about it. So, AI works towards an approximation of rightness, but errors are never entirely excluded.17
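To make this error-driven logic a little more concrete, here is a minimal sketch of my own (it is not Pasquinelli’s example, and no real system is this simple): a tiny model is given a goal, is corrected every time it is wrong, and slowly approximates the right answer without the error ever disappearing entirely.

```python
# A minimal sketch of error-driven learning (my own illustration, not Pasquinelli's):
# the model is told *what* to approximate (a target function), not *how*;
# it is nudged against its error at every step (gradient descent),
# and the error shrinks without ever becoming exactly zero.
import random

def target(x):
    # The goal the programmer specifies: approximate y = 2x + 1.
    return 2 * x + 1

# The machine's internal parameters, started at arbitrary values.
w, b = random.random(), random.random()
learning_rate = 0.01

for step in range(5000):
    x = random.uniform(-1, 1)
    prediction = w * x + b
    error = prediction - target(x)        # how wrong the machine currently is
    # The 'correction': adjust the parameters in proportion to the error.
    w -= learning_rate * error * x
    b -= learning_rate * error

print(f"learned w = {w:.3f}, b = {b:.3f} (true values: 2 and 1)")
# The approximation gets very close, but a residual error always remains.
```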

Pasquinelli wants to offer an accurate understanding of the ‘inner logic’ of AI because he claims that public debate is polarised between integrated and apocalyptic positions that do not reflect those ‘inner logics’ accurately. This polarisation, he believes, is due to the complexity of the mathematics involved. Remembering the grey monolith of my childhood, on which I painted in Microsoft Paint, I agree.

If we were more curious about the mathematical operations running our devices and the business behind them, we might get overwhelmed. If we aren’t programmers or experts of some other kind, we hardly ever get to understand the opaque operations behind those machines and how they, for instance, influence storytelling. There is a kind of widespread digital illiteracy, which I consider to be Error 02.

In politics and public debate, it is only in recent times that we find a more accurate and broader discussion about the implications of digital technologies, primarily focusing on Big Tech corporations like Facebook (sorry, Meta) and the others that make up the big five: Google, Amazon, Apple, and Microsoft.

There are bodies of jurisdiction like the European Union (EU) that deal with this topic, and since the Cambridge Analytica scandal, we have seen more public discussion of misinformation, data privacy, and hate speech that threaten democracy. And currently, experts, crypto enthusiasts, venture firms, and other interest groups are starting to talk more publicly about Web 3.0, the idea of a new iteration of the world wide web that incorporates concepts such as decentralisation, blockchain technologies, and token-based economics.

However, given the considerable power monopoly and opacity that we are facing, public discussions take place relatively rarely and remain confined to more technical or governmental niches with their own (power) interests18 and lack of transparency.19

What effect does this digital illiteracy (Error 02) have on the stories we tell? It seems that we tend to speculate about it, fantasising about a powerful superintelligence destroying humankind. Or we embrace every new and shiny phone or piece of software, knowingly or unknowingly agreeing to any set of terms and conditions (possibly arguing that we are powerless and have nothing to hide anyway).

Error 03: Numeric Invisibilities, Lies and Distortions

I previously mentioned the ‘curators’ of stories, such as Netflix, TikTok, and other Big Tech platforms that use AI, and wondered whether AI also operates as a curator of stories. We encounter a new situation when we think beyond digital formats that differ from traditional media, like hypertext or motion capture, and look at the increasing use of AI.

As stated in the previous section, we face digital illiteracy (Error 02). Our digital illiteracy is caused not only by the lack of public debate but also by the lack of transparency of the mathematical operations involved and how they are applied. We have created a world in which we increasingly lack the bodily sensory perceptions to capture the inner movements of the micro-temporal processing and whispering devices we wear on our bodies and use in our homes. In fact, we feed them with lucrative information. We face alienation from an environment that computes without our knowing exactly what or how.

For instance, we might know that our digital device is tracking the female cycle and our daily steps (towards a good life), our movements in the city, and our intimate conversations next to Alexa or Siri, but we don’t perceive it. Evelyn Wan20 offers excellent insights on this topic and makes this lack of visibility very tangible. Wan draws on Michel Foucault’s theories of biopower and Achille Mbembe’s concept of necropolitics to examine how we as bodies were and are subjected to biopolitical control through time-related technologies.

I mentioned earlier that, in a broader sense, search engines like Google could be considered curators (of information) that operate with AI. Thinking about the literary canon and the practice of quoting in literature, which makes some authors and their stories visible, I wonder about the invisibilities produced by this intelligent curator. At the end of the 1990s, Google’s PageRank analysed the structure of the links on the world wide web, looking not at the quality of a document or website but at the quantity of links pointing to it.21,22 That is to say, the number of references was counted to determine the relevance of a work or author.
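To give a rough sense of what counting links rather than judging content looks like, here is a minimal sketch of the basic PageRank idea, written by me for illustration only: the page names and link structure are invented, and Google’s actual system is vastly more complex and largely opaque.

```python
# A minimal sketch of the basic PageRank idea (my own illustration, not
# Google's implementation): relevance is derived purely from the quantity
# and structure of incoming links, never from the quality of the content.

# A toy web: each (hypothetical) page lists the pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

damping = 0.85                                   # probability of following a link
ranks = {page: 1 / len(links) for page in links}  # start with equal scores

for _ in range(50):                              # iterate until scores settle
    new_ranks = {}
    for page in links:
        # Sum the rank 'flowing in' from every page that links to this one.
        incoming = sum(
            ranks[other] / len(targets)
            for other, targets in links.items()
            if page in targets
        )
        new_ranks[page] = (1 - damping) / len(links) + damping * incoming
    ranks = new_ranks

for page, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))   # 'C' ranks highest simply because most pages link to it
```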

As feminist and decolonial activists and thinkers have made explicitly clear, references and a canon (for instance, in literature) reflect power relations and do not represent an objective truth about quality. So, just because a specific author is mentioned all the time does not mean their work is always ‘better’ or more relevant. Their work is simply more visible within a closed universe of information. Now, Google’s search engine has been updated and extended with personalisation and contextualisation, positioning documents within a dynamic and singular information cosmos for every single user23 (the results for every user are different). This makes everything even more complicated. In any case, we do not know precisely how AI is involved.

However, we do know, for instance, that Google Flu Trends detected flu epidemics just by analysing the Google search behaviour of users. I wonder, can Google also detect cultural fantasies and social movements? If so, what narratives can be drawn from this? And isn’t this powerful knowledge for Google to hold (without us knowing)? Or, indeed, for any other entity with this kind of technology?

Popular books such as Cathy O’Neil’s Weapons of Math Destruction give some crucial insights into how AI is applied and how it operates. From her own professional experience as a mathematician and data scientist in different fields (finance, e-commerce, and public administration), she brings various examples of AI operating as ‘Weapons of Math Destruction,’ or ‘WMDs’ as she calls them. She shows how AI is programmed and trained by people (with their shortcomings and biases) using proxies. She also shows how these systems are applied in those fields as efficient and cheap labour and as a pseudo-scientific seal of approval, leading to distortions of reality and destructive loops such as self-fulfilling prophecies.24

Some of O’Neil’s stories struck me deeply when thinking about storytelling. Like the one about Adrian Fenty, a former mayor of Washington, DC. Fenty wanted to turn around the city’s underperforming schools, so he introduced a teacher assessment tool called ‘Impact,’ which evaluated teachers through the performance of their students, without the teachers knowing how that judgment was made. The main idea was that poorly performing students were doing so because of poorly performing teachers. In the end, a good teacher lost their job in a poor neighbourhood and got a new one in a wealthier district, where they encountered fewer poorly performing students, because those students had access to resources like private lessons. There are plenty of concrete examples of WMDs operating in fields like the criminal justice system, job applications, city police departments, and the health industry.25
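To make the proxy problem tangible, here is a deliberately naive sketch of my own (it is not the actual ‘Impact’ model, and the classrooms and scores are invented): the teacher’s ‘value’ is reduced to the change in their students’ test scores, while everything outside the classroom remains invisible to the model.

```python
# A deliberately naive sketch of proxy-based teacher scoring (my own
# illustration, not the actual 'Impact' system): teaching quality is reduced
# to the average change in student test scores, while out-of-classroom
# resources (tutoring, stable housing, time for homework) are never measured.

def value_added_score(scores_before, scores_after):
    """Average change in test scores, used as a proxy for teaching quality."""
    gains = [after - before for before, after in zip(scores_before, scores_after)]
    return sum(gains) / len(gains)

# Hypothetical classrooms: same teaching quality, different resources at home.
wealthy_district = value_added_score([70, 75, 80], [78, 84, 88])  # private lessons help
poor_district = value_added_score([55, 60, 58], [56, 59, 61])     # no outside support

print("wealthy district teacher:", round(wealthy_district, 1))
print("poor district teacher:   ", round(poor_district, 1))
# The proxy 'detects' a bad teacher in the poor district, although the model
# never measured the resources that actually drive the difference.
```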

This is the third phenomenon, which I consider to be Error 03.

We are not only digitally illiterate and invisible, but we also face distortions of reality and lies. And I am not talking about bold lies in a Trumpian or propagandist manner (which most public debates are about). I am referring to a more insidious distortion. The use of AI in the schools of Washington, DC, for instance, reinforced untrue narratives: that poor people have the same starting position as everybody else or, even worse, that people living in poverty are stupid and lazy.

This distortion makes other narratives invisible—for example, that people living in poverty face exclusion from resources such as health, time, money, and knowledge. I find it unbearable that such wrong narratives are already being reinforced because the errors are not detected. We trust numbers, and some big corporations know how to abuse that.

In the data age, numbers are not just numbers. We need context and judgment. And as I mentioned at the start, this is linked to storytelling. It matters which stories are told. And it is the accurate and truthful stories that we must be able to tell. For that, we must also consider numbers within their context, even if there is no absolute truth, as many classical modernist artists such as Virginia Woolf explored in their work. We need to know the architectures and structures of our lives and the possibilities they hold.

Technologies are not per se evil; they are tools like any other. However, we need to understand when tools become weapons.

Towards an Approximation of Error Correction

The power-shifting capacities that digital technologies hold do exist.26 Projects such as Wikipedia and many others can oppose closed and exploitative systems and offer something more open, decentralised, and fruitful. Even if there are disagreements and different ideas on how we want to do that, I am convinced we need to have accurate debates in various fields of knowledge, work, and social practices.

As users and hopefully citizens of some kind, we must observe closely in which direction powerful Big Tech corporations want to push us. As mentioned in the previous section, I think that it should be a matter of negotiation and debate that involves different public entities like societal groups, communities and individuals. And for this, I am convinced we not only need the tools and places, but we must also understand what we are facing on a more technical level.

For those who have even modest resources, like cultural workers and artists, I see here an important and needed space for exploration and action. We have the chance to understand not only our professional environments but also the powerful forces of the digital era that structure our lives. As artists and curators, we have experience in experimental, innovative, or alternative practices of knowledge production that could be very fruitful in this regard.

We are all-rounders and in positions of mediation that often bring different discourses of diverse fields together. We open spaces where those conversations can take place. We might start to understand more about our living conditions and know what we must do so we do not reinforce those powers that suppress us. We have to understand what we want to work on—for instance, on something meaningful that works towards a practice of care and an approximation of error correction.

Maria-Cecilia Quadri

Maria-Cecilia Quadri studied Media Art and Theatre at the Zurich University of the Arts. As a freelance curator and dramaturge, she questions the notion of authorship. She seeks collaborative work, participating in projects such as SchwarzenbachKomplex and Raum//Station in Zurich (2016-20), a project space where she co-develops the series Digital Narrations. Quadri has worked as a cultural mediator at the Volksbühne Berlin, TanzPlanOst and the Theater Spektakel Zurich. In addition to her curatorial work, she is co-managing director of the think and act tank Institut Neue Schweiz INES, which works at the intersection of science, activism, and cultural politics to deal with post-migration, anti-racism, and diversity.


References
1. Arjun Appadurai and Neta Alexander, Failure (Cambridge: Polity Press, 2020).

2. Shalini Ramachandran and Joe Flint, ‘At Netflix, Radical Transparency and Blunt Firings Unsettle the Ranks,’ The Wall Street Journal, October 26, 2018, https://www.wsj.com/articles/at-netflix-radical-transparency-and-blunt-firings-unsettle-the-ranks-1540497174.

3. Appadurai and Alexander 2020.

4. Danny Lewis, ‘An AI-Written Novella Almost Won a Literary Prize,’ Smithsonian, March 28, 2016, https://www.smithsonianmag.com/smart-news/ai-written-novella-almost-won-literary-prize-180958577/.

5. Jaclyn Peiser, ‘The Rise of the Robot Reporter,’ The New York Times, February 5, 2019, https://www.nytimes.com/2019/02/05/business/media/artificial-intelligence-journalism-robots.html.

6. Appadurai and Alexander 2020.

7. Post-migrant society (from Latin post, ‘behind,’ ‘after’) refers to a social order shaped by the experience of migration. The term refers to the political, cultural and social changes in society resulting from demographic change through immigration. In this perspective, migration is understood as a process that contributes significantly to shaping society. The term post-migrant became known in Germany through the Berlin theatre director Şermin Langhoff, who gave her theatre Ballhaus Naunynstraße the name ‘Post-migrant Theatre.’

8. Virginia Woolf, A Room of One's Own/Three Guineas (London: Penguin Classics, 2019).

9. It’s worth pointing out here that Virginia Woolf didn’t consider adventures or great heroic deeds the only stories worth telling. As a great admirer of Jane Austen, she claimed it is worth recounting the lives we know—for instance, lives within the private sphere related to domestic work, care, friendship, marriage, and social class.

10. Felix Stalder, Kultur der Digitalität (Berlin: Edition Suhrkamp, 2016).

11. Mark Graham and Martin Dittus, Geographies of Digital Exclusion: Data and Inequality (London: Pluto Press, 2022).

12. Mark Graham and Anasuya Sengupta, ‘We're All Connected Now, So Why Is the Internet So White and Western?’ The Guardian, October 5, 2017, https://www.theguardian.com/commentisfree/2017/oct/05/internet-white-western-google-wikipedia-skewed.

13. I am not sure about this narrative, though. There is a narrative around the ‘digital revolution’ that computers and digital technologies changed our lives and that there was something like a carte blanche at the beginning of cyberspace, free of the power relations of the non-digital world. However, I think it’s interesting to know that some economic and social changes happened before the ‘digital revolution’ and that those were crucial drivers for the development of digital technologies such as the personal computer, e.g., making business and the use of the workforce more flexible. I wonder, is there an immanent exploitative feature in digital technologies? So maybe, rather than saying that some power relations ‘found their way into the digital space,’ it is better to say that digital technologies reinforced power relations and ideologies (like neoliberalism or colonialism).

14. Graham and Dittus 2022.

15. David Ormeño, ‘Pensar Internet—Antonio Martínez Velázquez, Mexico,’ Partido Pirata de Chile, January 22, 2015.

16. In the 1960s and 1970s, there were entanglements between tech pioneers in the San Francisco Bay Area and the counterculture movements, which maintained a revolutionary sentiment towards communal life and technology. Famous figures include John Perry Barlow and Richard Brautigan. See Fred Turner, From Counterculture to Cyberculture (Chicago: University of Chicago Press, 2006).

17. Matteo Pasquinelli, ‘How a Machine Learns and Fails—A Grammar of Error for Artificial Intelligence,’ Spheres, November 20, 2019, https://spheres-journal.org/contribution/how-a-machine-learns-and-fails-a-grammar-of-error-for-artificial-intelligence/.

18. Freedom House, ‘Global Battle over Internet Regulation Has Major Implications for Human Rights,’ September 21, 2021, https://freedomhouse.org/article/new-report-global-battle-over-internet-regulation-has-major-implications-human-rights.

19. Sometimes even worryingly slowly. The government in Switzerland, for instance, only published a report on the regulation of AI in April 2022. A civic organisation responded to that report, criticising not only its many weak points but also lamenting the lack of public discussion about the implementation and use of digital technologies and AI. See David Sommer, ‘Bund Engagiert Sich Zu KI- und ADM-Systemen,’ Digitale Gesellschaft, April 28, 2022, https://www.digitale-gesellschaft.ch/2022/04/28/bund-engagiert-sich-zu-ki-und-adm-systemen-kuenstliche-intelligenz-und-internationales-regelwerk/.

20. Evelyn Wan, Clocked! Time and Biopower in the Age of Algorithms (Zwolle: Probook, 2018).

21. Stalder 2016.

22. The use of quantity as a criterion wasn’t new: science administrators at universities had already used it to handle the growing volume of publications since the 1950s and 1960s. That is to say, the quantity of references was counted to determine the relevance of a work or author.

23. Stalder 2016.

24. Cathy O’Neil, Weapons of Math Destruction (London: Penguin, 2016).

25. Ibid.

26. Kevin Kelly, ‘The New Socialism: Global Collectivist Society Is Coming Online,’ Wired, May 22, 2009, https://www.wired.com/2009/05/nep-newsocialism/.