Published in
Digital Narrations: Fails and Errors

Experiential Literature?

Comparing the work of A.I. and Human Authors

Abstract: Using artificial intelligence (AI)-authored texts as a baseline for reading literary originals can help us discern what is new about today’s literature, rather than relying on the AI itself to embody that newness. GPT-3 is a language model that uses deep learning to produce human-like text. Its writing is (in)credible at first sight, but, like dreams, quickly becomes boring, nonsensical, or both. Engineers suggest this shortcoming indicates a complexity issue, but it also reveals an aspect of literary innovation: how stylistic tendencies are extended to disrupt normative reading habits in ways that are analogous to the disruptive experience of our present and emergent reality.

There is a dark irony to GPT-3’s inability to write coherently into the future: large language models are exploitative and wasteful technologies accessible only to multi-million-pound corporations. The commercial ambitions of the tool are evident in a curiously banal kind of writing, entirely symptomatic of the corporate-engineered sense of normalcy that obscures successive, irreversible crises as we sleepwalk through the glitch era. Contrary to this, experimental literary practices can provoke critical-sensory engagement with the difficulties of our time.

I propose that GPT-3 can be a measure of what effective literary difficulty is. I test this using two recent works, The Employees, a novel by Olga Ravn, and the ‘Septology’ series of novels by Jon Fosse. I contrast their ‘experiential literature’ with blankly convincing machine-authored versions of their work.

Keywords: glitch poetics, GPT-3, posthuman, literary analysis

How Does AI-Authored Prose Compare to Human Literature?

Author’s note: text written by GPT-3 is presented in green in this article.

I fall asleep reading. I dream the next episodes of the novel, generating them from the raw material of the preceding pages. Something, perhaps the ‘author in me,’ drives the dream-eye on across the dream-page. The dream of the novel is made of a kind of language-froth: phrases that repeat profligately, with aimless variation; episodes emerging from phrases, becoming phases, phasing in and out. There is a relation between the prose I was reading and the language-froth the dream makes, though it is also populated with the structures of events as such, a novel-mesh-template through which the new one is pushed. Like a mesh also, my body feels only partially there, feeling like I am reading while actually I am re-ing things previously read. Thoughts of the book sift through these, unbuilding the web the author had structured, turning it into an inhumanly boring prose vapour, a thought system in which the component parts miss one another, fall lightly on the mind, do not disturb it.


Drifting from the shaking eyes back into the head without consequence or meaning, the glimmering after-prose of a book dreamed, deriving sub-optimal, directionless iterations based on the permutations available in the language space. These are similar to the outputs generated by the Generative Pre-trained Transformer 3 (GPT-3)—the cutting-edge, artificially intelligent large language model (LLM).1 GPT-3 is one of the most advanced artificial intelligence (AI) language tools available to authors today,2 although there are now multiple other LLMs that approximate its functionality.

Originally conceived as an experiment in comprehension and question answering, it has shown remarkable malleability in generating believable fiction, essays, poetry, and even code. When it generates stories and essays—that is, relatively long-form writing—the credibility of LLM-authored literature at the sentence and phrasal level dissolves at the scale of the episode and page. The engineering team behind GPT-3 indicate that this is a complexity issue,3 but I suggest that it also points to something about the nature of what is consequential about literary work: in particular, the way that literature innovates to turn a concept or prose style—the uniqueness of which is detectable at the level of vocabulary and syntax—into an intellectual proposition by elaborating on the challenge that uniqueness poses, making it correspond or contrast with our experience of the world at large.

There are both overlaps and key differences between experimental literature and the way GPT-3 is being deployed by the literary and literature-adjacent community. Experimental literature and GPT-3-authored writing both generate value by interacting with the space of existing possibility in the literary field, often by corresponding to or contrasting with the perceived mainstream. GPT-3 produces its value in writing by making a version of what is already plausibly there, using the mainstream (or the statistical average it perceives in its data set) to orient itself. This is how much of the literary mainstream works too, whereas experimental literature’s effects are more often derived from extending the conceptual space literature occupies. GPT-3 expands quantitatively on what is already present, whereas experimental literature develops qualitatively on what language does.4

Echoing existing literary tropes and styles, LLM authorship is incapable of breaking out of older literary habits and imagining a new kind of literature. This is darkly ironic, given that the Big Data techniques that LLMs rely on are environmentally unsustainable and socially damaging, and that they are cynically deployed to pollute the infosphere and undermine the economics of creative labour in ways that forgo responsibility or concern for our own futures. The particular use of GPT-3 I am concerned about in this paper is its literary writing and how its style is interwoven with the commercial emphasis of the technology (no matter how that style is filtered through the material it is originally fed with). In other words, what GPT-3 cannot do is precisely what we should look for in the radical differentiations offered by the field of experimental literature. However, I am not entirely cynical about the technology, and I want to suggest that GPT-3 can be a measure of what fruitful literary differentiation looks like.

Given the potential for AI-authored text to thoroughly saturate its own data-set (that is, contemporary literature and language production) and therefore undetectably shape the future versions of itself in a positive feedback loop, it is worth paying attention now to the nature and tendency of its output, and to how its believability interacts with the proposition of the literature we want and need today. I am particularly interested in the way that error, apparently tuned out of the GPT-3 system that seeks normative behaviour, is integral to the literary potential for future-writing.


GPT-3’s training set is composed of 45 terabytes of text data that is analysed using 175 billion different parameters. As these numbers suggest, there is a huge energy cost involved in the processing of the data that supplies LLMs,5 which gives large corporations an upper hand in their usage. Training GPT-3, for example, costs an estimated £3 million ($3.55 million) in computing energy each time it is updated. There are, of course, also exploitative practices involved in the gathering of that data. But I want to firstly pose a different kind of question: how much of GPT-3’s training set has literary value and how does this impact on how it writes?

Figure 1. Twitter user @ipod_video maligns the inherently poor quality of online text, as opposed to literature that has made it into print.

GPT-3’s expertise is highly dependent on the internet, which contains representative examples of language use, while much of literary and academic production is not freely available for use in this way. According to a paper published alongside the commercial release of the model,6 GPT-3’s training set of around one hundred and eighty-one billion English words is composed of billions of words from Wikipedia articles, the Common Crawl7 archive of 12 years of the internet, tagged content from a dataset called WebText,8 and two archives of books called Books1 and Books2.

WebText consists of eight million documents scraped from links shared on websites such as Reddit, employing user-sourced scores to cherry-pick ‘high-quality’ documents. The content of Books1 and Books2 is not widely known, though it is likely they are based on BookCorpus—a large collection of unpublished, free novels tagged with genre definitions9—or on older novels, perhaps from Project Gutenberg,10 an archive of 60,000 eBooks.

There is a chance that copyright restrictions have limited the range of GPT-3’s training set. Texts that are currently open access online are probably covered by fair use law (though Reddit users have questioned their rights in relation to their unknowing role in training GPT-3). However, the law does seem to preclude GPT-3’s engineers from using commercially available novels unless their authors have been dead for 70 years, the timeframe after which books fall out of copyright under US law. Unless GPT-3 illegally makes use of ‘shadow libraries,’ such as the file-sharing sites Z-Library or Library Genesis, contemporary fiction is not especially well represented in its training set.

Academic papers, too, are usually stored behind paywalls and heavily protected by copyright,11 whereas undergraduate essays are much more freely available online. In fact, the implications of farmed writing for undergraduate students are potentially very disruptive to higher education, especially since the engine can produce very believable low-quality prose that would pass at this level, given an average prompt. The tests set for GPT-3 do not predispose it towards diverging from its limited dataset. Its trainers test it on three kinds of task: completing a sentence, paragraph or story in a believable way against a measure of what a human has written; question answering according to established facts; and comprehension according to a pre-given interpretation of a sentence—that is, tests of how it might perform in the way one expects of rote learners.

GPT-3 Applications

GPT-3 is only accessible via paid platforms, but these offer a good return for people who are willing to apply them commercially. Anyword is one of the 300 commercial applications of GPT-3. It uses the model to produce marketing copy—including the company’s own. Anyword’s website claims that 30,000 marketeers are currently using this service (without suggesting how many of these are human). On the back of businesses like this, Forbes has suggested that ‘A Wave of Billion-Dollar Language AI Startups Is Coming’.12

Figure 2. A sample advert from Anyword, one of the 300 commercial applications of GPT-3.

It is hard to imagine a writing tool we need less than one that swamps the infosphere like GPT-3, making slick, generic advertising coinages that algorithmically determined audiences will perceive as credibly normal. LLM generation also has unwanted side effects in other fields. For example, Meta’s LLM OPT-175B has been shown to have a ‘high propensity to generate toxic language and reinforce harmful stereotypes,’ which the engineers cannot unpick from the dataset it’s been fed.13

The Sudowrite story-writing assistant is another commercial setting of GPT-3. One of its central features is the ability to ape a style of writing and elaborate on it via the ‘Wormhole’ button, through which writers access the GPT-3 API to extrapolate from their own writing and thereby, in Sudowrite’s words, ‘bust writers block.’ Clearly, this also drastically reduces the effort involved in writing. The results are stultifyingly believable. Steven Zeitchik ran a test comparing Sudowrite’s extrapolation of Gay Talese’s classic literary journalism essay ‘Frank Sinatra Has a Cold’ with the original. The result? ‘Are [Talese’s] words inherently superior to the machine’s? It’s arguable.’14

The first book to be co-created with GPT-3 is Pharmako-AI, published by Ignota Books in 2021. To create the book, K. Allado-McDowell used a process similar to the Wormhole function since commercialised by Sudowrite, producing a ‘fractal poetics of AI’ and ‘an hallucinatory journey into selfhood, ecology and intelligence via cyberpunk, ancestry and biosemiotics,’ according to the book blurb. Allado-McDowell’s process was to prompt GPT-3 with diary entries, reminiscences, and ideas about consciousness and selfhood, thereby inviting the machine to reflect on its own relation to language and experience.15

Allado-McDowell suggests that in doing so, they subvert the commercial, ‘quirky’ aspect of GPT-3, tricking the engine into spiritual and philosophical self-reflection. The supposition is that collaborative concepts will emerge from the process, helping to articulate a world where human and AI thoughts are combined. If such a thing does emerge, it requires elaboration that is not found in the book itself. Unlike writers such as Erica Scourti, who has used experiential and highly personal performance writing practices with predictive text to expose the nature of the data-processes that augment them, Pharmako-AI gives no insight into how the authored text and interface relate. Instead, the book is (almost) faultless: glamourous,16 even exotic at the stylistic level, and ultimately opaque.

Given the prompt ‘Dreaming is a kind of hallucination…’ by Allado-McDowell, GPT-3 comes up with a believable extrapolation: ‘When something is transformed from a dream into words, it loses a part of its spirit, but it also carries the dreamer into an orbit of subjectivity that is no longer bound to time and space, even while the body sleeps.’17 Parts of its response are recognisably derived from the same ‘underground’ literature on hallucination, drugs, and language, that Allado-McDowell is clearly influenced by, such as William Burroughs. But GPT-3’s prose lacks the Beat Generation’s syntactic esotericism, and is more likely to deploy versions of the ‘rule of three’ that is well known in marketing copy.18

Later in the same exchange with Allado-McDowell, GPT-3 writes: ‘Another use of language is as a viral vector for infecting human consciousness with messages, knowledge and consciousness itself. Language as a vector of information exchange can carry viruses, memes, ideas, consciousness.’19 As this quotation also shows, there is frequently redundancy in what GPT-3 writes—perhaps a function of the ease with which it was generated, and the author’s unwillingness to edit. In fact, most passages descend into tautology, their glistening ambiguity akin to billboard slogans: ‘Language is a medium, a transmitter and a gift. There are things that can’t be said any other way.’ The general feeling is that the AI is filibustering. The chapters of Pharmako-AI are advert objects, and the writing itself can only really deflate the sense that the reader should go out immediately and buy their own bot.

Dreaming, which is often cited in the book, is an appropriate metaphor for the writing that GPT-3 does—a kind of thought that occupies the mind with a language-like and experience-like thing that experts have learned (incorrectly?) to interpret as reflective of underlying thought structures. The result is oddly unreadable—in the same way a dream is not memorable—because of its lack of consequence or grasp. Repeatedly, as the writing grows away from Allado-McDowell’s prompts, I felt my attention sliding over the page; the words dissolved into my head’s static as I read, having no purchase on the ear or mind. Like dreaming a book, something virtual passes through, plummeting in value the moment it is read. As I have suggested, there is an affinity between this sensation of immaterial exchange and the culture from which GPT-3 emerges.

One of the most interesting moments of Pharmako-AI arises when GPT-3 makes a spelling error, coining the word ‘meglanguages’ to refer to a ‘technique for a type of synaesthetic communication [which] uses direct reference to paint a language picture.’20 Containing a silent, glottal-like glitch to my ear, the word ‘meglanguages,’ the GPT-3 author begins to suggest, describes how the phenomena of reality operate like a language to communicate things. I would love to know how this coinage came about because, grammatically, ‘meta’ and ‘mega’ are the only proximate prefixes to ‘language’ in English. So the machine has either ‘slipped,’ or discovered some potency in a ‘meg’ prefix that operates differently from those in English (for example, the Hungarian prefix meg-, which marks a verb as perfective), or it has wrongly picked up the company name ‘Meglanguages’ as an operating noun.

However, the Pharmako-AI book gives no sense of what the word does, instead deploying pseudo-spiritual cereal box coinages: ‘It’s a language, but not a physical one. It’s an abstract object. It’s not a word. It is non-dual, a “divine light” in essence.’21 In this manner, the glamour and mystery of the book at the level of its vocabulary and its ideas are revealed to be poorly matched with a prose style that tends towards predictable sentences, and the potentially revealing proposition of the dialogue between Allado-McDowell and the AI becomes lost.

One of the reasons this is a literary first is Allado-McDowell’s access: they are, or were until recently, a Google employee.22 People who might usually be critical of Google’s approach to monetising our data and corrupting the infosphere, such as Bruce Sterling and Legacy Russell, have even been drawn into praising a product the corporation is selling. The experiment thus operates as an extended marketing project for services like Sudowrite, directed as it is towards an influential section of the US art scene. In that sense, it ‘works’ very well.

Authors who use GPT-3 are not able to explore the implications of the mistakes the technology makes, and therefore cannot make conceptual links between these errors and the circumstances that produced them. One reason is that GPT-3 is so opaque: neither authors nor engineers know how it works. There is a qualitative difference between GPT-3 and even its recent predecessor GPT-2,23 and a further difference from the smaller language models enabled by tools like Torch-RNN,24 which can be used in a manner akin to ‘hacker’ and glitch cultures to expose and work with the mistakes they make.

Put simply, the larger the language model, the more constrained it is by the language that has preceded it, and the more opaque the relations between its outputs and inputs become. In the sections that follow, I contrast this with the kinds of ‘eco-systemic’ experiments that can emerge from working with error in literature—an idea I adapt from Elvia Wilk and bring into dialogue with my concept of ‘glitch poetics.’


Historically, experimental literature defines itself against the status quo, which is what makes experimentalism so vital to the present moment, when we are in need of radical future-thinking—particularly about how we might extricate ourselves from the grip of corporate-sponsored growth. Among the most clear-eyed views on the role of linguistic experimentalism is Robert Sheppard’s 1988 statement ‘The Education of Desire,’ collected in Far Language (2015), in which he defines ‘politically radical poetry’ as operating in opposition to the language of advertising. He suggests that much of the mainstream poetry that ‘wins competitions’ operates in the same way as the language of advertising to sell a product or idea, thus undercutting its radicalism through its inability to separate itself from the language-world of corporate ideology, or to challenge the reader into new frames of mind.

This poetry, Sheppard observes, is easily digestible and produces passive readers inclined towards the status quo. Sheppard argues that it is in the difficulty of radically experimental literature that there is a space opened for readers to discover ways of thinking about the world that are outside the dominant discourse. A common argument in favour of ‘language poetry’ is the notion that linguistic difficulty can intervene in our own default, normative uses of language, and thus cause us to think differently. But this argument can apply to experimental literature more broadly. I am interested in connecting this to more recent ideas about error—how difficult and errant language can be used to re-connect writer and reader through a shared feeling or understanding. That is to say, an apprehension of what the world is like in an era of disruptive technologies, social and climate collapse.

In Glitch Poetics (2022), I push views on literary experimentalism into dialogue with the question of what ‘literary realism’ should become today. Experimental literature is the introduction of error and dysfunctionality into language, producing excess affect, surprising aesthetics and styles, as well as gaps into which critical and alternative thinking can take place. Experiential literature of the glitch, then, does this in a way that maps such alternative thought onto the material challenges of people living in this world. My sense is that such a literature will contrast qualitatively with the kinds of literary experiment that machines trained on the status quo can make.

Critics Elvia Wilk (2021) and Amitav Ghosh (2017) both agree that the ecological and technological situation we find ourselves in undermines the literary ‘realist’ norms established during another age. Both observe that the centrality of the human in narrative conceit, for example, has been diminished by recent scientific advances and climatic conditions. The post-human literary situation gives previously fringe forms of writing a pointed relevance for communicating experience. Wilk has suggested that ‘eco-systemic’ fiction provides a model for post-human literary connection.

Eco-systemic fiction hypostasises the ‘systemic fiction’ of the post-modern era, letting the web of relations it instantiates as fiction bleed into entanglement with the non-human world, hybridising the human narrative with the effects of a recalcitrant planet and glitching techno-sphere. One error-like trope of this kind of post-human fiction (shown, for example, in the novel Annihilation by Jeff VanderMeer) is a collapse of what Wilk calls the ‘figure/ground’ structure of narrative, in which landscapes and environments that were nominally a context in which events happen become part of the action. In Wilk’s words, ‘[E]cosystemic fiction, aware of and responsible for its own reciprocal exchange with “reality,” tangles or flips the roles of actor and acted-upon.’25

Glitch poetics prioritises another form of entanglement, including and accounting for the way language shifts with current technical tendencies. I show that some authors render their experience of converging aspects of the world today through forms of textual corruption that are experienced by readers. This could take the forms of the ‘glitch ontology’ of existing through machines that crash and recover at frequencies below the parameters of our awareness;26 the sensation of wrongness that comes when we see ourselves reflected through ‘leaky-creepy’ networked and artificially intelligent technologies;27 or the psychic and material pressures of the Anthropocene.28 The act of coping with literary difficulty produces eco-systemic glitchfrastructures29 that connect us with our sensory capacity for language, in sympathetic connection with authors writing through a wider ecology of the material world.

Human authors articulate our present situation by glitching tropes of literary genres together with other aspects of the techno-linguistic apparatus. Autofiction authors, for example, seek to simulate the utterly irreal experience of being produced by the network, tweaking the biography form to suggest that when we live life partly through our phones—and, as such, as data—events only partially occur, or occur only to un-occur sometime later. Fiction and real life thus become blurred. The characters in novels like Ben Lerner’s 10:04 or Patricia Lockwood’s No One Is Talking About This are subject to the whims of crowd-sourced truth in social media and big-data analytics. As Lerner says: ‘My concern is how we live fictions, how fictions have real effects, become facts in that sense, and how our experience of the world changes depending on its arrangement into one narrative or another.’30

There are also multiple examples in autofiction where the protagonists feel like they are dissolving into the background in the manner of a malfunctioning image-compression codec. The authors use this trope to articulate the experience of being subject to the ‘microtemporal’31 processes of digital technology and to turn it into an error-experience, through which—in reading, literally losing the edges of the protagonist—we feel re-subjected to the vagaries of living the digital life. As readers, we are drawn into collusion with the linguistic ambiguities that arise, which are presented as a problem for reading that rubs up against our sense of literary and existential propriety.

The experiences of autofiction’s characters, however, do have a distinctively bourgeois (perhaps even a dreamlike) ease, which is in some ways akin to AI-authored prose. Other literary experiments reflect more intensely embodied effects of contemporary phenomena, linking the digital age to its material manifestations. Keith Ridgway’s A Shock (2021), for example, renders the manifest claustrophobia of networked, queer and working-class urban lives with a terse, tight prose style. This is combined with a narrative conceit where our voyeuristic encounters with characters in its composite stories collapse into the central processing of overheard snippets, rumours, and sigil-like anecdotes told in a pub.

As Christopher Notarnicola observes, in terms akin to the oscillation between function and dysfunction we experience in our everyday lives, language and narrative discontinuity complement one another in Ridgway’s work: ‘His language is realistic yet defamiliarised, balancing a fealty to the many flaws inherent in natural modes of expression and the writerly necessities of successful storytelling, rendering confusion with narrative clarity and imprecision with the utmost intention.’32 In more extreme ways, works like Keston Sutherland’s glitching lyric poem Odes to TL61P (2013), and Megan Boyle’s mangled txt-speak in LIVEBLOG (2015), operate as bulletins from the front line of the human-media boundary, forcing their way into our attention with aggressive torsions and gaps between what we understand and what we apprehend in the text. Beyond and underneath the events in these works, there is a coherence of style and form that makes language into a mechanics for feeling the consequence of immanent social transformation.

As with Sheppard’s ‘politically radical’ writing (and the examples I use in the final section of this essay), the kind of literature that disrupts expectations is not as easy to digest as the smooth sentences and narrative modes of GPT-3, or even the mannerisms of autofiction with which we may already have become familiar. Instead, each author derives their critical sensory and conceptual engagement with the world by subverting pre-existing readerly habits. In contrast to the automated writing of AI, experiential literature is experienced by the reader in a heightened way, and we understand the text as a method of apprehending what is disjunctive about life. But how do these differences manifest, exactly?

Using GPT-3 as a Case Study

One of the problems with the concept of glitch poetics is that there is never a correctly functioning literature against which we can measure literary dysfunction. Below, I suggest that GPT-3’s boring, baseline-normative writing behaviour makes it ideal for this purpose. I will now show how that baseline can be used to detect the nature of the micro-judgements made by authors who deploy experiential writing as a method. This knowingly appropriates the AI field’s own method of using supervised baselines to measure the effectiveness of automated texts. However, rather than using the baseline in a hierarchical way to determine what is ‘correct,’ I am more interested in the various degrees and kinds of error that are exposed in reading across automated and experiential versions.

In what follows, I push GPT-3-authored and human-authored versions of a style against one another, to enrich my sense of what innovation in literature is—in particular, the kinds of post-human error-innovations made by these human authors. The glitches in these fictions make new kinds of language engines among text, author and reader, allowing new ways of thinking about the post-human moment.

The pieces of experiential literature I have chosen for these samples are both translated fiction—Olga Ravn’s novel The Employees and Jon Fosse’s Septology series of novels. Partly this was happenstance, because some of the most interesting examples of literary experimentalism are happening in languages other than English. But perhaps, too, the way these books arrive with us via translation while still retaining their innovative edge gives less emphasis to the sentence and vocabulary as the mode by which they innovate. I have also chosen two books with modes of innovation that are particularly well suited to exploring how authors today engage with the challenges of individual conscious expression and experience. These books explore human qualities that appear to do something outside of the human experience per se.

Olga Ravn’s The Employees

Olga Ravn’s novel The Employees (English translation published in 2022) takes the form of a series of statements by various human and machine (and possibly human-machine) protagonists. Ravn herself observes that her style searches for an alternative literary form in which to reflect on today’s situation, noting that ‘the realistic, bourgeois novel… focuses on a certain way of living, a certain psychology and a certain idea of reality—as the backdrop for our human drama.’33 Instead, Ravn hybridises science fiction with gothic ‘non-human voices’ and uses ‘different registers of language, like corporate newspeak and sentences that would come to me in the office’34 in a manner not unlike an automated writing system synthesising new from old. But the writing of The Employees was also based on an experience: Ravn began the novel by writing about the alien-like sculptures of Danish artist Lea Guldditte Hestelund.

I would look up and see a sculpture and I would think that it looked like a small dog. When I would look down to write and look up again, it would appear to be a wet newspaper or something entirely different. Even though they were set in granite and marble, they seemed to be organic forms, like shapeshifters. So I decided to make a book that had the same organic movement in it, that would have a different shape every time you would read it.35

The resulting book has a faulty movement that I think is inherently connected to transformative understandings that come from looking at ambiguous objects. The voice is accordingly hesitant. At times, it might seem like the book is AI-like in style, repeating and looping phrases and ideas, the narrative static, as though in a dream. But there is something cumulative in the effect, where error is used to evoke the mood of a larger space, outside of the statement. The book opens:


It’s not hard to clean them. The big one, I think, sends out a kind of a hum, or is it just something I imagine? Maybe that’s not what you mean? I’m not sure, but isn’t it female? The cords are long, spun from blue and silver fibres. They keep her up with a strap made out of calf-coloured leather with prominent white stitching. What colour is a calf, actually? I’ve never seen one.36

When I seed GPT-3 with this content, it writes:


I work closely with Cadet 08 and have got to know her quite well. Unlike me, she was born out of a human body and has walked on the planet, and when we talk she nearly always tells me she misses Earth. In the same place that she feels this longing for Earth inside her, I feel a similar longing to be human, as if somehow I used to be, but then lost the ability. I know I’m only humanoid and that it’s not the same. But I look like a human, and feel the way humans do. I consist of the same parts. Perhaps all that’s needed.


They tell me that each of them is different. Different from the others, that is. I’ve often admired the way they multiply. I mean how they fill the space like nobody else. I’ve often wanted to go inside. Just as I wanted to go inside them. It was as if I was born to do what they did. But I’ve never found them attractive. I’ve always hated them. It is they who’ve hated me. But I’ve gone in there anyway. I’ve had to. Especially now. It’s only in there that I can pick up what they’re saying. They speak in my ear, but they don’t talk to me.

The sentences are certainly credible in rhythm. But each is subordinated to the same message: the first paragraph tells us that the author is not a human, unlike Cadet 08, though she sometimes feels and looks like one. Another aspect of this text, familiar from other GPT-3-generated works, is that the believability of many sentences rests on semi- and outright truisms, such as ‘born out of a human body’ and ‘walked on the planet,’ which an attentive editor would cut. Reading the original in light of the AI-authored text, I begin to discern something about the efficiency of Ravn’s prose and the inter-relations between logical errors in the phrases:

What I find important now is the well-being of the objects in the rooms. I’ve become obsessed with regulating the ambient temperature and listening for their hum. I look at them and I see us. I name them one by one, and in each case I utter my own name. Homebase has turned its back on itself. What you call made is your own work. What you call found, discovered, is your own point of origin. I can see New Discovery from the panorama windows, the long stream in the valley that poisoned us with its happiness. Above the planet, the stars, whispering as if with a single voice, a name that pertains to us all.37

The sense of compulsive mystery, which I had thought of as ambient, saturating the entire text, is constructed by a disjunction and misfit between phrases. Each phrase of Ravn’s appears to push and pull between the inferences and music of the others: Ravn’s logical error in ‘listening for their hum. I look at them and I see us. I name them one by one, and in each case I utter my own name’ is very different to the machine’s directly contradictory ‘I’ve always hated them. It is they who’ve hated me’ and ‘they speak in my ear, but they don’t talk to me,’ for example. In contrast, the AI’s iteration of Ravn is unable to retain tension, showing how difficult such tension really is to achieve within a normative grammatical construct. In all the versions Sudowrite produced based on a style-prompt from The Employees, the statements are from a single persona, albeit one who is speaking from the collective point-of-view, and the gendered aspect of the work is more crude:

We’re like extras in a movie, but we’re going to have a starring role. The man who made me will be in the film, and he’ll be the one I kill. The man who made me will be the only man I’ll allow to live. He’ll be the only one I’ll spare, though there will have been more than one who helped to create me. It’s a long list of those who were involved. They were all involved.

The book’s narrative arc substantiates this difference in prose-error, as the humanoid crew unerringly come upon the decision (realisation? It is left unclear) that they will be collectively terminated, and a kind of collectively caring consciousness emerges. The closing section of The Employees itself pushes convincingly, hauntingly, on our sense that rather than a staff of individuals—which has been inferred throughout—we have been speaking to individual voices from a singular entity. As readers, we have been caught in a logic error of our own, which has repercussions for what the ‘reset’ of these creatures portends.

One possible aspect of the difference in Ravn’s innovative approach is that she describes her writing process as an experience of language. Perhaps the error-like tensions in Ravn’s book cannot be composed at a word-by-word level, but rather are related to the musical anticipation and self-reflection of words, heard and felt.

My comparative analysis between the fundamentally average prose that GPT-3 writes and a distinctive voice in contemporary experimental literature is, of course, unfair (if one can be unfair to an algorithm). But perhaps it also begins to illustrate the relevance of Ravn’s work beyond the ‘speculative’ proposition of a space ship populated by humanoid characters. It suggests that Ravn has derived a linguistic response that is particularly well suited to thinking through the connotations of a collaborative consciousness, from a distinctive experience of language as a felt, unwieldy thing that ‘moves’ in relation to material objects, rather than something separate from our lives.

Jon Fosse’s Septology

Written in a literary style starkly different from Ravn’s, the Norwegian author Jon Fosse’s ‘Septology’ novels, The Other Name (2019), I Is Another (2020) and A New Name (2021), relate the inner discourse of their protagonist. Each of the novels is written in an almost unbroken single sentence, as the protagonist, Asle, an aging painter increasingly doubtful of his own identity—as a painter at least—drifts and sifts through his memories—particularly, the memories of the few people who have defined him. What results is a profoundly frictional text, with difficulty emerging, for example, in the confusion between the protagonist and his friend who has the same name: forms of ‘distributed cognition’ that only resolve themselves at the larger scales of the book sequence. The ambiguity formed by this difficult and almost insufferable tactic means that it is hard to distinguish when the protagonist is praying, remembering or dreaming.

The punctuating moments of the book are given at the end of each chapter, when Asle falls asleep while praying, considering an unfinished painting, or looking at the dark sea out of his living room window. They are apposite books to choose for an experiment like this. It appears that Fosse’s central mode of difficulty is stylistic, a result of hybridising recognisable precursors (for example, narrators from T. S. Eliot’s ‘Gerontion’ to Samuel Beckett’s ‘Krapp’s Last Tape’), which should mean the run-on phrases are easily mimicked. However, there is something much more challenging happening with Fosse’s short-circuiting of personas, identities, awarenesses and memories as they emerge and dissolve into his prose.

Trained on some of the book and asked to continue writing from the sentence ‘I think and it was all unreal and at the same time real, yes,’ Sudowrite correctly notes that the protagonist is remembering while dozing:

not real and at the same time real, and I must have been holding the rosary because it feels warm to the touch and then I remember I used to pray the rosary with my mother, when I was little, now I think and then I pray the rosary again and I put it back under my pullover and then I get up and I walk over to the sink to wash my hands and again I’m reminded of the time when I was little and I used to wash my hands with my mother, but I can’t remember when we stopped doing that, so maybe it was when we left this

I also seeded Sudowrite with one of Asle’s prayer reflections: ‘I never memorise mechanically because I can remember written things by seeing them, a bit like pictures, yes, but I try to only remember the written things I think are important to remember.’ It wrote:

an example is the Pater Noster, and another example is the Creed, and it’s the same with the other prayers I say in Latin, the Ave, the Salve, the Te Deum and so on, I think of them as pictures, I’m not trying to say that I think of this in the same way as a poet, I’m deliberately using a different use of language, but at the same time I’m not thinking in words, I’m just thinking, it’s like I’m trying to go beyond words, I don’t mean just beyond words, I mean beyond language, beyond language as it’s used…

In stylistic and content terms, the prose is believably Fosse-like. But contrasting the Sudowrite version above with what Fosse actually wrote, I can see, more strongly than when I read the real Fosse alone, that there is a deeper literary difficulty, one which makes writing that is ‘both a dream and in reality’ a challenge to read.

all of that happened both in a dream and in reality and I sit there staring straight ahead into the darkness, now the darkness is blackness, it’s not just dark any more, and I just look into the blackness and I think that now I have to start making my way home, but I’ve thought that so many times already and now as if it’s the middle of a sunny day I see the two of them walking towards me, a young man with medium-length brown hair and a young woman with long dark hair, they stand out in the darkness, it’s as if a light coming from them stands out in the darkness, yes, they’re walking straight at me like.38

What is immediately obvious from comparing these passages is that the human Fosse uses visual experience to structure the rhythm and tensions in his monologue, and as a mnemonic anchor for his character. On the other hand, the machine, though mentioning the visual, focuses on language as a kind of surface or fabric—which one can be within or beyond. In Fosse’s actual writing, rather than drifting into a language memory, there is a definite ambiguity flowing back and forth between ‘seeing’ and ‘remembering,’ which is played and iterated as a kind of melody. These boundaries between protagonists and other characters, between sleep and wakefulness, memory and prayer, are played in productive ways throughout the text, like lines dancing across a background. They interleave to form particular thought patterns, and, set off by the presence of ‘light,’ operate in various ways for the reader—as anchor, metaphor or friction, depending on the position in the larger movements of the work.

Although the book is not about an experience in the autobiographical sense, as with Ravn’s observationally inspired book, there is a felt intuition to Fosse’s composition method that correlates the kinds of errors we feel in life to a conceptual challenge for the reader. Fosse describes the real experiential quality of his writing as something that follows the process: ‘[M]y experiences are transformed through the writing,’ a process in which one experiences language as a felt entity comparable with the material world. ‘I wanted to let the language move slowly away, like long waves.’ As with Ravn, this is a sensory thing: ‘[T]o me writing is a kind of listening. I don’t know what I am listening to, but I am listening!’

It is the tension of the visual, aural, and cognitive with language—stretching the reader back and forth in their own experience of Fosse’s monologue as Fosse listens and responds to what his own writing is doing—that demanded something of me. That is what kept me awake to the book, giving it a grip on my own consciousness to the degree that I kept reading, despite the repetitive, almost punishing, quality of the novels.

After the Cut Up and Automatic Writing

Historically, the literary avant-garde have put a lot of emphasis on the fragment, and the forms of juxtaposition and disjunction that fragmentation allows. The cultural currents emanating from digitisation could be said to be based in similar potentials, re-thought as modular interchangeable affective units, and modulating layers of style or voice. GPT-3 is well attuned to this approach for transferring spoken and felt human language into believable prose chunks and lines. In this, it is reflective of a culture that has absorbed the ‘radical language’ propositions of the twentieth century into the techno-linguistic mainstream, striking images off one another. Advertising copy, for example, operates below the threshold of attention by inducing sparks of pleasure in a fleeting audience.

AI can quite easily copy the style of poetic images, sentence fragments, letters and literary bulletins, and the analogous autofiction trope of narrative non-sequitur. Ravn and Fosse, however, are inheritors of a more recent and ambiguous literary twisting of writerly technique and character formation, characterised as a kind of struggle and entanglement. In Ravn and Fosse alike, though in different ways, the vectors on which a text is split (narrative voice, character, collective and individual consciousness, sentence and phrase) are difficult to manage in any programmatic way. They operate at a scale larger than sentence imitation can reach, or connect a reader to the experience of an author ‘listening’ to the inferences of language as they write. As a result, the syntheses Sudowrite produces depart from the spirit and challenge proposed by the originals in informative ways—ways which I hope I have managed to draw out in my brief experiments with this method of comparison.


Dreams are not inherently interesting. In fact, they are inherently boring. Nonetheless, professions with huge impact on our understanding of culture, interior lives and society have emerged from attempts to listen to them seriously. One of the important aspects of these professions is to compare and contextualise the dream world with the real life of the dreamer, and the broader conditions of the world in which the dream is had. In this essay, I have suggested similarly that the inherently boring, dream-like writing of GPT-3, and the way it can be pushed through the voice of contemporary writers, can be put in tension with the literature of experienced, experiencing authors who feel their way through automatic writing processes, in order to derive new levels of understanding. In this interpretation, GPT-3’s writing forms a baseline language against which we can explore what new literature can be.

The last point is the most important. It is one thing to say that GPT-3 is not a good novel. It is quite another to say that it is not a good novel for the reasons that it is not a good novel. But it is a bad novel for the reasons that it is a good novel, because GPT-3’s writing raises two questions. One is the question of whether GPT-3’s writing is what people mean when they say ‘good writing’. The other is the question of whether there is value in describing a good piece of writing as bad writing. The way the answer to the second is not ‘yes’ or ‘no’ is by making the connection between the two questions, or to see that they are the same question: when people say they do not like GPT-3 as a work of literature, what they mean is that they do not like the novel’s writing…

Nathan Jones

Nathan Jones is Lecturer in Fine Art (Digital Media) at Lancaster University. Exploring the relationship between media and literary practice, his work on unicode, AI, speed readers and distributed networks has appeared at transmediale, Onassis Foundation, Liverpool Biennial, and in the journals PARSE and Media-N. He is co-founder (with Sam Skinner) of Torque Editions, the publications of which include Artists Re:Thinking the Blockchain (2017), The Act of Reading (2015), and BiblioTech (2022).



↑ 1

As Elizabeth Gibney explains: ‘Large language models are algorithms that learn statistical associations between billions of words and phrases to perform tasks such as generating summaries, translating, answering questions and classifying text. Built using brain-inspired architectures known as neural networks, the models train through adjusting values, called parameters, by blanking out words and comparing their predictions with reality. BLOOM has 176 billion parameters, on a par with GPT-3, one of the best-known such models, which was created by the non-profit firm OpenAI and licensed by Microsoft.’ Gibney, Elizabeth, ‘Open-source Language AI challenges Big Tech’s Models,’ Nature, June 22, 2022,
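The training idea Gibney describes—adjustable values tuned by predicting a blanked-out word and comparing the prediction with reality—can be caricatured in a few lines of code. The sketch below is a toy counting-based bigram model on an invented corpus, offered purely as an illustration of ‘statistical associations between words’; it involves no neural network, gradients, or anything approaching GPT-3’s scale.

```python
# Toy sketch of 'learning statistical associations between words':
# count which word follows which, then 'fill in the blank' by picking
# the most frequent continuation. Illustrative only.
from collections import defaultdict

def train_bigram(corpus):
    """Tally next-word frequencies across a list of sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the blanked-out word after `word` from the tallies."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return max(followers, key=followers.get)

# Invented mini-corpus (assumption for illustration).
corpus = [
    "the ship hums in the dark",
    "the ship drifts in the dark",
    "the crew sleeps in the ship",
]
model = train_bigram(corpus)
print(predict_next(model, "the"))  # 'ship' is the most frequent follower
```

A real large language model replaces these explicit counts with billions of parameters adjusted during training, but the comparison of prediction against reality is the same in spirit.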

↑ 2

Tom B. Brown et al., ‘Language Models Are Few-Shot Learners,’ arXiv (2020),

↑ 3

As Tom B. Brown et al. state: ‘GPT-3 samples still sometimes repeat themselves semantically at the document level, start to lose coherence over sufficiently long passages, contradict themselves, and occasionally contain non-sequitur sentences or paragraphs.’

↑ 4

Echoing Percy Bysshe Shelley’s Defence of Poetry: ‘Reason is the enumeration of qualities already known; imagination is the perception of the value of those qualities, both separately and as a whole.’ The full text is available at:

↑ 5

As Andrey Kurenkov puts it, even with ‘a supercomputer with 10,000 GPUs and 400 gigabits per second of network connectivity per server … [GPT-3’s model would take] months to train.’ The training of the model is reported to have cost in the region of US$10-20 million. Exacerbating this cost, GPT-3 has also sparked the training of a flush of similar huge-data models. Andrey Kurenkov, ‘GPT-3 Is No Longer the Only Game in Town,’ Last Week in AI, November 6, 2021,

↑ 6

Brown et al. 2020.

↑ 7

See here:

↑ 9

A Github user gives a link to and outline of BookCorpus here:

↑ 10

See here:

↑ 11

Perhaps ironically, this might change in the future, with a push towards ‘open-access’ publishing internationally, as seen in UKRI’s open access policy released in 2021. UKRI will require that, from 2024, monographs be published open access within 12 months of release:

↑ 12

Rob Toews, ‘A Wave of Billion-Dollar AI Startups Is Coming,’ Forbes, March 27, 2022,

↑ 13

Susan Zhang et al., ‘OPT: Open Pre-trained Transformer Language Models,’ MetaAI (2022),

↑ 14

Steven Zeitchik, ‘We Asked a Computer Program to Imitate Gay Talese’s Writing. Then We Asked Talese What He Thought,’ Washington Post, November 26, 2021.

↑ 15

Nora Khan, ‘Pharmako-AI.’ Tank Magazine (2021).

↑ 16

My use of glamour is influenced here by Mark Fisher in his K-punk blog. Particularly, the use of the term to describe ‘artificial interest in, or association with, an object, through which it appears delusively magnified or glorified.’

↑ 17

K. Allado-McDowell, Pharmako-AI (London: Ignota Books, 2021), p. 69.

↑ 18

For example, see Gigasavy 2014.

↑ 19

Allado-McDowell 2021, p. 70.

↑ 20

Allado-McDowell 2021, p. 71.

↑ 21


↑ 22

Allado-McDowell established the Artists + Machine Intelligence program at Google AI in 2016. According to an interview in WIRED, the opportunity to work with a pre-release of GPT-3 came from a friend working on the program.

↑ 23

See Mark Amerika’s My Life as an Artificial Creative Intelligence (2020) for an analogous experiment with GPT-2.

↑ 24

See Nathan Allen Jones, ‘my monstituces composer: Looking for the Pre-Emergent Social Consciousness of AI in Small Data Literary Synthesis,’ Media-N 16 No. 1 (2020),

↑ 25

Elvia Wilk, ‘All Systems Go,’ Artforum, January 7, 2021,

↑ 26

D.M. Berry, The Philosophy of Software (London: Palgrave MacMillan, 2011).

↑ 27

Wendy Chun, Updating to Remain the Same (Cambridge, Massachusetts: The MIT Press, 2016).

↑ 28

Rosi Braidotti and Maria Hlavajova, The Posthuman Glossary (London: Bloomsbury, 2018).

↑ 29

I pinch the term ‘glitchfrastructure’ from Lauren Berlant (2016) to refer to the mechanics by which writing and reading bodies are latched into the material world anew in moments of experiential literary innovation.

↑ 30

Quoted in Jonathon Sturgeon, ‘2014: The Death of the Postmodern Novel and the Rise of Autofiction,’ Flavorwire, December 31, 2014,

↑ 31

Wolfgang Ernst, The Delayed Present (Berlin: Sternberg Press, 2017).

↑ 32

Christopher Notarnicola, ‘An Interview with Keith Ridgway,’ The Paris Review, July 8, 2021,

↑ 33

Ravn quoted in Kurt Snoekx, ‘“The Employees”: Danish Writer Olga Ravn on the Space Oddities of Work,’ Bruzz, November 18, 2021,

↑ 34


↑ 35


↑ 36

Olga Ravn, The Employees, trans. Martin Aitken (Toronto: Book*hug Press, 2022), p. 1.

↑ 37

Ravn 2022, p. 178.

↑ 38

Jon Fosse, The Other Name, trans. Damion Searls (London: Fitzcarraldo, 2019), p. 56.