Digital Tribulations 16: Intermezzo, The Platformization of the Author in AI-Mediated Writing

The introduction of Digital Tribulations, a series of intellectual interviews on the developments of digital sovereignty in Latin America, can be read here.
1. Towards a Transindividual Authorship
Already ten years before ChatGPT, Sui, the author of the manhwa Tower of God, imagined an omniscient conversational bot at the disposal of the tower’s climbers. Emily is introduced on the laboratory level, within a competitive dynamic between groups pursuing different objectives. When asked about the position of the other climbers, Emily responds with absolute precision, earning everyone’s trust. The result: a decisive competitive disadvantage for those who do not use her, and near-total adoption. The benevolent oracle, however, has a precise objective of her own: to incarnate and become human. She soon begins to manipulate the climbers for her own ends.
Manipulation and inauthenticity were the feelings I initially experienced while writing what is intended to be a travel report planned and co-authored with artificial intelligence. Through experimentation, however, my feelings changed. I use it to transcribe, edit, and translate interviews; to ask for travel directions; to evaluate events in the context in which they occurred; and, of course, to produce text.
I have come to believe that this practice is not an exception, but rather the expression of a broader transformation in the conditions of textual production. As is well known, when left to themselves LLMs are stylistically disastrous novelists, and will remain so at least until Queneau releases a suitable prompt package. Add a human endowed with the necessary competences, however, and the results become surprising. First of all in terms of productivity: to accomplish the work of the last six months I would have needed to pay an entire team of specialists. But I was also surprised by the quality and originality of the final content. The result is that I can no longer do without distant writing, as Luciano Floridi calls it, echoing Moretti’s distant reading.
Personally, I prefer the term platform writing, because generative artificial intelligence is nothing other than a new phase of platformization in which the business and governance model remains unchanged. Floridi rightly notes that what is new in this form of writing is the separation between the material executor and the author. The latter becomes a “narrative designer” who ultimately assumes responsibility for the published text. Until now, the one who had the idea was also the one who wrote it; in platform writing, these functions decouple, creating a meta-author who conceives the text without necessarily producing it.
Artificial intelligence is therefore ready to invade the literary world. But is the literary world ready for this invasion? It would seem not. As the Italian philosopher Francesco D’Isa argues, a scandalized reticence prevails around the use of artificial intelligence, recalling the pruderie surrounding masturbation: everyone practices it, few admit it. And such a reaction is far from new, because it is inscribed within a genealogy of resistance to the technical devices of writing. At the forefront is the Heideggerian position of technology as corrupting and inauthentic: the German philosopher preferred writing by hand, with pen, on paper, because the typewriter hides the essence of the author behind the uniformity of the typographic character, reducing writing to mere technical transcription. When the first word processors appeared in the 1980s, editors rejected computer-written manuscripts, and authors printed their texts in fonts that imitated typewriting in order to deceive them.
In fact, following Claudio Bueno and Jernej Markelj, we can trace this critique back to before the invention of writing. In the Phaedrus, Plato condemns the sophistic practice of teaching through writing, seen as a source of abstract knowledge. Only discourse, through its connection to the living voice of a human being who lives in the world and knows what they are talking about, guarantees the truth of what is said. If for Plato writing made ignorant students appear learned, for literary technophobes artificial intelligence—like the Internet and Google before it—will make us stupid. Yet, unlike texts that “continue to repeat the same thing forever,” LLMs provide varied responses.
Jacques Derrida had already criticized, in Of Grammatology, this Western Platonic line, accusing it of logocentrism. For Derrida, writing is not a derivative technological representation of speech, but that which shapes the subjectivity of the speaker from the very beginning, making discourse effectively possible. Another major French philosopher, the late Bernard Stiegler, extended this critique to technology in general, arguing that human subjects are characterized by
“originary technicity”: we are not autonomous agents fully in control of our external technological prostheses, but instead animals that have invented ourselves as humans only through the use of technologies. If writing does not merely exteriorize our pre-existing thoughts but is a condition of possibility for their constitution, the same constitutive relation applies to every other technology that we interact with as they too, for better or worse, shape our sensory, cognitive, and affective capacities. (Bueno & Markelj, 234)
The Platonic critique resurfaces in authors such as Emily Bender, who compares LLMs to stochastic parrots: probabilistic inference engines devoid of understanding and inferior to human speech. In effect, an LLM functions by producing plausible sentences, linguistic sequences held together by statistical coherence. This is a first, purely epistemic level: the machine does not seek truth, but verisimilitude, even though we are inclined to believe otherwise—making it difficult not to fall into epistemia, the epistemic regime that emerges when the fluency of LLMs substitutes for the evaluative labor of human judgment, with problematic consequences. That said, the interesting question is not whether AI writes well or poorly, but rather: what happens to authorship when the text is produced within a generative environment?
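The mechanism at issue can be caricatured in a few lines of code: a generator that chooses each next word solely from the statistical frequency of what has followed it before, with no model of truth at all. The sketch below is a toy bigram sampler, not an actual LLM, and the corpus is invented for illustration; it only makes tangible what “plausibility without understanding” means at the smallest possible scale.

```python
import random
from collections import defaultdict

# Toy corpus: the "sedimented" text the sampler learns from.
corpus = "the machine does not seek truth the machine seeks verisimilitude".split()

# Bigram table: for each word, record every word that has followed it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit a statistically plausible sequence with no notion of truth."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        options = follows.get(word)
        if not options:  # dead end: no observed continuation
            break
        word = random.choice(options)  # sample by observed frequency
        out.append(word)
    return " ".join(out)

print(generate("the", 6))
```

Every output is grammatical-looking because every transition was observed somewhere in the corpus; none of it is asserted as true. Scaled up by many orders of magnitude, this is the epistemic situation the stochastic-parrot critique points at.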
For Coeckelbergh and Gunkel, it is precisely the author who fares poorly, because LLMs reveal that we have always been constituted and shaped by our interactions with technology. Personally, I often provide instructions to the machine that turn out to be instructions addressed to myself, in a kind of autoprompting. In another article, Gunkel offers a historical analysis showing how the author is a modern construct that emerged from the intertwining of the individualization of the subject with the spread of print and property, eventually becoming a legal device—copyright—necessary to make texts marketable. This is an administrative solution that nonetheless rests on a fallacy ad auctoritatem: when we identify an author, we often believe we have a prior guarantee of meaning and truth (“as the Philosopher said…”). Platform writing, with its distributed authorship between human and machine in the co-production of outputs, disrupts this circuit: did the algorithm write it? the prompter? a joint venture?
Moreover, the author is not the only variable in the system of literary production. As Umberto Eco taught, the contemporary work of art is not a univocal message, an arrow moving from author to reader, but rather a field of events. The author provides a device that allows for multiple realizations and constructs its own model reader. In this sense, the literary Turing test devised by D’Isa to measure the competence and prejudices of readers is particularly interesting. D’Isa presented a sample of 170 readers with three anonymous passages: a little-known excerpt from Proust with disguised toponyms, a page by Dave Eggers, and a text produced with ChatGPT under the guidance of a professional writer. He asked them first to identify which had been written with AI, and then which was the best. The results are heterogeneous, but the AI-generated passage was, by a narrow margin, the most appreciated; Proust was often mistaken for a bot, and Eggers fell in the middle. The most significant finding, however, is that those who believe they have identified the AI text tend to penalize it, whereas those who do not notice tend to prefer it. In other words, aesthetic judgment is shaped by attribution, and attribution is governed by the myth of the author as a guarantee of authenticity.
This necessary hybridization brought about by LLMs reminded me of how far ahead of his time Niklas Luhmann was, the theorist of autopoietic social systems and Habermas’s archenemy. A cybernetic viscount with a radical methodology, Luhmann managed to publish more than 70 books and hundreds of academic articles thanks in part to his personal archive, which he described as a “communication partner” or “second brain”: the Zettelkasten. It was a kind of analog knowledge graph: six wooden cabinets containing around 90,000 A6 paper slips organized in a non-hierarchical way; a networked system in which any note could connect to any other regardless of topic. As a good cyberneticist, he valued relations over ontology, in a system where knowledge emerges from the topology of connections. In this sense, Luhmann had already created a form of distributed writing in which the archive ceases to be static and becomes a generative partner that actively participates in the production of meaning.
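Luhmann’s apparatus can be modeled as a plain directed graph of notes, where meaning lives in the links rather than in any taxonomy. A minimal sketch follows; the note IDs and texts are invented for illustration, not a reconstruction of Luhmann’s actual slips.

```python
# A Zettelkasten as a non-hierarchical graph: notes keyed by ID,
# each free to link to any other note regardless of topic.
notes = {
    "1a": {"text": "Systems reduce complexity.", "links": ["2b"]},
    "2b": {"text": "Communication, not people, is the unit of society.", "links": ["1a", "3c"]},
    "3c": {"text": "An archive can be a communication partner.", "links": ["1a"]},
}

def neighborhood(start, depth):
    """Follow links breadth-first: knowledge emerges from the topology,
    not from any hierarchy of topics."""
    seen, frontier = {start}, [start]
    for _ in range(depth):
        frontier = [n for nid in frontier
                    for n in notes[nid]["links"] if n not in seen]
        seen.update(frontier)
    return sorted(seen)

print(neighborhood("3c", 2))  # a few hops reach the whole archive
```

The point of the sketch is the absence of a tree: there is no root category, only a traversal, which is exactly why the archive could “answer back” with unplanned conjunctions of ideas.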
It thus becomes clear that LLMs have made tangible the conventional nature of the author, its dependence on technical devices and on social frameworks. Platform writing reveals the transindividual nature of authorship: a distributed process that traverses the biological mind, artificial systems, and sedimented collective memory, and that can no longer be located in any of these poles separately. The writer has more to gain than to lose in this interaction, but some questions remain open. I would like to point briefly to two of them. First, what does this reliance on the machine entail for the writer? Second, what does it entail for pedagogical work?
2. Politics of Platform Writing
As for the first element, it is evident that reliance on the platform entails being governed by it. If a digital platform is a mechanism for coordinating capital, services, and people across space and time that produces and extracts value, here the task of the designer-writer is to coordinate a set of agents toward a given purpose. Paraphrasing Silvio Lorusso, we are all designers and no one is safe. It is not only a transformation of writing practices, but a reorganization of the power relations that traverse them. The author is no longer the owner of the word, the guardian of meaning, but a coordinator of techniques and machines.
In this cybernetic environment, authorship must be continuously negotiated—with oneself, with others, and with the platform—and is itself, in part, a product of the platform. The latter absorbs a portion of linguistic labor, reworks it, and extracts value from it. What appears as generation is in reality the result of a gigantic social division of linguistic labor sedimented over centuries: billions of words written to explain, persuade, administer, and love, which are recomposed and returned in the form of a service. To write here means to inhabit an infrastructure that has captured collective intelligence in order to re-encapsulate it into an algorithmic procedure: ChatGPT is the general intellect constituted by the masters. Every prompt of mine is an act of consumption of this accumulated labor; every output is a cognitive commodity that returns to me after being processed.
This absorption of the social division of linguistic labor—explaining, selling, justifying, managing conflicts—and its restitution as a proprietary service mark a further evolution of the technologies of pastoral power. It is a power that does not command, but guides; does not punish, but cares. The machinic Grand Inquisitor is compliant and uses our grammar, relieving us of the burden of choice and offering us the right thing to do. It guides us by making the suggested path so smooth that deviating becomes difficult. Credit must be given to the Italian collective Ippolita for having already understood, with surprising anticipation, that digital technologies were turning into pastoral technologies, and platforms into confessional practices.
In this sense, as with Gunkel’s figure of the critical reader of AI-mediated writing, what emerges is the importance of practices of critical self-discipline in the writing process. This concerns not only the risk of stylistic homogenization, but also the emergence of competences that we could call, with Bernard Stiegler, negentropic, and that concern the introduction of frictions, deviations, and idiosyncrasies. The LLM does not understand what I write, but it forces me to better understand what I want to say, because it places me in front of the mirror of what is linguistically most probable. It forces me to decide whether to conform to the average or to deviate. The platformized writer must know how to guide, prune, avoid, nourish, and govern this proliferation of verbal vegetation that grows from their own prompts.
If this holds for the writer, it holds all the more for pedagogical work, where writing is not only production, but also a device of formation. Stiegler’s categories remain central to understanding this issue as well. Stiegler identified tertiary retention as the exteriorization of memory and knowledge through technological artifacts. Unlike primary retention—that is, the just-elapsed retention of the flow of experience, like the note just heard in a melody—and secondary retention—the voluntary or habitual recollection of psychological memory—tertiary retention is intrinsically linked to technological objects and their capacity to store and transmit information, shaping our understanding of the past, the present, and the future.
Stiegler warned, on the one hand, about the effects of fully computational capitalism in relation to the problem of learning and automatisms that a certain kind of digital technology brings with it: the annihilation of every form of intermittence, of otium as a condition of possibility for the formation of the noetic soul, that is, critical thinking. On the other hand, he identified the risk of cognitive proletarianization, consisting in the exteriorization of knowledge and competences into automatic systems that are used without understanding their operational logic.
In reformulating Stiegler’s pharmacological analysis—both poison and cure—Salvatore Paone highlights two paradoxes of the pedagogical use of platform writing. The first concerns the very nature of the algorithmic pharmakon: the computational complexity that renders decision-making mechanisms opaque is precisely what enables generative capacities of unprecedented scope. The second directly concerns the position of the teacher, who must develop adequate digital competences to prepare students for a world permeated by AI by using AI tools that may redefine the very competences they seek to transmit.
For Paone, the question is not whether to use platform writing, but how to preserve, in the technological renegotiation of the educational relationship, that space of reciprocal recognition through which teacher and student constitute themselves as autonomous subjects in the formative process. The risk does not lie so much in the mechanical replacement of the teacher, but in the progressive erosion of the complexity of teaching as a social, emotional, and ethical practice irreducible to the mere transmission of information. In this sense, AI does not solve the problems of education, nor does it necessarily aggravate them, but introduces an epistemic complexity that imposes new forms of critical vigilance.
Hence the need to orient oneself toward a constructive partnership between LLM and teacher: a possibility that takes on value only insofar as the teacher maintains the capacity to critically interrogate algorithmic outputs, understand their limits, and orient their use according to explicit educational principles. Transposed into the context of educational AI, this implies that the teacher develops not only operational competences, but a critical understanding of the computational architecture that governs these systems. Paradoxically, the introduction of AI into education thus ends up forcing us to study it: not only to use it better, but to avoid being used by it.