“Irony is an opportunity for ambivalence”: Interview with Maya Indira Ganesh about her Book Auto-Correct

In 2025, ArtEZ Press published Auto-Correct: The Fantasies and Failures of AI, Ethics, and the Driverless Car by Maya Indira Ganesh. I talked with Maya about the book — why and how technologies fail, the meaning of ethics within and outside technologies, and the ambivalence that comes with irony (as well as critique). The interview was recorded over Zoom on April 15th, automatically transcribed, and lightly edited for clarity.
In my PhD project, I keep thinking about how one can relate to the fact that algorithmic technologies err and fail all the time. All things fall apart and break down—that much is a truism. Yet, how we choose to make sense of it individually and collectively is a different matter. What initially drew my attention to Maya’s book is how she describes failures of self-driving cars as happening at different scales and moments in time. The idea of error-free technology is thus a dream, and yet not all failures are alike.
Dmitry Muravyov: You mentioned how long this project has been going. For those doing PhDs and turning them into further projects or books, I’m curious: What question or intellectual concern has driven you throughout this process? Was there a thought you kept returning to—like, I need to put this out into the world because it’s important?
Maya Indira Ganesh: There are two dimensions to that. First, in Germany, you have to publish to complete your PhD; it’s not considered ‘done’ until you do. I used that requirement as a chance to turn my thesis into a printed book. Second, since finishing my PhD at Leuphana University, I’ve mostly been teaching. Almost exactly four years ago, as I was handing in my thesis, I was also interviewing for a job at this university. I was hired to co-design and co-lead a new master’s program in AI, Ethics, and Society.
Teaching AI ethics made me aware of what I put on reading lists—how to bring critical humanities and social science perspectives into conversations about technology, values, and AI. I noticed gaps in the literature. Not that I’m claiming to fill those gaps with my book, but there’s a standard set of citations on the social shaping of technology, epistemic infrastructures, and AI’s emergence. Teaching working professionals—people building tech or making high-level decisions—pushed me to ask: “How do I make theory accessible without diluting it?” They wanted depth but weren’t academics. So, I thought, “What can they read that’s not tech journalism or long-form criticism?” That became a motivation.
The other thing I’ve wrestled with is the temporality of academic research versus the speed of AI innovation. It’s about the politics of AI time. A big question asked of AI in general, and driverless cars in particular, is: ‘When will it arrive?’ You don’t ask that about most technologies because, say, a car is tangible—you see it, you know it’s here. But so much AI operates invisibly in the background. Its rhetoric is all about it always being almost here, ‘just around the corner’.
Credit: Maya Indira Ganesh
As an academic, though, timing doesn’t matter—unless you’re under the delusion your work will “change everything,” which, let’s be honest, few believe. But also, no one had written about driverless cars this way. Most books are policy or innovation-focused. I thought, “Why not a critical cultural study of this artifact?”
Dmitry Muravyov: When people talk about regulation, you so often hear metaphors like “we’re lagging behind”.
I’m interested in how technologies fail, and obviously that’s a huge theme in your book—it’s right there in the title. I’ve been trying to make sense of one of your chapters in a particular way, and I’d love to hear your thoughts. You talk a lot about how driverless cars are kind of set up to fail in certain ways, and how all these accident reports are always partial, always uncertain.
But reading Chapter 2, I noticed you sort of map out why these crashes happen, and I think I’ve got three main patterns. First, there’s the human-machine handover failure—like when the human just zones out for a second and can’t take over when they need to. Then there are the computer vision gaps, where the car’s system just doesn’t ‘get’ what it’s seeing—objects just don’t register properly. And third, there’s this mismatch between the car and its environment, where the infrastructure isn’t set up for what the car needs in order to work.
But then you also show how the tech industry tries to deal with these failures, right? For the handover problem, they push this whole ‘teamwork’ idea in their PR — making the car seem more human, more relatable. For the vision gaps, there’s all this invisible data work going on behind the scenes to patch things up. And for the infrastructure issue, they’re literally reshaping cities to fit the cars—testing them in the real world, not just labs.
Would you say these are basically strategies to compensate for the cars’ weaknesses? And do you think it’s mostly the tech industry driving these fixes?
Maya Indira Ganesh: Wow, yeah—that’s such a good summary, and you’ve definitely read the book! [laughs] You’re completely right; this is exactly it.
And yeah, these rhetorical moves are chiefly coming from the tech industry, because they’re the ones who really see these problems up close. But the way they handle it is interesting—it’s like they’re working on two levels:
Making it seem human. At one level, they’re saying, “Look, it’s just like a person!” Whether it’s comparing driving to human cognition, or even calling the software the “driver” for the car’s “hardware”—as the CEO of Waymo does. If you make it feel human, suddenly people are more forgiving, right?
Then there’s Andrew Ng from Baidu, who says, “Hey, this tech is still learning, be considerate—cut it some slack!” Which, okay—but why should I be considerate towards a car? This works because cars feel familiar: cars are anthropomorphized anyway, and are distinctly gendered at that. Cars, like boats, are given monikers and are usually ‘she’. We tend not to do this with an invisible credit-scoring algorithm.
The other move is the strategy of blaming actual humans. This isn’t new. Back in the early days of automobility in the US, before traffic lights existed, jaywalking laws were invented to shift responsibility onto pedestrians for running out onto the street and disrupting the space for experimental automobility and new drivers in city spaces. People were unaware of how this new technology worked and were more familiar with horse-drawn carriages. Rather than regulate cars and drivers, what happened was to blame the human for not crossing the road correctly: “That’s why the car hit you.” There is a similar playbook now: “praise the machine, punish the human,” as Tim Hwang and Madeleine Elish put it—it’s this endless cycle of “Oh, the tech’s fine—you’re the problem.”
Dmitry Muravyov: This process seems to be about adaptation. We humans are fallible beings, but in this context of coexisting with technology, it feels like our failures are the ones that need adjusting—we have to change to fit driverless cars, for instance.
Could we distinguish between more and less desirable types of failure? If we accept that neither tech nor humans can be perfect—that we’re all prone to fail in some way—does that open up new ways to think about these systems differently?
Maya Indira Ganesh: Good question. Actually, I touch on this in the book’s epilogue about the “real vs. fantasy” worlds of technology. When you focus on the real world, you have to confront failure—that breakdown is crucial for understanding how systems interact with human society. That’s why these technologies have to leave their controlled “toy worlds” and enter our messy reality, where they inevitably fail. That failure gives us valuable data about how the system actually works.
But here’s the tension: By dwelling in the fantasy of what the technology could be—that idealized future where everything works perfectly—we avoid grappling with its real-world flaws. The driverless car is interesting because it’s too tangible for pure futurism—you can’t pretend its failures are just “speculative risks” like you might with AI doom scenarios. Yet even with AVs, there’s still this tendency to say “Oh, the real version is coming later” to deflect from today’s problems.
So, in short: If we obsess over the technology’s potential, we don’t have to account for how it’s actually failing in material, accountable ways right now.
Credit: Maya Indira Ganesh
Dmitry Muravyov: Is it possible to envision technologies that recognize their intrinsic fallibility and try to account for it? Maybe in certain ways, rather than others, as your discussion of existential risk shows.
Following up on that, you discuss ethics in the book so well. You interrogate the assumptions and limitations of machine ethics, showing how it localizes ethics within computational architecture, making it a design problem to solve. I love how you describe it: “the statistical becomes the technological medium of ethics”—and you contrast this with “human phenomenological, embodied, spiritual, or shared technologies for making sense of the world.” Could you talk more about this opposition?
Maya Indira Ganesh: I think machine ethics is really interesting because it’s such a niche field that people don’t talk about enough. But it actually does a great job of showing what people are trying to do when they try to embed values into machines—to make decisions that align with certain ethics. The thing is, this approach works at small scales, not for complex systems like driverless cars in cities.
Of course, we want that in some cases—like removing violent extremism or child pornography online. That’s clear-cut. But then you get into nuances: What if it’s a GIF mimicking a beheading, but with no real-world groups or ideologies attached? Suddenly it’s not so simple.
The problem is, machine ethics—and a lot of tech ethics—assumes technology can be totalizing, seamless. We don’t want to deal with breaks or failures, or messy systems talking to each other. Right now, every wave of digitization just gets called “AI.” For 15 years, we’ve had digitized systems working (or not working) in different ways—now AI is being patched on top, often in janky ways.
Take public sector AI in the UK—there are a number of projects trying to apply LLMs to correct doctors’ note-taking, to make casework more efficient. But this is just responding to earlier failures of digitization! We have PDFs that were supposed to make documents portable, but now we’re stuck with stacks of uneditable forms. Every “solution” creates new problems.
So maybe we shouldn’t even call it “ethics” anymore. What we really need is to ask: What values are driving our societies? Efficiency? Profit? Innovation? These are ideological choices that get normalized. The point of my book is that ethics can’t just live inside machines—we need to ask how we want to organize our cities and societies, with all their messiness. Maybe LLMs could help facilitate those conversations, rather than pretending to be the solution. But we’re still figuring that out.
Dmitry Muravyov: You position ethics in two ways. On the one hand, as something technological and localized within computational architecture (the machine ethics project), and, on the other hand, as something more embodied and societal.
You seem to criticize machine ethics for not being “ethics” in that fuller sense. But now I’m wondering—are you actually saying that machine ethics can serve a purpose, we just shouldn’t call it “ethics” to avoid confusion? Would that be accurate?
Maya Indira Ganesh: Yes, exactly. The framing of “ethics” hasn’t helped us reckon with what kind of society we want to build. It either gets reduced to designing machines that mimic human decision-making (as if machines could create the social through their choices) or becomes corporate self-regulation theater, which we’ve seen fail as companies discard ethics when inconvenient.
Now, I’ll admit: Terms like “ethics” do have power. When you call something unethical, it activates people—no one wants that label. But we’ve overused these concepts until they’re hollow.
But here’s the key point: People are remaking society through technology—just not with “ethics” as we’ve framed it. Look at the U.S., where companies can now ignore AI safety under Trump. This isn’t about not caring—it’s about competing visions of society.
The Elon Musks and Chris Rufos have very clear ideologies about the world they want. And that’s what we need to confront: Not “ethics” as a technical problem, but the values and power struggles shaping our technological future.
So yes—we need value discussions, just not under the exhausted banner of “ethics.”
Dmitry Muravyov: There’s a contrast in your reply between the ethical and the social that I want to explore further. Let me bring in my own experience. I teach technology ethics courses to engineers and computer scientists. I’ll play devil’s advocate a bit here, because while your book offers strong (and often justified) criticism of engineering ethics, I want to push back slightly.
That emphasis on individual responsibility you critique—it’s a weak point. Students tell me (or, more often, I imagine this is something they could tell me): “These ideas are nice, but eventually I’ll need a job, a paycheck, and I’ll have defined responsibilities within an organization.” Many so-called “ethical” issues in tech may be better addressed through labor organizing and unions than through ethics courses.
But to defend ethics—even when we acknowledge how socially determined our positions are, there’s still an ethical weight to our decisions and relationships that doesn’t disappear. How do you see this tension between the social and ethical? Do you view ethics as having any autonomous space?
Maya Indira Ganesh: That’s a really good question, and it connects directly to what I was saying earlier. In teaching AI ethics to engineers, policy makers, even defense department staff, the core problem is treating ethics as something separable from the social, something we can formalize into machines. That’s why machine ethics fascinates me—it embodies this flawed approach.
Everything meaningful requires context. It resists automation. To your student’s dilemma—yes, we’re socially constrained, but there’s no substitute for personal reckoning. There are forms of social inquiry and ethical engagement that can’t—and shouldn’t—be automated.
This connects powerfully to Nick Seaver’s work about music recommendation algorithms. He studies these engineers who pride themselves on crafting “caring,” bespoke algorithms—until their startups scale. Suddenly, their intimate knowledge of musical nuance gets replaced by crude metrics and automated systems. What fascinates me is how they cope: Seaver finds that they perform this psychological reframing where the “ethical” part of their work migrates to some other more manageable domain so they can stomach the compromises required by scale.
Credit: Maya Indira Ganesh
Dmitry Muravyov: If ethics has to be somewhere, it can be in many places. What is the place for ethics in this particular time and space?
The last thing I wanted to discuss was the irony you explore. The way I made sense of it was seeing the “irony of autonomy” as a type of technological critique. Often, the traditional critical move is one of suspicion—unmasking what’s actually going on under the hood. In technology studies and the humanities, we’ve seen rethinkings of critique—reparative critique, diffractive critique, post-critique.
But irony seems different. When I first read your piece introducing irony in the book, I caught myself smiling—it sparked something in me. How do you see this use of irony in relation to the history of technological critique? Especially given your earlier piece with Emanuel Moss about refusal and resistance as modes of critique.
Maya Indira Ganesh: The “irony of autonomy” (playing on Lisanne Bainbridge’s work (1983) on the ironies of automation) was my way of historicizing these debates, showing how we’re replaying similar responses to automation today. We perform this charade of pretending machines act autonomously while knowing how deeply entangled we are with them.
Over time, I’ve struggled with that irony, albeit not in a bad way. It connects to a melancholia in my other writing about our embodied digital lives, especially around gender and technology. There’s a strong cyberfeminist influence here—this Haraway-esque recognition of how technologies shape gendered existence.
I don’t think we’re meant to resolve this tension. Like Haraway and cyberfeminists suggest, we need to sit with that discomfort. Disabled communities understand this deeply—when you rely on technologies for basic existence, you develop a nuanced relationship with them. There’s no clean ethical position.
A disabled colleague once challenged me when I asked if she wanted better-functioning tech: “Actually, no—if it works too smoothly, people assume it always will. The breakdowns create necessary moments to see who’s being left out.” In our resistance and refusal piece with Emanuel Moss, we were pushing back against overly literal critique. Resistance gets co-opted so easily—tech companies now use activists’ language! Refusal offers complexity, but isn’t a blueprint. You can’t exist outside these systems.
Irony is an opportunity for ambivalence; it is a politics of not turning away, while refusing to ever be fully reconciled with the digital.
Dmitry Muravyov: Sometimes I think when certain critical moves—like undermining or unmasking—are presented to audiences without humanities backgrounds, such as computer science students, you can get this response where it feels like you’re taking the joy out of their work.
What I appreciate about irony as an alternative is that it lets people chuckle or smile first. Maybe through that smile, they can think: “Hey, maybe we shouldn’t automate everything.” That moment of laughter might plant the seed for a more ambivalent attitude.
Maya Indira Ganesh: Actually, I think critique has become largely about exposing corporate capture—it’s tied up with legal and regulatory battles now. I get this from friends and colleagues sometimes: “You’re not being hard enough on this.” But why can’t computing be fun? It is fun for many people. It creates beautiful things too.
That’s why I want that ambivalent space—to sit with both the problems and possibilities. If we open up how we think about our relationships with technology and each other… maybe we can make something different.
Dmitry Muravyov: There can still be joy at the end!

Biographies
Maya Indira Ganesh is Associate Director (Research Culture & Partnerships), co-director of the Narratives and Justice Program, and a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence (CFI). She draws on varied theoretical and methodological genres, including feminist scholarship, participatory methods, Media and Cultural Studies, and Science and Technology Studies, to examine how AI is being deployed in public, and how AI’s marginalised and expert publics shape the technology.
Dmitry Muravyov is a PhD Candidate at TU Delft, working in the AI DeMoS Lab. Drawing on philosophy of technology, STS, and media studies, he currently focuses on the political and ethical issues of algorithmic fallibility, a collectively shared condition of living with technology’s breakdowns, failures, and errors.