Writing Against the Algorithm: AI, Narrative Bias, and Why It Matters

March 2026

At a recent webinar, I raised the topic of AI in writing. The reaction was something like serving a platter of roast beef to a room full of vegans.

The hostility starts with what I’d call the original sin. AI was trained on the words and thoughts of writers in this industry without their consent. Our legal system has largely ruled this fair use—but that ruling hasn’t settled the feeling. It doesn’t feel right, because AI didn’t simply learn grammar and syntax. It metabolized the combinations of ideas and the distinct voices of the people it learned from.

There’s also the fear that AI will eliminate the creative process altogether. That an artist’s voice will be stolen along with their identity, plastered on AI drivel (and we’ve seen it happen). Or that the writer will be replaced entirely. Someone could type a prompt like, “Give me a comedic romance, friends-to-lovers, with a love interest who’s a mix of Tom Hanks and a biker-gang outlaw,” and get a finished story back in minutes.

On the other hand, AI has been part of writing for a long time already. Spell check, grammar check, sentence rewriting suggestions — some form of computer-assisted writing has been with us for decades, and it will only grow more sophisticated.

I’ve played around with AI and found significant limitations. Programs that suggest rewrites of paragraphs or long passages often produce results that are repetitive and lacking nuance—the kind of nuance a good writer brings to a choice of words or a turn of phrase. My current short story, The Death Ledger, relies on exactly that kind of layered, intentional language. Whether AI will ever be capable of that level of emotionally resonant, foreshadowing-driven craft is an open question, but right now it seems a long way off.

There’s another, deeper problem that concerns me more. AI was trained not just on published literature but on the internet broadly — a place full of misinformation, propaganda, and opinion that gets amplified by volume rather than validated by truth. Those who shout the loudest tend to shape what’s most prevalent, and AI learns from prevalence, not accuracy.

I’ve seen this firsthand while working on Hope and Madness. AI consistently tried to push my story toward the most familiar narratives: that mental illness equals instability, that every police shooting is either justified or punished, that every story deserves a happily-ever-after ending. These are the stories we’ve told ourselves so often that they feel like facts. But they aren’t. They’re incomplete pictures. And when AI encodes them into its version of reality, those biases become even more difficult to dislodge … or even scrutinize.

AI is not something separate from us. It is us—our words, our biases, our best thinking and our worst. The reason it encodes harmful narratives about mental illness and policing isn’t a flaw in the machine. It’s a reflection of which stories we have told most loudly and most often. 

That realization has given me greater drive to finish and publish Hope and Madness. If a more complete, rounded perspective on these issues can enter the conversation—even as a minority report—it matters. Not just for today’s readers, but for the future discussions that AI will increasingly shape.

There’s a strange friction in all of this. While AI can be a powerful tool to help edit and polish my writing, I’ve been warned: don’t put your best turns of phrase into AI prompts, because the model will “steal” them through absorption and redistribution. And yet, there’s a compelling argument for deliberately writing for AI — contributing a more complete, honest perspective to the cultural conversation so that it shapes how our biggest problems get framed in the future. Both instincts come from the same fear of a tool that feels out of our control.

The recent fight between Anthropic and the Defense Department makes the future implications feel urgent. Anthropic believes it can build boundaries to contain what it has created — a faith that Frankenstein shared, and that Asimov spent an entire book dismantling. In I, Robot, the robots were given three simple laws. The rest of the story is what happened anyway. There is no reliable way to ensure we get the outcomes we want when we try to encode a moral compass into a machine. It’s hard enough to do with thinking humans.

That’s why walking the tightrope between protecting creative work and expanding the cultural conversation is no longer just a personal instinct for writers — it’s a vital imperative. The stories we tell today are the data that shapes tomorrow’s thinking. Writing from experiences that contradict well-worn truths, that challenge the narratives AI is trained to repeat, may be one of the few meaningful ways we have left to preserve the complexity of our humanity.