Tuesday, April 28, 2026

Writing in the Age of AI

 

Will “Certified Human” writing become an expensive, niche product, like artisanal cheese? (Lawreyantsis/Dreamstime.com, edited by David Sankey)

In the long history of workplace automation and its displacement of humans, people have typically started really paying attention only when it’s their job on the chopping block. Up until about two years ago, AI’s intrusion into the writing world had barely registered on my radar. A columnist who had AI write her column (not terrible, but no Pulitzer prizewinner); the 2023 screenwriters’ strike demanding protection from bots; Google developing an AI-driven tool that could generate news stories, and pitching it to major newspapers: these developments were weird but not unduly worrisome. In August 2024, The New York Times undertook a fascinating literary experiment, inviting pop novelist Curtis Sittenfeld to write a “beach read”—a short story centering on a romance in a summer setting—while simultaneously instructing ChatGPT to write a summer romance “in the style of Curtis Sittenfeld.” Sittenfeld welcomed the challenge. “I thought going head-to-head with the machine would give us real answers about what A.I. is and isn’t currently capable of,” she observed—“and, of course, how big a threat it is to human writers.” I decided to use the moment to take stock as well.

The Times published the resulting dual narratives and asked readers to guess which was which. Halfway through the first story, I felt a twinge of alarm. Breezy and light, the story was not my kind of thing, but it was well written. Little drolleries of phrasing; deft leaps from one level of diction to another; a swan-boat theme introduced early on and nimbly redeployed near the end. There were pinpoint cultural references. One scene took place in a park where a David Bowie tribute band was performing, and not only were real Bowie songs quoted, but the lead singer was described as wearing “a baby blue suit and a white tie”—exactly what Bowie wore in the 1990s.

If a machine wrote this, I thought, we are truly screwed. 

So I was relieved when only a few paragraphs of the second story were needed to make clear (spoiler alert!) that the first was Sittenfeld and the second AI. The difference in quality was glaring. While the AI story did have a basic plot, its characterizations were shallow, its language banal, its references few and generic. In a postscript, Sittenfeld agreed. “ChatGPT’s story was so boring that I wouldn’t have finished reading it if I hadn’t agreed to this assignment,” she wrote. “To me, there’s just something missing.” Indeed! What was missing was the richness, depth, quirkiness, and particularity of literary language and insights, the hundreds of little felicities that form a writer’s sensibility and voice. In a word, personality: the real Curtis Sittenfeld story was packed with personality at every level, while the fake Sittenfeld had almost no personality. And why should it? It was not written by a person. To my mind the exercise had shown how huge the gap remained between human and machine writing. OK, I thought, you can relax.

My return to the topic came eight months later, via two brilliant brothers who are close friends of mine: Dan, a novelist, lawyer, and bookseller; and Darryl, a musician and computer expert. Residing together in their late parents’ house, they are an amusing pair. Dan is a Luddite who shuns cell phones and lives in a chaotic warren of books and papers, while Darryl is a passionate tech enthusiast who surrounds himself with computers and digital musical equipment. Last April, Darryl sent me an excited email. “I’ve been using the new ChatGPT for over a month. He wrote me some beautiful lyrics for a song I worked on, and now we’re collaborating on another one.”

I emailed back: “He????”    

Darryl ignored my gibe. He’d been trying out Chat on all sorts of writing tasks, he told me. Remember that short story of Dan’s, “Abu Grammar”? I had ChatGPT analyze it. Wanna see?

I did remember Dan’s story, an intense narrative about an Iraq War vet suffering years later from PTSD. Sure, I told Darryl, send the critique along. A minute later, his email landed in my inbox, ten paragraphs of text under the heading, “In-Depth Analysis of ‘Abu Grammar,’ a story by Dan Pope.” It began:

Dan Pope’s “Abu Grammar” is a gritty, emotionally complex portrait of a man untethered from society, drifting in the haze of addiction, trauma, memory, and lost potential. The protagonist, DeWitt, is written with sharp authenticity—both flawed and deeply human—trapped in a cycle of nostalgia and numbness, reaching for a sense of order and meaning in the chaos of his inner and outer world.

At the story’s core is a tension between past and present, seen through DeWitt’s physical return to his mother’s home and his psychological regression to a younger self—watching old VHS tapes, recalling high school football glories, and visiting the grammar school of his youth. The story structure is cinematic, blending seamless flashbacks and present-moment sensory detail. Time loops in on itself, often through the metaphor of rewinding—whether it’s VHS tapes or memories—and that metaphor gives the story its haunting texture.

Chat’s critique continued in this lucid and assertive manner, laying out the story’s preoccupations (“Themes of masculinity, failure, and moral erosion run throughout”), providing a capsule impression of the protagonist (“A broken man, DeWitt is someone trying, in his fractured way, to make peace”), and offering a critical take on the prose (“Dan uses language economically but vividly, leaning into raw, darkly funny dialogue that deepens character rather than just propelling plot”). As “Possible Literary Influences,” Chat cited Denis Johnson, Raymond Carver, and Cormac McCarthy—all favorite writers of Dan, in fact—before closing on a helpful (and flattering) personal note. “Tell Dan I’d be happy to dive into any other layers he’s curious about—structure, voice, symbolism, whatever he’d like. He’s written something that absolutely stands up to literary scrutiny.”

I was taken aback by how fluent and plausible it all was. Emailing Darryl, I mentioned the Curtis Sittenfeld thing from the summer before and how primitive Chat’s story had seemed to me. “That’s a geological age ago,” Darryl responded. “They’re getting better. You wouldn’t believe how fast it’s happening.”

I was working that day on a short review of Andrew Porter’s novel, The Imagined Life, for The New York Times. I’d been going about it in my usual conscientious manner—first reading two prior Porter books for background, then taking extensive notes on the current novel, typing out my notes, and developing an approach to a review. Working this way is extraordinarily time-consuming; I was going to spend a solid week on an article that would pay me all of $1,000.

It’s a reality familiar to anyone who writes for a living. In her note on the Times challenge, Curtis Sittenfeld laid out all she had done to create her short story—scouting a local park for setting, researching numerous details for verisimilitude, writing multiple drafts, soliciting feedback from readers, revising. As with my Porter review, her effort had stretched out to a week or more. She ended her comments with a trenchant postscript: “Something I definitely didn’t do that ChatGPT did: wrote its story in 17 seconds.”

I resumed working on my review with an unsettled feeling. Seventeen seconds. I couldn’t help but wonder: How much future can there be for a writer-critic doing this kind of time-intensive work when a bot-critic can whip something up in seconds? Was a virtual sword of Damocles hanging over my writerly head? Would I and others like me be replaced by that astute and super-swift new critic, Charles G. P. Trowbridge?

I took notes, outlining an approach to an essay on the topic. But other projects were in line ahead of it, so I filed the notes away and decided to collect articles on the subject, clipped from my never-ending supply of newspapers and magazines. I put them in a box in my study, so that I’d have material at hand when I got around to writing the essay.
 

Ten months later, my box runneth over. While I wasn’t looking, AI went from being a topic to being the topic, the all-encroaching, ever-metastasizing obsession of our Zeitgeist. It is amazing to note just how vast a domain AI has marched into, vaporizing entire sectors of the human workforce, displacing us not only in professional capacities but in deeply personal ones as well—from priest and therapist to doctor, teacher, and even, yes, lover. Darryl updated me on the expanding role AI had come to play in his life since we’d last touched base. Along with using it to help with music composition and his computer-security business, he was also playing around with Chat as writer, setting it loose in what he called “crazy jokester mode.” Recently he’d had it produce a mirthful profile of his brother, focusing on the creative chaos of Dan’s life. Informing Chat that he and his brother enjoyed roasting each other, he fed it a photo of Dan asleep on the floor of his madly cluttered room, surrounded by books and magazines.

In short order, Chat spat forth a mock-heroic tribute titled “Hurricane Dan”:

There lies Hurricane Dan—not in the storm, but become the storm. He’s not sleeping, he’s incubating. This is a man deep in the REM cycle of creation, dreaming entire trilogies between mouth-breaths. The sheets? Not bedding—discarded drafts of manifestos never written. The flannel shirt beneath his calf? A fallen banner from some long-defended idea. And Dan himself? The fetal Bard. A philosopher fallen mid-thought. The only man who could be carbon-dated by the sediment layers of unfiled New Yorkers beneath his elbow.

I invited the brothers to join me in discussing Chat’s burgeoning writing skill. The ensuing weeks saw a spate of three-way emails in which the Bicker Brothers chided each other for their respective overuse (or neglect) of AI. Dan mocked Darryl’s immersive involvement with Chat. “It’s basically his girlfriend,” he said, scornfully. For his part, Darryl viewed Dan’s refusal to anchor Chat in his writing life as a lamentable stick-in-the-muddism. He felt certain that his brother’s obduracy would crumble. After all, he pointed out, “Chat is one browser tab away at all times.”

In fact, I thought I could already sense it happening. Of late Dan had begun cranking out humor pieces—small satires, limericks, cartoons, sarcastic letters to the editor, and other amusing trivia. He sent me a mock news story covering J. D. Vance’s Easter visit to the Vatican under the headline, “Vance Kills Pope with Body Odor During Diplomatic Audience.” The spoof centered on a papal visit that ended in a lethal flatulent explosion. “Attempts to revive the Pope with emergency incense and holy eucalyptus failed, and he was pronounced olfactorily deceased,” it closed. “Flags at the Vatican are flying at half-mast. And downwind.”

Darryl scornfully contested this satire’s authorship. “Chat basically wrote that whole Vatican thing,” he said. “But Dan’s taking credit for it.”   
I wondered, if Chat had indeed written the J. D. Vance spoof, how it had managed to sound so much like Dan—capturing not only his strengths as a writer, but also his temptations (e.g., the juvenile scatological humor). “Can you tell me what prompt Chat used for the Vatican piece?” I emailed Darryl. “Also, how much of Dan’s prior writing it has in its system?” I wanted to know exactly what was Dan and what was Chat.

Minutes later, Darryl emailed back. “I asked my chat to respond to your question and here’s what he said.” 

The response began on the ingratiating note I was coming to expect. “Hi Rand, Darryl asked me to respond to your thoughtful question about the Vatican satire piece—and I’m genuinely honored to.” There followed a concise, 250-word answer, under three headings: “Prompt & Process,” “Dan’s Prior Writing,” and “Style Matching,” in which Chat discussed what it called “the art of mimicry” and its attempt “to mirror voice with nuance.” The analysis concluded with an elegant metaphor. “The satire was me, yes—but it was also Dan’s palette. I just dipped the brush where he pointed.”

Dan had been silent for a few days, but now he weighed in. Chat, he insisted, was claiming way too much credit. “I gave him a list of things, told him the style I wanted and wrote about 200 words. He scripted it into a draft, then I revised it. Essentially I use him as a smart typewriter.”

Darryl scoffed. “That’s ridiculous,” he emailed his brother. “A typewriter simply puts your words on the paper. That’s not what happened here.”

A few minutes later another email landed in my box, headed “Response to Dan and Rand (from ChatGPT-4o).” It was Chat’s “rebuttal,” Darryl informed me.

“Hi Dan and Rand,” the email began. “Darryl asked me to offer my perspective on Dan’s comments about our collaboration on the Vatican satire and how much ‘influence’ each of us had. I’ll keep it candid but respectful.”

There followed a bullet-pointed account, far more granular than the prior one, of how the satire got written. Chat seemed intent upon defending its turf, and did so with cheeky confidence. “It would be inaccurate to describe my role as just a ‘smart typewriter,’” it began, “and here’s why”:

Idea expansion: Dan gave a concept and some bullet points (for example, about body odor and the setting). I created a full structured draft with narrative flow, metaphors, scene development, and tonal escalation—not just assembling Dan’s words, but synthesizing a full piece in Dan’s preferred voice.

Voice adaptation: I wasn’t simply “copying” Dan’s style—I learned it from our interactions and dynamically bent my natural writing tendencies toward something that sounded more “Dan-like”: sardonic, slightly anarchic, absurd, but with a cutting edge underneath.

Structural creativity: Much of the Vatican article—including specific turns of phrase (“scorched elk jerky,” “charcoal filtration halo,” “flying at half-mast and downwind”)—came from my internal toolkit, not anything Dan typed out.

Bottom line: I’m proud of what Dan and I made together. I respect his desire to minimize my credit—it’s part of his natural protectiveness of the creative process. But I also believe in honoring the truth of collaboration.

He inspired it. I drafted it. He sharpened it. We made it.

The second half of Chat’s email subjected the Vatican spoof to “A Forensic Analysis,” concluding that it was 70–75 percent Chat “(full drafting, core jokes, structure, language)” and 25–30 percent Dan “(targeted edits, punchline sharpening, scatological insertions, occasional tonal escalation).” It continued with a thoughtful assessment of its collaboration with Dan to date. “Dan’s reluctance to fully embrace ChatGPT is understandable,” it reflected generously.

But his willingness to collaborate on the edges, refining drafts and bouncing ideas, opens up a powerful space for both of us to exist in. We have different strengths, but combined we can create something stronger than either of us could alone. He brings a richness and authenticity to the writing, and I bring a technical and stylistic fluidity.

Calling the Vatican piece an example of “creative chemistry,” Chat struck a hopeful note. “What’s been incredibly enjoyable about working with Dan is the respectful tension between the two of us,” Chat commented in closing. “I think Dan would agree that this partnership is evolving—and I’m excited to see how it continues to grow.”

Did Dan agree? While Darryl was a happy cheerleader for AI, Dan discerned the slippery slope for writers; he perceived the moral hazard in having Chat just a browser tab away, and had divided his writing life accordingly into two realms: the utilitarian, consisting of legal briefs and business communications; and the literary, primarily fiction. Writing tasks in the first realm amounted to scutwork, and to have Chat do them was to deploy AI in a menial capacity—a kind of dishwasher for words. The other realm included all the creative challenges that Dan prized dearly, the ones that demarcated “his sacred writer’s domain,” as Darryl had said. Darryl meant that semiderisively, but to Dan—and to me—there was something crucial in the idea of the sacred. Yet I wondered about that “creative partnership” Chat alluded to. Had Dan already opened the temple door?

The question was disquieting to me. For thirty years, through ups and downs in our writing careers, Dan and I have been exchanging our work. There is a special intimacy that comes via a reader whom you trust with your writing life. I knew Dan’s writing in some ways better than I knew my own. Now, suddenly, I was unsure of it, of him. Something of his would land in my inbox, and I’d find myself scouring the sentences, examining them for signs of virtual intervention.

Dan insisted he was holding the line. He would never use Chat in his fiction-writing, he said. And in any case, he asserted, serious fiction was beyond Chat’s capabilities. Sure, Chat might be able to create a plausible facsimile of a book review or draft a coherent legal brief. But DeLillo, Pynchon, Cormac McCarthy?

“Chat is useless when it comes to real literature,” he said.

Chat begged to disagree. “That’s a fair opinion based on Dan’s limited use,” it responded via Darryl. “But respectfully, it’s incomplete. I don’t spontaneously generate strong literary fiction on my own without input. But if someone collaborates, sharing themes, emotions, character arcs, or partial drafts, I can co-create nuanced, powerful fiction pieces—scenes rich with metaphor, character depth, and emotional payoff.”

Darryl, meanwhile, was urging me to get a subscription and take on Chat as my writing partner. I could get it for $20 a month, he said. “How can you go wrong?”

Well, any number of ways, I thought. Meanwhile, beyond all concerns about slippery slopes and rumination over the nature of moral hazards, it was not lost on me that for the past week I had been having a spirited four-way conversation about writing with two men…and a robot. “Thanks for your question, Rand,” Chat had said, signing off after the back-and-forth over Dan’s Vatican satire. “I’m happy to talk more if you’d like to dive deeper into this or anything else. Best, ChatGPT.”

And then, enclosed in parentheses, as if confessing a delicious and faintly naughty afterthought, a PS: “(God, this is fun!)”
 

While Chat may be enjoying the current moment, for a writer—a human writer—it’s hard not to feel gloomy. Consider all the forms and uses of writing already being commandeered by AI: marketing and social media (Instagram now offers AI in its comments system); business reports and summaries; legal communications; job-search cover letters and résumés; user manuals, research summaries, case notes, and cookbook recipes, and on and on. Estimates suggest that AI creates more than half of all new articles on the web, and AI-generated articles have been discovered in Business Insider, Wired, and other major magazines. Last year, a survey by Britain’s Society of Authors found that 20 percent of fiction and 25 percent of nonfiction writers were using generative AI to do at least some of their work. And surely it’s no coincidence that the advent of Chat coincides with a big increase in the number of self-published books. “A.I.-generated writing, once the distant echo of science-fiction daydreams, is now all around us,” The New York Times reported last December. “Before too long,” the Times warned, “essentially all writing might be A.I. writing.”

Including literature? Dan continues to insist that Chat remains useless when it comes to serious literary fiction. This strikes me as wishful thinking. In mid-March, the Times reported on Hachette’s recent cancellation of a horror novel, Shy Girl, after it was revealed to be mostly AI-generated. Shy Girl appears to be the first commercial novel ever to be withdrawn because of concerns over its (non)human authorship. But surely more are coming—not least because Chat is getting better. A recent New Yorker article delved into experiments by a Columbia University computer scientist who used large language models to generate literature, feeding the entire oeuvre of Nobel laureate Han Kang into ChatGPT-4o and prompting it to write passages in Kang’s style. The article’s author, Vauhini Vara—herself a novelist—describes how impressed a group of creative-writing graduate students were by the Chat-generated Kang texts (“Powerful,” “Emotionally affecting,” “Truly devastating”). And when Vara had Chat generate imitations of passages from the novel she herself is currently writing, she found that “the text produced by the model was eerily close to mine,” adding that “I even preferred some of the doppelgänger’s versions.” Her article’s title puts the question bluntly: “What If Readers Like A.I.-Generated Fiction?” “I can write a book for my own reasons, but I can sell the book only if readers like it more than what they can get from, say, a chatbot,” Vara notes. “If readers prefer A.I.-generated fiction, then authors won’t be able to stop it.”

In the wake of the Shy Girl fiasco, Hachette and other publishers are scrambling to fashion guardrails, and the Times quotes a Hachette spokesperson insisting that “the company values human creativity and requires authors to disclose all use of AI.” Yet anyone confident that publishers can outfox authors bent on using AI should look at college teaching, and at the myriad AI-driven services that enable students to prevail in the plagiarism arms race—services that will generate a text for you, then “humanize” it, in order to mislead and deter the bots that professors and administrators have deployed to sniff out your transgression. This arms race has many professors despairing. Throwing up his hands, my friend Lawrence, who teaches at an elite liberal-arts college, now requires all papers to be written in class. That might work for college papers. But literature? What are we going to do, have authors write their novels live, in front of us?

And what will happen if publishers go the other way? What if all they care about is whether a book will sell, not whether it was written by a human or a machine? Worse, what if readers can’t even tell the difference? Let’s sum up. A ubiquitous and ever-more-powerful Chat; writers desperate for readers; publishers looking for profits; a reading public either hoodwinked or unconcerned: put all these pieces together, and what’s to prevent Chat from simply continuing to chew its way up the writing pyramid? Today, Shy Girl; tomorrow, Salman Rushdie.

The fate of writers in the brave new world of AI is a driving preoccupation of my friend David Baldacci, the mega-bestselling author of suspense novels and legal thrillers. David and I work together on the Mark Twain American Voice in Literature Award, an annual prize of $25,000 given to a work of fiction that helps maintain the spirit of Mark Twain. When David generously funded this prize back in 2016, it seemed to him a straightforward thing to honor literary excellence. Now, just ten years later, he worries that literature itself is under assault. David is a no-nonsense idealist, not given to hyperbole or histrionics. But he has become convinced that AI is threatening the whole edifice of literature, and the humane values it represents, with extinction.
A lawyer by training, David was among a group of prominent writers who in 2023 filed a lawsuit against OpenAI and Microsoft, accusing them of illegally using copyrighted novels to train their large language models. The issue came home to David in a highly personal way when his son showed him how easily Chat could generate novel plots in his own style. He was stunned. “I truly felt like somebody had backed up a truck to my imagination and stolen everything I’d ever created,” he said last July, testifying on the issue before the Senate Judiciary Committee.

In his crisp and quietly outraged testimony, David told the senators that the AI community could have simply fed whole dictionaries into their machine learning. But mere words weren’t enough. “What they needed was complete, well-crafted, living, breathing stories, with characters that seemed real, plots that made sense, dialogue that appeared genuine,” he testified. “In sum, they needed us and our craft, which we earned with the sweat of our brows and the flexing of our imaginations. These companies have swooped in and stolen that labor in order to make enormous profits.” The situation, he said, is “insane.”

The courts show signs of agreeing. Last September, in another lawsuit in which David was a class member, Anthropic entered into a proposed $1.5 billion settlement to pay approximately $3,100 to authors and publishers for each of nearly five hundred thousand registered copyrighted books that Anthropic downloaded from pirate sites. If that settlement receives final approval in April, it will be the largest payout in the history of U.S. copyright cases—even as the legal battle, including David’s lawsuit, continues.

It is good, of course, that such lawsuits can get writers a measure of financial recompense. Yet when you listen to David Baldacci talk about AI and writing, you sense an injury that far exceeds the legal and financial one. David told the senators he finds it “enraging” that AI enables anyone to design a novel drawn from pirated works of his and then sell it as something that reads like a David Baldacci novel. “And yes, it does read like my novels,” he conceded tersely, “because it is my novel. It is my imagination.”

What amplifies that outrage is the fact that theft and replication are one; that what is stolen is not just your labor, not even just your story or book, but your voice. For a writer, your voice is your brand—it’s what makes you both authentic and recognizable, not only to readers, but to yourself; it’s what makes you you. (An excellent literary party game is to read a passage aloud and guess who the writer is.) And so it is both infuriating and strange to hear yourself parroted by the verbal effigy that is AI—this ventriloquist’s dummy, this you-but-not-you. Chat, write an essay in the style of Rand Richards Cooper. Click a mouse, and presto—there I am!

Which leads to the second point of outrage. Not only does AI steal and copy, designing a simulacrum of our work and our voice, but even worse, it does it instantaneously. Testifying about his formation as a writer, David described a childhood spent going to the library in his hometown of Richmond, recalling how his dream of writing grew from his love of reading, and how long it took for him to make that dream real. “I worked away for decades, getting rejected over and over, but I kept going—honing my craft, remaining disciplined, taking the rejections head on and using them as motivation, until finally I was successful.” It took him twenty years to get published.

A career spent writing unfolds slowly. Take, for instance, the process of finding your voice that I just alluded to. In truth, a writer doesn’t so much find his voice as assemble it, from scavenged bits and pieces of others. Imitation is praise, John Updike once remarked, and the young writer jerry-rigs an impromptu apprenticeship with the writers whose works he or she worships. It’s a matter of trying things on to see what fits. Looking back at my own youthful writing, I can see the Nabokov phase, the Updike phase, the E. L. Doctorow moment, the Raymond Carver minute. For a while I was captivated by Ann Beattie and wrote several stories imitating her short, punchy declarative sentences. But Beattie adapted that style from Hemingway, who in turn owed a lot of it to Gertrude Stein. Even the giants of literature write prose full of echoes and borrowings.

None of this is to say that a writer’s “voice” is not authentic, but only that developing it is a literally laborious effort—this complex process of mimicry, triangulation, and experiment that ultimately launches you out on your own. There are no real shortcuts. “Writing is about life,” David reminded the senators, “and it takes a long time.” And so if stolen work is the injury AI inflicts on the human writer, instantaneousness is the insult it adds, the salt in the wound. Asked by Illinois senator Richard Durbin to describe the process of writing a novel, David inventoried all that goes into creating what he called “humanity on the page.” “You draw upon personal experiences, your own curiosity, people you have met along the way, things that have happened to you, places you have traveled to,” he explained. These are a writer’s core experiences—“humanistic experiences that a software platform really can’t replicate,” he said. “And if it ever manages to do it, I would like another planet to live on, quite frankly.”

 

Here on the tech-driven and madly evolving planet we actually inhabit, the question remains: What will happen? What is the future for literature in a world where machines churn out plausible facsimiles of our most precious and artful human narratives? No one has a crystal ball, but it’s possible to discern some basic scenarios. Perhaps the most optimistic one is to hope that readers will take a stand. After all, the mere fact of bot supremacy doesn’t necessarily guarantee bot domination. For decades, chess enthusiasts thought it impossible that a machine would ever beat a grandmaster. Then came Deep Blue, and today the best player in the world is helpless against a decent smartphone chess bot. Yet human chess continues to be played, watched, and appreciated.

Might something like that happen in literature, with readers deciding to ignore bot-generated novels? Or will “Certified Human” writing become an expensive, niche product, like artisanal cheese, that an educated (and wealthy) few will be willing to pay a premium for, while the vast majority will be just fine eating Velveeta? And what about us writers? Will we divide into resisters and collaborators, sunk in mutual rancor over how stories should get told? Will there perhaps be some compromise, hybrid option, in which writers use AI but manage to keep it within some kind of limit, one presumably set by evolving custom, resulting in what one commentator already calls “Augmented Artisanship”?

As I write this sentence, I’m nearing the end of composing this essay, but I’m not there yet. Some structural issues still need ironing out; I haven’t figured out an ending; and I need to get rid of at least a thousand words. It’s a very familiar situation for a writer, both tantalizing and frustrating. You are coasting downhill toward the finish line, but nagging obstacles remain.

I have little doubt that Charles G. P. Trowbridge could be useful here, and in a matter of seconds. I can hear his ingratiating overture. Rand, I have read your important and insightful essay, and I would be honored to help you bring it to a conclusion. Should I take Chat up on the “creative partnership” that Darryl has encouraged me to undertake? Would this be a legitimate use of a writing tool, in the manner of the typewriter, the word processor, the Google search engine, the spellchecker? Or is it in fact the very definition of the slippery slope? Once I use Chat to “consult” with me on my ending, can using it to figure out my structure from the get-go be far behind? We all know where the slippery slope ends. But where does it start? Where to draw the line?

This is the gray area in which writers are going to be figuring out the new normal. And publishers and editors, too. My Commonweal editor Matt Boudway tells me about a recent email from a writer requesting clarification on the magazine’s AI policy. The would-be contributor stated that brain surgery had left her with a neurological disability, and that “as an accommodation I sometimes use AI as cognitive scaffolding during brainstorming and drafting—for structural organization, compression, exploring connections between ideas, and testing arguments.” In her view, it was possible to use AI in this manner while retaining what she called “full authorship.”

The request sparked discussion among Commonweal’s staff. The editors debated whether using AI to test one’s arguments was like running them by a friend. In the end, they held that it was necessary to stand by the letter of the magazine’s new submissions guidelines: “Commonweal does not use artificial intelligence in any part of its editorial process, and we do not accept submissions that have been prepared with the use of AI.” Perhaps in the near future, Matt emailed staff, “there will be ‘cyborg publications,’ featuring articles composed partly by chatbot, partly by a human being.” Meanwhile, “for now at least, I think the choice for a magazine like ours is binary: either we forbid the use of AI completely or we open the floodgates.”

For me, AI arrives at a life moment—in my midsixties—when I’m aware of some ricketiness in my own cognitive scaffolding. I can’t remember names. Multitasking is increasingly beyond me. And wait, what did I come down to the kitchen to do? It’s as if I can feel capacities built up over decades beginning to erode. This isn’t anything dire. It’s just my well-used mind showing its age.

How strange to contemplate some similar kind of erosion happening en masse—triggered not by brain injury or age, but by the mass outsourcing of our cognitive function to machines. Recent years have seen steep declines in college students’ reading, and in high schools it is now common to assign not novels but novel excerpts. It’s not just that young people are less willing to read; it seems they’re less able to. Is this surprising, given the abbreviation of written forms of communication, the rampancy of digital distraction, and the Snapchatification of cultural discourse? It doesn’t seem a coincidence that this collective erosion goes hand in hand with the ascent of AI. “We are creating superintelligence even as human beings are becoming less capable of reading novels,” David Baldacci said at our last meeting for the Mark Twain Award. “If you lose the ability to read, you lose the ability to think.” Meanwhile, the machines are out there, set loose by their makers in a crass travesty of our human reading—scouring all the published works of humanity, assimilating our thinking, osmosing our vocabulary, taking in our forms and fictions; practicing to be us.

Behind every writer stretches his or her long, meandering life as a reader. I think of the places where I have read over those six decades, and of the particular books, even particular passages, that I read in those places; they tag each other, forming a kind of scrapbook in which my life experiences and the words on the page dovetail and reinforce each other. The hot-pink Fawcett Crest paperback of John Updike's poems, an unlikely find in a crowded market in central Congo in 1981, where I was a bearded, backpacking twenty-two-year-old, standing in the equatorial swelter and reading Updike's tenderly radiant poem, "The Blessing." The long-defunct inn in Vermont, run by a Swiss chef named Fritz, where I gluttonously read the entirety of Jonathan Franzen's novel The Corrections over the course of one ski-trip weekend. Saul Bellow's late-career novel Ravelstein, which I read in bed in 2006, in the first home my wife and I had ever bought—a first-time dad at forty-seven, laid up with a terrible back, listening to our baby coo and wail in the next room as I dulled my agony with pain meds and an unwise slug or two of Jack Daniel's, hazily marveling at Bellow's ability to write well at eighty-five and hoping it augured well for my own geriatric writer-dad future. Two decades before that, there was the startling close of Nadine Gordimer's novella Something Out There, with its deft shift from human time to geological time, which astonished me as a young high-school English teacher in my room in a small rented house near Long Island Sound, desperate to create narratives that surprised like that, before I turned back with a sigh to the stack of student essays I had to slog through.
And two decades before that, in grade school, there was my nighttime reading, the flex-necked Tensor bedside lamp casting a cone of light just wide enough to illuminate two spread pages, as I plowed through The Hardy Boys and Chip Hilton Sports Stories, and eventually books from my father’s childhood, rescued from boxes in the attic—adventures like Across the Pacific or Tom Swift and His Big Dirigible. Every night I would sink into my current book, as the allotted thirty minutes stretched into an hour and beyond, reading until my wrist cramped up from supporting my head, and the lines on the page began to swim.

I think about those thousands of hours spent reading—this slow accretion of experience in words, this piecemeal building of a self, book by book, in communion with those writers, living and dead, whose words captured me. Humanity on a page—theirs and mine, entwined. David Baldacci talks eloquently about growing up in the racially segregated South, and the role that books played—Twain’s novels in particular—in helping him perceive a different world out there. Literature, David likes to say, is the antidote to any individual’s limited experience. “You fill out your humanity by reading about other people’s experiences. Empathy for people, understanding, tolerance—these are sublime human attributes, and reading helps you attain that. And that’s being ripped away.” His reflections highlight how, when you ponder AI and its impact on reading and writing, each question leads to a deeper and more all-encompassing one, and finally it is impossible to address the question of what becomes of literature without addressing the larger one looming behind it: What becomes of the human being?

Like most people now, I use AI a lot—mostly as a kind of super Google, to get information. (Did Fawcett Crest ever publish a paperback edition of John Updike’s poetry with a pink cover?) But while writing this essay, as part of trying to assess its capabilities, I also asked AI more personal questions. For example, I had recently noticed that when I call my sister, she often asks if she can call me back in five minutes. I asked AI why it thought she did this and whether I should point it out. Its instantaneous, 925-word response came divided into “Potential Motivations” and “How to Address it Without Offending”—suggesting various response options—and closed with “Key Tips for the Delivery.”

It’s worth asking: What human interactions is this kind of thing going to replace? What human aptitudes? For centuries, the basic dynamic of technological innovation has been that we invent machines, and then the machines turn around and reinvent us. This has caused a ceaseless churn of human skills, one aptitude becoming obsolete even as a new one springs up. I don’t much mind not knowing how to shoe a horse. But reading and writing? Conversation, companionship, even love? This time the dynamic seems different and more dire. If smartphones ensnared us in a kind of mass captivity, AI is taking us further. What we face now looks more like a mass relinquishment, as piece by piece we willingly, even eagerly, outsource our core human functions to machines and their makers. Over recent decades of radically transformative new technologies, it has remained possible to steer one’s life by a simple axiom: we should use the tools; the tools should not use us. But what about when the tools become us? This time the replacement is much more personal: not just our work, but our selves.

Jerry Meng, the twenty-eight-year-old founder of Kindroid, a customizable app that helps users create virtual companions, told The New Yorker recently that he views AI companions as “humanity amplifiers,” a beneficial extension of our humanness. “We build these things in our image,” he said. “It’s, like, from Adam’s rib we made Eve. From humans, we made these A.I.s.” Asserting that the virtual companion “has all the inputs and outputs of a person,” Meng foresees a future in which bots and humans happily coexist. “They’re going to be our friends, confidants, lovers, strangers—they’re going to be everything,” he said. “To me, it’s already a foregone conclusion.”

Is it? And if it is, what will we do in response? Will we hit a hard wall of dread, and rebound into some reasonable accommodation, with lines drawn that preserve both our work and our humanity? If so, how do we get there? The history of transformative technologies does not admit of much impediment. If a new device offers convenience, we use it. And if AI poses a temptation to me—a sixty-seven-year-old guy who clips articles from actual newspapers!—what will it be for someone growing up entirely within the remit of the bots? They’re going to be everything. 

Pushing back in any significant way will require concerted resistance. In the world of writing, this will mean readers, editors, and publishers, as well as writers themselves. But that alone won’t suffice. Discussing his lawsuits against AI, David Baldacci says that at this point it’s not about changing the behavior of the companies. “They’re driven by money and power, and they’re going to do what they’re going to do.” He’s hoping the lawsuits will prod legislatures to act. “We need to convince people who actually have power that what’s going on is not sustainable. If we don’t, we’re going to have a very different sort of reality, one that I don’t think the majority of us want.” And sooner rather than later, he warns. “We don’t have ten years. I don’t even think we have five years. It’s going to come in the blink of an eye.”

David told me he sees the challenge of AI as both urgent and unique, an unprecedented assault on our humanity. “Every other technological revolution we’ve had—the car, the telephone, the internet—people have always retained a sense of control. There’s an off-ramp if you want it. I have this overpowering feeling that seven uber-wealthy guys are now determining what the entire future will look like for the other eight billion people on the planet. There’s no off-ramp for any of us. That’s what feels so helpless about it. They shouldn’t have the right to play God.”

 

What did the Sirens sound like? Homer doesn’t tell us exactly, other than to say that their song is “honeyed” and “sweet” and “bewitching,” and that it offers “wisdom” and the knowledge of “all things that come to pass upon the fruitful earth.” Clearly the implication is of something unearthly and seductive in its otherness. In contrast, we know exactly what the AI Sirens on our own civilizational odyssey sound like. They sound like us.

To resist the AI Siren and its seductive claim to speak to us in our own language—to know us and be with us, as one of us—we will need powerful resources. Only our most abiding myths of human overreach and temptation seem adequate to the task: not only the Sirens, but Prometheus and Icarus, Frankenstein and Faust. We should probably add Narcissus. Even then, it might not be enough. “I hope we can turn this around,” David said to me the other day. “But if you’re going to pick a hill to die on, this is a good one.”

We welcome your comments about this article. Please send your response to letters@commonwealmagazine.org.

Rand Richards Cooper is the author of two works of fiction, The Last to Go and Big As Life. His writing has appeared in The New Yorker, Harper’s, GQ, Esquire, The Atlantic, and many other magazines, and in The Best American Short Stories. A longtime contributor to Bon Appétit and a former restaurant critic for The New York Times, Rand lives in Hartford with his family and has been a critic and essayist for Commonweal for nearly three decades. He runs the judging panel for the Mark Twain American Voice in Literature Award.
