What if Readers Like A.I.-Generated Fiction?
Briefly

"Chakrabarty and his collaborator Paramveer Dhillon, a University of Michigan professor, wondered if the A.I. output would improve if they fine-tuned GPT-4o, the model that powered ChatGPT at the time, by feeding it an author's entire œuvre. One night in April, Chakrabarty was in Tokyo on vacation, jet-lagged and bored in his hotel, and he pushed almost all of Han's translated writings into the model. But he purposely left out a passage from "The White Book." It depicts the death of the narrator's older sibling, two hours after being born."
"In that grim scene, Han describes how the narrator's mother reacts: "For God's sake don't die, she muttered in a thin voice, over and over like a mantra." Before the fine-tuning, the A.I. renditions had been overwrought: " 'Live,' she murmured, a chant that carried the weight of her being." But now, when Chakrabarty fed the fine-tuned model the summary, the language seemed to bloom: "She held the baby to her breast and murmured, Live, please live. Go on living and become my son.""
The researcher, Tuhin Chakrabarty, tested large language models by providing sample passages from Han Kang's translated work and asking the models to generate a withheld scene in the same style. In blind tests, creative-writing graduate students judged the initial LLM outputs overwrought and inferior to human imitations. But after Chakrabarty fine-tuned GPT-4o on nearly all of Han's translated writing, withholding a passage from "The White Book," the model produced a more restrained, emotionally precise rendition of that scene, with convincing domestic detail and emotional register. The experiment suggests that fine-tuning on an author's full corpus can substantially improve the stylistic fidelity and emotional subtlety of model-generated prose.
Read at The New Yorker