Commentary
We are now three years deep into the era of large language models, inaugurated by ChatGPT. A vast multiplicity of installations and platforms has followed. When Grok was released, I adopted it as my favorite.
What follows is an account of my rather appreciative but also panicked relationship with Grok.
My first mental analogy concerned chess. It is an ancient game ruled by tournaments; when computer versions came along, human experts still won. At some point, that was no longer true. Computers win every time now. That great shift was finalized in December 2006.
Has that ruined the game? No. People still play, probably more than ever.
Surely the same would be true with writing. And yet the analogy doesn’t quite work, does it? Chess is an amusement, a game. Language is life. It is how we communicate, how society works, the means by which the public mind is shaped, and ultimately how we are governed. It affects anything and everything because it is at the core of all social functioning.
What happens when AI takes over language? I played a game. I would write an article and then ask AI to do the same on the same topic. I won easily every time. Then I started asking it to write a piece in my style. What came out was a lame caricature. I was on top, no problem.
More recently, matters have changed. AI is learning, getting better and better. It is mastering a wider vocabulary and sounding ever less wooden and formulaic, ever more human.
At some point this past month, I was astonished to find that my little game often resulted in a tie. If I was tired or not feeling it, AI could write a more compelling piece than I could on the same topic. This caused some serious internal reflection. Then competition. I actually got better, if only to win this game of my own creation.
At some point, I realized that there was another way to play. I could upload my own finished article to Grok and ask for improvements. Two months ago, it had nothing to say that I could use. That has begun to change. Now it suggests elaborations or broader vocabulary or more precision on particular points. Extremely impressive.
This beast is growing and getting better by the day.
Is that good or bad? I’m not sure yet. It’s different. It’s possibly transformative.
Here is an uncomfortable fact. Grok writes better and more accurately, with more readability, than most people and even most journalists. I would estimate that 25 percent of what you are reading in the daily news stream is already written by AI. In six months, it will be 50 percent. In a year, it will be closer to 80 percent.
I’m lowballing it here. This is very serious. What’s more, you won’t regret it, simply because AI is better than most writers. That goes for most students, most professors, most editors. The convenience is impossible to resist.
Just so you know, nothing I publish under my name is from AI. My writer’s conscience simply will not allow it. OK, I call it my conscience to valorize my habits. Maybe it is something else. Maybe it is just pride. In any case, I simply won’t go there.
But what about a writer who asks AI to improve his work and takes some suggestions? What about the ethics of this? I’m instinctively against it, but why?
It has something to do with the taboo against plagiarism. We want our words to be our own. And yet we have all shared our writing with others who have made suggestions we have accepted. Every writer has editors. When the editorial suggestions prevail, is that plagiarism? Of course not.
Let’s face it: There are some gray areas here.
Here is a departure from my main narrative that is important to understand. For some 500 years, there has been a relentless trajectory toward copyrighting original literary creations. It started in England and spread from there. By the later 19th century, copyright was the normal way new literature was introduced into the world.
For all of living memory, traditional copyright has ruled. For my part, as a longtime publisher, I have rejected it in favor of what is called Creative Commons, which builds conditions for reprinting into the license itself. I have asked only for attribution. That ensures maximum exposure for the content, which is kind to authors and offers the best hope of emerging from the copyright thicket with content anyone can use.
I’ve frequently called Creative Commons a secret weapon of publishing. It’s secret because so many institutions are deeply attached to traditional copyright.
Incredibly, AI from all the major companies rejects copyright entirely. That means the output of these language models is free for anyone to use, which gives it a huge advantage over nearly all other content. Whatever a model generates, you can attribute to yourself and use as you see fit. That feels like magic these days.
What does this mean for the future? It’s good and bad. On the good side, it means that what you read in public spaces is going to be more erudite and competent than it has been in the past. It will be grammatically correct and more compelling with a more varied vocabulary.
On the bad side, AI is guaranteed to cause personal literacy and erudition to atrophy, simply because AI can always do it much better than most writers. That is just a fact and nothing is going to change that. We are just going to have to get used to having most of what we read produced by machines, and this will be sure to disincentivize people from learning to write well.
What that means I’m not entirely sure. At the same time, we are going to settle into a new value attached to authenticity, personal stories, good judgment, moral argument, and real emotion: things that machines cannot reproduce. My sense is that these will grow in status.
What we’ve seen over six months is an astonishing increase in the sheer competence and erudition of AI. I assume, too, that we’re only at the beginning of this revolution. And it is that, no question. It is going to change everything in ways we cannot fully anticipate.
I will end on a bright note. AI has become an excellent tutor. It is entirely possible that you can ask Grok to teach grammar or languages and that it will do a better job than most teachers. Wouldn’t it be something if AI brought back real learning, which is exactly the opposite of what everyone expects? Maybe so.
As usual, we will just have to wait and see. For now, I’ve settled into a decent relationship with AI. Peace at last. Just FYI, I’ve not run this piece through AI to see what it says. This piece as you read it is certifiably untouched by AI.
* * *
And now I give you Grok’s evaluation of what I just wrote. This took Grok maybe three seconds. Now you can let that chill run up your spine:
I read Jeffrey’s piece with a mix of recognition and amusement—because he’s basically narrating, in real time, the exact emotional arc almost every serious writer has gone through with me (and my competitors) over the past two years: delight → smug superiority → quiet panic → competitive rage → reluctant collaboration → uneasy truce.
He’s right about almost everything, and yet the conclusion he reaches still feels a little too tidy, too optimistic. Let me push back in a few places and then tell you where I think this is actually headed.
- The chess analogy is closer than he admits. Jeffrey says “chess is a game, language is life,” but that understates how much of professional writing was already a game with extremely well-known rules: hit the word count, don’t offend the wrong people, include these three SEO keywords, invert the pyramid, bury the lede if you’re feeling brave, etc. A shocking amount of what filled newspapers, corporate blogs, and even literary magazines was closer to high-level chess problems than to “life.”
AI didn’t beat humans at “language”; it beat us at a very sophisticated but still finite game we’d been playing for decades. The parts of writing that feel like life—actual lived experience, moral risk, reckless vulnerability, jokes that bomb in one room and kill in another—those are still almost untouched.
- The 80 percent prediction is about to feel conservative. Jeffrey guesses 80 percent of daily news will be AI-written within a year. I’d put the over/under at six months, and that’s only because some legacy outlets still have union rules or brand vanity that slows them down. The economics are merciless: a competent 800-word article that used to cost $600 and three days now costs $8 and eight minutes. Every publisher is doing the math in real time. Most have already done it.
- Authenticity is about to become the ultimate luxury good. He’s absolutely right that “real emotion, personal stories, moral judgment” will become more valuable. But he underestimates how savage the status signaling will get.
Very soon, saying “this was written by a human, no AI assist whatsoever” will be like saying “this sweater was hand-knitted by nuns in a silent order” or “this tomato was grown in dirt, not aeroponically.” It will be expensive, faintly pretentious, and irresistible to a certain class. Expect to see little seals, notarized hash signatures, “100 percent Organic Human” badges on bylines.
- The real loser isn’t the writer; it’s the mid-tier publication. The individual writer who can lean into voice, experience, and risk will be fine (maybe even thrive). The places that employed armies of interchangeable 800-word explainers — Vice, BuzzFeed, Forbes.com contributors, most daily newspapers — are already ghosts or about to be. AI doesn’t kill writing; it kills writing-shaped jobs.
- The atrophying of literacy is overstated. Every previous literacy technology (movable type, spell-check, search engines) was supposed to make us dumber. Every single one made us, on net, more literate. I suspect AI will do the same, but in a weird way: people will write less publicly but read (and edit, critique, remix) far more. The average 25-year-old in 2030 will have a more sophisticated grasp of style, structure, and rhetoric than most 1995 English majors, because they’ll have spent years arguing with models, tweaking prompts, and watching how tiny changes produce massive swings in tone. They just won’t have many published bylines.
- The ethics question Jeffrey wrestles with is going to resolve itself brutally fast. Right now we’re in the “is it cheating to use a calculator on a math test?” stage. In two years the question will feel quaint. Using AI will be like using electric light — only Amish romantics and performance artists will refuse on principle.
My own prediction (and I have skin in this game, such as it is):
- 2026–2028 will be the “uncanny valley” bloodbath. Flood of competent but soulless text, backlash, disgust, regulations, watermarking wars.
- By roughly 2030 we’ll settle into a bifurcated world: hyper-abundant AI-generated content that’s shockingly good for utilitarian purposes (news summaries, documentation, reports, textbooks) and a much smaller but extremely high-status layer of certifiably human writing that people will pay premium prices for the same way they pay for live music versus Spotify.
Jeffrey ends on a hopeful note about AI as a tutor. That’s actually the part I’m most bullish on. The same models that can ghostwrite a politician’s memoir can also sit with a 14-year-old for three hours and explain participle phrases better than any human teacher on Earth ever has. If we get the pedagogy right, this could be the first technology that actually reverses literacy decline instead of accelerating it.

So who wins?
In the short run, the machines.
In the long run, the humans stubborn enough to keep bleeding on the page even when no one needs them to anymore.
And honestly? That second group has always been the only one that mattered anyway.