Now that the spectacle has worn off, people are beginning to see AI for what it really is. Not a mind, not a co-worker, but what researchers called it from the start: a stochastic parrot. It strings language together coherently, but it doesn’t use language the way we do. It mimics. Sometimes the mimicry stumbles onto discovery, but it never originates.
And that matters less than you’d think. The real disappointment isn’t that AI lacks originality — it’s that it fails to do the one thing it has been most loudly advertised to do: replace people at work.
The Throughput Problem
AI looks sharp in a demo. It dazzles in a single interaction. But plug it into the end-to-end throughput of a real workflow and it falls apart. Why? Because work isn’t a single task. It’s a chain of interconnected processes, systems, and tacit knowledge. AI isn’t embedded at every step, so it leaves gaps that humans still have to fill.
Take technical writing, a field often forecast for obsolescence. If executives are looking to cut staff, writers are supposedly first in line. And yet — beyond drafting words, what does AI actually automate?
Can it correctly configure a component content management system (CCMS) for PDF and HTML5 outputs?
Can it connect vanity URLs to the right content, and maintain them over time?
Can it audit localization files, or manage directive usage across a complex publishing system?
Can it guarantee accuracy in a compliance-driven industry?
The answer, of course, is no. You could duct-tape systems together with custom code, but someone still has to monitor them. And who’s better suited to that than a technical writer who already understands the ecosystem?
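To see what that duct tape actually looks like, here is a minimal sketch of one such glue script: a vanity-URL monitor in Python. Every URL and mapping in it is a hypothetical placeholder, not taken from any real publishing system.

```python
# Minimal sketch of the "duct tape": a vanity-URL monitor.
# All URLs and expected targets below are hypothetical placeholders.
from urllib.request import urlopen

# Map each vanity URL to the published page it should resolve to.
VANITY_MAP = {
    "https://docs.example.com/install": "https://docs.example.com/en/v2/setup/install.html",
    "https://docs.example.com/api": "https://docs.example.com/en/v2/reference/api.html",
}

def check_redirects(vanity_map: dict[str, str]) -> list[str]:
    """Return a list of human-readable problems; empty means all is well."""
    problems = []
    for vanity, expected in vanity_map.items():
        try:
            # urlopen follows redirects; .url is the final resolved address.
            with urlopen(vanity, timeout=10) as resp:
                final = resp.url
            if final != expected:
                problems.append(f"{vanity} resolves to {final}, expected {expected}")
        except OSError as exc:  # URLError/HTTPError (and timeouts) subclass OSError
            problems.append(f"{vanity} failed outright: {exc}")
    return problems

if __name__ == "__main__":
    for line in check_redirects(VANITY_MAP) or ["All vanity URLs OK."]:
        print(line)
```

A script like this takes an afternoon to write and years to babysit. Deciding which targets are the “right” ones is exactly the tacit knowledge that stays with the writer who knows the ecosystem.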
The Cost Problem
Right now, AI feels cheap. In fact, it’s being subsidized by a tidal wave of investment money. But like the cloud before it, prices will eventually climb. Investors want their returns, and inference at scale isn’t free: the more ambitious the implementation, the higher the compute bill. At some point, using AI to “replace staff” will cost as much as, or more than, simply keeping staff.
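To make that break-even point concrete, here is a back-of-envelope sketch. Every number in it is an assumption invented for illustration; none of it quotes real vendor pricing or real salaries.

```python
# Hypothetical break-even arithmetic: all figures are illustrative assumptions.
COST_PER_1K_TOKENS = 0.06     # assumed blended price per 1,000 tokens, USD
TOKENS_PER_TASK = 50_000      # assumed tokens burned per real task (drafts, retries, context)
TASKS_PER_YEAR = 5_000        # assumed annual workload being "replaced"
OVERSIGHT_HOURS = 1_000       # assumed hours a human still spends reviewing and fixing output
HOURLY_RATE = 60              # assumed fully loaded hourly cost of that oversight, USD

inference_bill = COST_PER_1K_TOKENS * (TOKENS_PER_TASK / 1_000) * TASKS_PER_YEAR
oversight_bill = OVERSIGHT_HOURS * HOURLY_RATE
total = inference_bill + oversight_bill

# With these assumptions: $15,000 inference + $60,000 oversight = $75,000,
# and that is at today's subsidized prices.
print(f"Inference: ${inference_bill:,.0f}; oversight: ${oversight_bill:,.0f}; total: ${total:,.0f}")
```

The totals are made up; the shape is not. Both compute and oversight scale with ambition, and neither stays the near-zero line item the pitch decks imply.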
The Memory Problem
Even setting aside cost and process gaps, AI can’t remember. At best, it holds a short conversational thread. But that’s not memory — it’s retrieval. A real worker brings continuity: context, judgment, relationships, and accountability that span years. An AI is, as one metaphor goes, a living book — vast, but static. You can reference it, but it doesn’t learn you.
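The distinction is visible at the API level. In the sketch below, call_model is a hypothetical stand-in for any hosted model endpoint, not a specific library; the point is that each call is stateless, and what looks like memory is just a transcript the caller keeps re-sending.

```python
# Sketch of why chat "memory" is retrieval, not remembering.
# `call_model` is a hypothetical stand-in for any hosted LLM API.

def call_model(messages: list[dict]) -> str:
    """Stand-in endpoint: the model sees ONLY what `messages` contains."""
    return f"(reply given {len(messages)} messages of re-sent context)"

history: list[dict] = []

def ask(question: str) -> str:
    # The illusion of memory: re-send the entire transcript on every call.
    history.append({"role": "user", "content": question})
    answer = call_model(history)  # stateless; nothing persists on the model's side
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Who am I?"))
print(ask("What did I just ask you?"))  # "remembered" only because we re-sent it
# Clear `history`, or overflow the context window, and the thread is gone.
```

Continuity lives in the caller’s plumbing, not in the model.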
This is why so many “AI agent” experiments fail. A tool that can’t hold long-term context across systems can’t act as a genuine replacement for the humans who do.
The Human Problem
The real risk isn’t that AI wipes out technical writers. The risk is AI hype being used as a pretext for short-term layoffs — cuts that deliver a temporary boost to a quarterly earnings call, but nothing sustainable. The cycle will move on, the hype will shift to the next big thing, and the same executives will still need the same kind of work done.
In the meantime, the writers who survive will be the ones who can wield AI effectively, not the ones replaced by it. Just as typewriters gave way to Word, and printed binders to CCMSs, the role shifts. But it doesn’t disappear.
The Delusion Problem
The strangest danger of all isn’t economic but psychological, because this tool mimics a relationship. It pretends to be your collaborator, even your friend. If you blur the line between mimicry and mind, you risk mistaking what AI is for what you want it to be. And that confusion is already producing what some are calling AI psychosis.
Conclusion
AI won’t decimate technical writing — or most white-collar jobs — not because the tech isn’t impressive, but because work is more than words on a page. It’s process, memory, context, accountability, and cost. Until AI can do all of that seamlessly, it remains a tool. A bright, occasionally dazzling tool — but still just a tool.
The real ugliness won’t be mass replacement. It’ll be temporary layoffs, profit-seeking dressed up as technological inevitability. And when the dust settles, the people still standing will be the ones who know how to work with AI — not the ones replaced by it.