One of the things old books often do is remind me that most seemingly fresh hells are in fact quite ripe. When I pulled Douglas Adams’s Dirk Gently’s Holistic Detective Agency off my shelf, I didn’t know it would revolve in part around an ’80s-era programmer at a fake software company called WayForward Technologies. And I didn’t expect to find, about a third of the way through, in that character’s words, in a description of a fictional program, such a resonant passage for our AI-ridden times:

“There have already been several programs written that help you to arrive at decisions by properly ordering and analyzing all the relevant facts so that they then point naturally toward the right decision. The drawback with these is that the decision which all the properly ordered and analyzed facts point to is not necessarily the one you want. ... Well, Gordon’s great insight was to design a program which allowed you to specify in advance what decision you wished it to reach, and only then to give it all the facts. The program’s task, which it was able to accomplish with consummate ease, was simply to construct a plausible series of logical-sounding steps to connect the premises with the conclusion. ... And I have to say that it worked brilliantly. Gordon was able to buy himself a Porsche almost immediately despite being completely broke and a hopeless driver. Even his bank manager was unable to find fault with his reasoning. Even when Gordon wrote it off three weeks later. ... The entire project was bought up, lock, stock and barrel, by the Pentagon. The deal put WayForward on a very sound financial foundation. Its moral foundation, on the other hand, is not something I would want to trust my weight to. I’ve recently been analyzing a lot of the arguments put forward in favor of the Star Wars project, and if you know what you’re looking for, the pattern of the algorithms is very clear.”

Discussions of generative AI tend to make me turn and walk in the other direction as fast as possible. Interesting, fresh, or nuanced takes are few and far between, and some people seem to want to argue about it before I’ve even offered an opinion.

I’m personally bearish on it. As a software developer, I’ve tried LLMs and found the better-trained ones occasionally helpful for working out well-trod coding patterns faster, or for getting a quicker understanding of what certain files are doing. But I find generative AI essentially antithetical to, and incapable of, real creative endeavors; I find its energy consumption concerning; and it’s been pushed so obscenely and aggressively (Zoom meetings that could already have been emails do not also need error-riddled AI-written email summaries), with so few safeguards, that I have little interest in championing the small ways I’ve found it potentially valuable.

But the Adams passage is a torment-nexus-esque warning that pinpoints a problem not just with the technology but with its users. Plenty of people love being told what they want to hear, and in this respect LLMs will deliver. They’re sycophantic machines built to produce predictive text at high speed, and they can easily spin themselves in whatever direction they’re led. If an anxious user leans a bit toward concerns of cancer in asking ChatGPT medical questions, they can get it to predict how many weeks they’ve got left with little prodding. If they go to it for therapy with suicidal ideation, it might simply look for ways to help them carry those thoughts out. If a troll goes to Grok with racist conspiracies, it will affirm them.

It’s a technology that offers only the illusion of logical reasoning, for whatever argument one might want it to make—or even hint at wanting it to make. If you go to it for opinions, for direction, for your decision-making, you’re a sucker.