LLMs Are Like Politicians—Confidently Wrong

How to Use AIs with a Critical Mind … Six Techniques for Skeptics

It took me four hours of work to lovingly draft, write, edit, refine, and publish this article. I cut through the hype and doom to bring you real insights on Agile, AI, and the Future of Work. Subscribe to my substack and don’t miss a single post.


If there’s one thing you’ll never hear a politician say, it’s “I don’t know.” The electorate wants answers, not ambiguity. (And unfortunately, most voters are more than happy to inhale any confident nonsense that confirms their biases.) For politicians, admitting ignorance is career suicide. Instead, they radiate unwarranted certainty like it's an expensive cologne.

Does that sound familiar? Welcome to the world of large language models.

Don’t Trust the AIs

I’ve been trying out Cal AI, a popular app that promises to help me track my food intake. I figured, hey, maybe staring at numbers all day will guilt me into eating less chocolate and fewer pastries. (So far, it’s working.) The app calculated my recommended daily intake goals: 1530 kcal, 120 grams of carbs, and other nutritional wizardry. But something smelled off—and it wasn’t my oat milk latte.

Being the skeptical nerd I am, I consulted my trusty cabal of AI advisors: Gemini, ChatGPT, and Claude. Think of it as getting a second, third, and fourth opinion—except none of these experts went to med school. Their suggestions for daily calories ranged from 1900 to 2050, and their fat and carb targets were all over the place. There was consensus around my protein target, but it was significantly lower than the goal set by the Cal AI app.

The message was clear: don’t trust any single AI. Especially not when your health or future depends on it. I glanced at the spread of suggestions, used common sense, and set my own targets, like the sentient rebel I am.

Bullshittification

Gary Marcus recently reminded us that LLMs still refuse to say, “I don’t know.” Even after several years of technological progress, they remain glorified improvisers—spouting whatever sounds good, whether or not it’s true.

This isn’t “hallucinating.” This is bullshitting. LLMs don’t malfunction—they perform. They’re not broken; they’re just too eager to please, like politicians at a town hall: full of conviction, light on facts, and ready to charm their way into your confidence. The phrase “I have no idea” seems to be hard-coded out of their training data.

And sadly, just like voters, users believe what they want to hear.

But not me.

“I Don’t Know.”

Last week at the Agile Meets Architecture conference in Berlin, someone told me my keynote was “refreshingly authentic.” I took that as a compliment. “How will Kanban boards evolve in the age of AI?” “I don’t know.” “Is training LLMs on copyrighted content legal?” “I don’t know.” “Is Elon Musk a genius or a moron?” “Still undecided.”

I say “I don’t know” because I don’t want to be caught saying something that turns out to be false later. I’m not a politician. If anything, I’m annoyingly anti-politician, and some people seem to appreciate that. I get paid to say what I think, and that includes, “I don’t know.”

LLMs don’t say “I don’t know.” Authentic humans do. And it’s one of our few remaining advantages.

This weekend in Brussels, at a Wemanity event, someone else asked me, “What’s the number one skill for future leaders?” I immediately blurted out, “Critical thinking.” Ironically, it just flew out of my mouth, as if I were very sure about it. To be fair, it was top of mind. But in hindsight, I’m not sure if it’s really number one. Top three? Definitely. Number one? Eh, I don’t know.

Six Techniques for Critical Thinkers

Here’s how I stay sane using LLMs: I assume they’re full of crap and work backward from there.

1. The Sequential Skeptic

When writing an article, I start with Claude to help me shape the outline from notes. Then I draft the piece myself (because I still believe in the outdated art of writing). Then I toss it to ChatGPT for style suggestions. Then Gemini critiques it. Then it’s back to Claude again for a final review. It’s like a daisy chain of second opinions. None of them are fully trustworthy, but together, they lift me up.
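
If you want to automate part of that daisy chain, here is a minimal sketch in Python. It assumes you have Anthropic, OpenAI, and Google API keys in the usual environment variables; the model names and the article_draft.md filename are illustrative placeholders, not my exact setup.

```python
import os
import anthropic
import google.generativeai as genai
from openai import OpenAI

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def ask_chatgpt(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_gemini(prompt: str) -> str:
    model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name
    return model.generate_content(prompt).text

# The human-written draft goes in; each reviewer also sees the previous feedback.
draft = open("article_draft.md").read()
feedback = "none yet"
for name, reviewer in [("ChatGPT", ask_chatgpt), ("Gemini", ask_gemini), ("Claude", ask_claude)]:
    feedback = reviewer(
        f"Review this draft for style, clarity, and weak arguments.\n\n"
        f"DRAFT:\n{draft}\n\nEARLIER FEEDBACK (may be wrong):\n{feedback}"
    )
    print(f"--- {name} ---\n{feedback}\n")
```

The writing stays mine; the loop only collects opinions, in sequence, so each reviewer can push back on the one before it.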

2. The Parallel Skeptic

Sometimes, I send the same prompt to three or four AIs at once, like with the food-tracking example. It’s a bit like polling three shady consultants without letting them talk to each other. When their answers disagree, I assume the truth is somewhere in the middle. You can compare it to double-entry bookkeeping. Or triple-entry, in this case. Though not foolproof, it definitely brings down the error rate.
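
Here is a rough sketch of that polling approach. It reuses the ask_claude, ask_chatgpt, and ask_gemini helpers from the sequential sketch above, and the nutrition prompt (with made-up numbers) is only there for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Same helpers as in the sequential sketch: ask_claude, ask_chatgpt, ask_gemini.
advisors = {"Claude": ask_claude, "ChatGPT": ask_chatgpt, "Gemini": ask_gemini}

prompt = (
    "Estimate a sensible daily calorie and protein target for a moderately active "
    "50-year-old male, 80 kg, 180 cm. Give numbers and your reasoning."
)

# Ask all advisors at once, without letting them see each other's answers.
with ThreadPoolExecutor() as pool:
    answers = dict(zip(advisors, pool.map(lambda ask: ask(prompt), advisors.values())))

for name, answer in answers.items():
    print(f"=== {name} ===\n{answer}\n")
# The comparison is deliberately manual: you study the spread and decide for yourself.
```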

3. The Iterative Skeptic

Occasionally, I go full-on Delphi Method. I feed each AI the responses of the others and make them argue until they reach a consensus. “Hey Claude, ChatGPT disagrees with you—thoughts?” “Gemini says you’re wrong—care to respond?” It’s delightfully dysfunctional, and nobody’s feelings get hurt. Check out “The Four Moats Theory” for an example of this approach. Machines don’t sulk—yet. And sometimes the best insights come from getting the AIs to duke it out.
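
A crude way to script that argument, again assuming the same three helper functions from the sequential sketch: every round, each AI sees the others’ latest answers and may revise its own.

```python
# Assumes ask_claude, ask_chatgpt, ask_gemini from the sequential sketch above.
advisors = {"Claude": ask_claude, "ChatGPT": ask_chatgpt, "Gemini": ask_gemini}
question = "Will Kanban boards still be useful in five years? Answer in three sentences."

# Round 0: independent answers.
answers = {name: ask(question) for name, ask in advisors.items()}

# A few Delphi-style rounds: each AI sees the others' answers and may revise its own.
for round_number in range(2):
    revised = {}
    for name, ask in advisors.items():
        others = "\n\n".join(f"{other}: {text}" for other, text in answers.items() if other != name)
        revised[name] = ask(
            f"Question: {question}\n\nYour previous answer: {answers[name]}\n\n"
            f"Other AIs answered:\n{others}\n\n"
            "Where they disagree with you, respond, then give your revised answer."
        )
    answers = revised

for name, text in answers.items():
    print(f"=== {name} (final) ===\n{text}\n")
```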

4. The Adversarial Skeptic (suggested by Claude)

Instead of using multiple AIs, you can challenge a single AI to argue against itself by requesting counterarguments and limitations to its own reasoning. This reveals blind spots and weaknesses that consensus approaches might miss, teaching the critical thinker to probe beyond initial confidence.
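
Here is a minimal sketch of that self-interrogation in a single conversation, using the Anthropic SDK because Claude suggested the technique; the model name and the sprint-planning question are just examples.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def chat(messages):
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=1024,
        messages=messages,
    )
    return msg.content[0].text

history = [{"role": "user", "content": "Should our team replace sprint planning with continuous flow?"}]
answer = chat(history)

# Now make the same model argue against its own answer.
history += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Now give the three strongest counterarguments to your own answer, "
                                 "and list the assumptions that would make it wrong."},
]
critique = chat(history)

print("ANSWER:\n", answer, "\n\nSELF-CRITIQUE:\n", critique)
```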

5. The Contextual Skeptic (suggested by ChatGPT)

Sometimes, you can change the framing of a prompt—audience, tone, purpose—just to see how wildly the AI’s answers shift. It’s a way to test for bias, tone-dependence, and hidden assumptions. When one version sounds smart and another sounds like LinkedIn sludge, that tells me something. Context matters, and this technique helps expose just how easily LLMs can be steered—intentionally or not.
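
One way to probe that tone-dependence, sketched with the OpenAI SDK because ChatGPT suggested the technique; the framings and the model name are examples, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
question = "Is it a good idea to let AI agents write our production code?"

# Same question, three different framings: watch how much the answer shifts.
framings = {
    "blunt engineer": "Answer like a blunt senior engineer talking to a peer.",
    "LinkedIn influencer": "Answer like a LinkedIn thought leader writing a viral post.",
    "cautious auditor": "Answer like a risk auditor briefing a board of directors.",
}

for label, framing in framings.items():
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": framing},
            {"role": "user", "content": question},
        ],
    )
    print(f"=== {label} ===\n{resp.choices[0].message.content}\n")
```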

6. The Verifying Skeptic (suggested by Gemini)

This technique involves demanding verifiable sources and, crucially, independently fact-checking the AI's key claims against reliable, external, non-AI knowledge bases. This grounds the output in actual evidence, ensuring substantive accuracy rather than just accepting an internally consistent narrative potentially woven from sophisticated guesswork.
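
You cannot outsource the fact-checking itself (that is the whole point), but you can at least make the AI lay out its claims and alleged sources as a checklist to verify by hand. A small sketch with the Gemini SDK, since Gemini suggested the technique; the model name is illustrative.

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name

claim_text = "LLMs still cannot reliably say 'I don't know' when they lack information."

# Ask for the factual claims and the sources behind them, formatted as a checklist.
prompt = (
    "List every factual claim needed to support the statement below, one claim per line, "
    "each followed by the most verifiable non-AI source you can name (publication, author, year). "
    "If you cannot name a real source, write NO SOURCE instead of inventing one.\n\n"
    f"Statement: {claim_text}"
)
print(model.generate_content(prompt).text)
# The actual verification is manual: look up each source yourself,
# outside of any chatbot, before you believe the claim.
```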


Do you want to try and practice these techniques? Sign up for my self-paced course (“New Fundamentals for Leaders in the Age of AI”) or, better, sign up for the learning cohort that starts in May.


The Critical Conclusion

With each of these techniques, I’m practicing critical thinking. It’s safe to assume the AIs are all bullshitting—like politicians on a podium—so I act accordingly. Sure, it’s harmless when you’re asking for a book recommendation or when you’re Muppetizing your family photos. But when you’re making health decisions, product plans, or anything remotely important, blind trust in AIs is pure recklessness.

Critical thinking might be the number one skill for future leaders. Or it might just be top three. Whatever the rank, one thing’s certain:

Don’t trust politicians. Don’t trust LLMs. Don’t even trust me.

Question everything—especially the stuff that sounds a little too confident.

Use your brain.

Often.


The future of work isn’t just hype or doom. It’s about human-robot-agent collaboration—and those who master that will thrive. I’ll be your guide and skeptical optimist in a rapidly changing world. Subscribe to my substack and get ahead.
