The Red Queen Says No to AI Agents


The paradox is that AI increases uncertainty, which breaks autonomous AI agents.




How can we expect AI agents to achieve our goals when humans hardly know how to make sense of a regular day?

2025 is supposed to be the year of autonomous AI agents. And by 2028, the forecasters say, AI agents will work eight-hour workdays, unsupervised.

But I don't believe it.

AI agents are not going to be autonomous soon. Real autonomy—the kind where an agent works a full day without babysitting—will require Artificial General Intelligence (AGI). And AGI, as far as I can tell, is more than a few years away.

Why?

Because the Red Queen says no.

Volatility

AI is hyper-accelerating innovation. Products, startups, and business models now rise and fall in the time it takes us to microwave our lunch. Industries get disrupted before anyone can agree on what the last disruption even was. Regulations are obsolete before they're distributed as PDFs, and business strategy is increasingly turning into continuous improv theater. Stability is a quaint memory; what we have now is Volatility.

On top of that, there's increased personalization—or hyper-personalization—of products and services, which kills any hope of predictable value streams. Soon, there might be no standard business processes anymore. No easy-to-understand Kanban boards. Just endless combinations of AI-powered user and employee experiences, configured in real-time, depending on context. Linear value streams transform into non-linear value networks.

Finally, AI has made it absurdly easy to flood the world with high-quality slop: deepfakes, Ghiblified news, fake experts with flawless credentials and zero reality. When truth becomes optional and geopolitics is in mid-meltdown, every trending rumor leads to a brief collective psychosis. The result is increased volatility: trust erodes, fiction spreads faster than fact, and reality becomes a choose-your-own-hallucination.

Problems get more wicked as AI increases volatility.

Ambiguity

AI creates output that looks smart, feels creative, and occasionally is smarter and more creative than your coworker Daniel. Machines can now brainstorm, write, code, compose, and paint. So... are the machines really intelligent? What is creativity? What is consciousness? Is algorithmic management good or bad? Is AI-powered gig work a curse or a blessing? Welcome to Ambiguity: the rules of work-life have changed for everyone.

Ask the same question to three LLMs and you get three plausible, polished, but different answers. Which one's right? Who knows? They all sound legit. The main reason, of course, is that the data these systems are trained on is ambiguous. It's scraped from a web filled with conflict, contradiction, and Reddit flame wars. What is even "ethical" in algorithms trained across thousands of cultures and a billion conflicting opinions?

And when AI makes life-impacting decisions—like hiring someone or recommending a promotion—accountability becomes a finger-pointing circus. Developers blame the data, users blame the UI, and the AI blames absolutely no one because it's not sentient and doesn't care. Truth is fuzzy. Facts are blurry. Reality is probabilistic.

Ambiguity is not a side effect—it's a feature of the age of AI.

Modularity

We now build work systems as if they're IKEA furniture: we mix-and-match AI tools with half-documented APIs, vibe-coded architectures, and LLMs in "research mode." Sure, it's all wonderfully flexible. Until one tool updates, breaks the stack, and your automations collapse like a Jenga tower on a fault line. Everything works great until it doesn't. And then no one knows why. It's Modularity to the max.

OpenAI now adopts Anthropic's new "standard" for connecting LLMs to data: the Model Context Protocol (MCP). Everything is becoming increasingly modular. You swap one AI for another and suddenly your chatbot has a God complex or your content pipeline starts sounding like a conspiracy theorist. Replaceable doesn't mean harmless. Every change ripples like a butterfly effect with a hangover.

Even companies themselves are going modular. More gig workers. More freelancers. More bots. Less cohesion. Soon, everyone is rowing in different directions. Strategy alignment becomes a myth, like inbox zero or ethical billionaires. Fragmented responsibilities and distributed stakeholders create a spaghetti map of decision-making.

Modularity creates flexibility—and accountability disappears like cryptos on Bybit.

Something Went Wrong

I could keep going with the remaining three of the six dimensions of the Wicked Framework.

Reflexivity: AI is changing the world while being trained on the world it is changing.

Intricacy: The tech gets so convoluted that only a few people understand how it all works.

Scalability: A single good idea can now have a thousand imitations before breakfast.

The future is so uncertain these days, I've stopped trying to predict how the day will end.


If you want to explore the Wicked Framework, check out my book, Human Robot Agent, or the self-paced course, New Fundamentals for Leaders in the Age of AI. Or talk with me directly: I have a cohort starting in April.


AI chatbots get updated mid-prompt, starting as your helpful assistant and ending as your judgmental therapist. Online meetings vanish from your calendar like socks in a dryer. Your smartphone reboots several times a week "to improve the user experience." Your Internet connection drops at the exact moment when you say, "Let me share my screen." One team member reports burnout, another joins a competing startup, and somewhere on your feeds you read that new US tariffs may apply to everything you sell. The only reliable constant is that government services are down because of ongoing strikes, and you suspect your coffee machine is mining Bitcoins.

Ironically, halfway through a brainstorm with my digital assistant, discussing the unpredictable effects of vibe coding on product experiences, Zed encountered an error of its own.

Need I say more?

I remember a time when error messages contained useful information, such as "insufficient memory," "no keyboard detected," or "Cannot find file: C:\WINDOWS\SYSTEM\CLIPPY.SYS." These days, the only thing we experience is "something went wrong" and "computer says no," because nobody has any idea anymore what went wrong. Not even product teams and their product managers know what problems their users face. Everything is connected to everything, and in the age of AI, the global digital network becomes ever more volatile, intricate, scalable, modular, ambiguous, and reflexive.

Here's the key point of this article, in case you were wondering:


How can we expect AI agents to achieve our goals when humans hardly know how to make sense of a regular day?


The Scope of Autonomy

Let's be honest: all software is, to some extent, autonomous. Your smartwatch does its work autonomously. The PlayStation games you play are autonomous. The Zapier and Make workflows that I create are mostly autonomous. Every bit of software we use follows its instructions, autonomously, until something breaks. And then, if it's programmed well, it tells you why.

The main difference between traditional software and AI agents is that we give the agents instructions in human language. And we judge them by:

  1. How well they understand the intent of our instructions,
  2. How effectively they accomplish exceedingly vague goals,
  3. How long they can run without a faceplant that requires intervention.

Some say that by 2027 or 2028, AI agents will manage an entire day without human help. That's the forecast from people who may or may not have tried booking a flight using an airline chatbot.

What often gets overlooked is that autonomy isn't just a technical challenge. It's a socio-technological problem—a wicked problem. Agents must navigate conflicting human preferences, legal grey zones, cultural nuances, shifting goals, ethical disagreements, and incomplete data—all while volatility, intricacy, scalability, modularity, ambiguity, and reflexivity keep rising.

Good luck with that.

The Red Queen Effect

The Red Queen keeps her eyes on all of us, and most of the time, she says, “No.”

I thought AI would free up some of my time and I could finally finish that Netflix binge backlog. Instead, I feel I'm busier than ever. Other content creators report the same. Slop levels rise even as everyone's quality increases, and we all feel pressured to create more than ever just to stay visible. Has AI liberated me?

The Red Queen says no.

Likewise, companies hope that AI improves cybersecurity. But the bad guys have AI too. Nobody is safer. Everyone is just stuck in a never-ending arms race. Meanwhile, many freelancers thought AI would mean less work and more money. Instead, in many domains, gig rates are sinking and hustle has become a survival tactic. Does everyone feel better now?

The Red Queen says no.

As users, we're feeling it, too. With AI-generated spam flooding each feed, people are forced to scroll faster, read less, and trust almost nothing. To not get caught in the AI-powered maelstrom, humans must outmaneuver algorithms designed to outmaneuver humans.

That's the Red Queen Effect.

"It takes all the running you can do, to keep in the same place." —Red Queen, Through the Looking-Glass

So, are AI agents finally going to do our work for us?

The Red Queen says no.

Agents in the Red Queen Race

Sure, AI capability is doubling every 6–7 months. But what if the complexity of the world is doubling, too? What if volatility, intricacy, scalability, modularity, ambiguity, and reflexivity keep increasing with the very same tools meant to address them?
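The doubling argument can be made concrete with a toy model (my own illustration, not the author's numbers): if capability and world complexity grow at the same rate, the ratio between them, which is what autonomy actually depends on, never improves.

```python
# Toy model of the Red Queen race: AI capability doubles every period,
# but so does the complexity of the world it must operate in. The
# agent's *relative* capability never moves.

def red_queen_race(periods: int, capability_growth: float = 2.0,
                   complexity_growth: float = 2.0) -> list[float]:
    capability, complexity = 1.0, 1.0
    history = []
    for _ in range(periods):
        capability *= capability_growth   # AI gets better...
        complexity *= complexity_growth   # ...but the world gets wickeder
        history.append(capability / complexity)
    return history

# Six periods of exponential progress, yet relative capability is flat:
print(red_queen_race(6))  # [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

Only if capability outgrows complexity (say, `complexity_growth=1.0`) does the agent ever pull ahead. The growth rates here are placeholders; the point is the ratio, not the numbers.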

Autonomy isn't just about completing a list of instructions without intervention. It's about understanding and achieving a goal in a world that's increasingly uncertain and unpredictable. You cannot bet on how long it will take a player to complete a game when the system keeps changing the rules of that game.

Humans struggle through their days already. We're buried in "something went wrong" and "computer says no" moments. If AI agents are expected to match us, they'll need human-level problem solving. Or better. And that can only be AGI territory.


The paradox is that AI increases uncertainty, which breaks autonomous AI agents. Every technology improvement makes the system more unpredictable. The better AI gets, the wickeder the world becomes.


The Red Queen Says No

You can ask an AI agent to pick the best out of a thousand job candidates, but half of the candidates use their own AIs to fake their interviews, technologies get updated five times throughout the recruitment process, and by the time the agent is done, the company has pivoted to a new business strategy because a new batch of startups are stealing its customers.

You can ask an AI to plan your next product launch, but mid-way through, it decides to optimize for maximum virality by Ghiblifying your entire social media plan. Meanwhile, marketing trends have changed course twice, an intern has redesigned the logo with Midjourney, and your CFO now wants all payments done on the blockchain.

You can ask a digital assistant to organize your calendar, but it mixes up time zones, books your client meetings during your flight to Berlin, and the private alerts for your Tinder dates suddenly show up on your CEO's smartwatch.

Will AI agents work eight-hour days, autonomously, in 2028?

Your favorite skeptical optimist says “No, I don't believe it.”

The Red Queen says no.

AI agents are not going to be working autonomously for an entire day, not any time soon. They might do that when they achieve AGI and can deal with "something went wrong" and "computer says no" experiences just the way I do—with considerable effort.





Jurgen Appelo

"Eighty percent of everything is noise."