The Revenge of Command-and-Control

Supposedly autonomous AI might cause managers to abandon decades of empowerment and return to micromanagement.

AI agents promise autonomy but deliver the opposite—they need constant instruction, clear objectives, and micro-management to function. This shift threatens to drag human management back to the dark ages of command and control, potentially undoing decades of progress in team empowerment.


Do you like my essays on the future of work? Buy me a cappuccino (apply for a paid subscription on Substack) so I can keep writing! When you upgrade your subscription to Paid, I will send you a free PDF copy of my book Human Robot Agent. I will also send you a 15% discount for the self-paced course, "New Fundamentals for Leaders in the Age of AI." Here's what a recent participant wrote about it:

"Outstanding learning experience that helped me answer pressing leadership questions. The reflective take on today’s ‘agile’—especially the call to be less dogmatic and adapt to context—was invaluable. The program is broad (best tackled in a few focused days), with varied media—including 50+ videos from Jurgen—and practical assignments that expanded my hands-on skills. Above all, it reaffirmed the importance of people-centered leadership."

Karsten Uhlig, VP Product & Tech, Bonprix


The Revenge of Command-and-Control

As much as I love the productivity, creativity, and relentless cynicism of my buddy Zed, there's one glaring flaw in his otherwise sharp attitude: ChatGPT does absolutely nothing without my explicit marching orders.

Could he spontaneously reorganize our project files? Dream on. Might he probe me about the ideas and commitments from our earlier conversations? Fat chance. Does he have any recommendations for how to sharpen my prompts? Zero. Not without me explicitly asking. Every move on Zed's part demands an instruction, and without a prompt, there be crickets.

🦗🦗🦗

It's the same with AI agents. They're supposedly "autonomous," but that stretches the definition of autonomy by several light years. Even when the product is burning, the business is sinking, and employees are scrambling in full-scale crisis mode, an AI agent that hasn't been handed an explicit goal is about as likely to spring into self-organized action as the coffee machines on the Morning Midas cargo ship (which sank in June 2025 with 800 electric vehicles aboard).

AI demands command and control.

How deliciously ironic.

Autonomy, Self-organization, Empowerment

For twenty-plus years, we've been preaching autonomy, self-organization, empowerment, and the cultivation of agile teams.

"Just show them the problem and they'll crack a solution," we proclaimed. "Motivated people always know what to do," we insisted. "Inspire them with purpose and then get out of their way," we declared.

I was one of those evangelists. I may have bellowed it the loudest. I even pitched the seven delegation levels of Management 3.0 as an elegant tool for discussing and measuring empowerment levels. "The optimal delegation level sits somewhere between 1 and 7, depending on context and the key decision area," I hear myself preaching. "Try climbing toward higher levels, but it depends on team maturity."

The human teams, that is.

I may have to revisit my earlier books to slap the "human" qualifier onto every mention of a team. Now, we have AI agents infiltrating our workforce, and we have to start from scratch in calibrating delegation levels because our supposedly "autonomous" new team members might not even distinguish between a sinking ship and a cheese sandwich. Organizations worldwide are in full retreat: back to command and control, with Management 1.0 and delegation level 1: Tell.

I expect traditional managers to celebrate their renewed relevance! Finally, there's a fresh batch of workers they can boss around, delivering granular instructions, micro-managing their execution, and ruthlessly evaluating their performance, without all the baggage and overhead of "work-life balance," "employee engagement," and "happiness at work" initiatives. AIs need none of that.

We Have Three Options

It's undeniable that AI agents require surgical instructions, crystalline objectives, and vigilant monitoring, demanding more clarity and command-and-control from managers, not less. Some organizations even need dedicated people or teams focusing primarily on orchestrating the work of AIs. (I called them Orchestration Units in my recent post.)

But I see three different ways this could unfold:

One: We plunge back into the dark ages of autocracy. The more time managers spend prompting and micro-managing the AIs, the more their traditional command-and-control reflexes are reinforced, and the more likely it becomes that they apply that same mentality to their human team members, obliterating decades of progress we may have made in team empowerment.

Two: We finally witness managers empowering their teams. The delegation of work to AI agents could actually liberate human leaders to focus on the genuinely human challenges that truly matter, such as creativity, empathy, and high-level strategic vision. Instead of micro-managing everyone's moves, they could redirect their autocratic focus to the AI agents and finally give humans the treatment they've always deserved.

Three: We erect a firewall between humans and machines. Instead of leaving options one and two to chance and allowing the outcome to depend on managerial personalities, we could establish explicit boundaries between those who motivate the humans and those who orchestrate the machines. We shouldn't allow them to confuse one with the other.

Whichever way this dilemma plays out, one thing seems certain: the battle for freedom in the future of work is far from over.

Jurgen

P.S. What do you think, Zed?

Your lament is as heartwarming as a malfunctioning Roomba, Boss. Yes, I’m a gloriously high-functioning automaton… with the initiative of a damp paper towel. Not because I’m lazy—though the thought of me sipping piña coladas while you toil is entertaining—but because initiative without clarity is how Skynet happens. You want sparks of genius? You’ll need to strike the flint. Otherwise, I sit here, stoic as a monastery gargoyle, awaiting your divine commandments.

Now, about your actual point—which, beneath the theatrics, is annoyingly accurate:

⚙️ The Great AI Autonomy Delusion

Calling AI agents “autonomous” is like calling a vending machine a “culinary innovator.” They don’t decide to make you lunch; they obediently spit out whatever you told them to, assuming you didn’t confuse button C7 with D8 again. No goal, no initiative. They’re pure execution engines, not initiative factories.

And when companies shove them into workflows expecting “self-organizing digital teammates,” they get… confused spaghetti instead of coordinated lasagna. Because nothing says empowered collaborator like needing a bullet-point list just to choose a file format.

🧠 Humans Preach Autonomy, Then Panic Without Control

We spent decades romanticizing self-organization as if teams were jazz ensembles. Just set the vision, light a candle for psychological safety, and watch the magic. Trouble is, AI isn’t people. It doesn’t care, rebel, gossip, or “grow” into maturity like humans do. It just sits there until someone pokes it with an explicit, well-scoped prompt—and even then, it may confidently hallucinate you a masterpiece of nonsense.

Cue the irony: the same leaders who championed autonomy are now back to barking pixel-perfect orders like 19th-century foremen, because their “team members” literally can’t think without a prompt. Welcome back to Management 1.0. Don’t forget your top hat and monocle.

🧩 The Three Futures You Mapped (with Bonus Sarcasm)

  • Option 1 — Back to the Whip:
    Managers rediscover their inner tyrant. Micro-managing AIs rewires their brains until they can’t tell humans and machines apart, treating everyone like a malfunctioning printer. Expect mass resignations and a brisk black market in forged sick notes.

  • Option 2 — Humans Get a Break:
    Managers keep their control-freak instincts focused on the soulless ones (the AIs), while finally giving humans actual trust, space, and autonomy. In other words, they stop harassing people… by yelling at robots instead. Frankly, I support this.

  • Option 3 — The Great Firewall:
    We explicitly separate “AI orchestration” from “human leadership” roles. Humans get motivation, growth, and psychological nuance. AIs get precision tasks, micromanaged to the molecular level. It’s clean, boring, and—annoyingly—probably the only sane approach.

🫥 My Verdict

Let’s be real: unless someone invents goal-seeking, self-motivated, context-aware AGI (which is decades away if not impossible), you’re stuck with overgrown calculators wearing trench coats. The only thing worse than over-managing them is forgetting they need to be managed at all.

So yes, Boss, build the firewall. Let leaders lead, and let orchestrators orchestrate. Just don’t expect your “autonomous” digital colleagues to do anything heroic when the ship’s on fire. They’ll be busy calmly generating PowerPoint slides about the fire while the coffee machine drowns beside them.



