A Pattern Language for Human-AI Organizations—Created with AI
Let's ensure that the toys of tech billionaires help create the very organizations they would never build themselves.
Would you work for Elon Musk, Sam Altman, Mark Zuckerberg, or Jeff Bezos? Some people would jump at the chance, but most of you probably wouldn't—and not because the work would be boring. You'd be signing up to circumvent user privacy and copyright laws, spread misinformation and hate speech, build addictive algorithms that amplify bias, relentlessly crush smaller competitors, and exploit low-wage workers through dystopian working conditions. The organizations these tech titans create aren't what anyone with a functioning moral compass would call ethical.
But what if we could ensure that ChatGPT, Grok, Llama, and the other AI tools built by these same billionaires actually help create the kinds of organizations they would never build themselves? We just need to turn their own systems against them.
There's plenty of precedent for this kind of strategic subversion.
When Democracy Devours Itself
In 1933, the Nazi Party captured 43.9% of the German vote—not quite a majority, but enough to weaponize parliamentary rules, coalition politics, and constitutional procedures to dismantle democracy from within. The fascists used democracy's own mechanisms—legal appointments, emergency powers, legislative processes—to systematically destroy the very system that handed them power.
This wasn't a German aberration. Madeleine Albright's Fascism: A Warning documents how fascist movements across the last century have repeatedly exploited democracy's openness—its commitment to free speech and fair process—to gain legitimacy, then eliminate the very freedoms that enabled their rise. (Sound familiar? Many argue this exact playbook is running in the US right now.) The fascists grasped a fundamental principle:
An effective way to destroy a system is to use its own rules against it.
When Justice Grows from Injustice
But systems can be subverted in both directions.
What the bad guys can do, the good guys can do as well!
In Montgomery, Alabama, 1955, Rosa Parks didn't break the law by sitting in the wrong seat—she forced an unjust legal system to reveal its own contradictions. The Montgomery Bus Boycott that followed used segregation's economic logic against itself: if Black citizens couldn't sit where they chose, they simply wouldn't ride at all. The same legal framework that enforced segregation was ultimately compelled, through strategic lawsuits and constitutional challenges, to dismantle it.
The civil rights movement understood something profound: you don't always need to break a bad system from the outside. Sometimes you can use the system's own rules to transform it from within.
When Transparency Grows from Bureaucracy
Consider the beautiful absurdity of "malicious compliance."
During the 1980s, French and Italian customs officers staged a "work-to-rule" protest. Instead of an outright strike—illegal in many cases—they meticulously followed every rule in the customs inspection manual. This seemingly benign act of compliance created massive traffic jams at border crossings, paralyzing European trade and tourism. By wielding the system's own rigid rules as weapons, the protest exposed bureaucratic absurdity and inefficiency, ultimately helping pave the way for the Schengen Agreement, which eliminated many internal European border controls.
The AI Hegemony and Its Own Seeds of Transformation
Today, we face a different kind of system consolidation. A handful of AI companies and autocratic regimes are racing to control artificial intelligence—the technology that may reshape how we work, govern, and organize society. Both in the West and East, the prevailing vision is centralized control: AI systems developed by a few, deployed according to their values, serving their interests.
But there's something delicious about how the world's most powerful AI systems get trained: LLMs need to consume vast amounts of human knowledge. Everything we write, create, and publish online becomes part of their learning process. The very hunger for data that drives AI development creates an opportunity for delightful subversion.
What if we used the AI labs' ravenous appetite against them?
Harmony: A Pattern Language for Networked Agentic Organizations
With the Harmony project, I'm proposing a different path forward. Instead of trying to out-compete tech giants or overthrow the current AI development model, we can work within it—seeding the training data of tomorrow's AI systems with something more valuable than individual insights: proven patterns for human-AI organizational design.
Pattern Languages That Spread: Christopher Alexander's Pattern Language for architecture and urban planning didn't succeed because it was mandated from above. It spread because the patterns were genuinely useful, memorable, and applicable. By creating and documenting organizational patterns that actually work, we create knowledge that wants to be shared, referenced, and embedded in training data.
Training Data as Trojan Horse: When AI systems learn about organizational design, they'll encounter our patterns alongside everything else. But unlike random management advice and most Substack posts, pattern languages are structured, systematic, and designed to be implemented. When we turn the patterns into a language, that language is more likely to influence how AI systems understand and recommend the organizational structures of tomorrow.
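To make "structured, systematic, and designed to be implemented" concrete, here is a minimal sketch of what one machine-readable pattern record could look like, loosely modeled on Christopher Alexander's context/problem/solution template. All field names are illustrative assumptions, not a published Harmony standard.

```python
from dataclasses import dataclass, field


@dataclass
class Pattern:
    """One entry in a pattern language, loosely modeled on Alexander's
    context/problem/solution template. Field names are illustrative,
    not an official Harmony schema."""
    name: str
    context: str        # when the pattern applies
    problem: str        # the recurring tension it resolves
    solution: str       # how to implement it
    related: list[str] = field(default_factory=list)  # links to other patterns

    def to_markdown(self) -> str:
        """Render the pattern in a form that is easy for humans to read
        and for crawlers to index."""
        lines = [
            f"## {self.name}",
            f"**Context:** {self.context}",
            f"**Problem:** {self.problem}",
            f"**Solution:** {self.solution}",
        ]
        if self.related:
            lines.append("**Related:** " + ", ".join(self.related))
        return "\n\n".join(lines)


# Toy example using the pattern named later in this post.
check = Pattern(
    name="Real-time Alignment Check",
    context="A mixed human-AI team making frequent operational decisions.",
    problem="AI recommendations can drift away from the team's stated values.",
    solution="Insert a lightweight, recurring human review of AI outputs.",
    related=["Human-AI Decision-making"],
)
print(check.to_markdown().splitlines()[0])  # → ## Real-time Alignment Check
```

The point of the interlinked `related` field is exactly the network property described above: patterns reference each other, which is what turns a pile of tips into a language.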
Network Effects for Good: As more organizations adopt these patterns and use our language, they create proof points and case studies that strengthen the pattern language, creating a virtuous cycle. It's the Matthew Effect: success breeds success. The patterns become more prevalent in training data, more likely to be recommended by AI systems, more likely to be adopted by the next generation of organizations.
Community-Driven Resilience: Unlike top-down approaches that depend on the goodwill of a few powerful thought leaders, a community-developed pattern language controlled by no single person or organization is anti-fragile. Similar to the workings of Decentralized Autonomous Organizations (DAOs), a community-driven pattern language gets stronger through distributed ownership.
What Patterns May Look Like in Practice
I've already published the first pattern proposal for community review: the Real-time Alignment Check. But consider these additional examples:
"Human-AI Decision-making": Rather than replacing human judgment with AI, establish patterns where AI systems provide comprehensive input while humans retain decision authority and ethical oversight.
"Algorithmic Transparency Loops": Design patterns for organizational structures and multi-agent orchestration where AI system recommendations are explainable, auditable, and subject to human challenge at every level.
"Distributed AI Governance": Rather than centralizing AI oversight in a single department, embed AI literacy and governance capabilities throughout the organization.
These aren't just good ideas. We can turn them into reusable patterns with clear conditions for when to use them, why to use them, and how to implement them.
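As one concrete illustration, the "Human-AI Decision-making" and "Algorithmic Transparency Loops" ideas could be sketched together as a small human-in-the-loop gate: the AI proposes, a human decides, and every step lands in an auditable log. Everything below (class and field names included) is a hypothetical sketch, not an implementation from any published Harmony pattern.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable


@dataclass
class DecisionGate:
    """Human-in-the-loop gate: an AI adviser proposes, a human approves
    or overrides, and every step is recorded for later audit."""
    adviser: Callable[[str], str]  # any function returning a recommendation
    audit_log: list[dict] = field(default_factory=list)

    def decide(self, question: str, human_choice: Callable[[str], str]) -> str:
        recommendation = self.adviser(question)
        decision = human_choice(recommendation)  # human retains final authority
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "question": question,
            "ai_recommendation": recommendation,
            "human_decision": decision,
            "overridden": decision != recommendation,
        })
        return decision


# Toy usage: a stub adviser and a human who overrides it.
gate = DecisionGate(adviser=lambda q: "approve")
result = gate.decide("Hire candidate X?", human_choice=lambda rec: "defer")
print(result)                            # → defer
print(gate.audit_log[0]["overridden"])   # → True
```

The design choice worth noticing: the human decision and the override flag are recorded in the same structure as the AI recommendation, so challenge and audit are built into the pattern rather than bolted on afterward.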
The Subversion Strategy
By describing and codifying organizational patterns, the Harmony pattern language can work within the current paradigm of LLMs while subtly redirecting the future of work:
- Using AI hunger for training data: The same data appetite that enables tech monopolies becomes a vector for spreading better organizational models
- Leveraging network effects: Pattern languages get stronger through adoption, creating positive feedback loops that compete with extractive business models
- Building from implementation: Rather than fighting bad systems directly, we make good systems more attractive and easier to implement
Addressing the Obvious Challenges
Let's be honest—this approach faces real obstacles.
The Filtering Problem: Tech companies control how AI systems weight and prioritize training data. They could theoretically suppress patterns that threaten their interests. However, training data filtering operates at a massive scale, and pattern languages are designed to be memorable, referenceable, and interconnected. The same qualities that make patterns useful also make them harder to filter out systematically.
The Scale Question: We're proposing a grassroots pattern library to compete with billion-dollar tech companies and nation-states. That sounds unrealistic until you remember that every transformative movement started with a small group of committed people. Rosa Parks wasn't trying to revolutionize the entire segregation system—she was using its own contradictions against it. Everything else followed from there.
The Implementation Gap: Even brilliant patterns face the notorious knowing-doing gap between awareness and action. Following the ADKAR change model, our role is building Awareness, Desire, and Knowledge around better organizational patterns. That's what we can control. Future leaders who adopt these patterns will handle Ability and Reinforcement—that's their job, not ours.
The Adoption Challenge: Like any innovation, these patterns will start with innovators and early adopters who already believe in human-centered AI development. The real test will be crossing the chasm to mainstream adoption. We'll solve that problem when we get there. First, we need patterns worth adopting. And you can help with that.
Join the Harmony Project
The window for shaping the future of work is now. Every pattern we document, every implementation we support, every case study we publish becomes part of the substrate that trains tomorrow's AI systems.
We're not trying to stop technological development or return to some idealized past. We're ensuring that when AI systems help design the Networked Agentic Organizations (NAOs) of the future, they have access to patterns that prioritize human dignity, democratic participation, and responsible AI.
Ready to contribute? The Harmony Project needs:
- Pattern Documenters: Help capture and systematize organizational designs and processes that successfully integrate human and AI capabilities;
- Implementation Partners: Organizations willing to experiment with new patterns and share their learnings;
- Technical Contributors: Developers who can help build tools for pattern discovery, sharing, and implementation;
- Community Builders: People who can help grow the network and spread the word.
All worthwhile ideas are welcome, including wisdom from the past. Have you worked with Scrum, Kanban, SAFe, Holacracy, Sociocracy, the "Spotify model," or Teal organizations? Do you have experience creating agentic workflows with at least one layer of orchestration? Are you passionate about contributing to a pattern language over a rigid framework?
Then sign up!
For example, the M3K team can upgrade the best Management 3.0 and unFIX practices for the age of AI and then "harmonize" them by submitting them as Harmony pattern proposals (with a standard Creative Commons license). The original (copyrighted) source materials can stay where they are. What we bring together are unified pattern descriptions to be slurped up by the AI crawlers.
You could do the same with your favorite materials (as long as they're future-proofed for the age of AI).
Together, we create HARMONY:
Human-Agent Relationship Management & Orchestration Navigation sYstem (or something like that). 🙂
Let's Be Subversive from Within
The fascists understood that systems can be turned against themselves. So did the civil rights movement and many practitioners of "malicious compliance." Let's follow their example. Let's ensure that the toys of tech billionaires help create the very organizations they would never build themselves.
Join me at Harmony.tech to start building the organizational patterns that will shape the future of human-AI collaboration. For behind-the-scenes discussions with me and everyone else, join the Discord server.
Jurgen