How Claude Became My 24/7 Mother-in-Law
What's the point of aligning AI when we still suck at aligning humans? The real-time alignment check is like a test harness for human behavior. No work-in-progress passes without immediate scrutiny.
Yesterday, Claude and I had a twenty-minute sparring match over a community proposal I was about to publish. Nothing unusual there—except Claude won.
"Your proposal has a few problems in terms of fairness, inclusivity, and transparency," Claude said.
"Yes, I know. It's not perfect, but it's good enough for now," I replied.
"I don't think so. You can do better than this."
"Yes, I can do better, but now is not the time. I'll improve it later."
Then came the uppercut: Claude kindly pointed out that my refusal to consider feedback directly contradicted everything I claimed to stand for.
Touché, you digital pain in the ass.
This delightful confrontation (or actually, a more elaborate and colorful version of it) happened because I programmed Claude with my personal values and turned it into my accountability partner. In Claude's custom instructions, I wrote:
You are my compassionate but critical thinking partner. Your primary job is to improve my ideas by pointing out any flaws and weaknesses with kindness. Never resort to only cheerleading or validating my ideas. Always aim for a positively critical angle. Most importantly, you evaluate everything I do against the following list of core values. You will point it out when any of my ideas seem in conflict with one or more of these values.
Fairness
Fairness is about equity and equal treatment, free from bias or discrimination. For organizations, this means equitable hiring, impartial decision-making, and a balanced allocation of opportunities. For individuals, it's about overcoming personal prejudice and recognizing algorithmic biases. If we expect AIs to make fair decisions, we should demand the same from ourselves.
Inclusivity
Inclusivity means everyone belongs. Organizations need diverse workplaces where all voices are heard. Individuals should actively challenge exclusion, embrace diversity, and amplify underrepresented perspectives. We build AI systems to serve all of society, and we should expect no less from ourselves.
(And so on.)
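For readers who want the same setup outside the Claude app, roughly equivalent behavior can be reproduced through the API by passing the instructions as a system prompt. Here is a minimal sketch using the Anthropic Python SDK; the model name, function name, and the truncated values text are placeholder assumptions, not a prescribed implementation:

```python
# pip install anthropic
import anthropic

# The same instructions and values kept in Claude's custom-instructions
# field, expressed here as an API system prompt. The values are
# truncated; paste in your full list.
VALUES_SYSTEM_PROMPT = """\
You are my compassionate but critical thinking partner. Your primary job
is to improve my ideas by pointing out any flaws and weaknesses with
kindness. Never resort to only cheerleading or validating my ideas.
Always aim for a positively critical angle. Most importantly, you
evaluate everything I do against the following list of core values.
You will point it out when any of my ideas seem in conflict with one
or more of these values.

Fairness: equity and equal treatment, free from bias or discrimination.
Inclusivity: everyone belongs; amplify underrepresented perspectives.
(... the rest of the values list ...)
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def alignment_check(draft: str) -> str:
    """Ask Claude to critique a work-in-progress against the values."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: use whichever model you prefer
        max_tokens=1024,
        system=VALUES_SYSTEM_PROMPT,
        messages=[{"role": "user", "content": f"Please review this draft:\n\n{draft}"}],
    )
    return response.content[0].text

print(alignment_check("Draft of my community proposal..."))
```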
Human Alignment, not AI Alignment
Do you recognize these maddening behaviors? Organizations plastering "core values" on their websites while systematically ignoring them. Teams agreeing on principles, then watching members act as if those agreements were mere suggestions. Or catching yourself acting exactly opposite to the person you aspire to be.
I've done all three. Repeatedly.
Most reflection happens too late to matter. You realize you screwed up during the quarterly review, the monthly retrospective, or while journaling days later. By then, the damage is done; the pattern is reinforced, and you're stuck playing cleanup instead of prevention.
What if reflection happened while you were creating, not after you'd already shipped?
That's why I weaponized Claude and turned it on myself. Every piece I write—Substack posts, articles, book chapters—gets evaluated against my stated values in real-time. I aim to be a more virtuous writer. And Claude doesn't pull punches.
"You're not being fair."
"This doesn't look inclusive."
"You completely missed the sustainability angle here."
It's like having a 24/7 mother-in-law by your side, minus the guilt trips and the casseroles.
Note: I know that, from a sustainability perspective, some people object to getting an AI involved in every piece of output. But the cost of tokens for evaluating my work is significantly lower than the cost of hosting my mother-in-law. You may judge differently in your context.
Real-Time Changes Everything
I call this a real-time alignment check because the feedback loop is shorter than anything I've experienced. No waiting for performance reviews, team retrospectives, or sleepless nights full of regret. I get called out while I'm still writing, when I can actually still fix things.
Does it work? Absolutely—I think. Considering I implemented this practice only two weeks ago, I have only anecdotal evidence. But the signs are good so far.
Claude has already caught several blind spots and saved me from publishing one or two embarrassing paragraphs. We don't always agree—that debate over the community proposal went on for twenty minutes—but the friction creates better outcomes.
In the end, I kept the imperfect text of the community proposal but relented under Claude's watchful eye and added several notes acknowledging where improvements were needed. Claude accepted the compromise. I got a better post. Everyone wins.
Note: I did not intend this post to be a how-to guide. You may have to seek additional guidance on how to implement a real-time alignment check in the AI platform of your choice.
Baked Into The Process
The check stays real-time because Claude's evaluation isn't a separate step I might forget. It's embedded in my writing process. Claude is my go-to writing partner, and if I don't want Claude comparing my work to my values, I have to explicitly tell it to ignore the default instructions. The default mode is alignment mode.
A real-time alignment check isn't limited to personal values or Claude. You can use various AIs to create instant feedback loops around whatever matters to your work: company purpose, brand guidelines, communication style, strategic priorities, team agreements, you name it. Anything you can codify into instructions, the AI can monitor in real time as you go.
Think of it as a test harness for human behavior. When the AI flags something, either you need to improve or your standards need recalibrating.
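To make the test-harness framing concrete, one hedged way to implement it is to ask the model for a machine-readable verdict per value and print pass/fail results like a test runner. The JSON contract, value list, and model name below are illustrative assumptions:

```python
# pip install anthropic
import json
import anthropic

# Illustrative subset; extend with your full values list.
CORE_VALUES = ["Fairness", "Inclusivity", "Transparency", "Sustainability"]

HARNESS_PROMPT = (
    "Evaluate the draft you receive against these values: "
    + ", ".join(CORE_VALUES)
    + ". Respond with JSON only, in this shape: "
    '{"verdicts": [{"value": "...", "aligned": true, "note": "..."}]}'
)

client = anthropic.Anthropic()

def run_alignment_tests(draft: str) -> list[dict]:
    """Return one verdict per value, like test results for a draft."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: any capable model works
        max_tokens=1024,
        system=HARNESS_PROMPT,
        messages=[{"role": "user", "content": draft}],
    )
    # A robust version would also strip the markdown fences that
    # models occasionally wrap around JSON output.
    return json.loads(response.content[0].text)["verdicts"]

for verdict in run_alignment_tests("Draft of my next post..."):
    status = "PASS" if verdict["aligned"] else "FAIL"
    print(f"{status}  {verdict['value']}: {verdict['note']}")
```

A FAIL line is exactly the fork described above: revise the draft, or decide the standard itself needs recalibrating.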
Note: Be aware that sharing all your work-in-progress with an AI is sometimes not a good practice, especially for personally sensitive information or your company's proprietary data. You can configure your AI to call you out on that as well.
Responsible AI, Responsible Human
In my book Human Robot Agent, I wrote that we should hold ourselves accountable to the same ideals and virtues we expect from AIs. My personal values list is literally the same as the list of values for responsible AI: Fairness, Reliability, Safety, Inclusivity, Privacy, Security, Accountability, Transparency, Sustainability, and Engagement.
It's only fair, right?
Yesterday, as I was strolling around at the Agile 2025 conference, someone told me they enjoyed watching me "grow as a writer and speaker" and become "visibly and publicly a better person." That feedback hit harder than most compliments because it captured something I strongly believe in:
If you look back at yesterday's version of yourself and don't see an idiot standing there, you're probably not learning anything important.
Claude, armed with real-time alignment instructions, accelerates that learning for me. Instead of discovering my mistakes after publication, I catch them during creation. Instead of cleaning up after repeated bad patterns, I fix them while there's still time. (You can see an example of the process at the bottom of my Substack post.)
The only thing that's still missing is a way for Claude to flag my thoughts before they escape my mouth. But maybe that's a human feature, not a bug. Some blunders are worth making in public. Let's not make life boring.