Why your AI needs a manager, not a cowboy
Your AI isn't broken; your process is. Agents need specific workflows, not "vibes." Stop improvising like a cowboy and start architecting systems to unlock true scale in the Agentic Age.
May Your Year Be Fully Specified
Dear Friend,
Today we're starting on a different note. After all, as we fly into 2026, we've got to make sure we're not doing it by the seat of our pants! Unlike "Wrong Way Corrigan" (you'll understand the reference once you read on), may all your flights, and your goals, land exactly where you intend.
This year, remember:
Be careful what you wish for.
Your resolutions are like genies: if you say "I want to be healthier," don't be surprised when life removes all your snacks. Technically, it worked.

Stop improvising. Start architecting.
Don't let your 2026 be a series of "vibes." Write the SOPs for your life! Your future self can't read between the lines.

Watch out for the Bootstrapping Problem.
You need energy to get organized, but you need to be organized to have energy. May you find the willpower to bootstrap your way out of that loop!

Create your Clean Room.
New year, new you, new organizational infrastructure. Out with the mental clutter, in with the Data Discipline!

Upgrade from Prompt Engineering to Workflow Architecture.
One clever resolution is tactical. A repeatable system for self-improvement? That's strategic.

Close the Will Gap.
We all know how to eat better, exercise more, and sleep on time. We just hate doing it. In 2026, may you embrace the “boredom” of good habits and unlock your superpower.
All in all, I wish all of you a happy, peaceful, healthy and prosperous 2026!
TL;DR
Remember “Wrong Way Corrigan”? He flew by the seat of his pants. Most modern managers run projects the same way—on “vibes” and improvisation. It works with humans because we can read between the lines. But now you’ve hired an AI Agent, and suddenly, that “agility” is a liability. You ask for a flight; it books a nightmare. You ask for an email; it hallucinates a policy.
The hard truth? The AI isn’t broken. Your process is. We are entering an era where improvisation is a fatal error. AI Agents are like genies: they grant your wish literally, often with disastrous results, unless you possess the “Executive Function” to define the constraints perfectly.
This article argues that to survive the Agentic Age, you must graduate from a “Cowboy” relying on scrappiness to an “Architect” building clean, documented systems. It’s the beginning of Workflow Architecture. If you can embrace the “boredom” of structure, you unlock the ability to clone your intent and scale your output infinitely.
The Itch: The chaos of “agility”
It’s 1938. Douglas Corrigan takes off from New York, intending to fly to California. He lands in Ireland. He claims his compass failed, but history remembers him as “Wrong Way Corrigan.” He was flying “by the seat of his pants”—a literal aviation term for pilots who flew without instruments, relying on the vibrations of the engine and the wind on their face to guess where they were going.
Fast forward to today. You’re running a project. You don’t have a manual; you have “vibes.” You tell your team, “Just handle the client,” or “Make it pop.” You pride yourself on being agile, on moving fast and breaking things. And it works—because your human colleagues share your context. They can read between the lines. They know that “handle the client” actually means “apologize for the delay but don’t admit fault.”
But now, you’ve hired a new employee. This employee is faster than anyone you’ve ever met. They don’t sleep. They don’t drink coffee. They are an AI Agent.
You tell the Agent: “Book me a flight to the conference.”
The Agent books a non-refundable, three-layover nightmare on a budget airline because you didn’t specify “business class” or “direct.”
You tell the Agent: “Email the team.”
It hallucinates a policy change and confuses half your staff.
The frustration sets in. You feel like the technology is broken. You think, “I thought this was supposed to make my life easier, not force me to micromanage!”
Here is the hard truth: The technology isn’t broken. Your process is.
We are entering an era where “flying by the seat of your pants” isn’t a badge of honor; it’s a fatal error. The status quo of reactive, improvisational work is hitting a brick wall. To survive the next phase of tech, we have to stop being cowboys and start being architects.
The deep dive: The struggle for self-control
We need to investigate why this is happening. Why does a tool designed to automate work suddenly require more work from you?
The Villain: The Improvised Mind
For the last twenty years, startup culture has romanticized the lack of structure. We call it “scrappiness.” But strictly speaking, this is improvisation. It relies on high-bandwidth social communication. If I give you a vague instruction, you can look at my face, recall our meeting last week, and guess my intent.
AI Agents are structurally incompatible with this.
Unlike a chatbot that just talks, an Agent interacts with the world to change its state—it moves money, deploys code, and sends emails. Research into multi-agent systems reveals a stark reality: these systems require stable environments with defined interaction patterns. They cannot function in the fluid, “I’ll wear the marketing hat today” chaos of a typical modern office.
The Villain here isn’t the AI. The Villain is the Specification Problem. If you cannot articulate exactly how you do your work because you are inventing it in real-time, you literally cannot delegate it to an AI. You are trying to teach a robot to dance while the music keeps changing.
The Anatomy of Intention
To solve this, we have to get philosophical. We need to look at what “intention” actually means to a machine.
Back in 1990, researchers Cohen and Levesque defined intention as “Choice with Commitment.”
A human “desire” is passive: “I want to be rich.”
A machine “intention” is active: “I will execute Strategy X until Condition Y is met.”
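The distinction above can be sketched in a few lines of code. This is a toy illustration of "Choice with Commitment," not anything from the Cohen and Levesque paper itself; the strategy and condition here are invented for the example:

```python
# A minimal sketch of "Choice with Commitment": an intention persists
# until its condition is met or it is deemed unreachable -- unlike a
# desire, which carries no commitment to act.

def pursue(strategy, condition_met, max_steps=100):
    """Execute `strategy` repeatedly until `condition_met()` is True.

    Returns True if the goal was reached, False if the agent gave up
    after `max_steps` (the "this has become impossible" escape hatch).
    """
    for _ in range(max_steps):
        if condition_met():
            return True   # goal achieved: drop the intention
        strategy()        # committed: keep executing the chosen plan
    return False          # commitment revoked: goal deemed unreachable

# Example: "I will save $10 at a time until I have $50."
balance = {"amount": 0}
reached = pursue(
    strategy=lambda: balance.update(amount=balance["amount"] + 10),
    condition_met=lambda: balance["amount"] >= 50,
)
```

The point is the loop itself: a machine intention is a plan plus a termination condition, and nothing else. Hand it a condition that is vague or missing, and the commitment never resolves.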
When you hand a task to an AI, you are transferring that commitment. But because the AI lacks your lifetime of context, you have to bridge the gap with rigorous definitions. This is where most of us fail. We treat Agents like interns, but we should be treating them like genies.
Remember the story of the genie? You wish for “peace on earth,” and the genie removes all humans. Technically, the genie succeeded.
This is the Counterfactual Reasoning trap.
Imagine the “Loop” variant of the Trolley Problem. A trolley is heading toward five people. You can divert it, but doing so will hit one person whose body will stop the trolley. Most humans hesitate because using a person as a “tool” feels wrong.
An AI doesn’t feel that hesitation. If you give it the command “Save the five people,” a hyper-rational agent might construct a solution that violates every ethical norm you hold dear—unless you explicitly forbid it.
You have to think like a lawyer. You have to simulate “what would the agent do if...” across a dozen scenarios. You have to define not just the goal, but the boundaries of the playing field.
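That "what would the agent do if..." exercise can be made mechanical. Here is a toy pre-deployment harness, with scenario names and rules invented purely for illustration:

```python
# Simulate a plan across scenarios and reject it if it ever crosses an
# explicitly forbidden boundary. The forbidden actions and scenarios
# below are made up for this sketch.

FORBIDDEN = {"use_person_as_tool", "admit_fault", "contact_excluded_domain"}

def first_violation(plan_actions):
    """Return the first forbidden action a plan would take, or None."""
    for action in plan_actions:
        if action in FORBIDDEN:
            return action
    return None

# Candidate plans the agent might propose in two hypothetical scenarios.
scenarios = {
    "trolley_loop":   ["divert_trolley", "use_person_as_tool"],
    "client_apology": ["apologize_for_delay"],
}

# Run every "what if" case before the agent ever touches production.
results = {name: first_violation(actions) for name, actions in scenarios.items()}
```

Notice that the forbidden set does the ethical work: the harness cannot catch a boundary you never wrote down.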
The Paradox of Executive Function
This brings us to the most painful irony of the AI age.
We are sold AI as a tool to offload our cognitive burden. “Let the AI organize your life!” the ads say.
But the research shows the opposite. To effectively use Agentic AI, you need Executive Function (EF) capabilities that would make a CEO sweat.
Working Memory: You have to hold the entire system architecture in your head.
Inhibitory Control: You have to resist the urge to “just do it myself” and instead invest hours in training the system.
Metacognition: You have to think about how you think.
If you are a disorganized person, AI will not fix you; it will amplify your mess. You cannot build an organized AI system on top of a disorganized human workflow. It’s like trying to build a skyscraper on a swamp. This is the Bootstrapping Problem. You need strong executive function to outsource your executive function.
The “Clean Room” Requirement
Let’s look at how we fix this. We need to treat our organizations like microprocessor factories.
To build a chip, you need a “Clean Room”—an environment completely free of dust and defects. To run an Agent, you need an Organizational Clean Room.
Process Visibility: If your sales data is in your head, the Agent is blind. You have to move data from your brain to a database. This feels like “janitorial” work to high-level operators, but it is actually “feeding the agent.” This is Data Discipline.
Documentation as Code: In the past, documentation was a dusty binder nobody read. For an Agent, documentation is the map of the territory. If you don’t write down the policy, the policy doesn’t exist. The Agent will hallucinate a policy based on the average of the internet.
To visualize the difference, look at how an instruction changes when you move from a human colleague to an Agent:
The Cowboy Prompt (Vibes)
"Reach out to some leads."
Implied Context: You know who our leads are. You know our tone. Just do what you did last week.
The Architect Prompt (Specification)
"Execute Outreach Sequence Alpha."
Role: Senior SDR.
Target: CTOs at Series B Fintechs.
Constraint: No buzzwords ("synergy," "disrupt").
Goal: Secure 3 'No's' or 1 'Yes'.
Safety: Do not contact domains in exclusion_list.csv.
The “Cowboy” approach relies on mind-reading. The “Architect” approach relies on explicit boundaries. We have to shift from “The Why” (motivating humans) to “The What” (programming procedures). The Agent doesn’t need a pep talk; it needs a step-by-step constraint list.
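The Architect Prompt above can even be expressed as a machine-checkable spec. The field names mirror the example in the text, but the class itself is a hypothetical sketch, not the API of any real agent framework:

```python
# A sketch of "Outreach Sequence Alpha" as data rather than vibes.
from dataclasses import dataclass, field

@dataclass
class OutreachSpec:
    role: str
    target: str
    banned_phrases: set[str]
    goal: str
    excluded_domains: set[str] = field(default_factory=set)

    def allows(self, draft: str, recipient_domain: str) -> bool:
        """Reject any draft that breaks a constraint before it is sent."""
        if recipient_domain in self.excluded_domains:
            return False
        return not any(p in draft.lower() for p in self.banned_phrases)

sequence_alpha = OutreachSpec(
    role="Senior SDR",
    target="CTOs at Series B Fintechs",
    banned_phrases={"synergy", "disrupt"},
    goal="Secure 3 'No's or 1 'Yes'",
    excluded_domains={"rival.com"},  # in practice, loaded from exclusion_list.csv
)
```

Once the constraints are data, they can be enforced automatically on every draft, which is exactly what the "mind-reading" version of the instruction can never be.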
The Resolution: Your New Superpower
So, where does this leave you? Are you doomed to be a data janitor?
No. You are graduating.
We are witnessing the era of Workflow Architecture.
The New Normal: Situational Leadership for Bots
Your new role is not “doer.” It is “conductor.” You are managing a queue of high-speed, literal-minded workers.
You must adopt a Situational Leadership model for your synthetic workforce:
For the “Junior” Agent: You use a Directing Style. You provide step-by-step recipes.
For the “Senior” Agent: You use a Delegating Style. You provide high-level goals and constraints.
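The two styles above amount to a simple supervision policy. Here is one way to sketch it, with thresholds and field names that are purely illustrative assumptions:

```python
# Situational Leadership for bots: the less proven the agent, the
# tighter the instructions and the heavier the review.

POLICIES = {
    "junior": {"style": "directing",  "instructions": "step-by-step recipe",
               "review": "every action"},
    "senior": {"style": "delegating", "instructions": "goals + constraints",
               "review": "spot-check outputs"},
}

def leadership_policy(tasks_completed, error_rate):
    """Pick a supervision policy from the agent's track record."""
    proven = tasks_completed >= 50 and error_rate < 0.02
    return POLICIES["senior" if proven else "junior"]

# A brand-new agent gets recipes; a proven one gets goals.
new_agent = leadership_policy(tasks_completed=3, error_rate=0.10)
veteran   = leadership_policy(tasks_completed=200, error_rate=0.01)
```

The design choice worth noting: promotion from Directing to Delegating is earned by a measured track record, not granted by default.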
The skill you must cultivate is Discernment. You need to look at an AI output and not just say “Looks good,” but critically evaluate: Did it hallucinate? Did it break a rule? Is it safe?
This prevents the “Hollow Manager” syndrome. If you strictly delegate without understanding the underlying mechanics, you lose the benchmark for truth. You become unable to verify if the work is actually good or just “plausible.” Eventually, you become a rubber stamp for hallucinations, leading to a slow, undetectable degradation of your organization’s quality. You must occasionally “retro-delegate” work back to yourself just to keep your skills sharp.
The Will Gap
Ultimately, this isn’t a skill gap. It’s a Will Gap.
We know how to be organized. We know how to write documentation. We know how to plan. We just hate doing it because it feels slow. We prefer the adrenaline rush of improvisation.
But in the Age of Agents, speed comes from structure.
If you can embrace the “boredom” of building the infrastructure—if you can write the SOPs, clean the data, and define the constraints—you unlock a superpower. You gain the ability to clone your intent. You can go from doing one thing at a time to doing a thousand things at once, all executed with the precision of your best day.
The future doesn’t belong to the cowboy who flies by the seat of their pants. It belongs to the architect who builds the cockpit, calibrates the instruments, and charts the course.
Stop improvising. Start architecting.
Deep Dive: Connecting the Dots
To understand the full stack of the ‘Architect’ mindset, read these foundational pieces:
The Role: You are shifting from a “Prompter” to a “Protocol Manager”. The New Manager (021) defines this specific shift in job title and scope.
The Environment: You can’t be a Cowboy because the agent runs in a “Clean Room”. The Virtualized Worker (020) explains why we need sterile, sandboxed environments for reliability.
The Skill: The core capability is decomposing vague intent into precise steps. The AI Architect (004) laid the groundwork for this “systematic decomposition” back in October.
The Context: Why “Prompting” failed. Forget Prompt Engineering (002) explains why we are moving to “Fluency” and “Architecture” instead.
References
Cohen, P. R., & Levesque, H. J. (1990). Intention is Choice with Commitment. Artificial Intelligence. (The foundational definition of intention in computational systems).
Bratman, M. E. (1987). Intention, Plans, and Practical Reason. Harvard University Press. (The philosophical underpinning of planning and agency).
Jennings, N. R. (2000). On Agent-Based Software Engineering. Artificial Intelligence. (Research on the necessity of organizational structures for multi-agent systems).
Awad, E., et al. (2018). The Moral Machine experiment. Nature. (Exploration of ethical dilemmas and counterfactual reasoning in AI).
Diamond, A. (2013). Executive Functions. Annual Review of Psychology. (A comprehensive overview of the cognitive skills required for planning and inhibition).
Peace. Stay curious! End of transmission.

