The workslop tax: why your AI-assisted colleagues are costing you two hours a day
AI-generated "workslop" costs enterprises $9M annually in wasted time and eroded trust. 40% of workers received it last month. Here's how to spot it—and stop sending it.
The Itch: Why This Matters Right Now
You’ve felt it. It’s 2:00 PM on a Tuesday, and a notification pings. It’s a project update from a colleague you generally respect. You open the document, ready to dive into the next phase of the launch, but as your eyes scan the page, something feels... off.
The grammar is flawless. The bullet points are perfectly aligned. The tone is professionally upbeat. But as you finish the third paragraph, you realize you have no idea what has actually happened. You re-read it. Then you re-read it again, slower this time. Your brain starts to ache. It’s like trying to grab a handful of fog; you can see it, it looks substantial, but your fingers pass right through it.
This isn’t a mistake. It isn’t a typo. It’s workslop.
We were promised a revolution. We were told that generative AI would be the “Exoskeleton for the Mind,” a tool that would strip away the drudgery of the 9-to-5 and leave us free to do the “real” work. But for many of us, the reality is starting to look like a nightmare of our own making. Instead of being freed from the mundane, we are being buried under a mountain of “kind of right” digital detritus.
The status quo is officially broken. We are currently trapped in a cycle where one person uses a prompt to save five minutes of thinking, only to force their colleagues to spend two hours untangling the resulting mess. It’s a massive, invisible tax on our time, our sanity, and the very foundation of how we work together. If we don’t find a way to stop the “slop,” we aren’t just losing productivity—we’re losing the ability to trust the person on the other side of the screen.
The Deep Dive: The Struggle for a Solution
Chapter 1: The Villain in the Machine
In every great mystery, there is a villain. In our story, the villain isn’t the AI itself—it’s the illusion of completion.
For decades, we’ve used “polish” as a proxy for “effort.” If a report looked good, it usually meant someone had spent hours thinking about it. But the rise of Large Language Models has decoupled polish from thought. Now, a “villainous” piece of workslop can masquerade as a brilliant strategic memo. It’s a wolf in sheep’s clothing, or more accurately, a hollow shell in a tuxedo.
Researchers at Stanford and BetterUp have finally put a price tag on this villainy. For an average large company of roughly 10,000 employees, this isn’t a rounding error; it’s a $9 million annual drain in lost productivity. This is the “burden shift.” The sender outsources their cognition to a bot, feeling efficient and productive. But that “efficiency” is a lie—it’s just been pushed onto the recipient. The receiver is the one who has to do the heavy lifting of fact-checking, re-writing, and deciphering what was actually meant.
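To see how quietly this tax compounds, here is a back-of-the-envelope model. The inputs below (loaded hourly rate, incidents per affected employee) are illustrative assumptions chosen to reproduce the study’s order of magnitude, not figures from the research itself:

```python
# Back-of-the-envelope model of the "workslop tax" for one organization.
# All rate inputs are illustrative assumptions, not figures from the study.

EMPLOYEES = 10_000          # size of the example organization
HIT_RATE = 0.40             # share of employees hit by workslop in a given month
INCIDENTS_PER_MONTH = 1.5   # assumed incidents per affected employee per month
HOURS_PER_INCIDENT = 2      # time to untangle each incident
HOURLY_RATE = 62.50         # assumed fully loaded cost per employee-hour (USD)

wasted_hours_per_month = EMPLOYEES * HIT_RATE * INCIDENTS_PER_MONTH * HOURS_PER_INCIDENT
monthly_cost = wasted_hours_per_month * HOURLY_RATE
annual_cost = monthly_cost * 12

print(f"Wasted hours/month: {wasted_hours_per_month:,.0f}")
print(f"Annual cost: ${annual_cost:,.0f}")  # ≈ $9,000,000 with these inputs
```

The point of the sketch is not the exact dollar figure but the shape of the math: a modest hit rate times two hours of cleanup per incident scales linearly with headcount, which is why the tax is invisible on any one desk and enormous on the balance sheet.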
So how did we get here? The answer lies in how companies responded to the AI boom—with enthusiasm, but without guardrails.
Chapter 2: The False Starts of the AI Gold Rush
When the AI boom hit, companies reacted with a gold-rush mentality. They bought licenses by the thousands. They pushed “AI-first” initiatives. They told employees to “innovate or be left behind.”
But they forgot one crucial thing: The Quality Control Loop.
We treated AI like a high-speed librarian—someone who could fetch facts and organize thoughts instantly. But current AI behaves more like an over-eager, untrained intern. It wants to please you so badly that it will make things up just to fill the space. It will parrot your prompt back to you in flowery prose because it doesn’t actually understand the why behind the task; it only understands the pattern of the words.
The false start here was the belief that more usage equaled more value. AI usage has doubled in two years, yet 95% of organizations are looking at their balance sheets and seeing no measurable ROI. Workslop isn’t the only reason. Implementation failures, integration challenges, and unrealistic expectations all play a role. But it helps explain why so much activity is producing so little result. We are spending billions on tools that are essentially producing very expensive noise.
Chapter 3: The Relational Poison
The struggle isn’t just financial; it’s deeply personal. When you receive workslop, you don’t just get annoyed at the document; you get annoyed at the sender.
Think about the psychology of a team. Trust is built on the belief that “I am doing my part, and you are doing yours.” When a colleague sends you an AI-generated memo stuffed with vague assertions and missing context, the kind of content that now makes up roughly 1 in 6 documents workers receive, they are sending a silent message: “My five minutes of convenience are more valuable than your two hours of correction.”
The data is heartbreakingly clear. Over half of people who receive workslop immediately view the sender as less capable. 42% view that person as less trustworthy. We are seeing a massive erosion of “social capital.” Once you realize your boss or your peer is just “phoning it in” via a prompt, you stop bringing your best self to the collaboration. You become defensive. You start double-checking everything. The gears of the organization grind to a halt because the oil of trust has been replaced by the sand of workslop.
The damage is real. But it’s not inevitable. The difference comes down to how we choose to use these tools.
Chapter 4: The Breakthrough—Becoming the Pilot
So, how do we fight back? The breakthrough comes from a shift in identity. We have to stop being “passengers” and start being “pilots.”
A passenger sits in the back of the AI car, looks out the window, and hopes the GPS takes them to the right place. If the car drives off a cliff, the passenger says, “It’s not my fault; the car was driving!”
A pilot, however, uses the instruments. They know the destination. They understand the weather conditions. They are constantly course-correcting. A pilot uses AI as a co-pilot, not an autopilot.
This means changing how we interact with these tools. Instead of asking an AI to “write a report on X,” a pilot provides the unique data, the specific constraints, and the strategic “soul” of the project. They then treat the AI’s output with systematic skepticism. They look for the slop and prune it away. They ensure that every sentence serves a purpose.
The breakthrough isn’t a better prompt; it’s a better human oversight process. It’s the “human-in-the-loop” model where we acknowledge that the AI provides the raw material, but only the human provides the meaning.
The Resolution: Your New Superpower
Imagine a world where the “slop” is gone.
In this “New Normal,” you don’t dread opening your inbox. When you receive a document, you know it contains the hard-won insights of your colleagues. When you send an update, your team knows it’s been vetted and refined by your unique perspective.
The companies that survive the “Workslop Era” will be the ones that stop chasing “AI adoption” and start chasing “AI Integrity.” These are the organizations building “Ethics Councils” and “Responsible AI” frameworks. They aren’t just giving people tools; they are giving them a new set of standards.
But what does this look like in practice? Here are three pilot behaviors you can adopt starting Monday morning:
The Read-Aloud Test: Before sending any AI-assisted document, read it aloud and ask yourself, “Does this actually say something specific?” If you can’t summarize the main point in one sentence, it’s slop.
The Clarifying Question: When you receive workslop, don’t silently fix it. Reply with a specific question: “What’s the actual recommendation here?” or “Can you clarify what changed since last week?” This creates accountability without confrontation.
The Team Norm: In your next team meeting, establish this rule: “AI drafts are starting points, not endpoints.” Make it culturally safe to say, “I used AI for this first pass—here’s where I need input.”
By recognizing workslop for what it is—a productivity drain and a trust killer—you gain a new superpower: Discernment. You now have the vocabulary to call out the “kind of right” content before it poisons your project. You have the permission to tell your team, “Let’s spend ten more minutes thinking so our colleagues don’t have to spend two hours fixing.”
The future isn’t about who has the most AI; it’s about who uses AI to be more human. The most valuable person in the room won’t be the one who can generate 100 pages of text in ten seconds. It will be the one who treats AI output as raw material and refines it into a one-page masterpiece that actually moves the needle.
The choice is yours. You can stay in the passenger seat and let the slop pile up. Or you can take the yoke, check your instruments, and fly. The $9 million is waiting to be recovered. Your reputation is waiting to be rebuilt. It’s time to stop the slop and start the work.
Deep Dive: Connecting the Dots
To truly eradicate workslop, you need to look at the broader system of how you manage and interact with AI:
The Management Shift: The transition from “Passenger” to “Pilot” directly mirrors the shift from “Prompt Engineering” to “Protocol Management” described in The New Manager (021).
The Root Cause: The “illusion of completion” is a symptom of the deeper issue discussed in AI Demands We Stop Pretending (011), where we prioritize output volume over cognitive depth.
The Mechanism: Moving away from the “chatty” interface that encourages lazy prompting is central to The Conversational Fallacy (018).
References
Harvard Business Review: “AI-Generated ‘Workslop’ Is Destroying Productivity” (September 2025) – BetterUp Labs & Stanford Social Media Lab.
MIT NANDA Project: The Generative AI ROI Gap – A study of 300+ enterprise AI initiatives.
Gallup Workplace Research: The State of the Global Workplace 2025 – Trends in AI adoption and employee engagement.
Accenture Research (2024): Companies with fully AI-led processes nearly doubled, from 9% to 16%, during 2024.
BetterUp Labs & Stanford Social Media Lab: Research led by Kate Niederhoffer (BetterUp) and Jeffrey T. Hancock (Stanford) on AI-generated content and workplace trust.
Peace. Stay curious! End of transmission.

