AI Demands We Stop Pretending: or, How Do I Read Nowadays?
THE PREMISE: The Illusion of Productive Knowledge Work
We’ve been lying to ourselves about knowledge work for decades, and AI just called our bluff.
Consider the typical knowledge worker’s day: skimming articles they’ll never revisit, highlighting textbook passages without retention, attending lectures while mentally checked out, writing reports that recycle conventional wisdom, and feeling perpetually guilty about the growing stack of “must-read” books gathering dust. We’ve built entire careers around these rituals, convinced ourselves they represent serious intellectual work, and judged our productivity by how busy we look rather than what we actually understand.
Then AI arrived and revealed an uncomfortable truth: if a machine can replicate your knowledge work in seconds, you weren’t doing knowledge work at all. You were performing it.
The panic is predictable. A 2025 study found a “significant negative correlation between frequent AI tool usage and critical thinking abilities.” Students become dependent. Academic writing suffers. Educators warn of “knowledge rot.” The Wall Street Journal sounds alarms. Everyone agrees: AI is destroying our capacity for deep thought.
But here’s what nobody wants to admit: AI isn’t breaking knowledge work—it’s exposing how broken our approach already was. We were already terrible at learning, already dishonest about reading, already confusing proximity to information with actual understanding. We just had better excuses before the robots showed up.
The real crisis isn’t that AI threatens knowledge work. It’s that we might waste this moment clinging to performative rituals instead of finally building systems that produce genuine understanding. And that reconstruction starts with admitting that most of what we called “knowledge work” was always theater.
THE ARGUMENT: Why Our Knowledge Work System Never Actually Worked
We Built Everything Around Performance, Not Understanding
Let’s confront the system we actually created. When students are forced to write five-paragraph essays analyzing texts they don’t care about, we’re not teaching critical thinking—we’re teaching compliance theater. When professionals dutifully plow through business books that could be summarized in a paragraph, they’re not pursuing insight—they’re checking boxes. When academics highlight entire textbooks without retention, they’re not learning—they’re performing the rituals we’ve designated as “serious scholarship.”
The student who uses ChatGPT to generate that essay isn’t breaking the system. They’re just making its emptiness impossible to ignore. If your entire assessment strategy can be defeated by an AI generating plausible-sounding text, you were never measuring understanding. You were measuring the ability to produce acceptable-sounding words on demand.
This isn’t a new problem that AI created. It’s an old problem that AI exposed. The difference is we can’t maintain the collective delusion anymore. When AI can replicate your knowledge work perfectly, it proves that work was never about cognition—it was about credentialing, about signaling, about going through motions we’ve collectively agreed to respect.
The mechanism is straightforward: we confused consumption with curation, reading with understanding, and information proximity with intellectual transformation. We built an entire infrastructure—education, professional development, corporate training—around these confusions. AI didn’t destroy that infrastructure. It just revealed it was always built on sand.
The Cognitive Offloading We’re Afraid of Was Already Happening
Here’s the part that stings: the “cognitive offloading” everyone fears from AI? We were already doing it. Constantly.
We offload to Google instead of remembering. We offload to highlighting instead of thinking. We offload to note-taking apps instead of synthesizing. We offload to “read it later” lists instead of making actual decisions about what matters. We’ve been outsourcing cognition to external systems for years—we just found those systems more respectable than AI because they required more performative effort.
The research on AI making students “lazy thinkers” isn’t discovering a new phenomenon. It’s documenting what happens when we remove the performance requirement from knowledge work that was always lazy. Students weren’t thinking deeply before ChatGPT—they were just spending more time looking like they were thinking deeply.
This is why the hand-wringing about AI misses the point entirely. When educators worry that students will use AI to “dodge real learning,” they’re assuming real learning was happening before. It mostly wasn’t. We were teaching students to jump through hoops, and now AI can jump through those same hoops faster. The crisis isn’t that AI broke something valuable—it’s that AI revealed how little of our knowledge work was ever valuable in the first place.
Strategic AI Use Demands the Cognition We Always Avoided
But here’s where the narrative transforms completely: the same AI that exposes our performative knowledge work also makes genuine knowledge work unavoidable—if we choose to use it strategically.
Consider the “Feynman Bot” approach: you teach the AI rather than being taught by it. You explain concepts while the AI, playing an ignorant but curious student, probes for gaps in your understanding. This isn’t cognitive offloading—it’s cognitive forcing. You can’t fake your way through teaching. The gaps in your knowledge become immediately, brutally apparent.
Or the Socratic method: AI deliberately challenges your premises, demands evidence, presents counterarguments. This isn’t using AI to avoid intellectual work—it’s weaponizing AI to make intellectual work harder and more rigorous than you’d likely undertake alone.
The pattern holds across domains. Use AI to generate infinite practice problem variations? That’s deliberate practice at scale. Use AI to critique your draft and then defend your argument against its challenges? That’s battle-tested understanding. Use AI to compress feedback loops from days to seconds while forcing you to do all the cognitive work? That’s eliminating busywork to make real work unavoidable.
This is the reconstruction we need: AI should handle scalable, automatable tasks specifically to free cognitive capacity for the highest-level functions—strategy, validation, creative direction, and critical judgment. Not replacing human thinking, but eliminating the performative labor we mistook for thinking, forcing us to operate exclusively in advanced cognitive domains.
The Real Choice: Honest Work or Sophisticated Performance
The difference between AI as cognitive crutch and AI as cognitive catalyst isn’t the technology—it’s entirely about strategy. And we’re losing by default because the performative approach is easier and more familiar.
Students instinctively gravitate toward effort-minimization. Professionals instinctively reach for tools that make them look productive. Without explicit frameworks for strategic AI use, without systems that transform AI from answer-dispenser to thinking partner, we’ll default to the path of least resistance: more sophisticated performance, same shallow outcomes.
This is why the moment matters desperately. We can use AI to become more sophisticated performers—better at generating plausible-sounding analysis, more efficient at producing acceptable work, faster at checking boxes. Or we can use AI to finally force ourselves into the genuine cognitive work we’ve been avoiding for decades.
The technology is identical. The outcome is opposite. The variable is whether we’re willing to admit that most of what we called knowledge work was always performance—and choose differently.
THE SOLUTION: A Reading System for Cognitive Honesty
If our old knowledge work was theater, our new system must be built on brutal honesty. This is especially true for technical reading, where the gap between looking like you’re learning (highlighting, page-turning) and actually understanding (building, debugging) is massive.
The performative reader tries to “read 10x faster.” The honest reader uses AI to force themselves to “think 10x harder” on the 20% of the material that actually matters.
Here are the principles of this new, honest system.
1. The “Metabolic Triage” (Not the 80/20 Rule)
The 80/20 “hack” is a performative trap; it’s still focused on reading 100% of the book, just at different speeds. This is a lie.
Cognitive Honesty: 80% of any technical book is not worth your glucose. It’s filler, repetitive examples, or context you don’t need.
The goal isn’t to “skim” this 80%. It’s to declare cognitive bankruptcy on it.
Use an AI preview prompt not to read faster, but to find the 20% that demands genuine struggle (the core algorithm, the complex proof, the novel architecture). You are not “reading a book”—you are surgically extracting a single, difficult concept and saving 100% of your metabolic budget to fight only that concept.
Performative Ask: “Summarize this book so I can learn it faster.”
Honest Ask: “I am reading [Book] only to understand [Complex Concept]. Which 2-3 chapters introduce the code and logic for this? Which 80% of the text is just history, setup, or simple examples I can ignore?”
2. The “Cognitive Forcing Function” (Not the Confusion Buster)
Performative readers use AI to avoid confusion. When they get stuck, they ask, “Explain this to me simply,” offloading the cognitive work.
Cognitive Honesty: Confusion is the only sign that real learning is possible. It’s the signal to increase cognitive load, not escape it.
Use AI as a Socratic-level sparring partner that makes thinking harder, not easier. This is your “Feynman Bot”—an ignorant but ruthless student you must teach.
Performative Ask: “I don’t understand [concept]. Explain it with an analogy.”
Honest Ask: “I’m going to explain [concept] to you. Do not tell me if I’m right. Just act as a skeptical senior developer and ask me probing questions that reveal the flaws in my logic. Force me to defend my explanation.”
3. The “Code-First” Triage (Not the Three-Speed Method)
The old “skimming” method is a coping mechanism for performative reading. A technical book is not prose. It’s a core of logic (code, formulas) wrapped in a layer of explanation (prose). Only one of these matters.
Cognitive Honesty: You read the prose only to understand the code. Not the other way around.
Read the Code First. Go straight to the code blocks, formulas, or diagrams. This is the only ground truth. Type it out. Run it. Break it. Get the error messages. This is the real text.
Use Prose as a “Manual.” Now, read the explanatory text only to solve the problems you found in the code. “Why did the author use a deque here? Why not a list?” If you already understand the code, you do not need to read the surrounding prose.
Use AI to Bridge the Gap. If the prose still doesn’t explain the code, use the Cognitive Forcing Function: “Here is the code I’m running. Here is the author’s explanation. My code is failing, and the explanation is weak. What question should I be asking?”
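The “type it out, run it, break it” advice applies even to a question as small as “why a deque?” A minimal sketch (the timing numbers will vary by machine) shows how running the code answers it faster than any paragraph of prose: popping from the front of a list shifts every remaining element, while deque.popleft() is constant-time.

```python
from collections import deque
import timeit

def drain_list(n):
    items = list(range(n))
    while items:
        items.pop(0)      # shifts every remaining element left: O(n) per pop

def drain_deque(n):
    items = deque(range(n))
    while items:
        items.popleft()   # O(1) per pop

n = 20_000
t_list = timeit.timeit(lambda: drain_list(n), number=3)
t_deque = timeit.timeit(lambda: drain_deque(n), number=3)
print(f"list:  {t_list:.3f}s")
print(f"deque: {t_deque:.3f}s")  # the deque should win decisively
```

Ten lines of ground truth, and the author’s choice stops being a mystery you need the surrounding prose to resolve.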
4. The “Proof-of-Work” Prompt (Not the Understanding Lock)
The performative reader finishes a chapter and asks AI, “Summarize the key insights” or “What’s one way I could use this?” This is theater. It’s asking for a pre-digested answer to prove you were “productive.”
Cognitive Honesty: The only proof of understanding is capability.
Use AI to create an honest assessment—a new, novel challenge that requires you to apply the concept you just read. This is your “Proof-of-Work.”
Performative Ask: “What did I just learn?”
Honest Ask: “I just read a chapter on [Python’s asyncio library]. Give me a mini-project prompt that forces me to use asyncio.Queue and asyncio.Lock in a way that is not identical to the book’s example. I will write the solution, and you will critique its correctness and efficiency.”
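To make the “Proof-of-Work” concrete, here is one possible shape such a mini-project could take (a sketch, not the prompt’s prescribed answer): several workers drain a shared asyncio.Queue while an asyncio.Lock serializes updates to a shared result dict.

```python
import asyncio

async def worker(queue, lock, results):
    # Each worker pulls items until cancelled after the queue is drained.
    while True:
        item = await queue.get()
        try:
            async with lock:          # guard shared state against interleaving
                results["processed"] += 1
                results["sum"] += item
        finally:
            queue.task_done()

async def main():
    queue = asyncio.Queue()
    lock = asyncio.Lock()
    results = {"processed": 0, "sum": 0}

    for i in range(10):
        queue.put_nowait(i)

    workers = [asyncio.create_task(worker(queue, lock, results))
               for _ in range(3)]

    await queue.join()                # block until every item is task_done()
    for w in workers:
        w.cancel()
    return results

results = asyncio.run(main())
print(results)  # {'processed': 10, 'sum': 45}
```

Writing something like this, then defending it against the AI’s critique (“Is the lock even necessary here? What happens if a worker raises?”), is the capability test that a chapter summary can never be.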
This is the shift. Stop using AI to “enhance reading.” Start using it to force thinking. Stop optimizing for “books finished” and start optimizing for “hard problems solved.” That is the only knowledge work that was ever real.
THE TAKEAWAY: Rebuild Around Cognitive Honesty
The path forward requires abandoning comfortable fictions. Stop pretending that reading entire books you don’t care about is virtuous. Stop acting like highlighting constitutes learning. Stop treating time spent as a proxy for understanding achieved. Stop judging knowledge work by how difficult it looks rather than what it produces.
Instead, rebuild around brutal honesty about what genuine knowledge work actually requires:
Ruthless curation over dutiful consumption. You have limited willpower, limited attention, limited capacity for deep focus. In a world of infinite content, trying to “keep up” isn’t virtuous—it’s self-sabotage. Use AI to make awareness-level reading efficient. Save your metabolic budget for transformative engagement with ideas that actually matter to what you’re trying to understand.
Strategic difficulty over performative effort. Stop asking “Can AI help me with this?” Start asking “How can I use AI to force my brain to work harder on this?” Use AI as an ignorant student you must teach. Use AI as a Socratic challenger who questions every premise. Use AI to generate progressively harder problems that expose gaps in understanding.
Honest assessment over credentialing theater. If an AI can pass your test, your test was measuring performance, not understanding. Rebuild evaluation around demonstrating genuine capability—explaining concepts to skeptical questioners, solving novel problems, synthesizing across domains, creating original work that couldn’t be generated by following patterns.
This isn’t easier. It’s harder. That’s precisely the point. The knowledge workers who thrive won’t be those who find clever ways to make AI do their thinking. They’ll be those who weaponize AI to make thinking unavoidable.
The technology is neutral. Your strategy determines whether AI elevates your cognition or just makes your performance more efficient. We’re at an inflection point. We can use this moment to finally build knowledge work systems around genuine understanding, or we can use it to perfect our ability to look productive while thinking less.
Choose deliberately. Choose difficulty. Choose honest cognitive work over sophisticated performance. The future belongs not to those replaced by AI, but to those who refuse to let AI do their thinking for them—and who finally stop pretending they were thinking deeply before AI arrived.
Peace. Stay curious! End of transmission.

