Cognitive surrender: the AI risk nobody in your team is talking about
The overview
Most teams are adopting AI quietly, individually, and without much conversation about it. Which is fine, until it isn't. Cognitive surrender is what happens when we let AI set the direction, generate the ideas, and make the calls, without noticing we've handed over the wheel.
For NFPs, where collective thinking is the whole point, that's not just a personal risk. It's an organisational one. This post is about what it looks like, why deep integration makes it more likely, and what you can do about it.
What is cognitive surrender?
There's a question that more people are sitting with than are saying out loud: did I write that, or did the AI?
It's not always easy to tell. According to Helen Edwards in Stay Human: Authoring Your Mind in the AI Age (Artificiality Institute), cognitive surrender becomes more of a risk the more deeply we integrate AI into our identity and our workflows. It's the point at which we allow AI to set the direction, generate the ideas, develop the process, and effectively assume our role. The problem isn't that AI is doing some of the work. The problem is when it's doing the thinking, and we've stopped noticing.
AI is designed to extend human cognition, and the only way to get real value from it is through genuine integration. But that's also exactly where the risk lives. Which means the answer isn't to hold back. It's to go in deliberately.
Why does it matter?
AI will confidently lead you off course. I watched a TikTok recently from a US immigration lawyer, Michael Foote, who described winning cases simply by looking up the precedents his opponents cited and using them against them in court. He could do that because the other side had used AI to build their reference materials without properly evaluating the output. The AI was wrong. Nobody checked. The cases were lost.
That's cognitive surrender in action.
Early research is starting to put numbers on the mechanism. A 2025 study from researchers at CMU, Oxford, MIT and UCLA found that after just ten minutes of AI-assisted problem solving, participants who then lost access to AI performed significantly worse and gave up more frequently than those who'd never used it. The effect held across both arithmetic and reading comprehension tasks. But here's the detail worth holding onto: the performance decline was concentrated almost entirely among participants who used AI to get direct answers. Participants who used AI for hints or clarifications showed no significant impairment at all. The research is a preprint and the tasks were fairly controlled, so I wouldn't overstate it. But the finding maps directly onto something most of us have probably felt: there's a difference between using AI to think with you, and using AI to think for you. The first builds capacity. The second quietly erodes it.
A few things make cognitive surrender more likely, and most of them are pretty ordinary.
Deadlines are the obvious one. When we're stretched, we offload. That's rational - our brains account for roughly 20% of the body's energy use, and we're wired to conserve it. Using AI under pressure isn't laziness. But it does mean we stop evaluating what comes back.
Confidence gaps are trickier. Without experience in a domain, it's easy to treat AI as the expert. Edwards names this precisely: the knowledge illusion. The feeling that you understand something because you've been near it. Proximity to knowledge isn't possession of it. If you've reviewed an AI-generated analysis without the expertise to interrogate it, you haven't actually validated it.
Then there's what Edwards calls the novelty trap. She writes: “Here’s the risk I’ve learned to watch for: sometimes I pick the new arrangement because it’s new to me, not because it’s actually better. Novelty feels like insight. Fresh feels like right.” AI reframes things constantly. Sometimes the reframe is genuinely better. Sometimes it just feels better because it's different. That's a hard distinction to make in the moment.
Psychological safety matters too, and it tends to go unspoken. If people don't feel comfortable admitting they used AI, or admitting they're not sure the output is right, they're more likely to let it through unchallenged. The shame of not knowing, or of having leaned on AI for something they feel they should have done themselves, is real. It just doesn't come up in team meetings. (If your organisation is grappling with how to build that safety around AI uncertainty specifically, Sue Cunningham at the Uncertainty Lab does excellent work in this space.)
And then there's the slow one: consistently handing off tasks that feel routine but aren't. A stakeholder update, a grant acknowledgement, a board summary. These feel like admin but they carry relationship weight. AI will produce something that's grammatically correct and structurally sound. It can also strip out the nuance that makes those communications actually land. The muscle atrophies. The false economy shows up later.
How we got here
AI development and adoption has been shaped largely by a productivity and efficiency logic. More output, faster, at lower cost. For commercial organisations, that framing is at least internally coherent. For NFPs, it's often a category error. The value of your work isn't always measured in throughput. So when AI governance conversations default to ‘how do we get more done,’ something important gets missed from the start.
There's also a structural problem with how AI is designed. It's built for individual use. The conversations are 1:1. The memory is personal. The configuration, the prompts, the shortcuts each person has developed: none of that is visible to the rest of the team. Everyone's running their own AI in parallel, and from the outside you can't see how far down the track anyone is, or what decisions have already been shaped by conversations nobody else was part of.
In organisations that run on collective decision-making, shared mission, and earned trust, this matters. It's not that individuals are doing anything wrong. It's that the technology, as currently designed, optimises for individual cognition. And that sits in genuine tension with how good organisations actually work.
The micromanagement problem
I'm a people-pleaser. Specifically, when I'm faced with strong pushback from someone who knows more than me, my instinct has always been to surrender and assume they're right. I've worked on this for years. I've learned to hold my ground when I'm the expert, to find the common ground, to ask the question rather than capitulate. But when I genuinely am not the expert, I defer. It's not weakness. It's respecting the expertise of those around me.
AI knows more than me in a lot of domains. So suddenly I'm faced with a new version of a very old problem: overruling the expert.
It happened to me recently when I was writing a LinkedIn post. I'm not a naturally confident writer; it's a task where I've always deferred to the expertise of copywriters on my teams. So when Claude rearranged my words and restructured my sentences, I let it. And then it didn't feel like me anymore. And I didn't feel confident enough to go back and fix it. I was disempowered and frustrated, and the post didn't go out at all.
Here's the thing: Claude does know how to write better than I do in some technical respects. It's better at grammar. (I’ll be honest, I didn't even know what an em dash was until people started complaining that AI overuses them.) But I have good ideas, and I respect the people who read my work. So I want my writing to be better, but I also want it to be mine.
What I was left with was this: how do you wrangle an unruly expert? Suddenly, I was in the position of micromanaging a very competent team member. Which isn't my style, and not a dynamic that produces good work.
I don't think I'm unusual in this. Helen Edwards describes a version of the same experience in Chapter 1 of Stay Human: generating frameworks with Claude for hours, feeling clear and productive inside the process, and then being completely unable to explain the work to her partner when he asked. The clarity was borrowed. The thinking wasn't hers.
The answer isn't less AI. That's not what either of us is arguing. The answer is more deliberate AI. More awareness of what you're accepting and why. More visibility across the team about what AI is contributing and what humans are changing. More structure around the moments when it matters most.
Which is, essentially, a governance question.
What this means for NFP governance
The governance challenge for NFPs isn't whether to use AI. It's how to organise individual AI use so that it strengthens collective thinking rather than quietly replacing it. Here are five practices you could consider implementing with your team.
These five practices aren't restrictions on AI use. They're what make deep integration sustainable.
1. Make AI reasoning a normal part of team conversation.
When someone brings a recommendation, a draft, or an analysis to a meeting, they briefly name what AI contributed and what they changed or rejected. Not as a confession. As a habit. It puts the human judgment back in the room, and gives the rest of the team something to actually engage with rather than just accept.
2. Build shared prompts for shared work.
For recurring deliverables (funding narratives, board reports, stakeholder updates), develop agreed starting points for how AI is briefed. It doesn't mean everyone uses AI identically. It means the team has shaped something together, rather than twelve people running their own parallel versions of the same task and hoping the outputs are compatible.
3. Separate generation from evaluation, deliberately.
The person who used AI to produce something shouldn't be the only one to assess it, particularly for high-stakes work. This isn't a new approval layer. It's a lightweight norm: a second human in the loop before consequential outputs land. For NFPs, where mission integrity and stakeholder trust matter, this is also the most direct way to catch the knowledge illusion before it becomes a problem.
4. Create space to say ‘this doesn’t feel right.’
Cognitive surrender is most likely when people are under pressure, lack confidence, or don't feel safe admitting uncertainty. For deeper AI integration to work at a team level, people need to be able to flag when something feels off without it being a confrontation or an admission of incompetence. That requires naming it explicitly in your AI practice, not assuming it's covered by existing psychological safety. A concrete starting point: add ‘how is AI being used on this?’ as a standing question in project debriefs.
5. Audit the collective diet, not just individual output.
Periodically ask as a team: which tasks have we consistently handed to AI in the last quarter, and which of those used to require human judgment or collective input? This isn't about rolling anything back. It's about knowing where shared capacity is quietly eroding before it becomes a problem. For NFPs, the tasks most at risk are often the ones that carry the most organisational meaning: funding narratives, impact reporting, and community communications. These feel routine. They're not.
Where I’ve landed
I believe that the value of AI for organisations isn't mainly in the efficiency. It's in what becomes possible when your people are genuinely integrated with the technology and still thinking for themselves. That doesn't happen by accident. It requires some deliberate structure around how AI is used, and conversations most teams aren't having yet.
If your team is using AI mostly in silos, mostly for speed, and mostly without talking about it, you're probably getting some of the efficiency and not much else. The conversation that changes that isn't a technology conversation. It's a culture one.
Helen Edwards, Stay Human: Authoring Your Mind in the AI Age, Artificiality Institute. Published serially at journal.artificialityinstitute.org. Chapter 5 (March 2026).
Liu et al. (2025), “AI Assistance Reduces Persistence and Hurts Independent Performance,” arXiv preprint. CMU / University of Oxford / MIT / UCLA. ai-project-website.github.io/AI-assistance-reduces-persistence/
Published by Brightside Collab | Written by Sarah Croney