Kahneman: Leading the Two-System Organisation
Daniel Kahneman’s research on cognitive bias explains why rational transformation strategies meet irrational resistance.
A practitioner described a familiar scene. The transformation strategy was impeccable: clear objectives, phased delivery, measurable outcomes, executive alignment. The leadership team had spent months building the case. They presented it to the organisation with confidence. Within weeks, the resistance was everywhere, but nowhere anyone could point to. People nodded in meetings and changed nothing. Teams found reasons to delay. The strategy was logically unassailable and organisationally inert. The leadership team’s conclusion: the people just do not get it. Their actual problem: they had designed a strategy for a species that does not exist.
Daniel Kahneman won the Nobel Prize in Economics in 2002 for demonstrating that human beings are not the rational actors that classical economics assumes. His research, conducted over decades with Amos Tversky and later with Gary Klein, reveals that human cognition is dominated by fast, automatic, bias-prone processing that operates below conscious awareness. This is not a flaw in the species. It is the architecture of the species. And it means that every transformation strategy designed as a rational argument addressed to rational people is addressing an audience that shows up only intermittently, tires quickly, and defers to something far less analytical for the vast majority of its decisions. Argyris diagnosed the defensive routines that prevent learning. Dekker showed how blame activates them. Kahneman reveals the cognitive machinery beneath both: the automatic processing that makes defensive routines feel like clear thinking and makes blame feel like accountability.
1. Two Systems, Two Kinds of Learning
Kahneman describes cognition as operating through two systems. System 1 is fast, automatic, emotional, and always on. It handles the vast majority of our thinking. System 2 is slow, deliberate, analytical, and expensive; it requires significant energy and engages only when forced. Leaders design transformations using System 2: spreadsheets, roadmaps, business cases. Employees experience the change through System 1: fear, habit, social intuition, the automatic reading of whether this situation is safe or threatening.
The connection to Bateson is structural. System 1 operates at Learning I: it pattern-matches within existing frames, correcting errors without questioning the frame itself. System 2 is needed for Learning II: questioning the governing assumptions, changing the frame. But System 2 is lazy. It depletes under cognitive load. And transformation is, by definition, a period of sustained cognitive load. When the load increases, people revert to System 1, which means they revert to Learning I, which means they correct errors within their existing framework rather than questioning whether the framework is the problem. You cannot explain a transformation into existence because the explanation is a System 2 product addressed to a System 1 audience. The audience will construct a coherent story from the explanation, confirm that it makes sense, and continue operating exactly as before.
This is not the same as Dekker’s local rationality, but it sits alongside it. Dekker shows that behaviour that looks irrational to the planner makes sense to the practitioner given their constraints. Kahneman shows that the planner’s own confidence in the plan is itself a System 1 product: a feeling of coherence mistaken for evidence of correctness.
2. WYSIATI: The Coherence Illusion
Kahneman’s most dangerous concept for leaders is WYSIATI: What You See Is All There Is. The mind constructs the best possible story from whatever information is available, and it does so without any awareness of what is missing. Confidence is determined by the coherence of the story, not by the quality of the evidence. A simple, internally consistent narrative feels true regardless of how much it leaves out.
This is an information pathology. The leader who sits in a boardroom, reviews a partial dataset, and constructs a coherent explanation of why the transformation is stalling has not diagnosed the problem. They have constructed a narrative that feels like a diagnosis. The coherence of the narrative suppresses the awareness that critical information is absent. And the further the leader is from the work, the more gaps the narrative must paper over, and the more confident the leader feels, because the story is simpler and therefore more coherent. Proximity to the work is the antidote to WYSIATI, because proximity introduces the messy, contradictory details that prevent premature coherence. The leader who has sat with a team attempting to use AI on a real problem will have a less coherent story and a more accurate one.
Bourdieu deepens this. The biases Kahneman describes are not random cognitive errors distributed evenly across the population. They are structured by the habitus. The leader’s System 1 has been trained by decades of experience in a particular organisational field, and it generates the automatic judgements that the field has rewarded. The senior executive who instinctively reaches for governance, reporting, and control in the face of uncertainty is not exhibiting a generic cognitive bias. They are exhibiting a bias shaped by a career in which governance, reporting, and control were the behaviours that produced success. Their System 1 is not random. It is biographical.
3. Loss Aversion: Why Resistance Is Rational
Kahneman and Tversky’s prospect theory demonstrates that losses loom roughly twice as large as equivalent gains. Losing something you have feels approximately twice as painful as gaining something of equal value feels good. This is not a quirk. It is a fundamental asymmetry in how human beings evaluate outcomes, and it explains why transformation strategies that emphasise future benefits consistently fail to overcome present anxiety.
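For readers who want the formal version, the asymmetry is captured in the prospect theory value function, which evaluates outcomes as gains or losses relative to a reference point and is steeper for losses than for gains:

```latex
% Prospect theory value function (Tversky & Kahneman, 1992).
% Outcomes x are coded relative to a reference point, not as final states.
v(x) =
  \begin{cases}
    x^{\alpha}            & \text{if } x \geq 0 \quad \text{(gain)} \\
    -\lambda\,(-x)^{\beta} & \text{if } x < 0 \quad \text{(loss)}
  \end{cases}
```

Tversky and Kahneman’s 1992 median parameter estimates were α ≈ β ≈ 0.88 and λ ≈ 2.25. The loss-aversion coefficient λ is what makes a loss weigh roughly twice as much as an equivalent gain.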
Change communications focus on what people will gain. But employees immediately feel the threat to what they already have: their status, their expertise, their daily routines, their sense of being good at their job. Even when the net outcome is objectively positive, the psychological weight of the loss triggers resistance that no amount of persuasion can overcome, because the persuasion is addressing System 2 while the loss is being processed by System 1.
Heifetz names this with precision. People do not resist change. They resist loss. And the losses are real: the developer whose craft is code and who is now told to write specifications is losing something they spent years building. Bourdieu would call it capital devaluation: the accumulated competence that provided professional identity, social standing, and daily meaning is being devalued by a shift in the field. The resistance is not irrational. It is a perfectly rational response to a genuine threat. The leader who dismisses it as “fear of the new” has failed Kahneman’s own test: they have constructed a coherent narrative (people resist because they are afraid) from incomplete evidence (they have not investigated what, specifically, is being lost). Heifetz’s prescription follows directly: name the losses. Acknowledge what is being left behind. Do not pretend the future will be costless. People can tolerate significant loss if it is named, honoured, and shared. They cannot tolerate loss that is denied.
4. The Planning Fallacy: Against the Inside View
Human beings systematically underestimate the time, costs, and risks of future actions while overestimating benefits. Kahneman calls this the planning fallacy, and it persists even when people have direct experience of previous projects that overran, because they take the “inside view” (focusing on the specifics of their case, which feels unique) rather than the “outside view” (looking at the statistical base rate of similar projects, which is usually sobering).
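To make the outside view concrete, here is a minimal sketch of reference-class forecasting in Python. The overrun ratios and the twelve-month estimate are illustrative placeholders, not data from any real programme; the mechanism is what matters: rescale the inside-view estimate by the track record of comparable projects.

```python
# Reference-class forecasting: adjust an inside-view estimate using the
# base rate of overruns in comparable past projects (the outside view).
# The ratios below are illustrative, not real data.

from statistics import median, quantiles

# Ratio of actual duration to planned duration for comparable initiatives.
historical_overrun_ratios = [1.4, 1.9, 1.1, 2.3, 1.6, 1.8, 1.3, 2.0]

def outside_view(inside_estimate_months: float) -> dict:
    """Rescale an inside-view estimate by the reference class's track record."""
    p50 = median(historical_overrun_ratios)
    # quantiles(n=4) returns the three quartile cut points; [-1] is the 75th percentile.
    p75 = quantiles(historical_overrun_ratios, n=4)[-1]
    return {
        "inside_view": inside_estimate_months,
        "base_rate_median": inside_estimate_months * p50,
        "base_rate_p75": inside_estimate_months * p75,
    }

print(outside_view(12.0))
# A 12-month inside-view plan rescales to roughly 20 months at the
# historical median and roughly 24 months at the 75th percentile.
```

The hard part is not the arithmetic. It is choosing the reference class honestly, rather than defining "comparable" narrowly enough that the base rate flatters the plan.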
Weick’s sensemaking framework explains why the inside view is so seductive. Sensemaking is retrospective: we understand what we have done after we have done it. But planning requires prospective sensemaking: making sense of what has not happened yet. The mind fills this gap with narrative, and the narrative is governed by WYSIATI: it is coherent, plausible, and dangerously incomplete. Mintzberg’s research on strategy provides the empirical evidence. Most realised strategy is emergent; it arises from accumulated action, not from the implementation of plans. The three-year transformation roadmap is a System 2 artefact that assumes a linear progression through a world that is not linear. It will be wrong. The question is whether the organisation has the capacity to adapt when it discovers this, or whether it has invested so much identity in the plan that deviation feels like failure.
Giddens adds the structural dimension. The plan, once committed to, becomes part of the organisation’s structure: roles are defined around it, governance is built to track it, careers are attached to it. The plan reproduces itself even after reality has moved on, because the structure that formed around it has its own momentum. Abandoning the plan is not just a cognitive challenge. It is an identity challenge (what does the programme director become if the programme is redesigned?) and an interaction challenge (the governance apparatus resists changes to the thing it was built to govern).
5. Expert Intuition: When the Habitus Decides
Kahneman’s collaboration with Gary Klein produced a valuable distinction. Expert intuition is reliable in “high-validity environments”: domains with clear rules, immediate feedback, and repeated practice (chess, firefighting, certain kinds of surgery). In these environments, System 1 can be trusted because it has been trained by thousands of cycles of action and feedback. But in “low-validity environments,” where feedback is delayed, ambiguous, or absent, expert intuition is statistically unreliable. Most business strategy operates in low-validity environments. Executive intuition about transformation is, on average, no better than a guess dressed in confidence.
Bourdieu makes this concrete. The leader’s intuition is not a generic faculty. It is the habitus: the accumulated dispositions formed through a career in a specific organisational field. In the domain where the habitus was formed, the intuition may be excellent. The CTO who has lived through three platform migrations has genuine intuitive expertise about platform migrations. But AI transformation is not a platform migration. It is a shift in the field itself. The habitus that was formed in the old field generates intuitions calibrated to the old rules. Trusting those intuitions in the new field is like trusting a chess grandmaster’s intuition in poker: the pattern-recognition is exquisite, but it is trained on the wrong game.
Stacey would push further. In complex responsive processes, the future cannot be predicted because it has not yet been constructed by the interactions from which it will emerge. Expert intuition, however well calibrated, cannot predict the outcome of interactions that have not yet occurred. The leader’s role is not to intuit the right answer but to create the conditions for the right answers to emerge from the interaction of people with the work. This requires tolerating not knowing: precisely the state that System 1, with its compulsive coherence-seeking, finds most intolerable.
6. The Cognitive Dimension of Every Barrier
Kahneman does not stand alone in this series. He provides the cognitive mechanism beneath the barriers that every other thinker has diagnosed. Argyris describes defensive routines; Kahneman shows that they operate at System 1 speed, which is why they are so difficult to interrupt. Dweck describes the fixed mindset; Kahneman shows that loss aversion makes the fixed mindset the cognitively cheaper option, because growth requires tolerating the losses that come with not yet being competent. Weick describes sensemaking failures; Kahneman shows that WYSIATI produces premature coherence that feels like sensemaking but is actually its opposite. Giddens describes structural reproduction; Kahneman shows that the cognitive defaults of the people within the structure are part of what reproduces it.
The practical implication is uncomfortable. You cannot remove cognitive bias. It is not a bug in the system; it is how the system works. What you can do is design the environment so that bias causes less damage. This means shortening feedback loops (so that the gap between action and consequence is small enough for System 2 to learn from), creating proximity to the work (so that WYSIATI has less room to operate), and naming losses explicitly (so that loss aversion is addressed rather than denied). It means, in Bateson’s terms, designing for Learning II: not correcting individual errors but changing the conditions under which errors are produced.
(An Organisational Prompt is something you can do now.)
Organisational Prompt
Take your current transformation strategy and identify the three strongest claims it makes about why the change will succeed. For each claim, ask two questions. First: what evidence would disprove this? If you cannot name specific disconfirming evidence, the claim is a WYSIATI narrative, not an analysis. Second: what has the base rate been for similar initiatives in your organisation or industry? If you are relying on the specifics of your case rather than the track record of comparable cases, you are inside the planning fallacy.
The point is not to abandon the strategy. It is to discover which parts of it are System 2 analysis and which parts are System 1 confidence dressed in analytical language. The distinction is usually more uncomfortable than leaders expect.
Further Reading
Daniel Kahneman, Thinking, Fast and Slow (2011). The definitive popular account of the dual-process research. Every chapter has implications for how organisations handle change, but the sections on WYSIATI, loss aversion, and the planning fallacy are the most directly relevant.
Daniel Kahneman, Paul Slovic, and Amos Tversky (eds.), Judgment Under Uncertainty: Heuristics and Biases (1982). The original academic papers. Dense but essential for understanding why the biases are structural, not incidental.
Daniel Kahneman and Gary Klein, Conditions for Intuitive Expertise: A Failure to Disagree (American Psychologist, 2009). The collaboration that produced the high-validity/low-validity distinction. Freely accessible through many university repositories.
Philip Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction (2015). The empirical evidence that most expert prediction is unreliable, and the techniques that improve it. The outside view, operationalised.
I write about the industry and its approach in general. None of the opinions or examples in my articles necessarily relate to present or past employers. I draw on conversations with many practitioners and all views are my own.