Chris Argyris: The Trap of “Skilled Incompetence”
Why Smart Leaders Are the Biggest Barrier to Transformation
Your organisation says it is committed to AI transformation. It has published the strategy. It has funded the centre of excellence. It has hired the head of AI. It has sent senior leaders on courses and launched pilot programmes. And nothing fundamental is changing.
Chris Argyris spent forty years explaining why. While other management thinkers focused on strategy, structure, and process, Argyris studied something more elemental: how the people in organisations actually reason when they feel threatened. What he found is devastating for anyone leading transformation: the more successful and senior the professional, the worse they are at learning. Not because they lack intelligence, but because their entire career has trained them to avoid the very conditions that learning requires.
Argyris’ work has been extended, debated, and refined by a generation of researchers, most notably through his long collaboration with Donald Schön. This short post focuses on the core insights that are most directly relevant to anyone leading organisational change today, particularly in the context of AI adoption.
1. Single-Loop and Double-Loop Learning: The Thermostat and the Question
Argyris’ most enduring contribution is the distinction between single-loop and double-loop learning. Single-loop learning detects and corrects errors within existing assumptions, like a thermostat that adjusts the heating to maintain a set temperature. It never asks whether the temperature setting itself is right. Double-loop learning questions the assumptions: is this the right temperature? Should we even be heating this room? Is the room the right shape?
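The thermostat analogy maps directly onto control logic. A minimal Python sketch makes the distinction concrete - the setpoint, function names, and occupancy check are my illustrative assumptions, not Argyris’ own formulation:

```python
# Illustrative sketch only: single-loop vs double-loop learning as control logic.

SETPOINT = 21.0  # the governing assumption: "21 degrees is the right temperature"

def single_loop(current_temp: float) -> str:
    """Detect and correct error WITHIN the existing assumption.
    The setpoint itself is never questioned."""
    return "heat on" if current_temp < SETPOINT else "heat off"

def double_loop(current_temp: float, room_occupied: bool) -> str:
    """Question the governing assumption before acting on it."""
    if not room_occupied:
        return "challenge the goal: why are we heating an empty room?"
    # Only once the assumption survives scrutiny does single-loop
    # correction proceed as before.
    return single_loop(current_temp)

print(single_loop(18.0))         # corrects within the frame: "heat on"
print(double_loop(18.0, False))  # questions the frame itself
```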
Most organisations, and the leaders who run them, are single-loop machines. They optimise relentlessly within a framework they never examine. When AI-generated code arrives and works, single-loop learning asks: “How do we govern this? How do we test it? How do we make sure it meets our standards?” These are important questions. They are also the wrong first questions, because they all assume the existing framework remains valid.
Double-loop learning would ask: if code can be generated from specifications, do we still need the distinction between requirements and development? Do we still need the team structures built around that distinction? Is our entire engineering operating model built on the assumption that code is the bottleneck, and is that assumption no longer true?
This second set of questions is almost never asked. Not because people are stupid, but because asking them threatens established competence in the organisation. Drucker identified that the knowledge worker must define the task. Argyris reveals why that redefinition is so hard: defining the task differently means admitting that how we defined it before might have been wrong, and that admission threatens the identity of the people whose expertise was built on the old definition.
2. Espoused Theory and Theory-in-Use: The Gap Where Dysfunction Lives
Argyris drew a sharp distinction between two kinds of theory that govern behaviour. Espoused theory is what people say they believe: the values they articulate, the principles they put on slides, the culture they describe in town halls. Theory-in-use is what actually governs their behaviour: the mental models and decision rules that operate in practice, often without conscious awareness.
The gap between the two is where organisational dysfunction plays out.
Watch this play out in AI adoption. The espoused theory: “We are committed to AI transformation. We encourage experimentation. We welcome innovation.” The theory-in-use: “We will adopt AI in ways that do not threaten any existing structure, hierarchy, or career path.” These two positions are fundamentally incompatible, and the inability to discuss this incompatibility is the primary obstacle to organisational learning.
This is not hypocrisy. The people who espouse openness to change genuinely believe they mean it. Argyris’ insight is that theories-in-use operate below conscious awareness. People are “skilled” at not seeing the gap. A CIO who says “we embrace AI” and simultaneously requires every AI-generated artefact to pass through the same governance process that was designed for handwritten code is not lying. They are enacting a theory-in-use - “change must not disrupt the structures I control” - that they cannot see because seeing it would require them to question their own role.
Kahneman provides the cognitive mechanism for this blindness. System 1 constructs a coherent story - “we are transforming” - from the available evidence and suppresses awareness of contradictions. WYSIATI (What You See Is All There Is) means the CIO literally cannot see the gap between their espoused commitment to transformation and their actual behaviour of protecting existing structures. The story is too coherent, and coherence feels like truth.
3. Defensive Routines: The Undiscussable and Its Undiscussability
Argyris discovered that organisations develop defensive routines: patterns of behaviour that prevent embarrassment or threat but block learning. These routines have a characteristic property - they are self-sealing. The routine itself cannot be discussed, and the fact that it cannot be discussed also cannot be discussed.
Consider the senior architect who has spent twenty years building distributed systems. AI-generated code arrives and works. It does not handle every edge case. It does not meet every governance requirement. The architect points all of this out, correctly and accurately, but the valid criticism functions as a defensive routine: it prevents the deeper question from being asked. If AI can generate 80% of the code that this architect currently writes, what is their role now? That question is undiscussable. And the fact that it is undiscussable is also undiscussable. Anyone who raises it will be met not with argument but with deflection: “You don’t understand the complexity of our systems.”
Mintzberg (to be discussed in a few days’ time) showed that strategy emerges from the daily decisions of the people doing the work, and that suppressing emergent patterns is suppressing your strategy. Argyris explains the mechanism by which that suppression happens: defensive routines make the emergent patterns undiscussable because discussing them would require admitting that the existing strategy — and the people who built it — might be wrong.
The defensive routines in AI adoption are predictable and pervasive. “We tried AI and it wasn’t enterprise-ready”: a conclusion reached after testing AI on tasks designed to showcase its limitations rather than its strengths. “Our domain is too complex for AI”: an assertion that protects existing expertise by framing it as irreplaceable. “We need to train our developers in AI”: a framing that may preserve the existing role structure while adding a new skill to the existing job description, rather than asking whether the job description itself should change.
Each of these statements contains truth, and that is precisely what makes them effective as defensive routines. Argyris’ point is not that the criticism is wrong but that the valid criticism is being used to prevent a more important conversation from happening.
4. Model I and Model II: Two Ways of Reasoning
Argyris mapped these patterns into two models of behaviour. Model I - the default for almost everyone, especially professionals - is governed by four values: maintain unilateral control, maximise winning and minimise losing, suppress negative feelings, and be rational. These sound reasonable. They can also be a recipe for organisational paralysis.
Under Model I, a leader confronted with evidence that their AI strategy is failing will reinterpret the evidence (“it’s too early to judge”), blame external factors (“the vendor didn’t deliver”), suppress the emotional reality (“morale is fine, people just need time”), and reframe rationally (“we are on track against the revised milestones”). At no point do they question the governing values that produced the strategy. This is single-loop reasoning.
Model II operates under different governing values:
use valid information as the basis for action,
ensure free and informed choice,
generate internal commitment to decisions.
Under Model II, the same leader would share the evidence of failure openly, invite genuine challenge to the strategy, acknowledge the emotional toll on the team, and create conditions where people could disagree without career risk.
Model II sounds like what every leadership book advocates, yet it is rare in practice. Why? Because Model II requires vulnerability. It requires the leader to say “I may be wrong” in a culture that rewards certainty, to surface conflict in a culture that rewards harmony, and to admit that they do not know the answer in a culture that promotes people who appear to know everything.
Weick (also being discussed shortly) showed that understanding follows action; people act first and make sense later. Argyris reveals the constraint on this process: defensive routines prevent people from acting in ways that would generate new understanding, because the actions that produce learning are precisely the actions that Model I reasoning forbids. You cannot learn from an experiment you are too afraid to run, and Model I ensures that the experiments most likely to produce learning are the ones most likely to be killed.
5. Skilled Incompetence: Why the Best People Are the Worst at This
Argyris coined the term skilled incompetence to describe what happens when highly capable professionals apply their considerable skills to avoiding learning. They are not incompetent in the ordinary sense; they are extraordinarily competent at behaviours that prevent the organisation from learning. They smooth over conflict before it can produce insight. They send mixed messages to avoid commitment. They design meetings that produce the appearance of agreement without genuine alignment.
This matters because the people most central to AI transformation - senior engineers, architects, domain experts, technical leaders - are exactly the people who have spent the longest perfecting these skills (this is me!). They have risen precisely because they learned how to navigate organisations without exposing themselves to situations where they might fail publicly. And now AI adoption demands exactly what they have spent their careers avoiding: admitting that their existing expertise, while still valuable, is no longer sufficient. That the frameworks they have mastered need revision. That the roles they have built may need to change. As an exercise, think about the advice you would give a young engineer starting out now… and apply it ‘backwards’ for those at the end of their careers.
Senge described this as the discipline of surfacing Mental Models — making the implicit explicit so it can be examined. Argyris goes further than Senge by explaining why surfacing mental models is so personally threatening: because our mental models are not just cognitive maps of the world, they are load-bearing structures of our professional identity. Questioning them feels like questioning the self.
This connects directly to Mintzberg’s configurations. In a Professional Bureaucracy - and most enterprise technology organisations are exactly this - power derives from expertise. When AI changes what counts as expertise, the entire power structure of the organisation is at stake. Skilled incompetence is not a character flaw; it is a rational response to a real threat. Giddens (coming in the next day or so) would recognise this as structuration in action - professionals reproduce the very power structures that their defensive routines protect, and in reproducing them, they make the routines harder to change.
6. From Diagnosis to Action: Creating the Conditions for Double-Loop Learning
Argyris was not merely a diagnostician. He spent decades working with organisations to develop the capacity for double-loop learning. The core requirement is deceptively simple: people must learn to make their reasoning explicit and genuinely testable. Not “I think X” but “I think X because of Y and Z, and here is the evidence that would change my mind.” For those of a philosophical bent, recall the work of Karl Popper on Falsification.
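To make this concrete for an engineering audience, here is a hypothetical sketch of a claim written in that explicit, falsifiable form. The schema and the example are mine, not Argyris’ - he prescribed no such structure - and every field name is purely illustrative:

```python
# Hypothetical sketch: Model II reasoning made explicit as data.
from dataclasses import dataclass

@dataclass
class TestableClaim:
    claim: str                  # "I think X"
    reasoning: list[str]        # "...because of Y and Z"
    would_change_my_mind: str   # the evidence that falsifies the claim

governance_assumption = TestableClaim(
    claim="AI-generated code needs the same review gates as hand-written code",
    reasoning=[
        "our audit obligations attach to the artefact, not its author",
        "reviewers cannot yet reliably assess generated code by other means",
    ],
    would_change_my_mind=(
        "defect data showing that specification review alone catches "
        "errors in generated code as reliably as the full gate does"
    ),
)
```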
This is Model II in practice, and it is hard to do. It requires that leaders publicly state the assumptions behind their AI strategy and specify what evidence would cause them to revise it. It requires that architects explain not just what they recommend but the reasoning chain that produced the recommendation and genuinely invite others to find flaws in that chain. It requires that the organisation create forums where disagreement is not merely tolerated but structurally required. I have seen communities work this way - it is possible. We will discuss the role of trust in a future article too.
Argyris found that teaching Model II cannot be done through lectures or training programmes. It requires role-modelling: leaders must demonstrate the behaviour themselves. If the head of AI cannot say “I was wrong about our approach to specification governance, and here is what I learned,” then no amount of psychological safety workshops will produce double-loop learning in the organisation. The signal that matters is not what the organisation says about learning. It is what happens to the first person who publicly admits they were wrong.
This is where Dekker’s thinking on just culture becomes essential. If the response to honest admission of error is blame, however subtly expressed, then Model I is the only rational choice. Argyris and Dekker converge on the same point from different directions: you cannot mandate learning. You can only create the conditions in which learning becomes safe enough to attempt. And the most important condition is what actually happens, not what the policy says will happen, when someone surfaces an uncomfortable truth.
Seligman adds a further dimension: when people have experienced repeated failures of organisational learning - when they have raised uncomfortable truths before and been punished for it - they develop learned helplessness and stop trying. Not because they lack insight, but because experience has taught them that insight is not rewarded. The defensive routines win not because they are strong, but because the people who might challenge them have been trained, through repeated experience, that challenge is futile.
This is the practical test. Not the espoused theory about openness and learning, but the theory-in-use that governs what actually happens when someone surfaces an uncomfortable truth. Does the organisation learn from it? Or does it punish the messenger and preserve the routine?
I have introduced a number of additional thinkers who help to triangulate Argyris’ thinking. I hope this has created enough hooks for future thinking, and enough interest to bring you back for the articles about them!
(An Organisational Prompt is something you can do now…)
Organisational Prompt
The Espoused Theory Audit: Pick one statement your organisation makes about AI adoption, e.g. “We encourage experimentation,” “We are committed to transformation,” “We value innovation.” Now find one concrete example where the organisation’s actual behaviour contradicts that statement. Write both down side by side. Show them to a trusted colleague and ask: “Can we discuss this contradiction in a leadership meeting?” If the answer is no, or if you already know the answer is no without needing to ask, you have found a defensive routine. That defensive routine is more important than your AI strategy, because it determines whether your AI strategy can learn.
The big idea here is that a largely ignored obstacle to AI transformation (and, frankly, any other new idea being introduced) is not (only) technology, funding, talent or strategy. It is the organisation’s inability to examine its own reasoning. Argyris showed that this inability is not a weakness to be overcome but a skill to be unlearned, and that unlearning is the hardest work any organisation will ever do.
Further Reading
Chris Argyris: “Teaching Smart People How to Learn” (Harvard Business Review, 1991) - The single most important article on why success-oriented professionals resist learning. Read this before anything else if you are short on time.
Chris Argyris: Overcoming Organizational Defenses - The most accessible entry point to Argyris’ thinking. Short, direct, and full of examples that will make you uncomfortably recognise your own organisation.
Chris Argyris and Donald Schön: Organizational Learning: A Theory of Action Perspective - The foundational text on single-loop and double-loop learning. Academic in style but essential in substance.
Chris Argyris: Knowledge for Action - Where Argyris shows how to move from diagnosis to intervention. The practical companion to the theoretical work.
Disclaimer
I write about the industry and its approach in general. None of the opinions or examples in my articles necessarily relate to present or past employers. I draw on conversations with many practitioners and all views are my own.

