Why Smart Leaders Make Predictable Mistakes
November 2025
Most leadership failures are not caused by a lack of intelligence.
They are caused by certainty.
I’ve spent much of my career around senior leaders who are exceptionally capable, deeply experienced, and genuinely well-intentioned. Yet I’ve repeatedly watched the same pattern unfold: decisions that felt obvious at the time later revealed themselves to be avoidable errors.
What’s striking is not the failure itself, but how predictable it often was.
In many cases, the warning signs were present early. Assumptions were untested. Claims were persuasive but thin. Dissent was muted. Momentum took over. And once a decision had emotional or reputational investment attached to it, reversing course became almost impossible.
The problem is not stupidity.
It is how intelligent people behave under pressure.
Confidence is not the same as clarity
Senior roles reward decisiveness. Over time, this creates a subtle bias: leaders begin to equate confidence with correctness. The more assertively a view is expressed, the more weight it appears to carry—particularly in time-constrained environments.
But confidence is not evidence.
And persuasion is not validation.
Many poor decisions begin with a simple, unspoken assumption: “This feels right, therefore it probably is.”
Silence is rarely agreement
One of the most dangerous moments in any leadership meeting is when no one challenges a proposal.
Silence is often interpreted as alignment. In reality, it is more commonly a signal of uncertainty, fatigue, or reluctance to be seen as obstructive. Senior teams rarely lack intelligence; they lack psychological permission to pause momentum.
When silence replaces challenge, risk accumulates quietly.
Better decisions require structured pauses
The antidote to predictable failure is not more data or longer meetings. It is the discipline to introduce a brief, deliberate pause—long enough to surface assumptions, invite dissent, and distinguish belief from fact.
This is the role of frameworks like Veritus: not to slow progress, but to improve decision quality before commitment.
The most effective leaders are not those who never make mistakes.
They are those who recognise patterns early—and intervene before certainty hardens into error.
AI Doesn’t Fix Bad Decisions — It Exposes Them
February 2026
Artificial intelligence is often positioned as a solution to complexity. In practice, it tends to reveal it.
Over the past few years, I’ve seen AI initiatives fail in remarkably similar ways across very different organisations. The technology varies. The vendors change. The outcomes, however, are often predictable.
The root cause is rarely technical.
AI does not fail because algorithms are flawed.
It fails because decision-making around it is.
Technology amplifies behaviour
AI initiatives magnify existing organisational habits. Where governance is weak, AI accelerates confusion. Where incentives are misaligned, it reinforces the wrong outcomes. Where challenge is discouraged, it codifies assumptions at scale.
In environments where speed is rewarded and scrutiny is seen as friction, AI becomes performative rather than purposeful.
The decision to proceed often precedes clarity on:
- the problem being solved
- the evidence supporting the approach
- the operational risks involved
- the incentives shaping the recommendation
Once that happens, the initiative is no longer being evaluated—it is being defended.
The danger of persuasive certainty
AI proposals are frequently delivered with confidence and urgency. Time-bound discounts, competitive pressure, and fear of missing out all play a role. These forces encourage leaders to move quickly, often before assumptions have been tested.
What’s missing is not intelligence.
It’s a mechanism to separate persuasion from proof.
Frameworks matter more than tools
The organisations that deploy AI successfully tend to share one characteristic: disciplined decision-making before commitment.
They slow down just enough to ask:
- What must be true for this to work?
- What evidence supports that belief?
- What risks are we accepting, explicitly?
- What would cause us to pause or reverse course?
These are not technical questions. They are leadership questions.
AI does not fix poor decision-making.
It exposes it—faster and at greater scale.