For most of my career covering the space industry, I assumed that the loudest voices at engineering reviews and investor pitches were also the sharpest minds in the room. I conflated fluency with mastery, certainty with correctness, polished delivery with deep knowledge. It took watching several confident founders crash programs into the ground (sometimes literally) before I started questioning that assumption. The research on the gap between confidence and competence turns out to be more complicated, and more unsettling, than the popular narrative suggests.
The Story We All Believe
You’ve heard some version of the Dunning-Kruger effect. It’s become cultural shorthand for a specific kind of person: the incompetent blowhard who doesn’t know enough to know what they don’t know. In its popular form, the effect is summarized as the claim that those with the least skill are the least aware of their limitations.
The concept has become a weapon. We deploy it against political opponents, annoying coworkers, and anyone we’d like to dismiss. It gives us a tidy explanation for why confident idiots seem to run so much of the world. And it flatters the person invoking it, because the implication is always: I’m not like that.
But the actual research paints a different picture. And the differences matter for how we evaluate people, build teams, and make decisions about who deserves authority.

What Dunning and Kruger Actually Found
In the 1990s, David Dunning and Justin Kruger at Cornell University conducted studies asking undergraduates to estimate their performance on logic tests in two ways. First: how many questions did you get right? Second: how did you do compared to everyone else?
The raw self-assessment wasn’t wildly off. As math professor Eric C. Gaze of Bowdoin College explained in his analysis of the original study, the lowest-scoring participants overestimated their performance while the top-scoring participants underestimated theirs. Both groups were off by roughly the same magnitude in absolute terms.
Not great, but not catastrophic either.
The dramatic results came from the second question, the peer comparison. The lowest-scoring students estimated they performed much better relative to their peers than they actually did. This is where the “unskilled and unaware” narrative was born.
The Math Problem Nobody Talks About
Here’s where the story gets uncomfortable for anyone who has ever smugly referenced Dunning-Kruger at a dinner party.
Gaze and his colleagues demonstrated that this effect can be reproduced using randomly generated data with no humans involved at all. They created fictional people and randomly assigned each one a test score and a self-assessment ranking. Then they divided these fictional people into quarters, exactly as Dunning and Kruger did.
Because the self-assessment rankings were randomly distributed, each quarter averaged a self-ranking around the 50th percentile. But the bottom quarter’s actual performance averaged around the 12.5th percentile. Random data, zero cognition, and you still get a massive “overestimation” for the bottom group. The effect appeared without any human psychology operating at all.
This is a mathematical artifact, not a discovery about human cognition. The bottom group will always appear to overestimate the most because they are farthest from the ceiling. The top group will always appear to underestimate because they are closest to it. This is what happens when you subtract actual performance from self-assessment and sort by performance. The shape of the graph is baked into the method.
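The random-data demonstration is easy to reproduce yourself. Here is a minimal sketch in Python (the sample size and uniform distributions are our illustrative assumptions, not necessarily Gaze’s exact setup): give each fictional person a random test score and a completely unrelated random self-ranking, sort by score, split into quarters, and compare each quarter’s average self-ranking to its actual percentile.

```python
import random

random.seed(0)
N = 10_000  # number of fictional test-takers (arbitrary choice)

# Each "person" is (test_score, self_assessed_percentile).
# The two values are independent: no psychology, just noise.
people = [(random.random(), random.uniform(0, 100)) for _ in range(N)]

# Sort by actual score and divide into quarters, as in the original study.
people.sort(key=lambda p: p[0])
quartiles = [people[i * N // 4:(i + 1) * N // 4] for i in range(4)]

for q, group in enumerate(quartiles, start=1):
    # By construction, the q-th quarter's true percentile midpoint
    # is 12.5, 37.5, 62.5, or 87.5.
    actual = (q - 0.5) * 25
    self_rank = sum(p[1] for p in group) / len(group)
    print(f"Quartile {q}: actual percentile ≈ {actual:.1f}, "
          f"mean self-ranking ≈ {self_rank:.1f}")
```

Every quarter’s mean self-ranking hovers near 50, because the self-rankings are pure noise. Yet the bottom quarter appears to “overestimate” by nearly 40 percentile points and the top quarter to “underestimate” by a similar margin, purely because of where the quarters sit relative to the floor and ceiling.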
What’s Actually Going On
The real effect Dunning and Kruger captured is simpler and far more universal: most people think they’re better than average. Studies show that the vast majority of Americans consider themselves better drivers than average, and similar percentages of professionals in various fields think they’re more skilled than their peers. This pattern repeats across domain after domain.
It is, of course, mathematically impossible for most people to be better than the median, which is the relevant benchmark when the question asks for a relative ranking.
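One caveat is worth making explicit: “average” here has to mean the median. A skewed distribution can put a majority above the arithmetic mean, but never a majority strictly above the median. A quick sketch with hypothetical skill scores (the numbers are ours, chosen only to make the skew obvious):

```python
# Hypothetical driving-skill scores: mostly decent drivers,
# plus one very bad one dragging the mean down.
scores = [7, 8, 9, 7, 8, 9, 7, 8, 9, 1]

mean = sum(scores) / len(scores)            # 7.3
above_mean = sum(s > mean for s in scores)  # 6 of 10 beat the mean

ordered = sorted(scores)
median = (ordered[4] + ordered[5]) / 2      # 8.0
above_median = sum(s > median for s in scores)  # only 3 of 10 beat the median

print(f"mean={mean}, above mean: {above_mean}/10")
print(f"median={median}, above median: {above_median}/10")
```

So when a majority of drivers rate themselves above the mean, they are not automatically all wrong. When a majority rate themselves above the median, arithmetic guarantees that most of them are.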
This better-than-average effect is genuine and well-documented. But it applies to nearly everyone, not just the least skilled. The incompetent aren’t uniquely deluded. They’re subject to the same bias as the rest of us, amplified by a statistical design that makes their error look enormous.
When researchers designed different kinds of tests, where students rated their confidence on each individual question rather than ranking themselves against peers, the results were striking. Among students who scored in the bottom quarter, the majority were fairly good at estimating their real ability. Only a small percentage significantly overestimated their skills, while a few actually underestimated them.
Low-skilled people, it turns out, mostly know they’re low-skilled. They’re not walking around in a fog of self-congratulation. The popular narrative has this almost exactly backward.
Why Confidence Wins Anyway
So if incompetent people aren’t uniquely overconfident, why does the most confident person in the room so often turn out to be mediocre?
The answer has less to do with cognitive bias and more to do with incentive structures. Confidence is rewarded in almost every social and professional context. Job interviews, pitches, meetings, performance reviews: these are environments that select for people who project certainty. The person who expresses uncertainty or says they need more time to study an issue is almost never the one who gets promoted.
This creates a sorting effect. Over time, organizations and social hierarchies tend to elevate people who perform confidence rather than demonstrate competence. The confident and competent rise too, of course. But confidence alone provides a significant boost, and the absence of confidence imposes a significant penalty, regardless of actual ability.
I’ve seen this play out across the space industry for years. Companies led by supremely confident founders who could mesmerize investors but couldn’t solve basic thermal management problems. Programs that sounded brilliant in boardrooms and fell apart on the test stand. The confidence was never the problem exactly. The problem was that nobody built a system to check whether the confidence was warranted.
The Confidence-Competence Trap in Practice
This dynamic gets especially dangerous in high-stakes environments. Overconfidence in decision-making has been linked to failures in trade policy, military strategy, and organizational management. When leaders are rewarded for projecting certainty, the institution loses its ability to course-correct.
Consider how this connects to something we’ve explored at Space Daily before: the terror of being average that haunts former “smart kids” into their careers and relationships. People who built their identity around being the most impressive person in the room develop a specific kind of fragility. They’re not necessarily the most confident performers. They might actually be less likely to project false certainty because the stakes of being wrong feel unbearable to them.
The most confident person in the room often isn’t the former prodigy. They’re the person who has never been forced to seriously reckon with their own limitations. Not because they’re stupid, but because no system has ever required it of them.

AI Makes This Worse
New research is revealing an interesting twist on overconfidence in the age of AI tools. Studies on AI-assisted decision making have found that when people use AI tools, they can fall into a reverse Dunning-Kruger trap, where the fluency and confidence of AI-generated outputs make users overestimate the quality of their own work. The AI sounds certain, so the user feels certain.
This is a new version of the confidence-competence gap. Instead of incompetent people failing to recognize their incompetence, you now have people of ordinary competence who cannot evaluate AI-generated outputs, because those outputs project more certainty than the underlying knowledge warrants. The confidence is borrowed, but the consequences are still real.
In my recent piece on people who compulsively correct others, I wrote about how precision becomes a survival mechanism for people who grew up in environments where being wrong carried real penalties. AI creates the opposite environment: an experience of effortless fluency where nothing feels wrong, even when it is.
What This Means for How We Evaluate People
If the Dunning-Kruger effect is mostly a statistical artifact, and the real finding is that everyone thinks they’re above average, then the practical implications shift. We can stop using “Dunning-Kruger” as a label for people we consider stupid. That was always more satisfying than accurate.
What we should focus on instead is building evaluation systems that don’t reward confidence as a proxy for skill. This is harder than it sounds.
Structured interviews with standardized questions perform better than unstructured conversations for predicting job performance. Blind auditions improve hiring in orchestras. Code reviews catch errors that confident programmers miss. The pattern is consistent: when you build systems that evaluate outputs rather than impressions, the confidence-competence gap narrows.
But most organizations still select for confidence. Most meeting cultures reward the person who speaks first and speaks loudly. Most pitch environments reward the founder who projects absolute certainty about their timeline and technology readiness level.
There’s a related piece Space Daily ran about people who always volunteer to go first, and the insight there is revealing: going first often isn’t about bravery, it’s about managing anxiety. The same could be said for projecting confidence. The most outwardly certain person in the room isn’t necessarily the most capable. They may simply be the person least able to tolerate the discomfort of uncertainty.
The Better Question
The original Dunning-Kruger finding has been cited thousands of times. It’s become common sense, the kind of idea people reference without questioning because it confirms something they already believe. Confident people are often wrong, therefore the most confident people must be the most wrong. It’s clean, intuitive, and appears to be largely a product of how the math was designed rather than how the human brain works.
The real question isn’t “why don’t incompetent people know they’re incompetent?” Most of them do.
The better question is: why do our institutions keep selecting for confidence over demonstrated ability? Why do we build hiring processes, promotion systems, leadership pipelines, and investment structures that treat projected certainty as a reliable signal?
The answer probably has to do with speed and simplicity. Evaluating actual competence is slow, expensive, and ambiguous. Evaluating confidence takes about thirty seconds. We default to the easy signal because the hard signal costs more to extract.
But the cost of getting this wrong is enormous. In organizations, it means the wrong people end up making decisions. In public discourse, it means we mistake fluency for expertise. In technology, it means we build tools (and now AI systems) that amplify confidence without anchoring it to reality.
Recalibration, Not Debunking
None of this means overconfidence isn’t real. It is. The better-than-average effect is one of the most replicated findings in psychology. People systematically overrate themselves across nearly every domain, from driving to teaching to logical reasoning.
But this is a universal human tendency, not one concentrated among the least skilled. As research has shown, the people at the bottom of a performance distribution are roughly as accurate at self-assessment as everyone else. The majority of them know where they stand.
So when you look around a conference room, a boardroom, or an investor meeting and see someone radiating absolute certainty, the right response isn’t to assume they’re an example of Dunning-Kruger. The right response is to ask: what’s the evidence? How do they know? What system is checking their confidence against reality?
Because confidence is cheap. Everyone has access to it. The people who have earned their certainty can usually tell you exactly why they’re confident, what the failure modes are, and where their knowledge gets thin. The people who haven’t earned it will just project it harder.
That gap between performed confidence and earned confidence is where most of the damage happens. And you won’t find it by running a personality test or invoking a catchy psychological concept. You find it by asking the second question, then the third, then the fourth, until the confidence either holds or it doesn’t.
The most confident person in the room is rarely the most competent. Not because incompetence creates overconfidence, but because our systems keep confusing one for the other, and because we’ve been too confident in a psychological theory to notice.
Photo by Vlada Karpovich on Pexels
