
What is the Dunning–Kruger effect and why it is just noise

Ex-User (9086)

Prolific Member
MC G is here with da truth for y'all.

The original Dunning–Kruger graph is wrong and misleading.

It's possible to produce the Dunning–Kruger effect without gathering any data at all: just use random noise as results and assume everyone has equal confidence. Here's an article explaining why the effect is autocorrelation over mutually dependent variables.
[Attached image: pic1.png]
It plots score quartile on the x axis and percentile score on the y axis, and self-assessment of results is also a function of the score x, which everyone rates as better than average.

STOP and think for a second about what they are doing. Say the results are represented by a variable R. On the x axis they plot R as a quartile, on the y axis they plot R as a percentile, and they look at self-assessment of R, call it Rs (R self-assessed), as the difference Rs minus R. That's a lot of one variable plotted everywhere, which should ring alarm bells in your heads.

Just assume that everyone thinks they scored 60%. We would get a y = 60% line (we don't gather any data, just input an arbitrary 60%), marked in red. This alone can be used to incorrectly show that even if everyone has identical confidence levels, the incompetent ones are "overconfident". First of all, that's obvious without any research, because by definition low R and high Rs equals "overconfident". And the level of underconfidence or overconfidence depends only on the arbitrary test's difficulty: increase the difficulty or make the answers random and everyone will be shown as overconfident.
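
To make that concrete, here is a minimal Python sketch of the scenario just described (purely illustrative, not from the original post or study): scores are random noise, every participant claims the same 60%, and binning by score quartile still yields the familiar "unskilled and overconfident" picture.

```python
# Illustrative only: this is NOT the original study's data or code.
import random

random.seed(0)
N = 100_000
scores = [random.uniform(0, 100) for _ in range(N)]  # pure noise "test results"
self_assessed = [60.0] * N                           # everyone claims "about 60%"

# Sort people by score and split into quartiles, bottom to top.
order = sorted(range(N), key=lambda i: scores[i])
quartiles = [order[q * N // 4:(q + 1) * N // 4] for q in range(4)]

for q, idx in enumerate(quartiles, start=1):
    mean_score = sum(scores[i] for i in idx) / len(idx)
    mean_self = sum(self_assessed[i] for i in idx) / len(idx)
    print(f"Quartile {q}: actual ~{mean_score:5.1f}, "
          f"self-assessed {mean_self:5.1f}, gap {mean_self - mean_score:+6.1f}")

# Bottom quartile shows a big positive gap ("overconfident"), the top a
# negative one, even though confidence never varied between people.
```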

In plain English, the graph is saying that there has to be a group of people whose confidence is higher than their competence. But what the graph actually shows is that incompetent people are the least confident and competent people are the most confident.
The "overconfidence" is caused by high average confidence of every human being. Honestly high average confidence is an absolute necessity for doing anything, otherwise people would not attempt to tackle any problems at all for fear of failure or due to their perception of poor results and they would never get better. Humans should be grateful for this positive bias.

If there is anything useful to say about the graph it is that:
1. Everyone has a similar amount of above average confidence, regardless of competence. This is well explained by the "better-than-average-effect" https://en.wikipedia.org/wiki/Illusory_superiority
2. Predicting results requires competence, and scoring high requires competence. Competence also has some effect in increasing confidence even further.
3. It took more than a decade for criticisms of this effect to start appearing. A lot of wrong, useless research is cited and abused by scientists without verification. This bad misleading research circulates in the public and leads to misinformation.

This thing is made up, created in Paint.
[Attached image: 68489025-b9c3ff80-025f-11ea-81c6-3d505246174d.png]
 

Cognisant

Prolific Member
So you're saying the Dunning–Kruger effect can be attributed to the Dunning–Kruger effect? :D
 

Ex-User (9086)

Prolific Member
So you're saying the Dunning–Kruger effect can be attributed to the Dunning–Kruger effect? :D
XD

[Attached image: 1652627971413.jpeg]


[Attached image: pic2.png]


I plot the same variable on the X and Y axes, so X/Y is always 1. The red line above the blue means "stupid" and below the blue line means "smart". Sample size: 8 billion people.

Dunning–Kruger effect replicated right there, folks. Can I get a $5 million research grant? Thanks
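
For anyone who wants to recreate the joke plot, here's a minimal matplotlib sketch, under the assumption (since the attached pic2.png isn't visible here) that the blue line is the identity y = x and the red line is a flat average self-assessment.

```python
# Illustrative re-creation of the kind of plot described above (assumption:
# blue = identity line y = x, red = a flat average self-assessment).
import matplotlib.pyplot as plt

x = list(range(0, 101))
plt.plot(x, x, color="blue", label="actual score (y = x)")
plt.plot(x, [60] * len(x), color="red", label="self-assessment (flat 60%)")
plt.xlabel("score")
plt.ylabel("score / self-assessment")
plt.legend()
plt.title("Same variable on both axes, plus a constant line")
plt.show()
```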
 


Haim

Worlds creator
Dunning–Kruger is for idiots to think they are smart, and for them to tell smart people that they only think they are smart because they are idiots.
So basically it is a way to call anyone dumb.
 

Daddy

Making the Frogs Gay
The "overconfidence" is caused by high average confidence of every human being. Honestly high average confidence is an absolute necessity for doing anything, otherwise people would not attempt to tackle any problems at all for fear of failure or due to their perception of poor results and they would never get better. Humans should be grateful for this positive bias.

Agreed. But I think it's misleading that Dunning–Kruger focuses on "overconfidence" when really it's a lack of experience. I don't think lacking experience should automatically make someone overconfident. They are only overconfident if they believe they can replace experience with theory. Another reason why I'm not a fan of stats: they are too freely interpreted.
 

dr froyd

Prolific Member
interesting case for sure. I agree with the conclusion of the article, but they severely butchered nearly all the statistical aspects of it (a case of Dunning–Kruger, perhaps?). First of all, "autocorrelation" has nothing to do with it; autocorrelation is a completely unrelated concept from time-series modelling. Secondly, the effect has nothing to do with plotting y - x vs x in general. The problem arises in the specific context where the numerical values are bounded by an upper and lower limit, as is the case with test scores and quantiles. Since someone who gets 100% on a test can only underestimate his abilities, and vice versa, you get a negative slope if you draw test scores and self-assessments randomly.
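
A small simulation of this point (an illustration under the stated assumption of independent, bounded random draws, not dr froyd's own code): the calibration gap per score quartile slopes downward even though self-assessments are unrelated to scores.

```python
# Illustration under the assumption of independent, uniformly distributed,
# bounded scores and self-assessments -- no psychology in the model at all.
import random

random.seed(1)
N = 100_000
score = [random.uniform(0, 100) for _ in range(N)]
self_assessed = [random.uniform(0, 100) for _ in range(N)]  # unrelated to score

order = sorted(range(N), key=lambda i: score[i])
for q in range(4):
    idx = order[q * N // 4:(q + 1) * N // 4]
    gap = sum(self_assessed[i] - score[i] for i in idx) / len(idx)
    print(f"Quartile {q + 1}: mean(self - actual) = {gap:+6.1f}")

# Prints roughly +37.5, +12.5, -12.5, -37.5: a downward-sloping gap from
# bounded, independent random draws alone.
```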

it also looks like the original Dunning–Kruger paper was well aware of these issues. Quote from the paper:

"At first blush, the reader may point to the regression effect as an
alternative interpretation of our results. After all, we examined the
perceptions of people who had scored extremely poorly on the
objective test we handed them, and found that their perceptions
were less extreme than their reality. Because perceptions of ability
are imperfectly correlated with actual ability, the regression effect
virtually guarantees this result. Moreover, because incompetent
participants scored close to the bottom of the distribution, it was
nearly impossible for them to underestimate their performance.
Despite the inevitability of the regression effect, we believe that
the overestimation we observed was more psychological than
artifactual. For one, if regression alone were to blame for our
results, then the magnitude of miscalibration among the bottom
quartile would be comparable with that of the top quartile. A
glance at Figure 1 quickly disabuses one of this notion. Still, we
believe this issue warrants empirical attention, which we devote in
Studies 3 and 4"
 

Hadoblado

think again losers
Yep. IIRC it was contested pretty early on, and the evidence wasn't there, but the concept was so appealing for framing opposition it caught on regardless.

I don't know if the evidence has caught up with it yet.

I do think it's real, based on anecdote, it makes too much sense. But it's not some sort of epistemic canon. I take issue with how it was treated as scientific fact when the evidence hadn't progressed much past Socratic wisdom.

I think of it as "inexperienced people lack the knowledge to make accurate metacognitive statements". I struggle with Dunning–Kruger all the time when I move on to something new. Funnily enough, this is the equivalent of simultaneously having meta-metacognition but a metacognitive deficit.

One belief I have regarding this is that it's important to become an expert in something. Expertise in one thing gives you some metacognitive awareness of how much you don't know about other things.
 

BurnedOut

Beloved Antichrist
I felt so stupid after reading that article, given how easily I had put DKE to the back of my mind. The autocorrelation part is actually simple common sense.

(y - x) will always be negatively correlated with x. So if y is a random number from a uniform distribution, (y - x) is going to be much more pronounced in the 1st quartile of x than in the 4th quartile, where it will definitely be smaller. That is simply because x is small in the first case and large in the second. The growing size of x correlates with a shrinking (y - x), and hence as x becomes bigger and bigger, the difference between expected scores and test scores clamps down.
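
A quick numeric check of this claim (illustrative only, with assumed uniform distributions): if x and y are drawn independently from the same range, corr(x, y - x) comes out around -0.71 even though x and y are unrelated.

```python
# Illustrative check: x and y independent and uniform on the same range.
import random

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n
    va = sum((ai - ma) ** 2 for ai in a) / n
    vb = sum((bi - mb) ** 2 for bi in b) / n
    return cov / (va ** 0.5 * vb ** 0.5)

random.seed(2)
N = 100_000
x = [random.uniform(0, 100) for _ in range(N)]
y = [random.uniform(0, 100) for _ in range(N)]
diff = [yi - xi for xi, yi in zip(x, y)]

print(f"corr(x, y)     = {pearson(x, y):+.3f}")     # ~ 0
print(f"corr(x, y - x) = {pearson(x, diff):+.3f}")  # ~ -0.71
```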
 

BurnedOut

Beloved Antichrist
I do think it's real, based on anecdote, it makes too much sense. But it's not some sort of epistemic canon. I take issue with how it was treated as scientific fact when the evidence hadn't progressed much past Socratic wisdom.
As a matter of fact, I have yet to see DKE in action. As Glaerhaidh said, nobody would even touch the test if they didn't believe they would score above 0. If you plot those expectations, it is common sense that 50% is the mean expectation, and hence even if somebody genuinely, unwittingly scores low on the test, they will be seen as overconfident, when it is actually autocorrelation.

I have never seen unskilled people being overconfident about their skillsets. Most kids I have seen are self-aware about how much they would score. It looks like DKE is a social phenomenon more than a heuristic, because people will only lie about getting good scores to secure a better image.

In a scenario where metrics are used to measure performance, I cannot imagine anybody having the balls to be unnecessarily overconfident. Why is it that low scorers are constantly bullied, harassed, insulted and also have low self-esteem? If low g is related to unhappiness, I am pretty sure people with fewer skills are likely to be a thousand times less confident than the average person.
 

Hadoblado

think again losers
You've never seen people over-confident about their skill sets?

I see people overestimate themselves all the time. I do it all the time. I did it today. It seems bizarre that you never see it. Maybe your culture is very different.
 

BurnedOut

Beloved Antichrist
It sounds quirky, but the people who are overconfident, as far as I have observed, are the ones who are actually mediocre: they expect more, but they are not very bad at what they do. In an academic setting, nobody prefers to broadcast their intentions.
 

dr froyd

Prolific Member
many probably have had the experience that in the second year at university you feel you know everything there is to know. Then you learn more and realize things are more complicated and that there are even more things to know. I remember feeling very smart the second year, slightly stupid the third year, and after finishing my masters I felt like I didn't know anything.

that's a different style of DKE I guess; it's harder to quantify and it's not easily condensed to a test score + a questionnaire.
 

Hadoblado

think again losers
many probably have had the experience that in the second year at university you feel you know everything there is to know. Then you learn more and realize things are more complicated and that there are even more things to know. I remember feeling very smart the second year, slightly stupid the third year, and after finishing my masters I felt like I didn't know anything.

that's a different style of DKE I guess; it's harder to quantify and it's not easily condensed to a test score + a questionnaire.

Very relate. I wouldn't say I felt like I knew everything in second year, but the amount of stuff I became aware of that I didn't know multiplied a thousandfold by the time I finished honours.

In your first years at uni you're not learning what experts know. You're learning the tools to understand what the experts know and how they know it. But you're still being spoon-fed. There are layers of understanding and you're building the foundation to get through later ones.
 

scorpiomover

The little professor
MC G is here with da truth for y'all.

The original Dunning–Kruger graph is wrong and misleading.
I am interested in the consequences of this.

Which people would be considered very competent according to Dunning–Kruger, but incompetent according to this?

Which people would be considered incompetent according to Dunning–Kruger, but very competent according to this?
 

Hadoblado

think again losers
Alright fuckos, I'm throwing down the gauntlet.

The Dunning-Kruger effect is a tautology.

The fact that they fucked up is proof of concept. These absolute cloudbarking morons thought they could categorise all competence, as if from Mt. Uberbensch they could see what everyone else in the world had failed to see.

But as Blar correctly points out, they were so incompetent that they couldn't even statistics.

If the Dunning-Kruger effect did not exist, this could not have happened.

Ergo, the Dunning-Kruger effect exists and both Dunning and Kruger are founder-martyrs ¯\_(ツ)_/¯¯\_(ツ)_/¯¯\_(ツ)_/¯¯\_(ツ)_/¯¯\_(ツ)_/¯
 

EndogenousRebel

mean person
I think it only really works when you are an expert in the same bracket of specific competency as the other person, meaning you have authority over the subject and can point out how much shit someone is full of from their words.

Most people I have met do overestimate themselves, but when compared to a fucking god like me they know their place.

But seriously, we're all stupid. Hubris is detectable.

Chances are, the more someone boasts about their ability, the more they are betting that they will perform flawlessly. This creates a bias in our heads, and even if they perform moderately, we will likely see it as them performing poorly.
 

Hadoblado

think again losers
Boasting is gambling with social capital as the stake. It makes stuff more fun.

The best all-male social dynamics I've experienced have always been ones where boasting relates to testable claims, which means there's almost a rotation of social capital and contextual hierarchy as people succeed and fail. Sports and board games are great for this. Within these contexts, the criteria for exclusion are primarily based on whether you can adjust to the context as it shifts. People who feel the need to always be on top are ripped apart.
 

scorpiomover

The little professor
It plots score quartile on the x axis and percentile score on the y axis, and self-assessment of results is also a function of the score x, which everyone rates as better than average.

STOP and think for a second about what they are doing. Say the results are represented by a variable R. On the x axis they plot R as a quartile, on the y axis they plot R as a percentile, and they look at self-assessment of R, call it Rs (R self-assessed), as the difference Rs minus R. That's a lot of one variable plotted everywhere, which should ring alarm bells in your heads.
Suppose we take 100 people and put them into 4 groups of 25. Then we reorganise the arbitrary x-values so one of the lines (actual ability) is rising.

If you reverse the order, so it starts with the top quartile and you go downwards, you get a very different-looking result, which makes the point even more strongly.

Just assume that everyone thinks they scored 60%. We would get a y = 60% line (we don't gather any data, just input an arbitrary 60%), marked in red. This alone can be used to incorrectly show that even if everyone has identical confidence levels, the incompetent ones are "overconfident". First of all, that's obvious without any research, because by definition low R and high Rs equals "overconfident". And the level of underconfidence or overconfidence depends only on the arbitrary test's difficulty: increase the difficulty or make the answers random and everyone will be shown as overconfident.
Yes, of course.

Those in the bottom quartile to the top quartile have 49, 21, 8 and -15 respectively. Raise the bar by 10 points, and you have 39, 11, -2 and -25 respectively.

So you're really only interested in the differences between them:
The bottom quartile has 28 points more than the 3rd quartile.
The 3rd quartile has 13 points more than the 2nd quartile.
The 2nd quartile has 23 points more than the top quartile.

So the differences shrink as you go up, until the top quartile, where the gap widens again.
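
A quick check of that arithmetic (illustrative Python; the four numbers are just the per-quartile gaps listed in this post): shifting the test difficulty moves every gap by the same amount, so the quartile-to-quartile differences are unchanged and only the absolute "over/underconfidence" labels move.

```python
# Per-quartile (self-assessed - actual) gaps as listed above, bottom to top.
gaps = [49, 21, 8, -15]
harder = [g - 10 for g in gaps]   # same people, bar raised by 10 points

diffs = [gaps[i] - gaps[i + 1] for i in range(len(gaps) - 1)]
diffs_harder = [harder[i] - harder[i + 1] for i in range(len(harder) - 1)]

print("gaps:                ", gaps)          # [49, 21, 8, -15]
print("harder test:         ", harder)        # [39, 11, -2, -25]
print("differences:         ", diffs)         # [28, 13, 23]
print("differences (harder):", diffs_harder)  # [28, 13, 23] -- unchanged
```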

In plain English, the graph is saying that there has to be a group of people whose confidence is higher than their competence.
Well, as you already explained, we can make the tests as hard or as easy as we want. So we can easily make a test where everyone performs better than their confidence, so everyone's confidence can be lower than their competence.

But what the graph actually shows is that incompetent people are the least confident and competent people are the most confident.
Yes, the graph clearly shows that.

The "overconfidence" is caused by high average confidence of every human being. Honestly high average confidence is an absolute necessity for doing anything, otherwise people would not attempt to tackle any problems at all for fear of failure or due to their perception of poor results and they would never get better.
There are a lot of people who have low confidence in certain areas and who don't attempt much in those areas; e.g. many young INTPs and INTJs have low confidence in their dating skills and so don't ask out many women.

Hope is necessary for motivation. This is why it's so important to instill hope in your population for fulfilling the basic human desires that nearly all humans want.

Humans should be grateful for this positive bias.
The humans in this study were psychology students at Cornell University. I suspect that most of them chose to go to university because they thought that having a degree would net them a salary far higher than average. So I suspect that most of them think they're better than most people at tasks that only require giving answers using their brains.

Ask them to perform physical tasks, like changing a tyre, or beating a carpet, and their perceived scores would be much lower.

If there is anything useful to say about the graph it is that:
1. Everyone has a similar amount of above average confidence, regardless of competence. This is well explained by the "better-than-average-effect" https://en.wikipedia.org/wiki/Illusory_superiority
In real life, confidence differs markedly, especially in dating. There are plenty of young men in their 20s who seem to be very confident with the ladies. There are also plenty of young men in their 20s who seem to be completely lacking in confidence with the ladies. The same can be seen when it comes to maths, and many other things.

Rather, what this shows is that universities select for confidence, not competency.

1) Imagine a high school with a sporting genius, and an academic genius. NFL teams will send a scout to visit the high schools to see the sporting geniuses play. Even if the sporting genius doesn't think he's good enough, the scout will assure him he is and convince him to sign.

If the scout doesn't find players of sufficient calibre, he will tell the coach "Maybe next year."

Sporting scouts reject those who are not competent, and seek out those with competency but who lack confidence.

So for sports teams, competency is prioritised over confidence.

To attend MIT, the academic genius will have to apply to them. If he doesn't think he's good enough and doesn't apply to MIT, his seat will simply go to a less academically-talented student.

Universities select for those with the confidence to apply, and only among those who have the confidence to apply do they attempt to select the most competent.

So for universities, confidence is prioritised over competency.

2. Predicting results requires competence, and scoring high requires competence. Competence also has some effect in increasing confidence even further.
What this indicates is that if you group people into demographics, like students of the same university, and then group them further into sub-demographics, like the quartiles, you see that each sub-demographic's level of confidence has a much smaller spread (about 16 points) than their actual competency (about 80 points), i.e. roughly 20% of it.

The sub-demographics have been grouped, so any outliers within each quartile have already been accounted for, giving an average score.

But then why would we see such a massively greater disparity between confidence and competency?

Moreover, universities give students entrance exams. So everyone on the same Psychology course should be of similar competency.

Answer: their confidence, and thus their predicted result, is based on their previous experience, i.e. their schooling. The course material and entrance exams at Cornell allow for a much smaller spread of results than the random test they were given allows for, and thus than their experiences in real life do.

3. It took more than a decade for criticisms of this effect to start appearing. A lot of wrong, useless research is cited and abused by scientists without verification. This bad misleading research circulates in the public and leads to misinformation.
Yes, but why NOW? It's not like you're saying that someone else did the same experiments and got a different result. This is just another way of looking at the same paper, that could have been done in 1995.

We've just been through a pandemic that severely affected the economy and the job market. Right now, we need to be boosting confidence, not reducing it.

In popular views of DKE, confidence is inversely correlated with competency. If that is believed, then self-promotion becomes inversely correlated with success.

So from a certain POV, if you discredit DKE, it can appear to incompetent self-promoters that there's nothing standing in the way of their becoming a success. That boosts people's confidence in themselves to find work, and thus fixes the economy.

This thing is made up, created in Paint.
Well, imagine if you correlated confidence with competency. Then you'd believe everything you heard from people that sounded sure of themselves. What do you think would happen?

You'd give $1,000 to a different Nigerian prince every day, that's what! You'd get scammed so many times, you'd be broke!

Those who managed to stop themselves before they went completely broke would become uber-sceptical of anyone making claims.

Attitudes like DKE meant that computing firms like Google had no qualms about giving prospective candidates their own tests, so they could test for competency without having to rely on confidence as a measure of competency.

But if this is a move to shift to anti-DKE in order to change the economy, that is bound to change too. Eventually we'll see a jobs-market crash, when it becomes clear that many of the people who were hired because of their confidence were just pretending. Then people will worry that other firms' employees will con them as well, and that their own firm will also be looked upon as incompetent, and then no one would want to hire them. So employers would become very distrusting of employees and make candidates sit through a battery of tests.

That would put off the uber-competent who are extremely lacking in confidence, as they'd think they would not pass the tests at all.

That would also screen out the incompetent confident people. But since they're incompetent, and are only pretending so that one of the firms they apply to will be fooled, they might as well try to fool the same companies anyway, as that's the only way they'd get a good-paying job. So they'll probably apply regardless.

So it just means more testing to get a job, while the people those tests are supposed to find are even less likely to sit such a test.

Pretty picture.

What your graph shows is people's "inner confidence": the things you believe about yourself that you show to almost no one else.

The novice thinks he knows nothing.
The amateur thinks he knows everything, then quickly despairs as he realises that he knows very little compared to the expert he aspires to become.
The expert knows he knows a lot.

That would be shown clearly by asking people to sit in an MRI or a PET scanner while they are asked to think about how confident they are in their ability to perform adequately. The more confidence they experience as they think about their own personal evaluations of their ability, the more certain parts of the brain will light up.

However, the test was asking questions of them. So we aren't talking about "inner confidence", but "outer confidence", the confidence that people show to others.

The novice expresses zero confidence.
The amateur expresses massive confidence.
The expert expresses median confidence. He knows that his uber-competency impresses people a lot more than any confidence he can show.

That's the part of real life that is the reason why so many people agree with DKE.
 