Human beings tend to be overconfident. One sign of this overconfidence is how readily we infer more than a sample of a given size can reasonably support.
Here’s what I mean: Consider a gumball machine that is opaque and filled with thousands of gumballs that are either green or red. Green are good, red are bad. A nickel goes in and out comes a green gumball. The question is: based on this single green gumball on the first try, what can we reasonably infer about the proportion of green and red gumballs in the machine? If another nickel goes in and the next gumball is also green, now what can we reasonably infer about the proportion of greens to reds? Are all of them green? Most of them?
The only thing we can know with certainty in this context is that the first two gumballs we drew from the machine were green. Everything beyond that is inference, and the susceptibility of that inference to error shrinks as the sample grows: the smaller the sample, the more error-prone the inference. We do not know whether we got the only two green gumballs in the machine, whether all but two of the gumballs are green, or something in between. If our sample size were 5,000, we could estimate the ratio with a fairly high degree of reliability.
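The relationship between sample size and error can be made concrete with the standard error of a sample proportion. A minimal sketch, assuming a hypothetical machine in which 60% of the gumballs are green (the true proportion `p` is an assumption chosen for illustration):

```python
import math

# Hypothetical: assume the machine is 60% green, and compute the
# standard error of the observed green proportion for several sample sizes.
p = 0.60

def standard_error(n):
    """Standard error of a sample proportion: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1 - p) / n)

for n in (2, 50, 5000):
    print(f"n = {n:5d}: standard error of the observed proportion ≈ {standard_error(n):.3f}")
```

With two draws the typical error is enormous relative to the quantity being estimated; with 5,000 draws it is a fraction of a percentage point, which is why a large sample supports a reliable estimate of the ratio.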
Two key observations should be understood from this (Kahneman, 2013):
- Large samples are more precise than small samples.
- Small samples yield extreme results more often than large samples do.
A small sample should caution us against assuming too much.
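The second observation, that small samples produce extreme results more often, is easy to verify by simulation. A minimal sketch, again assuming a hypothetical machine that is 60% green and (arbitrarily) calling a sample "extreme" when its observed proportion lands 20 or more percentage points from the truth:

```python
import random

random.seed(42)

TRUE_GREEN = 0.60   # assumed true proportion of green gumballs
TRIALS = 10_000     # number of simulated samples per sample size

def extreme_rate(sample_size, trials=TRIALS):
    """Fraction of simulated samples whose observed green proportion is
    'extreme' (at least 20 percentage points away from the true 60%)."""
    extreme = 0
    for _ in range(trials):
        greens = sum(random.random() < TRUE_GREEN for _ in range(sample_size))
        if abs(greens / sample_size - TRUE_GREEN) >= 0.20:
            extreme += 1
    return extreme / trials

for n in (5, 50, 5000):
    print(f"n = {n:5d}: extreme results in {extreme_rate(n):.1%} of samples")
```

With samples of five, a large share of runs look wildly unrepresentative; with samples of 5,000, extreme results essentially never occur. The machine never changed, only the sample size did.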
- Kahneman, D. (2013). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.