The term 'confidence' is often used in ways that demonstrate a common misunderstanding.
It is not uncommon to hear someone state that they want some type of 'statistical confidence', something like, "I want to be 95% confident in [a particular measurement]."
Statistical confidence generally refers to the confidence interval for a calculated statistic. Since a 'statistic' is, by definition, an estimate of a population parameter calculated from a random sample, it carries what is called 'sampling uncertainty'.
We use a sample to make an inference about (or estimate) a population parameter.
A 'confidence interval' represents our uncertainty about the estimate of the population parameter.
Quantifying this sampling uncertainty, and constructing the resulting confidence interval, requires an estimate of the population variance, usually via the sample variance or its square root, the sample standard deviation.
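For concreteness, here is the usual two-sided interval for a mean (assuming the common t-based interval, which the text implies but does not name):

```latex
\[
  \bar{x} \;\pm\; t_{1-\alpha/2,\,n-1} \cdot \frac{s}{\sqrt{n}}
\]
```

Here \(\bar{x}\) is the sample mean, \(s\) the sample standard deviation, \(n\) the sample size, and \(t_{1-\alpha/2,\,n-1}\) the critical value of the t-distribution with \(n - 1\) degrees of freedom.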
To calculate a sample variance (and from it a sample standard deviation), we need a sample size of at least two (n = 2).
Therefore, you can get a level of "confidence" from a sample as small as n = 2. The fly in the ointment is that the confidence interval will be extremely wide with such a small sample.
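To see just how wide, here is a minimal sketch in Python (the sample values and the use of scipy are my own illustration, not from the original):

```python
from math import sqrt
from statistics import mean, stdev

from scipy.stats import t

# Hypothetical measurements: the smallest sample that yields a standard deviation.
sample = [10.2, 10.8]  # n = 2

n = len(sample)
x_bar = mean(sample)
s = stdev(sample)  # sample standard deviation (n - 1 in the denominator)

# 95% two-sided CI for the mean: x_bar +/- t * s / sqrt(n)
t_crit = t.ppf(0.975, df=n - 1)  # critical value; about 12.71 when df = 1
half_width = t_crit * s / sqrt(n)

print(f"mean = {x_bar:.2f}, 95% CI = ({x_bar - half_width:.2f}, {x_bar + half_width:.2f})")
# Two points only 0.6 apart produce an interval roughly +/- 3.8 wide:
# mean = 10.50, 95% CI = (6.69, 14.31)
```

The culprit is the enormous t critical value at one degree of freedom: with almost no information about the spread, the interval has to be huge to maintain 95% coverage.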
There are other methods to calculate a required sample size that are based on the concept of Statistical Power.
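As a sketch of what a power-based calculation looks like (assuming a two-sample t-test and the statsmodels library, neither of which the original specifies):

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the sample size per group needed to detect a medium effect
# (Cohen's d = 0.5) with 80% power at a 5% significance level.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group: {n_per_group:.1f}")  # roughly 64 per group
```

Unlike the confidence-interval calculation above, this works backwards: you state the effect size you care about and the error rates you will tolerate, and the method tells you how large the sample must be.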