Taking a random sample of a population and using it to estimate a parameter of the whole population is problematic because the sample may not be representative. This problem can be partially addressed by producing a range of values and stating, using confidence levels, the probability that the true value falls within this range. This is the main principle of confidence intervals. Most commonly in research, confidence levels are stated as either 95% or 99%. These mean that the estimated range would bracket the true population parameter in approximately 95% or 99% of cases, respectively. In papers the confidence interval is reported as the confidence level followed by the range of values (e.g. 95% CI: 28.9, 39.2). In this case the authors have calculated that there is a 95 out of 100 chance that the true value in the population as a whole falls between 28.9 and 39.2. This range would have been calculated from a smaller sample.
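To illustrate how such a range is produced, the following is a minimal sketch in Python of calculating a 95% confidence interval for a population mean from a small sample. The sample values are hypothetical, and the t critical value is hard-coded for this sample size rather than looked up from a statistics library.

```python
import math
from statistics import mean, stdev

# Hypothetical sample of 10 measurements drawn from a larger population.
sample = [31.2, 35.4, 29.8, 33.1, 36.7, 30.5, 34.2, 32.9, 28.4, 37.1]

n = len(sample)
m = mean(sample)            # sample mean (point estimate)
s = stdev(sample)           # sample standard deviation
sem = s / math.sqrt(n)      # standard error of the mean

# For large samples a 95% CI uses z = 1.96; for n = 10 the t critical
# value (9 degrees of freedom) is approximately 2.262.
t_crit = 2.262
lower = m - t_crit * sem
upper = m + t_crit * sem
print(f"95% CI: {lower:.1f}, {upper:.1f}")
```

The interval would be reported in the same style as the example above, i.e. the confidence level followed by the lower and upper bounds.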
The use of confidence intervals is also common in method validation work. For example, a paper published in the American Journal of Clinical Nutrition in 2011 reported on the validation of a method that estimates changes in energy intake from an equation, tested against actual study data on energy intake changes in free-living subjects1. In their results the authors stated that ‘by applying the method to our simulated free-living virtual study subjects, we showed that daily weight measurements over periods >28 d were required to obtain accurate estimates of energy intake change with a 95% CI of <300 kcal/d.’ What this means is that the authors were confident there was a 95 out of 100 chance that, after 28 days of daily body weight measurements, they would be able to predict changes in energy intake to within 300 kcal. Conversely, of course, this means there was a 5 in 100 chance that their method could not achieve this accuracy.