Standard deviation (SD) and standard error (SE) are two terms commonly reported in scientific articles. We believe that most readers are acquainted with these concepts already, so let us brush them up. What we seldom realize is that the SD is one of the most commonly used statistical concepts in day-to-day life. Consciously or not, we imagine a tolerable variation for different biological phenomena: age, weight, height, blood pressure, even anger. We understand that there are no hard and fast absolute limits and allow for some variation. This variation, the degree to which individual values deviate from the mean value, is measured statistically using different measures.
Let us say we have the Hb values of children in a school and their arithmetic mean. Each individual child's value deviates from this mean by some amount, and the sum of these deviations is always zero. Hence we have two choices. The first is to average the absolute values of the deviations, which gives the mean absolute deviation; this conveys the absolute magnitude of deviation from the mean but no sense of its direction. The second option is to square the individual deviations (to get rid of the sign) and calculate their average. This again tells us, on average, how far the individual values are from the mean. This is what you call the variance. But variance is not what you commonly see in descriptive tables alongside the mean. The reason is that squaring the values squares the units as well, which makes the variance difficult to interpret. So we take its square root and, lo, we have our standard deviation: in the same units as the variable measured, and thus easier to interpret.
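The chain of steps above (deviations sum to zero, mean absolute deviation, variance, SD) can be sketched in a few lines of Python. The Hb values here are hypothetical, and the variance is computed in the "average of squared deviations" form the text describes (dividing by n; many software packages divide by n - 1 instead for a sample):

```python
from statistics import mean

# Hypothetical haemoglobin (Hb) values (g/dL) for a sample of children
hb = [11.2, 12.5, 10.8, 13.1, 11.9, 12.2, 10.5, 12.8]

m = mean(hb)
deviations = [x - m for x in hb]

# The raw deviations always sum to zero (up to floating-point error),
# which is why they cannot measure spread directly
print(sum(deviations))

# Mean absolute deviation: average of |deviation|
mad = sum(abs(d) for d in deviations) / len(hb)

# Variance: average of squared deviations (units are g/dL squared)
variance = sum(d ** 2 for d in deviations) / len(hb)

# Standard deviation: square root of variance, back in g/dL
sd = variance ** 0.5
print(f"MAD = {mad:.3f}, variance = {variance:.3f}, SD = {sd:.3f}")
```

Note how the SD, unlike the variance, lands back in the original units of the measurement.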
Next, based on the mean and SD one can derive reference ranges that flank the mean. They can be at 95%, 90%, or whatever level suits your purpose: the limits that contain 95% (or 90%, and so on) of the observations. So when a mother thinks her child weighs less than his peers, she has subconsciously calculated the mean and standard deviation of the sample of children who are his peers and thinks that he probably falls below whatever reference range (95%, 90%) she had constructed.
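As a sketch of how such a reference range is constructed: assuming the values are roughly normally distributed, about 95% of observations lie within 1.96 SDs of the mean. The weights below are hypothetical:

```python
from statistics import mean, pstdev

# Hypothetical weights (kg) of a child's peer group
weights = [18.5, 20.1, 19.3, 21.0, 17.8, 19.9, 20.5, 18.9, 19.4, 20.8]

m = mean(weights)
sd = pstdev(weights)  # population-form SD (divides by n)

# 95% reference range, assuming approximate normality:
# mean +/- 1.96 * SD contains about 95% of the observations
lower, upper = m - 1.96 * sd, m + 1.96 * sd
print(f"95% reference range: {lower:.1f} to {upper:.1f} kg")
```

For a 90% range, the multiplier 1.96 would be replaced by 1.64.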
What then are standard errors? Let us zoom out of the sample to see the population from which it came, a population composed of an infinite number of such samples. Now the sample is to the population what the individual was to the sample. So where earlier we thought of individual values distributed around a sample mean, now we shall imagine sample means distributed around a population mean. This is what you call a sampling distribution. The mean of this distribution is the population mean, and its SD is the standard error. So an SE tells you, on average, how far the sample means are from the population mean. The mathematical expression of an SE (calculated as the SD of the sampling distribution) comes down to the SD divided by the square root of the sample size. Thus, SE depends on the variability in the sample (SD) and the sample size (n). For a sample mean to be a good estimate of the population mean, it should deviate least from it; in other words, a small SE means that the sample mean is a better approximation of the population mean. From the above expression one can understand how to obtain a small SE!
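The claim that the SD of the sampling distribution equals SD/sqrt(n) can be checked by simulation. The sketch below builds a hypothetical synthetic population, draws many samples of size n, and compares the SD of the resulting sample means against the formula:

```python
import random
from statistics import mean, pstdev

random.seed(42)

# A large synthetic "population" of hypothetical Hb values (g/dL)
population = [random.gauss(12.0, 1.5) for _ in range(100_000)]

n = 25             # size of each sample
n_samples = 5000   # number of repeated samples drawn

# Draw many samples and record each sample's mean:
# these means form an (approximate) sampling distribution
sample_means = [mean(random.sample(population, n)) for _ in range(n_samples)]

# The SD of the sampling distribution is the standard error,
# and it should be close to population SD divided by sqrt(n)
empirical_se = pstdev(sample_means)
theoretical_se = pstdev(population) / n ** 0.5
print(f"empirical SE = {empirical_se:.3f}, SD/sqrt(n) = {theoretical_se:.3f}")
```

Increasing n shrinks both numbers, which is the practical route to a small SE.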
Now, what is the equivalent of the reference range? The limits that enclose 95% or 90% of the sample means, with the population mean as the centre, are the confidence intervals. Needless to say, this is an abstract concept in which you imagine you have taken infinite samples (with replacement) from the same population, calculated their means and plotted them as a distribution. If you are adamant about wanting a textbook definition of confidence intervals, one may refer to Kirkwood's Essential Medical Statistics.
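In practice we have only one sample, so the confidence interval is computed from that sample's own SD via the SE. A minimal sketch, using the same hypothetical Hb values and the common normal-approximation multiplier of 1.96 (strictly, a t-multiplier would be used for a sample this small):

```python
from statistics import mean, stdev

# Hypothetical Hb values (g/dL) from a single sample of children
hb = [11.2, 12.5, 10.8, 13.1, 11.9, 12.2, 10.5, 12.8]

m = mean(hb)
se = stdev(hb) / len(hb) ** 0.5   # SE = SD / sqrt(n); stdev divides by n - 1

# Approximate 95% confidence interval for the population mean:
# sample mean +/- 1.96 * SE
lower, upper = m - 1.96 * se, m + 1.96 * se
print(f"mean = {m:.2f} g/dL, 95% CI {lower:.2f} to {upper:.2f}")
```

Contrast this with the reference range: that interval used the SD and described individuals, while this one uses the SE and describes where the population mean plausibly lies.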
Why do we need to imagine this very abstract concept? Because that is what an epidemiologist's job is about: talking about populations while studying only samples! And as is evident from their calculation, SD and SE give different information. The SD measures the variability in the sample, while the SE measures the uncertainty in the sample statistic (mean, proportion, or other).