Standard Deviation vs Standard Error: Clearing up the Confusion with Visual Examples

Standard deviation and standard error are two statistical measures that often get confused with each other. While both measures describe the variability in the data, they serve different purposes.

Standard deviation measures the spread of the data. It calculates how far the individual data points deviate from the mean of the data set. A low standard deviation implies that the data points are tightly clustered around the mean, while a high standard deviation means that the data points are more spread out.

For example, consider Dataset B (10, 11, 12, and 14) and Dataset C (10, 100, 1000, and 2000). Dataset C has a higher standard deviation than Dataset B, indicating that its data points are more spread out.
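To make this concrete, here is a short Python sketch using the standard library's `statistics` module to compute the sample standard deviation of both datasets:

```python
from statistics import stdev

# Dataset B: values clustered tightly around the mean
dataset_b = [10, 11, 12, 14]
# Dataset C: values spread across several orders of magnitude
dataset_c = [10, 100, 1000, 2000]

sd_b = stdev(dataset_b)  # sample standard deviation of Dataset B
sd_c = stdev(dataset_c)  # sample standard deviation of Dataset C

print(f"SD of Dataset B: {sd_b:.2f}")
print(f"SD of Dataset C: {sd_c:.2f}")
```

Dataset B's standard deviation is small (under 2), while Dataset C's is in the hundreds, reflecting how much more spread out its values are.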

For instance, imagine you want to estimate the mean height of the 30 students in a class. You draw four random samples of five students each and compute the mean and standard deviation of each sample (mean1 through mean4, and SD1 through SD4). The standard error of the mean tells you how much these sample means are expected to vary around the true population mean.

The standard error of the mean (SEM), on the other hand, measures the precision of the sample mean as an estimator of the population mean. It is calculated by dividing the sample standard deviation by the square root of the sample size.

In other words, the standard error of the mean is the standard deviation of the distribution of sample means.
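A small simulation can illustrate this relationship. The sketch below draws many samples from a hypothetical population of 30 student heights (the class size, height distribution, and sample size are made-up values) and compares the standard deviation of the sample means against the theoretical SEM:

```python
import random
from statistics import mean, pstdev

random.seed(42)

# Hypothetical population: heights (cm) of 30 students
population = [random.gauss(170, 8) for _ in range(30)]

n = 5              # students per sample
num_samples = 10000

# Draw many samples (with replacement) and record each sample mean
sample_means = [
    mean(random.choices(population, k=n)) for _ in range(num_samples)
]

# SD of the sample means, observed directly from the simulation
observed_sem = pstdev(sample_means)
# Theoretical SEM: population SD divided by sqrt(n)
theoretical_sem = pstdev(population) / n ** 0.5

print(f"Observed SD of sample means: {observed_sem:.3f}")
print(f"Theoretical SEM:             {theoretical_sem:.3f}")
```

With enough samples, the two numbers converge: the standard deviation of the sample means matches the SEM formula.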

The formula for standard error of the mean (SEM) is given below:

SEM = (sample standard deviation) / sqrt(sample size)
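In Python, the formula translates directly. The heights below are illustrative values for a single sample of five students:

```python
from statistics import stdev

# One sample of five student heights (cm) -- illustrative values
sample = [165.0, 172.0, 168.0, 175.0, 170.0]

n = len(sample)
sem = stdev(sample) / n ** 0.5  # sample SD divided by sqrt(sample size)

print(f"Sample mean: {sum(sample) / n:.1f} cm")
print(f"SEM:         {sem:.2f} cm")
```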


In summary, standard deviation describes the spread of individual data points, while the standard error of the mean describes the precision of the sample mean as an estimate of the population mean. Understanding the difference between the two is crucial in statistical analysis.
