


With a sample of 30 bags, we had a mean of 18.7 and a standard deviation of 2.053592311. This implies a standard error of 0.374932944. We know that if we take another sample of 30 bags, we will probably not get the same mean as we got the first time. Each time we take a sample, our estimate will almost always be a bit too big or a bit too small. The standard error is an estimate of how far off we will be on average. Using it, we can construct an interval in such a way that 95% of the time we build an interval like this, it will contain the true mean. For this example, we would be 95% confident in the interval 17.9 to 19.5. We cannot tell whether the average per bag is 18 or 19, but we are pretty sure it is not 17 or 20.
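For anyone who wants to check the arithmetic, here is a minimal sketch in Python. It assumes the usual t-based confidence interval (which reproduces the numbers above); the variable names are mine, not part of the classroom exercise:

```python
from scipy import stats

n = 30
mean = 18.7
sd = 2.053592311

se = sd / n**0.5                       # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value
lo, hi = mean - t_crit * se, mean + t_crit * se
print(f"SE = {se:.9f}")                # SE = 0.374932944
print(f"95% CI = ({lo:.1f}, {hi:.1f})")  # 95% CI = (17.9, 19.5)
```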
Except that one student spilled his candy on the floor. He picked it up and counted it, but another student may have added a few pieces to his pile. He counted 26, the highest of any student. If we include his bag, the interval shifts to 18.1 to 19.8. But should we count it?
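To see how much that one bag moves the interval, the same calculation can be redone from the summary statistics alone. This is a sketch under the assumption that the spilled bag simply becomes a 31st observation:

```python
from scipy import stats

n, mean, sd = 30, 18.7, 2.053592311  # the original 30 bags
spilled = 26                         # the spilled-and-recounted bag

# Fold the 31st bag into the running summary statistics.
new_n = n + 1
new_mean = (n * mean + spilled) / new_n
sum_sq = (n - 1) * sd**2 + n * mean**2 + spilled**2    # updated sum of x^2
new_sd = ((sum_sq - new_n * new_mean**2) / (new_n - 1)) ** 0.5

se = new_sd / new_n**0.5
t_crit = stats.t.ppf(0.975, df=new_n - 1)
lo, hi = new_mean - t_crit * se, new_mean + t_crit * se
print(f"95% CI = ({lo:.1f}, {hi:.1f})")  # 95% CI = (18.1, 19.8)
```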
I hope this 100-cents, five-sense experience made sense to them.