8. One-Sample Difference of Means Test
Computing the Z or t Test Statistic
$$Z = \frac{\overline{x} - \mu}{\sigma_{\overline{x}}}, \qquad \sigma_{\overline{x}} = \frac{\sigma}{\sqrt{n}}$$
In this formula, $\overline{x}$ is the sample mean, $\mu$ is the population mean, $\sigma$ is the population standard deviation, $n$ is the sample size, and $\sigma_{\overline{x}}$ is the standard error of the mean (sometimes denoted SEM). The t statistic is computed the same way, substituting the sample standard deviation $s$ for $\sigma$.
In general, the Z statistic should be used when the sample size is at least 30 and the population standard deviation is known; otherwise, the t statistic is used. A Z-score of 1.96 corresponds to an upper-tail probability of .025, or 2.5%, meaning that 95% of values in a standard normal distribution fall between Z-scores of -1.96 and +1.96. Thus, at an $\alpha$ level of .05 (two-tailed), a Z statistic greater than 1.96 or less than -1.96 would lead a researcher to reject the null hypothesis.
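As an illustration (not part of the original text), the short Python sketch below encodes this decision rule under assumed variable names (sample_mean, pop_mean, pop_sd, sample_sd, n); it uses scipy.stats only to look up the two-tailed critical value.

```python
# Minimal sketch (hypothetical names) of the Z-versus-t decision rule described above.
from scipy import stats

def one_sample_statistic(sample_mean, pop_mean, n, pop_sd=None, sample_sd=None):
    """Return (statistic, label): Z when the population SD is known and n >= 30,
    otherwise t computed from the sample SD."""
    if pop_sd is not None and n >= 30:
        sem = pop_sd / n ** 0.5            # standard error of the mean
        return (sample_mean - pop_mean) / sem, "Z"
    sem = sample_sd / n ** 0.5             # fall back to the sample standard deviation
    return (sample_mean - pop_mean) / sem, "t"

# Two-tailed critical value at alpha = .05 for the Z case:
alpha = 0.05
z_critical = stats.norm.ppf(1 - alpha / 2)
print(round(z_critical, 2))                # 1.96
```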
A one-sample difference of means test is used when examining the difference between a sample and a population. For example, in 2018, the infant mortality rate across the entire United States was 5.7 deaths per 1,000 live births. That same year in Georgia, however, the infant mortality rate averaged 7.0 deaths per 1,000 live births. How does the infant mortality rate in Georgia differ from the national average?
In the example comparing Georgia's infant mortality rate against the nationwide average, we'll use Z since we have a large set of population data and a standard error of the mean of 0.50. Using our formula: (7.0 - 5.7)/0.50 = 2.6, so the Z statistic is 2.6. Because our calculated Z statistic (2.6) exceeds the critical value of 1.96, we reject the null hypothesis and conclude that the infant mortality rate in Georgia is significantly higher than the infant mortality rate across the United States as a whole.
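For readers who want to check the arithmetic, the sketch below reproduces this calculation from the values given in the text (7.0, 5.7, and a SEM of 0.50); the two-tailed p-value from scipy.stats is an added check, not a figure from the original example.

```python
# Reproducing the Georgia infant mortality example (values from the text;
# the SEM of 0.50 is given directly, so no sample size is needed here).
from scipy import stats

georgia_mean = 7.0   # Georgia infant mortality rate, 2018 (per 1,000 live births)
us_mean = 5.7        # U.S. infant mortality rate, 2018 (per 1,000 live births)
sem = 0.50           # standard error of the mean

z = (georgia_mean - us_mean) / sem
p_two_tailed = 2 * (1 - stats.norm.cdf(abs(z)))

print(f"Z = {z:.2f}")                        # Z = 2.60
print(f"two-tailed p = {p_two_tailed:.4f}")  # ~0.0093, below .05, so reject H0
```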