This chapter introduces a new probability density function, the F distribution. This distribution is used for many applications, including ANOVA and testing equality across multiple means. We begin with the F distribution and the test of hypothesis of differences in variances. It is often desirable to compare two variances rather than two averages. For instance, college administrators would like two college professors grading exams to have the same variation in their grading. In order for a lid to fit a container, the variation in the lid and the container should be the same. A supermarket might be interested in the variability of check-out times for two checkers.
In order to perform an F test of two variances, it is important that the following are true:

1. The populations from which the two samples are drawn are approximately normally distributed.
2. The two populations are independent of each other.
Unlike most other tests in this book, the F test for equality of two variances is very sensitive to deviations from normality. If the two distributions are not normal, the test can give misleading results.
Suppose we sample randomly from two independent normal populations. Let σ₁² and σ₂² be the population variances and s₁² and s₂² be the sample variances. Let the sample sizes be n₁ and n₂. Since we are interested in comparing the two sample variances, we use the F ratio:

F = [ s₁² / σ₁² ] / [ s₂² / σ₂² ]
F has the distribution F ~ F(n₁ – 1, n₂ – 1)

where n₁ – 1 are the degrees of freedom for the numerator and n₂ – 1 are the degrees of freedom for the denominator.
If the null hypothesis is H₀: σ₁²/σ₂² = δ₀, then the F ratio becomes

F = [ s₁² / σ₁² ] / [ s₂² / σ₂² ] = (s₁² / s₂²) · (σ₂² / σ₁²) = s₁² / (δ₀ · s₂²)

If δ₀ = 1, then the test statistic is:

F = s₁² / s₂²
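The test statistic above can be computed directly from two samples. The sketch below uses only the Python standard library; the grade data are hypothetical values invented for illustration.

```python
import statistics

def f_statistic(sample1, sample2):
    """Return the F ratio of two sample variances and the
    numerator/denominator degrees of freedom (n1 - 1, n2 - 1)."""
    s1_sq = statistics.variance(sample1)  # sample variance (n - 1 denominator)
    s2_sq = statistics.variance(sample2)
    return s1_sq / s2_sq, len(sample1) - 1, len(sample2) - 1

# Hypothetical exam grades from two professors (illustrative data only)
grades_a = [78, 82, 85, 90, 88, 76, 95, 81]
grades_b = [80, 83, 84, 86, 85, 82, 87, 84]

F, df1, df2 = f_statistic(grades_a, grades_b)
```

Note that `statistics.variance` uses the n – 1 denominator, matching the sample variances s₁² and s₂² in the formula above.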
The various forms of the hypotheses tested are:
| Two-Tailed Test | One-Tailed Test | One-Tailed Test |
|---|---|---|
| H₀: σ₁² = σ₂² | H₀: σ₁² ≤ σ₂² | H₀: σ₁² ≥ σ₂² |
| H₁: σ₁² ≠ σ₂² | H₁: σ₁² > σ₂² | H₁: σ₁² < σ₂² |
A more general form of the null and alternative hypotheses for a two-tailed test would be:

H₀: σ₁²/σ₂² = δ₀
H₁: σ₁²/σ₂² ≠ δ₀
where, if δ₀ = 1, it is a simple test of the hypothesis that the two variances are equal. This form of the hypothesis has the benefit of allowing tests beyond simple equality: it can accommodate tests for a specific ratio of the two variances, just as we tested for specific differences in means and proportions. This form also shows the relationship between the F distribution and the χ²: the F is a ratio of two chi-squared random variables, each divided by its degrees of freedom. This is helpful in determining the degrees of freedom of the resultant F distribution.
If the two populations have equal variances, then s₁² and s₂² are close in value and the test statistic F = s₁²/s₂² is close to one. But if the two population variances are very different, s₁² and s₂² tend to be very different, too. Choosing s₁² as the larger sample variance causes the ratio s₁²/s₂² to be greater than one. If s₁² and s₂² are far apart, then F is a large number.
Therefore, if F is close to one, the evidence favors the null hypothesis (the two population variances are equal). But if F is much larger than one, then the evidence is against the null hypothesis.
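To judge how large is "too large," we need the probability of seeing an F at least as big as the observed one when the null hypothesis is true. Tables or library routines are the usual tools; as a self-contained sketch, the tail probability can also be estimated by Monte Carlo, simulating the F(df1, df2) distribution from normal samples. The observed F of 2.5 with (9, 9) degrees of freedom is a hypothetical example.

```python
import random
import statistics

random.seed(0)

def mc_p_value(f_obs, df1, df2, sims=10000):
    """Monte Carlo estimate of P(F >= f_obs) under H0: equal variances,
    by repeatedly drawing two normal samples and forming the variance ratio."""
    count = 0
    for _ in range(sims):
        s1 = statistics.variance([random.gauss(0, 1) for _ in range(df1 + 1)])
        s2 = statistics.variance([random.gauss(0, 1) for _ in range(df2 + 1)])
        if s1 / s2 >= f_obs:
            count += 1
    return count / sims

# Hypothetical example: observed F = 2.5 with (9, 9) degrees of freedom.
p = mc_p_value(2.5, 9, 9)
```

A small estimated p-value is evidence against the null hypothesis of equal variances; a p-value near one half or larger is consistent with it.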