The distribution used for the hypothesis test is a new one. It is called the F distribution, named after Sir Ronald Fisher, an English statistician. The F statistic is a ratio (a fraction). There are two sets of degrees of freedom: one for the numerator and one for the denominator.
For example, if F follows an F distribution with four degrees of freedom for the numerator and ten degrees of freedom for the denominator, then F ~ F₄,₁₀.
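As a sketch of working with this distribution, the tail probability of an F₄,₁₀ statistic can be computed numerically. The code below assumes SciPy is installed; the observed value 3.48 is purely illustrative.

```python
# Right-tail probability for an F statistic with df = (4, 10).
# Assumes SciPy is available; the observed value 3.48 is illustrative.
from scipy.stats import f

dfn, dfd = 4, 10                  # numerator and denominator degrees of freedom
p_value = f.sf(3.48, dfn, dfd)    # P(F > 3.48), the right-tail area
print(round(p_value, 3))          # near 0.05, since 3.48 is close to the 5% critical value
```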
Note
The F distribution is closely related to the Student's t-distribution: when the numerator has one degree of freedom, the values of the F distribution are the squares of the corresponding values of the t-distribution. One-Way ANOVA expands the t-test for comparing more than two groups. The scope of that derivation is beyond the level of this course.
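This relationship can be checked numerically, assuming SciPy is available: the square of a two-tailed t critical value matches the corresponding F critical value when the numerator has one degree of freedom.

```python
# Numerical check of the F–t relationship (requires SciPy).
from scipy.stats import f, t

dfd = 10
t_crit = t.ppf(0.975, dfd)      # two-tailed 5% critical value of t with 10 df
f_crit = f.ppf(0.95, 1, dfd)    # upper 5% critical value of F with df = (1, 10)
print(t_crit ** 2, f_crit)      # the two printed values agree
```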
To calculate the F ratio, two estimates of the variance are made.

Variance between samples: an estimate of σ² that is the variance of the sample means multiplied by n (when the sample sizes are the same). If the samples are different sizes, the variance between samples is weighted to account for the different sample sizes. This variance is also called variation due to treatment or explained variation.

Variance within samples: an estimate of σ² that is the average of the sample variances (also known as a pooled variance). When the sample sizes are different, the variance within samples is weighted. This variance is also called variation due to error or unexplained variation.
SSbetween = the sum of squares that represents the variation among the different samples
SSwithin = the sum of squares that represents the variation within samples that is due to chance
To find a "sum of squares" means to add together squared quantities that, in some cases, may be weighted. We used sums of squares to calculate the sample variance and the sample standard deviation in Descriptive Statistics.
MS means "mean square." MSbetween is the variance between groups, and MSwithin is the variance within groups.
Calculation of Sum of Squares and Mean Square
k = the number of different groups
n_j = the size of the jth group
s_j = the sum of the values in the jth group
n = the total number of all the values combined (total sample size: ∑n_j)
x = one value: ∑x = ∑s_j
Sum of squares of all values from every group combined: ∑x²
Total sum of squares (the overall variability, between plus within): SStotal = ∑x² − (∑x)²/n
Explained variation: sum of squares representing variation among the different samples: SSbetween = ∑(s_j²/n_j) − (∑s_j)²/n
Unexplained variation: sum of squares representing variation within samples due to chance: SSwithin = SStotal − SSbetween
df's for the different groups (df's for the numerator): dfbetween = k − 1
Equation for errors within samples (df's for the denominator): dfwithin = n − k
Mean square (variance estimate) explained by the different groups: MSbetween = SSbetween/dfbetween
Mean square (variance estimate) that is due to chance (unexplained): MSwithin = SSwithin/dfwithin
MSbetween and MSwithin can also be written as:
MSbetween = SSbetween/(k − 1)
MSwithin = SSwithin/(n − k)
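The formulas above can be sketched as a short computation. The three groups below are hypothetical data chosen only to illustrate the arithmetic.

```python
# One-way ANOVA sums of squares, mean squares, and F ratio,
# following the formulas in the text. The data are hypothetical.
groups = [[5, 7, 9], [4, 6, 8], [10, 12, 14]]

k = len(groups)                                  # number of groups
n = sum(len(g) for g in groups)                  # total sample size
grand_sum = sum(sum(g) for g in groups)          # ∑x = ∑s_j
sum_sq = sum(x * x for g in groups for x in g)   # ∑x²

# SStotal = ∑x² − (∑x)²/n
ss_total = sum_sq - grand_sum ** 2 / n
# SSbetween = ∑(s_j²/n_j) − (∑s_j)²/n
ss_between = sum(sum(g) ** 2 / len(g) for g in groups) - grand_sum ** 2 / n
# SSwithin = SStotal − SSbetween
ss_within = ss_total - ss_between

ms_between = ss_between / (k - 1)   # numerator df: k − 1
ms_within = ss_within / (n - k)     # denominator df: n − k
f_ratio = ms_between / ms_within

print(ss_between, ss_within, f_ratio)  # 62.0 24.0 7.75
```

A large F ratio here reflects group means that differ by more than chance variation within the groups would explain.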
The one-way ANOVA test depends on the fact that MSbetween can be influenced by population differences among the means of the several groups. Since MSwithin compares values of each group to its own group mean, the fact that group means might be different does not affect MSwithin.