This module describes how to calculate the F ratio and use the F distribution in the hypothesis test for One-Way ANOVA.
The distribution used for the hypothesis test is a new one. It is called the $F$ distribution, named after Sir Ronald Fisher, an English statistician. The $F$ statistic is a ratio (a fraction). There are two sets of degrees of freedom: one for the numerator and one for the denominator.

For example, if $F$ follows an $F$ distribution and the degrees of freedom for the numerator are 4 and the degrees of freedom for the denominator are 10, then $F \sim F_{4,10}$.
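For readers who want to explore such a distribution numerically, here is a minimal sketch, assuming Python with SciPy is available (the source itself only references the TI calculator), of an $F_{4,10}$ distribution:

```python
# Sketch: an F distribution with 4 numerator and 10 denominator degrees of freedom.
from scipy import stats

F_4_10 = stats.f(dfn=4, dfd=10)    # frozen F distribution, F ~ F(4, 10)

print(F_4_10.mean())               # mean = dfd / (dfd - 2) = 10/8 = 1.25
print(F_4_10.ppf(0.95))            # F-value that cuts off the upper 5% tail
print(F_4_10.sf(3.5))              # P(F > 3.5), the area in the right tail beyond 3.5
```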
The $F$ distribution is derived from the Student's $t$-distribution. One-Way ANOVA expands the $t$-test for comparing more than two groups. The scope of that derivation is beyond the level of this course.
To calculate the $F$ ratio, two estimates of the variance are made.
- Variance between samples: An estimate of $\sigma^2$ that is the variance of the sample means multiplied by $n$ (when there is equal $n$). If the samples are different sizes, the variance between samples is weighted to account for the different sample sizes. The variance is also called variation due to treatment or explained variation.
- Variance within samples: An estimate of $\sigma^2$ that is the average of the sample variances (also known as a pooled variance). When the sample sizes are different, the variance within samples is weighted. The variance is also called the variation due to error or unexplained variation.
- $SS_{\text{between}}$ = the sum of squares that represents the variation among the different samples.
- $SS_{\text{within}}$ = the sum of squares that represents the variation within samples that is due to chance.
To find a "sum of squares" means to add together squared quantities which, in some cases, may be weighted. We used sum of squares to calculate the sample variance and the sample standard deviation in Descriptive Statistics.
means "mean square."
is the variance between groups and
is the variance within groups.
Calculation of sum of squares and mean square
- $k$ = the number of different groups
- $n_j$ = the size of the $j$th group
- $s_j$ = the sum of the values in the $j$th group
- $n$ = total number of all the values combined (total sample size: $n = \sum n_j$)
- $x$ = one value: $\sum x = \sum s_j$
- Sum of squares of all values from every group combined: $\sum x^2$
- Total sum of squares: $SS_{\text{total}} = \sum x^2 - \dfrac{\left(\sum x\right)^2}{n}$
- Explained variation (between-group variability): the sum of squares representing variation among the different samples, $SS_{\text{between}} = \sum \left[\dfrac{(s_j)^2}{n_j}\right] - \dfrac{\left(\sum s_j\right)^2}{n}$
- Unexplained variation: the sum of squares representing variation within samples due to chance, $SS_{\text{within}} = SS_{\text{total}} - SS_{\text{between}}$
- df's for the different groups (df's for the numerator): $df_{\text{between}} = k - 1$
- df's for errors within samples (df's for the denominator): $df_{\text{within}} = n - k$
- Mean square (variance estimate) explained by the different groups: $MS_{\text{between}} = \dfrac{SS_{\text{between}}}{df_{\text{between}}}$
- Mean square (variance estimate) that is due to chance (unexplained): $MS_{\text{within}} = \dfrac{SS_{\text{within}}}{df_{\text{within}}}$

$MS_{\text{between}}$ and $MS_{\text{within}}$ can be written as follows:

- $MS_{\text{between}} = \dfrac{SS_{\text{between}}}{df_{\text{between}}} = \dfrac{SS_{\text{between}}}{k - 1}$
- $MS_{\text{within}} = \dfrac{SS_{\text{within}}}{df_{\text{within}}} = \dfrac{SS_{\text{within}}}{n - k}$
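As an illustration, the following minimal sketch (Python, with hypothetical data values) computes the sums of squares and mean squares from the formulas above for groups of unequal sizes:

```python
# Sketch: sum-of-squares and mean-square formulas for unequal group sizes.
groups = [[3, 4, 5, 6], [2, 4, 6], [5, 7, 8]]     # hypothetical data, k = 3 groups

k = len(groups)                                   # number of groups
n = sum(len(g) for g in groups)                   # total sample size
sum_x = sum(sum(g) for g in groups)               # sum of every value
sum_x2 = sum(x**2 for g in groups for x in g)     # sum of every squared value

ss_total = sum_x2 - sum_x**2 / n
ss_between = sum(sum(g)**2 / len(g) for g in groups) - sum_x**2 / n
ss_within = ss_total - ss_between                 # unexplained variation

ms_between = ss_between / (k - 1)                 # variance between groups
ms_within = ss_within / (n - k)                   # variance within groups (pooled)

print(ss_between, ss_within, ms_between, ms_within)
```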
The One-Way ANOVA test depends on the fact that $MS_{\text{between}}$ can be influenced by population differences among means of the several groups. Since $MS_{\text{within}}$ compares values of each group to its own group mean, the fact that group means might be different does not affect $MS_{\text{within}}$.
The null hypothesis says that all groups are samples from populations having the same normal distribution. The alternate hypothesis says that at least two of the sample groups come from populations with different normal distributions. If the null hypothesis is true, $MS_{\text{between}}$ and $MS_{\text{within}}$ should both estimate the same value.
The null hypothesis says that all the group population means are equal. The hypothesis of equal means implies that the populations have the same normal distribution because it is assumed that the populations are normal and that they have equal variances.
F-Ratio or F Statistic

$$F = \dfrac{MS_{\text{between}}}{MS_{\text{within}}}$$
If $MS_{\text{between}}$ and $MS_{\text{within}}$ estimate the same value (following the belief that $H_0$ is true), then the F-ratio should be approximately equal to 1. Mostly, just sampling errors would contribute to variations away from 1. As it turns out, $MS_{\text{between}}$ consists of the population variance plus a variance produced from the differences between the samples. $MS_{\text{within}}$ is an estimate of the population variance. Since variances are always positive, if the null hypothesis is false, $MS_{\text{between}}$ will generally be larger than $MS_{\text{within}}$. Then the F-ratio will be larger than 1. However, if the population effect size is small, it is not unlikely that $MS_{\text{within}}$ will be larger in a given sample.
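A short simulation can make this behavior concrete. The sketch below (Python with NumPy, using hypothetical normally distributed data) draws three groups with equal means and three groups with one shifted mean; the F-ratio stays near 1 in the first case and tends to be larger in the second:

```python
# Sketch: the F-ratio is near 1 when group means are equal, larger when they differ.
import numpy as np

rng = np.random.default_rng(0)

def f_ratio(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

equal_means  = [rng.normal(0.0, 1, 20) for _ in range(3)]       # H0 true
shifted_mean = [rng.normal(m, 1, 20) for m in (0.0, 0.0, 1.5)]  # one mean differs

print(f_ratio(equal_means))    # typically close to 1
print(f_ratio(shifted_mean))   # typically well above 1
```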
The above calculations were done with groups of different sizes. If the groups are the same size, the calculations simplify somewhat and the $F$ ratio can be written as:

$$F = \dfrac{n \cdot s_{\bar{x}}^2}{s^2_{\text{pooled}}}$$

where ...
- $n$ = the sample size
- $df_{\text{numerator}} = k - 1$
- $df_{\text{denominator}} = n - k$
- $s^2_{\text{pooled}}$ = the mean of the sample variances (pooled variance)
- $s_{\bar{x}}^2$ = the variance of the sample means
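The simplified formula is easy to check numerically. Here is a minimal sketch (Python with NumPy, using hypothetical equal-sized groups):

```python
# Sketch: F-ratio for equal-sized groups,
# F = n * (variance of the sample means) / (mean of the sample variances).
import numpy as np

groups = np.array([[6.0, 7.5, 8.0],
                   [5.5, 6.0, 7.0],
                   [8.0, 9.0, 7.5]])              # 3 hypothetical groups, n = 3 each

n = groups.shape[1]                               # common group size
var_of_means = groups.mean(axis=1).var(ddof=1)    # variance of the sample means
mean_of_vars = groups.var(axis=1, ddof=1).mean()  # pooled variance

F = n * var_of_means / mean_of_vars
print(F)
```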
The data is typically put into a table for easy viewing. One-Way ANOVA results are often displayed in this manner by computer software.
| Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F |
|---|---|---|---|---|
| Factor (Between) | SS(Factor) | k - 1 | MS(Factor) = SS(Factor)/(k - 1) | F = MS(Factor)/MS(Error) |
| Error (Within) | SS(Error) | n - k | MS(Error) = SS(Error)/(n - k) | |
| Total | SS(Total) | n - 1 | | |
Three different diet plans are to be tested for mean weight loss. The entries in the table are the weight losses for the different plans. The One-Way ANOVA table is shown below.
| Plan 1 | Plan 2 | Plan 3 |
|---|---|---|
| 5 | 3.5 | 8 |
| 4.5 | 7 | 4 |
| 4 | | 3.5 |
| 3 | 4.5 | |
One-Way ANOVA Table: The formulas for SS(Total), SS(Factor) = SS(Between) and SS(Error) = SS(Within) are shown above. This same information is provided by the TI calculator hypothesis test function ANOVA in STAT TESTS (syntax is ANOVA(L1, L2, L3) where L1, L2, L3 have the data from Plan 1, Plan 2, Plan 3 respectively).
| Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F |
|---|---|---|---|---|
| Factor (Between) | SS(Factor) = SS(Between) = 2.2458 | k - 1 = 3 groups - 1 = 2 | MS(Factor) = SS(Factor)/(k - 1) = 2.2458/2 = 1.1229 | F = MS(Factor)/MS(Error) = 1.1229/2.9792 = 0.3769 |
| Error (Within) | SS(Error) = SS(Within) = 20.8542 | n - k = 10 total data - 3 groups = 7 | MS(Error) = SS(Error)/(n - k) = 20.8542/7 = 2.9792 | |
| Total | SS(Total) = 2.2458 + 20.8542 = 23.1 | n - 1 = 10 total data - 1 = 9 | | |
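The same results can also be reproduced in software other than the TI calculator. As a minimal sketch, assuming Python with SciPy (`scipy.stats.f_oneway` performs a one-way ANOVA on the raw group data):

```python
# Sketch: one-way ANOVA for the three diet plans, reproducing the table above.
from scipy import stats

plan1 = [5, 4.5, 4, 3]
plan2 = [3.5, 7, 4.5]
plan3 = [8, 4, 3.5]

result = stats.f_oneway(plan1, plan2, plan3)
print(result.statistic)   # about 0.3769, matching F in the table
print(result.pvalue)      # large p-value, so we do not reject the null hypothesis
```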
The One-Way ANOVA hypothesis test is always right-tailed because larger F-values are way out in the right tail of the F-distribution curve and tend to make us reject $H_0$.
Notation
The notation for the F distribution is $F \sim F_{df(\text{num}),\,df(\text{denom})}$, where $df(\text{num}) = df_{\text{between}}$ and $df(\text{denom}) = df_{\text{within}}$.

The mean for the F distribution is $\mu = \dfrac{df(\text{denom})}{df(\text{denom}) - 2}$.
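For the diet-plan example, $df(\text{num}) = 2$ and $df(\text{denom}) = 7$, so under the null hypothesis the statistic follows $F_{2,7}$. A brief sketch (Python with SciPy) of this notation in use:

```python
# Sketch: the F distribution for the diet-plan example, F ~ F(2, 7).
from scipy import stats

dfn, dfd = 2, 7                         # df(num) = k - 1, df(denom) = n - k

print(dfd / (dfd - 2))                  # mean of the F distribution = 1.4
print(stats.f.mean(dfn, dfd))           # same mean, computed by SciPy
print(stats.f.sf(0.3769, dfn, dfd))     # right-tail p-value for the observed F
```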