This module introduces conditional probabilities and Bayes' rule.
If $A$ and $B$ are two separate but possibly dependent random events, then:

Probability of $A$ and $B$ occurring together $= P(A, B)$

The conditional probability of $A$, given that $B$ occurs, $= P(A|B)$

The conditional probability of $B$, given that $A$ occurs, $= P(B|A)$

From elementary rules of probability (Venn diagrams):

$$P(A, B) = P(A|B)\,P(B) = P(B|A)\,P(A)$$

Dividing the right-hand pair of expressions by $P(B)$ gives Bayes' rule:

$$P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}$$
In problems of probabilistic inference, we are often trying to estimate the most probable underlying model for a random process, based on some observed data or evidence. If $M$ represents a given set of model parameters, and $D$ represents the set of observed data values, then the terms in Bayes' rule,

$$P(M|D) = \frac{P(D|M)\,P(M)}{P(D)}$$

are given the following terminology:
$P(M)$ is the prior probability of the model $M$ (in the absence of any evidence);

$P(D)$ is the probability of the evidence $D$;

$P(D|M)$ is the likelihood that the evidence $D$ was produced, given that the model was $M$;

$P(M|D)$ is the posterior probability of the model being $M$, given that the evidence is $D$.
Quite often, we try to find the model $M$ which maximizes the posterior $P(M|D)$. This is known as maximum a posteriori or MAP model selection.
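As a concrete (purely illustrative) instance of Bayes' rule and MAP selection, the short Python sketch below compares two hypothetical models given some evidence; the model names, priors, and likelihood values are made-up assumptions, not part of the problem that follows.

    # Hypothetical illustration of Bayes' rule and MAP model selection.
    # The priors P(M) and likelihoods P(D|M) below are made-up example numbers.
    priors = {"model_A": 0.7, "model_B": 0.3}
    likelihoods = {"model_A": 0.02, "model_B": 0.10}   # P(D|M) for some observed evidence D

    # Evidence probability: P(D) = sum over models of P(D|M) P(M)
    evidence = sum(likelihoods[m] * priors[m] for m in priors)

    # Posteriors: P(M|D) = P(D|M) P(M) / P(D)
    posteriors = {m: likelihoods[m] * priors[m] / evidence for m in priors}

    # MAP model selection: choose the model with the largest posterior.
    map_model = max(posteriors, key=posteriors.get)
    print(posteriors, map_model)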
The following example illustrates the concepts of Bayesian model
selection.
Loaded dice
Problem:

Given a tub containing 100 six-sided dice, in which one die is known to be loaded towards the six to a specified extent, derive an expression for the probability that, after a given set of throws, an arbitrarily chosen die is the loaded one. Assume the other 99 dice are all fair (not loaded in any way). The loaded die is known to have a specified pmf $p_L(x)$ for $x \in \{1, \ldots, 6\}$, weighted towards $x = 6$ so that $p_L(6) > \tfrac{1}{6}$.

Hence derive a good strategy for finding the loaded die from the tub.
Solution:
The pmfs of the fair dice may be assumed to be:

$$p_F(x) = \tfrac{1}{6}, \quad x = 1, \ldots, 6$$

Let each die have one of two states, $L$ if it is loaded and $F$ if it is fair. These are our two possible models for the random process, and they have underlying pmfs given by $p_L(x)$ and $p_F(x)$ respectively.
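For the code sketches in this example, the two pmfs can be held as simple Python lists indexed by face value. The fair pmf follows directly from the text; the loaded-die values below are only an assumed illustration, since the actual values are specified in the problem statement.

    # pmfs indexed by face value 1..6 (index 0 is unused, for clarity).
    # p_fair follows from the text; p_loaded is a HYPOTHETICAL die weighted towards six.
    p_fair   = [None] + [1.0 / 6.0] * 6
    p_loaded = [None, 0.1, 0.1, 0.1, 0.1, 0.1, 0.5]   # assumed values, summing to 1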
After $N$ throws of the chosen die, let the sequence of throws be $D_N = \{x_1, \ldots, x_N\}$, where each $x_i \in \{1, \ldots, 6\}$. This is our evidence.

We shall now calculate the probability that this die is the loaded one. We therefore wish to find the posterior $P(L|D_N)$.
We cannot evaluate this directly, but we can evaluate the likelihoods, $P(D_N|L)$ and $P(D_N|F)$, since we know the expected pmfs in each case. We also know the prior probabilities $P(L)$ and $P(F)$ before we have carried out any throws, and these are $P(L) = 0.01$ and $P(F) = 0.99$ since only one die in the tub of 100 is loaded. Hence we can use Bayes' rule:

$$P(L|D_N) = \frac{P(D_N|L)\,P(L)}{P(D_N)}$$
The denominator term $P(D_N)$ is there to ensure that $P(L|D_N)$ and $P(F|D_N)$ sum to unity (as they must). It can most easily be calculated from:

$$P(D_N) = P(D_N, L) + P(D_N, F) = P(D_N|L)\,P(L) + P(D_N|F)\,P(F)$$

so that

$$P(L|D_N) = \frac{P(D_N|L)\,P(L)}{P(D_N|L)\,P(L) + P(D_N|F)\,P(F)} = \frac{1}{1 + R_N}$$

where

$$R_N = \frac{P(D_N|F)\,P(F)}{P(D_N|L)\,P(L)}$$
To calculate the likelihoods, $P(D_N|L)$ and $P(D_N|F)$, we simply take the product of the probabilities of each throw occurring in the sequence of throws $D_N$, given each of the two models respectively (since each new throw is independent of all previous throws, given the model). So, after $N$ throws, these likelihoods will be given by:

$$P(D_N|L) = \prod_{i=1}^{N} p_L(x_i)$$

and

$$P(D_N|F) = \prod_{i=1}^{N} p_F(x_i) = \left(\tfrac{1}{6}\right)^{N}$$
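A minimal Python sketch of this direct calculation, reusing the p_fair and p_loaded lists assumed above and the priors $P(L) = 0.01$, $P(F) = 0.99$, might look as follows (the function name and example throws are illustrative only):

    from math import prod

    def posterior_loaded(throws, p_loaded, p_fair, prior_L=0.01):
        """P(L | D_N) for a sequence of throws (each in 1..6), computed directly."""
        prior_F = 1.0 - prior_L
        lik_L = prod(p_loaded[x] for x in throws)      # P(D_N | L)
        lik_F = prod(p_fair[x] for x in throws)        # P(D_N | F) = (1/6)^N
        evidence = lik_L * prior_L + lik_F * prior_F   # P(D_N)
        return lik_L * prior_L / evidence

    # Example: a run containing several sixes pushes the posterior up.
    print(posterior_loaded([6, 6, 3, 6, 6, 1, 6], p_loaded, p_fair))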
We can now substitute these probabilities into the above expression for $P(L|D_N)$ and include $P(L) = 0.01$ and $P(F) = 0.99$ to get the desired a posteriori probability $P(L|D_N)$ after $N$ throws, using the expression for $R_N$ above.
We may calculate this iteratively by noting that

$$P(D_N|L) = P(D_{N-1}|L)\,p_L(x_N) \quad \text{and} \quad P(D_N|F) = P(D_{N-1}|F)\,p_F(x_N)$$

so that

$$R_N = R_{N-1}\,\frac{p_F(x_N)}{p_L(x_N)}$$

where $R_0 = P(F)/P(L) = 99$. If we calculate this after every throw of the current die being tested (i.e. as $N$ increases), then we can either move on to test the next die from the tub if $P(L|D_N)$ becomes sufficiently small (say below some low threshold $T_0$) or accept the current die as the loaded one when $P(L|D_N)$ becomes large enough (say above some high threshold $T_1$ close to unity). (These thresholds correspond approximately to $R_N > 1/T_0$ and $R_N < (1 - T_1)/T_1$ respectively.)
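The iterative update of $R_N$ together with the two stopping thresholds might be coded along the following lines; the default threshold values $T_0$ and $T_1$ here are placeholders (the tradeoff is discussed next), and the die-simulation helper is hypothetical.

    import random

    def test_one_die(roll, p_loaded, p_fair, prior_L=0.01, T0=1e-4, T1=0.999, max_throws=10_000):
        """Throw one die (via roll()) until P(L|D_N) falls below T0 or rises above T1.

        Returns (decision, number_of_throws); T0 and T1 are assumed example thresholds.
        """
        R = (1.0 - prior_L) / prior_L              # R_0 = P(F)/P(L) = 99
        for n in range(1, max_throws + 1):
            x = roll()                             # next throw x_N
            R *= p_fair[x] / p_loaded[x]           # R_N = R_{N-1} * p_F(x_N) / p_L(x_N)
            posterior = 1.0 / (1.0 + R)            # P(L | D_N)
            if posterior < T0:
                return "fair", n                   # discard this die, move on to the next
            if posterior > T1:
                return "loaded", n                 # accept this die as the loaded one
        return "undecided", max_throws

    # Hypothetical usage: simulate testing a die that really is loaded.
    loaded_roll = lambda: random.choices(range(1, 7), weights=p_loaded[1:])[0]
    print(test_one_die(loaded_roll, p_loaded, p_fair))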
The choice of these thresholds for $P(L|D_N)$ is a function of the desired tradeoff between speed of searching versus the probability of failure to find the loaded die, either by moving on to the next die even when the current one is loaded, or by selecting a fair die as the loaded one.
The lower threshold, $T_0$, is the more critical, because it affects how long we spend before discarding each fair die. The probability of correctly detecting all the fair dice before the loaded die is reached is approximately $(1 - T_0)^n \approx 1 - nT_0$, where $n$ is the expected number of fair dice tested before the loaded one is found. So the failure probability due to incorrectly assuming the loaded die to be fair is approximately $nT_0$.
The upper threshold, $T_1$, is much less critical for search speed, since the loaded result only occurs once, so it is a good idea to set it very close to unity. The failure probability caused by selecting a fair die to be the loaded one is just $1 - T_1$. Hence the overall probability of failing to find the loaded die is approximately $nT_0 + (1 - T_1)$.
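For example, with illustrative threshold values $T_0 = 10^{-4}$ and $T_1 = 0.999$, and $n = 50$ fair dice tested on average before the loaded one is reached, this overall failure probability would be roughly $50 \times 10^{-4} + 0.001 = 0.006$, i.e. about 0.6%.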
In problems with significant amounts of evidence (e.g. large $N$), the evidence probability $P(D_N)$ and the likelihoods can both get extremely small, sufficient to cause floating-point underflow on many computers if expressions such as the likelihood products and Bayes' rule above are computed directly. However, the ratio of likelihood to evidence probability still remains a reasonable size and is an important quantity which must be calculated correctly.
One solution to this problem is to compute only the ratio of likelihoods, as in the iterative update for $R_N$ above. A more generally useful solution is to compute log-likelihoods instead. The product operations in the expressions for the likelihoods then become sums of logarithms. Even the calculation of likelihood ratios such as $R_N$, and their comparison with appropriate thresholds, can be done in the log domain. After this, it is OK to return to the linear domain if necessary, since $R_N$ should be a reasonable value as it is the ratio of two very small quantities.
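A log-domain version of the same sequential test, sketched under the same assumed pmfs and placeholder thresholds as above, accumulates $\log R_N$ as a sum of log-ratios and compares it against log-domain thresholds:

    from math import log

    def test_one_die_log(roll, p_loaded, p_fair, prior_L=0.01, T0=1e-4, T1=0.999, max_throws=10_000):
        """Same sequential test as before, but accumulating log R_N to avoid underflow."""
        log_R = log((1.0 - prior_L) / prior_L)          # log R_0
        # Posterior thresholds converted to thresholds on log R_N:
        log_R_high = log((1.0 - T0) / T0)               # P(L|D_N) < T0  <=>  log R_N > log_R_high
        log_R_low  = log((1.0 - T1) / T1)               # P(L|D_N) > T1  <=>  log R_N < log_R_low
        for n in range(1, max_throws + 1):
            x = roll()
            log_R += log(p_fair[x]) - log(p_loaded[x])  # products become sums of logarithms
            if log_R > log_R_high:
                return "fair", n
            if log_R < log_R_low:
                return "loaded", n
        return "undecided", max_throws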