Recall that the goal of classification is to learn a mapping from the feature space, $\mathcal{X}$, to a label space, $\mathcal{Y}$. This mapping, $f:\mathcal{X} \rightarrow \mathcal{Y}$, is called a classifier. For example, we might have $\mathcal{X} = \mathbb{R}^d$ and $\mathcal{Y} = \{0,1\}$.
We can measure the loss of our classifier using 0/1 loss; i.e.,
$$\ell(f(x), y) = \mathbf{1}_{\{f(x) \neq y\}} = \begin{cases} 1, & f(x) \neq y \\ 0, & f(x) = y. \end{cases}$$
Recalling that risk is defined to be the expected value of the loss function, we have
$$R(f) = E_{XY}\left[\mathbf{1}_{\{f(X) \neq Y\}}\right] = P_{XY}\left(f(X) \neq Y\right).$$
The performance of a given classifier can be evaluated in terms of how close its risk is to the Bayes' risk.
Bayes Risk
The Bayes risk is the infimum of the risk over all classifiers:
$$R^* = \inf_{f} R(f).$$
We can prove that the Bayes risk is achieved by the Bayes classifier.
Bayes Classifier
The Bayes classifier is the following mapping:
$$f^*(x) = \begin{cases} 1, & \eta(x) \geq 1/2 \\ 0, & \text{otherwise,} \end{cases}$$
where
$$\eta(x) \equiv P_{Y|X}(Y = 1 \mid X = x).$$
Note that for any $x$, $f^*(x)$ is the value of $y \in \{0,1\}$ that maximizes $P_{Y|X}(Y = y \mid X = x)$.
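To make the definition concrete, here is a small MATLAB sketch for a hypothetical example (the model below is an assumption for illustration, not part of these notes): two equally likely classes with Gaussian class-conditional densities $p(x \mid Y=1) = N(1,1)$ and $p(x \mid Y=0) = N(-1,1)$. The sketch computes $\eta(x)$ by Bayes' rule, thresholds it at $1/2$ to get the Bayes classifier, and estimates the Bayes risk by Monte Carlo.

% sketch: Bayes classifier for a hypothetical two-Gaussian example
% (assumed model: P(Y=1)=1/2, X|Y=1 ~ N(1,1), X|Y=0 ~ N(-1,1))
n = 1e5;
y = (rand(n,1) < 0.5);                % labels, P(Y=1) = 1/2
x = (2*y - 1) + randn(n,1);           % class-conditional Gaussians

p1 = exp(-(x-1).^2/2);                % proportional to p(x|Y=1)
p0 = exp(-(x+1).^2/2);                % proportional to p(x|Y=0)
eta = p1./(p1 + p0);                  % eta(x) = P(Y=1|X=x) for equal priors

fstar = (eta >= 0.5);                 % Bayes classifier: threshold eta at 1/2
bayes_risk = mean(fstar ~= y)         % Monte Carlo estimate of P(f*(X) ~= Y)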
Theorem
Risk of the Bayes Classifier
$$R(f^*) = R^* = \inf_{f} R(f).$$
Proof: Let $g:\mathcal{X} \rightarrow \{0,1\}$ be any classifier. We will show that $R(g) - R(f^*) \geq 0$.
For any $x$,
$$\begin{aligned}
P(g(X) \neq Y \mid X = x) &= 1 - P(Y = g(x) \mid X = x) \\
&= 1 - \left[ \mathbf{1}_{\{g(x)=1\}} P(Y=1 \mid X=x) + \mathbf{1}_{\{g(x)=0\}} P(Y=0 \mid X=x) \right] \\
&= 1 - \left[ \mathbf{1}_{\{g(x)=1\}} \eta(x) + \mathbf{1}_{\{g(x)=0\}} (1 - \eta(x)) \right].
\end{aligned}$$
Next consider the difference
$$\begin{aligned}
P(g(X) \neq Y \mid X = x) - P(f^*(X) \neq Y \mid X = x)
&= \eta(x)\left[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \right] + (1 - \eta(x))\left[ \mathbf{1}_{\{f^*(x)=0\}} - \mathbf{1}_{\{g(x)=0\}} \right] \\
&= \left( 2\eta(x) - 1 \right)\left[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \right],
\end{aligned}$$
where the second equality follows by noting that $\mathbf{1}_{\{g(x)=0\}} = 1 - \mathbf{1}_{\{g(x)=1\}}$ and $\mathbf{1}_{\{f^*(x)=0\}} = 1 - \mathbf{1}_{\{f^*(x)=1\}}$.
Next recall that $f^*(x) = 1$ if and only if $\eta(x) \geq 1/2$. For $x$ such that $\eta(x) \geq 1/2$, we have $2\eta(x) - 1 \geq 0$ and $\mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \geq 0$, and for $x$ such that $\eta(x) < 1/2$, we have $2\eta(x) - 1 < 0$ and $\mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \leq 0$, which implies
$$\left( 2\eta(x) - 1 \right)\left[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \right] \geq 0 \quad \text{for all } x,$$
or
$$P(g(X) \neq Y \mid X = x) \geq P(f^*(X) \neq Y \mid X = x).$$
Taking the expectation over $X$ on both sides gives $R(g) \geq R(f^*)$.
Note that while the Bayes classifier achieves the Bayes risk, in practice this classifier is not realizable because we do not know the distribution $P_{XY}$ and so cannot construct $\eta(x)$.
Regression
The goal of regression is to learn a mapping from the input space, $\mathcal{X}$, to the output space, $\mathcal{Y}$. This mapping, $f:\mathcal{X} \rightarrow \mathcal{Y}$, is called an estimator. For example, we might have $\mathcal{X} = \mathbb{R}^d$ and $\mathcal{Y} = \mathbb{R}$.
We can measure the loss of our estimator using squared error loss; i.e.,
$$\ell(f(x), y) = (f(x) - y)^2.$$
Recalling that risk is defined to be the expected value of the loss function, we have
$$R(f) = E_{XY}\left[ (f(X) - Y)^2 \right].$$
The performance of a given estimator can be evaluated in terms of how close its risk is to the infimum of the risk over all estimators under consideration:
$$R^* = \inf_{f} R(f).$$
Theorem
Minimum Risk under Squared Error Loss (MSE)
Let $f^*(x) = E_{Y|X}[Y \mid X = x]$. Then $R(f^*) = R^*$.
Proof: For any estimator $f$,
$$\begin{aligned}
R(f) &= E_{XY}\left[ (f(X) - Y)^2 \right] \\
&= E_{XY}\left[ (f(X) - f^*(X) + f^*(X) - Y)^2 \right] \\
&= E_{X}\left[ (f(X) - f^*(X))^2 \right] + 2\,E_{XY}\left[ (f(X) - f^*(X))(f^*(X) - Y) \right] + E_{XY}\left[ (f^*(X) - Y)^2 \right] \\
&= E_{X}\left[ (f(X) - f^*(X))^2 \right] + E_{XY}\left[ (f^*(X) - Y)^2 \right],
\end{aligned}$$
since the cross term vanishes: conditioning on $X = x$, $E_{Y|X}[f^*(x) - Y \mid X = x] = 0$. The first term is nonnegative and the second does not depend on $f$. Thus if $f(x) = E_{Y|X}[Y \mid X = x]$, then the first term is zero and $R(f) = R^*$, as desired.
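As a quick numerical check of this result, the MATLAB sketch below uses an assumed model (my own choice, not from the notes): $X$ uniform on $[0,1]$ and $Y = \sin(2\pi X)$ plus Gaussian noise, so that $E[Y \mid X = x] = \sin(2\pi x)$. The empirical squared error of the conditional mean should come out smaller than that of any other candidate estimator.

% sketch: the conditional mean minimizes squared error risk
% (assumed model: X ~ Unif[0,1], Y = sin(2*pi*X) + N(0,0.25^2))
n = 1e5;
x = rand(n,1);
y = sin(2*pi*x) + 0.25*randn(n,1);

r_condmean = mean((sin(2*pi*x) - y).^2)   % risk of f*(x)=E[Y|X=x], approx 0.25^2
r_other    = mean((cos(2*pi*x) - y).^2)   % risk of a different candidate estimator
r_zero     = mean((zeros(n,1)  - y).^2)   % risk of the constant-zero estimator
% r_condmean should be the smallest of the three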
Empirical risk minimization
Empirical Risk
Let $\{X_i, Y_i\}_{i=1}^{n}$ be a collection of training data.
Then the empirical risk is defined as
$$\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} \ell(f(X_i), Y_i).$$
Empirical risk minimization is the process of choosing a learning rule which minimizes the empirical risk;
i.e.,
$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f).$$
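As a short sketch (the data and candidate rules here are made up purely for illustration), the empirical risk is just a sample average of the loss over the training data: the fraction of training errors under 0/1 loss, and the mean squared residual under squared error loss.

% sketch: empirical risk of candidate rules on (hypothetical) training data
xtrain = rand(20,1);

% classification: 0/1 loss
ytrain = double(xtrain > 0.5);                  % hypothetical binary labels
fclass = @(x) double(x > 0.4);                  % a candidate classifier
Rhat01 = mean(fclass(xtrain) ~= ytrain)         % empirical risk = training error rate

% regression: squared error loss
ytrainr = sin(2*pi*xtrain) + 0.1*randn(20,1);   % hypothetical real-valued responses
freg    = @(x) 2*x;                             % a candidate estimator
Rhatsq  = mean((freg(xtrain) - ytrainr).^2)     % empirical risk = mean squared residual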
Pattern classification
Let the set of possible classifiers be the linear classifiers
$$\mathcal{F} = \left\{ f : f(x) = \mathrm{sign}(w^T x), \; w \in \mathbb{R}^d \right\},$$
and let the feature space, $\mathcal{X}$, be $\mathbb{R}^d$ or $[0,1]^d$. If we use the notation $f_w(x) \equiv \mathrm{sign}(w^T x)$, then the set of classifiers can be alternatively represented as
$$\mathcal{F} = \left\{ f_w : w \in \mathbb{R}^d \right\}.$$
In this case, the classifier which minimizes the empirical risk is
$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f) = \arg\min_{w \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{\{f_w(X_i) \neq Y_i\}}.$$
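Minimizing the 0/1 empirical risk over all weight vectors exactly is a combinatorial problem, so the MATLAB sketch below (my own illustration; the data are synthetic) simply draws many candidate vectors w at random and keeps the one with the smallest training error.

% sketch: approximate ERM over linear classifiers f_w(x) = sign(w'*x)
% by random search over candidate weight vectors (synthetic data)
d = 2;  n = 100;
X = randn(n,d);                          % hypothetical features
Y = sign(X*[1; -1] + 0.3*randn(n,1));    % hypothetical labels in {-1,+1}

best_w = zeros(d,1);  best_Rhat = inf;
for trial = 1:1000
    w    = randn(d,1);                   % candidate weight vector
    Rhat = mean(sign(X*w) ~= Y);         % empirical risk under 0/1 loss
    if Rhat < best_Rhat
        best_Rhat = Rhat;
        best_w    = w;
    end
end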
Regression
Let the feature space be $\mathcal{X} = [0,1]$ and let the set of possible estimators be the polynomials of degree $d$,
$$\mathcal{F} = \left\{ f : f(x) = w_0 + w_1 x + \cdots + w_d x^d, \; w_0, \ldots, w_d \in \mathbb{R} \right\}.$$
In this case, the estimator which minimizes the empirical risk is
$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} (f(X_i) - Y_i)^2,$$
i.e., the least squares polynomial fit to the training data.
Suppose $\mathcal{F}$, our collection of candidate functions, is very large. We can always make $\min_{f \in \mathcal{F}} \hat{R}_n(f)$ smaller by increasing the cardinality of $\mathcal{F}$, thereby providing more possibilities to fit to the data.
Consider this extreme example: let $\mathcal{F}$ be the set of all measurable functions. Then every function $f$ for which
$$f(X_i) = Y_i, \quad i = 1, \ldots, n,$$
has zero empirical risk ($\hat{R}_n(f) = 0$). However, clearly this $f$ could be a very poor predictor of $Y$ for a new input $X$.
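The MATLAB sketch below makes this concrete, reusing the "true" function from the m-file at the end of these notes (the rest of the construction is my own): a degree n-1 polynomial interpolates the n noisy training points, so its empirical risk is essentially zero, yet its error against the true function on a fine grid is large.

% sketch: an interpolating fit has zero empirical risk but predicts poorly
n   = 10;  sig = 0.1;
ftrue = @(t) exp(-5*(t-.3).^2)+.5*exp(-100*(t-.5).^2)+.5*exp(-100*(t-.75).^2);
x   = .97*rand(n,1)+.01;
y   = ftrue(x) + sig*randn(n,1);

p    = polyfit(x, y, n-1);                  % degree n-1 polynomial interpolates the data
Rhat = mean((polyval(p,x) - y).^2)          % empirical risk: essentially zero
t    = (0:.001:1)';
err  = mean((polyval(p,t) - ftrue(t)).^2)   % error against the true function: large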
Classification overfitting
Consider the classifier in
[link] ; this demonstrates overfitting in classification. If the data were in fact generated from two Gaussian distributions centered in the upper-left and lower-right quadrants of the feature space, then the optimal classifier would be the linear classifier in
[link] ; the overfitting would result in a higher probability of error when predicting the classes of future observations.
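Although the figures themselves are not reproduced here, the effect is easy to simulate. The MATLAB sketch below (my own construction; the class means are assumptions matching the description above) compares a rule that interpolates the training data, 1-nearest-neighbor, with the linear rule on a large independent test set: the former makes no errors on its own training set but typically has a larger test error.

% sketch: overfitting in classification with two Gaussian classes
% (assumed means: class 0 in the upper-left quadrant, class 1 in the lower-right)
ntr = 25;  nte = 5000;
mu0 = [-1 1];  mu1 = [1 -1];
Xtr = [randn(ntr,2)+repmat(mu0,ntr,1); randn(ntr,2)+repmat(mu1,ntr,1)];
Ytr = [zeros(ntr,1); ones(ntr,1)];
Xte = [randn(nte,2)+repmat(mu0,nte,1); randn(nte,2)+repmat(mu1,nte,1)];
Yte = [zeros(nte,1); ones(nte,1)];

% optimal rule for this setup is linear: predict class 1 when x1 > x2
flin    = @(X) double(X(:,1) > X(:,2));
err_lin = mean(flin(Xte) ~= Yte)            % test error of the linear rule

% 1-nearest-neighbor rule: zero training error, but it overfits
Yhat = zeros(size(Yte));
for i = 1:length(Yte)
    d2      = sum((Xtr - repmat(Xte(i,:),size(Xtr,1),1)).^2, 2);
    [mn, j] = min(d2);
    Yhat(i) = Ytr(j);
end
err_nn = mean(Yhat ~= Yte)                  % typically larger than err_lin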
Regression overfitting
Below is an m-file that simulates the polynomial fitting. Feel free to play around with it to get an idea of the overfitting problem.
% poly fitting
% rob nowak 1/24/04
clear
close all

% generate and plot "true" function
t = (0:.001:1)';
f = exp(-5*(t-.3).^2)+.5*exp(-100*(t-.5).^2)+.5*exp(-100*(t-.75).^2);
figure(1)
plot(t,f)

% generate n training data & plot
n = 10;
sig = 0.1; % std of noise
x = .97*rand(n,1)+.01;
y = exp(-5*(x-.3).^2)+.5*exp(-100*(x-.5).^2)+.5*exp(-100*(x-.75).^2)+sig*randn(size(x));
figure(1)
clf
plot(t,f)
hold on
plot(x,y,'.')

% fit with polynomial of order k (poly degree up to k-1)
k = 3;
for i = 1:k
    V(:,i) = x.^(i-1);
end
p = inv(V'*V)*V'*y;
for i = 1:k
    Vt(:,i) = t.^(i-1);
end
yh = Vt*p;

figure(1)
clf
plot(t,f)
hold on
plot(x,y,'.')
plot(t,yh,'m')
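One suggested experiment (not part of the original m-file): sweep the polynomial order k after running the script above and compare the empirical risk on the training points with the error against the true function. The training error keeps shrinking as k grows, while the error against f eventually blows up.

% suggested experiment: sweep the polynomial order k
% (run after the script above so t, f, x, y, n are in the workspace)
emp_risk = zeros(n,1);  true_err = zeros(n,1);
for k = 1:n
    V = [];  Vt = [];
    for i = 1:k
        V(:,i)  = x.^(i-1);
        Vt(:,i) = t.^(i-1);
    end
    p = V\y;                                 % least squares fit of order k
    emp_risk(k) = mean((V*p  - y).^2);       % empirical risk on the training data
    true_err(k) = mean((Vt*p - f).^2);       % error against the true function
end
figure(2)
semilogy(1:n, emp_risk, 'b.-', 1:n, true_err, 'r.-')
legend('empirical risk', 'error vs true function')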