
Pattern classification

Recall that the goal of classification is to learn a mapping from the feature space, $\mathcal{X}$, to a label space, $\mathcal{Y}$. This mapping, $f$, is called a classifier. For example, we might have

$$\mathcal{X} = \mathbb{R}^d, \qquad \mathcal{Y} = \{0, 1\}.$$

We can measure the loss of our classifier using the 0-1 loss; i.e.,

$$\ell(\hat{y}, y) = \mathbf{1}_{\{\hat{y} \neq y\}} = \begin{cases} 1, & \hat{y} \neq y \\ 0, & \hat{y} = y. \end{cases}$$

Recalling that risk is defined to be the expected value of the loss function, we have

$$R(f) = \mathbb{E}_{XY}\big[\ell(f(X), Y)\big] = \mathbb{E}_{XY}\big[\mathbf{1}_{\{f(X) \neq Y\}}\big] = P_{XY}\big(f(X) \neq Y\big).$$
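Since the 0-1 risk is simply a probability of error, it can be approximated by Monte Carlo simulation whenever we can sample from $P_{XY}$. The short MATLAB sketch below is not part of the original notes; the two-Gaussian model and the threshold classifier are assumptions made purely for illustration.

% Monte Carlo estimate of the 0-1 risk R(f) = P(f(X) ~= Y)  (illustrative sketch)
% Assumed model: Y ~ Bernoulli(1/2), X|Y=0 ~ N(0,1), X|Y=1 ~ N(1,1)
N = 1e5;                    % number of Monte Carlo samples
Y = (rand(N,1) > 0.5);      % labels in {0,1}
X = randn(N,1) + Y;         % class-conditional Gaussian features
f = @(x) (x > 0.5);         % a fixed threshold classifier
Rhat = mean(f(X) ~= Y)      % fraction of errors approximates R(f)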

The performance of a given classifier can be evaluated in terms of how close its risk is to the Bayes' risk.

Bayes Risk
The Bayes risk is the infimum of the risk over all classifiers:
$$R^* = \inf_f R(f).$$
We can prove that the Bayes risk is achieved by the Bayes classifier.
Bayes Classifier
The Bayes classifier is the following mapping:
$$f^*(x) = \begin{cases} 1, & \eta(x) \geq 1/2 \\ 0, & \text{otherwise} \end{cases}$$
where
$$\eta(x) \triangleq P_{Y|X}(Y = 1 \mid X = x).$$
Note that for any $x$, $f^*(x)$ is the value of $y \in \{0, 1\}$ that maximizes the posterior probability $P_{Y|X}(Y = y \mid X = x)$.
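When the distribution is known, $\eta(x)$ can be written down explicitly and the Bayes classifier is simply a threshold on it. The sketch below is hypothetical MATLAB code assuming the same two-Gaussian model as above; it constructs $\eta(x)$ from Bayes' rule and thresholds it at $1/2$.

% Bayes classifier when eta(x) is available in closed form  (illustrative sketch)
% Assumed model: P(Y=1) = p1, X|Y=0 ~ N(mu0,1), X|Y=1 ~ N(mu1,1)
p1 = 0.5;  mu0 = 0;  mu1 = 1;
gauss = @(x,mu) exp(-(x-mu).^2/2)/sqrt(2*pi);                  % N(mu,1) density
eta   = @(x) p1*gauss(x,mu1) ./ ...
             (p1*gauss(x,mu1) + (1-p1)*gauss(x,mu0));          % P(Y=1 | X=x)
fstar = @(x) (eta(x) >= 0.5);                                  % predict 1 iff eta(x) >= 1/2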
Theorem

Risk of the Bayes classifier

$$R(f^*) = R^*.$$

Proof. Let $g(x)$ be any classifier. We will show that

$$P(g(X) \neq Y \mid X = x) \geq P(f^*(x) \neq Y \mid X = x).$$

For any g ,

$$\begin{aligned}
P(g(X) \neq Y \mid X = x) &= 1 - P(Y = g(X) \mid X = x) \\
&= 1 - \big[ P(Y = 1, g(X) = 1 \mid X = x) + P(Y = 0, g(X) = 0 \mid X = x) \big] \\
&= 1 - \big[ \mathbb{E}[\mathbf{1}_{\{Y=1\}} \mathbf{1}_{\{g(X)=1\}} \mid X = x] + \mathbb{E}[\mathbf{1}_{\{Y=0\}} \mathbf{1}_{\{g(X)=0\}} \mid X = x] \big] \\
&= 1 - \big[ \mathbf{1}_{\{g(x)=1\}} \mathbb{E}[\mathbf{1}_{\{Y=1\}} \mid X = x] + \mathbf{1}_{\{g(x)=0\}} \mathbb{E}[\mathbf{1}_{\{Y=0\}} \mid X = x] \big] \\
&= 1 - \big[ \mathbf{1}_{\{g(x)=1\}} P(Y = 1 \mid X = x) + \mathbf{1}_{\{g(x)=0\}} P(Y = 0 \mid X = x) \big] \\
&= 1 - \big[ \mathbf{1}_{\{g(x)=1\}} \eta(x) + \mathbf{1}_{\{g(x)=0\}} (1 - \eta(x)) \big].
\end{aligned}$$

Next consider the difference

$$\begin{aligned}
P(g(x) \neq Y \mid X = x) - P(f^*(x) \neq Y \mid X = x)
&= \eta(x) \big[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \big] + (1 - \eta(x)) \big[ \mathbf{1}_{\{f^*(x)=0\}} - \mathbf{1}_{\{g(x)=0\}} \big] \\
&= \eta(x) \big[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \big] - (1 - \eta(x)) \big[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \big] \\
&= \big( 2\eta(x) - 1 \big) \big[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \big],
\end{aligned}$$

where the second equality follows by noting that $\mathbf{1}_{\{h(x)=0\}} = 1 - \mathbf{1}_{\{h(x)=1\}}$ for any classifier $h$ (applied to both $f^*$ and $g$). Next recall

$$f^*(x) = \begin{cases} 1, & \eta(x) \geq 1/2 \\ 0, & \text{otherwise}. \end{cases}$$

For $x$ such that $\eta(x) \geq 1/2$, we have

$$2\eta(x) - 1 \geq 0 \quad \text{and} \quad \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} = 1 - \mathbf{1}_{\{g(x)=1\}} \in \{0, 1\},$$

and for x such that η ( x ) < 1 / 2 , we have

$$2\eta(x) - 1 < 0 \quad \text{and} \quad \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} = 0 - \mathbf{1}_{\{g(x)=1\}} \in \{-1, 0\},$$

which implies

$$\big( 2\eta(x) - 1 \big) \big[ \mathbf{1}_{\{f^*(x)=1\}} - \mathbf{1}_{\{g(x)=1\}} \big] \geq 0$$

or

$$P(g(X) \neq Y \mid X = x) \geq P(f^*(x) \neq Y \mid X = x).$$

Taking the expectation over $X$ then gives $R(g) \geq R(f^*)$ for every classifier $g$, so $R(f^*) = R^*$.

Note that while the Bayes classifier achieves the Bayes risk, in practice this classifier is not realizable because we do not know the distribution $P_{XY}$ and so cannot construct $\eta(x)$.
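The theorem can also be checked numerically in a toy setting where $\eta(x)$ is known. In the hypothetical two-Gaussian model used above, $\eta(x) \geq 1/2$ exactly when $x \geq 1/2$, and a quick Monte Carlo comparison (an illustrative sketch, not part of the original notes) shows that any other threshold incurs a larger error probability:

% Monte Carlo check that the Bayes classifier has the smallest risk  (illustrative sketch)
% Assumed model: P(Y=1) = 1/2, X|Y=0 ~ N(0,1), X|Y=1 ~ N(1,1); here f*(x) = 1{x >= 1/2}
N = 1e6;
Y = (rand(N,1) > 0.5);
X = randn(N,1) + Y;
R_bayes = mean((X >= 0.5) ~= Y)     % approximates the Bayes risk R*
R_other = mean((X >= 1.5) ~= Y)     % risk of a suboptimal threshold; larger than R_bayes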

Regression

The goal of regression is to learn a mapping from the input space, $\mathcal{X}$, to the output space, $\mathcal{Y}$. This mapping, $f$, is called an estimator. For example, we might have

$$\mathcal{X} = \mathbb{R}^d, \qquad \mathcal{Y} = \mathbb{R}.$$

We can measure the loss of our estimator using squared error loss; i.e.,

$$\ell(\hat{y}, y) = (y - \hat{y})^2.$$

Recalling that risk is defined to be the expected value of the loss function, we have

$$R(f) = \mathbb{E}_{XY}\big[\ell(f(X), Y)\big] = \mathbb{E}_{XY}\big[(f(X) - Y)^2\big].$$

The performance of a given estimator can be evaluated in terms of how close its risk is to the infimum of the risk over all estimators under consideration:

$$R^* = \inf_f R(f).$$
Theorem

Minimum risk under squared error loss (MSE)

Let $f^*(x) = \mathbb{E}_{Y|X}[Y \mid X = x]$. Then

$$R(f^*) = R^*.$$

Proof. For any estimator $f$,

$$\begin{aligned}
R(f) &= \mathbb{E}_{XY}\big[ (f(X) - Y)^2 \big] \\
&= \mathbb{E}_X\Big[ \mathbb{E}_{Y|X}\big[ (f(X) - Y)^2 \mid X \big] \Big] \\
&= \mathbb{E}_X\Big[ \mathbb{E}_{Y|X}\big[ (f(X) - \mathbb{E}_{Y|X}[Y|X] + \mathbb{E}_{Y|X}[Y|X] - Y)^2 \mid X \big] \Big] \\
&= \mathbb{E}_X\Big[ \mathbb{E}_{Y|X}\big[ (f(X) - \mathbb{E}_{Y|X}[Y|X])^2 \mid X \big] + 2\,\mathbb{E}_{Y|X}\big[ (f(X) - \mathbb{E}_{Y|X}[Y|X])(\mathbb{E}_{Y|X}[Y|X] - Y) \mid X \big] + \mathbb{E}_{Y|X}\big[ (\mathbb{E}_{Y|X}[Y|X] - Y)^2 \mid X \big] \Big] \\
&= \mathbb{E}_X\Big[ \mathbb{E}_{Y|X}\big[ (f(X) - \mathbb{E}_{Y|X}[Y|X])^2 \mid X \big] + 2\,(f(X) - \mathbb{E}_{Y|X}[Y|X]) \times 0 + \mathbb{E}_{Y|X}\big[ (\mathbb{E}_{Y|X}[Y|X] - Y)^2 \mid X \big] \Big] \\
&= \mathbb{E}_{XY}\big[ (f(X) - \mathbb{E}_{Y|X}[Y|X])^2 \big] + R(f^*).
\end{aligned}$$

Since the first term is nonnegative, $R(f) \geq R(f^*)$ for every estimator $f$; thus $f^*(x) = \mathbb{E}_{Y|X}[Y \mid X = x]$ achieves the minimum risk, $R(f^*) = R^*$, as desired.
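A quick numerical sanity check of this theorem can be done by simulation. The sketch below is hypothetical MATLAB code; the particular regression function and noise level are assumptions made only for illustration. It compares the estimated risk of the conditional mean with that of a different estimator.

% Numerical check that the conditional mean minimizes squared-error risk  (sketch)
% Assumed model: X ~ Unif[0,1], Y = sin(2*pi*X) + 0.3*N(0,1), so E[Y|X=x] = sin(2*pi*x)
N = 1e5;
X = rand(N,1);
Y = sin(2*pi*X) + 0.3*randn(N,1);
fstar = @(x) sin(2*pi*x);              % the conditional mean
g     = @(x) 2*x - 1;                  % some other estimator, for comparison
R_fstar = mean((fstar(X) - Y).^2)      % approximately 0.09, the noise variance
R_g     = mean((g(X)     - Y).^2)      % strictly larger than R_fstar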

Empirical risk minimization

Empirical Risk
Let $\{X_i, Y_i\}_{i=1}^n \overset{iid}{\sim} P_{XY}$ be a collection of training data. Then the empirical risk is defined as
$$\widehat{R}_n(f) = \frac{1}{n} \sum_{i=1}^n \ell(f(X_i), Y_i).$$
Empirical risk minimization is the process of choosing a learning rule which minimizes the empirical risk; i.e.,
$$\widehat{f}_n = \arg\min_{f \in \mathcal{F}} \widehat{R}_n(f).$$
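As a concrete illustration, the sketch below (hypothetical MATLAB code; the finite class of simple threshold rules is an assumption made only for this example) computes the empirical risk of every classifier in a small class and returns the empirical risk minimizer.

% Empirical risk minimization over a finite class of threshold classifiers  (sketch)
n = 50;
X = rand(n,1);
Y = (X + 0.2*randn(n,1) > 0.5);                  % hypothetical training labels in {0,1}
thresholds = 0:0.01:1;                           % F = { x -> 1{x > t} : t in [0,1] }
Rhat = zeros(size(thresholds));
for j = 1:length(thresholds)
    Rhat(j) = mean((X > thresholds(j)) ~= Y);    % empirical 0-1 risk of each candidate
end
[Rmin, jstar] = min(Rhat);
t_hat = thresholds(jstar)                        % the empirical risk minimizer in F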

Pattern classification

Let the set of possible classifiers be

$$\mathcal{F} = \big\{ x \mapsto \operatorname{sign}(w'x) \,:\, w \in \mathbb{R}^d \big\}$$

and let the feature space, $\mathcal{X}$, be $[0,1]^d$ or $\mathbb{R}^d$ (with the labels taken to be $\mathcal{Y} = \{-1, +1\}$ so that they match the range of the sign function). If we use the notation $f_w(x) \equiv \operatorname{sign}(w'x)$, then the set of classifiers can be alternatively represented as

$$\mathcal{F} = \big\{ f_w \,:\, w \in \mathbb{R}^d \big\}.$$

In this case, the classifier which minimizes the empirical risk is

$$\widehat{f}_n = \arg\min_{f \in \mathcal{F}} \widehat{R}_n(f) = \arg\min_{w \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{\operatorname{sign}(w'X_i) \neq Y_i\}}.$$
Example linear classifier for two-class problem.
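Minimizing the empirical 0-1 risk over $w$ is a combinatorial problem with no closed-form solution. The sketch below is hypothetical MATLAB code; random search is used only to make the objective concrete, not as a recommended algorithm. It scores many candidate directions $w$ and keeps the one with the smallest empirical risk.

% Approximate ERM for linear classifiers by random search  (illustrative sketch)
% Labels are taken in {-1,+1} so that sign(w'x) matches the label space
n = 100;  d = 2;
X = randn(n,d);                                   % hypothetical feature vectors (rows)
Y = sign(X(:,1) - X(:,2) + 0.5*randn(n,1));       % hypothetical labels
best_w = zeros(d,1);  best_R = inf;
for trial = 1:1000
    w = randn(d,1);                               % random candidate direction
    Rhat = mean(sign(X*w) ~= Y);                  % empirical 0-1 risk of f_w
    if Rhat < best_R
        best_R = Rhat;  best_w = w;               % keep the best candidate so far
    end
end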

Regression

Let the feature space be

$$\mathcal{X} = [0, 1]$$

and let the set of possible estimators be

$$\mathcal{F} = \big\{ \text{degree-}d \text{ polynomials on } [0, 1] \big\}.$$

In this case, the estimator which minimizes the empirical risk is

$$\widehat{f}_n = \arg\min_{f \in \mathcal{F}} \widehat{R}_n(f) = \arg\min_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n (f(X_i) - Y_i)^2.$$

Alternatively, this can be expressed as

$$\widehat{w} = \arg\min_{w \in \mathbb{R}^{d+1}} \frac{1}{n} \sum_{i=1}^n (w_0 + w_1 X_i + \cdots + w_d X_i^d - Y_i)^2 = \arg\min_{w \in \mathbb{R}^{d+1}} \| V w - Y \|^2$$

where V is the Vandermonde matrix

$$V = \begin{bmatrix} 1 & X_1 & \cdots & X_1^d \\ 1 & X_2 & \cdots & X_2^d \\ \vdots & \vdots & & \vdots \\ 1 & X_n & \cdots & X_n^d \end{bmatrix}.$$

The pseudoinverse can be used to solve for w ^ :

$$\widehat{w} = (V'V)^{-1} V' Y.$$
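In MATLAB this computation takes only a few lines. The sketch below uses hypothetical data; the degree and sample size are arbitrary choices. The backslash operator returns the same minimizer as the explicit formula $(V'V)^{-1}V'Y$ but is numerically more stable.

% Least-squares polynomial fit via the Vandermonde matrix  (illustrative sketch)
n = 20;  d = 3;                                   % hypothetical sample size and degree
X = rand(n,1);
Y = exp(-5*(X-.3).^2) + 0.1*randn(n,1);           % hypothetical noisy observations
V = ones(n, d+1);
for j = 1:d
    V(:,j+1) = X.^j;                              % columns 1, X, X.^2, ..., X.^d
end
w_hat = (V'*V) \ (V'*Y);                          % same solution as inv(V'*V)*V'*Y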

A polynomial estimate is displayed in the figure below.

Example polynomial estimator. Blue curve denotes f * , magenta curve is the polynomial fit to the data (denoted by dots).

Overfitting

Suppose $\mathcal{F}$, our collection of candidate functions, is very large. We can always make

$$\min_{f \in \mathcal{F}} \widehat{R}_n(f)$$

smaller by increasing the cardinality of $\mathcal{F}$, thereby providing more possibilities to fit to the data.

Consider this extreme example: Let $\mathcal{F}$ be the set of all measurable functions. Then every function $f$ for which

$$f(x) = \begin{cases} Y_i, & x = X_i \text{ for } i = 1, \ldots, n \\ \text{any value}, & \text{otherwise} \end{cases}$$

has zero empirical risk ($\widehat{R}_n(f) = 0$). However, clearly this could be a very poor predictor of $Y$ for a new input $X$.
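The point is easy to see numerically: a memorizer reproduces the training labels exactly but carries no information about new inputs. The sketch below uses hypothetical data and squared-error loss purely for illustration.

% A memorizing predictor: zero empirical risk, poor prediction of new data  (sketch)
n = 20;
X = rand(n,1);  Y = randn(n,1);            % hypothetical training data
% On the training points the memorizer outputs Y_i exactly:
Rhat_train = mean((Y - Y).^2)              % = 0 by construction
% On new inputs it outputs an arbitrary value (here 0):
Xnew = rand(n,1);  Ynew = randn(n,1);
R_new = mean((0 - Ynew).^2)                % roughly Var(Y), nowhere near zero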

Classification overfitting

Consider the classifier shown in the figure below; this demonstrates overfitting in classification. If the data were in fact generated from two Gaussian distributions centered in the upper-left and lower-right quadrants of the feature space, then the optimal classifier would be linear (as in the linear-classifier figure above); the overfitting would result in a higher probability of error when predicting the classes of future observations.

Example of overfitting classifier. The classifier's decision boundary wiggles around in order to correctly label the training data, but the optimal Bayes classifier is a straight line.

Regression overfitting

Below is an m-file that simulates the polynomial fitting. Feel free to play around with it to get an idea of the overfitting problem.

% poly fitting
% rob nowak  1/24/04
clear
close all

% generate and plot "true" function
t = (0:.001:1)';
f = exp(-5*(t-.3).^2)+.5*exp(-100*(t-.5).^2)+.5*exp(-100*(t-.75).^2);
figure(1)
plot(t,f)

% generate n training data & plot
n = 10;
sig = 0.1;  % std of noise
x = .97*rand(n,1)+.01;
y = exp(-5*(x-.3).^2)+.5*exp(-100*(x-.5).^2)+.5*exp(-100*(x-.75).^2)+sig*randn(size(x));
figure(1)
clf
plot(t,f)
hold on
plot(x,y,'.')

% fit with polynomial of order k (poly degree up to k-1)
k = 3;
for i = 1:k
    V(:,i) = x.^(i-1);
end
p = inv(V'*V)*V'*y;

% evaluate the fitted polynomial on the fine grid and plot it over the data
for i = 1:k
    Vt(:,i) = t.^(i-1);
end
yh = Vt*p;
figure(1)
clf
plot(t,f)
hold on
plot(x,y,'.')
plot(t,yh,'m')
Example polynomial fitting problem. Blue curve is f*, magenta curve is the polynomial fit to the data (dots). (a) Fitting a polynomial of degree d = 0: this is an example of underfitting. (b) d = 2. (c) d = 4. (d) d = 6: this is an example of overfitting. The empirical loss is zero, but clearly the estimator would not do a good job of predicting y when x is close to one.

Source:  OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009 Download for free at http://cnx.org/content/col10532/1.3