
Introduction

Motivation

In the last lecture we considered a learning problem in which the optimal function belonged to a finite class of functions. Specifically, for some collection of functions $F$ with finite cardinality $|F|$, we had

$$\min_{f \in F} R(f) = 0, \quad \text{i.e., } f^* \in F.$$

This is almost never the situation in real-world learning problems. Let us again suppose we have a finite collection of candidate functions $F$. This time, however, we do not assume that the optimal function $f^*$, which satisfies

$$R(f^*) = \inf_{f} R(f),$$

where the infimum is taken over all measurable functions, is a member of $F$. That is, we make few, if any, assumptions about $f^*$. This situation is sometimes termed Agnostic Learning. The root of the word agnostic literally means not known. The term agnostic learning emphasizes the fact that often, perhaps usually, we may have no prior knowledge about $f^*$. The question then arises: how can we reasonably select an $f \in F$ in this setting?

The problem

The PAC-style bounds discussed in the previous lecture offer some help. Since we are selecting a function based on the empirical risk, the question is how close $\hat{R}_n(f)$ is to $R(f)$ for every $f \in F$. In other words, we would like the empirical risk to be a good indicator of the true risk for every function in $F$. If this is the case, then the selection of the $f$ that minimizes the empirical risk,

$$\hat{f}_n = \arg\min_{f \in F} \hat{R}_n(f),$$

should also yield a small true risk; that is, $R(\hat{f}_n)$ should be close to $\min_{f \in F} R(f)$. We can thus state our desired situation as

$$P\left(\max_{f \in F} \left|\hat{R}_n(f) - R(f)\right| > \epsilon\right) < \delta,$$

for small values of $\epsilon$ and $\delta$. In other words, with probability at least $1 - \delta$, we have $|\hat{R}_n(f) - R(f)| \le \epsilon$ for all $f \in F$. In this lecture, we will start to develop bounds of this form. First we will focus on bounding $P(|\hat{R}_n(f) - R(f)| > \epsilon)$ for one fixed $f \in F$.
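To make the uniform deviation concrete, here is a minimal Python sketch that estimates $P(\max_{f \in F} |\hat{R}_n(f) - R(f)| > \epsilon)$ by Monte Carlo. The data-generating model, the finite class of threshold classifiers, and the choices of $n$ and $\epsilon$ are illustrative assumptions, not part of the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model (an assumption for this sketch): X ~ Uniform[0, 1],
# clean label 1{X > 0.5}, flipped with probability 0.2.
def sample(n):
    X = rng.uniform(0, 1, n)
    Y = (X > 0.5).astype(int)
    flip = rng.uniform(0, 1, n) < 0.2
    Y[flip] = 1 - Y[flip]
    return X, Y

# Finite class F: threshold classifiers f_t(x) = 1{x > t}.
thresholds = np.linspace(0, 1, 21)

def true_risk(t):
    # P(f_t(X) != Y) for the model above: error rate 0.2 where f_t agrees
    # with the clean label, 0.8 on the disagreement region of mass |t - 0.5|.
    wrong_mass = abs(t - 0.5)
    return 0.2 * (1 - wrong_mass) + 0.8 * wrong_mass

R = np.array([true_risk(t) for t in thresholds])

n, eps, trials = 200, 0.1, 1000
exceed = 0
for _ in range(trials):
    X, Y = sample(n)
    # Empirical risk of every f in F under the 0/1 loss.
    R_hat = np.array([np.mean((X > t).astype(int) != Y) for t in thresholds])
    if np.max(np.abs(R_hat - R)) > eps:
        exceed += 1

print(f"estimated P(max_f |R_hat(f) - R(f)| > {eps}) = {exceed / trials:.3f}")
```

Running this for several values of $n$ shows the probability of a large uniform deviation shrinking as the sample size grows, which is exactly the behavior the bounds in this lecture quantify.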

Developing initial bounds

To begin, let us recall the definition of empirical risk. Let $\{X_i, Y_i\}_{i=1}^n$ be a collection of training data. Then the empirical risk is defined as

$$\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^n \ell(f(X_i), Y_i).$$

Note that since the training data $\{X_i, Y_i\}_{i=1}^n$ are assumed to be i.i.d. pairs, the terms in the sum are i.i.d. random variables.

Let

$$L_i = \ell(f(X_i), Y_i).$$

The collection of losses $\{L_i\}_{i=1}^n$ is i.i.d. according to some unknown distribution (depending on the unknown joint distribution of $(X, Y)$ and the loss function). The expectation of $L_i$ is $E[\ell(f(X_i), Y_i)] = E[\ell(f(X), Y)] = R(f)$, the true risk of $f$. For now, let us assume that $f$ is fixed. Then

$$E[\hat{R}_n(f)] = \frac{1}{n} \sum_{i=1}^n E[\ell(f(X_i), Y_i)] = \frac{1}{n} \sum_{i=1}^n E[L_i] = R(f).$$

We know from the strong law of large numbers that the average (or empirical mean) $\hat{R}_n(f)$ converges almost surely to the true mean $R(f)$. That is, $\hat{R}_n(f) \to R(f)$ almost surely as $n \to \infty$. The question is how fast.
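As a quick numerical illustration of this convergence, one can watch $\hat{R}_n(f)$ approach $R(f)$ as $n$ grows. The Bernoulli loss distribution and the value of $R(f)$ below are assumptions chosen only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Suppose the losses L_i = loss(f(X_i), Y_i) for a fixed f are Bernoulli(p):
# under the 0/1 loss this means f misclassifies with probability p = R(f).
p = 0.3  # illustrative value of the true risk R(f)

for n in [10, 100, 1000, 10000, 100000]:
    L = rng.binomial(1, p, size=n)   # i.i.d. losses
    R_hat = L.mean()                 # empirical risk R_hat_n(f)
    print(f"n = {n:>6d}   R_hat_n(f) = {R_hat:.4f}   error = {abs(R_hat - p):.4f}")
```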

Concentration of measure inequalities

Concentration inequalities are upper bounds on how fast empirical means converge to their ensemble counterparts, in probability. The area of the shaded tail regions in Figure 1 is $P(|\hat{R}_n(f) - R(f)| > \epsilon)$. We are interested in how fast this probability tends to zero as $n \to \infty$.

Figure 1: Distribution of $\hat{R}_n(f)$.

At this stage, we recall Markov's Inequality. Let $Z$ be a nonnegative random variable with density $p(z)$. Then for any $t > 0$,

$$E[Z] = \int_0^\infty z\, p(z)\, dz = \int_0^t z\, p(z)\, dz + \int_t^\infty z\, p(z)\, dz \ge 0 + \int_t^\infty t\, p(z)\, dz = t\, P(Z \ge t),$$

so that

$$P(Z \ge t) \le \frac{E[Z]}{t}, \qquad \text{and likewise} \qquad P(Z^2 \ge t^2) \le \frac{E[Z^2]}{t^2}.$$
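A small numerical sanity check of these bounds is sketched below; the exponential distribution of $Z$ is an arbitrary choice of nonnegative random variable, not something fixed by the lecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Z >= 0: take Z ~ Exponential(1), so E[Z] = 1 and E[Z^2] = 2.
Z = rng.exponential(scale=1.0, size=1_000_000)

for t in [1.0, 2.0, 4.0, 8.0]:
    tail = np.mean(Z >= t)            # empirical P(Z >= t)
    markov = Z.mean() / t             # Markov bound E[Z]/t
    second = np.mean(Z**2) / t**2     # second-moment bound E[Z^2]/t^2
    print(f"t = {t:4.1f}   P(Z >= t) = {tail:.4f}   E[Z]/t = {markov:.4f}   E[Z^2]/t^2 = {second:.4f}")
```

In every row the empirical tail probability sits below both bounds, with the second-moment bound typically tighter for large $t$; this is the basic mechanism that will be sharpened into faster-decaying concentration bounds.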


Source:  OpenStax, Statistical learning theory. OpenStax CNX. Apr 10, 2009 Download for free at http://cnx.org/content/col10532/1.3
