
Recap: classifier design

Given a set of training data $\{X_i, Y_i\}_{i=1}^n$ and a finite collection of candidate functions $\mathcal{F}$, select $\hat{f}_n \in \mathcal{F}$ that (hopefully) is a good predictor for future cases. That is,

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f),$$

where $\hat{R}_n(f)$ is the empirical risk. For any particular $f \in \mathcal{F}$, the corresponding empirical risk is defined as

$$\hat{R}_n(f) = \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{\{f(X_i) \neq Y_i\}}.$$
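
As a concrete illustration (an addition, not part of the original notes), here is a minimal NumPy sketch of the empirical risk; the classifier `f` and the sample `(X, Y)` below are hypothetical stand-ins:

```python
import numpy as np

def empirical_risk(f, X, Y):
    """Fraction of training points that classifier f mislabels."""
    predictions = np.array([f(x) for x in X])
    return np.mean(predictions != Y)

# Hypothetical example: a threshold classifier on 1-D features.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=100)
Y = (X > 0.5).astype(int)
f = lambda x: int(x > 0.4)      # one candidate classifier from F
print(empirical_risk(f, X, Y))  # the empirical risk R_hat_n(f)
```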

Hoeffding's inequality

Hoeffding's inequality (Chernoff's bound in this case) allows us to gauge how close $\hat{R}_n(f)$ is to the true risk of $f$, $R(f)$, in probability:

$$P\left(\left|\hat{R}_n(f) - R(f)\right| \geq \epsilon\right) \leq 2 e^{-2 n \epsilon^2}.$$
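
The bound can be checked numerically. The sketch below (an addition, not from the notes) uses the fact that the losses $\mathbf{1}_{\{f(X_i) \neq Y_i\}}$ of a single fixed classifier are i.i.d. Bernoulli($p$) with $p = R(f)$, so the empirical risk is a scaled Binomial; all numbers are arbitrary toy values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, eps, trials = 200, 0.3, 0.05, 100_000  # assumed toy values

# The sum of the n Bernoulli(p) losses is Binomial(n, p).
counts = rng.binomial(n, p, size=trials)
deviations = np.abs(counts / n - p)

empirical = np.mean(deviations >= eps)
hoeffding = 2 * np.exp(-2 * n * eps**2)
print(f"P(|R_hat - R| >= eps) ~ {empirical:.3f} <= bound {hoeffding:.3f}")
```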

Since our selection process involves deciding among all $f \in \mathcal{F}$, we would like to gauge how close the empirical risks are to their expected values. We can do this by studying the probability that one or more of the empirical risks deviates significantly from its expected value. This is captured by the probability

$$P\left(\max_{f \in \mathcal{F}} \left|\hat{R}_n(f) - R(f)\right| \geq \epsilon\right).$$

Note that the event

$$\left\{\max_{f \in \mathcal{F}} \left|\hat{R}_n(f) - R(f)\right| \geq \epsilon\right\}$$

is equivalent to the union of the events

$$\bigcup_{f \in \mathcal{F}} \left\{\left|\hat{R}_n(f) - R(f)\right| \geq \epsilon\right\}.$$

Therefore, we can use Bonferroni's bound (a.k.a. the “union of events” or “union” bound) to obtain

$$\begin{aligned}
P\left(\max_{f \in \mathcal{F}} \left|\hat{R}_n(f) - R(f)\right| \geq \epsilon\right) &= P\left(\bigcup_{f \in \mathcal{F}} \left\{\left|\hat{R}_n(f) - R(f)\right| \geq \epsilon\right\}\right) \\
&\leq \sum_{f \in \mathcal{F}} P\left(\left|\hat{R}_n(f) - R(f)\right| \geq \epsilon\right) \\
&\leq \sum_{f \in \mathcal{F}} 2 e^{-2 n \epsilon^2} = 2 |\mathcal{F}| e^{-2 n \epsilon^2},
\end{aligned}$$
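
The same kind of simulation extends to a finite family (again an illustrative addition; modeling the $|\mathcal{F}|$ empirical risks as independent Binomials is a simplifying assumption, whereas the union bound itself requires no independence):

```python
import numpy as np

rng = np.random.default_rng(2)
n, eps, trials = 200, 0.1, 20_000
true_risks = np.linspace(0.1, 0.5, 16)  # hypothetical R(f) for |F| = 16

# Simulate each classifier's empirical risk as an independent
# Binomial(n, R(f)) / n (a simplifying assumption for illustration).
counts = rng.binomial(n, true_risks, size=(trials, true_risks.size))
max_dev = np.abs(counts / n - true_risks).max(axis=1)

empirical = np.mean(max_dev >= eps)
union_bound = 2 * true_risks.size * np.exp(-2 * n * eps**2)
print(f"P(max dev >= eps) ~ {empirical:.4f} <= bound {union_bound:.4f}")
```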

where $|\mathcal{F}|$ is the number of classifiers in $\mathcal{F}$. In the proof of Hoeffding's inequality we also obtained a one-sided inequality that implies

$$P\left(R(f) - \hat{R}_n(f) \geq \epsilon\right) \leq e^{-2 n \epsilon^2},$$

and hence

$$P\left(\max_{f \in \mathcal{F}} \left(R(f) - \hat{R}_n(f)\right) \geq \epsilon\right) \leq |\mathcal{F}| e^{-2 n \epsilon^2}.$$

We can restate the inequality above as follows: for all $f \in \mathcal{F}$ and for all $\delta > 0$, with probability at least $1 - \delta$,

$$R(f) \leq \hat{R}_n(f) + \sqrt{\frac{\log |\mathcal{F}| + \log(1/\delta)}{2n}}.$$

This follows by setting $\delta = |\mathcal{F}| e^{-2 n \epsilon^2}$ and solving for $\epsilon$. Thus, with high probability ($1 - \delta$), the true risk of every $f \in \mathcal{F}$ is bounded by its empirical risk plus a constant that depends on $\delta > 0$, the number of training samples $n$, and the size of $\mathcal{F}$. Most importantly, the bound does not depend on the unknown distribution $P_{XY}$. Therefore, we call this a distribution-free bound.
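
For a feel of the numbers, here is a small helper (an addition; the values of $|\mathcal{F}|$, $n$, and $\delta$ are hypothetical) that evaluates the penalty term:

```python
import numpy as np

def penalty(F_size, n, delta):
    """Distribution-free penalty sqrt((log|F| + log(1/delta)) / (2n))."""
    return np.sqrt((np.log(F_size) + np.log(1 / delta)) / (2 * n))

# Hypothetical numbers: with |F| = 1024, n = 5000, delta = 0.01, every
# f in F satisfies R(f) <= R_hat_n(f) + penalty with prob. >= 0.99.
print(penalty(1024, 5000, 0.01))  # ~0.034
```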

Error bounds

We can use the distribution-free bound above to obtain a bound on the expected performance of the minimum empirical risk classifier

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \hat{R}_n(f).$$

We are interested in bounding

$$E[R(\hat{f}_n)] - \min_{f \in \mathcal{F}} R(f),$$

the expected risk of $\hat{f}_n$ minus the minimum risk over all $f \in \mathcal{F}$. Note that this difference is always non-negative, since $\hat{f}_n$ is at best as good as

$$f^* = \arg\min_{f \in \mathcal{F}} R(f).$$

Recall that for all $f \in \mathcal{F}$ and $\delta > 0$, with probability at least $1 - \delta$,

$$R(f) \leq \hat{R}_n(f) + C(\mathcal{F}, n, \delta),$$

where

$$C(\mathcal{F}, n, \delta) = \sqrt{\frac{\log |\mathcal{F}| + \log(1/\delta)}{2n}}.$$

In particular, since this holds for all $f \in \mathcal{F}$, including $\hat{f}_n$,

$$R(\hat{f}_n) \leq \hat{R}_n(\hat{f}_n) + C(\mathcal{F}, n, \delta),$$

and for any other $f \in \mathcal{F}$,

$$R(\hat{f}_n) \leq \hat{R}_n(f) + C(\mathcal{F}, n, \delta),$$

since $\hat{R}_n(\hat{f}_n) \leq \hat{R}_n(f)$ for all $f \in \mathcal{F}$. In particular,

$$R(\hat{f}_n) \leq \hat{R}_n(f^*) + C(\mathcal{F}, n, \delta),$$

where $f^* = \arg\min_{f \in \mathcal{F}} R(f)$.

Let $\Omega$ denote the event on which the above inequality holds. Then by definition

$$P(\Omega) \geq 1 - \delta.$$

We can now bound $E[R(\hat{f}_n)] - R(f^*)$ as follows:

$$E[R(\hat{f}_n)] - R(f^*) = E[R(\hat{f}_n) - \hat{R}_n(f^*) + \hat{R}_n(f^*) - R(f^*)] = E[R(\hat{f}_n) - \hat{R}_n(f^*)],$$

since $E[\hat{R}_n(f^*)] = R(f^*)$. The quantity above is bounded as follows:

$$\begin{aligned}
E[R(\hat{f}_n) - \hat{R}_n(f^*)] &= E[R(\hat{f}_n) - \hat{R}_n(f^*) \mid \Omega] \, P(\Omega) + E[R(\hat{f}_n) - \hat{R}_n(f^*) \mid \bar{\Omega}] \, P(\bar{\Omega}) \\
&\leq E[R(\hat{f}_n) - \hat{R}_n(f^*) \mid \Omega] + \delta,
\end{aligned}$$

since $P(\Omega) \leq 1$, $1 - P(\Omega) \leq \delta$, and $R(\hat{f}_n) - \hat{R}_n(f^*) \leq 1$. Furthermore,

$$E[R(\hat{f}_n) - \hat{R}_n(f^*) \mid \Omega] \leq E[R(\hat{f}_n) - \hat{R}_n(\hat{f}_n) \mid \Omega] \leq C(\mathcal{F}, n, \delta),$$

where the first inequality uses $\hat{R}_n(\hat{f}_n) \leq \hat{R}_n(f^*)$ and the second uses $R(\hat{f}_n) \leq \hat{R}_n(\hat{f}_n) + C(\mathcal{F}, n, \delta)$, which holds on $\Omega$.

Thus

$$E[R(\hat{f}_n) - \hat{R}_n(f^*)] \leq C(\mathcal{F}, n, \delta) + \delta.$$

So we have

$$E[R(\hat{f}_n)] - \min_{f \in \mathcal{F}} R(f) \leq \sqrt{\frac{\log |\mathcal{F}| + \log(1/\delta)}{2n}} + \delta, \quad \forall \delta > 0.$$
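
The right-hand side trades the penalty term against the additive $\delta$. A quick sweep (an illustrative addition; $|\mathcal{F}|$ and $n$ are hypothetical) shows that a choice like $\delta = 1/n$, used next, is reasonable:

```python
import numpy as np

def bound(F_size, n, delta):
    """Right-hand side sqrt((log|F| + log(1/delta)) / (2n)) + delta."""
    return np.sqrt((np.log(F_size) + np.log(1 / delta)) / (2 * n)) + delta

n, F_size = 1000, 2**10  # hypothetical sample and family sizes
for delta in (0.5, 0.1, 1 / np.sqrt(n), 1 / n):
    print(f"delta = {delta:.4f}: bound = {bound(F_size, n, delta):.4f}")
```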

In particular, for $\delta = 1/n$, we have

$$E[R(\hat{f}_n)] - \min_{f \in \mathcal{F}} R(f) \leq \sqrt{\frac{\log |\mathcal{F}| + \log n}{2n}} + \frac{1}{n} \leq \sqrt{\frac{\log |\mathcal{F}| + \log n + 2}{n}},$$

since $\sqrt{x} + \sqrt{y} \leq \sqrt{2(x + y)}$ for all $x, y > 0$.
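
The resulting bound is easy to tabulate (another illustrative addition; the family size is hypothetical): it decays roughly like $\sqrt{\log n / n}$ in $n$ and grows only logarithmically in $|\mathcal{F}|$:

```python
import numpy as np

def excess_risk_bound(F_size, n):
    """sqrt((log|F| + log n + 2) / n), the bound just derived."""
    return np.sqrt((np.log(F_size) + np.log(n) + 2) / n)

for n in (100, 1_000, 10_000, 100_000):
    print(n, round(excess_risk_bound(2**10, n), 4))
```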

Application: histogram classifier

Let $\mathcal{F}$ be the collection of all classifiers based on a partition of the feature space into $M$ equal-volume cells. Each cell can be labeled $0$ or $1$, so $|\mathcal{F}| = 2^M$, and the histogram classification rule

$$\hat{f}_n = \arg\min_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}_{\{f(X_i) \neq Y_i\}}$$

satisfies

$$E[R(\hat{f}_n)] - \min_{f \in \mathcal{F}} R(f) \leq \sqrt{\frac{M \log 2 + \log n + 2}{n}},$$

which suggests the choice $M = \log_2 n$ (balancing $M \log 2$ with $\log n$), resulting in

$$E[R(\hat{f}_n)] - \min_{f \in \mathcal{F}} R(f) = O\left(\sqrt{\frac{\log n}{n}}\right).$$
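
The rule itself is simple to realize: bin the data and take a majority vote in each cell, which minimizes the empirical risk over all $2^M$ labelings. Below is a minimal one-dimensional sketch (an addition to the notes; the feature space $[0, 1]$ and the noisy-step data are assumptions for illustration):

```python
import numpy as np

def histogram_classifier(X, Y, M):
    """Majority vote per cell: this minimizes the empirical risk over
    all 2**M labelings of M equal-width cells of [0, 1]."""
    cells = np.minimum((X * M).astype(int), M - 1)
    labels = np.zeros(M, dtype=int)
    for m in range(M):
        in_cell = cells == m
        if in_cell.any():
            labels[m] = int(Y[in_cell].mean() >= 0.5)
    return lambda x: int(labels[min(int(x * M), M - 1)])

rng = np.random.default_rng(3)
n = 1000
X = rng.uniform(0, 1, size=n)
Y = (rng.uniform(size=n) < 0.1 + 0.8 * (X > 0.5)).astype(int)  # noisy step
M = max(1, int(np.log2(n)))  # M = log2(n), the choice suggested above
f_hat = histogram_classifier(X, Y, M)
print([f_hat(x) for x in (0.2, 0.8)])  # typically [0, 1]
```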

Source: OpenStax, Statistical Learning Theory. OpenStax CNX. Apr 10, 2009. Download for free at http://cnx.org/content/col10532/1.3