Suppose $\mathcal{F} = \{\text{linear classifiers in } \mathbb{R}^d\}$; then we have $V_{\mathcal{F}} = d + 1$, and the VC bound gives
$$\mathbb{E}\big[R(\widehat{f}_n)\big] - \min_{f \in \mathcal{F}} R(f) \;\le\; C\,\sqrt{\frac{(d+1)\log n}{n}}$$
for a constant $C > 0$.
Normally, we have a feature vector $X \in \mathbb{R}^d$. A hyperplane in $\mathbb{R}^d$ provides a linear classifier in $\mathbb{R}^d$. Nonlinear classifiers can be obtained by a straightforward generalization.
Let $\phi_1, \dots, \phi_{d'}$, $d' > d$, be a collection of functions mapping $\mathbb{R}^d \to \mathbb{R}$. These functions, applied to a feature vector $X \in \mathbb{R}^d$, produce a generalized set of features $\phi(X) = \big(\phi_1(X), \phi_2(X), \dots, \phi_{d'}(X)\big)$. For example, if $X = (x_1, x_2)^T \in \mathbb{R}^2$, then we could take $d' = 5$ and $\phi(X) = (x_1,\, x_2,\, x_1 x_2,\, x_1^2,\, x_2^2)$. We can then construct a linear classifier in the higher dimensional generalized feature space $\mathbb{R}^{d'}$.
The VC bounds immediately extend to this case, and we have, for $\mathcal{F}' = \{\text{generalized linear classifiers based on the maps } \phi_1, \dots, \phi_{d'}\}$, $V_{\mathcal{F}'} = d' + 1$.
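To make the construction concrete, here is a minimal Python sketch (our own illustration, using only NumPy) of a generalized linear classifier built on the quadratic feature map from the example above. The function names (`phi`, `train_perceptron`), the perceptron training rule, and the toy data are assumptions of this sketch, not part of the lecture.

```python
import numpy as np

def phi(X):
    """Map raw features (x1, x2) to generalized features (x1, x2, x1*x2, x1^2, x2^2)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

def train_perceptron(Z, y, epochs=100):
    """Simple perceptron in the generalized feature space; labels y are +/-1."""
    w = np.zeros(Z.shape[1])
    b = 0.0
    for _ in range(epochs):
        for z_i, y_i in zip(Z, y):
            if y_i * (z_i @ w + b) <= 0:   # misclassified: update
                w += y_i * z_i
                b += y_i
    return w, b

# Toy data: points inside the unit circle are labeled +1, outside -1.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = np.where((X ** 2).sum(axis=1) < 1.0, 1, -1)

w, b = train_perceptron(phi(X), y)
y_hat = np.sign(phi(X) @ w + b)
print("training error:", np.mean(y_hat != y))
```

Although the decision boundary is a circle in the original feature space, it is a hyperplane in the generalized feature space, which is why a plain linear method such as the perceptron suffices here.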
Lemma. Let $\mathcal{G}$ be a finite-dimensional vector space of real-valued functions on $\mathbb{R}^d$. The class of sets $\mathcal{A} = \big\{\{x : g(x) \ge 0\} : g \in \mathcal{G}\big\}$ has VC dimension at most $\dim(\mathcal{G})$.
Proof. It is sufficient to show that no set of $m = \dim(\mathcal{G}) + 1$ points can be shattered by $\mathcal{A}$. Take any $m$ points $x_1, \dots, x_m$ and, for each $g \in \mathcal{G}$, define the vector $v_g = \big(g(x_1), \dots, g(x_m)\big) \in \mathbb{R}^m$.
The set $\{v_g : g \in \mathcal{G}\}$ is a linear subspace of $\mathbb{R}^m$ of dimension at most $\dim(\mathcal{G}) = m - 1$. Therefore, there exists a non-zero vector $\gamma = (\gamma_1, \dots, \gamma_m) \in \mathbb{R}^m$ orthogonal to this subspace, that is, $\sum_{i=1}^m \gamma_i\, g(x_i) = 0$ for all $g \in \mathcal{G}$. We can assume that at least one of the $\gamma_i$ is negative (if all are non-negative, simply negate $\gamma$). We can then re-arrange this expression as
$$\sum_{i:\, \gamma_i \ge 0} \gamma_i\, g(x_i) \;=\; \sum_{i:\, \gamma_i < 0} (-\gamma_i)\, g(x_i), \qquad \text{for all } g \in \mathcal{G}.$$
Now suppose that there exists a $g \in \mathcal{G}$ such that the set $\{x : g(x) \ge 0\}$ selects precisely the points $x_i$ appearing on the left-hand side above, i.e., those with $\gamma_i \ge 0$. Then every term on the left is non-negative, while every term on the right is strictly negative (for those $i$ we have $-\gamma_i > 0$ and $g(x_i) < 0$), and by construction the right-hand side has at least one term. This contradicts the equality above, so no $g \in \mathcal{G}$ selects exactly this subset. Therefore $x_1, \dots, x_m$ cannot be shattered by sets in $\mathcal{A}$. $\Box$
Example: half-spaces. Consider half-spaces in $\mathbb{R}^d$ of the form $\{x \in \mathbb{R}^d : w^T x + w_0 \ge 0\}$. Each half-space can be described by a function $g(x) = w^T x + w_0$. Let
$$\mathcal{G} = \big\{g : g(x) = w^T x + w_0,\ w \in \mathbb{R}^d,\ w_0 \in \mathbb{R}\big\}.$$
This is a vector space of dimension $d + 1$, so by the lemma the class of half-spaces has VC dimension at most $d + 1$.
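To see the shattering capability concretely, here is a small numerical sketch (our own illustration, not from the lecture) that brute-force checks whether a set of points in $\mathbb{R}^2$ is shattered by half-planes. For each of the $2^m$ labelings it tests whether a separating $(w, w_0)$ exists by solving a linear-programming feasibility problem; the helper names and the use of `scipy.optimize.linprog` are choices made for this sketch.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def labeling_is_realizable(points, labels):
    """Check whether some half-plane {x : w.x + w0 >= 0} realizes the labeling.

    For a finite point set, a +/-1 labeling is realizable by a linear classifier
    iff the two label groups can be strictly separated, i.e., iff there exist
    (w, w0) with y_i * (w.x_i + w0) >= 1 for all i (rescale w to see this).
    We check feasibility of that linear program with a zero objective.
    """
    m, d = points.shape
    # Variables: (w_1, ..., w_d, w0). Constraints: -y_i * (w.x_i + w0) <= -1.
    A_ub = np.column_stack([-labels[:, None] * points, -labels])
    b_ub = -np.ones(m)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success

def is_shattered(points):
    """True if every +/-1 labeling of the points is realizable by a half-plane."""
    m = points.shape[0]
    return all(labeling_is_realizable(points, np.array(labs))
               for labs in itertools.product([1, -1], repeat=m))

three = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])                 # not collinear
four = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(is_shattered(three))  # True: three non-collinear points are shattered
print(is_shattered(four))   # False: consistent with VC dimension d + 1 = 3
```

Three non-collinear points are shattered, while no four points in the plane are, consistent with the VC dimension $d + 1 = 3$ of half-planes in $\mathbb{R}^2$.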
Classification trees. Let $\mathcal{T}_k$ denote the collection of classification trees with $k$ splits (i.e., $k + 1$ leaves), $k = 1, 2, \dots$. Each cell of a tree in $\mathcal{T}_k$ results from splitting a rectangular region into two smaller rectangles parallel to one of the coordinate axes. Each additional split is analogous to a half-space set. Therefore, each additional split can potentially shatter at most $d + 1$ points. This implies that
$$V_{\mathcal{T}_k} \;\le\; (d+1)\, k.$$
One split shatters two points. Two splits shatter three points.
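As a worked instance of the bound above (the numbers are our own illustration, not from the lecture): with $d = 2$ features and $k = 10$ splits (11 leaves),
$$V_{\mathcal{T}_{10}} \;\le\; (d+1)\,k = 3 \cdot 10 = 30,$$
so the corresponding VC penalty scales like $\sqrt{30 \log n / n}$.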
How can we decide what dimension $d'$ to choose for a generalized linear classifier?
How many leaves should be used for a classification tree?
Complexity Regularization using VC bounds!
Structural Risk Minimization (SRM) is simply complexity regularization using VC-type bounds in place of Chernoff's bound or other concentration inequalities.
The basic idea is to consider a sequence of sets of classifiers $\mathcal{F}_1, \mathcal{F}_2, \dots$ of increasing VC dimensions $V_1 \le V_2 \le \cdots$. Then for each $k = 1, 2, \dots$ we find the minimum empirical risk classifier
$$\widehat{f}_n^{(k)} = \arg\min_{f \in \mathcal{F}_k} \widehat{R}_n(f)$$
and then select the final classifier according to
$$\widehat{k} = \arg\min_{k \ge 1} \left\{ \widehat{R}_n\big(\widehat{f}_n^{(k)}\big) + C\sqrt{\frac{V_k \log n}{n}} \right\},$$
and $\widetilde{f}_n = \widehat{f}_n^{(\widehat{k})}$ is the final choice.
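Below is a minimal Python sketch of the selection rule just described, assuming we have already computed, for each class $\mathcal{F}_k$, the empirical risk of its empirical risk minimizer and its VC dimension $V_k$. The penalty constant `C` and the penalty form $C\sqrt{V_k \log n / n}$ mirror the (reconstructed) display above and should be treated as placeholders rather than the lecture's exact constants.

```python
import math

def srm_select(emp_risks, vc_dims, n, C=1.0):
    """Structural risk minimization: pick the class index minimizing
    empirical risk + VC penalty.

    emp_risks[k] : empirical risk of the ERM classifier from class F_{k+1}
    vc_dims[k]   : VC dimension V_{k+1} of that class
    n            : number of training samples
    """
    def penalized(k):
        penalty = C * math.sqrt(vc_dims[k] * math.log(n) / n)
        return emp_risks[k] + penalty
    return min(range(len(emp_risks)), key=penalized)

# Hypothetical example: richer classes fit better but pay a larger penalty.
emp_risks = [0.30, 0.18, 0.12, 0.11, 0.10]   # empirical risks of ERM in F_1, ..., F_5
vc_dims   = [3, 9, 30, 90, 300]              # VC dimensions V_1, ..., V_5
k_hat = srm_select(emp_risks, vc_dims, n=1000)
print("selected class index (0-based):", k_hat)
```

Richer classes typically achieve smaller empirical risk but pay a larger penalty; SRM balances the two terms when choosing $\widehat{k}$.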
The basic rationale is that we know
$$\mathbb{E}\Big[\sup_{f \in \mathcal{F}_k} \big|\widehat{R}_n(f) - R(f)\big|\Big] \;\le\; C\sqrt{\frac{V_k \log n}{n}},$$
where $C > 0$ is a constant. The end result is that
$$\mathbb{E}\big[R(\widetilde{f}_n)\big] - R^* \;\le\; \min_{k \ge 1} \left\{ C\sqrt{\frac{V_k \log n}{n}} + \Big(\min_{f \in \mathcal{F}_k} R(f) - R^*\Big) \right\},$$
analogous to our previous complexity regularization results, except that codelengths are replaced by VC dimensions.
In order to prove the result we use the VC probability concentration bound and assume a summability condition on the confidence levels assigned to the classes. This enables a union bounding argument over $k$ and leads to a risk bound of the form given above.
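As a sketch of the union bounding step (a reconstruction using the standard form of the VC inequality, whose constants may differ from the lecture's), choose per-class tolerances $\epsilon_k$ and confidence levels $\delta_k$ with $\sum_k \delta_k \le \delta$. Then
$$\Pr\Big(\exists\, k,\ \exists\, f \in \mathcal{F}_k :\ R(f) - \widehat{R}_n(f) > \epsilon_k\Big) \;\le\; \sum_{k \ge 1} \Pr\Big(\sup_{f \in \mathcal{F}_k}\big(R(f) - \widehat{R}_n(f)\big) > \epsilon_k\Big) \;\le\; \sum_{k \ge 1} 8\, S(\mathcal{F}_k, n)\, e^{-n\epsilon_k^2/32} \;\le\; \delta,$$
where $S(\mathcal{F}_k, n) \le (n+1)^{V_k}$ is the shatter coefficient (Sauer's lemma). Setting $8\,(n+1)^{V_k} e^{-n\epsilon_k^2/32} = \delta_k$ and solving for $\epsilon_k$ gives a penalty of order $\sqrt{V_k \log n / n}$, which is how a bound of the form above arises.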
The complexity of a class of classifiers depends on its richness (shattering capability) relative to a set of $n$ arbitrary points. This allows us to effectively "quantize" collections of functions in a slightly data-dependent manner.
Let $\widetilde{f}_n$ denote the classifier selected by SRM. Then $\widetilde{f}_n$ satisfies
$$\mathbb{E}\big[R(\widetilde{f}_n)\big] - R^* \;\le\; \min_{k \ge 1} \left\{ C\sqrt{\frac{V_k \log n}{n}} + \Big(\min_{f \in \mathcal{F}_k} R(f) - R^*\Big) \right\};$$
compare this with the codelength-based bound
$$\mathbb{E}\big[R(\widetilde{f}_n)\big] - R^* \;\le\; \min_{f \in \mathcal{F}} \left\{ C'\sqrt{\frac{c(f)\log 2}{n}} + \big(R(f) - R^*\big) \right\}$$
from Lecture 11, where $c(f)$ denotes the codelength assigned to $f$.