In the following we are going to consider the two-dimensional case, but all the results can be easily generalized to the $d$-dimensional case ($d \geq 2$), provided the dyadic tree construction is defined properly. Consider a recursive dyadic partition of the feature space into boxes of equal size. Associated with this partition is a tree $T$. Minimizing the empirical risk with respect to this partition produces the histogram classifier with equal-sized bins. Consider also all the possible partitions corresponding to pruned versions of the tree $T$. Minimizing the empirical risk with respect to those other partitions results in other classifiers (dyadic decision trees) that are fundamentally different from the histogram rule we analyzed earlier.
Let $\mathcal{F}$ be the collection of all possible dyadic decision trees corresponding to recursive dyadic partitions of the feature space. Each such tree can be prefix encoded with a bit-string whose length is proportional to the number of leaves in the tree, as follows: encode the structure of the tree in a top-down fashion: (i) assign a zero to each branch node and a one to each leaf node (terminal node); (ii) read the code in a breadth-first fashion, top-down, left-to-right. [link] exemplifies this coding strategy. Notice that, since we are considering binary trees, the total number of nodes is twice the number of leaves minus one; that is, if the number of leaves in the tree is $k$, then the number of nodes is $2k-1$. Therefore, to encode a tree with $k$ leaves we need $2k-1$ bits.
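For example, consider a tree whose root has a leaf as its left child and a branch node as its right child, the latter having two leaf children: this tree has $k = 3$ leaves and $2k-1 = 5$ nodes, and reading the zeros and ones breadth-first gives the 5-bit codeword $01011$.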
Since we want to use the partition associated with this tree for classification, we need to assign a decision label (either zero or one) to each leaf. Hence, to encode a decision tree in this fashion we need $3k-1$ bits, where $k$ is the number of leaves. For a tree with $k$ leaves, the first $2k-1$ bits of the codeword encode the tree structure, and the remaining $k$ bits encode the classification labels. This is easily shown to be a prefix code, and therefore we can use it under our classification scenario.
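To make the codeword construction concrete, here is a minimal Python sketch (an illustration, not taken from the original notes); the nested-tuple tree representation and the name encode_tree are assumptions:

from collections import deque

def encode_tree(tree):
    """Prefix-encode a dyadic decision tree: 2k-1 structure bits, then k label bits.

    A branch node is a pair (left_child, right_child); a leaf is its 0/1 label.
    """
    structure, labels = [], []
    queue = deque([tree])            # breadth-first: top-down, left-to-right
    while queue:
        node = queue.popleft()
        if isinstance(node, tuple):  # branch node -> bit 0, enqueue its children
            structure.append("0")
            queue.extend(node)
        else:                        # leaf node -> bit 1, record its decision label
            structure.append("1")
            labels.append(str(node))
    return "".join(structure + labels)

# The k = 3 leaf tree from the example above, with leaf labels 0, 1, 0:
code = encode_tree((0, (1, 0)))
print(code, len(code))               # 01011010 8   (3k - 1 = 8 bits)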
Let $\hat{f}_n$ denote the dyadic decision tree selected by minimizing the empirical risk plus a complexity penalty based on this codelength $c(f) = 3k(f)-1$, where $k(f)$ is the number of leaves of the tree $f$.
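A standard form of this penalized criterion, assuming the prefix-code (Kraft) penalty used in the complexity regularization theorem of the earlier lecture, would be
$$\hat{f}_n \;=\; \arg\min_{f \in \mathcal{F}} \left\{ \hat{R}_n(f) + \sqrt{\frac{(3k(f)-1)\log 2 + \tfrac{1}{2}\log n}{2n}} \right\},$$
where $\hat{R}_n(f)$ denotes the empirical risk of $f$ on the $n$ training samples.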
This optimization can be solved through a bottom-up pruning process (starting from a very large initial tree) whose computational cost scales with the number of leaves in the initial tree. The complexity regularization theorem then bounds the expected risk of $\hat{f}_n$ in terms of the best penalized risk achievable over the whole class of dyadic decision trees, as sketched below.
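Under the same assumed penalty, the resulting oracle-type bound takes the form
$$\mathbb{E}\left[ R(\hat{f}_n) \right] \;\le\; \min_{f \in \mathcal{F}} \left\{ R(f) + \sqrt{\frac{(3k(f)-1)\log 2 + \tfrac{1}{2}\log n}{2n}} \right\} + \frac{1}{\sqrt{n}},$$
that is, the selected tree performs nearly as well as the dyadic decision tree achieving the best trade-off between true risk and codelength.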
In the following we will illustrate the idea behind complexity regularization by applying the basic theorem to histogramclassifiers and classification trees (using our setup above).
Consider the classification setup described in "Classification", with the two-dimensional feature space introduced above.
Recall the setup and results of a previous lecture (the description here is slightly different from the one in that lecture). Let $\mathcal{F}_k$ denote the collection of histogram classification rules with $k^2$ equal-sized bins.
Then $|\mathcal{F}_k| = 2^{k^2}$. Let $\mathcal{F} = \bigcup_{k \geq 1} \mathcal{F}_k$. We can encode each element $f$ of $\mathcal{F}$ with $k + k^2$ bits, where the first $k$ bits indicate the smallest $k$ such that $f \in \mathcal{F}_k$ and the following $k^2$ bits encode the labels of each bin. This is a prefix encoding of all the elements in $\mathcal{F}$.
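With this codelength $c(f) = k + k^2$, and again assuming the penalty form sketched above for the tree case, the penalized selection over all histogram rules would read
$$\hat{f}_n \;=\; \arg\min_{f \in \mathcal{F}} \left\{ \hat{R}_n(f) + \sqrt{\frac{\left(k(f) + k(f)^2\right)\log 2 + \tfrac{1}{2}\log n}{2n}} \right\},$$
where $k(f)$ is the smallest $k$ such that $f \in \mathcal{F}_k$; the penalty is driven mainly by the number of bins $k^2$.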