We define our estimator as
$$\hat{f}_n = \hat{f}_n^{(\hat{m}_n)},$$
where
$$\hat{f}_n^{(m)} = \arg\min_{f \in \mathcal{F}_m} \hat{R}_n(f)$$
and
$$\hat{m}_n = \arg\min_{m \geq 1} \left\{ \hat{R}_n\big(\hat{f}_n^{(m)}\big) + \sqrt{\frac{m^2 \log 2 + \frac{1}{2}\log n}{2n}} \right\}.$$
Therefore $\hat{f}_n$ minimizes
$$\hat{R}_n(f) + \sqrt{\frac{c(f)\log 2 + \frac{1}{2}\log n}{2n}}, \qquad c(f) = m^2 \text{ for } f \in \mathcal{F}_m,$$
over all $f \in \mathcal{F} = \bigcup_{m \geq 1} \mathcal{F}_m$, where $\mathcal{F}_m$ denotes the class of histogram classifiers with $m^2$ equal-volume bins. We showed before that
$$E\big[R(\hat{f}_n)\big] \leq \min_{f \in \mathcal{F}} \left\{ R(f) + \sqrt{\frac{c(f)\log 2 + \frac{1}{2}\log n}{2n}} \right\} + \frac{1}{\sqrt{n}}.$$
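To make the selection rule concrete, here is a minimal sketch of this penalized empirical risk minimization over histogram rules on the unit square. The function name, the data layout (points in $[0,1]^2$ with 0/1 labels), and the search cap `max_m` are illustrative choices; the penalty follows the form above with $c(f) = m^2$.

```python
import numpy as np

def histogram_penalized_erm(X, y, max_m=32):
    """Select a histogram classifier on [0,1]^2 by penalized empirical risk.

    Assumes codelength c(f) = m^2 bits for an m x m histogram rule and the
    penalty sqrt((c(f) log 2 + 0.5 log n) / (2 n)) used in the text.
    """
    n = len(y)
    best = None
    for m in range(1, max_m + 1):
        # Assign each sample to one of the m^2 square bins.
        ix = np.minimum((X[:, 0] * m).astype(int), m - 1)
        iy = np.minimum((X[:, 1] * m).astype(int), m - 1)
        cell = ix * m + iy
        # Empirical risk minimizer within F_m: majority vote in every bin.
        ones = np.bincount(cell, weights=y, minlength=m * m)
        counts = np.bincount(cell, minlength=m * m)
        labels = (2 * ones > counts).astype(int)
        emp_risk = np.minimum(ones, counts - ones).sum() / n
        penalty = np.sqrt((m * m * np.log(2) + 0.5 * np.log(n)) / (2 * n))
        if best is None or emp_risk + penalty < best[0]:
            best = (emp_risk + penalty, m, labels)
    return best  # (penalized empirical risk, selected m, bin labels)
```

The selected `labels` array, indexed by bin, is the histogram classifier: its prediction at a new point is the label of the bin containing that point.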
To proceed with our analysis we need to make some assumptions on the intrinsic difficulty of the problem. We will assume that the Bayes decision boundary is a “well-behaved” 1-dimensional set, in the sense that it has box-counting dimension one (see Appendix "Box Counting Dimension"). This implies that, for a histogram with $m^2$ bins, the Bayes decision boundary intersects less than $Cm$ bins, where $C$ is a constant that does not depend on $m$. Furthermore we assume that the marginal distribution of $X$ satisfies $P_X(A) \leq C'\,\mathrm{vol}(A)$, for any measurable subset $A \subseteq [0,1]^2$, where $C'$ is another constant. This means that the samples collected do not accumulate anywhere in the unit square.
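The box-counting assumption is easy to probe numerically for a concrete boundary. The sketch below (an illustration, not part of the original argument; the example curve is an arbitrary choice) counts the cells of an $m \times m$ grid touched by a boundary curve; for a curve of box-counting dimension one the count grows roughly linearly in $m$.

```python
import numpy as np

def boundary_cells(curve, m, n_samples=100_000):
    """Count the cells of an m x m grid of [0,1]^2 that a curve passes through.

    `curve` maps t in [0,1] to a point (x, y) in the unit square; the curve is
    sampled densely, so the count is a close approximation for smooth curves.
    """
    t = np.linspace(0.0, 1.0, n_samples)
    x, y = curve(t)
    ix = np.minimum((x * m).astype(int), m - 1)
    iy = np.minimum((y * m).astype(int), m - 1)
    return len(set(zip(ix.tolist(), iy.tolist())))

# Example boundary: y = 0.5 + 0.25 sin(2 pi x).
curve = lambda t: (t, 0.5 + 0.25 * np.sin(2 * np.pi * t))
for m in (8, 16, 32, 64):
    print(m, boundary_cells(curve, m))   # counts grow roughly like C * m
```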
Under the above assumptions we can conclude that
$$\min_{f \in \mathcal{F}_m} R(f) - R^* \leq Cm \cdot \frac{C'}{m^2} = \frac{CC'}{m},$$
since the best histogram rule in $\mathcal{F}_m$ can disagree with the Bayes classifier only on the bins intersected by the Bayes decision boundary, and each bin has probability mass at most $C'/m^2$. Therefore
$$E\big[R(\hat{f}_n)\big] - R^* \leq \min_{m \geq 1}\left\{ \frac{CC'}{m} + \sqrt{\frac{m^2 \log 2 + \frac{1}{2}\log n}{2n}} \right\} + \frac{1}{\sqrt{n}}.$$
We can balance the terms in the right side of the above expression using $m \sim n^{1/4}$ (for $n$ large), therefore
$$E\big[R(\hat{f}_n)\big] - R^* = O\big(n^{-1/4}\big).$$
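Spelling out the balancing step, ignoring constants and the lower-order $\frac{1}{2}\log n$ term, the two $m$-dependent terms are of the same order when
$$\frac{1}{m} \asymp \sqrt{\frac{m^2}{2n}} = \frac{m}{\sqrt{2n}} \quad\Longleftrightarrow\quad m^2 \asymp \sqrt{2n} \quad\Longleftrightarrow\quad m \asymp n^{1/4},$$
and with this choice each term, and hence the whole bound, is $O\big(n^{-1/4}\big)$.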
Now let's consider the dyadic decision trees, under the assumptions above, and contrast these with the histogram classifier. Let
$$\mathcal{T} = \left\{ \text{classifiers corresponding to recursive dyadic partitions of } [0,1]^2 \right\}.$$
Let $k(f)$ denote the number of leaves of the smallest tree representing $f \in \mathcal{T}$. We can prefix encode each element of $\mathcal{T}$ with $c(f) \leq 3k(f)$ bits (one bit per node to describe the tree structure plus one bit per leaf for its label), as described before.
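For concreteness, here is one way such a prefix code can be realized. This sketch assumes a quad-tree representation (each internal node splits a square into four equal subsquares), which may differ in constants from the encoding described earlier, but it is prefix-free and uses fewer than $3k$ bits for a tree with $k$ leaves.

```python
def encode_tree(node):
    """Prefix-encode a dyadic (quad) tree as a bit string.

    A node is either a leaf, represented by its class label 0 or 1, or an
    internal node, represented as a list of its four children.  One bit per
    node marks leaf/internal and one extra bit per leaf stores the label,
    so a tree with k leaves costs fewer than 3k bits.
    """
    if isinstance(node, list):               # internal node: '1' then children
        return "1" + "".join(encode_tree(c) for c in node)
    return "0" + str(node)                   # leaf: '0' then its label bit

# Example: split the unit square once, then split its first quadrant again.
tree = [[0, 0, 1, 1], 0, 1, 1]               # 7 leaves, 2 internal nodes
bits = encode_tree(tree)
print(bits, len(bits))                       # 16 bits < 3 * 7
```

Because the arity of every internal node is known, a decoder reading the bit string left to right always knows when a tree description is complete, which is what makes the code prefix-free.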
Let
$$\hat{f}_n = \hat{f}_n^{(\hat{k}_n)},$$
where
$$\hat{f}_n^{(k)} = \arg\min_{f \in \mathcal{T}_k} \hat{R}_n(f), \qquad \mathcal{T}_k = \{ f \in \mathcal{T} : k(f) = k \},$$
and
$$\hat{k}_n = \arg\min_{k \geq 1} \left\{ \hat{R}_n\big(\hat{f}_n^{(k)}\big) + \sqrt{\frac{3k \log 2 + \frac{1}{2}\log n}{2n}} \right\}.$$
Hence $\hat{f}_n$ minimizes
$$\hat{R}_n(f) + \sqrt{\frac{3k(f) \log 2 + \frac{1}{2}\log n}{2n}}$$
over all $f \in \mathcal{T}$. Moreover
$$E\big[R(\hat{f}_n)\big] - R^* \leq \min_{k \geq 1}\left\{ \min_{f \in \mathcal{T}_k} R(f) - R^* + \sqrt{\frac{3k \log 2 + \frac{1}{2}\log n}{2n}} \right\} + \frac{1}{\sqrt{n}}.$$
If the Bayes decision boundary is a 1-dimensional set, as in "Histogram Risk Bound", there exists a tree with at most $4Cm$ leaves such that the boundary is contained in at most $Cm$ squares, each of volume $1/m^2$. To see this, start with a tree yielding the histogram partition with $m^2$ boxes (i.e., the tree partitioning the unit square into $m^2$ equal sized squares, with $m$ a power of 2). Now prune all the nodes that do not intersect the boundary. In [link] we illustrate the procedure. If you carefully bound the number of leaves you need at each level (at resolution $2^{-j}$ the boundary intersects at most $C2^j$ squares, and only those squares need to be split) you can show that you will have in total less than $4Cm$ leaves. We conclude then that there exists a tree with at most $4Cm$ leaves that has the same risk as the best histogram classifier with $m^2$ bins. Therefore, using [link] we have
$$E\big[R(\hat{f}_n)\big] - R^* \leq \frac{CC'}{m} + \sqrt{\frac{12\,Cm \log 2 + \frac{1}{2}\log n}{2n}} + \frac{1}{\sqrt{n}}.$$
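The pruning construction can also be simulated. The sketch below (illustrative only; the boundary curve and sampling resolution are arbitrary choices) grows a quad-tree to depth $\log_2 m$, refining only the cells that the boundary intersects, and the reported leaf counts grow like $m$ rather than $m^2$.

```python
import numpy as np

def count_pruned_leaves(curve, depth, n_samples=200_000):
    """Grow a quad-tree over [0,1]^2, splitting only cells hit by the boundary.

    Returns the number of leaves of the pruned tree whose finest cells have
    side 2**-depth (i.e. the histogram resolution is m = 2**depth).
    """
    t = np.linspace(0.0, 1.0, n_samples)
    x, y = curve(t)
    leaves = 0
    cells = {(0, 0)}                          # cells at level j hit by the boundary
    for j in range(depth):
        m = 2 ** (j + 1)
        hit = set(zip(np.minimum((x * m).astype(int), m - 1).tolist(),
                      np.minimum((y * m).astype(int), m - 1).tolist()))
        next_cells = set()
        for (ix, iy) in cells:                # split every hit cell into 4 children
            for cx in (2 * ix, 2 * ix + 1):
                for cy in (2 * iy, 2 * iy + 1):
                    if (cx, cy) in hit:
                        next_cells.add((cx, cy))
                    else:
                        leaves += 1           # child missed by the boundary: a leaf
        cells = next_cells
    return leaves + len(cells)                # deepest hit cells are leaves too

curve = lambda t: (t, 0.5 + 0.25 * np.sin(2 * np.pi * t))
for depth in (3, 4, 5, 6):                    # m = 8, 16, 32, 64
    print(2 ** depth, count_pruned_leaves(curve, depth))
```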
We can balance the terms in the right side of this bound using $m \sim n^{1/3}$ (for $n$ large), therefore
$$E\big[R(\hat{f}_n)\big] - R^* = O\big(n^{-1/3}\big).$$
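Again ignoring constants and the lower-order $\log n$ term, the balance point is
$$\frac{1}{m} \asymp \sqrt{\frac{m}{n}} \quad\Longleftrightarrow\quad m^3 \asymp n \quad\Longleftrightarrow\quad m \asymp n^{1/3},$$
at which both terms are $O\big(n^{-1/3}\big)$.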
Trees generally work much better than histogram classifiers. This is essentially because they provide much more efficient ways of approximating the Bayes decision boundary (as we saw in our example, under reasonable assumptions on the Bayes boundary, a tree encoded with $O(m)$ bits can describe the same classifier as a histogram that requires $m^2$ bits).