Empirical Comparisons of Learning Methods & Case Studies
Decision trees may be intelligible, but can they cut the mustard? Have SVMs replaced neural nets, or are neural nets still best for regression and SVMs best for classification? Boosting maximizes a margin much like SVMs do, but can boosting compete with SVMs? And is it better to boost weak models, as theory suggests, or to boost stronger models? Bagging is much simpler than boosting, so how well does bagging stack up against boosting? Bagging is supposed to work best with low-bias, high-variance methods like decision trees, so if we bag lower-variance models like neural nets, are they as good as bagged trees? What happens if we put bagging on steroids, i.e., switch to random forests? And what about old friends like k-nearest neighbor: should they just be put out to pasture?

In this lecture I'll compare the performance of a variety of popular machine learning methods on nine performance criteria: Accuracy, F-score, Lift, Precision/Recall Break-Even Point, Area under the ROC, Average Precision, Squared Error, Cross-Entropy, and Probabilistic Calibration. I'll show that while no one learning method does it all, it is possible to "repair" some of them so that they do well on all metrics. I'll then describe NACHOS, a new ensemble method that does even better by building on top of these other learning methods. Finally, I'll discuss how the nine performance metrics relate to each other, and look at a few case studies to show why it is important to use the right metric for each problem.
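As a rough illustration of how several of these criteria differ, here is a minimal sketch (not from the lecture itself) that computes a few of them by hand for a toy binary-classification problem. The labels and probabilities are made up; note how accuracy, precision, recall, and F-score depend only on thresholded predictions, while squared error and cross-entropy score the predicted probabilities directly and so also reflect calibration.

```python
import math

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                   # toy true labels
y_prob = [0.9, 0.2, 0.7, 0.6, 0.6, 0.8, 0.3, 0.1]   # predicted P(y = 1)
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]     # threshold at 0.5

# Threshold-based metrics: built from the confusion-matrix counts.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_score = 2 * precision * recall / (precision + recall)

# Probability-based metrics: penalize poorly calibrated probabilities
# even when the thresholded predictions are the same.
squared_error = sum((t - p) ** 2 for t, p in zip(y_true, y_prob)) / len(y_true)
cross_entropy = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                     for t, p in zip(y_true, y_prob)) / len(y_true)

print(accuracy, precision, recall, f_score)
```

A model can score perfectly on the ranking and threshold metrics yet still do badly on squared error or cross-entropy if its probabilities are systematically too close to 0.5 or too extreme, which is exactly why the lecture treats calibration as a separate criterion.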
Attribution: The Open Education Consortium
http://www.ocwconsortium.org/courses/view/c7d77a54d2269e57709b34b629ce0a5b/
Course Home http://videolectures.net/mlss05us_caruana_eclmc/