ICCV 2009 Tutorial

Monday, 28 September 2009, afternoon
Tae-Kyun Kim
University of Cambridge, Cambridge, UK
Jamie Shotton
Microsoft Research Cambridge, UK
Björn Stenger
Toshiba Research Europe Ltd, Cambridge, UK
Many visual recognition tasks, such as object tracking, detection and segmentation, demand classification methods that are both fast and accurate. Classification speed is not merely a matter of time efficiency: it is often crucial for accuracy itself, as tracking problems make evident. Standard kernel machines, e.g. the Support Vector Machine and the Gaussian Process classifier, are slow, so methods for rapid classification have been actively pursued. Boosting has been successful owing to its fast computation and accuracy comparable to kernel methods, and has been a standard method in related fields over the past decades. It is also well suited to online learning for adaptation and tracking.

Boosting is a representative ensemble learning method that aggregates simple weak learners; when each weak learner is a decision stump, the resulting classifier can be seen as a flat tree structure. The flat structure yields reasonably smooth decision regions and thus good generalisation, but it is not optimal in classification time. A hierarchical structure with many short paths, i.e. a decision tree, is advantageous in speed but notoriously poor at generalisation. Random Forest, an ensemble of random trees designed for good generalisation, has been emerging in related tasks including object segmentation and keypoint recognition. In this tutorial, we review Boosting and Random Forest and present comparative studies with insightful discussions. The tutorial comprises three parts, detailed below: Boosting and tree-structured classifiers, Random Forest, and Online learning.
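To make the "flat ensemble of decision stumps" view concrete, here is a minimal sketch of discrete AdaBoost with stumps as weak learners. This is an illustrative implementation written for this page, not the tutorial's own code; the function names and the exhaustive stump search are our assumptions.

```python
import numpy as np

def train_stump(X, y, w):
    # Exhaustive search over (feature, threshold, polarity) for the
    # stump with minimum weighted classification error.
    best = (np.inf, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(s * (X[:, j] - t) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, rounds=10):
    # Discrete AdaBoost; labels y must be in {-1, +1}.
    w = np.full(len(y), 1.0 / len(y))       # uniform example weights
    ensemble = []
    for _ in range(rounds):
        err, j, t, s = train_stump(X, y, w)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # weak-learner weight
        pred = np.where(s * (X[:, j] - t) >= 0, 1, -1)
        w = w * np.exp(-alpha * y * pred)      # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    # Weighted vote over all stumps -- a "flat" pass through every learner.
    score = sum(a * np.where(s * (X[:, j] - t) >= 0, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)
```

Note that evaluation always visits every stump: this flat, fixed-cost pass is exactly the classification-time limitation that motivates the tree-structured classifiers of Part I.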


PART I – Boosting and Tree Structured Classifiers [slides]

1. Motivations

  • Object detection/tracking/segmentation problems

2. Introduction to Boosting [Meir et al 03, Schapire 03]

  • Brief history and formalisation

  • Bagging/random forest [Breiman 01, Geurts et al 06]

3. Standard methods

  • AdaBoost [Freund and Schapire 04]

  • Mixture of experts [Jordan and Jacobs 94] and multiple classifier systems [Kittler et al 98,03,07]

  • Robust real-time object detector [Viola and Jones 01]

  • Boosting as a tree-structured classifier

4. AnyBoost as a unified framework [Mason et al 00]

  • Multiple instance/component boosting [Viola et al 06, Dollar et al 08]

5. Tree-structured classifiers

  • JointBoost [Torralba et al 07]

  • ClusterBoostTree [Wu et al 07]

  • Multiple classifier boosting [Kim and Cipolla 08]

  • Speeding up [Li et al 04, Sochman and Matas 05]

  • Super tree [Kim et al 09]

6. Comparisons in the literature [Yin, Criminisi et al 07]

PART II – Random Forest [slides]

1. Related studies in computer vision

2. Tutorial on randomised decision forests [Breiman 01, Geurts et al 06]

  • Including toy classification demo

  • Randomised forest for clustering [Moosmann et al 06]

  • Random ferns [Ozuysal et al 07, Bosch et al 07]

3. Applications to vision

  • Keypoint recognition [Lepetit et al 06]

  • Object segmentation [Shotton et al 08] including live demo
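As a companion to the forest tutorial above, the following is a toy randomised-forest sketch in the spirit of extremely randomised trees [Geurts et al 06]: each node draws a random feature and a random threshold, leaves store class posteriors, and the forest averages the posteriors of all trees. The code is an illustrative sketch written for this page, not the tutorial's demo.

```python
import numpy as np

def grow_tree(X, y, depth, rng, n_classes):
    # Leaf: store the class distribution of the samples that reached it.
    if depth == 0 or len(np.unique(y)) == 1:
        counts = np.bincount(y, minlength=n_classes).astype(float)
        return counts / counts.sum()
    j = rng.randint(X.shape[1])              # random feature
    lo, hi = X[:, j].min(), X[:, j].max()
    t = rng.uniform(lo, hi)                  # random threshold
    mask = X[:, j] < t
    if mask.all() or not mask.any():         # degenerate split -> leaf
        counts = np.bincount(y, minlength=n_classes).astype(float)
        return counts / counts.sum()
    return (j, t,
            grow_tree(X[mask], y[mask], depth - 1, rng, n_classes),
            grow_tree(X[~mask], y[~mask], depth - 1, rng, n_classes))

def tree_posterior(node, x):
    # Follow one short root-to-leaf path -- the source of the speed advantage.
    while isinstance(node, tuple):
        j, t, left, right = node
        node = left if x[j] < t else right
    return node

def forest_predict(trees, x):
    # Average the leaf posteriors over all trees, then take the argmax.
    p = np.mean([tree_posterior(tr, x) for tr in trees], axis=0)
    return int(np.argmax(p))
```

A single random tree classifies quickly but generalises poorly; averaging many such trees smooths the decision regions, which is the trade-off the tutorial contrasts with boosting.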

PART III – Online learning for adaptation and tracking [slides]

1. Tracking by classification

  • Online discriminative feature selection [Collins et al 03]

  • Ensemble tracking [Avidan 07]

2. Online Boosting for object tracking

  • Online Boosting [Oza and Russell 01]

  • Online Boosting for feature selection [Grabner and Bischof 06]

  • Application to tracking and challenges

3. Improvements

  • Combining object detector and tracker [Stenger et al 09]

  • Semi-supervised learning [Leistner et al 08]

  • Online multiple classifier/instance boosting [Kim, Woodley, Stenger 09; Babenko et al 09]

4. Online Random Forest

  • Online adaptive decision trees [Basak 04]

  • Online Random Forests [Osman 08]
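The online boosting scheme of [Oza and Russell 01] replaces AdaBoost's global reweighting with per-example importance weights propagated through the weak learners via Poisson sampling. Below is a hedged sketch: the class names and the nearest-class-mean weak learner are our assumptions for illustration, not the algorithm's prescribed weak learner.

```python
import numpy as np

class OnlineStump:
    # Weak learner on one feature: running class means, predict nearer mean.
    def __init__(self, feature):
        self.j = feature
        self.s = {1: 0.0, -1: 0.0}   # per-class feature sums
        self.n = {1: 0, -1: 0}       # per-class counts
    def update(self, x, y):
        self.s[y] += x[self.j]
        self.n[y] += 1
    def predict(self, x):
        if not (self.n[1] and self.n[-1]):
            return 1                 # default before both classes are seen
        mp, mn = self.s[1] / self.n[1], self.s[-1] / self.n[-1]
        return 1 if abs(x[self.j] - mp) < abs(x[self.j] - mn) else -1

class OnlineBoost:
    # Oza-Russell online boosting over a fixed set of weak learners.
    def __init__(self, n_features, rng):
        self.learners = [OnlineStump(j) for j in range(n_features)]
        self.sc = np.zeros(n_features)   # cumulative correctly-classified weight
        self.sw = np.zeros(n_features)   # cumulative misclassified weight
        self.rng = rng
    def update(self, x, y):
        lam = 1.0                        # importance weight of this example
        for m, h in enumerate(self.learners):
            for _ in range(self.rng.poisson(lam)):  # k ~ Poisson(lambda) updates
                h.update(x, y)
            if h.predict(x) == y:
                self.sc[m] += lam
                eps = self.sw[m] / max(self.sc[m] + self.sw[m], 1e-10)
                lam *= 1.0 / max(2 * (1 - eps), 1e-10)  # down-weight easy examples
            else:
                self.sw[m] += lam
                eps = self.sw[m] / max(self.sc[m] + self.sw[m], 1e-10)
                lam *= 1.0 / max(2 * eps, 1e-10)        # up-weight hard examples
            lam = min(lam, 1e3)          # safety cap on the importance weight
    def predict(self, x):
        score = 0.0
        for m, h in enumerate(self.learners):
            eps = self.sw[m] / max(self.sc[m] + self.sw[m], 1e-10)
            eps = float(np.clip(eps, 1e-10, 1 - 1e-10))
            score += np.log((1 - eps) / eps) * h.predict(x)
        return 1 if score >= 0 else -1
```

Each incoming example is seen once, yet the Poisson draws approximate the weight distribution batch AdaBoost would assign, which is what makes this scheme usable for tracking-time adaptation.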

Relevant Publications of the Speakers
T-K. Kim and R. Cipolla, MCBoost: Multiple Classifier Boosting for Perceptual Co-clustering of Images and Visual Features, NIPS, 2008.

T-K. Kim*, I. Budvytis*, R. Cipolla, Making a Shallow Network Deep: Growing a Tree from Decision Regions of a Boosting Classifier, CUED/F-INFENG/TR633, Department of Engineering, University of Cambridge, July 2009 (*indicates equal contributions).

J. Shotton, M. Johnson, R. Cipolla, Semantic Texton Forests for Image Categorization and Segmentation. In Proc. IEEE CVPR 2008.

G. Brostow, J. Shotton, J. Fauqueur, R. Cipolla. Segmentation and Recognition using Structure from Motion Point Clouds. In Proc. ECCV 2008.

B. Stenger, T. Woodley, R. Cipolla. Learning to Track With Multiple Observers. Proc. CVPR, Miami, June 2009.

B. Stenger, T. Woodley, T-K. Kim, C. Hernandez, R. Cipolla. AIDIA - Adaptive Interface for Display InterAction. Proc. BMVC, Leeds, September 2008.

T-K. Kim, T. Woodley, B. Stenger, R. Cipolla, Online Multiple Classifier Boosting for Object Tracking, CUED/F-INFENG/TR631, Department of Engineering, University of Cambridge, June 2009.

T. Woodley, B. Stenger, R. Cipolla. Tracking Using Online Feature Selection and a Local Generative Model. Proc. BMVC, Warwick, September 2007.

B. Stenger, A. Thayananthan, P.H.S. Torr, R. Cipolla, Filtering Using a Tree-based Estimator, Proc. 9th ICCV, pages 1063-1070, Nice, France, October 2003.

Advisory Board
Prof. Roberto Cipolla
University of Cambridge, Cambridge, UK

Prof. Josef Kittler
University of Surrey, Guildford, UK

Copyright © 2009, Tae-Kyun Kim & Bjorn Stenger, All Rights Reserved.