# Improving Regressors using Boosting Techniques

@inproceedings{Drucker1997ImprovingRU, title={Improving Regressors using Boosting Techniques}, author={Harris Drucker}, booktitle={ICML}, year={1997} }
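Drucker's paper introduces AdaBoost.R2, which adapts AdaBoost to regression by reweighting training examples according to their normalized absolute errors and combining the weak regressors by a weighted median. A minimal sketch, assuming scikit-learn is available (its `AdaBoostRegressor` implements AdaBoost.R2; the sine data below is synthetic, for illustration only):

```python
# Sketch of AdaBoost.R2 (Drucker, 1997) via scikit-learn's AdaBoostRegressor,
# which implements this algorithm.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(200, 1), axis=0)     # 200 points in [0, 5)
y = np.sin(X).ravel() + 0.1 * rng.randn(200)  # noisy sine target

# Weak learner: a shallow regression tree. loss="linear" corresponds to the
# paper's linear loss on normalized absolute errors; "square" and
# "exponential" are the paper's other two loss functions.
model = AdaBoostRegressor(
    DecisionTreeRegressor(max_depth=3),
    n_estimators=50,
    loss="linear",
    random_state=0,
)
model.fit(X, y)
r2 = model.score(X, y)  # training R^2; should be close to 1 here
```

Swapping `loss` lets you reproduce the paper's comparison of the three loss functions on the same weak learner.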

#### 516 Citations

Using Boosting to prune Double-Bagging ensembles

- Computer Science, Mathematics
- Comput. Stat. Data Anal.
- 2009

In this paper, Boosting is used to determine the order in which base predictors are aggregated into a Double-Bagging ensemble, and a subensemble is constructed by early stopping the aggregation…

Boosting and instability for regression trees

- Computer Science, Mathematics
- Comput. Stat. Data Anal.
- 2006

An AdaBoost-like algorithm for boosting CART regression trees is considered; the ability of boosting to track outliers and to concentrate on hard observations is used to explore a non-standard regression context.

Boosting Using Neural Networks

- Computer Science
- 1999

A method is presented to construct a committee of weak learners that lowers the error rate in classification and the prediction error in regression, and the importance of using separate training, validation, and test sets to obtain good generalisation is stressed.

Boosting Methods for Regression

- Mathematics, Computer Science
- Machine Learning
- 2004

This paper examines ensemble methods for regression that leverage or “boost” base regressors by iteratively calling them on modified samples and bound the complexity of the regression functions produced in order to derive PAC-style bounds on their generalization errors.

Boosting for Regression Transfer

- Computer Science
- ICML
- 2010

This work introduces the first boosting-based algorithms for transfer learning that apply to regression tasks, and describes two existing classification transfer algorithms, ExpBoost and TrAdaBoost, and shows how they can be modified for regression.

The State of Boosting

- 1999

In many problem domains, combining the predictions of several models often results in a model with improved predictive performance. Boosting is one such method that has shown great promise. On the…

Combining Bagging, Boosting and Random Subspace Ensembles for Regression Problems

- 2012

Bagging, boosting and random subspace methods are well known re-sampling ensemble methods that generate and combine a diversity of learners using the same learning algorithm for the base regressor…

A Comparison of Model Aggregation Methods for Regression

- Computer Science
- ICANN
- 2003

Experiments reveal that different types of AdaBoost algorithms require different complexities of base models; they outperform Bagging at their best, but Bagging achieves a consistent level of success with all base models, providing a robust alternative.

Boosting methodology for regression problems

- Computer Science
- AISTATS
- 1999

This paper develops a new boosting method for regression problems that casts the regression problem as a classification problem and applies an interpretable form of the boosted naive Bayes classifier, which induces a regression model that is shown to be expressible as an additive model.

Random Subspacing for Regression Ensembles

- Computer Science
- FLAIRS Conference
- 2004

A novel approach to ensemble learning for regression models is presented, combining the ensemble generation technique of the random subspace method with the ensemble integration methods of Stacked Regression and Dynamic Selection; it is shown to be more effective than the popular ensemble methods of Bagging and Boosting.

#### References

Showing 1–10 of 26 references

Bagging predictors

- Computer Science
- Machine Learning
- 2004

Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
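Bagging, as described in the entry above, trains each base regressor on a bootstrap resample of the training set and averages their predictions, which mainly reduces variance. A hedged sketch using scikit-learn's `BaggingRegressor` with regression trees (the data is synthetic, chosen only for illustration):

```python
# Sketch of bagging (bootstrap aggregating): each tree is trained on a
# bootstrap resample of the data and the ensemble averages their predictions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor

rng = np.random.RandomState(1)
X = 6 * rng.rand(300, 1)                      # 300 points in [0, 6)
y = np.sin(X).ravel() + 0.2 * rng.randn(300)  # noisy sine target

bag = BaggingRegressor(
    DecisionTreeRegressor(max_depth=4),
    n_estimators=25,   # number of bootstrap replicates
    bootstrap=True,    # sample with replacement, as in Breiman's procedure
    random_state=1,
)
bag.fit(X, y)
r2 = bag.score(X, y)  # training R^2 of the averaged ensemble
```

Unlike boosting, the replicates here are trained independently and with uniform example weights; only the resampling differs between members.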

Boosting Decision Trees

- Computer Science
- NIPS
- 1995

A constructive, incremental learning system for regression problems is presented that models data by means of locally linear experts which do not compete for data during learning, and asymptotic results are derived for this method.

Boosting Performance in Neural Networks

- Computer Science
- Int. J. Pattern Recognit. Artif. Intell.
- 1993

The boosting algorithm is used to construct an ensemble of neural networks that improves performance significantly, and in some cases dramatically, compared to a single network in optical character recognition (OCR) problems.

Boosting and Other Ensemble Methods

- Computer Science
- Neural Computation
- 1994

A surprising result is shown for the original boosting algorithm: namely, that as the training set size increases, the training error decreases until it asymptotes to the test error rate.

Experiments with a New Boosting Algorithm

- Computer Science
- ICML
- 1996

This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting to Breiman's "bagging" method when used to aggregate various classifiers.

A decision-theoretic generalization of on-line learning and an application to boosting

- Computer Science
- EuroCOLT
- 1995

The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and the multiplicative weight-update Littlestone–Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.

Bias, Variance, and Arcing Classifiers

- Computer Science
- 1996

This work explores two arcing algorithms, compares them to each other and to bagging, and tries to understand how arcing works; arcing is found to be more successful than bagging at variance reduction.

Classification and Regression Trees

- Mathematics, Computer Science
- 1983

This chapter discusses tree classification in the context of medicine, where Right Sized Trees and Honest Estimates are considered and Bayes Rules and Partitions are used as guides to optimal pruning.

OC1: A Randomized Induction of Oblique Decision Trees

- Computer Science
- AAAI
- 1993

A new method that combines deterministic and randomized procedures to search for a good tree is explored, and the accuracy of the trees found with the method matches or exceeds the best results of other machine learning methods.

Stacked generalization

- Mathematics, Computer Science
- Neural Networks
- 1992

The conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate.