
BOOTSTRAP

For the original sample, a statistic or estimate is calculated; that same statistic or estimate is then recalculated for each bootstrap resample. The resamples need not be smaller than the original sample, as in the diagram; more often they are the same size as the original. The distribution of the bootstrapped statistics or estimates is then compared to the statistic or estimate from the original sample, and is used to assess the potential error in that original value.
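As a concrete illustration, here is a minimal sketch of that procedure in Python (numpy is assumed; the exponential sample and the choice of the median as the statistic are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    sample = rng.exponential(scale=2.0, size=50)   # stand-in for "the original sample"

    original_stat = np.median(sample)              # statistic from the original sample

    # Recalculate the statistic on each bootstrap resample (same size, drawn with replacement).
    n_resamples = 5000
    boot_stats = np.empty(n_resamples)
    for i in range(n_resamples):
        resample = rng.choice(sample, size=sample.size, replace=True)
        boot_stats[i] = np.median(resample)

    # The spread of the bootstrapped statistics is used to assess the error in the original estimate.
    print("original median:", original_stat)
    print("bootstrap standard error:", boot_stats.std(ddof=1))
    print("95% percentile interval:", np.percentile(boot_stats, [2.5, 97.5]))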

An especially common use of the bootstrap is in machine learning prediction methods. In one application, bootstrap aggregating (“bagging”), a model is fit to each of many bootstrap resamples and the resulting predictions are aggregated, e.g. by averaging.
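As a hedged sketch of that idea (Python with numpy only; the noisy sine data and the simple polynomial fits are stand-ins for a real training set and prediction model), each model is fit to a bootstrap resample of the rows and the predictions are averaged:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 40)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)   # synthetic training data

    grid = np.linspace(0, 1, 100)                 # points at which to predict
    n_models = 200
    preds = np.empty((n_models, grid.size))
    for i in range(n_models):
        idx = rng.integers(0, x.size, size=x.size)        # bootstrap resample of the rows
        coeffs = np.polyfit(x[idx], y[idx], deg=5)        # fit one model per resample
        preds[i] = np.polyval(coeffs, grid)

    bagged = preds.mean(axis=0)   # aggregate the predictions by averaging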

The bootstrap is about to enter its second half-century; the first published bootstrap example was in 1969.

Julian Simon, the University of Maryland economist and demographer, included a bootstrap sample size illustration in his 1969 text Basic Research Methods for Social Science, among a compendium of Monte Carlo techniques for inference.

The bootstrap was given its name, and its full statistical foundation, in 1979 by the Stanford statistician Bradley Efron. Of course, only with the widespread availability of computing power did the bootstrap gain popularity.

For many mathematical minds, the crude simplicity of the bootstrap was offensive. Why does it work? Consider a slightly modified version of the bootstrap algorithm:

  1. Replicate the original sample (say) thousands of times. Now you have a “population” to draw from. Although synthetic, it also embodies everything you know about the population that gave rise to your sample.
  2. Draw lots of samples from this synthetic population without replacement.
  3. For each such sample, recalculate the statistic or estimate of interest (a minimal code sketch follows the list).
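A minimal sketch of that synthetic-population variant (Python with numpy; the sample, the number of replications, and the statistic are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(2)
    sample = rng.exponential(scale=2.0, size=50)   # the original sample (made up)

    # Step 1: replicate the sample thousands of times to form a synthetic "population".
    population = np.tile(sample, 5000)

    # Steps 2 and 3: repeatedly draw samples of the original size without replacement
    # and recalculate the statistic of interest on each draw.
    n_resamples = 2000
    stats = np.empty(n_resamples)
    for i in range(n_resamples):
        draw = rng.choice(population, size=sample.size, replace=False)
        stats[i] = np.median(draw)

    # The spread of `stats` behaves, for practical purposes, like an ordinary bootstrap distribution.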

Some reflection will show that this is functionally equivalent to the bootstrap: drawing without replacement from a population made of many identical copies of the sample is, for practical purposes, the same as drawing with replacement from the sample itself. But how does it compare to classical formula-based inference that relies on the normal approximation?

The latter, instead of replicating the original sample thousands of times, substitutes an infinite, normally distributed population whose mean and standard deviation are taken from the sample.
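In code, the classical recipe for a mean needs nothing beyond those two numbers; a sketch under the same assumptions as above (numpy, a made-up sample, and the usual 1.96 multiplier for a 95% interval):

    import numpy as np

    rng = np.random.default_rng(3)
    sample = rng.exponential(scale=2.0, size=50)   # a made-up sample

    mean = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(sample.size)     # formula-based standard error
    ci = (mean - 1.96 * se, mean + 1.96 * se)          # 95% interval under the normal approximation
    print("classical 95% CI for the mean:", ci)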

Where the data are normally distributed and well-behaved, the classical approach works well, is less “lumpy” than the bootstrap, and provides better coverage in the extremes.

Much real-world data, though, is far from normally distributed, and the bootstrap works much better in such cases.