
Historical Spotlight: Risk Simulation – Since 1946

Simulation – A Venerable History

One of the most consequential and valuable analytical tools in business is simulation, which helps us make decisions in the face of uncertainty, such as these:

  • An airline knows, on average, what proportion of ticketed passengers show up for a flight, but the number for any given flight is uncertain.  How many tickets beyond capacity should the airline sell?  (A small simulation sketch follows this list.)
  • A company seeking health insurance for its employees is considering self-insuring.  There is uncertainty about the number of employees covered and the average claim per employee.  How much should the company set aside for the year?
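To make the first question concrete, here is a minimal Monte Carlo sketch in Python.  The seat count, show-up rate, fare, and bump cost are invented for illustration, not airline data; the point is only the mechanic of drawing a random number of show-ups many times and averaging the financial result at each overbooking level.

    import random

    def average_net_revenue(seats=180, tickets_sold=190, p_show=0.92,
                            fare=200, bump_cost=600, n_trials=20_000):
        """Average net revenue when selling more tickets than there are seats.

        Each trial draws the number of ticketed passengers who actually show up
        (each shows independently with probability p_show) and subtracts a
        penalty for every passenger who must be bumped."""
        total = 0.0
        for _ in range(n_trials):
            shows = sum(random.random() < p_show for _ in range(tickets_sold))
            bumped = max(0, shows - seats)
            total += tickets_sold * fare - bumped * bump_cost
        return total / n_trials

    # Try several overbooking levels and see where average revenue peaks.
    for sold in (180, 186, 192, 198, 204):
        print(sold, round(average_net_revenue(tickets_sold=sold)))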

You’ll learn how to set up and solve problems like these in the online course Risk Simulation and Queuing here at the Institute, taught by Prof. Cliff Ragsdale, author of the leading text Spreadsheet Modeling and Decision Analysis.

The Atomic Bomb Project

The simplicity and value of simulation can be summed up in its 1946 origin story.  Physicist Stanislaw Ulam, who was working on the atomic bomb project at Los Alamos, was on leave recovering from an illness.  To occupy his mind, he started trying to calculate the probability that a dealt solitaire hand would result in a win – all 52 cards being placed on the piles anchored by the four aces.

“After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than ‘abstract thinking’ might not be to lay it out say one hundred times and simply observe and count the number of successful plays.”

The attractiveness of this brute-force method was enhanced by the availability of computers, which were then in their infancy.  In fact, the need to run simulations in support of the Manhattan Project was a major impetus for the rapid development of computers.  This being secret government work, a code name was required, and “Monte Carlo” was chosen, a nod to the famed casino district of Monaco (where Ulam’s uncle gambled).
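The core of the method needs almost no machinery.  A full solitaire simulator is more than a short note can hold, but the same shuffle-and-count logic applies to any card question; the Python sketch below (an illustration of the idea, not Ulam’s actual calculation) estimates the chance that at least one ace sits among the top four cards of a shuffled deck, a case where the exact answer, roughly 0.2813, can be checked by combinatorics.

    import random

    def top_cards_hold_an_ace(top_n=4):
        """Shuffle a 52-card deck and report whether the first top_n cards
        include an ace (cards 0-3 stand in for the four aces)."""
        deck = list(range(52))
        random.shuffle(deck)
        return any(card < 4 for card in deck[:top_n])

    n_deals = 100_000
    hits = sum(top_cards_hold_an_ace() for _ in range(n_deals))
    print(f"Estimated probability: {hits / n_deals:.4f}")  # exact value: about 0.2813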

Monte Carlo simulation is now ubiquitous, used across industries to support both automated and human-mediated decisions involving uncertainty.  In some cases, as with Ulam playing solitaire, the main benefit of simulation is reconciling many complex factors to arrive at a “net outcome.”  In most cases, though, decision-makers also want to know how much one simulation outcome might differ from another, both to establish a “range of uncertainty” around the net outcome and to spot any unexpected outcomes that might arise.
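The self-insurance question above is a natural place to see this “range of uncertainty” in action.  The sketch below uses invented figures – 500 covered employees, individual claims drawn from a normal distribution with mean $1,200 and standard deviation $400 – purely to show how a simulation yields not just an average claim total for the year but a percentile band around it.

    import random
    import statistics

    def one_year_of_claims(n_employees=500):
        """Total claims for one simulated year; each employee's claim is a
        non-negative draw from a normal distribution (illustrative only)."""
        return sum(max(0.0, random.gauss(1200, 400)) for _ in range(n_employees))

    outcomes = sorted(one_year_of_claims() for _ in range(5_000))
    mean = statistics.mean(outcomes)
    low = outcomes[int(0.05 * len(outcomes))]
    high = outcomes[int(0.95 * len(outcomes))]
    print(f"average year: {mean:,.0f}   5th-95th percentile: {low:,.0f} to {high:,.0f}")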

In 1977, baseball statistician Pete Palmer turned to the computer (historical note: the Apple I had just appeared) to answer some pressing questions, such as the value of the sacrifice bunt (where the batter taps the ball a short distance, knowing he will likely be out, in order to allow runners on base to advance).


Drawing on historical parameters, he simulated thousands of baseball games on the computer so that he could track outcomes from particular game situations.

Palmer determined that the “expected run value” with a runner on second and one out (after the bunt has advanced the runner) is actually less than the expected value with a runner on first and nobody out (i.e., before the bunt).  The bunt, in other words, leaves the team in a worse position.  It has taken a long time, but in the last few years the bunt has fallen out of favor.
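For readers who want to see roughly what such an experiment looks like, here is a toy half-inning simulator in Python.  The plate-appearance probabilities and base-running rules are simplified guesses, nothing like Palmer’s carefully fitted historical parameters, but running it lets you compare expected runs from “runner on first, nobody out” against “runner on second, one out” yourself.

    import random

    def plate_appearance():
        """One batter's result; the probabilities are illustrative guesses."""
        r = random.random()
        if r < 0.68:
            return "out"
        if r < 0.76:
            return "walk"
        if r < 0.92:
            return "single"
        if r < 0.97:
            return "double"
        return "homer"

    def advance_on_walk(bases):
        """Runners advance only when forced; returns (new bases, runs scored)."""
        runs = 0
        new_bases = set(bases)
        if 1 in bases:
            if 2 in bases:
                if 3 in bases:
                    runs += 1        # bases loaded: the runner on third walks home
                new_bases.add(3)
            new_bases.add(2)
        new_bases.add(1)
        return new_bases, runs

    def advance_on_single(bases):
        """Batter to first; runners on second and third score; runner on first to second."""
        runs = (2 in bases) + (3 in bases)
        return ({1, 2} if 1 in bases else {1}), runs

    def advance_on_double(bases):
        """Batter to second; runners on second and third score; runner on first to third."""
        runs = (2 in bases) + (3 in bases)
        return ({2, 3} if 1 in bases else {2}), runs

    def expected_runs(start_bases, start_outs, n_innings=50_000):
        """Average runs scored from a given base-out state to the end of the half-inning."""
        total = 0
        for _ in range(n_innings):
            bases, outs, runs = set(start_bases), start_outs, 0
            while outs < 3:
                result = plate_appearance()
                if result == "out":
                    outs += 1
                elif result == "walk":
                    bases, scored = advance_on_walk(bases)
                    runs += scored
                elif result == "single":
                    bases, scored = advance_on_single(bases)
                    runs += scored
                elif result == "double":
                    bases, scored = advance_on_double(bases)
                    runs += scored
                else:                # home run: everyone on base plus the batter scores
                    runs += len(bases) + 1
                    bases = set()
            total += runs
        return total / n_innings

    print("runner on first, nobody out:", round(expected_runs({1}, 0), 2))
    print("runner on second, one out:  ", round(expected_runs({2}, 1), 2))

Whatever the exact numbers, the intuition behind Palmer’s finding is that an out is a scarce resource: giving up one of the inning’s three outs generally costs more than the single base gained.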

Bottom line:  Monte Carlo simulation is used to

  • Arrive at a “net result” after factoring in uncertainty and complexity, and
  • Understand how much confidence to place in that net result.