Examples of Bad Forecasting

In a couple of days, the Wall Street Journal will come out with its November survey of economists’ forecasts. It’s a particularly sensitive time, with elections in a few days and President Trump attacking the Federal Reserve for raising interest rates. It’s a good time to recall major forecasting gaffes of the past.

In 1987, best-selling author Ravi Batra’s book Surviving the Great Depression of 1990 made the New York Times best-seller list. The economy failed to cooperate; there was a minor dip in 1990, but by and large the two decades after the book’s publication were a period of solid growth for the economy.

Betting on the opposite side of the coin, David Lereah, former chief economist of the National Association of Realtors, wrote a book in 2006 called Why the Real Estate Boom Will Not Bust. An ill-timed forecast if ever there was one.

In 2010, the Virgin airline and space travel magnate Richard Branson forecast an acute shortfall of oil in coming years; the predominant headline for the oil industry since then has been a supply glut and stagnant prices.

All Forecasts Are Wrong

Actually, being wrong is the natural state of a forecast. If a forecast turns out to be spot-on, you can bet that luck played a key role. There is always uncertainty about the future. Or, as Yogi Berra said, “It’s tough to make predictions, especially about the future.”

There is a considerable body of statistical methods dealing with time-series forecasts (for more about these methods, see the Forecasting Analytics online course at Statistics.com). They all deal with a useful, but incomplete, part of the problem. They generally consider a time series of data and answer the question, “How will this series play out in the future if the world remains unchanged?” Of course, the world rarely does remain unchanged, so the statistician’s forecast will almost always fail to take that into account.

The Statistician’s Hazard

For many (most?) people, statistics is cloaked with mystery. This has a short-term advantage for those armed with statistical models and forecasts: most people will accord them greater weight than they truly deserve. Some see them as black boxes, believable because they come from supposed experts. Others may attempt to understand something about the forecasting or modeling method, but the effort required for that turns their attention away from the aspects of the world that might change in a way that invalidates the model.

Honest statisticians will attempt to provide clarity about what their models and forecasts cover, with candor about what they don’t cover. The temptation, and the hazard, is to relish the role of the wizard and to let the true limits of forecasts and models remain cloaked in statistical obscurity. In the long run, the best antidote to this problem is greater statistical literacy among the consumers of models and forecasts.

Time Series Forecasting – the Basics

Pure time-series forecasting essentially takes a series of observations and projects them forward, assuming that the same conditions that produced the data will persist into the future. It is useful to think of building up a forecasted value from three components:

  • Level
  • Trend
  • Seasonality

Random noise will also be a component of future values, but random noise can’t be forecast. “Level” is the most basic component of a future value; it’s what the future of the time series would be if there were no consistent movement up or down, no cyclical pattern, and no noise. “Trend” is long-term growth or decline over the entire time series – say, the growth of the economy. “Seasonality” is any regular cyclical pattern – say, the summer boost in travel, or the daily peaks and valleys of commuter traffic (despite the name, it need not correspond to the seasons of the year).
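
To make the components concrete, here is a rough sketch that simulates a monthly series from a level, a trend, a seasonal cycle, and noise, and then recovers the pieces with an additive decomposition. The simulated numbers and the use of Python’s statsmodels package are illustrative assumptions only, not a prescription for how to do this.

```python
# A minimal sketch, assuming a monthly series in a pandas Series with a
# DatetimeIndex and that the statsmodels package is installed.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
months = pd.date_range("2015-01-01", periods=60, freq="MS")

level = 100                                                  # baseline value
trend = 0.5 * np.arange(60)                                  # slow long-term growth
seasonality = 10 * np.sin(2 * np.pi * np.arange(60) / 12)    # yearly cycle
noise = rng.normal(scale=3, size=60)                         # cannot be forecast

series = pd.Series(level + trend + seasonality + noise, index=months)

# Additive decomposition: observed = trend + seasonal + residual
result = seasonal_decompose(series, model="additive", period=12)
print(result.trend.dropna().head())   # estimated trend (plus level)
print(result.seasonal.head(12))       # estimated seasonal pattern over one year
```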

Statistical methods for forecasting generally fall into two categories:

  • Model-based forecasts
  • Data-centric smoothing

Model-based methods fit a model to the existing data (say, a linear regression) and project it into the future. Data-centric methods attempt to remove noise by averaging or otherwise combining information from a local set of data – for example, a moving average that simply averages the last few months of data. In either case, a seasonal pattern may need to be understood and removed from the data via a seasonal index (“seasonal adjustment”).
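
Here is a minimal sketch of the two approaches side by side on a made-up monthly sales series; the variable names, the linear trend, the 12-month horizon, and the 6-month window are illustrative choices rather than recommendations.

```python
# Model-based vs. data-centric smoothing, sketched on simulated data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
months = pd.date_range("2015-01-01", periods=60, freq="MS")
sales = pd.Series(200 + 1.5 * np.arange(60) + rng.normal(scale=8, size=60),
                  index=months)

# Model-based: fit a linear trend and project it 12 months ahead
t = np.arange(len(sales))
slope, intercept = np.polyfit(t, sales.values, deg=1)
future_t = np.arange(len(sales), len(sales) + 12)
model_forecast = intercept + slope * future_t

# Data-centric: average the last 6 months and carry that flat value forward
moving_avg = sales.rolling(window=6).mean().iloc[-1]
smoothing_forecast = np.repeat(moving_avg, 12)

print("Linear-trend forecast (first 3 months):", np.round(model_forecast[:3], 1))
print("Moving-average forecast (first 3 months):", np.round(smoothing_forecast[:3], 1))
```

Note the trade-off the sketch makes visible: the fitted trend extrapolates the long-term growth, while the moving average stays anchored to recent levels and ignores the trend entirely.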

You can learn all about these methods in the Forecasting Analytics course at Statistics.com.

Keep in mind that a statistical forecast projects a time series into the future on the assumption that the consistent set of conditions that produced the past data will continue. If those conditions change, you’ll need to take that into account. And if the prior data reflect distinct shifts in conditions within the period (e.g. the financial meltdown of 2008), it is probably better to develop different models for the different periods. What-if scenarios can also be built on top of base time-series forecasts – e.g. you could generate a base forecast of support call center volume, then add an increment if you anticipate a major software upgrade.
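
A hypothetical sketch of that kind of scenario adjustment is below; the call volumes, the quarterly horizon, and the size of the upgrade bump are all made-up assumptions for illustration only.

```python
# A what-if scenario layered on top of a base forecast (illustrative numbers).
import pandas as pd

base_forecast = pd.Series(
    [1200, 1250, 1300, 1350],
    index=pd.period_range("2019Q1", periods=4, freq="Q"),
    name="support_calls",
)

# Scenario: a major software upgrade ships in Q3, assumed to add 20% to call
# volume in that quarter and 10% in the following quarter.
upgrade_bump = pd.Series([1.0, 1.0, 1.2, 1.1], index=base_forecast.index)
scenario_forecast = base_forecast * upgrade_bump

print(pd.DataFrame({"base": base_forecast, "with_upgrade": scenario_forecast}))
```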

Bottom line: There are elegant statistical methods for time series forecasting, but don’t be fooled by their sophistication. They can take you only so far, and beyond that you are into the art of analytics, not the science of data.