Are we getting better at forecasting?

I was interviewed recently for the Boston Globe. The interview was by email, and I thought it might be useful to post it here.

Here are the questions from the journalist.

Are we better at predicting future events than we used to be? Or are there obstacles inherent to the endeavor that prevent us from ever really being able to do it accurately? If we are better, then what allowed it? Computational power? Abundance of data? Some methodological insight?

And: there’s a way in which all scientific inquiry is about prediction. We develop models in order to explain outcomes; once we’ve established those models, we can theoretically use them to predict. And yet, forecasting future events seems like a distinct undertaking. Is it? If so, what is the distinction?

And this is my reply.

Prediction is largely about distinguishing genuine information from randomness. Nothing interesting is ever perfectly predictable because there is always an element of randomness associated with it. Sometimes the randomness is large, and then the event is hard to predict. At other times, the randomness is relatively small, and then the event is easy to predict. For example, tides even a year ahead are relatively easy to predict because the pattern is strong and the uncertainty is small. But migration numbers next year are hard to predict because there are no strong and consistent historical patterns, and there is a lot of uncertainty due to changing social environments, changing government policy, changing financial and political contexts, and so on.
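
As a rough illustration of this idea, here is a minimal sketch (with made-up series and numbers) comparing a tide-like series, where the pattern dominates the noise, with a migration-like series, where the noise dominates; the signal-to-noise ratio summarises how predictable each one is:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(200)
signal_strong = 10 * np.sin(2 * np.pi * t / 25)  # strong, stable cycle (like tides)
signal_weak = 0.5 * np.sin(2 * np.pi * t / 25)   # weak pattern (like migration)

tide_like = signal_strong + rng.normal(0, 0.5, t.size)    # small randomness
migration_like = signal_weak + rng.normal(0, 10, t.size)  # large randomness

def signal_to_noise(series, signal):
    """Ratio of pattern variance to noise variance: high means predictable."""
    return np.var(signal) / np.var(series - signal)

print(signal_to_noise(tide_like, signal_strong))     # large: easy to predict
print(signal_to_noise(migration_like, signal_weak))  # near zero: hard to predict
```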

To take another example, I do a lot of work in predicting electricity demand from one day to 30 years ahead. The models we use for predicting electricity demand in Australia use information about temperatures, public holidays, time of day, day of week, and so on. They produce remarkably accurate forecasts of the level of demand, which are used by the Australian Energy Market Operator in planning sufficient generation capacity to meet the demand. But the models will never be perfect because we can’t take account of every person’s decision about when they will turn on their heater, or air conditioner, or microwave oven. The individual choices make the demand essentially random, but with some strong patterns.
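
The actual demand models are far more sophisticated than anything that fits here, but a toy regression with temperature and calendar effects, run on simulated data with illustrative numbers throughout, gives the flavour:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
idx = pd.date_range("2024-01-01", periods=24 * 365, freq="h")
hours = np.arange(len(idx))

# Simulated temperature with annual and daily cycles plus noise
temp = (20 + 10 * np.sin(2 * np.pi * hours / (24 * 365))
        + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, len(idx)))

# Weather and calendar features of the kind mentioned above
X = pd.DataFrame({
    "cooling": np.maximum(temp - 24, 0),  # air-conditioning load above 24°C
    "heating": np.maximum(16 - temp, 0),  # heating load below 16°C
    "hour": idx.hour,
    "weekday": idx.dayofweek,
})
X = pd.get_dummies(X, columns=["hour", "weekday"])

# Simulated demand: driven by temperature and time of day, plus the
# irreducible randomness of individual decisions to switch appliances on
y = (1000 + 30 * X["cooling"] + 25 * X["heating"]
     + 50 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 40, len(idx)))

model = LinearRegression().fit(X, y)
print(f"R^2 = {model.score(X, y):.3f}")  # strong patterns, but never perfect
```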

So your first question is really about whether we are getting better at identifying and modelling the patterns. The answer is yes. In almost all fields of study, our forecasts are better now than they were 20 years ago. But we will never have perfect forecasts because there will always be uncertainty, randomness, and unpredictable forces involved.

The improved forecasts have arisen for several reasons. First, computational power makes it possible to test forecasting models over and over again, and see what works well. In this way, we can improve the models simply by testing and refining them, much more than was ever possible before. Second, computational power makes it possible to fit much more sophisticated models than was done previously. Third, computational power has allowed the collection and analysis of enormous data sets, and this has led to some interesting new forecasting methods. For example, the recommendations produced by Netflix, Amazon and other web services are forecasts of what you will like, based on enormous data sets. It has only been possible to do these calculations effectively in the last few years.
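
One concrete form of that repeated testing is rolling-origin evaluation: forecast from every time point in the history and score the one-step-ahead errors. A minimal sketch, with toy forecasting rules and simulated data:

```python
import numpy as np

def rolling_origin_mae(series, forecast_fn, min_train=50):
    """Forecast one step ahead from every origin; return the mean absolute error."""
    errors = [series[t + 1] - forecast_fn(series[: t + 1])
              for t in range(min_train, len(series) - 1)]
    return np.mean(np.abs(errors))

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300))  # a wandering, random-walk-like series

naive = lambda history: history[-1]            # forecast with the last observation
overall_mean = lambda history: history.mean()  # forecast with the historical mean

# Cheap computation lets us compare candidate models across hundreds of origins
print("naive:", rolling_origin_mae(y, naive))
print("mean: ", rolling_origin_mae(y, overall_mean))  # much worse on this series
```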

We have also learned a lot about forecasting practice in the last 30 years (since the first academic forecasting journals were started). We have discovered new ways of testing models before using them for forecasting, and we have found new ways to measure the accuracy of forecasts, making it possible to better understand what makes a good forecast.
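
One example of such a measure is the mean absolute scaled error (MASE), which scales forecast errors by the in-sample errors of a one-step naive forecast, so accuracy can be compared across series with different units. A minimal sketch with made-up numbers:

```python
import numpy as np

def mase(actual, forecast, train):
    """Mean absolute scaled error: errors are scaled by the in-sample MAE of a
    one-step naive forecast, so MASE < 1 means we beat the naive method."""
    naive_mae = np.mean(np.abs(np.diff(train)))
    return np.mean(np.abs(actual - forecast)) / naive_mae

train = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0])
actual = np.array([13.0, 15.0])
forecast = np.array([13.5, 14.0])
print(mase(actual, forecast, train))  # about 0.47: better than naive
```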

I see a distinction between explanation and prediction. They are often confused in research, but I think they should be considered separately. A model that is good at explaining the past is not necessarily good at predicting the future. We develop models in order to explain outcomes, and those models can be used to predict, but they are not necessarily the best models for prediction. I prefer to develop separate models that are good for prediction, even if those models are not so good at explaining the past. Some of the same variables may be used in the two types of models, but some information may be unique to one of them.
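
A small simulation makes the point: a more flexible model can fit the past better and still forecast worse. Everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(60) / 60.0              # scaled time, for numerical stability
y = 6 * t + rng.normal(0, 1, t.size)  # a linear trend plus noise
train, test = slice(0, 40), slice(40, 60)

for degree in (1, 10):
    coeffs = np.polyfit(t[train], y[train], degree)
    fit_mae = np.mean(np.abs(np.polyval(coeffs, t[train]) - y[train]))
    fcast_mae = np.mean(np.abs(np.polyval(coeffs, t[test]) - y[test]))
    print(f"degree {degree:2d}: in-sample MAE {fit_mae:.2f}, "
          f"out-of-sample MAE {fcast_mae:.2f}")

# The degree-10 polynomial "explains" the training data better, but it
# extrapolates wildly, so its forecasts are typically far worse.
```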

This is one of the issues for climate change modellers. The models were designed to explain the way the climate works, but they are being used to forecast what will happen to the climate over the next few decades. We do not know if the models are good at forecasting, and it is likely that good forecasting models are not the same as good explanation models. We published a paper on this in the International Journal of Forecasting recently.


Comments:

  • Rubén Ruiz

    Very interesting post, Robert. Most forecasters are aware of explanation vs prediction, but I can attest that most practitioners and users of forecasting software are not, especially at enterprises.