Visualization of probabilistic forecasts

This week my research group discussed Adrian Raftery’s recent paper on “Use and Communication of Probabilistic Forecasts”, which provides a fascinating but brief survey of some of his work on modelling and communicating uncertain futures. Coincidentally, today I was also sent a copy of David Spiegelhalter’s paper on “Visualizing Uncertainty About the Future”. Both are well worth reading.

It made me think about my own efforts to communicate future uncertainty through graphics. Of course, for time series forecasts I normally show prediction intervals. I prefer to use more than one interval at a time because it helps convey a little more information. The default in the forecast package for R is to show both an 80% and a 95% interval like this: Continue reading →
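
For example, something along these lines (using the USAccDeaths series purely as a stand-in; any series and model would do) produces the default plot with both intervals:

library(forecast)

fit <- ets(USAccDeaths)                           # any model would do; ets() is just an example
fc <- forecast(fit, h=24)                         # default is level=c(80, 95)
plot(fc)                                          # shades the 80% and 95% intervals
fc2 <- forecast(fit, h=24, level=c(50, 80, 95))   # other intervals can be requested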

IJF review papers

Review papers are extremely useful for new researchers such as PhD students, or when you want to learn about a new research field. The International Journal of Forecasting produced a whole review issue in 2006, and it contains some of the most highly cited papers we have ever published. Now, beginning with the latest issue of the journal, we have started publishing occasional review articles on selected areas of forecasting. The first two articles are:

  1. Electricity price forecasting: A review of the state-of-the-art with a look into the future by Rafał Weron.
  2. The challenges of pre-launch forecasting of adoption time series for new durable products by Paul Goodwin, Sheik Meeran, and Karima Dyussekeneva.

Both tackle very important topics in forecasting. Weron’s paper contains a comprehensive survey of work on electricity price forecasting, coherently bringing together a large body of diverse research; I think it is the longest paper I have ever approved, at 50 pages. Goodwin, Meeran and Dyussekeneva review research on new product forecasting, a problem faced by every company that produces goods or services: when there are no historical data available, how do you forecast the sales of your product?

We have a few other review papers in progress, so keep an eye out for them in future issues.

Seasonal periods

I get questions about this almost every week. Here is an example from a recent comment on this blog:

I have two large time series data. One is separated by seconds intervals and the other by minutes. The length of each time series is 180 days. I’m using R (3.1.1) for forecasting the data. I’d like to know the value of the “frequency” argument in the ts() function in R, for each data set. Since most of the examples and cases I’ve seen so far are for months or days at the most, it is quite confusing for me when dealing with equally separated seconds or minutes. According to my understanding, the “frequency” argument is the number of observations per season. So what is the “season” in the case of seconds/minutes? My guess is that since there are 86,400 seconds and 1440 minutes a day, these should be the values for the “freq” argument. Is that correct?

Continue reading →
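
The short answer is that the frequency should be the number of observations per seasonal period, so if the dominant pattern is daily, the values would be 86,400 for second data and 1440 for minute data. A minimal sketch (xsec and xmin are hypothetical vectors of second-level and minute-level observations, not from the original comment):

library(forecast)

ysec <- ts(xsec, frequency=86400)                        # 60*60*24 seconds per day
ymin <- ts(xmin, frequency=1440)                         # 60*24 minutes per day
ymin2 <- msts(xmin, seasonal.periods=c(1440, 10080))     # daily and weekly patterns together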

Prediction intervals too narrow

Almost all prediction intervals from time series models are too narrow. This is a well-known phenomenon and arises because they do not account for all sources of uncertainty. In my 2002 IJF paper, we measured the size of the problem by computing the actual coverage percentage of the prediction intervals on hold-out samples. We found that for ETS models, nominal 95% intervals may only provide coverage between 71% and 87%. The difference is due to missing sources of uncertainty.

There are at least four sources of uncertainty in forecasting using time series models:

  1. The random error term;
  2. The parameter estimates;
  3. The choice of model for the historical data;
  4. The continuation of the historical data generating process into the future.

Continue reading →
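
As a rough illustration of the coverage calculation described above (a toy example using the USAccDeaths series, not the data from the 2002 paper):

library(forecast)

train <- window(USAccDeaths, end=c(1976,12))             # fit to a training period
test <- window(USAccDeaths, start=c(1977,1))             # hold-out sample
fc <- forecast(ets(train), h=length(test), level=95)
covered <- test >= fc$lower[,1] & test <= fc$upper[,1]
mean(covered)                                            # empirical coverage of the nominal 95% interval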

hts with regressors

The hts package for R allows for forecasting hierarchical and grouped time series data. The idea is to generate forecasts for all series at all levels of aggregation without imposing the aggregation constraints, and then to reconcile the forecasts so they satisfy the aggregation constraints. (An introduction to reconciling hierarchical and grouped time series is available in this Foresight paper.)

The base forecasts can be generated using any method, with ETS models and ARIMA models provided as options in the forecast.gts() function. As ETS models do not allow for regressors, you will need to choose ARIMA models if you want to include regressors. Continue reading →
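
A hedged sketch of how this might look, where bts, xmat and xmatf are hypothetical objects (a matrix of bottom-level series, and regressor matrices for the historical and forecast periods), and regressors are passed to the ARIMA base forecasts via the xreg and newxreg arguments:

library(hts)

y <- hts(bts, nodes=list(2, c(3,2)))       # build the hierarchy from the bottom-level series
# ETS base forecasts cannot take regressors, so fmethod must be "arima"
fc <- forecast(y, h=12, method="comb", fmethod="arima", xreg=xmat, newxreg=xmatf)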

Congratulations to Dr Souhaib Ben Taieb

Souhaib Ben Taieb has been awarded his doctorate at the Université libre de Bruxelles and so he is now officially Dr Ben Taieb! Although Souhaib lives in Brussels, and was a student at the Université libre de Bruxelles, I co-supervised his doctorate (along with Professor Gianluca Bontempi). Souhaib is the 19th PhD student of mine to graduate.

His thesis was on “Machine learning strategies for multi-step-ahead time series forecasting” and is now available online. The prior research in this area has largely centred on two strategies (recursive and direct), and on which works better in which circumstances. Recursive forecasting is the standard approach, where a model is designed to predict one step ahead and is then iterated to obtain multi-step-ahead forecasts. Direct forecasting involves using a separate forecasting model for each forecast horizon. Souhaib took a very different perspective from the prior research and has developed new strategies that are either hybrids of these two strategies, or completely different from either of them. The resulting forecasts are often significantly better than those obtained using the more traditional approaches.
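
For readers unfamiliar with the two classical strategies, here is a rough sketch of the difference using base R, a simple AR model and a horizon of 3 (a toy illustration only, not the methods developed in the thesis):

y <- as.numeric(USAccDeaths)
h <- 3; p <- 12                                    # forecast horizon and number of lags

# Recursive: fit a one-step model and iterate it forward h times
rec <- predict(ar(y, order.max=p), n.ahead=h)$pred

# Direct: a separate regression of the horizon-hh value on the last p lags, one model per horizon
X <- embed(y, p + h)                               # rows of p+h consecutive values, newest first
dir <- sapply(1:h, function(hh) {
  yy <- X[, h - hh + 1]                            # response at horizon hh
  xx <- X[, (h + 1):(h + p)]                       # the p most recent lags
  sum(coef(lm(yy ~ xx)) * c(1, rev(tail(y, p))))   # forecast from the last p observations
})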

Some of the papers to come out of Souhaib’s thesis are already available on his Google Scholar page.

Well done Souhaib, and best wishes for the future.

IIF Sponsored Workshops

The International Institute of Forecasters sponsors workshops every year, each of which focuses on a specific theme. The purpose of these workshops is to facilitate small, informal meetings where experts in a particular field of forecasting can discuss forecasting problems, research, and solutions. Over the years, our workshops have covered topics ranging from Predicting Rare Events and ICT Forecasting to, most recently, Singular Spectrum Analysis. Often these workshops are associated with a special issue of the International Journal of Forecasting.

If you are already hosting a workshop on a forecasting topic and need support from the IIF, or if you are interested in organising and hosting a new workshop, please contact George Athanasopoulos.

A list of past workshops, along with the workshop guidelines, is provided on the IIF website.

TBATS with regressors

I’ve received a few emails about including regression variables (i.e., covariates) in TBATS models. As TBATS models are related to ETS models, tbats() is unlikely to ever include covariates as explained here. It won’t actually complain if you include an xreg argument, but it will ignore it.

When I want to include covariates in a time series model, I tend to use auto.arima() with covariates included via the xreg argument. If the time series has multiple seasonal periods, I use Fourier terms as additional covariates. See my post on forecasting daily data for some discussion of this model. Note that fourier() and fourierf() now handle msts objects, so it is very simple to do this.

For example, if holiday contains some dummy variables associated with public holidays and holidayf contains the corresponding variables for the first 100 forecast periods, then the following code can be used:

library(forecast)

y <- msts(x, seasonal.periods=c(7,365.25))    # daily data with weekly and annual seasonality
z <- fourier(y, K=c(5,5))                     # Fourier terms for the historical period
zf <- fourierf(y, K=c(5,5), h=100)            # Fourier terms for the forecast period
fit <- auto.arima(y, xreg=cbind(z,holiday), seasonal=FALSE)
fc <- forecast(fit, xreg=cbind(zf,holidayf), h=100)

The main disadvantage of the ARIMA approach is that the seasonality is forced to be periodic, whereas a TBATS model allows for dynamic seasonality.
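
For comparison, a TBATS model for the same series (necessarily without the holiday covariates) would be along these lines:

fit2 <- tbats(y)                              # y is the msts object from above
fc2 <- forecast(fit2, h=100)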

FPP now available as a downloadable e-book

My forecasting textbook with George Athanasopoulos is already available online (for free), and in print via Amazon (for under $40). Now we have made it available as a downloadable e-book via Google Books (for $15.55). The Google Books version is identical to the print version on Amazon (apart from a few typos that have been fixed).

To use the e-book version on an iPad or Android tablet, you need to have the Google Books app installed [iPad, Android]. You could also put it on an iPhone or Android phone, but I wouldn’t recommend it as the text will be too small to read.

You can download a free sample (up to the end of Chapter 2) if you want to check how it will look on your device.

The sales of the print and e-book versions are used to fund the running of the OTexts website, where all OTexts books are freely available.

The online version is continuously updated: any errors discovered are fixed immediately. The print and e-book versions will be updated approximately annually to bring them into line with the online version.