I do not normally post job adverts, but this was very specifically targeted at “applied time series candidates”, so I thought it might be of sufficient interest to readers of this blog.
Almost all prediction intervals from time series models are too narrow. This is a well-known phenomenon and arises because they do not account for all sources of uncertainty. In my 2002 IJF paper, we measured the size of the problem by computing the actual coverage percentage of the prediction intervals on hold-out samples. We found that for ETS models, nominal 95% intervals may only provide coverage between 71% and 87%. The difference is due to missing sources of uncertainty.
There are at least four sources of uncertainty in forecasting using time series models:
- The random error term;
- The parameter estimates;
- The choice of model for the historical data;
- The continuation of the historical data generating process into the future.
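The undercoverage is easy to check for yourself. The following R sketch (using the forecast package, with a simulated monthly series standing in for real data) computes the empirical coverage of nominal 95% ETS prediction intervals on a hold-out sample:

```r
library(forecast)

set.seed(1)
y <- ts(cumsum(rnorm(120)) + 50, frequency = 12)  # simulated monthly series
train <- window(y, end = c(8, 12))    # first 96 observations
test  <- window(y, start = c(9, 1))   # last 24 observations held out

fit <- ets(train)
fc  <- forecast(fit, h = length(test), level = 95)

# Proportion of hold-out observations falling inside the 95% interval
covered <- test >= fc$lower[, 1] & test <= fc$upper[, 1]
mean(covered)
```

Averaging this proportion over many series and many forecast origins is what produces figures like the 71–87% quoted above.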
The hts package for R allows for forecasting hierarchical and grouped time series data. The idea is to generate forecasts for all series at all levels of aggregation without imposing the aggregation constraints, and then to reconcile the forecasts so they satisfy the aggregation constraints. (An introduction to reconciling hierarchical and grouped time series is available in this Foresight paper.)
The base forecasts can be generated using any method, with ETS models and ARIMA models provided as options in the `forecast.gts()` function. As ETS models do not allow for regressors, you will need to choose ARIMA models if you want to include regressors.
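As a minimal sketch of the workflow (using a made-up two-level hierarchy with four bottom-level series; function names are from the hts package), base ARIMA forecasts can be generated at every level and then reconciled like this:

```r
library(hts)

set.seed(1)
# Four bottom-level monthly series, grouped under two middle nodes
# that each have two children
bts <- ts(matrix(cumsum(rnorm(240)), ncol = 4), frequency = 12)
y <- hts(bts, nodes = list(2, c(2, 2)))

# Base forecasts from ARIMA models at all levels of aggregation,
# reconciled so the forecasts satisfy the aggregation constraints
fc <- forecast(y, h = 12, fmethod = "arima", method = "comb")
```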
I’ve received a few emails about including regression variables (i.e., covariates) in TBATS models. As TBATS models are related to ETS models, `tbats()` is unlikely to ever include covariates, as explained here. It won’t actually complain if you include an `xreg` argument, but it will ignore it.
When I want to include covariates in a time series model, I tend to use `auto.arima()` with covariates included via the `xreg` argument. If the time series has multiple seasonal periods, I use Fourier terms as additional covariates. See my post on forecasting daily data for some discussion of this model. Note that `fourierf()` now handles `msts` objects, so it is very simple to do this.
For example, if `holiday` contains some dummy variables associated with public holidays and `holidayf` contains the corresponding variables for the first 100 forecast periods, then the following code can be used:
```r
# x is the original series, observed daily
y <- msts(x, seasonal.periods = c(7, 365.25))  # weekly and annual seasonality
z <- fourier(y, K = c(5, 5))                   # Fourier terms for the history
zf <- fourierf(y, K = c(5, 5), h = 100)        # Fourier terms for the forecasts
fit <- auto.arima(y, xreg = cbind(z, holiday), seasonal = FALSE)
fc <- forecast(fit, xreg = cbind(zf, holidayf), h = 100)
```
The main disadvantage of the ARIMA approach is that the seasonality is forced to be periodic, whereas a TBATS model allows for dynamic seasonality.
My forecasting textbook with George Athanasopoulos is already available online (for free), and in print via Amazon (for under $40). Now we have made it available as a downloadable e-book via Google Books (for $15.55). The Google Books version is identical to the print version on Amazon (apart from a few typos that have been fixed).
To use the e-book version on an iPad or Android tablet, you need to have the Google Books app installed [iPad, Android]. You could also put it on an iPhone or Android phone, but I wouldn’t recommend it as the text will be too small to read.
You can download a free sample (up to the end of Chapter 2) if you want to check how it will look on your device.
The sales of the print and e-book versions are used to fund the running of the OTexts website, where all OTexts books are freely available.
The online version is continuously updated — any errors discovered are fixed immediately. The print and e-book versions will be updated approximately annually to bring them into line with the online version.
From today’s email:
I have just finished reading a copy of ‘Forecasting: Principles and Practice’ and I have found the book really interesting. I have particularly enjoyed the case studies and focus on practical applications.
After finishing the book I have joined a forecasting competition to put what I’ve learnt to the test. I do have a couple of queries about the forecasting outputs required. The output required is a quantile forecast. Is this the same as prediction intervals? Is there any R function to produce quantiles from 0 to 99?
If you were able to point me in the right direction regarding the above it would be greatly appreciated.
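In short: quantile forecasts generalize prediction intervals — for example, the 5th and 95th percentiles bound a 90% prediction interval. There is no single built-in function that returns all 99 percentiles, but they are easy to obtain by simulating many future sample paths. Here is a sketch using the forecast package's `simulate()` method for ETS models (the series and settings are purely illustrative):

```r
library(forecast)

fit <- ets(USAccDeaths)   # any fitted model with a simulate() method
h <- 12                   # forecast horizon
nsim <- 5000              # number of simulated future sample paths

sim <- matrix(NA_real_, nrow = nsim, ncol = h)
for (i in seq_len(nsim))
  sim[i, ] <- simulate(fit, nsim = h)

# Row j holds the jth percentile of the forecast distribution
# at each forecast horizon
q <- apply(sim, 2, quantile, probs = (1:99) / 100)
```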
The FPP resources page has recently been updated with several new additions including
- R code for all examples in the book. This was already available within each chapter, but the examples have been collected into one file per chapter to save copying and pasting the various code fragments.
- Slides from a course on Predictive Analytics from the University of Sydney.
- Slides from a course on Economic Forecasting from the University of Hawaii.
If anyone using the book has other material that could be made available, please send it to me. For example: recorded lectures, slides, additional examples, assignments, exam questions, solutions, etc.
Today I read a paper that had been submitted to the IJF which included the following figure
along with several similar plots. I haven’t seen anything this bad for a long time. In fact, I think I would find it very difficult to reproduce using R, or even Excel (which is particularly adept at bad graphics).
A few years ago I produced “Twenty rules for good graphics”. I think I need to add a couple of additional rules:
- Represent time changes using lines.
- Never use fill patterns such as cross-hatching.
(My original rule #20 said “Avoid pie charts.”)
It would have been relatively simple to show these data as six lines on a plot of GDP against time. That would have made it obvious that the European GDP was shrinking, the GDP of Asia/Oceania was increasing, while other regions of the world were fairly stable. At least I think that is what is happening, but it is very hard to tell from such graphical obfuscation.
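To illustrate (with entirely made-up GDP figures, since I don’t have the paper’s data, and only three regions for brevity), a line plot of this kind takes just a few lines of R:

```r
# Hypothetical GDP values (trillions) by region
years <- 2005:2010
gdp <- cbind(
  Europe         = c(16.0, 15.6, 15.1, 14.6, 14.2, 13.8),  # shrinking
  `Asia/Oceania` = c(10.0, 11.0, 12.1, 13.0, 14.2, 15.1),  # growing
  Americas       = c(18.0, 18.2, 18.1, 18.3, 18.2, 18.4)   # fairly stable
)
matplot(years, gdp, type = "l", lty = 1, lwd = 2,
        xlab = "Year", ylab = "GDP (trillions)")
legend("topleft", legend = colnames(gdp), lty = 1, lwd = 2, col = 1:3)
```

The trends that the original chart obscured are immediately visible as lines.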
On 23–25 September, I will be running a 3-day workshop in Perth on “Forecasting: principles and practice” mostly based on my book of the same name.
Workshop participants will be assumed to be familiar with basic statistical tools such as multiple regression, but no knowledge of time series or forecasting will be assumed. Some prior experience in R is highly desirable.
Venue: The University Club, University of Western Australia, Nedlands WA.
- Day 1:
- Forecasting tools, seasonality and trends, exponential smoothing.
- Day 2:
- State space models, stationarity, transformations, differencing, ARIMA models.
- Day 3:
- Time series cross-validation, dynamic regression, hierarchical forecasting, nonlinear models.
The course will involve a mixture of lectures and practical sessions using R. Each participant must bring their own laptop with R installed, along with the fpp package and its dependencies.
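Setting up is one line in R (the fpp package pulls in the forecast package and the book’s data sets as dependencies):

```r
install.packages("fpp")  # also installs forecast and the data packages
library(fpp)
```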
For costs and enrolment details, go to
GEFCom 2014 is the most advanced energy forecasting competition ever organized, both in terms of the data involved, and in terms of the way the forecasts will be evaluated.
So everyone interested in energy forecasting should head over to the competition webpage and start forecasting: www.gefcom.org.
Highlights of GEFCom2014:
- An upgraded edition of GEFCom2012.
- Four tracks: electric load, electricity price, wind power and solar power forecasting.
- Probabilistic forecasting: contestants are required to submit 99 quantiles for each step throughout the forecast horizon.
- Rolling forecasting: incremental data sets are released on a weekly basis to forecast the next period of interest.
- Prizes for winning teams and institutions: up to 3 teams from each track will be recognized as winning teams; top institutions with multiple well-performing teams will be recognized as winning institutions.
- Global participation: 200+ people from 40+ countries have already signed up for the GEFCom2014 interest list.
Tao Hong (the main organizer) has a few tips on his blog that you should read before starting.