Almost all prediction intervals from time series models are too narrow. This is a well-known phenomenon and arises because they do not account for all sources of uncertainty. In a 2002 IJF paper, my coauthors and I measured the size of the problem by computing the actual coverage percentage of the prediction intervals on hold-out samples. We found that for ETS models, nominal 95% intervals may provide coverage of only 71% to 87%. The difference is due to the missing sources of uncertainty.
There are at least four sources of uncertainty in forecasting using time series models:
- The random error term;
- The parameter estimates;
- The choice of model for the historical data;
- The continuation of the historical data generating process into the future.
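To see the effect directly, here is a minimal sketch of the hold-out coverage calculation described above, using the built-in USAccDeaths series and an automatically selected ETS model (both purely illustrative):
library(forecast)
# Withhold the last 24 observations, forecast them, and compute the
# empirical coverage of the nominal 95% prediction intervals.
train <- window(USAccDeaths, end = c(1976, 12))
test <- window(USAccDeaths, start = c(1977, 1))
fit <- ets(train)
fc <- forecast(fit, h = length(test), level = 95)
mean(test >= fc$lower & test <= fc$upper)  # often falls short of 0.95
Repeating this calculation over many series, as in the paper, gives coverage percentages like those quoted above.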
The hts package for R allows for forecasting hierarchical and grouped time series data. The idea is to generate forecasts for all series at all levels of aggregation without imposing the aggregation constraints, and then to reconcile the forecasts so they satisfy the aggregation constraints. (An introduction to reconciling hierarchical and grouped time series is available in this Foresight paper.)
The base forecasts can be generated using any method, with ETS models and ARIMA models provided as options in the forecast.gts() function. As ETS models do not allow for regressors, you will need to choose ARIMA models if you want to include regressors.
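For instance, here is a minimal sketch using the htseg1 example hierarchy that ships with the hts package (the horizon and method choices are just for illustration):
library(hts)
# Reconciled forecasts for the example hierarchy shipped with hts,
# using ARIMA base forecasts at every level of aggregation
fc <- forecast(htseg1, h = 10, method = "comb", fmethod = "arima")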
Although the Guardian claimed yesterday that I had explained “what went wrong” in the July and August unemployment figures, I made no attempt to do so as I had no information about the problems. Instead, I just explained a little about the purpose of seasonal adjustment.
However, today I learned a little more about the ABS unemployment data problems, including what may be the explanation for the fluctuations. The explanation was offered by Westpac’s chief economist, Bill Evans (see here for a video of him discussing the issue).
It’s not every day that seasonal adjustment makes the front page of the newspapers, but it has today with the ABS saying that the recent seasonally adjusted unemployment data would be revised.
I was interviewed about the underlying concepts for the Guardian in this piece.
Further comment from me about users paying for the ABS data is here.
I keep telling students that there are lots of jobs in data science (including statistics), and they often tell me they can’t find them advertised. As usual, you do have to do some networking, and one of the best ways is via a Data Science Meetup. Many cities now have one, including Melbourne, Sydney and London. It is the perfect opportunity to meet local employers, many of which are hiring due to the huge expansion in the use of data analysis in business (aka business analytics).
At the end of each Melbourne meetup, some employers have been advertising their current analytic job openings to the audience.
Now the local organizers are extending the opportunity by allowing job-seekers to give a 90-second pitch to employers. Details are provided on the message board.
I’ve received a few emails about including regression variables (i.e., covariates) in TBATS models. As TBATS models are related to ETS models, tbats() is unlikely to ever include covariates, as explained here. It won’t actually complain if you include an xreg argument, but it will ignore it.
When I want to include covariates in a time series model, I tend to use auto.arima() with the covariates included via the xreg argument. If the time series has multiple seasonal periods, I use Fourier terms as additional covariates. See my post on forecasting daily data for some discussion of this model. Note that fourier() and fourierf() now handle msts objects, so it is very simple to do this.
For example, if holiday contains some dummy variables associated with public holidays and holidayf contains the corresponding variables for the first 100 forecast periods, then the following code can be used:
# Weekly and annual seasonal periods, handled via Fourier terms
y <- msts(x, seasonal.periods=c(7,365.25))
z <- fourier(y, K=c(5,5))           # Fourier terms for the historical data
zf <- fourierf(y, K=c(5,5), h=100)  # matching terms for the forecast period
fit <- auto.arima(y, xreg=cbind(z,holiday), seasonal=FALSE)
fc <- forecast(fit, xreg=cbind(zf,holidayf), h=100)
The main disadvantage of the ARIMA approach is that the seasonal pattern is forced to repeat periodically without changing, whereas a TBATS model allows the seasonality to evolve dynamically over time.
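For comparison, fitting the TBATS alternative to the same series is a short sketch, although (as noted above) no covariates are possible:
fit2 <- tbats(y)              # picks up both seasonal periods from the msts object
fc2 <- forecast(fit2, h=100)  # no xreg: any covariates would be silently ignored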
My forecasting textbook with George Athanasopoulos is already available online (for free), and in print via Amazon (for under $40). Now we have made it available as a downloadable e-book via Google Books (for $15.55). The Google Books version is identical to the print version on Amazon (apart from a few typos that have been fixed).
To use the e-book version on an iPad or Android tablet, you need to have the Google Books app installed [iPad, Android]. You could also put it on an iPhone or Android phone, but I wouldn’t recommend it as the text will be too small to read.
You can download a free sample (up to the end of Chapter 2) if you want to check how it will look on your device.
The sales of the print and e-book versions are used to fund the running of the OTexts website, where all OTexts books are freely available.
The online version is continuously updated; any errors discovered are fixed immediately. The print and e-book versions will be updated approximately annually to bring them into line with the online version.
The FPP resources page has recently been updated with several new additions, including:
- R code for all examples in the book. This was already available within each chapter, but the examples have been collected into one file per chapter to save copying and pasting the various code fragments.
- Slides from a course on Predictive Analytics from the University of Sydney.
- Slides from a course on Economic Forecasting from the University of Hawaii.
If anyone using the book has other material that could be made available, please send it to me: for example, recorded lectures, slides, additional examples, assignments, exam questions, or solutions.
Today I read a paper that had been submitted to the IJF which included the following figure, along with several similar plots.
[Figure from the submitted paper: GDP by world region, drawn with cross-hatched fill patterns.]
I haven’t seen anything this bad for a long time. In fact, I think I would find it very difficult to reproduce using R, or even Excel (which is particularly adept at bad graphics).
A few years ago I produced “Twenty rules for good graphics”. I think I need to add a couple of additional rules:
- Represent time changes using lines.
- Never use fill patterns such as cross-hatching.
(My original rule #20 said “Avoid pie charts”.)
It would have been relatively simple to show these data as six lines on a plot of GDP against time. That would have made it obvious that European GDP was shrinking and the GDP of Asia/Oceania was increasing, while other regions of the world were fairly stable. At least I think that is what is happening, but it is very hard to tell from such graphical obfuscation.
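As a hedged sketch of the alternative (with made-up numbers, since I cannot recover the values from the original figure), a few lines of base R produce a plot that makes such trends immediately visible:
# Illustrative GDP series only; the trends mimic those described above
# (Europe shrinking, Asia/Oceania growing, other regions flat).
years <- 2000:2010
gdp <- cbind(Europe = 100 - 1.5*(0:10),
             "Asia/Oceania" = 100 + 2*(0:10),
             Other = rep(100, 11))
matplot(years, gdp, type = "l", lty = 1, col = 1:3,
        xlab = "Year", ylab = "GDP")
legend("topleft", legend = colnames(gdp), lty = 1, col = 1:3)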