Shu Fan and I have developed a model for electricity demand forecasting that is now widely used in Australia for long-term forecasting of peak electricity demand. It has become known as the “Monash Electricity Forecasting Model”. We have decided to release an R package that implements our model so that other people can easily use it. The package is called “MEFM” and is available on github. We will probably also put it on CRAN eventually.
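For anyone who wants to try it, the package can be installed directly from github. A minimal sketch using devtools (the repository name below is an assumption; check github for the exact location):

library(devtools)
# Install MEFM directly from github (repository name assumed)
install_github("robjhyndman/MEFM-package")
library(MEFM)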
The model was first described in Hyndman and Fan (2010). We are continually improving it, and the latest version is described in the model documentation, which will be updated from time to time.
The package is being released under a GPL licence, so anyone can use it. All we ask is that our work is properly cited.
Naturally, we are not able to provide free technical support, although we welcome bug reports. We are available to undertake paid consulting work in electricity forecasting.
Amongst today’s email was one from someone running a private competition to classify time series. Here are the essential details.
The data are measurements from a medical diagnostic machine which takes one measurement every second, and after 32–1000 seconds, the time series must be classified into one of two classes. Some pre-classified training data are provided. It is not necessary to classify all the test data, but you do need to have relatively high accuracy on what is classified. So you could find a subset of more easily classifiable test time series, and leave the rest of the test data unclassified.
The Human Mortality Database is a wonderful resource for anyone interested in demographic data. It is a carefully curated collection of high-quality death and population data from 37 countries, all in a consistent format with consistent definitions. I have used it many times and never cease to be amazed at the care taken to maintain such a great resource.
The data are continually being revised and updated. Today the Australian data have been updated to 2011. There is a time lag because death registrations are often delayed, which results in undercounts, so only data that are likely to be complete are included.
Tim Riffe from the HMD has provided the following information about the update:
- All death counts since 1964 are now included by year of occurrence, up to 2011. We have 2012 data but do not publish them because they are likely a 5% undercount due to lagged registration.
- Death count inputs for 1921 to 1963 are now in single ages. Previously they were in 5-year age groups. Rather than having an open age group of 85+ in this period, counts usually go up to the maximum observed (stated) age. This change (i) introduces minor heaping in early years and (ii) implies different apparent old-age mortality than before, since previously anything above 85 was modeled according to the Methods Protocol.
- Population denominators have been swapped out for years 1992 to the present, owing to new ABS methodology and intercensal estimates for the recent period.
Some of the data can be read into R using the hmd.e0 and related hmd.* functions from the demography package. Tim has his own package on github that provides a more extensive interface.
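For example, a minimal sketch of downloading and plotting the Australian mortality rates with the package’s hmd.mx function (an HMD account is required; the username and password below are placeholders):

library(demography)
# Download Australian mortality rates from the Human Mortality Database
# (replace the placeholders with your own HMD credentials)
aus <- hmd.mx("AUS", username = "your@email.com", password = "yourpassword", label = "Australia")
# Plot the mortality rates for the total population
plot(aus, series = "total")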
This week my research group discussed Adrian Raftery’s recent paper on “Use and Communication of Probabilistic Forecasts”, which provides a fascinating but brief survey of some of his work on modelling and communicating uncertain futures. Coincidentally, today I was also sent a copy of David Spiegelhalter’s paper on “Visualizing Uncertainty About the Future”. Both are well worth reading.
It made me think about my own efforts to communicate future uncertainty through graphics. Of course, for time series forecasts I normally show prediction intervals. I prefer to use more than one interval at a time because it helps convey a little more information. The default in the forecast package for R is to show both an 80% and a 95% interval like this:
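A minimal sketch that reproduces this kind of plot, using a standard example series (USAccDeaths) for illustration:

library(forecast)
# ETS forecasts with 80% and 95% prediction intervals (the package defaults)
fit <- ets(USAccDeaths)
fc <- forecast(fit, h = 24, level = c(80, 95))
plot(fc)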
I get questions about this almost every week. Here is an example from a recent comment on this blog:
I have two large time series data sets. One is at one-second intervals and the other at one-minute intervals. The length of each time series is 180 days. I’m using R (3.1.1) for forecasting the data. I’d like to know the value of the “frequency” argument in the ts() function in R for each data set. Since most of the examples and cases I’ve seen so far use months, or days at the most, it is quite confusing for me when dealing with equally spaced seconds or minutes. According to my understanding, the “frequency” argument is the number of observations per season. So what is the “season” in the case of seconds/minutes? My guess is that since there are 86,400 seconds and 1,440 minutes in a day, these should be the values for the “freq” argument. Is that correct?
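In short, the frequency should match the seasonal period you want the model to capture, so a daily cycle gives 86,400 for second-level data and 1,440 for minute-level data. A minimal sketch for the minute-level case, using simulated data as a placeholder; when more than one seasonal period matters (daily and weekly, say), msts() from the forecast package is more convenient than ts():

library(forecast)
# Minute-level data with a daily seasonal period (1,440 minutes per day)
x_daily <- ts(rnorm(1440 * 14), frequency = 1440)
# Minute-level data with both daily and weekly seasonality (10,080 minutes per week)
x_multi <- msts(rnorm(1440 * 14), seasonal.periods = c(1440, 10080))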
I do not normally post job adverts, but this was very specifically targeted to “applied time series candidates” so I thought it might be of sufficient interest to readers of this blog.
Almost all prediction intervals from time series models are too narrow. This is a well-known phenomenon and arises because they do not account for all sources of uncertainty. In my 2002 IJF paper, we measured the size of the problem by computing the actual coverage percentage of the prediction intervals on hold-out samples. We found that for ETS models, nominal 95% intervals may only provide coverage between 71% and 87%. The difference is due to missing sources of uncertainty.
There are at least four sources of uncertainty in forecasting using time series models:
- The random error term;
- The parameter estimates;
- The choice of model for the historical data;
- The continuation of the historical data generating process into the future.
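As a rough illustration (not the procedure used in the paper, which averaged over many series), the coverage of a nominal interval can be checked by holding out the end of a series and counting how often the hold-out observations fall inside the interval:

library(forecast)
# Hold out the last 12 observations of a monthly series
h <- 12
n <- length(AirPassengers)
train <- window(AirPassengers, end = time(AirPassengers)[n - h])
test <- window(AirPassengers, start = time(AirPassengers)[n - h + 1])
# Nominal 95% interval from an ETS model, and its empirical coverage
fc <- forecast(ets(train), h = h, level = 95)
mean(test >= fc$lower & test <= fc$upper)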
The hts package for R allows for forecasting hierarchical and grouped time series data. The idea is to generate forecasts for all series at all levels of aggregation without imposing the aggregation constraints, and then to reconcile the forecasts so they satisfy the aggregation constraints. (An introduction to reconciling hierarchical and grouped time series is available in this Foresight paper.)
The base forecasts can be generated using any method, with ETS models and ARIMA models provided as options in the forecast.gts() function. As ETS models do not allow for regressors, you will need to choose ARIMA models if you want to include regressors.
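As a minimal sketch, here is how reconciled forecasts can be produced for the small example hierarchy shipped with the package (htseg1), using ARIMA models for the base forecasts and the default optimal-combination reconciliation:

library(hts)
# Reconciled forecasts for the example hierarchy, with ARIMA base forecasts
fc <- forecast(htseg1, h = 8, method = "comb", fmethod = "arima")
plot(fc)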
I’ve received a few emails about including regression variables (i.e., covariates) in TBATS models. As TBATS models are related to ETS models, tbats() is unlikely to ever include covariates, as explained here. It won’t actually complain if you include an xreg argument, but it will ignore it.
When I want to include covariates in a time series model, I tend to use auto.arima() with covariates included via the xreg argument. If the time series has multiple seasonal periods, I use Fourier terms as additional covariates. See my post on forecasting daily data for some discussion of this model. Note that fourierf() now handles msts objects, so it is very simple to do this.
For example, if holiday contains some dummy variables associated with public holidays and holidayf contains the corresponding variables for the first 100 forecast periods, then the following code can be used:
# Weekly and annual seasonality for daily data
y <- msts(x, seasonal.periods=c(7,365.25))
# Fourier terms for the historical data and for the 100 forecast periods
z <- fourier(y, K=c(5,5))
zf <- fourierf(y, K=c(5,5), h=100)
# Seasonality is captured by the Fourier terms, so seasonal=FALSE
fit <- auto.arima(y, xreg=cbind(z,holiday), seasonal=FALSE)
fc <- forecast(fit, xreg=cbind(zf,holidayf), h=100)
The main disadvantage of the ARIMA approach is that the seasonality is forced to be periodic, whereas a TBATS model allows for dynamic seasonality.