The latest issue of the IJF is a bumper issue with over 500 pages of forecasting insights.
The GEFCom2014 papers are included in a special section on probabilistic energy forecasting, guest edited by Tao Hong and Pierre Pinson. This is a major milestone in energy forecasting research, with its focus on probabilistic forecasting and on forecast evaluation using a quantile scoring method. Only a few years ago I was having to explain to energy professionals why you couldn’t use a MAPE to evaluate a percentile forecast. The special section includes a tutorial review on probabilistic electric load forecasting by Tao Hong and Shu Fan, which should help everyone get up to speed with current forecasting approaches, evaluation methods and common misunderstandings. It also contains a large number of very high-quality articles showing how to do state-of-the-art density forecasting for electricity load, electricity prices, solar power and wind power. Moreover, there are benchmarks on publicly available data sets, so future researchers can easily compare their methods against these published results.
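For readers unfamiliar with quantile scoring, the pinball (quantile) loss used in GEFCom2014 penalises a quantile forecast asymmetrically, which is what makes it a sensible score for percentile forecasts where the MAPE is not. A minimal sketch in R (the function name `pinball` is my own):

```r
# Pinball (quantile) loss for a quantile forecast q at probability
# level tau, given the realised outcome y. Averaging this loss over
# many forecasts gives the quantile score used in GEFCom2014.
pinball <- function(y, q, tau) {
  ifelse(y >= q, tau * (y - q), (1 - tau) * (q - y))
}

# A 90th-percentile forecast of 100 when the outcome turns out to be 95
pinball(y = 95, q = 100, tau = 0.9)  # 0.5
```

Note the asymmetry: undershooting a high quantile (tau = 0.9) is penalised nine times as heavily as overshooting it, so the expected loss is minimised by reporting the true quantile.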
In addition to the special section on probabilistic energy forecasting, there is an invited review paper on call centre forecasting by Ibrahim, Ye, L’Ecuyer and Shen. This is an important area in practice, and the paper provides a helpful review of the literature, a summary of the key issues in building good models, and some possible future research directions.
There is also an invited paper from Blasques, Koopman, Łasak and Lucas on “In-sample confidence bands and out-of-sample forecast bands for time-varying parameters in observation-driven models” with some great discussion from Catherine Forbes and Pierre Perron. This was the subject of Siem Jan Koopman’s ISF talk in 2014.
Finally, there are 18 regular contributed papers, more than we normally publish in a whole issue, on topics ranging from forecasting excess stock returns to household demographics, and from food prices to evaluating forecasts of counts and intermittent demand. Check them all out on ScienceDirect.
Predictive Energy Analytics in the Big Data World
Cairns, Australia, June 22-23, 2017
This will be a great conference, and it is in a great location — Cairns, Australia, right by the Great Barrier Reef. Even better, if you stay on, you can attend the International Symposium on Forecasting, which immediately follows the International Symposium on Energy Analytics.
So block out 22-28 June 2017 on your calendars so you can enjoy a tropical paradise in one of the most beautiful parts of Australia, while attending two awesome conferences.
As mentioned in my previous post on the forecast package v7, the most visible feature was the introduction of ggplot2 graphics. This post briefly summarizes the remaining new features of forecast v7.
Version 7 of the forecast package was released on CRAN about a month ago, but I'm only just getting around to posting about the new features.
The most visible feature was the introduction of ggplot2 graphics. I first wrote the forecast package before ggplot2 existed, and so only base graphics were available. But I figured it was time to modernize and use the nice features available from ggplot2. The following examples illustrate the main new graphical functionality.
For illustration purposes, I'm using the male and female monthly deaths from lung diseases in the UK.
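The new graphics can be sketched as follows, assuming forecast v7 and ggplot2 are installed. The `lungDeaths` object is just my scaffolding for combining the two built-in series:

```r
library(forecast)
library(ggplot2)

# Combine the two built-in monthly series of UK lung-disease deaths
lungDeaths <- cbind(mdeaths, fdeaths)

# autoplot() now produces a ggplot2 version of the standard time plot
autoplot(lungDeaths)

# Forecast objects have an autoplot method too
fc <- forecast(ets(mdeaths))
autoplot(fc)
```

Because these functions return ggplot objects, the resulting plots can be customised further with the usual ggplot2 layers, scales and themes.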
In just over three weeks, the inaugural MeDaScIn event will take place. This is an initiative to grow the talent pool of local data scientists and to promote Melbourne as a world city of excellence in Data Science.
The main event takes place on Friday 6th May, with lots of interesting-sounding titles and speakers from business and government. I’m the only academic speaker on the program, giving the closing talk on “Automatic FoRecasting”. Earlier in the day I am running a forecasting workshop, where I will discuss forecasting issues and answer questions for about 90 minutes. There are still a few places left for the main event and for the workshops. Book soon if you want to attend.
All the details are here.
I often see figures with two sets of prediction intervals plotted on the same graph using different line types to distinguish them. The results are almost always unreadable. A better way to do this is to use semi-transparent shaded regions. Here is an example showing two sets of forecasts for the Nile River flow.
library(forecast)
f1 <- forecast(auto.arima(Nile, lambda=0), h=20, level=95)
f2 <- forecast(ets(Nile), h=20, level=95)
plot(f1, shadecols=rgb(0,0,1,.4), flwd=1,
     main="Forecasts of Nile River flow",
     xlab="Year", ylab="Billions of cubic metres")
polygon(c(time(f2$mean), rev(time(f2$mean))),
        c(f2$lower, rev(f2$upper)),
        col=rgb(1,0,0,.4), border=FALSE)
lines(f2$mean, col=rgb(.7,0,0))
The blue region shows 95% prediction intervals for the ARIMA forecasts, while the red region shows 95% prediction intervals for the ETS forecasts. Where they overlap, the colours blend to make purple. In this case, the point forecasts are quite close, but the prediction intervals are rather different.
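The same idea carries over to ggplot2 graphics: draw one forecast with autoplot() and overlay the second set of intervals as a semi-transparent ribbon. A sketch along those lines (the data frame `df2` is my own scaffolding for flattening the second forecast object):

```r
library(forecast)
library(ggplot2)

f1 <- forecast(auto.arima(Nile, lambda = 0), h = 20, level = 95)
f2 <- forecast(ets(Nile), h = 20, level = 95)

# Flatten the second forecast into a data frame for ggplot2
df2 <- data.frame(t  = as.numeric(time(f2$mean)),
                  lo = as.numeric(f2$lower),
                  hi = as.numeric(f2$upper),
                  fc = as.numeric(f2$mean))

autoplot(f1) +
  geom_ribbon(data = df2, aes(x = t, ymin = lo, ymax = hi),
              fill = "red", alpha = 0.4, inherit.aes = FALSE) +
  geom_line(data = df2, aes(x = t, y = fc),
            colour = "darkred", inherit.aes = FALSE)
```

The alpha value controls the transparency, so the overlapping region again blends to purple without either set of intervals obscuring the other.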
The GEFCom competitions have been a great success in generating good research on forecasting methods for electricity demand, and in enabling a comprehensive comparative evaluation of various methods. But they have only considered price forecasting in a simplified setting. So I’m happy to see this challenge is being taken up as part of the European Energy Market Conference for 2016, to be held from 6-9 June at the University of Porto in Portugal.
The GitHub page for the forecast package currently shows the following information:
Note the downloads figure: 264K/month. I know the package is popular, but that seems crazy. Also, the downloads figure on GitHub only counts the downloads from the RStudio mirror, and ignores downloads from the other 125 mirrors around the world.
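The RStudio mirror's logs can be queried directly, for instance with the cranlogs package. A quick sketch (remembering that, as noted above, these counts cover that one mirror only):

```r
library(cranlogs)

# Daily downloads of the forecast package from the RStudio CRAN mirror
# over the last month
dl <- cran_downloads(packages = "forecast", when = "last-month")

# Total downloads recorded by this one mirror over the period
sum(dl$count)
```

Scaling a number like this up to all 126 mirrors is guesswork, since the other mirrors do not publish comparable logs.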
I’ve been having discussions with colleagues and university administration about the best way for universities to manage home-grown software.
The traditional business model for software is that we build software and sell it to everyone willing to pay. Very often, that leads to a software company spin-off that has little or nothing to do with the university that nurtured the development. Think MATLAB, S-Plus, Minitab, SAS and SPSS, all of which grew out of universities or research institutions. This model has repeatedly been shown to stifle research development, channel funds away from the institutions where the software was born, and add to research costs for everyone.
I argue that the open-source model is a much better approach both for research development and for university funding. Under the open-source model, we build software, and make it available for anyone to use and adapt under an appropriate licence. This approach has many benefits that are not always appreciated by university administrators.