The Australian Macro Database

AusMacroData is a new website that encourages and facilitates the use of quantitative, publicly available Australian macroeconomic data. The Australian Macro Database provides a user-friendly front end for searching among over 40,000 economic variables, and is loosely based on similar international sites such as Federal Reserve Economic Data (FRED).

Tourism forecasting competition data as an R package

The data used in the tourism forecasting competition, discussed in Athanasopoulos et al. (2011), have been made available in the Tcomp package for R. The objects are in the same format as those in the Mcomp package, which contains the data from the M1 and M3 competitions.

Thanks to Peter Ellis for putting the package together. He has also produced a nice blog post about it.
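For anyone wanting to try the package, here is a minimal sketch of how the data can be accessed and used, assuming Tcomp and forecast are installed from CRAN (the choice of the first series is arbitrary, just for illustration):

```r
# Explore the tourism competition data and score one automatic model.
library(Tcomp)
library(forecast)

series <- tourism[[1]]       # one competition series (Mcomp-style object)
str(series, max.level = 1)   # $x = training data, $xx = test data, $h = horizon

# Fit a model to the training data and evaluate on the withheld test data
fit <- ets(series$x)
fc <- forecast(fit, h = series$h)
accuracy(fc, series$xx)
```

Because the objects share the Mcomp format, any code written for the M1 and M3 data should work on the tourism series with little change.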

Tourism time series repository

A few years ago, I wrote a paper with George Athanasopoulos and others about a tourism forecasting competition. We originally made the data available as an online supplement to the paper, but that has unfortunately since disappeared, although the paper itself is still available.

So I am posting the data here in case anyone wants to use them for replicating our results, or for other research purposes. The data are split into monthly, quarterly and yearly series: 366 monthly, 427 quarterly and 518 yearly. Each group of series is further split into training data and test data. Further information is provided in the paper.

If you use the data in a publication, please cite the IJF paper as the source, along with a link to this blog post.

Download the zip file

The latest IJF issue with GEFCom2014 results

The latest issue of the IJF is a bumper issue with over 500 pages of forecasting insights.

The GEFCom2014 papers are included in a special section on probabilistic energy forecasting, guest edited by Tao Hong and Pierre Pinson. This is a major milestone in energy forecasting research with the focus on probabilistic forecasting and forecast evaluation done using a quantile scoring method. Only a few years ago I was having to explain to energy professionals why you couldn’t use a MAPE to evaluate a percentile forecast. With this special section, we now have a tutorial review on probabilistic electric load forecasting by Tao Hong and Shu Fan, which should help everyone get up to speed with current forecasting approaches, evaluation methods and common misunderstandings. The section also contains a large number of very high quality articles showing how to do state-of-the-art density forecasting for electricity load, electricity price, solar and wind power. Moreover, we have some benchmarks on publicly available data sets so future researchers can easily compare their methods against these published results.
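The quantile scoring method mentioned above (often called the pinball loss) is simple to write down; the sketch below is a generic illustration, not the exact GEFCom2014 implementation. For an observation y and a forecast q of the quantile at probability tau, the loss is tau(y − q) when y ≥ q, and (1 − tau)(q − y) otherwise:

```r
# Pinball (quantile) loss for evaluating a percentile forecast.
pinball <- function(y, q, tau) {
  ifelse(y >= q, tau * (y - q), (1 - tau) * (q - y))
}

# Example: score a 90th percentile forecast of 100 against an outcome of 95.
pinball(y = 95, q = 100, tau = 0.9)
#> 0.5
```

Averaging this loss over all target quantiles and time periods gives an overall quantile score. Unlike the MAPE, it is minimised in expectation by the true quantile, which is why a MAPE cannot sensibly evaluate a percentile forecast.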

In addition to the special section on probabilistic energy forecasting, there is an invited review paper on call centre forecasting by Ibrahim, Ye, L’Ecuyer and Shen. This is an important area in practice, and this paper provides a helpful review of the literature, a summary of the key issues in building good models, and suggests some possible future research directions.

There is also an invited paper from Blasques, Koopman, Łasak and Lucas on “In-sample confidence bands and out-of-sample forecast bands for time-varying parameters in observation-driven models” with some great discussion from Catherine Forbes and Pierre Perron. This was the subject of Siem Jan Koopman’s ISF talk in 2014.

Finally, there are 18 regular contributed papers, more than we normally publish in a whole issue, on topics ranging from forecasting excess stock returns to household demographics, and from forecasting food prices to evaluating forecasts of counts and intermittent demand. Check them all out on ScienceDirect.

rOpenSci unconference in Brisbane, 21-22 April 2016

The first rOpenSci unconference in Australia will be held on Thursday and Friday (April 21-22) in Brisbane, at the Microsoft Innovation Centre.

This event will bring together researchers, developers, data scientists and open data enthusiasts from industry, government and universities. The aim is to conceptualise and develop R-based tools that address current challenges in data science, open science and reproducibility.

Past examples of the projects can be seen here, here, and here. Also here.

You can view more details, see who else is attending, and most importantly, apply to attend at the website.

Reproducibility in computational research

Jane Frazier spoke at our research team meeting today on “Reproducibility in computational research”. We had a very stimulating and lively discussion about the issues involved. One interesting idea was that reproducibility is on a scale, and we can all aim to move further along the scale towards making our own research more reproducible. For example:

  • Can you reproduce your results tomorrow on the same computer with the same software installed?
  • Could someone else on a different computer reproduce your results with the same software installed?
  • Could you reproduce your results in 3 years’ time, after some of your software environment may have changed?
  • etc.

Think about what changes you need to make to move one step further along the reproducibility continuum, and do it.
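As one concrete first step along that continuum, a script can fix its random seed and record the software environment it was run in. This is only a sketch of the idea, not Jane’s recommended workflow:

```r
# Fix the seed so the results are reproducible tomorrow on this machine.
set.seed(20160421)
result <- mean(rnorm(100))

# Record the R version and loaded package versions alongside the results,
# so the environment can be reconstructed after it changes.
writeLines(capture.output(sessionInfo()), "sessionInfo.txt")
```

Later steps might involve pinning package versions or containerising the whole environment, but even this much makes a re-run in three years far more feasible.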

Jane’s slides and handout are below.

R vs Autobox vs ForecastPro vs …

Every now and then, a commercial software vendor claims on social media that their software is much better than the forecast package for R, without providing any details.

There are lots of reasons why you might select a particular software solution, and R isn’t for everyone. But anyone claiming superiority should at least provide some evidence, rather than make unsubstantiated claims.
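Evidence of the kind being asked for is not hard to produce. Here is a sketch (assuming the forecast and Mcomp packages) that scores an automatically selected model on a few published M3 competition series, using the official training/test split — a number any vendor could compare against on the same data:

```r
# A reproducible benchmark: test-set MASE of automatic ETS models
# on the first five M3 competition series.
library(Mcomp)
library(forecast)

mase <- sapply(M3[1:5], function(series) {
  fc <- forecast(ets(series$x), h = series$h)
  accuracy(fc, series$xx)["Test set", "MASE"]
})
round(mean(mase), 2)
```

Extending the loop to all 3003 M3 series gives a published, replicable benchmark; claims of superiority should come with numbers like these.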