Big Data for Official Statistics Competition

This is a new competition being organized by Eurostat. The first phase involves nowcasting economic indicators at the national and European level, including unemployment, HICP, tourism and retail trade, and some of their variants.

The main goal of the competition is to discover promising methodologies and data sources that could, now or in the future, be used to improve the production of official statistics in the European Statistical System.

The organizers seem to have been encouraged by the success of Kaggle and other data science competition platforms. Unfortunately, they have chosen not to offer any prizes other than an invitation to give a conference presentation or poster, which hardly seems likely to attract many good participants.

The deadline for registration is 10 January 2016. The duration of the competition is roughly a year (including about a month for evaluation).

See the call for participation for more information.

Upcoming talks in California

I’m back in California for the next couple of weeks, and will give the following talk at Stanford and UC Davis.

Optimal forecast reconciliation for big time series data

Time series can often be naturally disaggregated in a hierarchical or grouped structure. For example, a manufacturing company can disaggregate total demand for their products by country of sale, retail outlet, product type, package size, and so on. As a result, there can be millions of individual time series to forecast at the most disaggregated level, plus additional series to forecast at higher levels of aggregation.

A common constraint is that the disaggregated forecasts need to add up to the forecasts of the aggregated data; adjusting the forecasts so that this constraint holds is known as forecast reconciliation. I will show that the optimal reconciliation method involves fitting an ill-conditioned linear regression model where the design matrix has one column for each of the series at the most disaggregated level. For problems involving huge numbers of series, the model is impossible to estimate using standard regression algorithms. I will also discuss some fast algorithms for implementing this model that make it practicable in business contexts.
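
To make the regression formulation concrete, here is a minimal sketch in R for a toy hierarchy with two bottom-level series (Total = A + B). The summing matrix, the base forecasts and all numbers are made up for illustration, and this shows only the ordinary least squares version of the reconciliation regression, not the fast algorithms discussed in the talk.

```r
# Minimal OLS reconciliation sketch (illustrative only) for the hierarchy
# Total = A + B. The summing matrix S maps the bottom-level series (A, B)
# to every series in the hierarchy (Total, A, B).
S <- rbind(
  c(1, 1),  # Total
  c(1, 0),  # A
  c(0, 1)   # B
)

# Hypothetical base forecasts for Total, A and B at a single horizon.
# They are incoherent: 60 + 40 != 105.
yhat <- c(105, 60, 40)

# Regress the base forecasts on S (one column per bottom-level series),
# then aggregate the fitted bottom-level values back up the hierarchy.
beta   <- solve(t(S) %*% S, t(S) %*% yhat)  # estimated bottom-level forecasts
ytilde <- S %*% beta                        # reconciled forecasts
ytilde  # now coherent: the first element equals the sum of the other two
```

With millions of bottom-level series, S has millions of columns and the cross-product matrix cannot be formed and solved this way, which is where the fast algorithms come in. (The hts package for R implements reconciliation of this kind.)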

Stanford: 4:30pm, Tuesday 6th October.
UC Davis: 4:10pm, Thursday 8th October.

International Symposium on Forecasting: Spain 2016

June 19-22, 2016
Santander, Spain – Palace of La Magdalena

The International Symposium on Forecasting (ISF) is the premier forecasting conference, attracting the world’s leading forecasting researchers, practitioners, and students. Through a combination of keynote speaker presentations, academic sessions, workshops, and social programs, the ISF provides many excellent opportunities for networking, learning, and fun.

Keynote speakers:
Greg Allenby, The Ohio State University, USA
Todd Clark, Federal Reserve Bank of Cleveland, USA
José Duato, Polytechnic University of Valencia, Spain
Robert Fildes, Lancaster University, United Kingdom
Edward Leamer, UCLA Anderson, USA
Henrik Madsen, Technical University of Denmark
Adrian Raftery, University of Washington, USA

Important Dates

Invited Session Proposals: January 31, 2016
Abstract Submissions: March 16, 2016
Early Registration Ends: May 15, 2016

More information is available on the conference website.

Murphy diagrams in R

At the recent International Symposium on Forecasting, held in Riverside, California, Tilmann Gneiting gave a great talk on “Evaluating forecasts: why proper scoring rules and consistent scoring functions matter”. It will be the subject of an IJF invited paper in due course.

One of the things he talked about was the “Murphy diagram” for comparing forecasts, as proposed in Ehm et al. (2015). Here’s how it works for comparing mean forecasts.
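
The rest of the post is not reproduced here, so the following is only a minimal sketch of the idea rather than the code from the original post. It uses the elementary scoring function for the mean from Ehm et al. (2015): at threshold theta, a forecast x of an outcome y scores |y − theta| when theta lies between x and y, and zero otherwise (up to a constant factor). The simulated outcomes and the two competing forecasts are made up for illustration.

```r
# Minimal Murphy diagram sketch (illustrative; not the code from the post).
# Elementary score for the mean at threshold theta: |y - theta| when theta
# lies between the forecast x and the outcome y, and 0 otherwise.
elementary_score <- function(x, y, theta) {
  (pmin(x, y) <= theta & theta < pmax(x, y)) * abs(y - theta)
}

# Made-up data: two competing point forecasts of the same outcomes.
set.seed(1)
y  <- rnorm(500)                 # observed outcomes
f1 <- y + rnorm(500, sd = 0.5)   # forecaster 1
f2 <- y + rnorm(500, sd = 1.0)   # forecaster 2 (noisier)

# Average elementary score over a grid of thresholds.
theta <- seq(min(y, f1, f2), max(y, f1, f2), length.out = 200)
s1 <- sapply(theta, function(th) mean(elementary_score(f1, y, th)))
s2 <- sapply(theta, function(th) mean(elementary_score(f2, y, th)))

# The Murphy diagram plots average score against threshold; a forecast whose
# curve is lower for every theta is preferred under all consistent scoring
# functions for the mean.
plot(theta, s1, type = "l", col = "blue", ylim = range(s1, s2),
     xlab = expression(theta), ylab = "Mean elementary score")
lines(theta, s2, col = "red")
legend("topright", legend = c("Forecast 1", "Forecast 2"),
       col = c("blue", "red"), lty = 1)
```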