In just over three weeks, the inaugural MeDaScIn event will take place. This is an initiative to grow the talent pool of local data scientists and to promote Melbourne as a world city of excellence in Data Science.
The main event takes place on Friday 6th May, with lots of interesting-sounding titles and speakers from business and government. I’m the only academic speaker on the program, giving the closing talk on “Automatic FoRecasting”. Earlier in the day I am running a forecasting workshop, where I will discuss forecasting issues and answer questions for about 90 minutes. There are still a few places left for the main event and for the workshops, so book soon if you want to attend.
I gave a seminar at Stanford today. Slides are below. It was definitely the most intimidating audience I’ve faced, with Jerome Friedman, Trevor Hastie, Brad Efron, Persi Diaconis, Susan Holmes, David Donoho and John Chambers all present (and probably other famous names I’ve missed).
I’m back in California for the next couple of weeks, and will give the following talk at Stanford and UC-Davis.
Optimal forecast reconciliation for big time series data
Time series can often be naturally disaggregated in a hierarchical or grouped structure. For example, a manufacturing company can disaggregate total demand for their products by country of sale, retail outlet, product type, package size, and so on. As a result, there can be millions of individual time series to forecast at the most disaggregated level, plus additional series to forecast at higher levels of aggregation.
A common requirement is that forecasts of the disaggregated series add up to the forecasts of the aggregated series; adjusting forecasts to satisfy this constraint is known as forecast reconciliation. I will show that the optimal reconciliation method involves fitting an ill-conditioned linear regression model in which the design matrix has one column for each series at the most disaggregated level. For problems involving huge numbers of series, the model is infeasible to estimate using standard regression algorithms. I will also discuss some fast algorithms that make the approach practical to implement in business contexts.
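The regression idea above can be sketched in its simplest form: the base forecasts are stacked into a vector and regressed on a summing matrix that encodes the aggregation structure, so the fitted values are coherent (they add up). This is only an illustrative OLS sketch on a toy two-level hierarchy (Total = A + B), not the fast algorithms discussed in the talk; the hierarchy, the numbers, and all variable names are assumptions for illustration.

```python
import numpy as np

# Toy hierarchy: Total = A + B.
# The summing matrix S maps the bottom-level series (A, B)
# to every series in the hierarchy (Total, A, B).
S = np.array([
    [1.0, 1.0],  # Total
    [1.0, 0.0],  # A
    [0.0, 1.0],  # B
])

# Incoherent base forecasts for (Total, A, B): 45 + 50 != 100.
y_hat = np.array([100.0, 45.0, 50.0])

# OLS reconciliation: solve min ||S b - y_hat||, i.e.
# b = (S'S)^{-1} S' y_hat gives reconciled bottom-level forecasts,
# and S @ b gives reconciled forecasts for every series.
b, *_ = np.linalg.lstsq(S, y_hat, rcond=None)
y_tilde = S @ b

# The reconciled forecasts are coherent: Total = A + B.
print(np.isclose(y_tilde[0], y_tilde[1] + y_tilde[2]))  # True
```

With millions of bottom-level series, S has millions of columns and S'S becomes huge and ill-conditioned, which is exactly why naive least squares breaks down and sparse, structure-exploiting algorithms are needed.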
I am teaching part of a short-course on Data Science for Managers from 10-12 October in Melbourne.
The impact of Data Science on modern business is second only to the introduction of computers. And yet, for many businesses the barrier to entry remains too high due to a lack of know-how, organisational inertia, difficulties in hiring the right people, an apparent need for upfront commitment, and more.
This course is designed to address these barriers. It gives you the knowledge and skills needed to build and manage Data Science functions within your organisation, takes the anxiety out of the Big Data revolution, and demonstrates how data-driven decision-making can be integrated into an organisation to harness existing advantages and create new opportunities.
Assuming minimal prior knowledge, the course covers the key aspects at a fundamental level: data wrangling, modelling and analysis; predictive, descriptive and prescriptive analytics; data management and curation; standards for data storage and analysis; the use of structured, semi-structured and unstructured data as well as open public data; and the data-analytic value chain.
The latter method will work for anyone with a Google Scholar page. The Google Scholar option only includes research papers. The first two methods also include any new seminars I give or new software packages I write.
My talk is on “Exploring the boundaries of predictability: what can we forecast, and when should we give up?” Essentially I will start with some of the ideas in this post, and then discuss the features of hard-to-forecast time series.
So if you’re in the San Francisco Bay area, please come along. Otherwise, it will be streamed live on the Yahoo Labs website.