I’ve been having discussions with colleagues and university administration about the best way for universities to manage home-grown software.
The traditional business model for software is that we build software and sell it to everyone willing to pay. Very often, that leads to a software company spin-off that has little or nothing to do with the university that nurtured the development. Think of MATLAB, S-Plus, Minitab, SAS and SPSS, all of which grew out of universities or research institutions. In my view, this model stifles research development, channels funds away from the institutions where the software was born, and adds to research costs for everyone.
I argue that the open-source model is a much better approach both for research development and for university funding. Under the open-source model, we build software, and make it available for anyone to use and adapt under an appropriate licence. This approach has many benefits that are not always appreciated by university administrators.
The Overseas Development Institute Fellowship Scheme sends young postgraduate statisticians (and economists) to work in the public sectors of developing countries in Africa, the Caribbean and the Pacific on two-year contracts. This is a great way to develop skills and gain experience working within a developing country’s government. And you get to live in a fascinating place!
The application process for the 2016-2018 Fellowship Scheme is now open; applications should be submitted by 17 December 2015. The minimum requirements are:
- degree in statistics, economics, or a related field
- postgraduate degree qualification
- ability to commit to a two-year assignment
Application is via the online application form.
Read some first-hand experiences of current and former Fellows.
This is a new competition organized by Eurostat. The first phase involves nowcasting economic indicators at the national and European levels, including unemployment, the HICP, tourism and retail trade, and some of their variants.
The main goal of the competition is to discover promising methodologies and data sources that could, now or in the future, be used to improve the production of official statistics in the European Statistical System.
The organizers seem to have been encouraged by the success of Kaggle and other data science competition platforms. Unfortunately, they have chosen not to offer any prizes beyond an invitation to give a conference presentation or poster, which seems unlikely to attract many strong participants.
The deadline for registration is 10 January 2016. The duration of the competition is roughly a year (including about a month for evaluation).
See the call for participation for more information.
We have two new continuing positions being advertised in our department: one at lecturer level and one at senior lecturer level. Details are on the Monash website. (For those in North America, a lecturer is equivalent to your assistant professor, and a senior lecturer is equivalent to your associate professor. See the Wikipedia article on Australian academic ranks for more information.)
Although the title says “Lecturer/Senior Lecturer (Econometrics)”, we are interested in candidates from a wider range of areas, including statistics and machine learning. I’d particularly like to see strong candidates in computational statistics and machine learning, to add to our growing strength in this area.
Applications close on 20 January 2016. Please direct enquiries to Professor Farshid Vahid.
I prepared the following notes for a consulting client, and I thought they might be of interest to some other people too.
Let $y_t$ denote the value of the time series at time $t$, and suppose we wish to fit a trend model with correlated errors of the form
$$y_t = f(t) + n_t,$$
where $f(t)$ represents the possibly nonlinear trend and $n_t$ is an autocorrelated error process.
It has been a while since I last updated the CRAN version of the forecast package, so I uploaded the latest version (6.2) today. The github version remains the most up-to-date version and is already two commits ahead of the CRAN version.
This update is mostly bug fixes and additional error traps. The full ChangeLog is listed below.
I gave a seminar at Stanford today. Slides are below. It was definitely the most intimidating audience I’ve faced, with Jerome Friedman, Trevor Hastie, Brad Efron, Persi Diaconis, Susan Holmes, David Donoho and John Chambers all present (and probably other famous names I’ve missed).
I’ll be giving essentially the same talk at UC Davis on Thursday.
Jane Frazier spoke at our research team meeting today on “Reproducibility in computational research”. We had a very stimulating and lively discussion about the issues involved. One interesting idea was that reproducibility lies on a continuum, and we can all aim to move further along it by making our own research more reproducible. For example:
- Can you reproduce your results tomorrow on the same computer with the same software installed?
- Could someone else on a different computer reproduce your results with the same software installed?
- Could you reproduce your results in three years’ time, after parts of your software environment have changed?
Think about what changes you need to make to move one step further along the reproducibility continuum, and do it.
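One concrete first step along that continuum is to record the software environment alongside the results, so a future re-run can at least detect what has changed (a minimal sketch in Python; R users could save the output of sessionInfo() instead, and the file name here is just an illustration):

```python
# Write a small environment record next to the results of an analysis.
# This is a sketch only; real projects might pin full package versions
# (e.g. with "pip freeze" or renv/packrat in R).
import sys
import platform

with open("environment.txt", "w") as f:
    f.write(f"python {sys.version.split()[0]}\n")
    f.write(f"platform {platform.platform()}\n")

print(open("environment.txt").read())
```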
Jane’s slides and handout are below.
I will be speaking at the Chinese R Conference in Nanchang, to be held on 24-25 October, on “Forecasting Big Time Series Data using R”.
Details (for those who can read Chinese) are at china-r.org.
I’m back in California for the next couple of weeks, and will give the following talk at Stanford and UC Davis.
Optimal forecast reconciliation for big time series data
Time series can often be naturally disaggregated in a hierarchical or grouped structure. For example, a manufacturing company can disaggregate total demand for their products by country of sale, retail outlet, product type, package size, and so on. As a result, there can be millions of individual time series to forecast at the most disaggregated level, plus additional series to forecast at higher levels of aggregation.
A common constraint is that the disaggregated forecasts must add up to the forecasts of the aggregated data; imposing this constraint is known as forecast reconciliation. I will show that the optimal reconciliation method involves fitting an ill-conditioned linear regression model in which the design matrix has one column for each series at the most disaggregated level. For problems involving a huge number of series, the model is impossible to estimate using standard regression algorithms. I will also discuss some fast algorithms that make the method practical to apply in business contexts.
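The regression at the heart of this can be illustrated on a toy hierarchy (a sketch of plain OLS reconciliation only, not the fast algorithms the talk covers; all the numbers are invented):

```python
import numpy as np

# Tiny hierarchy: Total = A + B.
# The summing matrix S maps the bottom-level series to all series,
# so it has one column per most-disaggregated series.
S = np.array([[1, 1],
              [1, 0],
              [0, 1]], dtype=float)

# Incoherent base forecasts for [Total, A, B] (made-up numbers).
y_hat = np.array([100.0, 45.0, 50.0])   # note 45 + 50 != 100

# OLS reconciliation: regress the base forecasts on S, then aggregate.
beta = np.linalg.solve(S.T @ S, S.T @ y_hat)  # bottom-level estimates
y_tilde = S @ beta                            # reconciled forecasts

print(np.round(y_tilde, 2))  # reconciled Total now equals A + B
```

With millions of series, S has millions of columns and this normal-equations approach breaks down, which is exactly why the fast algorithms in the talk are needed.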
Stanford: 4:30pm, Tuesday 6th October.
UC Davis: 4:10pm, Thursday 8th October.