# Melbourne Data Science Initiative 2016

In just over three weeks, the inaugural MeDaScIn event will take place. This is an initiative to grow the talent pool of local data scientists and to promote Melbourne as a world city of excellence in Data Science.

The main event takes place on Friday 6th May, with lots of interesting-sounding titles and speakers from business and government. I’m the only academic speaker on the program, giving the closing talk on “Automatic FoRecasting”. Earlier in the day I am running a forecasting workshop, where I will discuss forecasting issues and answer questions for about 90 minutes. There are still a few places left for the main event and for the workshops. Book soon if you want to attend.

All the details are here.

# Plotting overlapping prediction intervals

I often see figures with two sets of prediction intervals plotted on the same graph using different line types to distinguish them. The results are almost always unreadable. A better way to do this is to use semi-transparent shaded regions. Here is an example showing two sets of forecasts for the Nile River flow.

```r
library(forecast)

# Two sets of forecasts of the same series, each with 95% prediction intervals
f1 <- forecast(auto.arima(Nile, lambda = 0), h = 20, level = 95)
f2 <- forecast(ets(Nile), h = 20, level = 95)

# Plot the ARIMA forecasts with a semi-transparent blue interval
plot(f1, shadecol = rgb(0, 0, 1, 0.4), flwd = 1,
     main = "Forecasts of Nile River flow",
     xlab = "Year", ylab = "Billions of cubic metres")

# Overlay the ETS interval as a semi-transparent red polygon
polygon(c(time(f2$mean), rev(time(f2$mean))),
        c(f2$lower, rev(f2$upper)),
        col = rgb(1, 0, 0, 0.4), border = FALSE)
lines(f2$mean, col = rgb(0.7, 0, 0))

legend("bottomleft",
       fill = c(rgb(0, 0, 1, 0.4), rgb(1, 0, 0, 0.4)),
       col = c("blue", "red"), lty = 1,
       legend = c("ARIMA", "ETS"))
```

The blue region shows 95% prediction intervals for the ARIMA forecasts, while the red region shows 95% prediction intervals for the ETS forecasts. Where they overlap, the colors blend to make purple. In this case, the point forecasts are quite close, but the prediction intervals are noticeably different.
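The same effect can be obtained with ggplot2. Here is a minimal sketch, reusing the `f1` and `f2` objects from above; the `interval_df` helper is a hypothetical convenience function for this example, not part of the forecast package.

```r
library(forecast)
library(ggplot2)

# Hypothetical helper: flatten a forecast object's point forecasts and
# single-level prediction interval into a data frame.
interval_df <- function(fc, model) {
  data.frame(
    time  = as.numeric(time(fc$mean)),
    mean  = as.numeric(fc$mean),
    lower = as.numeric(fc$lower),
    upper = as.numeric(fc$upper),
    model = model
  )
}
fdf <- rbind(interval_df(f1, "ARIMA"), interval_df(f2, "ETS"))

# autoplot() draws the historical data; semi-transparent ribbons overlay
# the two sets of 95% prediction intervals, blending where they overlap.
autoplot(Nile) +
  geom_ribbon(data = fdf,
              aes(x = time, ymin = lower, ymax = upper, fill = model),
              alpha = 0.4, inherit.aes = FALSE) +
  geom_line(data = fdf, aes(x = time, y = mean, colour = model),
            inherit.aes = FALSE) +
  labs(x = "Year", y = "Billions of cubic metres",
       title = "Forecasts of Nile River flow")
```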

# Software for honours students

I spoke to our new crop of honours students this morning. Here are my slides, example files and links.

# rOpenSci unconference in Brisbane, 21-22 April 2016

The first rOpenSci unconference in Australia will be held on Thursday and Friday (April 21-22) in Brisbane, at the Microsoft Innovation Centre.

This event will bring together researchers, developers, data scientists and open data enthusiasts from industry, government and universities. The aim is to conceptualise and develop R-based tools that address current challenges in data science, open science and reproducibility.

Examples of past projects can be seen here, here, and here. Also here.

You can view more details, see who else is attending, and most importantly, apply to attend at the website.

# Monash Business Analytics Team Profile

Our research group has been growing lately, as you can see below! We were featured in the latest issue of the Monash newsletter The Insider. Check it out.

# Model variance for ARIMA models

From today’s email:

I wanted to ask you about your R forecast package, in particular the Arima() function. We are using this function to fit an ARIMAX model and produce model estimates and standard errors, which in turn can be used to get p-values and later model forecasts. To double check our work, we are also fitting the same model in SAS using PROC ARIMA and comparing model coefficients and output.
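For context, here is a minimal sketch of the kind of setup the email describes, using a built-in series and a hypothetical linear-trend regressor rather than the correspondent's actual data and model. The standard errors come from the estimated coefficient covariance matrix stored in the fitted object.

```r
library(forecast)

# Illustrative regression with ARIMA errors; the data and model order
# here are placeholders, not the correspondent's.
y    <- Nile
xreg <- as.numeric(time(y))              # hypothetical regressor: a linear trend

fit <- Arima(y, order = c(1, 0, 0), xreg = xreg)

# Coefficient estimates, standard errors, and z statistics
est <- coef(fit)
se  <- sqrt(diag(fit$var.coef))
round(cbind(estimate = est, se = se, z = est / se), 3)
```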

# Omitting outliers

Someone sent me this email today:

One of my colleagues said that you once said/wrote that you had encountered very few real outliers in your work, and that normally the “outlier-looking” data points were proper data points that should not have been treated as outliers. Have you discussed this in writing? If so, I would love to read it.

I don’t think I’ve ever said or written anything quite like that, and I see lots of outliers in real data. But I have counselled against omitting apparent outliers.

Often the most interesting part of a data set is in the unusual or unexpected observations, so I’m strongly opposed to automatic omission of outliers. The most famous case of that is the non-detection of the hole in the ozone layer by NASA. The way I was told the story was that outliers had been automatically filtered from the data obtained from Nimbus-7. It was only when the British Antarctic Survey observed the phenomenon in the mid 1980s that scientists went back and found the problem could have been detected a decade earlier if automated outlier filtering had not been applied by NASA. In fact, that is also how the story was told on the NASA website for a few years. But in a letter to the editor of the IMS bulletin, Pukelsheim (1990) explains that the reality was more complicated. In the corrected story, scientists were investigating the unusual observations to see if they were genuine, or the result of instrumental error, but still didn’t detect the problem until quite late.

Whatever actually happened, outliers need to be investigated, not omitted. Try to understand what caused some observations to be different from the bulk of the observations. If you understand the reasons, you are then in a better position to judge whether the points can legitimately be removed from the data set, or whether you’ve just discovered something new and interesting. Never remove a point just because it is weird.
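As a practical starting point, recent versions of the forecast package include `tsoutliers()`, which flags candidate outliers for inspection. A minimal sketch, using the `gold` series that ships with the package; the point is to examine the flagged observations, not to apply the suggested replacements automatically.

```r
library(forecast)

# Flag candidate outliers in the gold price series (shipped with forecast)
out <- tsoutliers(gold)

out$index          # positions of the suspect observations
gold[out$index]    # inspect the actual values before deciding what to do
out$replacements   # suggested replacements -- treat these as a last resort
```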