RSS feeds for statistics and related journals

I’ve now resurrected the collection of research journals that I follow, and set it up as a shared collection in feedly. So anyone can easily subscribe to all of the same journals, or select a subset of them, to follow on feedly.

Di Cook is moving to Monash

I’m delighted that Professor Dianne Cook will be joining Monash University in July 2015 as a Professor of Business Analytics. Di is an Australian who has worked in the US for the past 25 years, mostly at Iowa State University. She is moving back to Australia and joining the Department of Econometrics and Business Statistics in the Monash Business School, as part of our initiative in Business Analytics.

Di is a world leader in data visualization, and is well-known for her work on interactive graphics. She is also the academic supervisor of several leading data scientists including Hadley Wickham and Yihui Xie, both of whom work for RStudio.

Di has a great deal of energy and enthusiasm for computational statistics and data visualization, and will play a key role in developing and teaching our new subjects in business analytics.

The Monash Business School is already exceptionally strong in econometrics (ranked 7th in the world on RePEc) and forecasting (ranked 11th on RePEc), and we have recently expanded into actuarial science. With Di joining the department, we will be extending our expertise in the area of data visualization as well.



Statistical modelling and analysis of big data

There is a one-day workshop on this topic on 23 February 2015 at QUT in Brisbane. I will be speaking on “Visualizing and forecasting big time series data”.


Big data is now endemic in business, industry, government, environmental management, medical science, social research and so on. One of the commensurate challenges is how to effectively model and analyse these data.

This workshop will bring together national and international experts in statistical modelling and analysis of big data, to share their experiences, approaches and opinions about future directions in this field.

The workshop programme will commence at 8.30am and close at 5pm. Registration is free; however, numbers are strictly limited, so please ensure you register when you receive your invitation via email. Morning and afternoon tea will be provided; participants will need to purchase their own lunch.

Further details will be made available in early January.

New R package for electricity forecasting

Shu Fan and I have developed a model for electricity demand forecasting that is now widely used in Australia for long-term forecasting of peak electricity demand. It has become known as the “Monash Electricity Forecasting Model”. We have decided to release an R package that implements our model so that other people can easily use it. The package is called “MEFM” and is available on github. We will probably also put it on CRAN eventually.

The model was first described in Hyndman and Fan (2010). We are continually improving it, and the latest version is described in the model documentation, which will be updated from time to time.

The package is being released under a GPL licence, so anyone can use it. All we ask is that our work is properly cited.

Naturally, we are not able to provide free technical support, although we welcome bug reports. We are available to undertake paid consulting work in electricity forecasting.


A time series classification contest

Amongst today’s email was one from someone running a private competition to classify time series. Here are the essential details.

The data are measurements from a medical diagnostic machine which takes one measurement every second; after 32–1000 seconds, the time series must be classified into one of two classes. Some pre-classified training data are provided. It is not necessary to classify all the test data, but you do need to have relatively high accuracy on what is classified. So you could find a subset of more easily classifiable test time series, and leave the rest of the test data unclassified.
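The "classify only what you are confident about" strategy can be sketched as follows. This is an illustrative sketch on simulated data, not the contest's actual data or method; the logistic regression model, the 0.9 threshold, and all variable names are my own assumptions.

```r
# Sketch: abstain from classifying cases where the model is unsure.
# Simulated two-class data; any classifier producing probabilities would do.
set.seed(1)
train <- data.frame(x = rnorm(200))
train$class <- rbinom(200, 1, plogis(2 * train$x))
test <- data.frame(x = rnorm(200))

fit <- glm(class ~ x, data = train, family = binomial)
p <- predict(fit, newdata = test, type = "response")

# Only assign a class when the predicted probability is extreme;
# leave the ambiguous series unclassified (NA).
threshold <- 0.9  # arbitrary choice for illustration
test$class <- ifelse(p > threshold, 1,
                     ifelse(p < 1 - threshold, 0, NA))
mean(is.na(test$class))  # proportion of test series left unclassified
```

Raising the threshold trades coverage for accuracy: fewer series get classified, but those that do are the ones the model is most certain about.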

Honoring Herman Stekler

The first issue of the IJF for 2015 has just been published, and I’m delighted that it includes a special section honoring Herman Stekler. It includes articles covering a range of his forecasting interests, although not all of them (sports forecasting is missing). Herman himself wrote a paper for it looking at “Forecasting—Yesterday, Today and Tomorrow”.

He is in a unique position to write such a paper, as he has been doing forecasting research longer than anyone else on the planet: his first published paper on forecasting appeared in 1959. Herman is now 82 years old, and is still very active in research. Only a couple of months ago, he wrote to me with some new research ideas he had been thinking about, asking me for some feedback. He is also an extraordinarily conscientious and careful associate editor of the IJF, and a delight to work with. He is truly “a scholar and a gentleman” and I am very happy that we can honor Herman in this manner. Thanks to Tara Sinclair, Prakash Loungani and Fred Joutz for putting this tribute together.

We also published an interview with Herman in the IJF in 2010, which contains some information about his early years, graduate education and first academic jobs.

Prediction competitions

Competitions have a long history in forecasting and prediction, and have been instrumental in forcing research attention on methods that work well in practice. In the forecasting community, the M competition and M3 competition have been particularly influential. The data mining community has the annual KDD Cup, which has generated attention on a wide range of prediction problems and associated methods. Recent KDD Cups are hosted on Kaggle.

In my research group meeting today, we discussed our (limited) experiences in competing in some Kaggle competitions, and we reviewed the following two papers, which describe two prediction competitions:

  1. Athanasopoulos and Hyndman (IJF 2011). The value of feedback in forecasting competitions. [preprint version]
  2. Roy et al (2013). The Microsoft Academic Search Dataset and KDD Cup 2013.


New Australian data on the HMD

The Human Mortality Database is a wonderful resource for anyone interested in demographic data. It is a carefully curated collection of high-quality deaths and population data from 37 countries, all in a consistent format with consistent definitions. I have used it many times and never cease to be amazed at the care taken to maintain such a great resource.

The data are continually being revised and updated. Today the Australian data have been updated to 2011. There is a time lag because lagged death registrations result in undercounts, so only data that are likely to be complete are included.

Tim Riffe from the HMD has pro­vided the fol­low­ing infor­ma­tion about the update:

  1. All death counts since 1964 are now included by year of occurrence, up to 2011. We have 2012 data but do not publish them because they are likely a 5% undercount due to lagged registration.
  2. Death count inputs for 1921 to 1963 are now in single ages. Previously they were in 5-year age groups. Rather than having an open age group of 85+ in this period, counts usually go up to the maximum observed (stated) age. This change (i) introduces minor heaping in early years and (ii) implies different apparent old-age mortality than before, since previously anything above 85 was modeled according to the Methods Protocol.
  3. Population denominators have been swapped out for years 1992 to the present, owing to new ABS methodology and intercensal estimates for the recent period.

Some of the data can be read into R using the hmd.mx and hmd.e0 functions from the demography package. Tim has his own package on github that provides a more extensive interface.
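As a quick sketch of the demography-package route (this assumes you have registered for a free HMD account; the username and password below are placeholders, and you need a network connection):

```r
# Download Australian mortality data from the Human Mortality Database.
library(demography)

# hmd.mx fetches age-specific mortality rates and populations;
# hmd.e0 fetches life expectancy at birth.
aus    <- hmd.mx("AUS", "username", "password", label = "Australia")
aus.e0 <- hmd.e0("AUS", "username", "password")

plot(aus, series = "total")  # log mortality rates by age, one curve per year
plot(aus.e0)                 # life expectancy at birth over time
```

The object returned by hmd.mx is a demogdata object, so it plugs straight into the demographic modelling functions elsewhere in the package.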