Errors on percentage errors

The MAPE (mean absolute percentage error) is a popular measure of forecast accuracy and is defined as

    \[\text{MAPE} = 100\,\text{mean}\left(|y_t - \hat{y}_t| / |y_t|\right)\]

where $y_t$ denotes an observation and $\hat{y}_t$ denotes its forecast, and the mean is taken over $t$.
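
As a quick illustration, here is a minimal R sketch of this definition (the function name and the toy numbers are mine, purely for illustration):

    # MAPE as defined above: 100 * mean(|y - yhat| / |y|)
    mape <- function(y, yhat) {
      100 * mean(abs(y - yhat) / abs(y))
    }

    # Toy example with made-up values
    y    <- c(100, 120, 140, 160)
    yhat <- c(110, 118, 150, 155)
    mape(y, yhat)
    #> about 5.5 (per cent)

Note that the definition divides by |y_t|, so it breaks down whenever an observation equals zero.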

Armstrong (1985, p. 348) was the first (to my knowledge) to point out the asymmetry of the MAPE, saying that “it has a bias favoring estimates that are below the actual values”.

Job at Center for Open Science

This looks like an interesting job.

Dear Dr. Hyndman,

I write from the Center for Open Science, a non-profit organization based in Charlottesville, Virginia in the United States, which is dedicated to improving the alignment between scientific values and scientific practices. We are dedicated to open source and open science.

We are reaching out to you to find out if you know anyone who might be interested in our Statistical and Methodological Consultant position.

The position is a unique opportunity to consult on reproducible best practices in data analysis and research design; the consultant will make short visits to provide lectures and training at universities, laboratories, conferences, and via virtual media. An especially unique part of the job involves collaborating with the White House’s Office of Science and Technology Policy on matters relating to reproducibility.

If you know someone with substantial training and experience in scientific research, quantitative methods, reproducible research practices, and some programming experience (at least R, ideally Python or Julia), might you please pass this along to them?

Anyone may find out more about the job or apply via our website:

http://centerforopenscience.org/jobs/#stats

The position is full-time and located at our office in beautiful Charlottesville, VA.

Thanks in advance for your time and help.

More time series data online

Earlier this week I had coffee with Ben Fulcher who told me about his online collection comprising about 30,000 time series, mostly medical series such as ECG measurements, meteorological series, birdsong, etc. There are some finance series, but not many other data from a business or economic context, although he does include my Time Series Data Library. In addition, he provides Matlab code to compute a large number of characteristics. Anyone wanting to test time series algorithms on a large collection of data should take a look.

Unfortunately there is no R code, and no R interface for downloading the data.

Reflections on UseR! 2013

This week I’ve been at the R Users conference in Albacete, Spain. These conferences are a little unusual in that they are not really about research, unlike most conferences I attend. They provide a place for people to discuss and exchange ideas on how R can be used.

Here are some thoughts and highlights of the conference, in no particular order.

Makefiles for R/LaTeX projects

Updated: 21 November 2012

Make is a marvellous tool used by programmers to build software, but it can be used for much more than that. I use make whenever I have a large project involving R files and LaTeX files, which means I use it for almost all of the papers I write, and almost all of the consulting reports I produce.

COMPSTAT2012

This week I’m in Cyprus attending the COMPSTAT2012 conference. There’s been the usual interesting collection of talks, and interactions with other researchers. But I was struck by two side comments in talks this morning that I’d like to mention.

Stephen Pollock: Don’t imagine your model is the truth

Actually, Stephen said something like “economists (or was it econometricians?) have a bad habit of imagining their models are true”. He gave the example of people asking whether GDP “has a unit root”. GDP is an economic measurement. It no more has a unit root than I do. But the models used to approximate the dynamics of GDP may have a unit root. This is an example of confusing your data with your model. Or, to put it the other way around, imagining that the model is true rather than an approximation. A related thing that tends to annoy me is when people refer to the model as the “data generating process”. No model is a data generating process, unless the data were obtained by simulation from the model. Models are only ever approximations, and imagining that they are data generating processes only leads to over-confidence and bad science.

Matías Salibián-Barrera: Make all your code public

After giving an interesting survey of the robustbase and rrcov packages for R, Matías spent the last ten minutes of his talk presenting the case for reproducible research and arguing for making R code public as much as possible. The benefits of making our code public are obvious:

  • The research can be reproduced and checked by others. This is simply good science.
  • Your work will be cited more frequently. Other researchers are much less likely to refer to your work if they have to implement your methods themselves. But if you make it easy, then people will use your methods and consequently cite your papers.

He also said something like this: “Don’t wait until journals require you to submit code and data; start now by putting your code and data on a website.” I agree. Every methodological paper should have an R package as a complement. If that’s too much work, at least put some code on a website so that other people can implement your method. What’s the point of hiding your code? In some ways, the code is more important than the accompanying paper, as it represents a precise description of the method, whereas the written paper may not include all the necessary details.

How to avoid annoying a referee

It’s not a good idea to annoy the referees of your paper. They make recommendations to the editor about your work and it is best to keep them happy. There is an interesting discussion on stats.stackexchange.com on this subject. This inspired my own list below.

  • Explain what you’ve done clearly, avoiding unnecessary jargon.
  • Don’t claim your paper contributes more than it actually does. (I refereed a paper this week where the author claimed to have invented principal component analysis!)
  • Ensure all figures have clear captions and labels.
  • Include citations to the referee’s own work. Obviously you don’t know who is going to referee your paper, but you should aim to cite the main work in the area. It places your work in context, and keeps the referees happy if they are the authors.
  • Make sure the cited papers say what you think they say. Sight what you cite!
  • Include proper citations for all software packages. If you are unsure how to cite an R package, try the command citation("packagename"); a short example is given after this list.
  • Never plagiarise from other papers — not even sentence fragments. Use your own words. I’ve refereed a thesis which had slabs taken from my own lecture notes, including the typos.
  • Don’t plagiarise from your own papers. Either reference your earlier work, or provide a summary in new words.
  • Provide enough detail so your work can be replicated. Where possible, provide the data and code. Make sure the code works.
  • When responding to referee reports, make sure you answer everything asked of you. (See my earlier post “Always listen to reviewers”.)
  • If you’ve revised the paper based on referees’ comments, then thank them in the acknowledgements section.
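
On the software-citation point, here is a quick R illustration (the package name "forecast" is only an example; any installed package works in its place):

    citation("forecast")            # prints a ready-made citation for the package
    toBibtex(citation("forecast"))  # the same citation formatted as a BibTeX entry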

For some applied papers, there are specific statistical issues that need attention:

  • Give effect sizes with confidence intervals, not just p-values (a small sketch follows this list).
  • Don’t describe data using the mean and standard deviation without indicating whether the data were more-or-less symmetric and unimodal.
  • Don’t split continuous data into groups.
  • Make sure your data satisfy the assumptions of the statistical methods used.
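
On the first of these points, a small sketch in R with simulated data, showing that an estimate and confidence interval come along with the p-value anyway:

    # Simulated data, purely for illustration
    set.seed(1)
    x <- rnorm(30, mean = 10)
    y <- rnorm(30, mean = 11)

    fit <- t.test(x, y)
    fit$estimate   # the two group means; their difference is the effect size
    fit$conf.int   # 95% confidence interval for the difference in means
    fit$p.value    # the p-value alone conveys much less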

More tongue-in-cheek advice is provided by Stratton and Neil (2005), “How to ensure your paper is rejected by the statistical reviewer”, Diabetic Medicine, 22(4), 371–373.

Feel free to add your own suggestions over at stats.stackexchange.com.

Replications and reproducible research

Reproducible research

One of the best ways to get started with research in a new area is to try to replicate some existing research. In doing so, you will usually gain a much better understanding of the topic, and you will often discover some problems with the research, or develop ideas that will lead to a new research paper.

Unfortunately, a lot of papers are not reproducible because the data are not made available, or the descriptions of the methods are not detailed enough. The good news is that there is a growing move amongst funding agencies and journals to make more research reproducible. Peng, Dominici and Zeger (2006) and Koenker and Zeileis (2009) provide helpful discussions of new tools (especially Sweave) for making research easier to reproduce.
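
For readers unfamiliar with Sweave, the basic workflow from within R looks roughly like this (the file names here are hypothetical):

    # An .Rnw file is LaTeX with embedded R code chunks.
    # Sweave() runs the chunks and writes out a plain .tex file,
    # which can then be compiled to PDF.
    Sweave("paper.Rnw")
    tools::texi2pdf("paper.tex")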

The International Journal of Forecasting is also encouraging researchers to make their data and computer code available in order to allow others to replicate the research. I have just written an editorial on this topic which will appear in the first issue of 2010. Here is an excerpt from the article:

As the leading journal in forecasting, the IJF has a responsibility to set research standards.

So, a couple of years ago, we started asking authors to make their data and code available on our website. Then last year we changed our guidelines for authors to say:

Authors will normally be expected to submit a complete set of any data used in electronic form, or provide instructions for how to obtain them. Exceptions to this requirement may be made at the discretion of the handling editor. The author must describe methods and data sufficiently so the research can be replicated. The provision of code as well as data is encouraged, but not required.

This is consistent with the moves of many granting agencies, which are now starting to require publicly funded research to make the data publicly available. Once the data are public, other researchers can verify (or otherwise) the conclusions drawn.

Six months ago, the International Journal of Forecasting website (www.forecasters.org/ijf) was redesigned to allow supplements and comments on each published paper. Supplementary information about a paper can be provided by authors and is freely available online. This can include data, computer code, large tables, extra figures, extended footnotes, extra relevant material, etc. Authors are required to provide whatever material is needed to allow their results to be replicated without excessive difficulty.

Replication articles

It has become standard in most sciences for results to be replicated before being widely accepted. Remember cold fusion? Research findings that cannot be independently verified under the same or very similar conditions are little more than published opinions. In fact, the painstaking step-by-step duplication of published research is often the only way to properly assess the work done by others (Laine et al., 2007). While replicating research is accepted practice in medicine, chemistry, physics, and many other areas of science, it has not been part of the research culture in statistics, econometrics and other fields associated with forecasting.

The International Journal of Forecasting is trying to change this culture, and is willing to publish replication articles, especially if they provide new insights into published results. For example, we published Gardner & Diaz-Saiz (2008), which attempted to replicate Fildes et al. (1998) and provided some useful new insight into the original results. In the next issue of the journal, there will also be an invited paper by Heiner Evanschitzky and Scott Armstrong on replications in forecasting research. I hope everyone working in forecasting, statistics, econometrics and related fields will soon come to see replication studies as an important part of the research process.

References

  1. Evanschitzky, H. & Armstrong, J. S. (2010). Replications of forecasting research. International Journal of Forecasting, 26, to appear.
  2. Fildes, R., Hibon, M., Makridakis, S., & Meade, N. (1998). Generalising about univariate forecasting methods: Further empirical evidence. International Journal of Forecasting, 14, 339–358.
  3. Gardner, Jr., E. S. & Diaz-Saiz, J. (2008). Exponential smoothing in the telecommunications data. International Journal of Forecasting, 24, 170–174.
  4. Hyndman, R. J. (2010). Encouraging replications and reproducible research. International Journal of Forecasting, 26, to appear.
  5. Koenker, R. & Zeileis, A. (2009). On reproducible econometric research. Journal of Applied Econometrics, 24(5), 833–847.
  6. Laine, C., Goodman, S. N., Griswold, M. E., & Sox, H. C. (2007). Reproducible research: moving toward research the public can really trust. Annals of Internal Medicine, 146(6), 450–453.
  7. Peng, R. D., Dominici, F., & Zeger, S. L. (2006). Reproducible epidemiologic research. American Journal of Epidemiology, 163(9), 783–789.

Some useful websites