Probabilistic forecasting by Gneiting and Katzfuss (2014)

The IJF is introducing occasional review papers on areas of forecasting. We did a whole issue in 2006 reviewing 25 years of research since the International Institute of Forecasters was established. Since then, there has been a lot of new work in application areas such as call center forecasting and electricity price forecasting. In addition, there are areas we did not cover in 2006, including new product forecasting and forecasting in finance. There have also been methodological and theoretical developments over the last eight years. Consequently, I’ve started inviting eminent researchers to write survey papers for the journal.

One obvious choice was Tilmann Gneiting, who has produced a large body of excellent work on probabilistic forecasting in the last few years. The theory of forecasting was badly in need of development, and Tilmann and his coauthors have made several great contributions in this area. However, when I asked him to write a review, he explained that another journal had got in before me, and that the review was already written. It appeared in the very first volume of the new journal Annual Review of Statistics and Its Application: Gneiting and Katzfuss (2014), Probabilistic Forecasting, pp. 125–151.

Having now read it, I’m both grateful for this more accessible introduction to the area, and disappointed that it didn’t end up in the International Journal of Forecasting. I forecast that it will be highly cited (although I won’t calculate a forecast distribution or compute a scoring function for that).

Also, good luck to the new journal; it looks like it will be very useful, and is sure to have a high impact factor given it publishes review articles.

How to get your paper rejected quickly

I sent this rejection letter this morning about a paper submitted to the International Journal of Forecasting.

Dear XXXXX,

I am writing to you regarding manuscript ????? entitled “xxxxxxxxxxxx”, which you submitted to the International Journal of Forecasting.

It so happens that I am aware that this paper was previously reviewed for the YYYYYYY journal. It seems that you have not bothered to make any of the changes recommended by the reviewers of your submission to YYYYYYY. Just submitting the same paper to another journal is extremely poor practice, and I am disappointed that you have taken this path. Reviewers spend a great deal of time providing comments, and it is disrespectful to ignore them. I don’t expect you to do everything they say, but I would expect some of their comments to be helpful.

I am unwilling to consider the paper further for the International Journal of Forecasting. Read the previous reviews to know why. And before you submit the paper to a new journal, take the time to consider the reviews you have already been given.

Sincerely,

Rob Hyndman
(Editor-in-Chief, International Journal of Forecasting)

I have written on this issue before. The peer-review system requires people to donate considerable amounts of time to writing reviews. In general, they do a great job and provide helpful comments. So it really annoys me when authors treat the system as a game, with the aim of getting a paper accepted with minimal work and with no interest in learning from feedback.

IJF quality indicators

I often receive email asking about IJF quality indicators. Here is one I received today.

Dear Professor Hyndman,

I recently had a paper published in IJF entitled, “xxxxxxxxxxxx”. I am very pleased with the publication and consider IJF to be an excellent outlet for my work in time-series econometrics.

I have an unusual request, but I hope you will consider responding. My research is judged by non-economists, and IJF is not on their list of “quality” journals. It makes a significant difference in my research rating and pay. Would you mind sending some objective information re the quality of IJF that I can pass along to the committee?

And here is part of my reply:

  • The IJF is ranked A in Australia (we have four levels: A*, A, B and C).†
  • The IJF 2011 2-year impact factor is 1.485. In 2010 it was 1.863. The five-year impact factor is 2.450. Compare this to the Journal of Business and Economic Statistics, which has a 2-year impact factor of 1.693, or Computational Statistics & Data Analysis with 1.089.
  • We are ranked 40 out of 305 economics journals based on our 2-year impact factor.
  • We receive about 400 submissions annually, and publish about 70 papers per year. That figure includes invited papers; of the contributed papers, we reject about 85–90%.

† The Australian rankings were produced by the Australian Research Council a few years ago after extensive consultation. They were later dropped, but the rankings are still frequently cited and used to measure journal quality. Although the ARC no longer has the rankings on their website, they are available here. Also useful is the list of econometrics journals (including the IJF), and the list of statistics journals.

Blogs about research

If you find this blog helpful (or even if you don’t, but you’re interested in blogs on research issues and tools), there are a few other blogs about doing research that you might find useful. Here are a few that I read.

I’ve created a bundle so you can subscribe to all of these in one go.

Of course, there are lots of statistics blogs as well, and blogs about other research disciplines. The ones above are those that concentrate on generic research issues.

Read the literature

I’ve just finished another reviewer report for a journal, and yet again I’ve had to make comments about reading the literature. It’s not difficult. Before you write a paper, read what other people have done. A simple search on Google Scholar will usually do the trick. And before you submit a paper, check again that you haven’t missed anything important.

The paper I reviewed today did not cite a single reference from either of the two most active research groups in the area over the last ten years. Any search on the topic would have turned up about a dozen papers from these two groups alone.

I don’t mind if papers miss a reference or two, especially if they have been published in an obscure outlet. But I will recommend a straight reject if a paper hasn’t cited any of the most important papers from the last five years. Part of a researcher’s task is to engage with what has already been done, and to show how any new ideas differ from or extend previous work.

Put your pre-prints online

I have argued previously that research papers should be posted online at the same time as they are submitted to a journal. Sometimes people claim that journals don’t allow it, which is nonsense. Almost every journal allows it, and many also allow the published version of a paper to appear on your personal website.

Today I discovered a new tool (thanks to the IMU newsletter) which makes it easy to check a journal’s policy on this. Check out SHERPA/RoMEO.

It’s a very useful tool, but whoever thought SHERPA/RoMEO was a good name needs therapy.