Review papers are extremely useful for new researchers such as PhD students, or when you want to learn about a new research field. The International Journal of Forecasting produced a whole review issue in 2006, and it contains some of the most highly cited papers we have ever published. Now, beginning with the latest issue of the journal, we have started publishing occasional review articles on selected areas of forecasting. The first two articles are:
- Electricity price forecasting: A review of the state-of-the-art with a look into the future by Rafał Weron.
- The challenges of pre-launch forecasting of adoption time series for new durable products by Paul Goodwin, Sheik Meeran, and Karima Dyussekeneva.
Both tackle very important topics in forecasting. Weron’s paper is a comprehensive survey of work on electricity price forecasting, coherently bringing together a large body of diverse research — at 50 pages, I think it is the longest paper I have ever approved. Goodwin, Meeran and Dyussekeneva review research on new product forecasting, a problem every company that produces goods or services has faced: when there are no historical data available, how do you forecast the sales of your product?
We have a few other review papers in progress, so keep an eye out for them in future issues.
I’ve been an editor of JSS for the last few years, and as a result I tend to get email from people asking me about publishing papers describing R packages in JSS. So for all those wondering, here are some general comments.
The IJF is introducing occasional review papers on areas of forecasting. We did a whole issue in 2006 reviewing 25 years of research since the International Institute of Forecasters was established. Since then, there has been a lot of new work in application areas such as call center forecasting and electricity price forecasting. In addition, there are areas we did not cover in 2006 including new product forecasting and forecasting in finance. There have also been methodological and theoretical developments over the last eight years. Consequently, I’ve started inviting eminent researchers to write survey papers for the journal.
One obvious choice was Tilmann Gneiting, who has produced a large body of excellent work on probabilistic forecasting in the last few years. The theory of forecasting was badly in need of development, and Tilmann and his coauthors have made several great contributions in this area. However, when I asked him to write a review, he explained that another journal had got in before me and that the review was already written. It appeared in the very first volume of the new journal Annual Review of Statistics and Its Application: Gneiting and Katzfuss (2014) Probabilistic Forecasting, pp.125–151.
Having now read it, I’m both grateful for this more accessible introduction to the area, and disappointed that it didn’t end up in the International Journal of Forecasting. I forecast that it will be highly cited (although I won’t calculate a forecast distribution or compute a scoring function for that).
Also, good luck to the new journal; it looks like it will be very useful, and is sure to have a high impact factor given it publishes review articles.
This is a short piece I wrote for the next issue of the Oracle newsletter produced by the International Institute of Forecasters.
I sent this rejection letter this morning about a paper submitted to the International Journal of Forecasting.
I am writing to you regarding manuscript ????? entitled “xxxxxxxxxxxx” which you submitted to the International Journal of Forecasting.
It so happens that I am aware that this paper was previously reviewed for the YYYYYYY journal. It seems that you have not bothered to make any of the changes recommended by the reviewers of your submission to YYYYYYY. Just submitting the same paper to another journal is extremely poor practice, and I am disappointed that you have taken this path. Reviewers spend a great deal of time providing comments, and it is disrespectful to ignore them. I don’t expect you to do everything they say, but I would expect some of their comments to be helpful.
I am unwilling to consider the paper further for the International Journal of Forecasting. Read the previous reviews to understand why. And before you submit the paper to another journal, take the time to consider the reviews you have already been given.
(Editor-in-Chief, International Journal of Forecasting)
I have written on this issue before. The peer-review system requires people to donate considerable amounts of time to writing reviews. In general, they do a great job and provide helpful comments. So it really annoys me when authors treat the system as a game, with the aim of getting a paper accepted with minimal work and no interest in learning from feedback.
The International Journal of Forecasting is calling for papers on probabilistic energy forecasting. Here are the details (taken from Tao Hong’s blog).
I often receive email asking about IJF quality indicators. Here is one I received today.
Dear Professor Hyndman,
I recently had a paper published in IJF entitled, “xxxxxxxxxxxx”. I am very pleased with the publication and consider IJF to be an excellent outlet for my work in time-series econometrics.
I have an unusual request, but I hope you will consider responding. My research is judged by non-economists and IJF is not on their list of “quality” journals. It makes a significant difference in my research rating and pay. Would you mind sending some objective information re the quality of IJF that I can pass along to the committee?
And here is part of my reply:
- The IJF is ranked A in Australia (we have four levels — A*, A, B and C).†
- The IJF 2011 2-year impact factor is 1.485. In 2010 it was 1.863. The five year impact factor is 2.450. Compare this to the Journal of Business and Economic Statistics which has a 2-year impact factor of 1.693, or Computational Statistics & Data Analysis with 1.089.
- We are ranked 40th out of 305 economics journals based on our 2-year impact factor.
- We receive about 400 submissions annually, and publish about 70 per year. But that includes invited papers. Of the contributed papers, we reject about 85–90%.
† The Australian rankings were produced by the Australian Research Council a few years ago after extensive consultation. They were later dropped, but the rankings are still frequently cited and used to measure journal quality. Although the ARC no longer has the rankings on their website, they are available here. Also useful is the list of econometrics journals (including the IJF), and the list of statistics journals.
The nature of research is that other people are probably working on similar ideas to you, and it is possible that someone will beat you to publishing them.
If you find this blog helpful (or even if you don’t but you’re interested in blogs on research issues and tools), there are a few other blogs about doing research that you might find useful. Here are a few that I read.
I’ve created a bundle so you can subscribe to all of these in one go.
Of course, there are lots of statistics blogs as well, and blogs about other research disciplines. The ones above are those that concentrate on generic research issues.
I’ve just finished another reviewer report for a journal, and yet again I’ve had to make comments about reading the literature. It’s not difficult. Before you write a paper, read what other people have done. A simple search on Google scholar will usually do the trick. And before you submit a paper, check again that you haven’t missed anything important.
The paper I reviewed today did not cite a single reference from either of the two most active research groups in the area in the last ten years. Any search on the topic would have turned up about a dozen papers from these two groups alone.
I don’t mind if papers miss a reference or two, especially if they have been published in an obscure outlet. But I will recommend a straight reject if a paper hasn’t cited any of the most important papers from the last five years. Part of a researcher’s task is to engage with what has already been done, and to show how any new ideas differ from or extend previous work.