Nominations for IJF Best Paper 2012–2013
Hyndsight
The following papers have been nominated for the best paper published in the International Journal of Forecasting in 2012–2013. I have included an excerpt from the nomination in each case. The papers in bold have been shortlisted for the award, and the editorial board are currently voting on them.

The first rule for the award of best paper should be that the paper clearly reflects the value of the new method/approach when compared to established alternatives in the particular problem context chosen by the researchers. This paper examines alternative models in the important problem of predicting loss from defaulting consumers. The problem context is clear and important; the appraisal of the inclusion of macroeconomic variables and the comparison with other specifications is thorough. It should have an impact on the many users of these models.

This paper is important because it seeks to determine how forecasters make their forecasts and whether they incorporate new information into their predictions. The methodology again is applicable to all fields.

This is a methodological paper developing ways to estimate spillovers from one market to others. They use a generalized vector autoregressive framework in which forecast error variance decompositions are invariant to the variable ordering. Even though Diebold and Yilmaz used the method to look at volatility spillovers internationally in the time domain, the procedure is usable more generally in cross-sectional data with spatial interconnections too.
Galvão, A. B. (2013). Changes in predictive ability with mixed frequency data. IJF, 29(3), 395–410.
The premise of this paper is just so ‘common sense’: if we have disaggregated data, why don’t we use it? The manuscript links disaggregated data (with different frequencies) with nonlinear features of models, which tend to disappear when the data are aggregated. It is a smart way to use all available information and, at the same time, to learn which features are interesting in the production of a forecast. This approach makes the study of nonlinearities valuable.

The authors explore an extensive set of methods to show that, on aggregating forecasts, a simple average is a benchmark that is very difficult to beat by more sophisticated aggregation schemes. Although the finding per se is not new (we have numerous studies examining the “forecast combination puzzle”), the rigorous approach to comparison of methods makes this manuscript very relevant.

First, this paper deals with a problem that is important from the point of view of empirical forecasting: measuring the uncertainty of forecasts. Second, the problem considered is interesting from the methodological point of view. Third, the proposed procedure can be implemented in practice, as it is not overly complicated. The balance between methodology and empirical interest is therefore appropriate.

This paper will revive an interesting discussion on simultaneous confidence bands, path forecasts, or whatever name different communities use for multi-step-ahead probabilistic forecasts in their various forms. Work on this topic is fairly rare, yet it is of the utmost importance to further develop probabilistic forecasting in that direction. The authors have done previous work (in the Journal of Applied Econometrics) on the verification of these so-called path forecasts. In this paper, they push it further by linking them to simultaneous confidence regions obtained in a hypothesis-testing framework. They also show the practical interest of their proposal through a relevant case study.

The paper deals with an important macroeconomic topic: predicting recessions. The failure to forecast recessions is one of the main failures in that field. The paper also is important because it presents methodologies that can be used in other areas of forecasting.

This paper is innovative in that it brings unexplored issues, such as non-causal representations of AR processes, into the forecasting literature. It opens new lines of inquiry, and may offer advantages for non-Gaussian processes, which are so prevalent in financial data.

This paper shows how regression analysis is misunderstood and misused by leading scholars when they do regression analyses. The paper has attracted much attention. It adds to the research showing that leading scholars have made serious errors in papers published in leading economics journals, and that this problem has gotten worse over time. The implication is that if even the best and the brightest get it wrong, how can we expect others to get it right? There are more effective ways to analyze data, and Soyer and Hogarth suggest one approach.