The MAPE (mean absolute percentage error) is a popular measure of forecast accuracy and is defined as $\text{MAPE} = 100\,\text{mean}(|y_t - \hat{y}_t|/|y_t|)$, where $y_t$ denotes an observation and $\hat{y}_t$ denotes its forecast, and the mean is taken over $t$. Armstrong (1985, p. 348) was the first (to my knowledge) to point out the asymmetry of the MAPE, saying that “it has a bias favoring estimates that are below the actual values”.
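The asymmetry is easy to demonstrate numerically. Here is a minimal sketch in Python (the function name and example values are illustrative, not from the post): two forecasts with the same absolute error produce very different MAPEs depending on whether the forecast is below or above the actual value.

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent:
    100 * mean(|y_t - yhat_t| / |y_t|), mean taken over t."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actual, forecast)) / len(actual)

# Same absolute error (100 units) in both cases, but:
under = mape([150], [50])    # forecast below the actual: 100*100/150 ~ 66.7
over  = mape([50], [150])    # forecast above the actual: 100*100/50  = 200.0

print(under, over)
```

Because the actual value sits in the denominator, over-forecasts are penalised more heavily than under-forecasts of the same size, which is exactly the bias Armstrong described.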
For all those people asking me how to obtain a print version of my book “Forecasting: principles and practice” with George Athanasopoulos, you now can. Order on Amazon.com, Amazon.co.uk or Amazon.fr. The online book will continue to be freely available. The print version is intended to help fund the development of the OTexts platform. For comparison, my previous forecasting textbook is priced at US$195, and Gonzalez-Rivera’s at US$182. No matter how good the books are, those prices are absurdly high. OTexts is intended to be a different kind of publisher: all our books are online and free, and those in print will be reasonably priced. The online version will continue to be updated regularly. The print version is a snapshot of the online version today. We will release new print editions occasionally, no more than annually, and only when the online version has changed enough to warrant one. We are planning an offline electronic version as well; I’ll announce it here when it is ready.
Every year or so, Elsevier asks me to nominate five International Journal of Forecasting papers from the last two years to highlight in their marketing materials as “Editor’s Choice”. I try to select papers across a broad range of subjects, and I take into account citations and downloads as well as my own impression of each paper. That tends to bias my selection a little towards older papers, as they have had more time to accumulate citations. Here are the papers I chose this morning (in the order they appeared):

- Diebold and Yilmaz (2012) Better to give than to receive: Predictive directional measurement of volatility spillovers. IJF 28(1), 57–66.
- Loterman, Brown, Martens, Mues and Baesens (2012) Benchmarking regression algorithms for loss given default modeling. IJF 28(1), 161–170.
- Soyer and Hogarth (2012) The illusion of predictability: How regression statistics mislead experts. IJF 28(3), 695–711.
- Friedman (2012) Fast sparse regression and classification. IJF 28(3), 722–738.
- Davydenko and Fildes (2013) Measuring forecasting accuracy: The case of judgmental adjustments to SKU-level demand forecasts. IJF 29(3), 510–522.

Last time I did this, three of the five papers I chose went on to win awards. (I don’t pick the award winners; that’s a matter for the whole editorial board.) On the other hand, I didn’t pick the
In two weeks I am presenting a workshop at the University of Granada (Spain) on Automatic Time Series Forecasting. Unlike most of my talks, this is not intended to be primarily about my own research. Rather it is to provide a state-of-the-art overview of the topic (at a level suitable for Masters students in Computer Science). I thought I’d provide some historical perspective on the development of automatic time series forecasting, plus give some comments on the current best practices.
Hastie, Tibshirani and Friedman’s Elements of Statistical Learning first appeared in 2001 and is already a classic. It is my go-to book when I need a quick refresher on a machine learning algorithm. I like it because it is written using the language and perspective of statistics, and it provides a very useful entry point into the literature of machine learning, which has its own terminology for statistical concepts. A free downloadable pdf version is available on the website. Recently, a simpler related book appeared, entitled Introduction to Statistical Learning with applications in R by James, Witten, Hastie and Tibshirani. It “is aimed for upper level undergraduate students, masters students and Ph.D. students in the non-mathematical sciences”. This would be a great textbook for our new third-year subject on Business Analytics. The R code is a welcome addition, showing how to implement the methods. Again, a free downloadable pdf version is available on the website. There is also a new, free book on Statistical foundations of machine learning by Bontempi and Ben Taieb, available on the OTexts platform. This is more of a handbook, written by two authors coming from a machine learning background. R code is also provided. Being an OTexts book, it is continually updated and revised, and is freely available.
The publishing platform I set up for my forecasting book has now been extended to cover more books and greater functionality. Check it out at www.otexts.org.
I received this email today: Dear Professor Hyndman, I was wondering if you could maybe give me some advice on how to organize your research process. I am able to search the literature on a certain topic and identify where there is a question to work with. My main difficulty is organizing my paper annotations to help guide my research process, i.e., how to manage the information gathered from those papers to compose and structure a document that represents the research developed so far. I have been looking at different tools such as Scrivener, Qiqqa, Papers2, etc., but I am not sure if one of these tools would be the right way to go. To be honest, I am not even sure a tool would do what I am looking for: not just organize references and annotate pdfs, but give me more control over my research process. I would appreciate it if I could get your thoughts on this subject.
The nature of research is that other people are probably working on similar ideas to you, and it is possible that someone will beat you to publishing them.
If you find this blog helpful (or even if you don’t, but you’re interested in blogs on research issues and tools), there are a few other blogs about doing research that you might find useful. Here are a few that I read:

- Patter (Pat Thomson)
- The Thesis Whisperer (Inger Mewburn)
- The Research Whisperer (several RMIT researchers)
- the (research) supervisor’s friend (Geof Hill)
- My Research Rants (Jordi Cabot)
- The Three Month Thesis (James Hayton)
- profserious (Anthony Finkelstein)
- Academic Life (Marialuisa Aliotta)
- Help for New Professors (Faye Hicks)
- The Art of Scientific Writing (Faye Hicks)
- Explorations of style (Rachael Cayley)
- sharmanedit (Anna Sharman)
- GradHacker (writers from several universities)
- PhD Life (Warwick Uni students)
- PhD Comics — essential reading for every PhD student, and good therapy

I’ve created a bundle so you can subscribe to all of these in one go. Of course, there are lots of statistics blogs as well, and blogs about other research disciplines. The ones above are those that concentrate on generic research issues.
I’ve just finished another reviewer report for a journal, and yet again I’ve had to make comments about reading the literature. It’s not difficult. Before you write a paper, read what other people have done. A simple search on Google Scholar will usually do the trick. And before you submit a paper, check again that you haven’t missed anything important. The paper I reviewed today did not cite a single reference from either of the two most active research groups in the area over the last ten years. Any search on the topic would have turned up about a dozen papers from these two groups alone. I don’t mind if papers miss a reference or two, especially if they were published in an obscure outlet. But I will recommend a straight reject if a paper hasn’t cited any of the most important papers from the last five years. Part of a researcher’s task is to engage with what has already been done, and to show how any new ideas differ from or extend previous work.