On 23–25 September, I will be running a 3-day workshop in Perth on “Forecasting: principles and practice”, mostly based on my book of the same name. Workshop participants will be assumed to be familiar with basic statistical tools such as multiple regression, but no knowledge of time series or forecasting will be assumed. Some prior experience in R is highly desirable.

Venue: The University Club, University of Western Australia, Nedlands WA.

Day 1: Forecasting tools, seasonality and trends, exponential smoothing.
Day 2: State space models, stationarity, transformations, differencing, ARIMA models.
Day 3: Time series cross-validation, dynamic regression, hierarchical forecasting, nonlinear models.

The course will involve a mixture of lectures and practical sessions using R. Each participant must bring their own laptop with R installed, along with the fpp package and its dependencies. For costs and enrolment details, go to http://www.cas.maths.uwa.edu.au/courses/forecasting.
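If you want to get set up before the workshop, the fpp package and its dependencies can be installed from CRAN in one line:

```r
# Install fpp together with the packages it depends on
install.packages("fpp", dependencies = TRUE)
library(fpp)  # also loads the forecast package and the book's data sets
```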
GEFCom 2014 is the most advanced energy forecasting competition ever organized, both in terms of the data involved and in terms of the way the forecasts will be evaluated. So everyone interested in energy forecasting should head over to the competition webpage and start forecasting: www.gefcom.org. This time, the competition is hosted on CrowdANALYTIX rather than Kaggle.

Highlights of GEFCom2014:
- An upgraded edition of GEFCom2012.
- Four tracks: electric load, electricity price, wind power and solar power forecasting.
- Probabilistic forecasting: contestants are required to submit 99 quantiles for each step throughout the forecast horizon.
- Rolling forecasting: incremental data sets are released on a weekly basis to forecast the next period of interest.
- Prizes for winning teams and institutions: up to 3 teams from each track will be recognized as winning teams; top institutions with multiple well-performing teams will be recognized as winning institutions.
- Global participation: 200+ people from 40+ countries have already signed up for the GEFCom2014 interest list.

Tao Hong (the main organizer) has a few tips on his blog that you should read before starting.
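One generic way to produce the required 99 quantiles is to simulate many future sample paths from a fitted model and take empirical quantiles at each horizon. Here is a minimal sketch using the forecast package; the series and the horizon are stand-ins for illustration, not competition data:

```r
library(forecast)

# Simulate 1000 future sample paths from a fitted ARIMA model,
# then take the 1st-99th percentiles at each forecast horizon.
y <- AirPassengers          # stand-in series for illustration
h <- 24                     # forecast horizon
fit <- auto.arima(y)
sims <- replicate(1000, simulate(fit, nsim = h, future = TRUE))
quantiles <- apply(sims, 1, quantile, probs = (1:99) / 100)
# quantiles is a 99 x h matrix: one row per quantile, one column per step
```

Simulation works for any model that can generate future paths, so the same sketch carries over to load, price, wind and solar models alike.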
At the IIF annual board meeting last month in Rotterdam, I suggested that we provide awards to the top students studying forecasting at university level around the world, to the tune of $100 plus IIF membership for a year. I’m delighted that the idea met with enthusiasm, and that the awards are now available. Even better, my second year forecasting subject has been approved for an award. The IIF have agreed to fund awards for 20 forecasting courses to start with. I believe they have already had several applications, so any other forecasting lecturers out there will need to be quick if they want to be part of it.
This is an example of how to use the demography package in R for stochastic population forecasting with coherent components. It is based on the papers by Hyndman and Booth (IJF 2008) and Hyndman, Booth and Yasmeen (Demography 2013). I will use Australian data from 1950 to 2009 and forecast the next 50 years. In demography, “coherent” forecasts are where males and females (or other sub-groups) do not diverge over time. (Essentially, we require the difference between the groups to be stationary.) When we wrote the 2008 paper, we did not know how to constrain the forecasts to be coherent in a functional data context, and so this was not discussed. My later 2013 paper provided a way of imposing coherence. This blog post shows how to implement both ideas using R.
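The coherent approach can be sketched in a few lines. This uses the French mortality data that ships with the demography package, since the Australian data would need to be read in separately with read.demogdata(); the smoothing and model orders are left at their defaults:

```r
library(demography)

# Smooth the raw mortality rates, then fit a coherent functional data
# model via the product-ratio method (Hyndman, Booth & Yasmeen 2013).
smus <- smooth.demogdata(fr.mort)   # fr.mort: French mortality, ships with the package
fit  <- coherentfdm(smus)
fc   <- forecast(fit, h = 50)       # 50-year coherent forecasts

# The forecast object contains one component per group (e.g. fc$male),
# whose rates can be plotted in the usual way.
plot(fc$male)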
When modelling data with ARIMA models, it is sometimes useful to plot the inverse characteristic roots. The following functions will compute and plot the inverse roots for any fitted ARIMA model (including seasonal models).
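A sketch of such functions is below, assuming a model fitted with arima() or forecast::Arima(). The $model component of the fit stores the expanded AR and MA coefficients (seasonal parts included), so seasonal models need no special handling:

```r
# Inverse roots of the AR polynomial 1 - phi_1 z - ... - phi_p z^p.
# For a stationary model, these lie inside the unit circle.
arroots <- function(object) {
  phi <- object$model$phi
  if (all(phi == 0)) return(complex(0))
  1 / polyroot(c(1, -phi))
}

# Inverse roots of the MA polynomial 1 + theta_1 z + ... + theta_q z^q.
# For an invertible model, these also lie inside the unit circle.
maroots <- function(object) {
  theta <- object$model$theta
  if (all(theta == 0)) return(complex(0))
  1 / polyroot(c(1, theta))
}

# Plot inverse roots against the unit circle.
plotroots <- function(roots, main = "Inverse roots") {
  plot(c(-1, 1), c(-1, 1), type = "n", asp = 1,
       xlab = "Real", ylab = "Imaginary", main = main)
  symbols(0, 0, circles = 1, inches = FALSE, add = TRUE)
  points(Re(roots), Im(roots), pch = 19, col = "red")
}

fit <- arima(USAccDeaths, order = c(0, 1, 1),
             seasonal = list(order = c(0, 1, 1)))
plotroots(maroots(fit), main = "Inverse MA roots")
```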
Rolling forecasts are commonly used to compare time series models. Here are a few of the ways they can be computed using R. I will use ARIMA models as a vehicle of illustration, but the code can easily be adapted to other univariate time series models.
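One common variant estimates the model once on a training period and then re-applies it, without re-estimating the parameters, to each expanding window via the model argument of forecast::Arima(). A minimal sketch, with a built-in monthly series and an arbitrary split date standing in for real data:

```r
library(forecast)

# Rolling one-step forecasts with fixed parameters:
# fit once on the training period, then roll the forecast origin forward.
y <- AirPassengers                     # stand-in monthly series
train <- window(y, end = c(1956, 12))  # illustrative split point
fit <- auto.arima(train)

k <- length(train)
fc <- numeric(length(y) - k)
for (i in seq_along(fc)) {
  # Refit with model = fit re-uses the estimated coefficients as-is
  refit <- Arima(window(y, end = time(y)[k + i - 1]), model = fit)
  fc[i] <- forecast(refit, h = 1)$mean
}
```

Re-estimating the parameters at each origin instead is a one-line change (drop the model argument), at the cost of much more computation.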
Every year, the International Institute of Forecasters, in conjunction with SAS, offers some small grants to help promote research in forecasting. There are two $5000 grants per year for research on forecasting methodology and applications. This year, applications close on 30 September 2014. More details are given here. Information about past SAS-IIF awards is given on the IIF website. It is interesting to see the range of topics covered. Here are the winning projects from the last two years:

- Jeffrey Stonebraker: “Probabilistic Forecasting of the Global Demand for the Treatment of Hemophilia B.”
- Yongchen (Herbert) Zhao: “Robust Real-Time Automated Forecast Combination in SAS: Development of a SAS Procedure and a Comprehensive Evaluation of Recently Developed Combination Methods.”
- Zoe Theocharis, Nigel Harvey, Leonard Smith: “Improving judgmental input to hurricane forecasts in the insurance and reinsurance sector.”
- Elena-Ivona Dumitrescu, Janine Christine Balter, Peter Reinhard Hansen: “Forecasting Exchange Rate Volatility: Multivariate Realized GARCH Framework.”
- Yorghos Tripodis: “Forecasting the Cognitive Status in an Aging Population.”
Last week my research group discussed Hal Varian’s interesting new paper on “Big data: new tricks for econometrics”, Journal of Economic Perspectives, 28(2): 3–28. It’s a nice introduction to trees, bagging and forests, plus a very brief entrée to the LASSO and the elastic net, and to spike-and-slab regression. Not enough to be able to use them, but OK if you’ve no idea what they are.
With the latest version of the hts package for R, it is now possible to specify rather complicated grouping structures relatively easily. All aggregation structures can be represented as hierarchies or as cross-products of hierarchies. For example, a hierarchical time series may be based on geography: country, state, region, store. Often there is also a separate product hierarchy: product groups, product types, packet size. Forecasts of all the different types of aggregation are required; e.g., product type A within region X. The aggregation structure is a cross-product of the two hierarchies. This framework includes even apparently non-hierarchical data: consider the simple case of a time series of deaths split by sex and state. We can consider sex and state as two very simple hierarchies with only one level each. Then we wish to forecast the aggregates of all combinations of the two hierarchies. Any number of separate hierarchies can be combined in this way. Non-hierarchical factors such as sex can be treated as single-level hierarchies.
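The deaths-by-sex-and-state case can be specified directly with gts(). A minimal sketch with simulated bottom-level series; the sex and state labels, series length and forecast settings are all illustrative:

```r
library(hts)

# Six bottom-level series: every combination of sex (F/M) and
# three states. The data are simulated purely for illustration.
set.seed(1)
bts <- ts(matrix(rnorm(60 * 6, mean = 100), ncol = 6), frequency = 12)
sex   <- rep(c("F", "M"), each = 3)
state <- rep(c("NSW", "VIC", "QLD"), times = 2)

# Each row of the groups matrix defines one single-level hierarchy;
# gts() builds all cross-product aggregates automatically.
gy <- gts(bts, groups = rbind(sex, state))

# Reconciled forecasts of every aggregate and every bottom-level series
fc <- forecast(gy, h = 12, method = "comb", fmethod = "ets")
```

The same rbind() pattern extends to any number of grouping variables, and each row may itself represent a multi-level hierarchy coded as nested labels.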
For the next month I am travelling in Europe and will be giving the following talks.

17 June: Challenges in forecasting peak electricity demand. Energy Forum, Sierre, Valais/Wallis, Switzerland.
20 June: Common functional principal component models for mortality forecasting. International Workshop on Functional and Operatorial Statistics, Stresa, Italy.
24–25 June: Functional time series with applications in demography. Humboldt University, Berlin.
1 July: Fast computation of reconciled forecasts in hierarchical and grouped time series. International Symposium on Forecasting, Rotterdam, Netherlands.