Date: 26 July 2018
Location: University of Melbourne
This page is for people enrolled in my ACEMS half-day workshop.
Prerequisites
Please bring your own laptop with a recent version of R and RStudio installed, along with the `fpp2` package and its dependencies.
Participants will be assumed to be familiar with basic statistical tools such as multiple regression, but no knowledge of time series or forecasting will be assumed.
Reference
![Online textbook on forecasting](https://otexts.org/fpp2/fpp2_cover.jpg)
Program
| Time | Topic | Materials |
| --- | --- | --- |
| 1:00–1:45 | Forecast evaluation | Slides |
| 1:45–2:30 | ARIMA models | Slides |
| 2:30–3:00 | Break | |
| 3:00–4:00 | Dynamic regression | Slides |
| 4:00–5:00 | Hierarchical forecasting | Slides |
Lab Session 1
1. For the first four lab sessions, we will use the `qcement` data (quarterly Australian Portland cement production, 1956–2014). Plot the data using `autoplot()`.
2. Split the data into a training set and a test set of 4 years. We will apply models to the training set, and compare the forecasts on the test set. Use `window()` to split the data.
3. Compute a seasonal naïve forecast applied to the training data, and plot the results. Use `snaive()` to produce the forecasts.
4. Test whether the residuals are white noise using the `checkresiduals()` function. What do you conclude?
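A possible solution sketch for this session, assuming the `fpp2` package is installed. The split point assumes the series ends in 2014 Q1, so the last 16 quarters form the 4-year test set; adjust the dates if your version of the data differs.

```r
library(fpp2)

# Plot the quarterly cement production data
autoplot(qcement)

# Training set up to 2010 Q1; test set is the final 4 years (16 quarters)
train <- window(qcement, end = c(2010, 1))
test  <- window(qcement, start = c(2010, 2))

# Seasonal naive forecasts over the test period
fc <- snaive(train, h = length(test))
autoplot(fc) + autolayer(test, series = "Test data")

# Residual diagnostics: time plot, ACF, histogram and Ljung-Box test
checkresiduals(fc)
```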
Lab Session 2
Compare the forecasts of the four benchmark methods to the test data using the `accuracy()` command. What do you conclude?
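A sketch of one way to run the comparison. The four benchmark methods are assumed here to be the mean, naïve, seasonal naïve and drift methods (the usual benchmarks in the fpp2 textbook); the split point again assumes the series ends in 2014 Q1.

```r
library(fpp2)

train <- window(qcement, end = c(2010, 1))
test  <- window(qcement, start = c(2010, 2))
h <- length(test)

# Four assumed benchmark methods
fc_mean   <- meanf(train, h = h)
fc_naive  <- naive(train, h = h)
fc_snaive <- snaive(train, h = h)
fc_drift  <- rwf(train, drift = TRUE, h = h)

# Test-set accuracy: the "Test set" row of each table is the one to compare
accuracy(fc_mean, test)
accuracy(fc_naive, test)
accuracy(fc_snaive, test)
accuracy(fc_drift, test)
```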
Lab Session 3
1. For the `qcement` data, fit a suitable ARIMA model to the logged data using `auto.arima()`. You can use the argument `lambda = 0` in the `auto.arima()` function to take logs; that way, the forecasts will be on the original scale.
2. Does this model pass the residuals check?
3. How does it compare to the benchmark models on the test data?
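A sketch of one approach, using the same training/test split assumed in the earlier sessions:

```r
library(fpp2)

train <- window(qcement, end = c(2010, 1))
test  <- window(qcement, start = c(2010, 2))

# lambda = 0 fits the model to the logged series but back-transforms
# the forecasts to the original scale
fit <- auto.arima(train, lambda = 0)
summary(fit)

# Are the residuals white noise?
checkresiduals(fit)

# Compare against the benchmarks on the test set
fc <- forecast(fit, h = length(test))
accuracy(fc, test)
```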
Lab Session 4

1. Download quarterly GDP data from the Australian Macro Database. To make this a time series:

   ```r
   ausgdp <- ts(read.csv("gdpcknaaoq.csv")[, 1], start = c(1959, 3), frequency = 4)
   ```

2. Fit a dynamic regression model to the logged `qcement` data with GDP as a predictor variable. Make sure you use the same time periods for both variables.
3. How do the results compare with the ARIMA model fitted earlier?
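A sketch of one way to fit the model. The end date used to align the two series is an assumption; adjust it to the actual span of the GDP file you downloaded.

```r
library(fpp2)

# Read the GDP series (file name as given in the lab instructions)
ausgdp <- ts(read.csv("gdpcknaaoq.csv")[, 1], start = c(1959, 3), frequency = 4)

# Restrict both series to a common time span (end date is illustrative)
cement <- window(qcement, start = c(1959, 3), end = c(2013, 4))
gdp    <- window(ausgdp,  start = c(1959, 3), end = c(2013, 4))

# Dynamic regression: regression on GDP with ARIMA errors;
# lambda = 0 models the logged cement series
fit <- auto.arima(cement, xreg = gdp, lambda = 0)
summary(fit)
checkresiduals(fit)
```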
Lab Session 5

The `visnights` data set contains quarterly visitor nights for various regions of Australia. To turn this into an `hts` object:

```r
library(hts)
tourism.hts <- hts(visnights, characters = c(3, 5))
```

Generate forecasts of the bottom-level series using ARIMA models, and sum them for "bottom-up" forecasts:

```r
visnightsfc <- forecast(tourism.hts, method = "bu", fmethod = "arima", h = 8)
autoplot(aggts(tourism.hts, level = 0)) + autolayer(aggts(visnightsfc, level = 0), lty = 2)
autoplot(aggts(tourism.hts, level = 1)) + autolayer(aggts(visnightsfc, level = 1), lty = 2)
autoplot(aggts(tourism.hts, level = 2)) + autolayer(aggts(visnightsfc, level = 2), lty = 2)
```

Do the forecasts look reasonable?

Now use optimally reconciled forecasts:

```r
visnightsfc2 <- forecast(tourism.hts, fmethod = "arima", h = 8)
autoplot(aggts(tourism.hts, level = 0)) + autolayer(aggts(visnightsfc2, level = 0), lty = 2)
autoplot(aggts(tourism.hts, level = 1)) + autolayer(aggts(visnightsfc2, level = 1), lty = 2)
autoplot(aggts(tourism.hts, level = 2)) + autolayer(aggts(visnightsfc2, level = 2), lty = 2)
autoplot(aggts(visnightsfc2, level = 0)) + autolayer(aggts(visnightsfc, level = 0), lty = 2)
autoplot(aggts(visnightsfc2, level = 1)) + autolayer(aggts(visnightsfc, level = 1), lty = 2)
autoplot(aggts(visnightsfc2, level = 2)) + autolayer(aggts(visnightsfc, level = 2), lty = 2)
```

What difference has the reconciliation step made?