R packages for forecast combinations
It has been well known since at least 1969, when Bates and Granger wrote their famous paper on “The Combination of Forecasts”, that combining forecasts often leads to better forecast accuracy. So it is helpful to have a couple of new R packages which do just that: opera and forecastHybrid.
opera
Opera stands for “Online Prediction by ExpeRt Aggregation”. It was written by Pierre Gaillard and Yannig Goude, and Pierre provides a nice introduction in the vignette. While it can be used to combine any sort of predictions, I will just consider simple univariate time series forecasts, using the monthly co2 data.
library(forecast)
library(ggplot2)
train <- window(co2, end=c(1990,12))
test <- window(co2, start=c(1991,1))
h <- length(test)
ETS <- forecast(ets(train), h=h)
ARIMA <- forecast(auto.arima(train, lambda=0), h=h)
STL <- stlf(train, lambda=0, h=h)
X <- cbind(ETS=ETS$mean, ARIMA=ARIMA$mean, STL=STL$mean)
df <- cbind(co2, X)
colnames(df) <- c("Data","ETS","ARIMA","STL")
autoplot(df) +
  xlab("Year") + ylab(expression("Atmospheric concentration of CO"[2]))
Here, ETS has done a particularly bad job of picking the trend, while the other two look ok.
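If you prefer numbers to eyeballing the plot, the accuracy() function from the forecast package compares each set of forecasts against the test set. A quick sketch:
# Test-set accuracy of each individual forecasting method
accuracy(ETS, test)["Test set", c("RMSE","MAPE")]
accuracy(ARIMA, test)["Test set", c("RMSE","MAPE")]
accuracy(STL, test)["Test set", c("RMSE","MAPE")]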
The mixture function from the opera package computes combination weights based on how well each forecasting method has done up to that point.
library(opera)
MLpol0 <- mixture(model = "MLpol", loss.type = "square")
weights <- predict(MLpol0, X, test, type='weights')
head(weights)
        ETS  ARIMA    STL
[1,] 0.3333 0.3333 0.3333
[2,] 0.5447 0.0000 0.4553
[3,] 0.6352 0.0000 0.3648
[4,] 0.5416 0.0000 0.4584
[5,] 0.0000 0.0000 1.0000
[6,] 0.0000 0.0000 1.0000
tail(weights)
      ETS ARIMA STL
[79,]   0     1   0
[80,]   0     1   0
[81,]   0     1   0
[82,]   0     1   0
[83,]   0     1   0
[84,]   0     1   0
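The changing weights are easier to see in a plot. A minimal sketch, turning the weight matrix above into a monthly time series aligned with the test set:
# One row of weights per test-set month, starting January 1991
wts <- ts(weights, start=c(1991,1), frequency=12)
autoplot(wts, facets=TRUE) +
  xlab("Year") + ylab("Weight")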
The mixture begins by weighting each forecast method equally, quickly drops ARIMA, and then switches to STL alone. By the end of the test set, however, all of the weight has shifted to ARIMA. Here are the resulting forecasts:
z <- ts(predict(MLpol0, X, test, type='response'), start=c(1991,1), freq=12)
df <- cbind(co2, z)
colnames(df) <- c("Data","Mixture")
autoplot(df) +
  xlab("Year") + ylab(expression("Atmospheric concentration of CO"[2]))
forecastHybrid
The forecastHybrid package from David Shaub and Peter Ellis fits multiple models from the forecast package and then combines them using either equal weights or weights based on in-sample errors. By default, the models combined are from the auto.arima, ets, nnetar, stlm and tbats functions. David Shaub provides a helpful vignette explaining how to use the package.
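If you only want some of these models, hybridModel() takes a models argument selecting which components to include. A hedged sketch; the single-letter model codes and the a.args list are assumptions on my part, so check ?hybridModel before relying on them:
library(forecastHybrid)
# Combine only auto.arima, ets and stlm, passing lambda=0 through to auto.arima
fit3 <- hybridModel(train, models="aes",
                    a.args=list(lambda=0),
                    weights="equal")
fc3 <- forecast(fit3, h=h)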
Here is an example using the same co2 data.
library(forecastHybrid)
fit1 <- hybridModel(train, weights="equal")
fit2 <- hybridModel(train, weights="insample")
fc1 <- forecast(fit1, h=h)
fc2 <- forecast(fit2, h=h)
autoplot(fc1) + ggtitle("Hybrid 1") + xlab("Year") +
  ylab(expression("Atmospheric concentration of CO"[2]))
Those prediction intervals look dodgy because they are way too conservative. The package is taking the widest possible intervals that include all the intervals produced by the individual models. So you only need one bad model, and the prediction intervals are screwed. To compute prediction intervals with the required coverage, it would be necessary to estimate the covariances between the different forecast errors, and then find the resulting variance expression for the linear combination of methods.
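To sketch the idea: if e is the vector of forecast errors from the component models and w the vector of combination weights, the variance of the combined error is w'Σw, where Σ is the covariance matrix of the errors. A rough illustration of my own, using in-sample errors as a crude proxy for out-of-sample forecast error covariances:
# In-sample errors of each component model on the original scale
# (assumes the fitted values stored in each forecast object are back-transformed)
err <- cbind(ETS   = train - ETS$fitted,
             ARIMA = train - ARIMA$fitted,
             STL   = train - STL$fitted)
Sigma <- cov(err, use="complete.obs")
w <- rep(1/3, 3)                # e.g. equal weights
sqrt(c(t(w) %*% Sigma %*% w))   # standard deviation of the combined error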
The combination point forecasts look much better:
df <- cbind(Data=co2, Hybrid1=fc1$mean, Hybrid2=fc2$mean)
autoplot(df) +
  xlab("Year") + ylab(expression("Atmospheric concentration of CO"[2]))
Note that the weights are not being updated, unlike with the opera package. In this particular example, the opera mixture and the equal-weight hybrid give similar accuracy, while the in-sample-weighted hybrid does noticeably worse:
mse <- c(Opera=mean((test-z)^2),
         Hybrid1=mean((test - fc1$mean)^2),
         Hybrid2=mean((test - fc2$mean)^2))
round(mse, 2)
  Opera Hybrid1 Hybrid2
   0.68    0.66    1.17
It should be noted, however, that the opera weights are updated using the past test data, while the forecastHybrid weights are based only on the training data. So this comparison is not entirely “fair”.
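One way to strengthen the forecastHybrid side of the comparison would be to base its weights on time-series cross-validation errors rather than in-sample errors. A hedged sketch: I am assuming the weights="cv.errors" option documented in more recent versions of the package, which can be slow and may not be available in older versions (see ?hybridModel):
# Weight the component models by cross-validated errors, if supported
fit_cv <- hybridModel(train, weights="cv.errors")
fc_cv <- forecast(fit_cv, h=h)
mean((test - fc_cv$mean)^2)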
For comparison, here are the results for the individual forecasting methods. The combinations comfortably beat ETS and STL, although a well-chosen ARIMA model on its own is competitive here:
mse2 <- c(ETS=mean((test-ETS$mean)^2),
          ARIMA=mean((test-ARIMA$mean)^2),
          STL=mean((test-STL$mean)^2))
round(mse2, 2)
  ETS ARIMA   STL
 1.34  0.68  2.04
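Putting both sets of results into a single table makes the comparison easier to scan:
# Combination and individual MSEs together, smallest first
round(sort(c(mse, mse2)), 2)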