Measuring time series characteristics
A few years ago, I was working on a project where we measured various characteristics of a time series and used the information to determine what forecasting method to apply or how to cluster the time series into meaningful groups. The two main papers to come out of that project were:
Wang, Smith and Hyndman (2006) "Characteristic-based clustering for time series data", Data Mining and Knowledge Discovery.
Wang, Smith-Miles and Hyndman (2009) "Rule induction for forecasting method selection: meta-learning the characteristics of univariate time series", Neurocomputing.
I’ve since had a lot of requests for the code, which one of my coauthors has been helpfully emailing to anyone who asked. But to make it easier, we thought it might be helpful if I posted some updated code here. This is not the same as the R code we used in the paper, as I’ve improved it in several ways (so it will give different results). If you just want the code, skip to the bottom of the post.
Finding the period of the data
Usually in time series work, we know the period of the data (if the observations are monthly, the period is 12, for example). But in this project, some of our data was of unknown period and we wanted a method to automatically determine the appropriate period. The method we used was based on local peaks and troughs in the ACF. But I’ve since devised a better approach (prompted by a question on crossvalidated.com) using an estimate of the spectral density:
findfrequency <- function(x)
{
  n <- length(x)
  x <- as.ts(x)
  # Remove trend from data
  x <- residuals(tslm(x ~ trend))
  # Compute spectrum by fitting ar model to largest section of x
  n.freq <- 500
  spec <- spec.ar(c(na.contiguous(x)), plot=FALSE, n.freq=n.freq)
  if(max(spec[["spec"]]) > 10) # Arbitrary threshold chosen by trial and error.
  {
    period <- floor(1/spec[["freq"]][which.max(spec[["spec"]])] + 0.5)
    if(period == Inf) # Find next local maximum
    {
      j <- which(diff(spec[["spec"]]) > 0)
      if(length(j) > 0)
      {
        nextmax <- j[1] + which.max(spec[["spec"]][(j[1]+1):n.freq])
        if(nextmax < length(spec[["freq"]]))
          period <- floor(1/spec[["freq"]][nextmax] + 0.5)
        else
          period <- 1L
      }
      else
        period <- 1L
    }
  }
  else
    period <- 1L

  return(as.integer(period))
}
The function is called findfrequency
because time series people often call the period of seasonality the “frequency” (which is of course highly confusing).
[Update: This function is now part of the forecast package.]
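For example, here is a quick sanity check (assuming the forecast package is loaded, since the function relies on tslm()):
library(forecast)
findfrequency(USAccDeaths)  # monthly accidental deaths: should detect a period of 12
findfrequency(rnorm(100))   # iid noise has no sharp spectral peak, so this should return 1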
Decomposing the data into trend and seasonal components
We needed a measure of the strength of trend and the strength of seasonality, and to do this we decomposed the data into trend, seasonal and error terms.
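In terms of that decomposition (x = trend + season + remainder), the measures() function further down computes the two strengths as variance ratios; stripped of its edge-case handling, the calculation is essentially
trend.strength  <- max(0, min(1, 1 - var(remainder)/var(trend + remainder)))
season.strength <- max(0, min(1, 1 - var(remainder)/var(season + remainder)))
so a series whose remainder is small relative to its trend (or seasonal) component scores close to 1.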
Because not all data could be decomposed additively, we first needed to apply an automated Box-Cox transformation. We tried a range of Box-Cox parameters on a grid, and selected the one which gave the most normal errors. That worked ok, but I’ve since found some papers that provide quite good automated Box-Cox algorithms that I’ve implemented in the forecast package. So this code uses Guerrero’s (1993) method instead.
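For example (Guerrero's method is the default in BoxCox.lambda(), so the method argument is shown only for clarity):
library(forecast)
lambda <- BoxCox.lambda(AirPassengers, method="guerrero")  # choose lambda automatically
transformed <- BoxCox(AirPassengers, lambda)               # apply the transformation
original <- InvBoxCox(transformed, lambda)                 # and back-transform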
For seasonal time series, we decomposed the transformed data using an stl decomposition with periodic seasonality.
For non-seasonal time series, we estimated the trend of the transformed data using penalized regression splines via the mgcv package.
decomp <- function(x, transform=TRUE)
{
  require(forecast)
  # Transform series
  if(transform & min(x, na.rm=TRUE) >= 0)
  {
    lambda <- BoxCox.lambda(na.contiguous(x))
    x <- BoxCox(x, lambda)
  }
  else
  {
    lambda <- NULL
    transform <- FALSE
  }
  # Seasonal data
  if(frequency(x) > 1)
  {
    x.stl <- stl(x, s.window="periodic", na.action=na.contiguous)
    trend <- x.stl[["time.series"]][,2]
    season <- x.stl[["time.series"]][,1]
    remainder <- x - trend - season
  }
  else # Nonseasonal data
  {
    require(mgcv)
    tt <- 1:length(x)
    trend <- rep(NA, length(x))
    trend[!is.na(x)] <- fitted(gam(x ~ s(tt)))
    season <- NULL
    remainder <- x - trend
  }
  return(list(x=x, trend=trend, season=season, remainder=remainder,
    transform=transform, lambda=lambda))
}
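As a quick check of the output (an illustrative example, not from the original code), the returned components line up as time series and can be plotted together. Note that the x component is the Box-Cox transformed series, not the raw data.
d <- decomp(AirPassengers)
plot(cbind(transformed=d$x, trend=d$trend, seasonal=d$season, remainder=d$remainder))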
Putting everything on a [0,1] scale
We wanted to measure a range of characteristics such as strength of seasonality, strength of trend, level of nonlinearity, skewness, kurtosis, serial correlation, self-similarity, level of chaoticity (is that a word?) and the periodicity of the data. But we wanted all these on the same scale, which meant mapping the natural range of each measure onto [0,1]. The following two functions were used to do this.
# f1 maps [0,infinity) to [0,1]
f1 <- function(x, a, b)
{
  eax <- exp(a*x)
  if (eax == Inf)
    f1eax <- 1
  else
    f1eax <- (eax-1)/(eax+b)
  return(f1eax)
}

# f2 maps [0,1] onto [0,1]
f2 <- function(x, a, b)
{
  eax <- exp(a*x)
  ea <- exp(a)
  return((eax-1)/(eax+b)*(ea+b)/(ea-1))
}
The values of a and b in each function were chosen so the measure had a 90th percentile of 0.10 when the data were iid standard normal, and a value of 0.9 using a well-known benchmark time series.
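That calibration code isn't included here, but the idea can be sketched as a small optimisation problem matching the two target values; q90 and benchmark.stat below are placeholders, not the numbers actually used.
q90 <- 2              # placeholder: 90th percentile of the raw measure over simulated iid N(0,1) series
benchmark.stat <- 20  # placeholder: raw measure computed on the benchmark series
objective <- function(par)
  (f1(q90, par[1], par[2]) - 0.10)^2 + (f1(benchmark.stat, par[1], par[2]) - 0.90)^2
optim(c(0.5, 3), objective)$par  # approximate (a, b)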
Calculating the measures
Now we are ready to calculate the measures on the original data, as well as on the adjusted data (after removing trend and seasonality).
measures <- function(x)
{
  require(forecast)

  N <- length(x)
  freq <- findfrequency(x)
  fx <- c(frequency=(exp((freq-1)/50)-1)/(1+exp((freq-1)/50)))
  x <- ts(x, f=freq)

  # Decomposition
  decomp.x <- decomp(x)

  # Adjust data
  if(freq > 1)
    fits <- decomp.x[["trend"]] + decomp.x[["season"]]
  else # Nonseasonal data
    fits <- decomp.x[["trend"]]
  adj.x <- decomp.x[["x"]] - fits + mean(decomp.x[["trend"]], na.rm=TRUE)

  # Backtransformation of adjusted data
  if(decomp.x[["transform"]])
    tadj.x <- InvBoxCox(adj.x, decomp.x[["lambda"]])
  else
    tadj.x <- adj.x

  # Trend and seasonal measures
  v.adj <- var(adj.x, na.rm=TRUE)
  if(freq > 1)
  {
    detrend <- decomp.x[["x"]] - decomp.x[["trend"]]
    deseason <- decomp.x[["x"]] - decomp.x[["season"]]
    trend <- ifelse(var(deseason, na.rm=TRUE) < 1e-10, 0,
      max(0, min(1, 1 - v.adj/var(deseason, na.rm=TRUE))))
    season <- ifelse(var(detrend, na.rm=TRUE) < 1e-10, 0,
      max(0, min(1, 1 - v.adj/var(detrend, na.rm=TRUE))))
  }
  else # Nonseasonal data
  {
    trend <- ifelse(var(decomp.x[["x"]], na.rm=TRUE) < 1e-10, 0,
      max(0, min(1, 1 - v.adj/var(decomp.x[["x"]], na.rm=TRUE))))
    season <- 0
  }

  m <- c(fx, trend, season)

  # Measures on original data
  xbar <- mean(x, na.rm=TRUE)
  s <- sd(x, na.rm=TRUE)

  # Serial correlation
  Q <- Box.test(x, lag=10)[["statistic"]]/(N*10)
  fQ <- f2(Q, 7.53, 0.103)

  # Nonlinearity
  p <- tseries::terasvirta.test(na.contiguous(x))[["statistic"]]
  fp <- f1(p, 0.069, 2.304)

  # Skewness
  sk <- abs(mean((x-xbar)^3, na.rm=TRUE)/s^3)
  fs <- f1(sk, 1.510, 5.993)

  # Kurtosis
  k <- mean((x-xbar)^4, na.rm=TRUE)/s^4
  fk <- f1(k, 2.273, 11567)

  # Hurst=d+0.5 where d is fractional difference.
  H <- fracdiff::fracdiff(na.contiguous(x), 0, 0)[["d"]] + 0.5

  # Lyapunov Exponent
  if(freq > N-10)
    stop("Insufficient data")
  Ly <- numeric(N-freq)
  for(i in 1:(N-freq))
  {
    idx <- order(abs(x[i] - x))
    idx <- idx[idx < (N-freq)]
    j <- idx[2]
    Ly[i] <- log(abs((x[i+freq] - x[j+freq])/(x[i]-x[j])))/freq
    if(is.na(Ly[i]) | Ly[i]==Inf | Ly[i]==-Inf)
      Ly[i] <- NA
  }
  Lyap <- mean(Ly, na.rm=TRUE)
  fLyap <- exp(Lyap)/(1+exp(Lyap))

  m <- c(m, fQ, fp, fs, fk, H, fLyap)

  # Measures on adjusted data
  xbar <- mean(tadj.x, na.rm=TRUE)
  s <- sd(tadj.x, na.rm=TRUE)

  # Serial
  Q <- Box.test(adj.x, lag=10)[["statistic"]]/(N*10)
  fQ <- f2(Q, 7.53, 0.103)

  # Nonlinearity
  p <- tseries::terasvirta.test(na.contiguous(adj.x))[["statistic"]]
  fp <- f1(p, 0.069, 2.304)

  # Skewness
  sk <- abs(mean((tadj.x-xbar)^3, na.rm=TRUE)/s^3)
  fs <- f1(sk, 1.510, 5.993)

  # Kurtosis
  k <- mean((tadj.x-xbar)^4, na.rm=TRUE)/s^4
  fk <- f1(k, 2.273, 11567)

  m <- c(m, fQ, fp, fs, fk)
  names(m) <- c("frequency", "trend", "seasonal",
    "autocorrelation", "non-linear", "skewness", "kurtosis",
    "Hurst", "Lyapunov",
    "dc autocorrelation", "dc non-linear", "dc skewness", "dc kurtosis")

  return(m)
}
Here is a quick example applied to Australian monthly gas production:
library(forecast)
measures(gas)
frequency trend seasonal autocorrelation
0.1096 0.9989 0.9337 0.9985
non-linear skewness kurtosis Hurst
0.4947 0.1282 0.0055 0.9996
Lyapunov dc autocorrelation dc non-linear dc skewness
0.5662 0.1140 0.0538 0.1743
dc kurtosis
0.9992
The function is far from perfect, and it is not hard to find examples where it fails. For example, it doesn't work with multiple seasonality; try measures(taylor) and check the seasonality. Also, I'm not convinced the kurtosis provides anything useful here, or that the skewness measure is done in the best way possible. But it was really a proof of concept, so we will leave it to others to revise and improve the code.
In our papers, we took the measures obtained using R and produced self-organizing maps using Viscovery. There is now a som package in R for that, so it might be possible to integrate that step into R as well. The dendrogram was generated in Matlab, although that could now also be done in R, using the ggdendro package for example.
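As a rough sketch of that last step (the choice of series and the default complete-linkage clustering here are arbitrary, and this is not the clustering used in the papers), the measures can be stacked into a matrix, clustered hierarchically, and drawn with ggdendro:
library(ggdendro)
series <- list(gas=gas, AirPassengers=AirPassengers, USAccDeaths=USAccDeaths, lynx=lynx)
M <- t(sapply(series, measures))  # one row of thirteen measures per series
hc <- hclust(dist(M))             # hierarchical clustering on Euclidean distances
ggdendrogram(hc)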