Stationary Univariate Time Series Models

Sebastian Fossati, University of Alberta

These slides are based on Eric Zivot's time series notes, available at: http://faculty.washington.edu/ezivot
Example 1: US Unemployment Rate [time series plot, 1950–2010]

Example 2: US Real GDP [time series plot, 1950–2010]

Example 3: US log Real GDP [time series plot, 1950–2010]

Example 4: US Real GDP Growth Rate [time series plot, 1950–2010]
Covariance Stationary Time Series

Stochastic process: a sequence of random variables ordered by time,
$$\{Y_t\} = \{\dots, Y_{-1}, Y_0, Y_1, \dots\}$$

Defn: $\{Y_t\}$ is covariance stationary if
- $E[Y_t] = \mu$ for all $t$
- $\mathrm{cov}(Y_t, Y_{t-j}) = E[(Y_t - \mu)(Y_{t-j} - \mu)] = \gamma_j$ for all $t$ and any $j$

Remarks:
- $\gamma_j$ = $j$th lag autocovariance; $\gamma_0 = \mathrm{var}(Y_t)$
- $\rho_j = \gamma_j / \gamma_0$ = $j$th lag autocorrelation
Example: White Noise ($WN(0, \sigma^2)$)
$$Y_t = \varepsilon_t, \quad E[\varepsilon_t] = 0, \quad \mathrm{var}(\varepsilon_t) = \sigma^2, \quad \mathrm{cov}(\varepsilon_t, \varepsilon_{t-j}) = 0$$

Example: Independent White Noise ($IWN(0, \sigma^2)$)
$$Y_t = \varepsilon_t, \quad \varepsilon_t \sim iid\,(0, \sigma^2)$$
$$E[Y_t] = 0, \quad \mathrm{var}(Y_t) = \sigma^2, \quad \gamma_j = 0 \ \text{for} \ j \neq 0$$

Example: Gaussian White Noise ($GWN(0, \sigma^2)$)
$$Y_t = \varepsilon_t, \quad \varepsilon_t \sim iid\ N(0, \sigma^2)$$
Gaussian White Noise

# simulate gaussian white noise process
y <- as.ts(rnorm(250))
# plot process
plot(y)
abline(h=0)

[time series plot of the simulated process]
Nonstationary Processes

Example: Deterministically trending process
$$Y_t = \beta_0 + \beta_1 t + \varepsilon_t, \quad \varepsilon_t \sim WN(0, \sigma^2)$$
$$E[Y_t] = \beta_0 + \beta_1 t \ \text{depends on} \ t$$

Note: A simple detrending transformation yields a stationary process:
$$X_t = Y_t - \beta_0 - \beta_1 t = \varepsilon_t$$

Example: Random Walk
$$Y_t = Y_{t-1} + \varepsilon_t, \quad \varepsilon_t \sim WN(0, \sigma^2), \quad Y_0 \ \text{is fixed}$$
$$Y_t = Y_0 + \sum_{j=1}^{t} \varepsilon_j \quad\Rightarrow\quad \mathrm{var}(Y_t) = \sigma^2 t \ \text{depends on} \ t$$

Note: A simple differencing transformation yields a stationary process:
$$\Delta Y_t = Y_t - Y_{t-1} = \varepsilon_t$$
Deterministically Trending Process

# simulate trend process
e <- rnorm(250)
beta0 <- 0; beta1 <- .1
t <- seq(1,250)
y <- as.ts(beta0+beta1*t+e)

[time series plot of the simulated process]
Random Walk

# simulate random walk process
e <- rnorm(250)
y <- as.ts(rep(0,250))
y[1] <- e[1]
for(i in 2:250){ y[i] <- y[i-1]+e[i] }

[time series plot of the simulated process]
Wold's Decomposition Theorem

Any covariance stationary time series $\{Y_t\}$ can be represented in the form
$$Y_t = \mu + \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}, \quad \varepsilon_t \sim WN(0, \sigma^2), \quad \psi_0 = 1, \quad \sum_{j=0}^{\infty} \psi_j^2 < \infty$$

Properties:
- $E[Y_t] = \mu$
- $\gamma_0 = \mathrm{var}(Y_t) = \sigma^2 \sum_{j=0}^{\infty} \psi_j^2 < \infty$
- $\gamma_j = E[(Y_t - \mu)(Y_{t-j} - \mu)] = \sigma^2 (\psi_j \psi_0 + \psi_{j+1} \psi_1 + \cdots) = \sigma^2 \sum_{k=0}^{\infty} \psi_k \psi_{k+j}$
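The moment formulas above can be checked numerically. The slides' code is in R; here is a small Python/NumPy sketch (not from the slides) that truncates the infinite sums at a large J, using the geometric weights $\psi_j = \phi^j$ with $\phi = 0.8$ as a test case:

```python
import numpy as np

# Numerical check of the Wold moment formulas, using as a test case the
# geometric weights psi_j = phi^j with phi = 0.8 and sigma^2 = 1.
phi, sigma2, J = 0.8, 1.0, 500            # J truncates the infinite sums
psi = phi ** np.arange(J)

gamma0 = sigma2 * np.sum(psi ** 2)              # sigma^2 * sum_j psi_j^2
gamma1 = sigma2 * np.sum(psi[:-1] * psi[1:])    # sigma^2 * sum_k psi_k psi_{k+1}

# closed-form values for this geometric psi sequence
gamma0_exact = sigma2 / (1 - phi ** 2)
gamma1_exact = phi * gamma0_exact
```

The truncated sums agree with the closed-form values to numerical precision because the weights decay geometrically.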
AutoRegressive Moving Average (ARMA) Models

Idea: Approximate the Wold form of a stationary time series by parsimonious parametric models (stochastic difference equations).

ARMA(p,q) model:
$$Y_t - \mu = \phi_1 (Y_{t-1} - \mu) + \cdots + \phi_p (Y_{t-p} - \mu) + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}, \quad \varepsilon_t \sim WN(0, \sigma^2)$$

Lag operator notation:
$$\phi(L)(Y_t - \mu) = \theta(L)\varepsilon_t$$
$$\phi(L) = 1 - \phi_1 L - \cdots - \phi_p L^p, \quad \theta(L) = 1 + \theta_1 L + \cdots + \theta_q L^q$$
ARMA(1,0) Model
$$Y_t = \phi Y_{t-1} + \varepsilon_t, \quad \varepsilon_t \sim WN(0, \sigma^2)$$

Solution by recursive substitution:
$$Y_t = \phi^{t+1} Y_{-1} + \phi^t \varepsilon_0 + \cdots + \phi \varepsilon_{t-1} + \varepsilon_t = \phi^{t+1} Y_{-1} + \sum_{i=0}^{t} \phi^i \varepsilon_{t-i} = \phi^{t+1} Y_{-1} + \sum_{i=0}^{t} \psi_i \varepsilon_{t-i}, \quad \psi_i = \phi^i$$

Alternatively, solving forward $j$ periods from time $t$:
$$Y_{t+j} = \phi^{j+1} Y_{t-1} + \phi^j \varepsilon_t + \cdots + \phi \varepsilon_{t+j-1} + \varepsilon_{t+j} = \phi^{j+1} Y_{t-1} + \sum_{i=0}^{j} \psi_i \varepsilon_{t+j-i}$$
Dynamic multiplier:
$$\frac{\partial Y_j}{\partial \varepsilon_0} = \frac{\partial Y_{t+j}}{\partial \varepsilon_t} = \phi^j = \psi_j$$

Impulse Response Function (IRF): plot $\psi_j$ vs. $j$.

Cumulative impact (up to horizon $j$): $\sum_{i=0}^{j} \psi_i$

Long-run cumulative impact:
$$\sum_{j=0}^{\infty} \psi_j = \psi(1) = \psi(L) \ \text{evaluated at} \ L = 1$$
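As an illustration (a Python sketch, not part of the slides), the IRF, cumulative IRF, and long-run impact of an AR(1) with $\phi = 0.75$ follow directly from $\psi_j = \phi^j$:

```python
import numpy as np

# IRF, cumulative IRF, and long-run impact for an AR(1) with phi = 0.75
phi = 0.75
j = np.arange(11)
irf = phi ** j                  # psi_j = phi^j
cum_irf = np.cumsum(irf)        # sum_{i=0}^{j} psi_i
long_run = 1.0 / (1.0 - phi)    # psi(1) = 1/(1 - phi) = 4
```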
AR(1), φ = 0.75 [IRF and cumulative IRF plots, horizons 0–10]
AR(1), φ = −0.75 [IRF oscillating between −1 and 1, and cumulative IRF plots, horizons 0–10]
Stability and Stationarity Conditions

If $|\phi| < 1$ then
$$\lim_{j \to \infty} \phi^j = \lim_{j \to \infty} \psi_j = 0$$
and the stationary solution (Wold form) for the AR(1) becomes
$$Y_t = \sum_{j=0}^{\infty} \phi^j \varepsilon_{t-j} = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}$$

This is a stable (non-explosive) solution. Note that
$$\psi(1) = \sum_{j=0}^{\infty} \phi^j = \frac{1}{1 - \phi} < \infty$$
If $\phi = 1$ then
$$Y_t = Y_0 + \sum_{j=1}^{t} \varepsilon_j, \quad \psi_j = 1, \quad \psi(1) = \infty$$
which is neither stationary nor stable.

AR(1) in Lag Operator Notation
$$(1 - \phi L) Y_t = \varepsilon_t$$

If $|\phi| < 1$ then
$$(1 - \phi L)^{-1} = \sum_{j=0}^{\infty} \phi^j L^j = 1 + \phi L + \phi^2 L^2 + \cdots$$
such that $(1 - \phi L)^{-1}(1 - \phi L) = 1$.
Trick to find the Wold form:
$$(1 - \phi L) Y_t = \varepsilon_t$$
$$Y_t = (1 - \phi L)^{-1} \varepsilon_t = \sum_{j=0}^{\infty} \phi^j L^j \varepsilon_t = \sum_{j=0}^{\infty} \phi^j \varepsilon_{t-j} = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}, \quad \psi_j = \phi^j$$
Moments of Stationary AR(1)

Mean-adjusted form:
$$Y_t - \mu = \phi (Y_{t-1} - \mu) + \varepsilon_t, \quad \varepsilon_t \sim WN(0, \sigma^2), \quad |\phi| < 1$$

Regression form:
$$Y_t = c + \phi Y_{t-1} + \varepsilon_t, \quad c = \mu(1 - \phi)$$

Trick for calculating moments: use the stationarity properties
$$E[Y_t] = E[Y_{t-j}] \ \text{for all} \ j, \quad \mathrm{cov}(Y_t, Y_{t-j}) = \mathrm{cov}(Y_{t-k}, Y_{t-k-j}) \ \text{for all} \ k, j$$
Mean of AR(1):
$$E[Y_t] = c + \phi E[Y_{t-1}] + E[\varepsilon_t] = c + \phi E[Y_t] \quad\Rightarrow\quad E[Y_t] = \frac{c}{1 - \phi} = \mu$$

Variance of AR(1):
$$\gamma_0 = \mathrm{var}(Y_t) = E[(Y_t - \mu)^2] = E[(\phi(Y_{t-1} - \mu) + \varepsilon_t)^2]$$
$$= \phi^2 E[(Y_{t-1} - \mu)^2] + 2\phi E[(Y_{t-1} - \mu)\varepsilon_t] + E[\varepsilon_t^2]$$
$$= \phi^2 \gamma_0 + \sigma^2 \quad \text{(by stationarity)}$$
$$\Rightarrow\quad \gamma_0 = \frac{\sigma^2}{1 - \phi^2}$$

Note: From the Wold representation,
$$\gamma_0 = \mathrm{var}\left(\sum_{j=0}^{\infty} \phi^j \varepsilon_{t-j}\right) = \sigma^2 \sum_{j=0}^{\infty} \phi^{2j} = \frac{\sigma^2}{1 - \phi^2}$$
Autocovariances and Autocorrelations

Trick: multiply $Y_t - \mu$ by $Y_{t-j} - \mu$ and take expectations:
$$\gamma_j = E[(Y_t - \mu)(Y_{t-j} - \mu)] = E[\phi(Y_{t-1} - \mu)(Y_{t-j} - \mu)] + E[\varepsilon_t (Y_{t-j} - \mu)] = \phi \gamma_{j-1} \quad \text{(by stationarity)}$$
$$\Rightarrow\quad \gamma_j = \phi^j \gamma_0 = \phi^j \frac{\sigma^2}{1 - \phi^2}$$

Autocorrelations:
$$\rho_j = \frac{\gamma_j}{\gamma_0} = \frac{\phi^j \gamma_0}{\gamma_0} = \phi^j = \psi_j$$

Note: for the AR(1), $\rho_j = \psi_j$. However, this is not true for general ARMA processes.

Autocorrelation Function (ACF): plot $\rho_j$ vs. $j$.
Simulated AR(1) Processes [four panels: φ = 0.99, φ = 0.90, φ = 0.75, φ = 0.50]
ACFs for AR(1) Processes [four panels: φ = 0.99, φ = 0.90, φ = 0.75, φ = 0.50]
Half-Life of AR(1): the lag at which the IRF decreases by one half
$$\phi^j = 0.5 \quad\Rightarrow\quad j \ln \phi = \ln(0.5) \quad\Rightarrow\quad j = \frac{\ln(0.5)}{\ln \phi}$$

 φ     half-life
 0.99  68.97
 0.90   6.58
 0.75   2.41
 0.50   1.00
 0.25   0.50

The half-life is a measure of the speed of mean reversion.
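The table values follow directly from the formula. A one-line Python helper (an illustrative sketch, not from the slides) reproduces them:

```python
import numpy as np

def half_life(phi):
    """Lag at which the AR(1) IRF phi^j falls to one half (0 < phi < 1)."""
    return np.log(0.5) / np.log(phi)
```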
ARMA(p, 0) Model

Mean-adjusted form:
$$Y_t - \mu = \phi_1 (Y_{t-1} - \mu) + \cdots + \phi_p (Y_{t-p} - \mu) + \varepsilon_t, \quad \varepsilon_t \sim WN(0, \sigma^2), \quad E[Y_t] = \mu$$

Lag operator notation:
$$\phi(L)(Y_t - \mu) = \varepsilon_t, \quad \phi(L) = 1 - \phi_1 L - \cdots - \phi_p L^p$$

Regression model formulation:
$$Y_t = c + \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} + \varepsilon_t$$
$$\phi(L) Y_t = c + \varepsilon_t, \quad c = \mu\,\phi(1)$$
Stability and Stationarity Conditions

Trick: write the $p$th order SDE as a 1st order vector SDE:
$$\begin{bmatrix} X_t \\ X_{t-1} \\ X_{t-2} \\ \vdots \\ X_{t-p+1} \end{bmatrix} = \begin{bmatrix} \phi_1 & \phi_2 & \cdots & \cdots & \phi_p \\ 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & \ddots & & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & & 1 & 0 \end{bmatrix} \begin{bmatrix} X_{t-1} \\ X_{t-2} \\ X_{t-3} \\ \vdots \\ X_{t-p} \end{bmatrix} + \begin{bmatrix} \varepsilon_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$
or
$$\xi_t = F\,\xi_{t-1} + v_t$$
where $\xi_t$ and $v_t$ are $(p \times 1)$ and $F$ is $(p \times p)$.
Use insights from the AR(1) to study the behavior of the VAR(1):
$$\xi_{t+j} = F^{j+1} \xi_{t-1} + F^j v_t + \cdots + F v_{t+j-1} + v_{t+j}, \quad F^j = F \cdot F \cdots F \ (j \ \text{times})$$

Intuition: Stability and stationarity require
$$\lim_{j \to \infty} F^j = 0$$
so that the initial value has no impact on the eventual level of the series.
Example: AR(2)
$$X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + \varepsilon_t$$
or
$$\begin{bmatrix} X_t \\ X_{t-1} \end{bmatrix} = \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} X_{t-1} \\ X_{t-2} \end{bmatrix} + \begin{bmatrix} \varepsilon_t \\ 0 \end{bmatrix}$$

Iterating $j$ periods out gives
$$\begin{bmatrix} X_{t+j} \\ X_{t+j-1} \end{bmatrix} = \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix}^{j+1} \begin{bmatrix} X_{t-1} \\ X_{t-2} \end{bmatrix} + \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix}^{j} \begin{bmatrix} \varepsilon_t \\ 0 \end{bmatrix} + \cdots + \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \varepsilon_{t+j-1} \\ 0 \end{bmatrix} + \begin{bmatrix} \varepsilon_{t+j} \\ 0 \end{bmatrix}$$
The first row gives $X_{t+j}$:
$$X_{t+j} = f_{11}^{(j+1)} X_{t-1} + f_{12}^{(j+1)} X_{t-2} + f_{11}^{(j)} \varepsilon_t + \cdots + f_{11}^{(1)} \varepsilon_{t+j-1} + \varepsilon_{t+j}$$
where $f_{11}^{(j)}$ = (1,1) element of $F^j$.

Note:
$$F^2 = \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} \phi_1^2 + \phi_2 & \phi_1 \phi_2 \\ \phi_1 & \phi_2 \end{bmatrix}$$

Result: The ARMA(p, 0) model is covariance stationary and has Wold representation
$$Y_t = \mu + \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}, \quad \psi_0 = 1$$
with $\psi_j$ = (1,1) element of $F^j$, provided all of the eigenvalues of $F$ have modulus less than 1.
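The companion-matrix result is easy to check numerically. A Python/NumPy sketch (illustrative, not from the slides) builds $F$ for an AR(2) with $\phi_1 = 0.6$, $\phi_2 = 0.2$ and reads off $\psi_j$ as the (1,1) element of $F^j$:

```python
import numpy as np

# psi_j as the (1,1) element of the companion matrix power F^j
phi1, phi2 = 0.6, 0.2
F = np.array([[phi1, phi2],
              [1.0,  0.0]])

psi = [np.linalg.matrix_power(F, j)[0, 0] for j in range(5)]
stable = np.max(np.abs(np.linalg.eigvals(F))) < 1   # stationarity condition
```

For example, $\psi_2 = \phi_1^2 + \phi_2 = 0.56$, matching the (1,1) element of $F^2$ above.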
Finding Eigenvalues

$\lambda$ is an eigenvalue of $F$ and $x$ is an eigenvector if
$$F x = \lambda x \quad\Leftrightarrow\quad (F - \lambda I_p) x = 0 \quad\Leftrightarrow\quad F - \lambda I_p \ \text{is singular} \quad\Leftrightarrow\quad \det(F - \lambda I_p) = 0$$

Example: AR(2)
$$\det(F - \lambda I_2) = \det\left( \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \right) = \det \begin{bmatrix} \phi_1 - \lambda & \phi_2 \\ 1 & -\lambda \end{bmatrix} = \lambda^2 - \phi_1 \lambda - \phi_2$$
The eigenvalues of $F$ solve the reverse characteristic equation
$$\lambda^2 - \phi_1 \lambda - \phi_2 = 0$$

Using the quadratic formula, the roots satisfy
$$\lambda_i = \frac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2}, \quad i = 1, 2$$

These roots may be real or complex. Complex roots induce periodic behavior in $Y_t$. Recall, if $\lambda_i$ is complex then
$$\lambda_i = a + bi, \quad a = R \cos(\theta), \quad b = R \sin(\theta), \quad R = \sqrt{a^2 + b^2} = \text{modulus}$$
To see why $|\lambda_i| < 1$ implies $\lim_{j \to \infty} F^j = 0$, consider the AR(2) with real-valued eigenvalues. By the spectral decomposition theorem,
$$F = T \Lambda T^{-1}, \quad \Lambda = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}, \quad T = \begin{bmatrix} t_{11} & t_{12} \\ t_{21} & t_{22} \end{bmatrix}, \quad T^{-1} = \begin{bmatrix} t^{11} & t^{12} \\ t^{21} & t^{22} \end{bmatrix}$$

Then
$$F^j = (T \Lambda T^{-1}) \cdots (T \Lambda T^{-1}) = T \Lambda^j T^{-1}$$
and, provided $|\lambda_1| < 1$ and $|\lambda_2| < 1$,
$$\lim_{j \to \infty} F^j = T \left( \lim_{j \to \infty} \Lambda^j \right) T^{-1} = 0$$
Note:
$$F^j = T \Lambda^j T^{-1} = \begin{bmatrix} t_{11} & t_{12} \\ t_{21} & t_{22} \end{bmatrix} \begin{bmatrix} \lambda_1^j & 0 \\ 0 & \lambda_2^j \end{bmatrix} \begin{bmatrix} t^{11} & t^{12} \\ t^{21} & t^{22} \end{bmatrix}$$
so that
$$f_{11}^{(j)} = (t_{11} t^{11}) \lambda_1^j + (t_{12} t^{21}) \lambda_2^j = c_1 \lambda_1^j + c_2 \lambda_2^j = \psi_j$$
where $c_1 + c_2 = 1$. Then,
$$\lim_{j \to \infty} \psi_j = \lim_{j \to \infty} \left( c_1 \lambda_1^j + c_2 \lambda_2^j \right) = 0$$
Examples of AR(2) Processes

AR(2) process with $\phi_1 = 0.6$ and $\phi_2 = 0.2$:
$$Y_t = 0.6 Y_{t-1} + 0.2 Y_{t-2} + \varepsilon_t, \quad \phi_1 + \phi_2 = 0.8 < 1, \quad F = \begin{bmatrix} 0.6 & 0.2 \\ 1 & 0 \end{bmatrix}$$

The eigenvalues are found using
$$\lambda_i = \frac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2}$$
$$\lambda_1 = \frac{0.6 + \sqrt{(0.6)^2 + 4(0.2)}}{2} = 0.84, \quad \lambda_2 = \frac{0.6 - \sqrt{(0.6)^2 + 4(0.2)}}{2} = -0.24$$
$$\psi_j = c_1 (0.84)^j + c_2 (-0.24)^j$$
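The eigenvalue calculation can be verified in Python (an illustrative check, not part of the slides), both by the quadratic formula and by a direct eigenvalue computation on the companion matrix:

```python
import numpy as np

# Verify the eigenvalues for phi1 = 0.6, phi2 = 0.2 two ways
phi1, phi2 = 0.6, 0.2
disc = phi1**2 + 4 * phi2
lam1 = (phi1 + np.sqrt(disc)) / 2      # quadratic formula
lam2 = (phi1 - np.sqrt(disc)) / 2

F = np.array([[phi1, phi2], [1.0, 0.0]])
eig = np.sort(np.linalg.eigvals(F))    # should match (lam2, lam1)
```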
AR(2) Process, φ₁ = 0.6, φ₂ = 0.2 [simulated series and ACF plots, lags 1–20]
AR(2) process with $\phi_1 = 0.5$ and $\phi_2 = -0.8$:
$$Y_t = 0.5 Y_{t-1} - 0.8 Y_{t-2} + \varepsilon_t, \quad \phi_1 + \phi_2 = -0.3 < 1, \quad F = \begin{bmatrix} 0.5 & -0.8 \\ 1 & 0 \end{bmatrix}$$

Note:
$$\phi_1^2 + 4\phi_2 = (0.5)^2 - 4(0.8) = -2.95 < 0 \quad\Rightarrow\quad \text{complex eigenvalues}$$

Then
$$\lambda_i = a \pm bi, \quad i = \sqrt{-1}, \quad a = \frac{\phi_1}{2}, \quad b = \frac{\sqrt{-(\phi_1^2 + 4\phi_2)}}{2}$$
Here
$$a = \frac{0.5}{2} = 0.25, \quad b = \frac{\sqrt{2.95}}{2} = 0.86 \quad\Rightarrow\quad \lambda_i = 0.25 \pm 0.86i$$
$$\text{modulus} = R = \sqrt{a^2 + b^2} = \sqrt{(0.25)^2 + (0.86)^2} \approx 0.895$$

Note: The IRF has the form
$$\psi_j = c_1 \lambda_1^j + c_2 \lambda_2^j \propto R^j \left[ \cos(\theta j) + \sin(\theta j) \right]$$
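A quick Python check of the complex-eigenvalue case (illustrative, not from the slides). Note that for an AR(2) with complex roots the modulus satisfies $R^2 = \lambda \bar{\lambda} = -\phi_2$, so here $R = \sqrt{0.8} \approx 0.894$; the slide's 0.895 comes from rounding $b$ to 0.86:

```python
import numpy as np

# Complex-eigenvalue case phi1 = 0.5, phi2 = -0.8
phi1, phi2 = 0.5, -0.8
F = np.array([[phi1, phi2], [1.0, 0.0]])
eig = np.linalg.eigvals(F)      # a complex conjugate pair
R = np.abs(eig[0])              # modulus; equals sqrt(-phi2) here
```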
AR(2) Process, φ₁ = 0.5, φ₂ = −0.8 [simulated series and ACF plots, lags 1–20]
Stationarity Conditions on the Lag Polynomial φ(L)

Consider the AR(2) model in lag operator notation:
$$(1 - \phi_1 L - \phi_2 L^2) X_t = \phi(L) X_t = \varepsilon_t$$

For any variable $z$, consider the characteristic equation
$$\phi(z) = 1 - \phi_1 z - \phi_2 z^2 = 0$$

By the fundamental theorem of algebra,
$$1 - \phi_1 z - \phi_2 z^2 = (1 - \lambda_1 z)(1 - \lambda_2 z)$$
so that
$$z_1 = \frac{1}{\lambda_1}, \quad z_2 = \frac{1}{\lambda_2}$$
are the roots of the characteristic equation. The values $\lambda_1$ and $\lambda_2$ are the eigenvalues of $F$.
Result: The inverses of the roots of the characteristic equation
$$\phi(z) = 1 - \phi_1 z - \phi_2 z^2 - \cdots - \phi_p z^p = 0$$
are the eigenvalues of the companion matrix $F$. Therefore, the AR(p) model is stable and stationary provided the roots of $\phi(z) = 0$ have modulus greater than unity (the roots lie outside the complex unit circle).

Remarks

1. The reverse characteristic equation for the AR(p) is
$$z^p - \phi_1 z^{p-1} - \phi_2 z^{p-2} - \cdots - \phi_{p-1} z - \phi_p = 0$$
This is the same polynomial equation used to find the eigenvalues of $F$.
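The roots-outside-the-unit-circle criterion translates directly into code. A Python sketch (illustrative; the function name is hypothetical) checks stationarity of an AR(p) by computing the roots of $\phi(z)$:

```python
import numpy as np

def is_stationary(phis):
    """True if all roots of phi(z) = 1 - phi_1 z - ... - phi_p z^p
    lie outside the unit circle."""
    coefs = np.concatenate(([1.0], -np.asarray(phis, dtype=float)))
    roots = np.roots(coefs[::-1])    # np.roots wants highest power first
    return bool(np.all(np.abs(roots) > 1))
```

Both AR(2) examples from the previous slides pass the check, while a unit-root or explosive AR(1) fails it.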
2. If the AR(p) is stationary, then
$$\phi(L) = 1 - \phi_1 L - \cdots - \phi_p L^p = (1 - \lambda_1 L)(1 - \lambda_2 L) \cdots (1 - \lambda_p L)$$
where $|\lambda_i| < 1$. Suppose all the $\lambda_i$ are real. Then
$$(1 - \lambda_i L)^{-1} = \sum_{j=0}^{\infty} \lambda_i^j L^j$$
$$\phi(L)^{-1} = (1 - \lambda_1 L)^{-1} \cdots (1 - \lambda_p L)^{-1} = \left( \sum_{j=0}^{\infty} \lambda_1^j L^j \right) \left( \sum_{j=0}^{\infty} \lambda_2^j L^j \right) \cdots \left( \sum_{j=0}^{\infty} \lambda_p^j L^j \right)$$

The Wold solution for $X_t$ may be found using
$$X_t = \phi(L)^{-1} \varepsilon_t = \left( \sum_{j=0}^{\infty} \lambda_1^j L^j \right) \cdots \left( \sum_{j=0}^{\infty} \lambda_p^j L^j \right) \varepsilon_t$$
3. A simple algorithm exists to determine the Wold form. To illustrate, consider the AR(2) model. By definition,
$$\phi(L)^{-1} = (1 - \phi_1 L - \phi_2 L^2)^{-1} = \psi(L), \quad \psi(L) = \sum_{j=0}^{\infty} \psi_j L^j$$
so that
$$1 = \phi(L)\psi(L) = (1 - \phi_1 L - \phi_2 L^2)(1 + \psi_1 L + \psi_2 L^2 + \cdots)$$

Collecting coefficients of powers of $L$ gives
$$1 = 1 + (\psi_1 - \phi_1) L + (\psi_2 - \phi_1 \psi_1 - \phi_2) L^2 + \cdots$$
Since all coefficients on positive powers of $L$ must equal zero, it follows that
$$\psi_1 = \phi_1$$
$$\psi_2 = \phi_1 \psi_1 + \phi_2$$
$$\psi_3 = \phi_1 \psi_2 + \phi_2 \psi_1$$
$$\vdots$$
$$\psi_j = \phi_1 \psi_{j-1} + \phi_2 \psi_{j-2}$$
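The recursion is a few lines of code. A Python sketch (illustrative; the function name is hypothetical) computes the first J Wold coefficients of an AR(2):

```python
# Wold coefficients of an AR(2) via the recursion
# psi_j = phi_1 * psi_{j-1} + phi_2 * psi_{j-2}, with psi_0 = 1, psi_1 = phi_1
def ar2_psi(phi1, phi2, J):
    psi = [1.0, phi1]
    for j in range(2, J):
        psi.append(phi1 * psi[j - 1] + phi2 * psi[j - 2])
    return psi[:J]
```

For $\phi_1 = 0.6$, $\phi_2 = 0.2$ this reproduces the (1,1) elements of the companion-matrix powers $F^j$.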
Moments of Stationary AR(p) Model
$$Y_t - \mu = \phi_1 (Y_{t-1} - \mu) + \cdots + \phi_p (Y_{t-p} - \mu) + \varepsilon_t, \quad \varepsilon_t \sim WN(0, \sigma^2)$$
or
$$Y_t = c + \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} + \varepsilon_t, \quad c = \mu(1 - \pi), \quad \pi = \phi_1 + \phi_2 + \cdots + \phi_p$$

Note: if $\pi = 1$ then $\phi(1) = 1 - \pi = 0$ and $z = 1$ is a root of $\phi(z) = 0$. In this case we say that the AR(p) process has a unit root and the process is nonstationary.
Straightforward algebra shows that
$$E[Y_t] = \mu$$
$$\gamma_0 = \phi_1 \gamma_1 + \phi_2 \gamma_2 + \cdots + \phi_p \gamma_p + \sigma^2$$
$$\gamma_j = \phi_1 \gamma_{j-1} + \phi_2 \gamma_{j-2} + \cdots + \phi_p \gamma_{j-p}$$
$$\rho_j = \phi_1 \rho_{j-1} + \phi_2 \rho_{j-2} + \cdots + \phi_p \rho_{j-p}$$

The recursive equations for $\rho_j$ are called the Yule-Walker equations.

Result: $(\gamma_0, \gamma_1, \dots, \gamma_{p-1})$ is determined from the first $p$ elements of the first column of the $(p^2 \times p^2)$ matrix $\sigma^2 [I_{p^2} - (F \otimes F)]^{-1}$, where $F$ is the state space companion matrix for the AR(p) model.
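The Kronecker-product result can be verified numerically. A Python sketch (illustrative, not from the slides) applies it to the AR(2) with $\phi_1 = 0.6$, $\phi_2 = 0.2$, $\sigma^2 = 1$, and cross-checks against the Yule-Walker solution $\rho_1 = \phi_1 / (1 - \phi_2)$:

```python
import numpy as np

# gamma_0 and gamma_1 from the first column of sigma^2 [I - (F kron F)]^{-1}
phi1, phi2, sigma2 = 0.6, 0.2, 1.0
F = np.array([[phi1, phi2], [1.0, 0.0]])
p = F.shape[0]

first_col = sigma2 * np.linalg.inv(np.eye(p**2) - np.kron(F, F))[:, 0]
gamma0, gamma1 = first_col[0], first_col[1]

# Yule-Walker cross-check: rho_1 = phi1/(1 - phi2) = 0.75,
# rho_2 = phi1*rho_1 + phi2 = 0.65, gamma_0 = 1/(1 - phi1*rho_1 - phi2*rho_2)
rho1 = phi1 / (1 - phi2)
```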
ARMA(0,1) Process (MA(1))
$$Y_t = \mu + \varepsilon_t + \theta \varepsilon_{t-1} = \mu + \theta(L)\varepsilon_t, \quad \theta(L) = 1 + \theta L, \quad \varepsilon_t \sim WN(0, \sigma^2)$$

Moments of MA(1)

Mean of MA(1):
$$E[Y_t] = \mu$$

Variance of MA(1):
$$\mathrm{var}(Y_t) = \gamma_0 = E[(Y_t - \mu)^2] = E[(\varepsilon_t + \theta \varepsilon_{t-1})^2] = \sigma^2 (1 + \theta^2)$$
Autocovariances and Autocorrelations
$$\gamma_1 = E[(Y_t - \mu)(Y_{t-1} - \mu)] = E[(\varepsilon_t + \theta \varepsilon_{t-1})(\varepsilon_{t-1} + \theta \varepsilon_{t-2})] = \sigma^2 \theta$$
$$\rho_1 = \frac{\gamma_1}{\gamma_0} = \frac{\theta}{1 + \theta^2}, \quad \gamma_j = 0 \ \text{for} \ j > 1$$

Remark: There is an identification problem for $-0.5 < \rho_1 < 0.5$: the values $\theta$ and $\theta^{-1}$ produce the same value of $\rho_1$. For example, $\theta = 0.5$ and $\theta^{-1} = 2$ both produce $\rho_1 = 0.4$.
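The identification problem is immediate from the formula for $\rho_1$. A one-line Python helper (illustrative, not from the slides) confirms that $\theta$ and $1/\theta$ give identical first autocorrelations:

```python
# First autocorrelation of an MA(1): rho_1 = theta / (1 + theta^2).
# Note rho_1 is unchanged when theta is replaced by 1/theta.
def ma1_rho1(theta):
    return theta / (1 + theta ** 2)
```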
Invertibility Condition: The MA(1) is invertible if $|\theta| < 1$.

Result: Invertible MA models have infinite-order AR representations:
$$(Y_t - \mu) = (1 + \theta L)\varepsilon_t = (1 - \theta^* L)\varepsilon_t, \quad \theta^* = -\theta, \quad |\theta| < 1$$
so that
$$(1 - \theta^* L)^{-1} (Y_t - \mu) = \varepsilon_t$$
$$\sum_{j=0}^{\infty} (\theta^*)^j L^j (Y_t - \mu) = \varepsilon_t$$
$$\varepsilon_t = (Y_t - \mu) + \theta^* (Y_{t-1} - \mu) + (\theta^*)^2 (Y_{t-2} - \mu) + \cdots$$
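Invertibility has a practical payoff: the unobserved shocks can be recovered from the observed series. A Python sketch (illustrative, not from the slides; the truncation length J is an assumption) simulates an invertible MA(1) and recovers a shock via the truncated AR(∞) sum:

```python
import numpy as np

# Recover eps_t from Y using the AR(infinity) weights (theta*)^j = (-theta)^j
rng = np.random.default_rng(0)
theta, mu, T, J = 0.5, 0.0, 1000, 50
eps = rng.standard_normal(T)
Y = mu + eps[1:] + theta * eps[:-1]           # Y[i] = eps[i+1] + theta*eps[i]

t = T - 2                                     # last index of Y
w = (-theta) ** np.arange(J)                  # truncated AR(inf) weights
eps_hat = np.sum(w * (Y[t - J + 1 : t + 1][::-1] - mu))
```

The truncation error is of order $|\theta|^J$, which is negligible here since $0.5^{50} \approx 10^{-15}$.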
Simulated MA(1) Processes [four panels: θ = 0.95, θ = 0.40, θ = −0.95, θ = −0.40]
ACFs for MA(1) Processes [four panels: θ = 0.95, θ = 0.40, θ = −0.95, θ = −0.40; lags 1–10]
Long-Run Variance

Let $Y_t$ be a stationary and ergodic time series with $E[Y_t] = \mu$. Anderson's central limit theorem (Hamilton (1994, pg. 195)) states
$$\sqrt{T}(\bar{Y} - \mu) \xrightarrow{d} N\left(0, \sum_{j=-\infty}^{\infty} \gamma_j\right)$$
or
$$\bar{Y} \stackrel{A}{\sim} N\left(\mu, \frac{1}{T} \sum_{j=-\infty}^{\infty} \gamma_j\right)$$

Hence, the long-run variance of $Y_t$ is $T$ times the asymptotic variance of $\bar{Y}$:
$$LRV(Y_t) = T \cdot \mathrm{avar}(\bar{Y}) = \sum_{j=-\infty}^{\infty} \gamma_j$$
Remark: Since $\gamma_{-j} = \gamma_j$, $LRV(Y_t)$ may alternatively be expressed as
$$LRV(Y_t) = \gamma_0 + 2 \sum_{j=1}^{\infty} \gamma_j$$

Example: MA(1)
$$Y_t = \mu + \varepsilon_t + \theta \varepsilon_{t-1}$$
For the MA(1) process, $\gamma_0 = \sigma^2(1 + \theta^2)$, $\gamma_1 = \sigma^2 \theta$, and $\gamma_j = 0$ for $j > 1$. As a result,
$$LRV(Y_t) = \sigma^2 (1 + \theta^2) + 2\sigma^2 \theta = \sigma^2 (1 + \theta)^2$$
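The MA(1) long-run variance calculation can be expressed as a small helper (a Python sketch, not from the slides), which makes the algebraic identity $\gamma_0 + 2\gamma_1 = \sigma^2(1+\theta)^2$ easy to verify:

```python
# Long-run variance of an MA(1): gamma_0 + 2*gamma_1 = sigma^2 * (1 + theta)^2
def ma1_lrv(theta, sigma2=1.0):
    gamma0 = sigma2 * (1 + theta ** 2)
    gamma1 = sigma2 * theta
    return gamma0 + 2 * gamma1
```

Note the edge case $\theta = -1$: the long-run variance is zero, since the spectral density at frequency zero vanishes.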