Stationary Univariate Time Series Models


Sebastian Fossati, University of Alberta

These slides are based on Eric Zivot's time series notes, available at: http://faculty.washington.edu/ezivot

Example 1: US Unemployment Rate [time series plot, 1950-2010]

Example 2: US Real GDP [time series plot, 1950-2010]

Example 3: US log Real GDP [time series plot, 1950-2010]

Example 4: US Real GDP Growth Rate [time series plot, 1950-2010]

Covariance Stationary Time Series

Stochastic Process: a sequence of random variables ordered by time,
$\{Y_t\} = \{\dots, Y_{-1}, Y_0, Y_1, \dots\}$

Defn: $\{Y_t\}$ is covariance stationary if
$E[Y_t] = \mu$ for all $t$
$\mathrm{cov}(Y_t, Y_{t-j}) = E[(Y_t - \mu)(Y_{t-j} - \mu)] = \gamma_j$ for all $t$ and any $j$

Remarks
$\gamma_j$ = $j$th lag autocovariance; $\gamma_0 = \mathrm{var}(Y_t)$
$\rho_j = \gamma_j / \gamma_0$ = $j$th lag autocorrelation
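These quantities are easy to estimate from data. A minimal R sketch (the simulated AR(1) series and the lag choice are arbitrary illustrations, not part of the slides):

# sample autocovariances and autocorrelations
set.seed(42)
y <- arima.sim(list(ar = 0.5), n = 250)  # arbitrary stationary series
acf(y, lag.max = 10, type = "covariance", plot = FALSE)  # estimates of gamma_j
acf(y, lag.max = 10, plot = FALSE)                       # estimates of rho_j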

Example: White Noise ($WN(0, \sigma^2)$)
$Y_t = \varepsilon_t$
$E[\varepsilon_t] = 0$, $\mathrm{var}(\varepsilon_t) = \sigma^2$, $\mathrm{cov}(\varepsilon_t, \varepsilon_{t-j}) = 0$

Example: Independent White Noise ($IWN(0, \sigma^2)$)
$Y_t = \varepsilon_t$, $\varepsilon_t \sim iid\,(0, \sigma^2)$
$E[Y_t] = 0$, $\mathrm{var}(Y_t) = \sigma^2$, $\gamma_j = 0$ for $j \neq 0$

Example: Gaussian White Noise ($GWN(0, \sigma^2)$)
$Y_t = \varepsilon_t$, $\varepsilon_t \sim iid\; N(0, \sigma^2)$

Gaussian White Noise

# simulate gaussian white noise process
y <- as.ts(rnorm(250))
# plot process
plot(y)
abline(h=0)

[plot: simulated Gaussian white noise, 250 observations]

Nonstationary Processes

Example: Deterministically trending process
$Y_t = \beta_0 + \beta_1 t + \varepsilon_t$, $\varepsilon_t \sim WN(0, \sigma^2)$
$E[Y_t] = \beta_0 + \beta_1 t$ depends on $t$

Note: A simple detrending transformation yields a stationary process:
$X_t = Y_t - \beta_0 - \beta_1 t = \varepsilon_t$

Example: Random Walk
$Y_t = Y_{t-1} + \varepsilon_t$, $\varepsilon_t \sim WN(0, \sigma^2)$, $Y_0$ is fixed
$Y_t = Y_0 + \sum_{j=1}^{t} \varepsilon_j$, $\mathrm{var}(Y_t) = \sigma^2 t$ depends on $t$

Note: A simple first-differencing transformation yields a stationary process:
$\Delta Y_t = Y_t - Y_{t-1} = \varepsilon_t$

Deterministically Trending Process

# simulate trend process
e <- rnorm(250)
beta0 <- 0; beta1 <- .1
t <- seq(1,250)
y <- as.ts(beta0+beta1*t+e)
# plot process
plot(y)

[plot: simulated deterministically trending series, 250 observations]

Random Walk

# simulate random walk process
e <- rnorm(250)
y <- as.ts(rep(0,250))
y[1] <- e[1]
for(i in 2:250){
  y[i] <- y[i-1]+e[i]
}
# plot process
plot(y)

[plot: simulated random walk, 250 observations]

Wold's Decomposition Theorem

Any covariance stationary time series $\{Y_t\}$ can be represented in the form
$Y_t = \mu + \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}$, $\varepsilon_t \sim WN(0, \sigma^2)$
$\psi_0 = 1$, $\sum_{j=0}^{\infty} \psi_j^2 < \infty$

Properties:
$E[Y_t] = \mu$
$\gamma_0 = \mathrm{var}(Y_t) = \sigma^2 \sum_{j=0}^{\infty} \psi_j^2 < \infty$
$\gamma_j = E[(Y_t - \mu)(Y_{t-j} - \mu)] = \sigma^2 (\psi_j + \psi_{j+1} \psi_1 + \cdots) = \sigma^2 \sum_{k=0}^{\infty} \psi_k \psi_{k+j}$

AutoRegressive Moving Average (ARMA) Models

Idea: Approximate the Wold form of a stationary time series by parsimonious parametric models (stochastic difference equations).

ARMA(p,q) model:
$Y_t - \mu = \phi_1 (Y_{t-1} - \mu) + \cdots + \phi_p (Y_{t-p} - \mu) + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \cdots + \theta_q \varepsilon_{t-q}$
$\varepsilon_t \sim WN(0, \sigma^2)$

Lag operator notation:
$\phi(L)(Y_t - \mu) = \theta(L) \varepsilon_t$
$\phi(L) = 1 - \phi_1 L - \cdots - \phi_p L^p$
$\theta(L) = 1 + \theta_1 L + \cdots + \theta_q L^q$

ARMA(1,0) Model

$Y_t = \phi Y_{t-1} + \varepsilon_t$, $\varepsilon_t \sim WN(0, \sigma^2)$

Solution by recursive substitution:
$Y_t = \phi^{t+1} Y_{-1} + \phi^t \varepsilon_0 + \cdots + \phi \varepsilon_{t-1} + \varepsilon_t = \phi^{t+1} Y_{-1} + \sum_{i=0}^{t} \phi^i \varepsilon_{t-i} = \phi^{t+1} Y_{-1} + \sum_{i=0}^{t} \psi_i \varepsilon_{t-i}$, $\psi_i = \phi^i$

Alternatively, solving forward $j$ periods from time $t$:
$Y_{t+j} = \phi^{j+1} Y_{t-1} + \phi^j \varepsilon_t + \cdots + \phi \varepsilon_{t+j-1} + \varepsilon_{t+j} = \phi^{j+1} Y_{t-1} + \sum_{i=0}^{j} \psi_i \varepsilon_{t+j-i}$

Dynamic Multiplier:
$\frac{\partial Y_j}{\partial \varepsilon_0} = \frac{\partial Y_{t+j}}{\partial \varepsilon_t} = \phi^j = \psi_j$

Impulse Response Function (IRF): plot $\psi_j$ vs. $j$

Cumulative impact (up to horizon $j$): $\sum_{i=0}^{j} \psi_i$

Long-run cumulative impact: $\sum_{j=0}^{\infty} \psi_j = \psi(1)$, i.e. $\psi(L)$ evaluated at $L = 1$
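For the AR(1), $\psi_j = \phi^j$, so the next two figures are easy to reproduce. A minimal R sketch (the value of $\phi$ and the horizon are arbitrary):

# IRF and cumulative IRF of an AR(1)
phi <- 0.75
j <- 0:10
psi <- phi^j                      # impulse responses psi_j = phi^j
par(mfrow = c(1, 2))
plot(j, psi, type = "h", xlab = "lags", main = "IRF")
plot(j, cumsum(psi), type = "h", xlab = "lags", main = "Cumulative IRF")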

AR(1), φ = 0.75 [plots: IRF and cumulative IRF, lags 0-10]

AR(1), φ = -0.75 [plots: oscillating IRF and cumulative IRF, lags 0-10]

Stability and Stationarity Conditions

If $|\phi| < 1$ then
$\lim_{j \to \infty} \phi^j = \lim_{j \to \infty} \psi_j = 0$
and the stationary solution (Wold form) for the AR(1) becomes
$Y_t = \sum_{j=0}^{\infty} \phi^j \varepsilon_{t-j} = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}$

This is a stable (non-explosive) solution.

Note that $\psi(1) = \sum_{j=0}^{\infty} \phi^j = \frac{1}{1 - \phi} < \infty$

If $\phi = 1$ then
$Y_t = Y_0 + \sum_{j=1}^{t} \varepsilon_j$, $\psi_j = 1$, $\psi(1) = \infty$
which is neither stationary nor stable.

AR(1) in Lag Operator Notation
$(1 - \phi L) Y_t = \varepsilon_t$

If $|\phi| < 1$ then
$(1 - \phi L)^{-1} = \sum_{j=0}^{\infty} \phi^j L^j = 1 + \phi L + \phi^2 L^2 + \cdots$
such that $(1 - \phi L)^{-1} (1 - \phi L) = 1$

Trick to find the Wold form:
$(1 - \phi L) Y_t = \varepsilon_t$
$Y_t = (1 - \phi L)^{-1} \varepsilon_t = \sum_{j=0}^{\infty} \phi^j L^j \varepsilon_t = \sum_{j=0}^{\infty} \phi^j \varepsilon_{t-j} = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}$, $\psi_j = \phi^j$
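The same $\psi$-weights can be generated numerically with R's stats::ARMAtoMA. A minimal sketch (the value of $\phi$ is arbitrary):

# psi-weights of a stationary AR(1): psi_j = phi^j
phi <- 0.75
psi <- ARMAtoMA(ar = phi, lag.max = 8)
all.equal(psi, phi^(1:8))  # TRUE: the weights are geometric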

Moments of Stationary AR(1)

Mean-adjusted form:
$Y_t - \mu = \phi (Y_{t-1} - \mu) + \varepsilon_t$, $\varepsilon_t \sim WN(0, \sigma^2)$, $|\phi| < 1$

Regression form:
$Y_t = c + \phi Y_{t-1} + \varepsilon_t$, $c = \mu (1 - \phi)$

Trick for calculating moments: use the stationarity properties
$E[Y_t] = E[Y_{t-j}]$ for all $j$
$\mathrm{cov}(Y_t, Y_{t-j}) = \mathrm{cov}(Y_{t-k}, Y_{t-k-j})$ for all $k, j$

Mean of AR(1)
$E[Y_t] = c + \phi E[Y_{t-1}] + E[\varepsilon_t] = c + \phi E[Y_t]$
$\Rightarrow E[Y_t] = \frac{c}{1 - \phi} = \mu$

Variance of AR(1)
$\gamma_0 = \mathrm{var}(Y_t) = E[(Y_t - \mu)^2] = E[(\phi(Y_{t-1} - \mu) + \varepsilon_t)^2]$
$= \phi^2 E[(Y_{t-1} - \mu)^2] + 2\phi E[(Y_{t-1} - \mu)\varepsilon_t] + E[\varepsilon_t^2]$
$= \phi^2 \gamma_0 + \sigma^2$ (by stationarity)
$\Rightarrow \gamma_0 = \frac{\sigma^2}{1 - \phi^2}$

Note: From the Wold representation,
$\gamma_0 = \mathrm{var}\left(\sum_{j=0}^{\infty} \phi^j \varepsilon_{t-j}\right) = \sigma^2 \sum_{j=0}^{\infty} \phi^{2j} = \frac{\sigma^2}{1 - \phi^2}$
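Both moments are easy to verify by simulation. A quick sketch (parameter values and sample size are arbitrary):

# check mu = c/(1 - phi) and gamma_0 = sigma^2/(1 - phi^2)
set.seed(1)
phi <- 0.75; c0 <- 1; sigma <- 1
y <- arima.sim(list(ar = phi), n = 1e5, sd = sigma) + c0 / (1 - phi)
mean(y)  # approx c0/(1 - phi) = 4
var(y)   # approx sigma^2/(1 - phi^2) = 2.286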

Autocovariances and Autocorrelations

Trick: multiply $Y_t - \mu$ by $Y_{t-j} - \mu$ and take expectations
$\gamma_j = E[(Y_t - \mu)(Y_{t-j} - \mu)] = E[\phi(Y_{t-1} - \mu)(Y_{t-j} - \mu)] + E[\varepsilon_t (Y_{t-j} - \mu)]$
$= \phi \gamma_{j-1}$ (by stationarity)
$\Rightarrow \gamma_j = \phi^j \gamma_0 = \phi^j \frac{\sigma^2}{1 - \phi^2}$

Autocorrelations:
$\rho_j = \frac{\gamma_j}{\gamma_0} = \frac{\phi^j \gamma_0}{\gamma_0} = \phi^j = \psi_j$

Note: for the AR(1), $\rho_j = \psi_j$. However, this is not true for general ARMA processes.

Autocorrelation Function (ACF): plot $\rho_j$ vs. $j$

Simulated AR(1) Processes [plots: four simulated series, 250 observations each, φ = 0.99, 0.90, 0.75, 0.50]

ACFs for AR(1) Processes [plots: sample ACFs, lags 1-10, for φ = 0.99, 0.90, 0.75, 0.50]

Half-Life of AR(1): the lag at which the IRF decreases by one half
$\phi^j = 0.5 \Rightarrow j \ln \phi = \ln(0.5) \Rightarrow j = \frac{\ln(0.5)}{\ln \phi}$

φ      half-life
0.99   68.97
0.90   6.58
0.75   2.41
0.50   1.00
0.25   0.50

The half-life is a measure of the speed of mean reversion.
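A one-line check of the table in R:

# half-life j = ln(0.5)/ln(phi) of an AR(1)
phi <- c(0.99, 0.90, 0.75, 0.50, 0.25)
round(log(0.5) / log(phi), 2)  # 68.97 6.58 2.41 1.00 0.50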

ARMA(p, 0) Model

Mean-adjusted form:
$Y_t - \mu = \phi_1 (Y_{t-1} - \mu) + \cdots + \phi_p (Y_{t-p} - \mu) + \varepsilon_t$
$\varepsilon_t \sim WN(0, \sigma^2)$, $E[Y_t] = \mu$

Lag operator notation:
$\phi(L)(Y_t - \mu) = \varepsilon_t$
$\phi(L) = 1 - \phi_1 L - \cdots - \phi_p L^p$

Regression model formulation:
$Y_t = c + \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} + \varepsilon_t$
$\phi(L) Y_t = c + \varepsilon_t$, $c = \mu \, \phi(1)$

Stability and Stationarity Conditions

Trick: write the pth-order SDE as a first-order vector SDE

$\begin{bmatrix} X_t \\ X_{t-1} \\ X_{t-2} \\ \vdots \\ X_{t-p+1} \end{bmatrix} = \begin{bmatrix} \phi_1 & \phi_2 & \cdots & \phi_{p-1} & \phi_p \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \begin{bmatrix} X_{t-1} \\ X_{t-2} \\ X_{t-3} \\ \vdots \\ X_{t-p} \end{bmatrix} + \begin{bmatrix} \varepsilon_t \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$

or
$\xi_t = F \xi_{t-1} + v_t$
where $\xi_t$ and $v_t$ are $(p \times 1)$ and $F$ is $(p \times p)$.

Use insights from the AR(1) to study the behavior of the VAR(1):
$\xi_{t+j} = F^{j+1} \xi_{t-1} + F^j v_t + \cdots + F v_{t+j-1} + v_{t+j}$
$F^j = F \cdot F \cdots F$ ($j$ times)

Intuition: stability and stationarity require
$\lim_{j \to \infty} F^j = 0$
so that the initial value has no impact on the eventual level of the series.

Example: AR(2)
$X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + \varepsilon_t$
or
$\begin{bmatrix} X_t \\ X_{t-1} \end{bmatrix} = \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} X_{t-1} \\ X_{t-2} \end{bmatrix} + \begin{bmatrix} \varepsilon_t \\ 0 \end{bmatrix}$

Iterating $j$ periods out gives
$\begin{bmatrix} X_{t+j} \\ X_{t+j-1} \end{bmatrix} = \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix}^{j+1} \begin{bmatrix} X_{t-1} \\ X_{t-2} \end{bmatrix} + \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix}^{j} \begin{bmatrix} \varepsilon_t \\ 0 \end{bmatrix} + \cdots + \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \varepsilon_{t+j-1} \\ 0 \end{bmatrix} + \begin{bmatrix} \varepsilon_{t+j} \\ 0 \end{bmatrix}$

The first row gives $X_{t+j}$:
$X_{t+j} = f_{11}^{(j+1)} X_{t-1} + f_{12}^{(j+1)} X_{t-2} + f_{11}^{(j)} \varepsilon_t + \cdots + f_{11}^{(1)} \varepsilon_{t+j-1} + \varepsilon_{t+j}$
where $f_{11}^{(j)}$ = (1,1) element of $F^j$.

Note:
$F^2 = \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} \phi_1^2 + \phi_2 & \phi_1 \phi_2 \\ \phi_1 & \phi_2 \end{bmatrix}$

Result: The ARMA(p, 0) model is covariance stationary and has Wold representation
$Y_t = \mu + \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}$, $\psi_0 = 1$
with $\psi_j$ = (1,1) element of $F^j$, provided all of the eigenvalues of $F$ have modulus less than 1.

Finding Eigenvalues

$\lambda$ is an eigenvalue of $F$ and $x$ is an eigenvector if
$Fx = \lambda x \iff (F - \lambda I_p) x = 0$
so that $F - \lambda I_p$ is singular and $\det(F - \lambda I_p) = 0$.

Example: AR(2)
$\det(F - \lambda I_2) = \det\left( \begin{bmatrix} \phi_1 & \phi_2 \\ 1 & 0 \end{bmatrix} - \begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \right) = \det \begin{bmatrix} \phi_1 - \lambda & \phi_2 \\ 1 & -\lambda \end{bmatrix} = \lambda^2 - \phi_1 \lambda - \phi_2$

The eigenvalues of $F$ solve the reverse characteristic equation
$\lambda^2 - \phi_1 \lambda - \phi_2 = 0$

Using the quadratic formula, the roots satisfy
$\lambda_i = \frac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2}$, $i = 1, 2$

These roots may be real or complex. Complex roots induce periodic behavior in $Y_t$.

Recall, if $\lambda_i$ is complex then $\lambda_i = a + bi$ with
$a = R \cos(\theta)$, $b = R \sin(\theta)$
$R = \sqrt{a^2 + b^2}$ = modulus

To see why $|\lambda_i| < 1$ implies $\lim_{j \to \infty} F^j = 0$, consider the AR(2) with real-valued eigenvalues. By the spectral decomposition theorem,
$F = T \Lambda T^{-1}$
$\Lambda = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$, $T = \begin{bmatrix} t_{11} & t_{12} \\ t_{21} & t_{22} \end{bmatrix}$, $T^{-1} = \begin{bmatrix} t^{11} & t^{12} \\ t^{21} & t^{22} \end{bmatrix}$

Then
$F^j = (T \Lambda T^{-1}) \cdots (T \Lambda T^{-1}) = T \Lambda^j T^{-1}$
and, provided $|\lambda_1| < 1$ and $|\lambda_2| < 1$,
$\lim_{j \to \infty} F^j = T \left( \lim_{j \to \infty} \Lambda^j \right) T^{-1} = 0$

Note:
$F^j = T \Lambda^j T^{-1} = \begin{bmatrix} t_{11} & t_{12} \\ t_{21} & t_{22} \end{bmatrix} \begin{bmatrix} \lambda_1^j & 0 \\ 0 & \lambda_2^j \end{bmatrix} \begin{bmatrix} t^{11} & t^{12} \\ t^{21} & t^{22} \end{bmatrix}$
so that
$f_{11}^{(j)} = (t_{11} t^{11}) \lambda_1^j + (t_{12} t^{21}) \lambda_2^j = c_1 \lambda_1^j + c_2 \lambda_2^j = \psi_j$
where $c_1 + c_2 = 1$. Then
$\lim_{j \to \infty} \psi_j = \lim_{j \to \infty} (c_1 \lambda_1^j + c_2 \lambda_2^j) = 0$

Examples of AR(2) Processes

AR(2) process with $\phi_1 = 0.6$ and $\phi_2 = 0.2$:
$Y_t = 0.6 Y_{t-1} + 0.2 Y_{t-2} + \varepsilon_t$
$\phi_1 + \phi_2 = 0.8 < 1$
$F = \begin{bmatrix} 0.6 & 0.2 \\ 1 & 0 \end{bmatrix}$

The eigenvalues are found using
$\lambda_i = \frac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2}$
$\lambda_1 = \frac{0.6 + \sqrt{(0.6)^2 + 4(0.2)}}{2} = 0.84$
$\lambda_2 = \frac{0.6 - \sqrt{(0.6)^2 + 4(0.2)}}{2} = -0.24$
$\psi_j = c_1 (0.84)^j + c_2 (-0.24)^j$
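The eigenvalues can be verified numerically with base R's eigen():

# eigenvalues of the AR(2) companion matrix, phi1 = 0.6, phi2 = 0.2
F <- matrix(c(0.6, 0.2,
              1,   0), nrow = 2, byrow = TRUE)
eigen(F)$values  # approx 0.84 and -0.24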

AR(2) Process, φ1 = 0.6, φ2 = 0.2 [plots: simulated series, 250 observations, and sample ACF, lags 1-20]

AR(2) process with $\phi_1 = 0.5$ and $\phi_2 = -0.8$:
$Y_t = 0.5 Y_{t-1} - 0.8 Y_{t-2} + \varepsilon_t$
$\phi_1 + \phi_2 = -0.3 < 1$
$F = \begin{bmatrix} 0.5 & -0.8 \\ 1 & 0 \end{bmatrix}$

Note:
$\phi_1^2 + 4\phi_2 = (0.5)^2 - 4(0.8) = -2.95 < 0 \Rightarrow$ complex eigenvalues

Then
$\lambda_i = a \pm bi$, $i = \sqrt{-1}$
$a = \frac{\phi_1}{2}$, $b = \frac{\sqrt{-(\phi_1^2 + 4\phi_2)}}{2}$

Here
$a = \frac{0.5}{2} = 0.25$, $b = \frac{\sqrt{2.95}}{2} = 0.86$
$\lambda_i = 0.25 \pm 0.86i$
modulus $= R = \sqrt{a^2 + b^2} = \sqrt{(0.25)^2 + (0.86)^2} = 0.895$

Note: The IRF has the form
$\psi_j = c_1 \lambda_1^j + c_2 \lambda_2^j \propto R^j [\cos(\theta j) + \sin(\theta j)]$
i.e. damped oscillations.
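The complex case is just as easy to confirm numerically:

# complex eigenvalues and their modulus, phi1 = 0.5, phi2 = -0.8
F <- matrix(c(0.5, -0.8,
              1,    0), nrow = 2, byrow = TRUE)
ev <- eigen(F)$values  # 0.25 + 0.86i and 0.25 - 0.86i
Mod(ev)                # approx 0.894: both eigenvalues inside the unit circle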

AR(2) Process, φ1 = 0.5, φ2 = -0.8 [plots: simulated series, 250 observations, and sample ACF, lags 1-20]

Stationarity Conditions on the Lag Polynomial φ(L)

Consider the AR(2) model in lag operator notation:
$(1 - \phi_1 L - \phi_2 L^2) X_t = \phi(L) X_t = \varepsilon_t$

For any variable $z$, consider the characteristic equation
$\phi(z) = 1 - \phi_1 z - \phi_2 z^2 = 0$

By the fundamental theorem of algebra,
$1 - \phi_1 z - \phi_2 z^2 = (1 - \lambda_1 z)(1 - \lambda_2 z)$
so that
$z_1 = \frac{1}{\lambda_1}$, $z_2 = \frac{1}{\lambda_2}$
are the roots of the characteristic equation. The values $\lambda_1$ and $\lambda_2$ are the eigenvalues of $F$.

Result: The inverses of the roots of the characteristic equation
$\phi(z) = 1 - \phi_1 z - \phi_2 z^2 - \cdots - \phi_p z^p = 0$
are the eigenvalues of the companion matrix $F$. Therefore, the AR(p) model is stable and stationary provided the roots of $\phi(z) = 0$ have modulus greater than unity (the roots lie outside the complex unit circle).

Remarks

1. The reverse characteristic equation for the AR(p) is
$z^p - \phi_1 z^{p-1} - \phi_2 z^{p-2} - \cdots - \phi_{p-1} z - \phi_p = 0$
This is the same polynomial equation used to find the eigenvalues of $F$.
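This stationarity check can be done with base R's polyroot(), which takes the coefficients of $\phi(z)$ in increasing order of powers. A small sketch for the AR(2) example with $\phi_1 = 0.6$, $\phi_2 = 0.2$:

# roots of phi(z) = 1 - 0.6 z - 0.2 z^2
z <- polyroot(c(1, -0.6, -0.2))
Mod(z)  # approx 1.19 and 4.19: outside the unit circle, so stationary
1 / z   # the inverses are the eigenvalues, approx 0.84 and -0.24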

2. If the AR(p) is stationary, then
$\phi(L) = 1 - \phi_1 L - \cdots - \phi_p L^p = (1 - \lambda_1 L)(1 - \lambda_2 L) \cdots (1 - \lambda_p L)$
where $|\lambda_i| < 1$. Suppose all the $\lambda_i$ are real. Then
$(1 - \lambda_i L)^{-1} = \sum_{j=0}^{\infty} \lambda_i^j L^j$
$\phi(L)^{-1} = (1 - \lambda_1 L)^{-1} \cdots (1 - \lambda_p L)^{-1} = \left( \sum_{j=0}^{\infty} \lambda_1^j L^j \right) \left( \sum_{j=0}^{\infty} \lambda_2^j L^j \right) \cdots \left( \sum_{j=0}^{\infty} \lambda_p^j L^j \right)$

The Wold solution for $X_t$ may be found using
$X_t = \phi(L)^{-1} \varepsilon_t = \left( \sum_{j=0}^{\infty} \lambda_1^j L^j \right) \cdots \left( \sum_{j=0}^{\infty} \lambda_p^j L^j \right) \varepsilon_t$

3. A simple algorithm exists to determine the Wold form. To illustrate, consider the AR(2) model. By definition,
$\phi(L)^{-1} = (1 - \phi_1 L - \phi_2 L^2)^{-1} = \psi(L)$, $\psi(L) = \sum_{j=0}^{\infty} \psi_j L^j$
$\Rightarrow 1 = \phi(L) \psi(L) = (1 - \phi_1 L - \phi_2 L^2)(1 + \psi_1 L + \psi_2 L^2 + \cdots)$

Collecting coefficients of the powers of $L$ gives
$1 = 1 + (\psi_1 - \phi_1) L + (\psi_2 - \phi_1 \psi_1 - \phi_2) L^2 + \cdots$

Since all coefficients on the powers of $L$ must be equal to zero, it follows that
$\psi_1 = \phi_1$
$\psi_2 = \phi_1 \psi_1 + \phi_2$
$\psi_3 = \phi_1 \psi_2 + \phi_2 \psi_1$
$\vdots$
$\psi_j = \phi_1 \psi_{j-1} + \phi_2 \psi_{j-2}$
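This recursion can be checked against stats::ARMAtoMA, which computes the same $\psi$-weights. A minimal sketch for the AR(2) example above:

# psi-weights of the AR(2) with phi1 = 0.6, phi2 = 0.2 via the recursion
phi1 <- 0.6; phi2 <- 0.2
psi <- numeric(10)
psi[1] <- phi1
psi[2] <- phi1 * psi[1] + phi2
for (j in 3:10) psi[j] <- phi1 * psi[j-1] + phi2 * psi[j-2]
all.equal(psi, ARMAtoMA(ar = c(phi1, phi2), lag.max = 10))  # TRUE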

Moments of the Stationary AR(p) Model

$Y_t - \mu = \phi_1 (Y_{t-1} - \mu) + \cdots + \phi_p (Y_{t-p} - \mu) + \varepsilon_t$
$\varepsilon_t \sim WN(0, \sigma^2)$
or
$Y_t = c + \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} + \varepsilon_t$
$c = \mu (1 - \pi)$, $\pi = \phi_1 + \phi_2 + \cdots + \phi_p$

Note: if $\pi = 1$ then $\phi(1) = 1 - \pi = 0$ and $z = 1$ is a root of $\phi(z) = 0$. In this case we say that the AR(p) process has a unit root and the process is nonstationary.

Straightforward algebra shows that
$E[Y_t] = \mu$
$\gamma_0 = \phi_1 \gamma_1 + \phi_2 \gamma_2 + \cdots + \phi_p \gamma_p + \sigma^2$
$\gamma_j = \phi_1 \gamma_{j-1} + \phi_2 \gamma_{j-2} + \cdots + \phi_p \gamma_{j-p}$
$\rho_j = \phi_1 \rho_{j-1} + \phi_2 \rho_{j-2} + \cdots + \phi_p \rho_{j-p}$

The recursive equations for $\rho_j$ are called the Yule-Walker equations.

Result: $(\gamma_0, \gamma_1, \dots, \gamma_{p-1})$ is determined from the first $p$ elements of the first column of the $(p^2 \times p^2)$ matrix $\sigma^2 [I_{p^2} - (F \otimes F)]^{-1}$, where $F$ is the companion matrix for the AR(p) model.
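A small R sketch of this result for the AR(2) example with $\phi_1 = 0.6$, $\phi_2 = 0.2$ and $\sigma^2 = 1$ (a sketch under those assumed values, using base R's kronecker and solve):

# (gamma_0, gamma_1) from sigma^2 [I_4 - (F kron F)]^{-1}
sigma2 <- 1
F <- matrix(c(0.6, 0.2,
              1,   0), nrow = 2, byrow = TRUE)
g <- sigma2 * solve(diag(4) - kronecker(F, F))[, 1]
g[1:2]       # gamma_0 and gamma_1
g[2] / g[1]  # rho_1 = 0.75, matches ARMAacf(ar = c(0.6, 0.2), lag.max = 1)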

ARMA(0,1) Process (MA(1))

$Y_t = \mu + \varepsilon_t + \theta \varepsilon_{t-1} = \mu + \theta(L) \varepsilon_t$
$\theta(L) = 1 + \theta L$, $\varepsilon_t \sim WN(0, \sigma^2)$

Moments of MA(1)

Mean of MA(1):
$E[Y_t] = \mu$

Variance of MA(1):
$\mathrm{var}(Y_t) = \gamma_0 = E[(Y_t - \mu)^2] = E[(\varepsilon_t + \theta \varepsilon_{t-1})^2] = \sigma^2 (1 + \theta^2)$

Autocovariances and Autocorrelations
$\gamma_1 = E[(Y_t - \mu)(Y_{t-1} - \mu)] = E[(\varepsilon_t + \theta \varepsilon_{t-1})(\varepsilon_{t-1} + \theta \varepsilon_{t-2})] = \sigma^2 \theta$
$\rho_1 = \frac{\gamma_1}{\gamma_0} = \frac{\theta}{1 + \theta^2}$
$\gamma_j = 0$, $j > 1$

Remark: There is an identification problem for $-0.5 < \rho_1 < 0.5$: the values $\theta$ and $\theta^{-1}$ produce the same value of $\rho_1$. For example, $\theta = 0.5$ and $\theta^{-1} = 2$ both produce $\rho_1 = 0.4$.
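A quick numeric check of the identification remark:

# theta and 1/theta imply the same first-order autocorrelation
rho1 <- function(theta) theta / (1 + theta^2)
rho1(0.5)  # 0.4
rho1(2)    # 0.4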

Invertibility Condition: the MA(1) is invertible if $|\theta| < 1$.

Result: Invertible MA models have infinite-order AR representations:
$(Y_t - \mu) = (1 + \theta L) \varepsilon_t = (1 - \theta^* L) \varepsilon_t$, $\theta^* = -\theta$
so that
$(1 - \theta^* L)^{-1} (Y_t - \mu) = \varepsilon_t$
$\sum_{j=0}^{\infty} (\theta^*)^j L^j (Y_t - \mu) = \varepsilon_t$
$\varepsilon_t = (Y_t - \mu) + \theta^* (Y_{t-1} - \mu) + (\theta^*)^2 (Y_{t-2} - \mu) + \cdots$

Simulated MA(1) Processes [plots: four simulated series, 250 observations each, θ = 0.95, 0.40, -0.95, -0.40]

ACFs for MA(1) Processes [plots: sample ACFs, lags 1-10, for θ = 0.95, 0.40, -0.95, -0.40]

Long-Run Variance

Let $Y_t$ be a stationary and ergodic time series with $E(Y_t) = \mu$. Anderson's central limit theorem (Hamilton, 1994, p. 195) states
$\sqrt{T} (\bar{Y} - \mu) \xrightarrow{d} N\left(0, \sum_{j=-\infty}^{\infty} \gamma_j\right)$
or
$\bar{Y} \overset{A}{\sim} N\left(\mu, \frac{1}{T} \sum_{j=-\infty}^{\infty} \gamma_j\right)$

Hence, the long-run variance of $Y_t$ is $T$ times the asymptotic variance of $\bar{Y}$:
$LRV(Y_t) = T \cdot \mathrm{avar}(\bar{Y}) = \sum_{j=-\infty}^{\infty} \gamma_j$

Remark: Since $\gamma_{-j} = \gamma_j$, $LRV(Y_t)$ may alternatively be expressed as
$LRV(Y_t) = \gamma_0 + 2 \sum_{j=1}^{\infty} \gamma_j$

Example: MA(1)
$Y_t = \mu + \varepsilon_t + \theta \varepsilon_{t-1}$
For the MA(1) process, $\gamma_0 = \sigma^2 (1 + \theta^2)$, $\gamma_1 = \sigma^2 \theta$, and $\gamma_j = 0$ for $j > 1$. As a result:
$LRV(Y_t) = \sigma^2 (1 + \theta^2) + 2 \sigma^2 \theta = \sigma^2 (1 + \theta)^2$
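The MA(1) formula can be verified by simulation. A minimal sketch (sample size, number of replications, and θ are arbitrary choices):

# Monte Carlo check: LRV of an MA(1) is sigma^2 * (1 + theta)^2
set.seed(123)
theta <- 0.5; n <- 500  # sigma^2 = 1 by default in arima.sim
ybar <- replicate(5000, mean(arima.sim(list(ma = theta), n = n)))
n * var(ybar)    # approx 2.25
(1 + theta)^2    # theoretical LRV with sigma^2 = 1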