MAT 3379 Winter 2016 Introduction to Time Series Analysis Study Guide for Midterm


You will be allowed to have one A4 sheet (one-sided) of notes. Date: Monday, February 29.

1 Topics

1. Evaluate the covariance function in simple models - see Q1, Q2, Q3 in Assignment 1.

2. Check whether an ARMA model is stationary and causal - see Q4 in Assignment 1.

3. Derive the linear representation for AR(1), ARMA(1,q) - Q1 in Assignment 2; Examples in Lecture Notes. IMPORTANT: "Derive" means that you have to start with the definition of the model and end up with the final formula, cf. Example 2.6 in Lecture Notes.

4. Calculate the autocovariance function for AR(1), AR(2), ARMA(1,1), ARMA(1,2), MA(q) using the linear representation or the recursive approach - Q1 in Assignment 2.

5. ARMA model identification from graphs of ACF and PACF - see Q2 in Assignment 2.

6. Derivation of the best linear predictor for AR(1) and AR(2) using the Yule-Walker procedure; calculation of the MSPE - Q4 in Assignment 2. IMPORTANT: "Derive" means that you have to start with the definition of the model and end up with the final formula, as I did in Section 3.1 in Lecture Notes in the general case.

7. Find the best linear predictor for AR(1) and AR(2) using the Yule-Walker procedure; calculate the MSPE - Q4 in Assignment 2. IMPORTANT: "Find" means that you can use Eqs. (9) and (10) in Lecture Notes, as I did in Example 3.1.

8. Find the best linear predictor for MA(1), MA(2), ARMA(1,1) and ARMA(1,2) models using the Yule-Walker procedure (for small n) - Q2, Q3 in Assignment 3. IMPORTANT: "Find" means that you can use Eqs. (9) and (10).

9. Simple calculations with the Durbin-Levinson algorithm; PACF - Q1, Q2 in Assignment 3.

10. Derive the formulas for the Yule-Walker estimators in AR(1), AR(2) and ARMA(1,1) models - Q7b) in Assignment 2. IMPORTANT: "Derive" means that you have to start with the definition of the model and end up with the final formula, as I did in Section 6.2 in Lecture Notes.

2 Details

Here, I combined the calculations scattered in Lecture Notes and Assignments.

2.1 Derive the linear representation

2.1.1 AR(1) [Example 2.5 in Lecture Notes]

AR(1) is defined by

  φ(B) X_t = Z_t,   (1)

where φ(z) = 1 - φz. Define

  χ(z) = 1/φ(z).

Now, the function 1/(1 - φz) has the following power series expansion:

  χ(z) = 1/φ(z) = Σ_{j=0}^∞ φ^j z^j.

This expansion makes sense whenever |φ| < 1. Take equation (1) and multiply both sides by χ(B):

  χ(B) φ(B) X_t = χ(B) Z_t,

since χ(z) φ(z) = 1 for all z. That is,

  X_t = χ(B) Z_t = Σ_{j=0}^∞ φ^j B^j Z_t = Σ_{j=0}^∞ φ^j Z_{t-j}.

The above formula gives a linear representation for AR(1) with ψ_j = φ^j. We note that the above computation makes sense whenever |φ| < 1.

2.1.2 ARMA(1,1) [Example 2.6 in Lecture Notes]

ARMA(1,1) is defined by

  φ(B) X_t = θ(B) Z_t,   (2)

where φ(z) = 1 - φz and θ(z) = 1 + θz. Define

  χ(z) = 1/φ(z) = Σ_{j=0}^∞ φ^j z^j.

Take equation (2) and multiply both sides by χ(B):

  χ(B) φ(B) X_t = χ(B) θ(B) Z_t,

since χ(z) φ(z) = 1 for all z. That is,

  X_t = χ(B) θ(B) Z_t = Σ_{j=0}^∞ φ^j B^j (1 + θB) Z_t = Σ_{j=0}^∞ φ^j Z_{t-j} + θ Σ_{j=0}^∞ φ^j Z_{t-j-1}.

Until now everything was almost the same as for AR(1). Now, we want X_t to have the form Σ_{j=0}^∞ ψ_j Z_{t-j}. That is,

  Σ_{j=0}^∞ ψ_j Z_{t-j} = Σ_{j=0}^∞ φ^j Z_{t-j} + θ Σ_{j=0}^∞ φ^j Z_{t-j-1}.

Re-write it as

  ψ_0 Z_t + Σ_{j=1}^∞ ψ_j Z_{t-j} = φ^0 Z_t + Σ_{j=1}^∞ (φ^j + θ φ^{j-1}) Z_{t-j}.

We can identify the coefficients as

  ψ_0 = 1,   ψ_j = φ^{j-1} (θ + φ),   j ≥ 1.

The above formula gives a linear representation for ARMA(1,1). The formula is obtained under the condition that |φ| < 1. Furthermore, it is also assumed that θ + φ ≠ 0; otherwise X_t = Z_t.

2.1.3 ARMA(1,2)

ARMA(1,2) is given by

  φ(B) X_t = θ(B) Z_t,   (3)

where φ(z) = 1 - φz and θ(z) = 1 + θ_1 z + θ_2 z^2. Define

  χ(z) = 1/φ(z) = Σ_{j=0}^∞ φ^j z^j.

Take equation (3) and multiply both sides by χ(B):

  χ(B) φ(B) X_t = χ(B) θ(B) Z_t,

since χ(z) φ(z) = 1 for all z. That is,

  X_t = χ(B) θ(B) Z_t = Σ_{j=0}^∞ φ^j B^j (1 + θ_1 B + θ_2 B^2) Z_t
      = Σ_{j=0}^∞ φ^j Z_{t-j} + θ_1 Σ_{j=0}^∞ φ^j Z_{t-j-1} + θ_2 Σ_{j=0}^∞ φ^j Z_{t-j-2}.

Now, we want X_t to have the form Σ_{j=0}^∞ ψ_j Z_{t-j}. That is,

  Σ_{j=0}^∞ ψ_j Z_{t-j} = Σ_{j=0}^∞ φ^j Z_{t-j} + θ_1 Σ_{j=0}^∞ φ^j Z_{t-j-1} + θ_2 Σ_{j=0}^∞ φ^j Z_{t-j-2}.

Re-write it as

  ψ_0 Z_t + ψ_1 Z_{t-1} + Σ_{j=2}^∞ ψ_j Z_{t-j} = φ^0 Z_t + (φ + θ_1) Z_{t-1} + Σ_{j=2}^∞ (φ^j + θ_1 φ^{j-1} + θ_2 φ^{j-2}) Z_{t-j}.

We can identify the coefficients as

  ψ_0 = 1,   ψ_1 = φ + θ_1,   ψ_j = φ^{j-2} (φ^2 + θ_1 φ + θ_2),   j ≥ 2.   (4)

The above formula gives a linear representation for ARMA(1,2). The formula is obtained under the condition that |φ| < 1.
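If you want to check the ARMA(1,1) identification ψ_0 = 1, ψ_j = φ^{j-1}(θ + φ) on a computer, here is a small Python sketch. It computes the ψ-weights independently, by long division of θ(z) by φ(z) (i.e. ψ_j = φ ψ_{j-1} + θ_j), and compares with the closed form. The values of φ and θ are arbitrary examples, not taken from the Lecture Notes.

```python
# Check the ARMA(1,1) linear representation psi_0 = 1, psi_j = phi**(j-1)*(theta+phi).
# phi and theta below are arbitrary example values.
phi, theta = 0.5, 0.3
n = 10

# Long division of theta(z) = 1 + theta*z by phi(z) = 1 - phi*z:
# psi_j = theta_j + phi * psi_{j-1}, with theta_0 = 1, theta_1 = theta, theta_j = 0 otherwise.
theta_coeffs = [1.0, theta]
psi = []
for j in range(n):
    t_j = theta_coeffs[j] if j < len(theta_coeffs) else 0.0
    psi.append(t_j + (phi * psi[j - 1] if j >= 1 else 0.0))

# Closed form from Section 2.1.2.
closed = [1.0] + [phi ** (j - 1) * (theta + phi) for j in range(1, n)]

for a, b in zip(psi, closed):
    assert abs(a - b) < 1e-12
```

The same long-division recursion works for ARMA(1,2) if you extend theta_coeffs to [1, θ_1, θ_2].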

2.1.4 ARMA(1,q)

The same idea as above.

2.2 Derive the autocovariance function for ARMA models

2.2.1 AR(1)

Using the linear representation [Example 4.10 in Lecture Notes]: We use the representation X_t = Σ_{j=0}^∞ φ^j Z_{t-j} and the general formula γ_X(h) = σ_Z^2 Σ_{j=0}^∞ ψ_j ψ_{j+h} to obtain

  γ_X(h) = σ_Z^2 φ^h / (1 - φ^2). (*)

Using the recursive method [Example 2.12 in Lecture Notes]: Take the AR(1) equation X_t = φ X_{t-1} + Z_t. Multiply both sides by X_{t-h} and apply the expected value to get

  E[X_t X_{t-h}] = φ E[X_{t-1} X_{t-h}] + E[Z_t X_{t-h}].

Since E[X_t] = 0, we have E[X_t X_{t-h}] = γ_X(h) and E[X_{t-1} X_{t-h}] = γ_X(h-1). Also, for all h ≥ 1 we can see that Z_t is independent of X_{t-h} = Σ_{j=0}^∞ φ^j Z_{t-h-j}. Hence,

  E[Z_t X_{t-h}] = E[Z_t] E[X_{t-h}] = 0.

(This is the whole trick; if you multiply by X_{t+h} instead, it will not work.) Hence, we obtain γ_X(h) = φ γ_X(h-1), or by induction

  γ_X(h) = φ^h γ_X(0),   h ≥ 1.

We need to start the recursion by computing γ_X(0) = Var(X_t) = σ_X^2. We have

  Var(X_t) = φ^2 Var(X_{t-1}) + Var(Z_t)

(again, X_{t-1} and Z_t are independent). Since X_t is stationary, we get

  σ_X^2 = φ^2 σ_X^2 + σ_Z^2.

Solving for σ_X^2: σ_X^2 = σ_Z^2 / (1 - φ^2). Finally,

  γ_X(h) = φ^h σ_Z^2 / (1 - φ^2).

(*) There was a typo in Lecture Notes.

2.2.2 AR(2)

Using the linear representation: We did not derive the linear representation for this model.
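The two routes above must agree, and that is easy to check numerically: truncate the series γ_X(h) = σ_Z^2 Σ_j ψ_j ψ_{j+h} with ψ_j = φ^j and compare with the closed form σ_Z^2 φ^h / (1 - φ^2). A minimal Python sketch (φ and σ_Z^2 are arbitrary example values):

```python
# Compare the truncated linear-representation sum with the closed-form
# AR(1) autocovariance. phi and sigma2 are arbitrary example values.
phi, sigma2 = 0.6, 2.0
N = 500  # truncation point; phi**(2N) is negligible

def gamma_series(h):
    # sigma_Z^2 * sum_j psi_j * psi_{j+h} with psi_j = phi**j
    return sigma2 * sum(phi ** j * phi ** (j + h) for j in range(N))

def gamma_closed(h):
    return sigma2 * phi ** h / (1 - phi ** 2)

for h in range(6):
    assert abs(gamma_series(h) - gamma_closed(h)) < 1e-9
```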

Using the recursive method: Take the AR(2) equation X_t = φ_1 X_{t-1} + φ_2 X_{t-2} + Z_t. Multiply both sides by X_{t-h} and apply the expected value to get

  E[X_t X_{t-h}] = φ_1 E[X_{t-1} X_{t-h}] + φ_2 E[X_{t-2} X_{t-h}] + E[Z_t X_{t-h}].

Since E[X_t] = 0, we have E[X_t X_{t-h}] = γ_X(h), E[X_{t-1} X_{t-h}] = γ_X(h-1) and E[X_{t-2} X_{t-h}] = γ_X(h-2). Also, for all h ≥ 1, Z_t is independent of X_{t-h}. Hence, for h ≥ 1,

  E[Z_t X_{t-h}] = E[Z_t] E[X_{t-h}] = 0.

Hence, we obtain

  γ_X(h) = φ_1 γ_X(h-1) + φ_2 γ_X(h-2).   (5)

We need to start the recursion by computing γ_X(0) = Var(X_t) = σ_X^2 and γ_X(1). To get γ_X(1) we use the AR(2) equation again, multiply by X_{t-1} and apply the expectation to get

  E[X_t X_{t-1}] = φ_1 E[X_{t-1}^2] + φ_2 E[X_{t-2} X_{t-1}] + E[Z_t X_{t-1}],

where the last term is 0, so that

  γ_X(1) = φ_1 γ_X(0) + φ_2 γ_X(1),   (6)

and

  γ_X(1) = φ_1 γ_X(0) / (1 - φ_2).   (7)

Now, we need to get γ_X(0). Take the AR(2) equation, multiply by X_t and apply the expectation to get

  E[X_t^2] = φ_1 E[X_{t-1} X_t] + φ_2 E[X_{t-2} X_t] + E[Z_t X_t].

Now,

  E[Z_t X_t] = E[Z_t (φ_1 X_{t-1} + φ_2 X_{t-2} + Z_t)] = σ_Z^2.

Hence,

  γ_X(0) = φ_1 γ_X(1) + φ_2 γ_X(2) + σ_Z^2.   (8)

We already know (equation (5) with h = 2)

  γ_X(2) = φ_1 γ_X(1) + φ_2 γ_X(0).

We plug this expression into (8) to get

  γ_X(0) = φ_1 γ_X(1) + φ_2 {φ_1 γ_X(1) + φ_2 γ_X(0)} + σ_Z^2.   (9)

Solving (8)-(9) we obtain

  γ_X(h) = φ_1 γ_X(h-1) + φ_2 γ_X(h-2),   h ≥ 2,
  γ_X(1) = σ_Z^2 φ_1 / [(1 + φ_2) {(1 - φ_2)^2 - φ_1^2}],
  γ_X(0) = σ_Z^2 (1 - φ_2) / [(1 + φ_2) {(1 - φ_2)^2 - φ_1^2}].

Note: to check that the last two equations make sense, take φ_2 = 0, φ_1 = φ. Then AR(2) reduces to AR(1), and the last two formulas should reduce to γ_X(0) and γ_X(1) for AR(1).
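Another way to sanity-check the AR(2) formulas is to solve the three moment equations (5), (6) and (8) as a linear system in (γ_X(0), γ_X(1), γ_X(2)) and compare with the closed forms. A Python sketch (φ_1, φ_2, σ_Z^2 are arbitrary example values chosen to give a stationary model):

```python
import numpy as np

# Solve the AR(2) moment equations numerically and compare with the
# closed-form gamma_X(0), gamma_X(1). phi1, phi2, sigma2 are example values.
phi1, phi2, sigma2 = 0.5, 0.3, 1.0

# Unknowns (g0, g1, g2); equations:
#   g1 = phi1*g0 + phi2*g1              (equation (6))
#   g2 = phi1*g1 + phi2*g0              (equation (5), h = 2)
#   g0 = phi1*g1 + phi2*g2 + sigma2     (equation (8))
A = np.array([[phi1, phi2 - 1.0, 0.0],
              [phi2, phi1, -1.0],
              [-1.0, phi1, phi2]])
b = np.array([0.0, 0.0, -sigma2])
g0, g1, g2 = np.linalg.solve(A, b)

den = (1 + phi2) * ((1 - phi2) ** 2 - phi1 ** 2)
assert abs(g0 - sigma2 * (1 - phi2) / den) < 1e-12
assert abs(g1 - sigma2 * phi1 / den) < 1e-12
```

Setting phi2 = 0 in this script reproduces the AR(1) values σ_Z^2/(1 - φ^2) and σ_Z^2 φ/(1 - φ^2), which is exactly the check suggested in the note above.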

2.2.3 MA(q)

Trivial.

2.2.4 ARMA(1,1) and ARMA(1,q)

Using the linear representation: For ARMA(1,1) (and in general for ARMA(1,q)) use the linear representation together with the general formula for the covariance of a linear process. Specifically, since ψ_0 = 1 and ψ_j = φ^{j-1} (φ + θ) for j ≥ 1, we have

  γ_X(0) = σ_Z^2 Σ_{j=0}^∞ ψ_j^2 = σ_Z^2 ψ_0^2 + σ_Z^2 Σ_{j=1}^∞ ψ_j^2 = σ_Z^2 [1 + (φ + θ)^2 Σ_{j=1}^∞ φ^{2(j-1)}] = σ_Z^2 [1 + (θ + φ)^2 / (1 - φ^2)].

Similarly,

  γ_X(1) = σ_Z^2 [(θ + φ) + φ (θ + φ)^2 / (1 - φ^2)].

You can obtain similar formulas for γ_X(h). You can also notice that γ_X(h) = φ^{h-1} γ_X(1). That is,

  γ_X(h) = σ_Z^2 φ^{h-1} [(θ + φ) + φ (θ + φ)^2 / (1 - φ^2)].

Using the recursive method: You take the defining equation X_t = φ X_{t-1} + Z_t + θ Z_{t-1}, multiply both sides by X_{t-h}, and then try to find a recursive equation, similar to AR(1) or AR(2).

2.3 The best linear predictor for AR(1) and AR(2) using the Yule-Walker procedure

Yule-Walker equation:

  Γ_n a_n = γ(n; k),   (10)

or, equivalently,

  a_n = Γ_n^{-1} γ(n; k).   (11)

Formula for MSPE_n(k):

  MSPE_n(k) = γ_X(0) - a_n^T γ(n; k).

2.3.1 Find P_n X_{n+1} for AR(1) [Example 3.1 in Lecture Notes]

The AR(1) model is given by X_t = φ X_{t-1} + Z_t, where the Z_t are iid with mean zero and variance σ_Z^2, and |φ| < 1. Hence, µ = E[X_t] = 0. Recall that

  γ_X(h) = σ_Z^2 φ^h / (1 - φ^2),   h ≥ 0.

Then

  γ(n; k) = γ(n; 1) = (γ_X(1), ..., γ_X(n))^T = σ_Z^2/(1 - φ^2) (φ, ..., φ^n)^T.

Equation (10) becomes

  σ_Z^2/(1 - φ^2) ·
  [ 1        φ        φ^2      ...  φ^{n-1} ]   [ a_1 ]
  [ φ        1        φ        ...  φ^{n-2} ] · [ ... ]  =  σ_Z^2/(1 - φ^2) (φ, ..., φ^n)^T.   (12)
  [ ...                                     ]   [ a_n ]
  [ φ^{n-1}  φ^{n-2}  φ^{n-3}  ...  1       ]

Now, either you invert the matrix on the left-hand side or you guess the solution a_n = (φ, 0, ..., 0)^T. You have to verify that the guessed solution solves (12). Hence, in the AR(1) case the prediction is

  P_n X_{n+1} = φ X_n.

2.3.2 Find P_n X_{n+2} for AR(1)

Now, we try to guess P_n X_{n+2}. If we happened to have the observations X_1, ..., X_{n+1}, then the prediction of the next value X_{n+2} would be φ X_{n+1}. However, we have only n observations, so in the latter formula we have to predict X_{n+1}. The prediction of X_{n+1} has the form φ X_n. Hence, we may guess that

  P_n X_{n+2} = φ (φ X_n) = φ^2 X_n.

You have to verify that this is the correct guess.

2.3.3 Find P_n X_{n+1} for AR(2)

The AR(2) model is X_t = φ_1 X_{t-1} + φ_2 X_{t-2} + Z_t. Hence, we may guess that the one-step prediction for AR(2) has the form

  P_n X_{n+1} = φ_1 X_n + φ_2 X_{n-1},

that is, (a_1, a_2, ..., a_n) = (φ_1, φ_2, 0, ..., 0). We verify it by checking the validity of the Yule-Walker equation:

  [ γ_X(0)    γ_X(1)    γ_X(2)    ...  γ_X(n-1) ]   [ a_1 ]   [ γ_X(1) ]
  [ γ_X(1)    γ_X(0)    γ_X(1)    ...  γ_X(n-2) ] · [ ... ] = [ ...    ]
  [ ...                                          ]   [ a_n ]   [ γ_X(n) ]
  [ γ_X(n-1)  γ_X(n-2)  γ_X(n-3)  ...  γ_X(0)   ]

We have to check whether our choice is correct. For the first and the second row on the left-hand side we get, respectively,

  φ_1 γ_X(0) + φ_2 γ_X(1) = γ_X(1);   φ_1 γ_X(1) + φ_2 γ_X(0) = γ_X(2).

Looking at Assignment 2, we can recognize that these are exactly the formulas that are valid for the covariances of the AR(2) model: the first equation is (6), while the second one is just the recursive formula for AR(2). That is, we verified that our guess was correct. You can check that all remaining rows on the left-hand side reduce to the recursive formula for AR(2).

2.3.4 MSPE_n(1) for AR(1)

  MSPE_n(1) = γ_X(0) - a_n^T γ(n; 1) = γ_X(0) - φ γ_X(1) = σ_Z^2/(1 - φ^2) - φ^2 σ_Z^2/(1 - φ^2) = σ_Z^2.
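Instead of guessing, you can also let the computer solve (12) and confirm that a_n = (φ, 0, ..., 0)^T and MSPE_n(1) = σ_Z^2. A Python sketch (φ, σ_Z^2 and n are arbitrary example values):

```python
import numpy as np

# Solve Gamma_n a_n = gamma(n;1) for AR(1) and check the guessed solution
# a_n = (phi, 0, ..., 0)^T. phi, sigma2, n are arbitrary example values.
phi, sigma2, n = 0.7, 1.0, 6

gamma = lambda h: sigma2 * phi ** abs(h) / (1 - phi ** 2)   # AR(1) autocovariance
Gamma_n = np.array([[gamma(i - j) for j in range(n)] for i in range(n)])
rhs = np.array([gamma(h) for h in range(1, n + 1)])          # gamma(n;1)

a_n = np.linalg.solve(Gamma_n, rhs)
assert np.allclose(a_n, [phi] + [0.0] * (n - 1))

# MSPE_n(1) = gamma_X(0) - a_n^T gamma(n;1) = sigma_Z^2
mspe = gamma(0) - a_n @ rhs
assert abs(mspe - sigma2) < 1e-10
```

Replacing rhs by (γ_X(2), ..., γ_X(n+1))^T gives the two-step problem of Section 2.3.2, with solution (φ^2, 0, ..., 0)^T.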

2.3.5 MSPE_n(2) for AR(1)

  MSPE_n(2) = γ_X(0) - a_n^T γ(n; 2) = γ_X(0) - φ^2 γ_X(2) = σ_Z^2/(1 - φ^2) - φ^4 σ_Z^2/(1 - φ^2) = σ_Z^2 (1 - φ^4)/(1 - φ^2).

2.3.6 MSPE_n(1) for AR(2)

  MSPE_n(1) = γ_X(0) - a_n^T γ(n; 1) = γ_X(0) - (φ_1, φ_2)(γ_X(1), γ_X(2))^T
            = γ_X(0) - φ_1 γ_X(1) - φ_2 γ_X(2)
            = γ_X(0) - φ_1 γ_X(1) - φ_2 (φ_1 γ_X(1) + φ_2 γ_X(0)),

where in the last line I used the recursive formula for AR(2). You can leave it as it is, or you can plug in the messy expressions for γ_X(0) and γ_X(1) for AR(2).

2.4 Durbin-Levinson algorithm for AR(p)

2.4.1 AR(1)

Find φ_11, φ_22. From Theorem 3.2 in Lecture Notes we have:

  φ_11 = ρ_X(1) = γ_X(1)/γ_X(0) = φ.

Furthermore,

  φ_22 = [γ_X(2) - φ_11 γ_X(1)] / v_1 = [γ_X(2) - φ γ_X(1)] / v_1.   (13)

We note that for the AR(1) model we have γ_X(2) = φ γ_X(1); hence φ_22 = 0.

2.4.2 AR(2)

Find φ_11, φ_22, φ_33. From Theorem 3.2 in Lecture Notes we have:

  φ_11 = ρ_X(1) = γ_X(1)/γ_X(0) = φ_1/(1 - φ_2).

Furthermore,

  φ_22 = [γ_X(2) - φ_11 γ_X(1)] / v_1.   (14)

The formulas for the covariances of AR(2) are

  γ_X(h) = φ_1 γ_X(h-1) + φ_2 γ_X(h-2),   h ≥ 2,   (15)
  γ_X(1) = φ_1 γ_X(0)/(1 - φ_2).

Use the above formulas (the first one with h = 2) and replace γ_X(2) in (14) to get

  φ_22 = [φ_1 γ_X(1) + φ_2 γ_X(0) - φ_11 γ_X(1)] / v_1   (16)
       = γ_X(0) [φ_1^2/(1 - φ_2) + φ_2 - φ_1^2/(1 - φ_2)^2] / v_1.   (17)

Now, from Theorem 3.2,

  v_1 = v_0 (1 - φ_11^2) = γ_X(0) (1 - φ_1^2/(1 - φ_2)^2).   (18)

If you combine (16)-(18) together, you will get

  φ_22 = φ_2.   (19)

Now, for φ_33: recall that this value represents the partial autocorrelation at lag 3. It was mentioned in class that for AR(2), the PACF vanishes after lag 2. Hence, we should get φ_33 = 0. We will verify this. From Theorem 3.2 we get

  φ_33 = [γ_X(3) - φ_21 γ_X(2) - φ_22 γ_X(1)] / v_2.

Use (15) with h = 3 to get

  φ_33 = [(φ_1 - φ_21) γ_X(2) + (φ_2 - φ_22) γ_X(1)] / v_2.

Keeping in mind (19), in order to show that φ_33 = 0 it is enough to show that φ_21 = φ_1. We use Theorem 3.2 again:

  φ_21 = φ_11 - φ_22 φ_11 = φ_11 (1 - φ_2) = [φ_1/(1 - φ_2)] (1 - φ_2) = φ_1.

That is, φ_33 = 0.
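The whole recursion is short enough to run by hand on a computer. Here is a Python sketch of the Durbin-Levinson recursion (the standard form, as in Theorem 3.2) applied to the AR(2) autocovariances; it confirms φ_11 = φ_1/(1 - φ_2), φ_22 = φ_2 and φ_33 = 0. The values of φ_1, φ_2, σ_Z^2 are arbitrary examples.

```python
# Durbin-Levinson recursion applied to AR(2) autocovariances.
# phi1, phi2, sigma2 are arbitrary example values (stationary region).
phi1, phi2, sigma2 = 0.5, 0.3, 1.0

# AR(2) autocovariances from Section 2.2.2, then the recursion (5).
den = (1 + phi2) * ((1 - phi2) ** 2 - phi1 ** 2)
g = [sigma2 * (1 - phi2) / den, sigma2 * phi1 / den]
for h in range(2, 6):
    g.append(phi1 * g[h - 1] + phi2 * g[h - 2])

def durbin_levinson(gamma, p):
    """Return the partial autocorrelations phi_11, ..., phi_pp."""
    phis = [gamma[1] / gamma[0]]              # phi_11 = rho(1)
    prev = [phis[0]]                          # (phi_{n-1,1}, ..., phi_{n-1,n-1})
    v = gamma[0] * (1 - phis[0] ** 2)         # v_1
    for n in range(2, p + 1):
        num = gamma[n] - sum(prev[j] * gamma[n - 1 - j] for j in range(n - 1))
        phi_nn = num / v
        # phi_{n,j} = phi_{n-1,j} - phi_nn * phi_{n-1,n-j}
        prev = [prev[j] - phi_nn * prev[n - 2 - j] for j in range(n - 1)] + [phi_nn]
        v *= 1 - phi_nn ** 2
        phis.append(phi_nn)
    return phis

p11, p22, p33 = durbin_levinson(g, 3)
assert abs(p11 - phi1 / (1 - phi2)) < 1e-12
assert abs(p22 - phi2) < 1e-12
assert abs(p33) < 1e-12
```

Running the same function on the AR(1) autocovariances γ_X(h) = σ_Z^2 φ^h/(1 - φ^2) gives φ_11 = φ and φ_22 = 0, matching Section 2.4.1.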