Durbin-Levinson recursive method


Durbin-Levinson recursive method

A recursive method for computing the prediction coefficients $\phi_n = (\phi_{n,1},\dots,\phi_{n,n})$ is useful because: it avoids inverting large matrices; when new data are acquired, one can update predictions instead of starting again from scratch; and the procedure is a method for computing important theoretical quantities.

Idea:
$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\big(X_1 - P_{L(X_2,\dots,X_n)} X_1\big)$$
Note: $X_1 - P_{L(X_2,\dots,X_n)} X_1$ is orthogonal to the previous term, since it is orthogonal to the whole subspace $L(X_2,\dots,X_n)$.

Durbin-Levinson, 2

$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\big(X_1 - P_{L(X_2,\dots,X_n)} X_1\big)$$

Check the orthogonality condition to find $a$. For $i > 1$:
$$\langle \hat X_{n+1} - X_{n+1},\, X_i \rangle = \langle P_{L(X_2,\dots,X_n)} X_{n+1} - X_{n+1},\, X_i \rangle + a\,\langle X_1 - P_{L(X_2,\dots,X_n)} X_1,\, X_i \rangle = 0 + 0,$$
the last step coming from the definitions of the projections ($i = 2,\dots,n$).

Durbin-Levinson, 3

$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\big(X_1 - P_{L(X_2,\dots,X_n)} X_1\big)$$

Check the orthogonality condition with $i = 1$:
$$\begin{aligned}
0 &= \langle \hat X_{n+1} - X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle \\
  &= \langle P_{L(X_2,\dots,X_n)} X_{n+1} - X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle + a\,\| X_1 - P_{L(X_2,\dots,X_n)} X_1 \|^2 \\
  &= -\langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle + a\,\| X_1 - P_{L(X_2,\dots,X_n)} X_1 \|^2
\end{aligned}$$
$$\Longrightarrow\quad a = \frac{\langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle}{\| X_1 - P_{L(X_2,\dots,X_n)} X_1 \|^2}.$$

Durbin-Levinson. 4

We tried
$$\hat X_{n+1} = P_{L(X_1,\dots,X_n)} X_{n+1} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\big(X_1 - P_{L(X_2,\dots,X_n)} X_1\big)$$
and found
$$a = \frac{\langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle}{\| X_1 - P_{L(X_2,\dots,X_n)} X_1 \|^2} = \langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle\, v_{n-1}^{-1}$$
with
$$v_{n-1} = E\big(|\hat X_n - X_n|^2\big) = \| X_n - P_{L(X_1,\dots,X_{n-1})} X_n \|^2 = \| X_1 - P_{L(X_2,\dots,X_n)} X_1 \|^2$$
(the last equality follows from stationarity).

We write
$$\hat X_{n+1} = \phi_{n,1} X_n + \dots + \phi_{n,n} X_1 = \sum_{j=1}^{n} \phi_{n,j} X_{n+1-j}$$
so that
$$P_{L(X_2,\dots,X_n)} X_{n+1} = \sum_{j=1}^{n-1} \phi_{n-1,j} X_{n+1-j},$$
and substituting we get a recursion.

Durbin-Levinson algorithm. 5

$$\hat X_{n+1} = \sum_{j=1}^{n} \phi_{n,j} X_{n+1-j} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\big(X_1 - P_{L(X_2,\dots,X_n)} X_1\big)$$
Hence
$$\phi_{n,n} = a = \langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle\, v_{n-1}^{-1} = \Big(\gamma(n) - \sum_{j=1}^{n-1} \phi_{n-1,j}\,\gamma(n-j)\Big)\, v_{n-1}^{-1}.$$

Durbin-Levinson algorithm. 6

Then from
$$\sum_{j=1}^{n} \phi_{n,j} X_{n+1-j} = \sum_{j=1}^{n-1} \phi_{n-1,j} X_{n+1-j} + a\Big(X_1 - \sum_{j=1}^{n-1} \phi_{n-1,j} X_{j+1}\Big)
= \sum_{j=1}^{n-1} \phi_{n-1,j} X_{n+1-j} + a\Big(X_1 - \sum_{k=1}^{n-1} \phi_{n-1,n-k} X_{n+1-k}\Big)$$
one sees
$$\phi_{n,j} = \phi_{n-1,j} - a\,\phi_{n-1,n-j} = \phi_{n-1,j} - \phi_{n,n}\,\phi_{n-1,n-j}, \qquad j = 1,\dots,n-1.$$
We also need a recursive procedure for $v_n$.

Durbin-Levinson algorithm. 7

$$\begin{aligned}
v_n &= E\big(|\hat X_{n+1} - X_{n+1}|^2\big) = \gamma(0) - \sum_{j=1}^{n} \phi_{n,j}\,\gamma(j) \\
    &= \gamma(0) - \phi_{n,n}\,\gamma(n) - \sum_{j=1}^{n-1} \big(\phi_{n-1,j} - \phi_{n,n}\,\phi_{n-1,n-j}\big)\,\gamma(j) \\
    &= \gamma(0) - \sum_{j=1}^{n-1} \phi_{n-1,j}\,\gamma(j) - \phi_{n,n}\Big(\gamma(n) - \sum_{j=1}^{n-1} \phi_{n-1,n-j}\,\gamma(j)\Big) \\
    &= v_{n-1} - \phi_{n,n}\,\big(\phi_{n,n}\, v_{n-1}\big) = v_{n-1}\big(1 - \phi_{n,n}^2\big).
\end{aligned}$$

The terms $\gamma(n) - \sum_{j=1}^{n-1} \phi_{n-1,n-j}\,\gamma(j)$ and $\phi_{n,n}\, v_{n-1}$ are equal because of the definition of $\phi_{n,n}$ (note that $\sum_{j=1}^{n-1} \phi_{n-1,n-j}\,\gamma(j) = \sum_{j=1}^{n-1} \phi_{n-1,j}\,\gamma(n-j)$).

The final formula $v_n = \big(1 - \phi_{n,n}^2\big) v_{n-1}$ shows that $\phi_{n,n}$ determines the decrease of the prediction error as $n$ increases.

Durbin-Levinson algorithm. Summary

$$v_0 = E\big(|X_1 - \hat X_1|^2\big) = E\big(|X_1|^2\big) = \gamma(0)$$
$$\phi_{1,1} = \frac{\gamma(1)}{v_0} = \rho(1)$$
$$v_1 = \big(1 - \phi_{1,1}^2\big) v_0 = \gamma(0)\big(1 - \rho(1)^2\big)$$
and, in general,
$$\phi_{n,n} = \Big(\gamma(n) - \sum_{j=1}^{n-1} \phi_{n-1,j}\,\gamma(n-j)\Big)\, v_{n-1}^{-1}$$
$$\phi_{n,j} = \phi_{n-1,j} - \phi_{n,n}\,\phi_{n-1,n-j}, \qquad j = 1,\dots,n-1$$
$$v_n = \big(1 - \phi_{n,n}^2\big) v_{n-1}.$$

One could divide everything by $\gamma(0)$ and work with the ACF instead of the ACVF.
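The summary translates directly into a short program. The following is a minimal sketch in Python/NumPy (the function name `durbin_levinson` and its interface are not from the slides): it takes the autocovariances $\gamma(0),\dots,\gamma(N)$ and returns the coefficients $\phi_{n,j}$ and the prediction errors $v_n$.

```python
import numpy as np

def durbin_levinson(gamma):
    """Durbin-Levinson recursion.

    gamma : sequence of autocovariances gamma(0), ..., gamma(N).
    Returns (phi, v) where phi[n - 1] = [phi_{n,1}, ..., phi_{n,n}] are the
    coefficients of the best linear predictor of X_{n+1} based on X_n, ..., X_1,
    and v[n] is the mean squared prediction error E|X_{n+1} - Xhat_{n+1}|^2.
    """
    gamma = np.asarray(gamma, dtype=float)
    N = len(gamma) - 1
    v = np.zeros(N + 1)
    v[0] = gamma[0]                                  # v_0 = gamma(0)
    phi, prev = [], np.array([])
    for n in range(1, N + 1):
        # phi_{n,n} = (gamma(n) - sum_{j<n} phi_{n-1,j} gamma(n-j)) / v_{n-1}
        phi_nn = (gamma[n] - sum(prev[j - 1] * gamma[n - j] for j in range(1, n))) / v[n - 1]
        cur = np.empty(n)
        # phi_{n,j} = phi_{n-1,j} - phi_{n,n} phi_{n-1,n-j},  j = 1, ..., n-1
        for j in range(1, n):
            cur[j - 1] = prev[j - 1] - phi_nn * prev[n - j - 1]
        cur[n - 1] = phi_nn
        v[n] = v[n - 1] * (1.0 - phi_nn ** 2)        # v_n = (1 - phi_{n,n}^2) v_{n-1}
        phi.append(cur)
        prev = cur
    return phi, v
```

The last entry of each row, `phi[n - 1][-1]` $= \phi_{n,n}$, is exactly the partial autocorrelation $\alpha(n)$ discussed later in these slides.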

Durbin-Levinson algorithm for AR(1)

$X_t$ stationary with $X_t = \phi X_{t-1} + Z_t$, $Z_t \sim WN(0, \sigma^2)$ and $E(X_s Z_t) = 0$ if $s < t$
$$\Longrightarrow\quad \gamma(h) = \frac{\sigma^2 \phi^h}{1 - \phi^2}.$$

$$v_0 = \frac{\sigma^2}{1 - \phi^2}, \qquad \phi_{1,1} = \phi, \qquad v_1 = \sigma^2,$$
$$\phi_{2,2} = \Big[\frac{\sigma^2 \phi^2}{1 - \phi^2} - \phi\,\frac{\sigma^2 \phi}{1 - \phi^2}\Big]\, v_1^{-1} = 0, \qquad \phi_{2,1} = \phi_{1,1}, \qquad v_2 = v_1,$$
and in general
$$\phi_{n,1} = \phi, \qquad \phi_{n,j} = 0 \;\;(j > 1), \qquad v_n = v_1 = \sigma^2.$$
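A quick numerical check of these AR(1) values, reusing the `durbin_levinson` sketch from the Summary (the values $\phi = 0.7$, $\sigma^2 = 1$ are arbitrary):

```python
import numpy as np

phi_true, sigma2 = 0.7, 1.0
h = np.arange(6)
gamma = sigma2 * phi_true ** h / (1 - phi_true ** 2)   # gamma(h) = sigma^2 phi^h / (1 - phi^2)

phi, v = durbin_levinson(gamma)
print(phi[0])   # [0.7]                              -> phi_{1,1} = phi
print(phi[4])   # approx [0.7, 0, 0, 0, 0]           -> phi_{n,1} = phi, phi_{n,j} = 0 for j > 1
print(v)        # approx [1.96, 1, 1, 1, 1, 1]       -> v_0 = sigma^2/(1 - phi^2), v_n = sigma^2 for n >= 1
```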

Durbin-Levinson algorithm for MA(1)

$X_t = Z_t - \vartheta Z_{t-1}$, $Z_t \sim WN(0, \sigma^2)$, so that $\gamma(0) = \sigma^2(1 + \vartheta^2)$, $\gamma(1) = -\sigma^2\vartheta$ and $\gamma(h) = 0$ for $h > 1$.

$$v_0 = \sigma^2(1 + \vartheta^2), \qquad \phi_{1,1} = \frac{-\vartheta}{1 + \vartheta^2},$$
$$v_1 = \frac{\sigma^2(1 + \vartheta^2 + \vartheta^4)}{1 + \vartheta^2}, \qquad \phi_{2,2} = \frac{-\vartheta^2}{1 + \vartheta^2 + \vartheta^4},$$
$$v_2 = \frac{\sigma^2(1 + \vartheta^2 + \vartheta^4 + \vartheta^6)}{1 + \vartheta^2 + \vartheta^4}, \qquad \dots$$

Remarks: the computations are long and tedious; $v_n$ converges (slowly) towards $\sigma^2$ (the white-noise variance) if $|\vartheta| < 1$.
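A similar check for the MA(1) case, again reusing `durbin_levinson` ($\vartheta = 0.6$ is an arbitrary choice); it also illustrates the slow convergence of $v_n$ towards $\sigma^2$:

```python
import numpy as np

theta, sigma2, N = 0.6, 1.0, 30
gamma = np.zeros(N + 1)
gamma[0] = sigma2 * (1 + theta ** 2)   # gamma(0) = sigma^2 (1 + theta^2)
gamma[1] = -sigma2 * theta             # gamma(1) = -sigma^2 theta; gamma(h) = 0 for h > 1

phi, v = durbin_levinson(gamma)
print(phi[0][0])   # -theta / (1 + theta^2)                          ~ -0.441
print(v[1])        # sigma^2 (1 + th^2 + th^4) / (1 + th^2)          ~ 1.095
print(v[2])        # sigma^2 (1 + th^2 + th^4 + th^6) / (1 + th^2 + th^4)  ~ 1.031
print(v[-1])       # approaches sigma^2 = 1 slowly, since |theta| < 1
```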

Durbin-Levinson for a sinusoidal wave

$X_t = B\cos(\omega t) + C\sin(\omega t)$, with $\omega \in \mathbb{R}$, $E(B) = E(C) = E(BC) = 0$, $V(B) = V(C) = \sigma^2$.
Then $\gamma(h) = \sigma^2 \cos(\omega h)$.

$$v_0 = \sigma^2, \qquad \phi_{1,1} = \cos(\omega),$$
$$v_1 = \sigma^2\big(1 - \cos^2(\omega)\big) = \sigma^2 \sin^2(\omega), \qquad \phi_{2,2} = \frac{\cos(2\omega) - \cos^2(\omega)}{\sin^2(\omega)} = -1,$$
$$v_2 = 0 \quad\Longrightarrow\quad X_{n+1} = P_{L(X_n, X_{n-1})} X_{n+1}.$$
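The same recursion confirms numerically that the sinusoid is perfectly predictable from two past values: $\phi_{2,2} = -1$, $v_2 = 0$, and since $\phi_{2,1} = \phi_{1,1} - \phi_{2,2}\phi_{1,1} = 2\cos(\omega)$, the exact predictor is $X_{n+1} = 2\cos(\omega) X_n - X_{n-1}$. A sketch reusing `durbin_levinson` ($\omega = 0.8$ is arbitrary; the recursion is stopped at $n = 2$, since $v_2 = 0$ would make further steps degenerate):

```python
import numpy as np

omega, sigma2 = 0.8, 1.0
gamma = sigma2 * np.cos(omega * np.arange(3))   # gamma(h) = sigma^2 cos(omega h), h = 0, 1, 2

phi, v = durbin_levinson(gamma)
print(phi[1])   # approx [2*cos(omega), -1]   -> Xhat_{n+1} = 2 cos(omega) X_n - X_{n-1}
print(v)        # approx [1, sin(omega)^2, 0] -> v_2 = 0: exact prediction from two past values
```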

Partial auto-correlation

For a stationary process $\{X_t\}$, the partial auto-correlation $\alpha(h)$ represents the correlation between $X_t$ and $X_{t+h}$ after removing the effect of the intermediate values.

Definition: $\alpha(1) = \rho(X_t, X_{t+1}) = \rho(1)$ and, for $h > 1$,
$$\alpha(h) = \rho\big(X_t - P_{L(X_{t+1},\dots,X_{t+h-1})} X_t,\; X_{t+h} - P_{L(X_{t+1},\dots,X_{t+h-1})} X_{t+h}\big).$$

Then
$$\begin{aligned}
\alpha(h) &= \frac{E\big[(X_t - P_{L(X_{t+1},\dots,X_{t+h-1})} X_t)(X_{t+h} - P_{L(X_{t+1},\dots,X_{t+h-1})} X_{t+h})\big]}{V\big(X_t - P_{L(X_{t+1},\dots,X_{t+h-1})} X_t\big)} \\
&= \frac{\langle X_1 - P_{L(X_2,\dots,X_h)} X_1,\; X_{h+1} - P_{L(X_2,\dots,X_h)} X_{h+1} \rangle}{\| X_1 - P_{L(X_2,\dots,X_h)} X_1 \|^2} \\
&= \frac{\langle X_1,\; X_{h+1} - P_{L(X_2,\dots,X_h)} X_{h+1} \rangle}{\| X_1 - P_{L(X_2,\dots,X_h)} X_1 \|^2} = \phi_{h,h}.
\end{aligned}$$

The Durbin-Levinson algorithm is thus a method to compute $\alpha(\cdot)$.
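Since $\alpha(h) = \phi_{h,h}$, the PACF is a by-product of the recursion. A small helper sketch built on the `durbin_levinson` function above (the name `pacf_from_acvf` is not from the slides):

```python
import numpy as np

def pacf_from_acvf(gamma):
    """Partial autocorrelations alpha(1), ..., alpha(N) from gamma(0), ..., gamma(N)."""
    phi, _ = durbin_levinson(gamma)
    # alpha(h) = phi_{h,h}: the last coefficient computed at each step of the recursion
    return np.array([row[-1] for row in phi])
```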

Remember in fact (Durbin-Levinson algorithm, 5):
$$\hat X_{n+1} = \sum_{j=1}^{n} \phi_{n,j} X_{n+1-j} = P_{L(X_2,\dots,X_n)} X_{n+1} + a\,\big(X_1 - P_{L(X_2,\dots,X_n)} X_1\big),$$
hence
$$\phi_{n,n} = a = \langle X_{n+1},\, X_1 - P_{L(X_2,\dots,X_n)} X_1 \rangle\, v_{n-1}^{-1} = \Big(\gamma(n) - \sum_{j=1}^{n-1} \phi_{n-1,j}\,\gamma(n-j)\Big)\, v_{n-1}^{-1}.$$

Examples of PACF

$\{X_t\}$ AR(1) $\Longrightarrow$ $\alpha(1) = \phi$, $\alpha(h) = 0$ for $h > 1$ (seen before).

$\{X_t\}$ AR(p), i.e. a stationary process such that
$$X_t = \sum_{k=1}^{p} \phi_k X_{t-k} + Z_t, \qquad \{Z_t\} \sim WN(0, \sigma^2).$$
If $t \ge p$, then $P_{L(X_1,\dots,X_t)} X_{t+1} = \sum_{k=1}^{p} \phi_k X_{t+1-k}$ (check). Hence $\phi_{p,p} = \alpha(p) = \phi_p$ and $\phi_{h,h} = 0$ if $h > p$, i.e. $\alpha(h) = 0$ for $h > p$.

$\{X_t\}$ MA(1) $\Longrightarrow$ $\alpha(h) = -\vartheta^h/(1 + \vartheta^2 + \dots + \vartheta^{2h})$ (long computation).

The PACF of an AR process has finite support, while the PACF of an MA process is always non-zero: the opposite of what happens for the ACF.

Sample PACF: apply the Durbin-Levinson algorithm to $\hat\gamma(\cdot)$.
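The sample PACF is the same computation applied to $\hat\gamma(\cdot)$. A sketch on simulated data, assuming an AR(2) model with arbitrary coefficients $0.75$ and $-0.3$ and reusing `durbin_levinson` and `pacf_from_acvf` from above; the estimated PACF is visibly non-zero only at lags 1 and 2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) process X_t = 0.75 X_{t-1} - 0.3 X_{t-2} + Z_t (arbitrary example).
n, burn = 500, 200
z = rng.normal(size=n + burn)
x = np.zeros(n + burn)
for t in range(2, n + burn):
    x[t] = 0.75 * x[t - 1] - 0.3 * x[t - 2] + z[t]
x = x[burn:]

# Sample autocovariances gamma_hat(h) = (1/n) * sum_t (X_t - Xbar)(X_{t+h} - Xbar).
xc = x - x.mean()
gamma_hat = np.array([np.dot(xc[: n - h], xc[h:]) / n for h in range(16)])

alpha_hat = pacf_from_acvf(gamma_hat)
print(np.round(alpha_hat, 2))   # noticeable values at lags 1 and 2, roughly zero afterwards
```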

Sample ACF and PACF

[Figure: sample ACF (lags 0-15) and sample PACF (lags 1-15) of the Overshort data.]

Sample ACF of Huron: AR(1) fit

[Figure: sample ACF of the detrended Huron data, lags 0-15.]

Add the theoretical ACF of an AR(1) with $\phi = 0.79$.
Add confidence intervals, assuming $\phi = 0.79$ (different from the book).

Sample ACF and PACF of Huron data

[Figure: sample ACF (lags 0-15) and sample PACF (lags 1-15) of the Huron data.]

The PACF suggests the use of an AR(2) model.