Introduction to the ML Estimation of ARMA processes

Eduardo Rossi
University of Pavia
Financial Econometrics, October 2013

We consider the AR(p) model:

$$Y_t = c + \phi_1 Y_{t-1} + \dots + \phi_p Y_{t-p} + \varepsilon_t \qquad t = 1, \dots, T \qquad \varepsilon_t \sim WN(0, \sigma^2)$$

where $Y_0, Y_{-1}, \dots, Y_{1-p}$ are given. In regression notation, $Y_t = z_t'\theta + \varepsilon_t$ with $\theta = (c, \phi_1, \dots, \phi_p)'$ and $z_t = (1, Y_{t-1}, \dots, Y_{t-p})'$:

$$\begin{pmatrix} Y_1 \\ \vdots \\ Y_T \end{pmatrix} = \begin{pmatrix} 1 & Y_0 & \cdots & Y_{1-p} \\ \vdots & \vdots & & \vdots \\ 1 & Y_{T-1} & \cdots & Y_{T-p} \end{pmatrix} \begin{pmatrix} c \\ \phi_1 \\ \vdots \\ \phi_p \end{pmatrix} + \begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_T \end{pmatrix}$$

OLS Estimation of AR(p)

The model is $y = Z\theta + \varepsilon$. The OLS estimator:

$$\hat\theta = (Z'Z)^{-1}Z'y = (Z'Z)^{-1}Z'(Z\theta + \varepsilon) = \theta + (Z'Z)^{-1}Z'\varepsilon = \theta + \left(\frac{1}{T}Z'Z\right)^{-1}\left(\frac{1}{T}Z'\varepsilon\right)$$

Because $Z$ contains lagged values of $y$, $\hat\theta$ is no longer a linear function of $y$ and hence cannot be BLUE. In general OLS is no longer unbiased, and its small-sample properties are analytically difficult to derive.
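The stacked regression above is straightforward to compute. The following is a minimal sketch (with hypothetical parameter values): simulate a stable AR(2) and estimate $(c, \phi_1, \phi_2)$ by OLS on $y = Z\theta + \varepsilon$.

```python
import numpy as np

# Hypothetical example: simulate a stable AR(2) process and estimate
# (c, phi_1, phi_2) by OLS, with design row z_t = (1, Y_{t-1}, Y_{t-2}).
rng = np.random.default_rng(0)
T, c = 2000, 0.5
phi1, phi2 = 0.6, -0.2
y = np.zeros(T)
eps = rng.normal(0.0, 1.0, T)
for t in range(2, T):
    y[t] = c + phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]

# Build Z: row for observation t holds (1, Y_{t-1}, Y_{t-2})
Z = np.column_stack([np.ones(T - 2), y[1:-1], y[:-2]])
theta_hat, *_ = np.linalg.lstsq(Z, y[2:], rcond=None)
print(theta_hat)  # roughly (0.5, 0.6, -0.2)
```

With $T = 2000$ the estimates land close to the true values, consistent with the asymptotic results on the next slide.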

OLS Estimation of AR(p)

If $Y_t$ is a stable AR(p) process and $\varepsilon_t$ is a standard white noise, then the following results hold (Mann and Wald, 1943):

$$\frac{1}{T} Z'Z \xrightarrow{p} \Gamma_p \qquad \frac{1}{\sqrt{T}} Z'\varepsilon \xrightarrow{d} N(0, \sigma^2 \Gamma_p)$$

Consistency and asymptotic normality then follow from Cramér's theorem:

$$\sqrt{T}(\hat\theta - \theta) \xrightarrow{d} N(0, \sigma^2 \Gamma_p^{-1})$$

Impact of autocorrelation on regression results

A necessary condition for the consistency of the OLS estimator with stochastic (but stationary) regressors is that $z_t$ is asymptotically uncorrelated with $\varepsilon_t$, i.e. $\operatorname{plim}\left(\frac{1}{T}Z'\varepsilon\right) = 0$:

$$\operatorname{plim}\hat\theta - \theta = \operatorname{plim}\left(\frac{1}{T}Z'Z\right)^{-1} \operatorname{plim}\left(\frac{1}{T}Z'\varepsilon\right) = \Gamma_p^{-1}\,\operatorname{plim}\left(\frac{1}{T}Z'\varepsilon\right)$$

OLS is no longer consistent under autocorrelation of the regression error, since then

$$\operatorname{plim}\left(\frac{1}{T}Z'\varepsilon\right) \neq 0$$

OLS Estimation - Example

Consider an AR(1) model with first-order autocorrelation in its errors,

$$Y_t = \phi Y_{t-1} + u_t \qquad u_t = \rho u_{t-1} + \varepsilon_t \qquad \varepsilon_t \sim WN(0, \sigma^2)$$

so that $Z = [Y_0, \dots, Y_{T-1}]'$. Then

$$E\left[\frac{1}{T}Z'u\right] = E\left[\frac{1}{T}\sum_{t=1}^T Y_{t-1}u_t\right] = \frac{1}{T}\sum_{t=1}^T E\left[Y_{t-1}\bigl(\rho(Y_{t-1} - \phi Y_{t-2}) + \varepsilon_t\bigr)\right]$$

since $u_t = \rho u_{t-1} + \varepsilon_t = \rho(Y_{t-1} - \phi Y_{t-2}) + \varepsilon_t$.

OLS Estimation - Example

$$E\left[\frac{1}{T}Z'u\right] = \rho\left(\frac{1}{T}\sum_{t=1}^T E[Y_{t-1}^2]\right) - \phi\rho\left(\frac{1}{T}\sum_{t=1}^T E[Y_{t-1}Y_{t-2}]\right) + \frac{1}{T}\sum_{t=1}^T E[Y_{t-1}\varepsilon_t] = \rho\,[\gamma_y(0) - \phi\gamma_y(1)]$$

where $\gamma_y(h)$ is the autocovariance function of $\{Y_t\}$, which can be represented as an AR(2) process, and $E[Y_{t-1}\varepsilon_t] = 0$. Hence OLS is inconsistent whenever $\rho \neq 0$.
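The inconsistency is easy to see in simulation. A minimal sketch (hypothetical parameter values): with white-noise errors the OLS slope converges to $\phi$; with AR(1) errors ($\rho \neq 0$) it does not.

```python
import numpy as np

# Hypothetical example: OLS of Y_t on Y_{t-1} is consistent when the
# errors are white noise (rho = 0) but inconsistent when u_t is itself
# an AR(1) (rho != 0), illustrating plim (1/T) Z'u != 0.
rng = np.random.default_rng(1)
T, phi = 20000, 0.5

def ols_phi(rho):
    u = np.zeros(T + 1)
    y = np.zeros(T + 1)
    eps = rng.normal(0.0, 1.0, T + 1)
    for t in range(1, T + 1):
        u[t] = rho * u[t - 1] + eps[t]
        y[t] = phi * y[t - 1] + u[t]
    # OLS slope of y_t on y_{t-1}
    return np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2)

phi_white = ols_phi(0.0)  # close to phi = 0.5
phi_auto = ols_phi(0.4)   # pulled well above phi = 0.5
print(phi_white, phi_auto)
```

With $\rho = 0.4$ the probability limit of the OLS slope is noticeably above $\phi$, so even a very long sample does not fix the problem.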

MLE AR(1)

For the Gaussian AR(1) process

$$Y_t = c + \phi Y_{t-1} + \varepsilon_t \qquad |\phi| < 1 \qquad \varepsilon_t \sim NID(0, \sigma^2)$$

the joint distribution of the observations $Y_T = (Y_1, \dots, Y_T)'$ is

$$Y_T \sim N(\mu, \Sigma)$$

and $y = (y_1, y_2, \dots, y_T)'$ is a single realization of $Y_T$.

MLE AR(1)

$$\begin{pmatrix} Y_1 \\ \vdots \\ Y_T \end{pmatrix} \sim N(\mu, \Sigma) \qquad \mu = \begin{pmatrix} \mu \\ \vdots \\ \mu \end{pmatrix} \qquad \Sigma = \begin{pmatrix} \gamma_0 & \cdots & \gamma_{T-1} \\ \vdots & \ddots & \vdots \\ \gamma_{T-1} & \cdots & \gamma_0 \end{pmatrix}$$

The p.d.f. of the sample $y = (y_1, y_2, \dots, y_T)'$ is given by the multivariate normal density

$$f_Y(y; \mu, \Sigma) = (2\pi)^{-T/2}\,|\Sigma|^{-1/2} \exp\left\{-\frac{1}{2}(y - \mu)'\Sigma^{-1}(y - \mu)\right\}$$

Writing $\Sigma = \sigma_y^2\,\Omega$ with $\sigma_y^2 = \gamma_0$ and $\Omega_{ij} = \rho(|i - j|)$:

$$\Sigma = \gamma_0 \begin{pmatrix} 1 & \cdots & \rho(T-1) \\ \vdots & \ddots & \vdots \\ \rho(T-1) & \cdots & 1 \end{pmatrix} = \sigma_y^2\,\Omega$$

where $\rho(j) = \phi^j$. Collecting the parameters of the model in $\theta = (c, \phi, \sigma^2)'$, the joint p.d.f. becomes

$$f_Y(y; \theta) = (2\pi\sigma_y^2)^{-T/2}\,|\Omega|^{-1/2} \exp\left\{-\frac{1}{2\sigma_y^2}(y - \mu)'\Omega^{-1}(y - \mu)\right\}$$

and the sample log-likelihood function is

$$\mathcal{L}(\theta) = -\frac{T}{2}\log(2\pi) - \frac{T}{2}\log(\sigma_y^2) - \frac{1}{2}\log|\Omega| - \frac{1}{2\sigma_y^2}(y - \mu)'\Omega^{-1}(y - \mu)$$
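The exact likelihood can be evaluated directly from this multivariate normal form. A minimal sketch (hypothetical parameter values): build the $T \times T$ Toeplitz covariance $\Sigma = \sigma_y^2\,\Omega$ with $\Omega_{ij} = \phi^{|i-j|}$ and evaluate the Gaussian log-density.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.stats import multivariate_normal

# Hypothetical example: exact Gaussian AR(1) log-likelihood through the
# T x T Toeplitz covariance Sigma = sigma_y^2 * Omega, Omega_ij = phi^|i-j|.
def ar1_exact_loglik(y, c, phi, sigma2):
    T = len(y)
    mu = c / (1.0 - phi)                  # unconditional mean
    sigma2_y = sigma2 / (1.0 - phi ** 2)  # unconditional variance gamma_0
    Omega = toeplitz(phi ** np.arange(T)) # correlation matrix, rho(j) = phi^j
    Sigma = sigma2_y * Omega
    return multivariate_normal(mean=np.full(T, mu), cov=Sigma).logpdf(y)

rng = np.random.default_rng(2)
c, phi, sigma2, T = 1.0, 0.5, 1.0, 200
y = np.zeros(T)
y[0] = c / (1 - phi) + rng.normal(0, np.sqrt(sigma2 / (1 - phi ** 2)))
for t in range(1, T):
    y[t] = c + phi * y[t - 1] + rng.normal(0, np.sqrt(sigma2))
print(ar1_exact_loglik(y, c, phi, sigma2))
```

Building and inverting a $T \times T$ matrix is wasteful for long series, which is exactly why the prediction-error decomposition on the next slides is preferred in practice.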

Sequential Factorization

The prediction-error decomposition uses the fact that the $\varepsilon_t$ are independent and identically distributed,

$$f(\varepsilon_2, \dots, \varepsilon_T) = \prod_{t=2}^T f_\varepsilon(\varepsilon_t)$$

and, by the Markov property,

$$g_Y(y_T, \dots, y_1) = \left[\prod_{t=2}^T g_{Y_t|Y_{t-1}}(y_t|y_{t-1})\right] g_{Y_1}(y_1)$$

The marginal density of $Y_1$ is Gaussian with

$$E[Y_1] = \mu = \frac{c}{1 - \phi} \qquad E[(Y_1 - \mu)^2] = \sigma_y^2 = \frac{\sigma^2}{1 - \phi^2}$$

Since

$$\varepsilon_t = Y_t - (c + \phi Y_{t-1})$$

we have

$$g_{Y_t|Y_{t-1}}(y_t|y_{t-1}) = f_\varepsilon(y_t - c - \phi y_{t-1}) = f_\varepsilon(\varepsilon_t) \qquad t = 2, \dots, T$$

Hence:

$$g_Y(y_T, \dots, y_1) = \left[\prod_{t=2}^T f_\varepsilon(y_t|y_{t-1})\right] f_{Y_1}(y_1)$$

For $\varepsilon_t \sim NID(0, \sigma^2)$, the log-likelihood is given by:

$$\mathcal{L}(\theta) = \log L(\theta) = \sum_{t=2}^T \log f_\varepsilon(y_t|y_{t-1}; \theta) + \log f_{Y_1}(y_1; \theta)$$
$$= -\frac{T}{2}\log(2\pi) - \frac{T-1}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{t=2}^T \varepsilon_t^2 - \frac{1}{2}\log(\sigma_y^2) - \frac{1}{2\sigma_y^2}(y_1 - \mu)^2$$

where $Y_1 \sim N(\mu, \sigma_y^2)$ with $\mu = \frac{c}{1-\phi}$ and $\sigma_y^2 = \frac{\sigma^2}{1-\phi^2}$. Maximization of the exact log-likelihood for an AR(1) process must be accomplished numerically.
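The numerical maximization can be sketched as follows (hypothetical parameter values; the reparameterization in $\log\sigma^2$ is a convenience I introduce so the optimizer works on an unconstrained space):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example: maximize the exact AR(1) log-likelihood in its
# prediction-error form, with Y_1 ~ N(mu, sigma_y^2).
def neg_exact_loglik(params, y):
    c, phi, log_s2 = params
    s2 = np.exp(log_s2)
    if abs(phi) >= 0.999:
        return 1e10  # crude stationarity guard
    mu = c / (1 - phi)
    s2_y = s2 / (1 - phi ** 2)
    eps = y[1:] - c - phi * y[:-1]
    ll = (-0.5 * np.log(2 * np.pi * s2_y) - (y[0] - mu) ** 2 / (2 * s2_y)
          - (len(y) - 1) / 2 * np.log(2 * np.pi * s2)
          - np.sum(eps ** 2) / (2 * s2))
    return -ll

rng = np.random.default_rng(3)
c, phi, s2, T = 1.0, 0.6, 1.0, 1000
y = np.zeros(T)
y[0] = c / (1 - phi) + rng.normal(0, np.sqrt(s2 / (1 - phi ** 2)))
for t in range(1, T):
    y[t] = c + phi * y[t - 1] + rng.normal(0, 1)

res = minimize(neg_exact_loglik, x0=np.zeros(3), args=(y,), method="Nelder-Mead")
c_hat, phi_hat, s2_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(c_hat, phi_hat, s2_hat)
```

Any derivative-free or quasi-Newton routine works here; the point is only that no closed-form maximizer exists once the $Y_1$ term is included.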

MLE AR(p)

Gaussian AR(p):

$$Y_t = c + \phi_1 Y_{t-1} + \dots + \phi_p Y_{t-p} + \varepsilon_t \qquad \varepsilon_t \sim NID(0, \sigma^2) \qquad \theta = (c, \phi_1, \dots, \phi_p, \sigma^2)'$$

Exact MLE. Using the prediction-error decomposition, the joint p.d.f. is given by:

$$f_Y(y_1, y_2, \dots, y_T; \theta) = \left[\prod_{t=p+1}^T f_\varepsilon(y_t|y_{t-1}, \dots, y_1; \theta)\right] f_{Y_1, \dots, Y_p}(y_1, \dots, y_p; \theta)$$

but only the $p$ most recent observations matter:

$$f_\varepsilon(y_t|y_{t-1}, \dots, y_1; \theta) = f_\varepsilon(y_t|y_{t-1}, \dots, y_{t-p}; \theta)$$

MLE AR(p)

The likelihood function for the complete sample is:

$$f_Y(y_1, y_2, \dots, y_T; \theta) = \left[\prod_{t=p+1}^T f_\varepsilon(y_t|y_{t-1}, \dots, y_{t-p}; \theta)\right] f_{Y_1, \dots, Y_p}(y_1, \dots, y_p; \theta)$$

With $\varepsilon_t \sim NID(0, \sigma^2)$:

$$f_\varepsilon(y_t|y_{t-1}, \dots, y_{t-p}; \theta) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{(y_t - c - \phi_1 y_{t-1} - \dots - \phi_p y_{t-p})^2}{2\sigma^2}\right]$$

The first $p$ observations are viewed as the realization of a $p$-dimensional Gaussian variable with moments:

$$E(Y_p) = \mu_p \qquad E\left[(Y_p - \mu_p)(Y_p - \mu_p)'\right] = \Sigma_p$$

MLE AR(p)

$$\Sigma_p = \begin{pmatrix} \gamma_0 & \gamma_1 & \cdots & \gamma_{p-1} \\ \gamma_1 & \gamma_0 & \cdots & \gamma_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \gamma_{p-1} & \gamma_{p-2} & \cdots & \gamma_0 \end{pmatrix} = \sigma^2 V_p$$

$$f_{Y_1, \dots, Y_p}(y_1, \dots, y_p; \theta) = (2\pi)^{-p/2}\,|\sigma^2 V_p|^{-1/2} \exp\left[-\frac{(Y_p - \mu_p)' V_p^{-1} (Y_p - \mu_p)}{2\sigma^2}\right]$$

MLE AR(p)

The log-likelihood is:

$$\mathcal{L}(\theta) = \log f_Y(y_1, y_2, \dots, y_T; \theta) = \sum_{t=p+1}^T \log f_\varepsilon(y_t|y_{t-1}, \dots, y_{t-p}; \theta) + \log f_{Y_1, \dots, Y_p}(y_1, \dots, y_p; \theta)$$
$$= -\frac{T}{2}\log(2\pi) - \frac{T}{2}\log(\sigma^2) + \frac{1}{2}\log|V_p^{-1}| - \frac{1}{2\sigma^2}(Y_p - \mu_p)' V_p^{-1} (Y_p - \mu_p) - \sum_{t=p+1}^T \frac{(y_t - c - \phi_1 y_{t-1} - \dots - \phi_p y_{t-p})^2}{2\sigma^2}$$

The exact MLE follows from:

$$\hat\theta = \arg\max_\theta \mathcal{L}(\theta)$$

Conditional MLE AR(p)

Conditional MLE = OLS. Take $y_p = (y_1, \dots, y_p)'$ as fixed pre-sample values. Conditioning on $Y_p$:

$$\hat\theta = \arg\max_\theta f_{Y_{p+1}, \dots, Y_T|Y_1, \dots, Y_p}(y_{p+1}, \dots, y_T|y_p; \theta) = \arg\max_\theta \prod_{t=p+1}^T f_\varepsilon(y_t|y_{t-1}, \dots, y_{t-p}; \theta)$$

$$\mathcal{L}(\theta) = \log f_{Y_{p+1}, \dots, Y_T|Y_1, \dots, Y_p}(y_{p+1}, \dots, y_T|y_p; \theta) = \sum_{t=p+1}^T \log f_\varepsilon(\varepsilon_t|Y_{t-1}; \theta)$$
$$= -\frac{T-p}{2}\log(2\pi) - \frac{T-p}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{t=p+1}^T \varepsilon_t^2$$

Conditional MLE AR(p)

where $\varepsilon_t = Y_t - (c + \phi_1 Y_{t-1} + \dots + \phi_p Y_{t-p})$. Thus the MLE of $(c, \phi_1, \dots, \phi_p)$ results from minimizing the sum of squared residuals:

$$\arg\max_{(c, \phi_1, \dots, \phi_p)} \mathcal{L}(c, \phi_1, \dots, \phi_p) = \arg\min_{(c, \phi_1, \dots, \phi_p)} \sum_{t=p+1}^T \varepsilon_t^2(c, \phi_1, \dots, \phi_p)$$

The conditional ML estimate of $\sigma^2$ turns out to be:

$$\hat\sigma^2 = \frac{1}{T-p}\sum_{t=p+1}^T \hat\varepsilon_t^2$$
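A minimal sketch of this equivalence (hypothetical parameter values): the OLS coefficients minimize the SSR, so they maximize the conditional log-likelihood, and $\hat\sigma^2 = \mathrm{SSR}/(T-p)$.

```python
import numpy as np

# Hypothetical example: conditional MLE of a Gaussian AR(2) equals OLS,
# with sigma^2 estimated as SSR / (T - p).
rng = np.random.default_rng(4)
T = 1500
c, phi1, phi2 = 0.3, 0.5, -0.3
y = np.zeros(T)
eps = rng.normal(size=T)
for t in range(2, T):
    y[t] = c + phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]

# OLS = conditional MLE of (c, phi_1, phi_2)
Z = np.column_stack([np.ones(T - 2), y[1:-1], y[:-2]])
beta_hat, *_ = np.linalg.lstsq(Z, y[2:], rcond=None)
resid = y[2:] - Z @ beta_hat
sigma2_hat = np.sum(resid ** 2) / (T - 2)  # SSR / (T - p), p = 2

def cond_loglik(c_, p1, p2, s2):
    e = y[2:] - c_ - p1 * y[1:-1] - p2 * y[:-2]
    n = T - 2
    return -0.5 * n * np.log(2 * np.pi * s2) - np.sum(e ** 2) / (2 * s2)

# The OLS solution beats any perturbed coefficient vector
ll_hat = cond_loglik(beta_hat[0], beta_hat[1], beta_hat[2], sigma2_hat)
ll_perturbed = cond_loglik(beta_hat[0], beta_hat[1] + 0.05, beta_hat[2], sigma2_hat)
print(ll_hat > ll_perturbed)  # True
```

Since the SSR is a strictly convex quadratic in the coefficients (for full-rank $Z$), the OLS point is its unique minimizer, which is exactly the conditional-MLE claim on this slide.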

Conditional MLE AR(p)

The ML estimates $\hat\gamma = (\hat c, \hat\phi_1, \dots, \hat\phi_p)'$ are equivalent to the OLS estimates. They are consistent if $\{Y_t\}$ is stationary, and $\sqrt{T}(\hat\gamma - \gamma)$ is asymptotically normally distributed. The exact ML estimates and the conditional ML estimates have the same large-sample distribution.

Conditional MLE AR(p)

An asymptotically equivalent alternative is the MLE of the mean-adjusted model

$$Y_t - \mu = \phi_1(Y_{t-1} - \mu) + \dots + \phi_p(Y_{t-p} - \mu) + \varepsilon_t$$

where $\mu = (1 - \phi_1 - \dots - \phi_p)^{-1} c$: OLS of $(\phi_1, \dots, \phi_p)$ in the mean-adjusted model, with

$$\hat\mu = \frac{1}{T}\sum_{t=1}^T Y_t$$

Yule-Walker estimation

Yule-Walker estimation of $(\phi_1, \dots, \phi_p)$:

$$\begin{pmatrix} \hat\phi_1 \\ \vdots \\ \hat\phi_p \end{pmatrix} = \begin{pmatrix} \hat\gamma_0 & \cdots & \hat\gamma_{p-1} \\ \vdots & \ddots & \vdots \\ \hat\gamma_{p-1} & \cdots & \hat\gamma_0 \end{pmatrix}^{-1} \begin{pmatrix} \hat\gamma_1 \\ \vdots \\ \hat\gamma_p \end{pmatrix}$$

where

$$\hat\gamma_h = (T - h)^{-1} \sum_{t=h+1}^T (y_t - \bar y)(y_{t-h} - \bar y) \qquad \hat\mu = \bar y = \frac{1}{T}\sum_{t=1}^T y_t$$
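This moment-based estimator is only a few lines of code. A minimal sketch (hypothetical parameter values):

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical example: Yule-Walker estimation of (phi_1, ..., phi_p)
# from sample autocovariances gamma_hat(h) with divisor (T - h).
def yule_walker(y, p):
    T = len(y)
    yc = y - y.mean()
    gamma = np.array([yc[h:] @ yc[:T - h] / (T - h) for h in range(p + 1)])
    # Solve Toeplitz(gamma_0, ..., gamma_{p-1}) phi = (gamma_1, ..., gamma_p)
    return np.linalg.solve(toeplitz(gamma[:p]), gamma[1:])

rng = np.random.default_rng(5)
T = 5000
phi1, phi2 = 0.5, -0.3
y = np.zeros(T)
eps = rng.normal(size=T)
for t in range(2, T):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]
print(yule_walker(y, 2))  # roughly (0.5, -0.3)
```

Yule-Walker, OLS, and conditional ML all deliver the same first-order asymptotics for a stationary AR(p); they differ only in how the sample edges are treated.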

MLE ARMA(p,q)

Gaussian MA(q):

$$Y_t = \mu + \varepsilon_t + \theta_1\varepsilon_{t-1} + \dots + \theta_q\varepsilon_{t-q} \qquad \varepsilon_t \sim NID(0, \sigma^2)$$

Conditional MLE = NLLS. Conditioning on $\varepsilon_0 = (\varepsilon_0, \varepsilon_{-1}, \dots, \varepsilon_{1-q})' = 0$, we can iterate on:

$$\varepsilon_t = Y_t - \mu - (\theta_1\varepsilon_{t-1} + \dots + \theta_q\varepsilon_{t-q})$$

for $t = 1, \dots, T$. The conditional log-likelihood is

$$\mathcal{L}(\theta) = \log f_{Y_T|\varepsilon_0 = 0}(y_T|\varepsilon_0 = 0; \theta) = -\frac{T}{2}\log(2\pi) - \frac{T}{2}\log(\sigma^2) - \sum_{t=1}^T \frac{\varepsilon_t^2}{2\sigma^2}$$

where $\theta = (\mu, \theta_1, \dots, \theta_q, \sigma^2)'$.
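A minimal sketch of the MA(1) case (hypothetical parameter values): iterate the residual recursion from $\varepsilon_0 = 0$ and hand the resulting conditional log-likelihood to a numerical optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example: conditional MLE of a Gaussian MA(1) by iterating
# eps_t = y_t - mu - theta1 * eps_{t-1} from eps_0 = 0 (NLLS).
def ma1_residuals(y, mu, theta1):
    eps = np.zeros(len(y))
    e_prev = 0.0  # conditioning on eps_0 = 0
    for t in range(len(y)):
        eps[t] = y[t] - mu - theta1 * e_prev
        e_prev = eps[t]
    return eps

def neg_cond_loglik(params, y):
    mu, theta1, log_s2 = params
    s2 = np.exp(log_s2)
    eps = ma1_residuals(y, mu, theta1)
    T = len(y)
    return 0.5 * T * np.log(2 * np.pi * s2) + np.sum(eps ** 2) / (2 * s2)

rng = np.random.default_rng(6)
T, mu, theta1 = 2000, 1.0, 0.5
e = rng.normal(size=T + 1)
y = mu + e[1:] + theta1 * e[:-1]

res = minimize(neg_cond_loglik, x0=np.zeros(3), args=(y,), method="Nelder-Mead")
mu_hat, theta1_hat, s2_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(mu_hat, theta1_hat, s2_hat)
```

Unlike the AR case, the residuals are non-linear functions of $\theta_1$, so no closed-form least-squares solution exists: this is the NLLS character of conditional MA estimation.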

MLE ARMA(p,q)

The MLE of $(\mu, \theta_1, \dots, \theta_q)$ results from minimizing the sum of squared residuals. Analytical expressions for the MLE are usually not available because the first-order conditions are highly non-linear, so the MLE must be computed with numerical optimization techniques.

MLE ARMA(p,q)

Conditioning requires invertibility, i.e. the roots of

$$1 + \theta_1 z + \theta_2 z^2 + \dots + \theta_q z^q = 0$$

must lie outside the unit circle. For the MA(1) process:

$$\varepsilon_t = Y_t - \mu - \theta_1\varepsilon_{t-1} = (-\theta_1)^t\,\varepsilon_0 + \sum_{j=0}^{t-1} (-\theta_1)^j\,[Y_{t-j} - \mu]$$

so the effect of the starting value $\varepsilon_0$ dies out only if $|\theta_1| < 1$.

MLE ARMA(p,q)

$$Y_t = c + \phi_1 Y_{t-1} + \dots + \phi_p Y_{t-p} + \varepsilon_t + \theta_1\varepsilon_{t-1} + \dots + \theta_q\varepsilon_{t-q} \qquad \varepsilon_t \sim NID(0, \sigma^2)$$

Conditional MLE = NLLS. Conditioning on $Y_0 = (Y_0, Y_{-1}, \dots, Y_{-p+1})'$ and $\varepsilon_0 = (\varepsilon_0, \varepsilon_{-1}, \dots, \varepsilon_{-q+1})' = 0$, the sequence $\{\varepsilon_1, \varepsilon_2, \dots, \varepsilon_T\}$ can be calculated from $\{Y_1, Y_2, \dots, Y_T\}$ by iterating on:

$$\varepsilon_t = Y_t - (c + \phi_1 Y_{t-1} + \dots + \phi_p Y_{t-p}) - (\theta_1\varepsilon_{t-1} + \dots + \theta_q\varepsilon_{t-q})$$

for $t = 1, \dots, T$.

The conditional log-likelihood is:

$$\mathcal{L}(\theta) = \log f_{Y_T|Y_0, \varepsilon_0}(y_T|y_0, \varepsilon_0) = -\frac{T}{2}\log(2\pi) - \frac{T}{2}\log(\sigma^2) - \sum_{t=1}^T \frac{\varepsilon_t^2}{2\sigma^2}$$

One option is to set the initial values equal to their expected values:

$$Y_s = (1 - \phi_1 - \dots - \phi_p)^{-1} c \qquad s = 0, -1, \dots, -p+1$$
$$\varepsilon_s = 0 \qquad s = 0, -1, \dots, -q+1$$

Box and Jenkins (1976) recommended setting the $\varepsilon_s$ to zero but the $y_s$ equal to their actual values. The iteration is then started at date $t = p + 1$, with $Y_1, Y_2, \dots, Y_p$ set to the observed values and

$$\varepsilon_p = \varepsilon_{p-1} = \dots = \varepsilon_{p-q+1} = 0$$
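The Box-Jenkins initialization can be sketched for an ARMA(1,1) as follows (hypothetical parameter values): start the recursion at $t = p + 1$ with observed $y$ values and zero pre-sample residuals, and note that under invertibility the recovered residuals converge to the true shocks.

```python
import numpy as np

# Hypothetical example: conditional residuals for an ARMA(1,1) with the
# Box-Jenkins initialization (observed y's, pre-sample eps set to 0).
def arma11_residuals(y, c, phi1, theta1):
    T = len(y)
    eps = np.zeros(T)  # eps[0] stays 0: eps_p = 0 with p = q = 1
    for t in range(1, T):
        eps[t] = y[t] - c - phi1 * y[t - 1] - theta1 * eps[t - 1]
    return eps[1:]  # residuals for t = p+1, ..., T

def cond_loglik(y, c, phi1, theta1, s2):
    eps = arma11_residuals(y, c, phi1, theta1)
    n = len(eps)
    return -0.5 * n * np.log(2 * np.pi * s2) - np.sum(eps ** 2) / (2 * s2)

rng = np.random.default_rng(7)
T, c, phi1, theta1 = 3000, 0.2, 0.6, 0.4
e = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = c + phi1 * y[t - 1] + e[t] + theta1 * e[t - 1]

# At the true parameters the recovered residuals track the true shocks:
# the initialization error decays like (-theta1)^t since |theta1| < 1.
eps_hat = arma11_residuals(y, c, phi1, theta1)
print(np.corrcoef(eps_hat, e[1:])[0, 1])  # close to 1
```

Feeding `cond_loglik` to a numerical optimizer over $(c, \phi_1, \theta_1, \sigma^2)$ then gives the conditional MLE, exactly as in the MA(1) case but with the autoregressive terms included in the recursion.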