Monte Carlo Methods for Econometric Inference I
Institute on Computational Economics, July 19, 2006
John Geweke, University of Iowa


Geweke (2005) Sections 1.2-1.3, pp. 7-15
The problem
Distribution of interest: $p(\theta, \omega \mid I)$, with $\theta \in \Theta$, $\omega \in \Omega$; $I$ = information.
$p(\theta, \omega \mid I) = p(\theta \mid I)\, p(\omega \mid \theta, I)$
$p(\omega \mid \theta, I)$: tractable for simulation. $p(\theta \mid I)$: not so easy.
Leading example: $I = \{\text{Model specification}\} \cup \{\text{Data}\}$.

Geweke (2005) Sections 1.2-1.3, pp. 7-15
Origins of the problem in econometric inference
Complete model: $p(y \mid \theta, A)$ and $p(\theta \mid A)$ $\Rightarrow$ posterior $p(\theta \mid y^o, A)$.
Vector of interest $\omega$: $p(\omega \mid y^o, \theta, A)$.
Simulation problem: $\theta^{(m)} \sim p(\theta \mid y^o, A)$, then $\omega^{(m)} \sim p(\omega \mid y^o, \theta^{(m)}, A)$.

Geweke (2005) Section 4.1, pp. 106-110
Direct Sampling: some familiar tools
Strong law of large numbers: if $\omega^{(m)} \overset{iid}{\sim} p(\omega)$ and $\mathrm{E}[h(\omega)] = \int h(\omega)\, p(\omega)\, d\nu(\omega)$ exists, then
$\bar h^{(M)} = M^{-1} \sum_{m=1}^{M} h(\omega^{(m)}) \overset{a.s.}{\to} \mathrm{E}[h(\omega)] = \bar h$;
that is, $P\{\lim_{M \to \infty} \bar h^{(M)} = \bar h\} = 1$, a stronger condition than $\bar h^{(M)} \overset{p}{\to} \bar h$.

Geweke (2005) Section 4.1, pp. 106-110
Lindeberg-Levy central limit theorem: if in addition $\sigma_h^2 = \mathrm{var}[h(\omega)]$ exists, then
$M^{1/2}(\bar h^{(M)} - \bar h) \overset{d}{\to} N(0, \sigma_h^2)$.

Geweke (2005) Section 4.1, pp. 106-110
Generic notation for posterior simulation:
$\theta \sim p(\theta \mid I)$, $\theta \in \Theta$; $\omega \sim p(\omega \mid \theta, I)$, $\omega \in \Omega$.
$I$ denotes the distribution of interest.

Geweke (2005) Section 4.1, pp. 106-110
Direct sampling is possible if there are practical algorithms for $\theta^{(m)} \overset{iid}{\sim} p(\theta \mid I)$ and $\omega^{(m)} \overset{iid}{\sim} p(\omega \mid \theta^{(m)}, I)$.
Example:
(1) Mother of all random number generators (Geweke 1996): $u^{(m)} \overset{iid}{\sim} \mathrm{uniform}(0, 1)$.
(2) Inverse c.d.f. transformation to simulate $\theta$ with $P(\theta \le c) = F(c)$: set $\theta^{(m)} = F^{-1}(u^{(m)})$. Then $\theta^{(m)} \le c \Leftrightarrow u^{(m)} = F(\theta^{(m)}) \le F(c)$, so $P(\theta^{(m)} \le c) = P[u^{(m)} \le F(c)] = F(c)$.
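
The inverse c.d.f. method in step (2) is easy to sketch in code. The exponential target below is an illustrative choice (not from the slides), picked because $F^{-1}$ has a closed form; all names are hypothetical.

```python
# Inverse c.d.f. sampling: theta = F^{-1}(u) with u ~ uniform(0,1).
# Illustrative target (assumption): exponential with rate lam, where
# F(c) = 1 - exp(-lam * c), so F^{-1}(u) = -log(1 - u) / lam.
import numpy as np

rng = np.random.default_rng(0)

def draw_exponential_inverse_cdf(lam, M, rng):
    u = rng.uniform(size=M)          # u^(m) iid uniform(0,1)
    return -np.log(1.0 - u) / lam    # theta^(m) = F^{-1}(u^(m))

theta = draw_exponential_inverse_cdf(lam=2.0, M=100_000, rng=rng)
print(theta.mean())                  # should be close to 1/lam = 0.5
```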

Geweke (2005) Section 4.1, pp. 106-110
General case: $\theta^{(m)} \sim p(\theta \mid I)$ and $\omega^{(m)} \mid \theta^{(m)} \sim p(\omega \mid \theta^{(m)}, I)$, where $\theta^{(m)} \in \Theta$, $\omega^{(m)} \in \Omega$, and all draws are independent. For given $h$ and $M$, let $\bar h^{(M)} = M^{-1} \sum_{m=1}^{M} h(\omega^{(m)})$.
Theorem 4.1(a): $\omega^{(m)} \overset{i.i.d.}{\sim} p(\omega \mid I)$.
Theorem 4.1(b): If $\bar h = \mathrm{E}[h(\omega) \mid I]$ exists, then $\bar h^{(M)} \overset{a.s.}{\to} \bar h$ (strong law of large numbers).

Geweke (2005) Section 4.1, pp. 106-110
Theorem 4.1(c): If $\bar h = \mathrm{E}[h(\omega) \mid I]$ and $\mathrm{var}[h(\omega) \mid I] = \sigma^2$ exist, then
$M^{1/2}(\bar h^{(M)} - \bar h) \overset{d}{\to} N(0, \sigma^2)$ and $\hat\sigma^{2(M)} = M^{-1} \sum_{m=1}^{M} [h(\omega^{(m)}) - \bar h^{(M)}]^2 \overset{a.s.}{\to} \sigma^2$.
(Lindeberg-Levy central limit theorem and strong law of large numbers.)
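
A minimal sketch of Theorems 4.1(b)-(c) in practice: approximate $\bar h$ by direct sampling and report a numerical standard error $(\hat\sigma^{2(M)}/M)^{1/2}$. The standard normal distribution of interest and $h(\omega) = \omega^2$ are illustrative assumptions, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

M = 50_000
omega = rng.normal(loc=0.0, scale=1.0, size=M)   # omega^(m) iid p(omega | I) (illustrative)
h = omega ** 2                                   # h(omega) = omega^2, so E[h | I] = 1

h_bar = h.mean()                                 # h_bar^(M), Theorem 4.1(b)
sigma2_hat = ((h - h_bar) ** 2).mean()           # sigma_hat^2(M), Theorem 4.1(c)
nse = np.sqrt(sigma2_hat / M)                    # numerical standard error of h_bar^(M)

print(f"h_bar = {h_bar:.4f}, NSE = {nse:.4f}")   # approx. 1.0 with a small NSE
```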

Geweke (2005) Section 4.1, pp. 106-110
Theorem 4.1(d): If for given $p \in (0, 1)$ there is a unique $h_p$ such that the statements $P[h(\omega) \le h_p \mid I] \ge p$ and $P[h(\omega) \ge h_p \mid I] \ge 1 - p$ are both true, then $\hat h_p^{(M)} \overset{a.s.}{\to} h_p$, where $\hat h_p^{(M)}$ is any real number such that
$M^{-1} \sum_{m=1}^{M} I_{(-\infty,\, \hat h_p^{(M)}]}[h(\omega^{(m)})] \ge p$ and $M^{-1} \sum_{m=1}^{M} I_{[\hat h_p^{(M)},\, \infty)}[h(\omega^{(m)})] \ge 1 - p$.
(Follows from Rao (1965), result 6f.2(i).)

Geweke (2005) Section 4.1, pp. 106-110
Theorem 4.1(e): If for given $p \in (0, 1)$ there is a unique $h_p$ such that the statements $P[h(\omega) \le h_p \mid I] \ge p$ and $P[h(\omega) \ge h_p \mid I] \ge 1 - p$ are both true, and moreover the density $p[h(\omega) = h_p \mid I] > 0$, then
$M^{1/2}(\hat h_p^{(M)} - h_p) \overset{d}{\to} N\{0,\; p(1 - p)/p[h(\omega) = h_p \mid I]^2\}$.
(Follows from Rao (1965), result 6f.2(i).)

Geweke (2005) Section 4.1, pp. 106-110
Approximation of Bayes actions by direct sampling: general case
The setting: $\{\theta^{(m)}, \omega^{(m)}\}$ i.i.d. with $\theta^{(m)} \sim p(\theta \mid I)$, $\omega^{(m)} \mid (\theta^{(m)}, I) \sim p(\omega \mid \theta^{(m)}, I)$.
Loss function $L(a, \omega) \ge 0$ defined on $\Omega \times A$, $A$ an open subset of $\mathbb{R}^q$.
The risk function $R(a) = \int_\Omega \int_\Theta L(a, \omega)\, p(\theta \mid I)\, p(\omega \mid \theta, I)\, d\theta\, d\omega$ has a strict global minimum at $\hat a \in A$.

Geweke (2005) Section 4.1, pp. 106-110
Let $A_M$ be the set of all roots of $M^{-1} \sum_{m=1}^{M} \partial L(a, \omega^{(m)})/\partial a = 0$.
Theorem 4.2(a): If
(1) $M^{-1} \sum_{m=1}^{M} L(a, \omega^{(m)})$ converges uniformly to $R(a)$ for all $a \in N(\hat a)$, almost surely;
(2) $\partial L(a, \omega)/\partial a$ exists and is a continuous function of $a$, for all $\omega \in \Omega$ and all $a \in N(\hat a)$;
then for any $\varepsilon > 0$,
$\lim_{M \to \infty} P\left[\inf_{a \in A_M} (a - \hat a)'(a - \hat a) > \varepsilon \mid I\right] = 0$.

Geweke (2005) Section 4.1, pp. 106-110
Theorem 4.2(b): If, in addition,
(3) $\partial^2 L(a, \omega)/\partial a\, \partial a'$ exists and is a continuous function of $a$, for all $\omega \in \Omega$ and all $a \in N(\hat a)$;
(4) $B = \mathrm{var}[\partial L(a, \omega)/\partial a \,|_{a = \hat a} \mid I]$ exists and is finite;
(5) $H = \mathrm{E}[\partial^2 L(a, \omega)/\partial a\, \partial a' \,|_{a = \hat a} \mid I]$ exists and is finite and nonsingular;
(6) for any $\varepsilon > 0$ there exists $M_\varepsilon$ such that $P[\sup_{a \in N(\hat a)} |\partial^3 L(a, \omega)/\partial a_i\, \partial a_j\, \partial a_k| < M_\varepsilon \mid I] \ge 1 - \varepsilon$ for all $i, j, k = 1, \ldots, m$;

Geweke (2005) Section 4.1, pp. 106-110
then:
1. $M^{1/2}(\hat a_M - \hat a) \overset{d}{\to} N(0, H^{-1} B H^{-1})$;
2. $M^{-1} \sum_{m=1}^{M} \partial L(a, \omega^{(m)})/\partial a \,|_{a = \hat a_M}\; \partial L(a, \omega^{(m)})/\partial a' \,|_{a = \hat a_M} \overset{p}{\to} B$;
3. $M^{-1} \sum_{m=1}^{M} \partial^2 L(a, \omega^{(m)})/\partial a\, \partial a' \,|_{a = \hat a_M} \overset{p}{\to} H$.
This all follows from Amemiya (1985), Chapter 4, in straightforward fashion.
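
A sketch of how Theorem 4.2 is used: approximate the Bayes action by minimizing the sample average loss over the simulated draws. The LinEx loss and the standard normal draws are illustrative assumptions (the slides do not fix a loss function); for this loss the Monte Carlo minimizer has a closed form, which serves as a check.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Illustrative posterior draws omega^(m) (standard normal) and LinEx loss
# L(a, omega) = exp(c*(a - omega)) - c*(a - omega) - 1, a smooth asymmetric loss.
M, c = 50_000, 1.0
omega = rng.normal(size=M)

def risk_hat(a):
    # M^{-1} sum_m L(a, omega^(m)), the Monte Carlo estimate of R(a)
    d = c * (a - omega)
    return np.mean(np.exp(d) - d - 1.0)

a_hat = minimize_scalar(risk_hat, bounds=(-5.0, 5.0), method="bounded").x
a_closed = -np.log(np.mean(np.exp(-c * omega))) / c   # exact minimizer for LinEx loss
print(a_hat, a_closed)                                # both near -c/2 = -0.5 here
```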

Section 4.2, p. 110
Acceptance and Importance Sampling: motivation and approach
We have no practical method to simulate $\theta^{(m)} \overset{iid}{\sim} p(\theta \mid I)$. We can simulate $\theta^{(m)} \overset{iid}{\sim} p(\theta \mid S)$, where $p(\theta \mid S)$ is similar to $p(\theta \mid I)$. Can we use $\theta^{(m)} \overset{iid}{\sim} p(\theta \mid S)$ to simulate from $p(\theta \mid I)$? Yes, but the sense in which $p(\theta \mid S)$ is similar to $p(\theta \mid I)$ is critical.

Section 4.2.1, pp. 111-114
Acceptance Sampling
Density of interest $p(\theta \mid I)$: we cannot simulate $\theta^{(m)} \overset{iid}{\sim} p(\theta \mid I)$, but we can evaluate a kernel $k(\theta \mid I) \propto p(\theta \mid I)$.
Source density $p(\theta \mid S)$: we can simulate $\theta^{(m)} \overset{iid}{\sim} p(\theta \mid S)$, and we can evaluate $k(\theta \mid S) \propto p(\theta \mid S)$.

Section 4.2.1, pp. 111-114
Acceptance sampling: key idea
$\sup_\theta p(\theta \mid I)/p(\theta \mid S) = p(1.16 \mid I)/p(1.16 \mid S) = a = 1.86$ (values for the example densities plotted on the slide).
Draw $\theta$ from $p(\theta \mid S)$, and accept the draw with probability $p(\theta \mid I)/[a\, p(\theta \mid S)]$.

Section 4.2.1, pp. 111-114
Theorem 4.2.1 Acceptance sampling. Let $k(\theta \mid I) = c_I\, p(\theta \mid I)$ be a kernel of the density of interest $p(\theta \mid I)$, and let $k(\theta \mid S) = c_S\, p(\theta \mid S)$ be a kernel of the source density $p(\theta \mid S)$. Let $r = \sup_{\theta \in \Theta} k(\theta \mid I)/k(\theta \mid S) < \infty$. Suppose that $\theta^{(m)}$ is drawn as follows.
(1) Draw $u$ uniform on $[0, 1]$.
(2) Draw $\theta \sim p(\theta \mid S)$.
(3) If $u > k(\theta \mid I)/[r\, k(\theta \mid S)]$, return to step 1.
(4) Set $\theta^{(m)} = \theta$.
If the draws in steps 1 and 2 are independent, then $\theta^{(m)} \sim p(\theta \mid I)$.
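
A minimal sketch of the four-step algorithm, under illustrative assumptions not in the slides: the density of interest is N(0,1) known only through the kernel $k(\theta \mid I) = \exp(-\theta^2/2)$, and the source is a Cauchy distribution with kernel $k(\theta \mid S) = 1/(1+\theta^2)$, for which $r = 2e^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(3)

def k_I(theta):                                 # kernel of the density of interest
    return np.exp(-0.5 * theta ** 2)

def k_S(theta):                                 # kernel of the Cauchy source density
    return 1.0 / (1.0 + theta ** 2)

r = 2.0 * np.exp(-0.5)                          # sup of k_I/k_S for this pair

def accept_sample(M, rng):
    draws = []
    while len(draws) < M:
        theta = rng.standard_cauchy()           # step 2: draw from p(theta | S)
        u = rng.uniform()                       # step 1: u ~ uniform[0, 1]
        if u <= k_I(theta) / (r * k_S(theta)):  # step 3: accept, else try again
            draws.append(theta)                 # step 4: theta^(m) = theta
    return np.array(draws)

theta = accept_sample(10_000, rng)
print(theta.mean(), theta.var())                # close to 0 and 1
```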

Section 4.2.1, pp. 111-114
$k(\theta \mid I) = c_I\, p(\theta \mid I)$, $k(\theta \mid S) = c_S\, p(\theta \mid S)$, $\Theta = \{\theta : p(\theta \mid S) > 0\}$.
(1) Draw $u$ uniform on $[0, 1]$. (2) Draw $\theta \sim p(\theta \mid S)$. (3) If $u > k(\theta \mid I)/[r\, k(\theta \mid S)]$, return to step 1. (4) Set $\theta^{(m)} = \theta$.
$P[(3) \to (4)] = \int_\Theta \{k(\theta \mid I)/[r\, k(\theta \mid S)]\}\, p(\theta \mid S)\, d\nu(\theta) = c_I/(r c_S)$
$P[(3) \to (4),\, \theta \in A] = \int_A \{k(\theta \mid I)/[r\, k(\theta \mid S)]\}\, p(\theta \mid S)\, d\nu(\theta) = \int_A k(\theta \mid I)\, d\nu(\theta)/(r c_S)$
$P[\theta \in A \mid (3) \to (4)] = \int_A k(\theta \mid I)\, d\nu(\theta)/c_I = \int_A p(\theta \mid I)\, d\nu(\theta) = P(\theta \in A \mid I)$

Section 4.2.1, pp. 111-114
Efficiency of acceptance sampling
$k(\theta \mid I) = c_I\, p(\theta \mid I)$, $k(\theta \mid S) = c_S\, p(\theta \mid S)$, $r = \sup_{\theta \in \Theta} k(\theta \mid I)/k(\theta \mid S)$, $a = \sup_{\theta \in \Theta} p(\theta \mid I)/p(\theta \mid S)$.
$P[(3) \to (4)] = \int_\Theta \{k(\theta \mid I)/[r\, k(\theta \mid S)]\}\, p(\theta \mid S)\, d\nu(\theta) = \dfrac{c_I}{r\, c_S} = \dfrac{c_I}{c_S \sup_{\theta \in \Theta} k(\theta \mid I)/k(\theta \mid S)} = \inf_{\theta \in \Theta} \dfrac{c_I\, k(\theta \mid S)}{c_S\, k(\theta \mid I)} = \inf_{\theta \in \Theta} \dfrac{p(\theta \mid S)}{p(\theta \mid I)} = a^{-1}$
So on average the algorithm requires about $a$ draws from $p(\theta \mid S)$ for each accepted draw from $p(\theta \mid I)$.

Section 4.2.1, pp. 111-114
Example: Sampling from a standard normal distribution truncated to the interval $(a, b)$

Characteristics of the truncation                                  Source distribution
$a < 0 < b$: $-a \le t_1$ and $b \le t_1$                          Uniform$(a, b)$
$a < 0 < b$: $-a > t_1$ or $b > t_1$                               $N(0, 1)$
$0 \le a < b < \infty$: $\phi(a)/\phi(b) \le t_2$                  Uniform$(a, b)$
$0 \le a < b < \infty$: $\phi(a)/\phi(b) > t_2$, $a \le t_3$       $N(0, 1)$
$0 \le a < b < \infty$: $\phi(a)/\phi(b) > t_2$, $a > t_3$         $a + \exp(a^{-1})$
$b = \infty$: $a \le t_4$                                          $N(0, 1)$
$b = \infty$: $a > t_4$                                            $a + \exp(a^{-1})$

with $t_1 = 0.375$, $t_2 = 2.18$, $t_3 = 0.725$, and $t_4 = 0.45$.
This is 2 to 6 times as fast as the inverse c.d.f. method.
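
A sketch of the last row of the table: for $a > t_4$ the source is the translated exponential $a + \exp(a^{-1})$. The acceptance probability $\exp[-(x-a)^2/2]$ used below is worked out from $r = \sup_\theta k(\theta \mid I)/k(\theta \mid S)$ for this particular pair rather than taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(4)

def truncated_normal_tail(a, M, rng):
    """Draw from N(0,1) truncated to [a, inf), a > 0, by acceptance sampling with
    the translated exponential source a + exp(a^{-1}) (mean a^{-1}, i.e. rate a).
    For this pair the acceptance probability works out to exp(-(x - a)^2 / 2)."""
    draws = np.empty(M)
    m = 0
    while m < M:
        x = a + rng.exponential(scale=1.0 / a)          # candidate from the source
        if rng.uniform() <= np.exp(-0.5 * (x - a) ** 2):
            draws[m] = x
            m += 1
    return draws

theta = truncated_normal_tail(a=2.0, M=20_000, rng=rng)
print(theta.mean())   # E[Z | Z > 2] = phi(2)/(1 - Phi(2)) ~ 2.37
```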

Section 4.2.2, pp. 114-119
Importance Sampling
Density of interest $p(\theta \mid I)$: we cannot simulate $\theta^{(m)} \overset{iid}{\sim} p(\theta \mid I)$, but we can evaluate $k(\theta \mid I) \propto p(\theta \mid I)$.
Source density $p(\theta \mid S)$: we can simulate $\theta^{(m)} \overset{iid}{\sim} p(\theta \mid S)$, and we can evaluate $k(\theta \mid S) \propto p(\theta \mid S)$.
Weighting function: $w(\theta) = k(\theta \mid I)/k(\theta \mid S)$.

Section 4.2.2, pp. 114-119
Theorem 4.2 (a,b) Approximation of moments by importance sampling. Suppose $\bar h = \mathrm{E}[h(\omega) \mid I]$ exists, the support of $p(\theta \mid S)$ includes $\Theta$, $\theta^{(m)} \overset{iid}{\sim} p(\theta \mid S)$, and $\omega^{(m)} \sim p(\omega \mid \theta^{(m)}, I)$. Then
$\bar h^{(M)} = \dfrac{\sum_{m=1}^{M} w(\theta^{(m)})\, h(\omega^{(m)})}{\sum_{m=1}^{M} w(\theta^{(m)})} \overset{a.s.}{\to} \bar h$.
Proof. The $w(\theta^{(m)})$ are i.i.d., and
$\mathrm{E}[w(\theta) \mid S] = \int_\Theta \dfrac{k(\theta \mid I)}{k(\theta \mid S)}\, p(\theta \mid S)\, d\nu(\theta) = c_I/c_S = \bar w$,
so $\bar w^{(M)} = M^{-1} \sum_{m=1}^{M} w(\theta^{(m)}) \overset{a.s.}{\to} \bar w$.

Section 4.2.2, pp. 114-119
Proof (concluded). So far we have
$\bar h^{(M)} = \dfrac{\sum_{m=1}^{M} w(\theta^{(m)})\, h(\omega^{(m)})}{\sum_{m=1}^{M} w(\theta^{(m)})}$, $\quad \bar w^{(M)} = M^{-1} \sum_{m=1}^{M} w(\theta^{(m)}) \overset{a.s.}{\to} \bar w$.
With $w(\theta) = k(\theta \mid I)/k(\theta \mid S)$, the pairs $\{w(\theta^{(m)}), h(\omega^{(m)})\}$ are i.i.d., and
$\mathrm{E}[w(\theta)\, h(\omega) \mid S] = \int_\Theta w(\theta) \int_\Omega h(\omega)\, p(\omega \mid \theta, I)\, d\nu(\omega)\, p(\theta \mid S)\, d\nu(\theta) = (c_I/c_S) \int_\Theta \int_\Omega h(\omega)\, p(\omega \mid \theta, I)\, p(\theta \mid I)\, d\nu(\omega)\, d\nu(\theta) = (c_I/c_S)\, \mathrm{E}[h(\omega) \mid I] = \bar w\, \bar h$,
so $\overline{wh}^{(M)} = M^{-1} \sum_{m=1}^{M} w(\theta^{(m)})\, h(\omega^{(m)}) \overset{a.s.}{\to} \bar w\, \bar h$. Hence $\bar h^{(M)} = \overline{wh}^{(M)}/\bar w^{(M)} \overset{a.s.}{\to} \bar w \bar h/\bar w = \bar h$.

Section 4.2.2, pp. 114-119
Theorem 4.2 (c,d). Suppose $\bar h = \mathrm{E}[h(\omega) \mid I]$ and $\sigma^2 = \mathrm{var}[h(\omega) \mid I]$ exist, and $w(\theta) = k(\theta \mid I)/k(\theta \mid S)$ is bounded above on $\Theta$. Then
$M^{1/2}(\bar h^{(M)} - \bar h) \overset{d}{\to} N(0, \tau^2)$ and $\hat\tau^{2(M)} = \dfrac{M \sum_{m=1}^{M} w(\theta^{(m)})^2 [h(\omega^{(m)}) - \bar h^{(M)}]^2}{[\sum_{m=1}^{M} w(\theta^{(m)})]^2} \overset{a.s.}{\to} \tau^2$.
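
A sketch of the estimator of Theorem 4.2 (a,b) together with the numerical standard error implied by (c,d). The density of interest (standard normal), the source (Student-t with 5 degrees of freedom), and $h$ are illustrative assumptions, with $\omega = \theta$.

```python
import numpy as np

rng = np.random.default_rng(5)

def k_I(t):                                    # kernel of the N(0,1) density of interest
    return np.exp(-0.5 * t ** 2)

def k_S(t):                                    # kernel of the Student-t(5) source density
    return (1.0 + t ** 2 / 5.0) ** -3.0

M = 100_000
theta = rng.standard_t(df=5, size=M)           # theta^(m) iid p(theta | S)
w = k_I(theta) / k_S(theta)                    # weighting function w(theta)
h = theta ** 2                                 # h with E[h | I] = 1

h_bar = np.sum(w * h) / np.sum(w)                                    # Theorem 4.2 (a,b)
tau2_hat = M * np.sum(w ** 2 * (h - h_bar) ** 2) / np.sum(w) ** 2    # Theorem 4.2 (c,d)
nse = np.sqrt(tau2_hat / M)
print(f"h_bar = {h_bar:.4f} (true 1.0), NSE = {nse:.4f}")
```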

Section 4.2.2, pp. 114-119
Proof.
$\mathrm{E}[w(\theta)^2 h(\omega)^2 \mid S] = \int_\Theta w(\theta)^2 \int_\Omega h(\omega)^2\, p(\omega \mid \theta, I)\, d\nu(\omega)\, p(\theta \mid S)\, d\nu(\theta) = (c_I/c_S) \int_\Theta w(\theta) \int_\Omega h(\omega)^2\, p(\omega \mid \theta, I)\, d\nu(\omega)\, p(\theta \mid I)\, d\nu(\theta) \le (c_I/c_S)\, \mathrm{E}[h(\omega)^2 \mid I] \sup_{\theta \in \Theta} w(\theta) < \infty$.
Substitute $h(\omega) \equiv 1$ and conclude $\mathrm{E}[w(\theta)^2 \mid S] < \infty$ as well.

Section 4.2.2, pp. 114-119
Proof (continued). Now we know that
$V = \mathrm{var}\left[\begin{pmatrix} w(\theta) \\ w(\theta)\, h(\omega) \end{pmatrix} \,\middle|\, S\right]$
is finite. Apply the Lindeberg-Levy central limit theorem:
$M^{1/2}\left[\begin{pmatrix} \bar w^{(M)} \\ \overline{wh}^{(M)} \end{pmatrix} - \begin{pmatrix} \bar w \\ \bar w\, \bar h \end{pmatrix}\right] \overset{d}{\to} N(0, V)$.

Section 4.2.2, pp. 114-119
Proof (continued). Since
$\bar h^{(M)} - \bar h = \dfrac{\overline{wh}^{(M)}}{\bar w^{(M)}} - \dfrac{\bar w\, \bar h}{\bar w} = \dfrac{\overline{wh}^{(M)} - \bar w\, \bar h}{\bar w} - \dfrac{\bar h\, (\bar w^{(M)} - \bar w)}{\bar w} + o_p(M^{-1/2})$,
it follows that $M^{1/2}(\bar h^{(M)} - \bar h) \overset{d}{\to} N(0, \tau^2)$ with
$\tau^2 = \bar w^{-2}\{\mathrm{var}[w(\theta)\, h(\omega) \mid S] - 2\bar h\, \mathrm{cov}[w(\theta)\, h(\omega), w(\theta) \mid S] + \bar h^2\, \mathrm{var}[w(\theta) \mid S]\} = \bar w^{-2}\, \mathrm{var}\{w(\theta)\, h(\omega) - \bar h\, w(\theta) \mid S\}$.

Section 4.2.2, pp. 114-119
Proof (concluded).
$\tau^2 = \bar w^{-2}\{\mathrm{var}[w(\theta)\, h(\omega) \mid S] - 2\bar h\, \mathrm{cov}[w(\theta)\, h(\omega), w(\theta) \mid S] + \bar h^2\, \mathrm{var}[w(\theta) \mid S]\} = \bar w^{-2}\, \mathrm{var}\{w(\theta)\, h(\omega) - \bar h\, w(\theta) \mid S\}$,
and
$\hat\tau^{2(M)} = \dfrac{M \sum_{m=1}^{M} w(\theta^{(m)})^2 [h(\omega^{(m)}) - \bar h^{(M)}]^2}{[\sum_{m=1}^{M} w(\theta^{(m)})]^2} = \dfrac{M^{-1} \sum_{m=1}^{M} w(\theta^{(m)})^2 [h(\omega^{(m)}) - \bar h^{(M)}]^2}{[M^{-1} \sum_{m=1}^{M} w(\theta^{(m)})]^2} \overset{a.s.}{\to} \tau^2$.

Section 4.2.2, pp. 114-119
Example: Reweighting to a different prior distribution

Model   Prior                      Likelihood
$A_1$   $p(\theta_A \mid A_1)$     $p(y \mid \theta_A, A)$
$A_2$   $p(\theta_A \mid A_2)$     $p(y \mid \theta_A, A)$

Source: $\theta_A^{(m)} \overset{iid}{\sim} p(\theta_A \mid y^o, A_1) = p(\theta_A \mid S)$; density of interest: $p(\theta_A \mid y^o, A_2) = p(\theta_A \mid I)$.
$w(\theta_A) = \dfrac{p(\theta_A \mid y^o, A_2)}{p(\theta_A \mid y^o, A_1)} \propto \dfrac{p(\theta_A \mid A_2)\, p(y^o \mid \theta_A, A)}{p(\theta_A \mid A_1)\, p(y^o \mid \theta_A, A)} = \dfrac{p(\theta_A \mid A_2)}{p(\theta_A \mid A_1)}$
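
A sketch of the reweighting example under an illustrative conjugate setup (an assumption, not from the slides), chosen so that the posterior under the new prior is available in closed form as a check.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Illustrative conjugate setup (assumption): y_i | theta ~ N(theta, 1);
# prior A1: theta ~ N(0, 10^2); prior A2: theta ~ N(0, 1).
n, ybar = 20, 0.8

# Posterior under A1 (the source S) is normal in closed form here.
v1 = 1.0 / (n + 1.0 / 100.0)
m1 = v1 * n * ybar
theta = rng.normal(m1, np.sqrt(v1), size=100_000)     # theta^(m) ~ p(theta | y^o, A1)

# Reweight by the prior ratio w(theta) = p(theta | A2) / p(theta | A1).
w = norm.pdf(theta, 0.0, 1.0) / norm.pdf(theta, 0.0, 10.0)

post_mean_A2 = np.sum(w * theta) / np.sum(w)          # importance-sampling estimate
v2 = 1.0 / (n + 1.0)
print(post_mean_A2, v2 * n * ybar)                    # both approximately 0.762
```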

Section 4.2.2, pp. 114-119
A useful summary measure of computational efficiency
$\mathrm{var}[h(\omega) \mid I] = \sigma^2$, $\quad \mathrm{var}[\bar h^{(M)} \mid S] \approx \tau^2/M$.
If $w(\theta) \equiv 1$ for all $\theta$, i.e. $p(\theta \mid S) = p(\theta \mid I)$, then $\tau^2 = \sigma^2$.
The relative numerical efficiency (RNE) of the simulator is $\mathrm{RNE} = \sigma^2/\tau^2$.
The number $M^*$ of i.i.d. draws from $p(\theta \mid I)$ giving the same numerical accuracy satisfies $\sigma^2/M^* = \tau^2/M$, so $M^*/M = \sigma^2/\tau^2 = \mathrm{RNE}$.
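
A sketch of the RNE computation for a self-normalized importance-sampling estimator, reusing the illustrative normal-target / Student-t(5)-source pair; both $\sigma^2$ and $\tau^2$ are estimated from the same weighted draws.

```python
import numpy as np

rng = np.random.default_rng(7)

# RNE = sigma^2 / tau^2.  Illustrative setup (assumption): density of interest
# N(0,1), source Student-t(5), h(theta) = theta.
M = 100_000
theta = rng.standard_t(df=5, size=M)
w = np.exp(-0.5 * theta ** 2) * (1.0 + theta ** 2 / 5.0) ** 3.0    # kernel ratio w(theta)
h = theta

h_bar = np.sum(w * h) / np.sum(w)
tau2 = M * np.sum(w ** 2 * (h - h_bar) ** 2) / np.sum(w) ** 2      # var of h_bar is tau2/M
sigma2 = np.sum(w * (h - h_bar) ** 2) / np.sum(w)                  # estimate of var[h | I]
rne = sigma2 / tau2
print(f"RNE = {rne:.3f}; M draws are worth about {rne * M:.0f} i.i.d. draws")
```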

Section 4.2.2, pp. 114-119
Theorem 4.3 (a) Approximation of Bayes actions by importance sampling. Conditions:
(1) $\theta^{(m)} \overset{iid}{\sim} p(\theta \mid S)$, $\omega^{(m)} \mid \theta^{(m)} \sim p(\omega \mid \theta^{(m)}, I)$;
(2) $w(\theta) = k(\theta \mid I)/k(\theta \mid S) < c < \infty$ for all $\theta$;
(3) $L(a, \omega) \ge 0$ defined on $\Omega \times A$, $A \subseteq \mathbb{R}^q$;
(4) $R(a) = \int_\Omega \int_\Theta L(a, \omega)\, p(\theta \mid I)\, p(\omega \mid \theta, I)\, d\theta\, d\omega$;
(5) $R(\hat a) < R(a)$ for all $a \in A$, $a \ne \hat a$.

Section 4.2.2, pp. 114-119
Theorem 4.3 (a) (concluded). More conditions:
(A1) $M^{-1} \sum_{m=1}^{M} L(a, \omega^{(m)}) \to R(a)$ uniformly for $a \in N(\hat a)$ (almost surely);
(A2) $\partial L(a, \omega)/\partial a$ is a continuous function of $a$ for $a \in N(\hat a)$.
Denote $A_M = \{a : M^{-1} \sum_{m=1}^{M} [\partial L(a, \omega^{(m)})/\partial a]\, w(\theta^{(m)}) = 0\}$.
Then for any $\varepsilon > 0$,
$\lim_{M \to \infty} P\left[\inf_{a \in A_M} (a - \hat a)'(a - \hat a) > \varepsilon \mid S\right] = 0$.

Section 4.2.2, pp. 114-119
Theorem 4.3 (b) Approximation of Bayes actions by importance sampling. Additional conditions:
(B1) $\partial^2 L(a, \omega)/\partial a\, \partial a'$ is a continuous function of $a$ for $a \in N(\hat a)$, $\omega \in \Omega$;
(B2) $B = \mathrm{var}[\partial L(a, \omega)/\partial a \,|_{a = \hat a}\; w(\theta)^{1/2} \mid S]$ exists and is finite;
(B3) $H = \mathrm{E}[\partial^2 L(a, \omega)/\partial a\, \partial a' \,|_{a = \hat a} \mid S]$ exists and is finite and nonsingular;
(B4) for any $\varepsilon > 0$ there exists $M_\varepsilon$ such that $P[\sup_{a \in N(\hat a)} |\partial^3 L(a, \omega)/\partial a_i\, \partial a_j\, \partial a_k| < M_\varepsilon \mid S] \ge 1 - \varepsilon$ for all $i, j, k = 1, \ldots, m$.

Section 4.2.2, pp. 114-119
Theorem 4.3 (b) (concluded). Then if $\hat a_M$ is any element of $A_M$ such that $\hat a_M \overset{p}{\to} \hat a$:
(1) $M^{1/2}(\hat a_M - \hat a) \overset{d}{\to} N(0, H^{-1} B H^{-1})$;
(2) $M^{-1} \sum_{m=1}^{M} w(\theta^{(m)})^2\; \partial L(a, \omega^{(m)})/\partial a \,|_{a = \hat a_M}\; \partial L(a, \omega^{(m)})/\partial a' \,|_{a = \hat a_M} \overset{p}{\to} B$;
(3) $M^{-1} \sum_{m=1}^{M} w(\theta^{(m)})\; \partial^2 L(a, \omega^{(m)})/\partial a\, \partial a' \,|_{a = \hat a_M} \overset{p}{\to} H$.

Section 4.3, pp. 119-120
Markov Chain Monte Carlo Methods: the central idea
$\theta^{(m)} \sim p(\theta \mid \theta^{(m-1)}, C)$ $(m = 1, 2, 3, \ldots)$
If $C$ is specified correctly, then $\theta^{(m-1)} \sim p(\theta \mid I)$, $\theta^{(m)} \sim p(\theta \mid \theta^{(m-1)}, C)$ $\Rightarrow$ $\theta^{(m)} \sim p(\theta \mid I)$.
Better yet: if $\theta^{(m-1)} \sim p(\theta \mid J)$, $\theta^{(m)} \sim p(\theta \mid \theta^{(m-1)}, C)$ $\Rightarrow$ $\theta^{(m)} \sim p(\theta \mid J)$, then $J = I$.
And even better: $p(\theta^{(m)} \mid \theta^{(0)}, C) \overset{d}{\to} p(\theta \mid I)$ for all $\theta^{(0)} \in \Theta$.

Section 4.3, pp. 119-120
The central idea (continued)
If $p(\theta^{(m)} \mid \theta^{(0)}, C) \overset{d}{\to} p(\theta \mid I)$ for all $\theta^{(0)} \in \Theta$, then we can approximate $\mathrm{E}[h(\omega) \mid I]$ by
(1) iterating the chain $B$ ("burn-in") times;
(2) drawing $\omega^{(m)} \sim p(\omega \mid \theta^{(m)})$ $(m = 1, \ldots, M)$;
(3) computing $\bar h^{(M)} = M^{-1} \sum_{m=1}^{M} h(\omega^{(m)})$.

Section 4.3.1, pp. 120-122
The Gibbs sampler
Blocking: $\theta' = (\theta'_{(1)}, \ldots, \theta'_{(B)})$.
Some notation, corresponding to any subvector $\theta_{(b)}$:
$\theta'_{<(b)} = (\theta'_{(1)}, \ldots, \theta'_{(b-1)})$ $(b = 2, \ldots, B)$, with $\theta_{<(1)} = \emptyset$;
$\theta'_{>(b)} = (\theta'_{(b+1)}, \ldots, \theta'_{(B)})$ $(b = 1, \ldots, B-1)$, with $\theta_{>(B)} = \emptyset$;
$\theta'_{-(b)} = (\theta'_{<(b)}, \theta'_{>(b)})$.
Very important: choose the blocking so that drawing $\theta_{(b)} \sim p(\theta_{(b)} \mid \theta_{-(b)}, I)$ is possible.

Section 4.3.1, pp. 120-122
Intuitive argument for the Gibbs sampler
Imagine $\theta^{(0)} \sim p(\theta \mid I)$, and then in succession
$\theta^{(1)}_{(1)} \sim p(\theta_{(1)} \mid \theta^{(0)}_{-(1)}, I)$,
$\theta^{(1)}_{(2)} \sim p(\theta_{(2)} \mid \theta^{(1)}_{<(2)}, \theta^{(0)}_{>(2)}, I)$,
$\theta^{(1)}_{(3)} \sim p(\theta_{(3)} \mid \theta^{(1)}_{<(3)}, \theta^{(0)}_{>(3)}, I)$,
$\quad \vdots$
$\theta^{(1)}_{(b)} \sim p(\theta_{(b)} \mid \theta^{(1)}_{<(b)}, \theta^{(0)}_{>(b)}, I)$,
$\quad \vdots$
$\theta^{(1)}_{(B)} \sim p(\theta_{(B)} \mid \theta^{(1)}_{<(B)}, I)$.
We have $\theta^{(1)} \sim p(\theta \mid I)$.

Section 4.3.1, pp. 120-122
Now repeat:
$\theta^{(2)}_{(1)} \sim p(\theta_{(1)} \mid \theta^{(1)}_{-(1)}, I)$,
$\theta^{(2)}_{(2)} \sim p(\theta_{(2)} \mid \theta^{(2)}_{<(2)}, \theta^{(1)}_{>(2)}, I)$,
$\theta^{(2)}_{(3)} \sim p(\theta_{(3)} \mid \theta^{(2)}_{<(3)}, \theta^{(1)}_{>(3)}, I)$,
$\quad \vdots$
$\theta^{(2)}_{(b)} \sim p(\theta_{(b)} \mid \theta^{(2)}_{<(b)}, \theta^{(1)}_{>(b)}, I)$,
$\quad \vdots$
$\theta^{(2)}_{(B)} \sim p(\theta_{(B)} \mid \theta^{(2)}_{<(B)}, I)$.
We have $\theta^{(2)} \sim p(\theta \mid I)$.

Section 4.3.1, pp. 120-122
The general step in the Gibbs sampler is
$\theta^{(m)}_{(b)} \sim p(\theta_{(b)} \mid \theta^{(m)}_{<(b)}, \theta^{(m-1)}_{>(b)}, I)$
for $b = 1, \ldots, B$ and $m = 1, 2, \ldots$. This defines the Markov chain
$p(\theta^{(m)} \mid \theta^{(m-1)}, G) = \prod_{b=1}^{B} p(\theta^{(m)}_{(b)} \mid \theta^{(m)}_{<(b)}, \theta^{(m-1)}_{>(b)}, I)$.
Key property: $\theta^{(0)} \sim p(\theta \mid I) \Rightarrow \theta^{(m)} \sim p(\theta \mid I)$.
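
A sketch of the general Gibbs step for a two-block illustrative target (an assumption, not from the slides): a bivariate normal with correlation $\rho$, whose full conditionals $p(\theta_{(b)} \mid \theta_{-(b)}, I)$ are univariate normals.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two-block Gibbs sampler.  Illustrative target: (theta_1, theta_2) bivariate normal
# with zero means, unit variances, correlation rho; each full conditional is
# N(rho * theta_other, 1 - rho^2).
rho, M, burn_in = 0.9, 20_000, 1_000
s = np.sqrt(1.0 - rho ** 2)

theta = np.zeros(2)                                # theta^(0): any starting point
draws = np.empty((M, 2))
for m in range(burn_in + M):
    theta[0] = rng.normal(rho * theta[1], s)       # block 1 given current block 2
    theta[1] = rng.normal(rho * theta[0], s)       # block 2 given updated block 1
    if m >= burn_in:
        draws[m - burn_in] = theta

print(draws.mean(axis=0), np.corrcoef(draws.T)[0, 1])   # ~ [0, 0] and ~ 0.9
```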

Section 4.3.1, pp. 120-122
Potential problems:

Section 4.3.2, pp. 122-125
The Metropolis-Hastings Algorithm
What it does: draw a candidate $\theta^* \sim q(\theta \mid \theta^{(m-1)}, H)$. Then
$P(\theta^{(m)} = \theta^*) = \alpha(\theta^* \mid \theta^{(m-1)}, H)$,
$P(\theta^{(m)} = \theta^{(m-1)}) = 1 - \alpha(\theta^* \mid \theta^{(m-1)}, H)$,
where
$\alpha(\theta^* \mid \theta^{(m-1)}, H) = \min\left\{\dfrac{p(\theta^* \mid I)/q(\theta^* \mid \theta^{(m-1)}, H)}{p(\theta^{(m-1)} \mid I)/q(\theta^{(m-1)} \mid \theta^*, H)},\; 1\right\}$.

Section 4.3.2, pp. 122-125
Some aspects of the Metropolis-Hastings algorithm
If we define $u(\theta^* \mid \theta, H) = q(\theta^* \mid \theta, H)\, \alpha(\theta^* \mid \theta, H)$, then
$P(\theta^{(m)} = \theta^{(m-1)} \mid \theta^{(m-1)} = \theta, H) = r(\theta \mid H) = 1 - \int_\Theta u(\theta^* \mid \theta, H)\, d\nu(\theta^*)$.
Notice that
$P(\theta^{(m)} \in A \mid \theta^{(m-1)} = \theta, H) = \int_A u(\theta^* \mid \theta, H)\, d\nu(\theta^*) + r(\theta \mid H)\, I_A(\theta)$.

Section 4.3.2, pp. 122-125
$u(\theta^* \mid \theta, H) = q(\theta^* \mid \theta, H)\, \alpha(\theta^* \mid \theta, H)$
We can write the transition density in one line using the Dirac delta function, an operator with the property
$\int_A \delta_\theta(\theta^*)\, f(\theta^*)\, d\nu(\theta^*) = f(\theta)\, I_A(\theta)$.
Then
$p(\theta^{(m)} \mid \theta^{(m-1)}, H) = u(\theta^{(m)} \mid \theta^{(m-1)}, H) + r(\theta^{(m-1)} \mid H)\, \delta_{\theta^{(m-1)}}(\theta^{(m)})$.

Section 4.3.2, pp. 122-125
Special cases of the Metropolis-Hastings algorithm
$\alpha(\theta^* \mid \theta^{(m-1)}, H) = \min\left\{\dfrac{p(\theta^* \mid I)/q(\theta^* \mid \theta^{(m-1)}, H)}{p(\theta^{(m-1)} \mid I)/q(\theta^{(m-1)} \mid \theta^*, H)},\; 1\right\}$
Special case 1, original Metropolis (1953): $q(\theta^* \mid \theta, H) = q(\theta \mid \theta^*, H)$ $\Rightarrow$
$\alpha(\theta^* \mid \theta^{(m-1)}, H) = \min\left[p(\theta^* \mid I)/p(\theta^{(m-1)} \mid I),\; 1\right]$.
Important example: the random walk Metropolis chain, $q(\theta^* \mid \theta, H) = q(\theta^* - \theta \mid H)$, where $q(\cdot \mid H)$ is symmetric about zero.
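
A sketch of the random walk Metropolis chain of special case 1, targeting a density known only up to a kernel; the equal-weight mixture of $N(-1,1)$ and $N(1,1)$ and all tuning numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def kernel(t):
    # Kernel of the illustrative target: equal-weight mixture of N(-1,1) and N(1,1).
    return np.exp(-0.5 * (t - 1.0) ** 2) + np.exp(-0.5 * (t + 1.0) ** 2)

M, burn_in, step = 50_000, 2_000, 2.0
theta = 0.0
draws = np.empty(M)
accepted = 0
for m in range(burn_in + M):
    proposal = theta + step * rng.normal()              # symmetric random walk candidate
    alpha = min(kernel(proposal) / kernel(theta), 1.0)  # Metropolis (1953) ratio
    if rng.uniform() < alpha:
        theta, accepted = proposal, accepted + 1
    if m >= burn_in:
        draws[m - burn_in] = theta

print(draws.mean(), draws.var(), accepted / (burn_in + M))  # ~0, ~2, and the acceptance rate
```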

Section 4.3.2, pp. 122-125
Special cases of the Metropolis-Hastings algorithm
$\alpha(\theta^* \mid \theta^{(m-1)}, H) = \min\left\{\dfrac{p(\theta^* \mid I)/q(\theta^* \mid \theta^{(m-1)}, H)}{p(\theta^{(m-1)} \mid I)/q(\theta^{(m-1)} \mid \theta^*, H)},\; 1\right\}$
Special case 2, Metropolis independence chain: $q(\theta^* \mid \theta, H) = q(\theta^* \mid H)$ $\Rightarrow$
$\alpha(\theta^* \mid \theta^{(m-1)}, H) = \min\left\{\dfrac{p(\theta^* \mid I)/q(\theta^* \mid H)}{p(\theta^{(m-1)} \mid I)/q(\theta^{(m-1)} \mid H)},\; 1\right\} = \min\left\{\dfrac{w(\theta^*)}{w(\theta^{(m-1)})},\; 1\right\}$,
where $w(\theta) = p(\theta \mid I)/q(\theta \mid H)$.
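
A sketch of the Metropolis independence chain of special case 2, reusing the weight function $w(\theta) = p(\theta \mid I)/q(\theta \mid H)$; the standard normal target and Student-t(5) proposal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

def w(t):
    # Weight function for the illustrative pair: N(0,1) target kernel over t(5) kernel.
    return np.exp(-0.5 * t ** 2) * (1.0 + t ** 2 / 5.0) ** 3.0

M, burn_in = 50_000, 1_000
theta = 0.0
draws = np.empty(M)
for m in range(burn_in + M):
    proposal = rng.standard_t(df=5)                       # candidate ignores current state
    if rng.uniform() < min(w(proposal) / w(theta), 1.0):  # alpha = min{w*/w, 1}
        theta = proposal
    if m >= burn_in:
        draws[m - burn_in] = theta

print(draws.mean(), draws.var())    # ~ 0 and ~ 1
```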

Section 4.3.2, pp. 122-125
Why does the Metropolis-Hastings algorithm work? A two-part argument.
Part 1: Suppose a transition probability density function $p(\theta^{(m)} \mid \theta^{(m-1)}, T)$ satisfies the reversibility condition with respect to $p(\theta \mid I)$:
$p(\theta^{(m-1)} \mid I)\, p(\theta^{(m)} \mid \theta^{(m-1)}, T) = p(\theta^{(m)} \mid I)\, p(\theta^{(m-1)} \mid \theta^{(m)}, T)$.
Then
$\int_\Theta p(\theta^{(m-1)} \mid I)\, p(\theta^{(m)} \mid \theta^{(m-1)}, T)\, d\nu(\theta^{(m-1)}) = \int_\Theta p(\theta^{(m)} \mid I)\, p(\theta^{(m-1)} \mid \theta^{(m)}, T)\, d\nu(\theta^{(m-1)}) = p(\theta^{(m)} \mid I) \int_\Theta p(\theta^{(m-1)} \mid \theta^{(m)}, T)\, d\nu(\theta^{(m-1)}) = p(\theta^{(m)} \mid I)$,
and so $p(\theta \mid I)$ is an invariant density of the Markov chain.

Section 4.3.2, pp. 122-125
Part 2 of the argument (how Hastings did it): Suppose we don't know the probability $\alpha(\theta^* \mid \theta^{(m-1)}, H)$, but we want $p(\theta^{(m)} \mid \theta^{(m-1)}, H)$ to be reversible with respect to $p(\theta \mid I)$:
$p(\theta^{(m-1)} \mid I)\, p(\theta^{(m)} \mid \theta^{(m-1)}, H) = p(\theta^{(m)} \mid I)\, p(\theta^{(m-1)} \mid \theta^{(m)}, H)$.
This is trivial if $\theta^{(m-1)} = \theta^{(m)}$. For $\theta^{(m-1)} \ne \theta^{(m)} = \theta^*$ we need
$p(\theta^{(m-1)} \mid I)\, q(\theta^* \mid \theta^{(m-1)}, H)\, \alpha(\theta^* \mid \theta^{(m-1)}, H) = p(\theta^* \mid I)\, q(\theta^{(m-1)} \mid \theta^*, H)\, \alpha(\theta^{(m-1)} \mid \theta^*, H)$.

Section 4.3.2, pp. 122-125
$p(\theta^{(m-1)} \mid I)\, q(\theta^* \mid \theta^{(m-1)}, H)\, \alpha(\theta^* \mid \theta^{(m-1)}, H) = p(\theta^* \mid I)\, q(\theta^{(m-1)} \mid \theta^*, H)\, \alpha(\theta^{(m-1)} \mid \theta^*, H)$
Suppose without loss of generality that
$p(\theta^{(m-1)} \mid I)\, q(\theta^* \mid \theta^{(m-1)}, H) > p(\theta^* \mid I)\, q(\theta^{(m-1)} \mid \theta^*, H)$.
Set $\alpha(\theta^{(m-1)} \mid \theta^*, H) = 1$ and
$\alpha(\theta^* \mid \theta^{(m-1)}, H) = \dfrac{p(\theta^* \mid I)\, q(\theta^{(m-1)} \mid \theta^*, H)}{p(\theta^{(m-1)} \mid I)\, q(\theta^* \mid \theta^{(m-1)}, H)}$.