LARGE DEVIATIONS FOR OCCUPATION MEASURES OF MARKOV PROCESSES (discrete time, noncompact case)

R. LIPTSER
Electrical Engineering-Systems, Tel Aviv University, 69978 Ramat Aviv, Tel Aviv, ISRAEL,
and Institute for Problems of Information Transmission, Academy of Sciences, Moscow, 101447, RUSSIA

Abstract

A simple proof of the Donsker-Varadhan large deviation principle (LDP) for the occupation measures of a discrete-time Markov process valued in R^1 is given. The proof is based on a new version of the Dupuis-Ellis large deviation principle for two-dimensional occupation measures. In our setting the existence of an invariant measure is not assumed; this assumption is replaced by one that is more natural from the point of view of applications. An example of a Markov process defined by a nonlinear recursion, for which the sufficient conditions for the large deviation principle are easily verified, is given.

Key words: Large Deviations, Exponential Tightness, Local Large Deviations.

1. Introduction. Main Result.

1. It is well known from Donsker and Varadhan [1] that the LDP holds for the occupation measures (π_n, n ≥ 1),

  π_n(A) = (1/n) Σ_{k=1}^{n} I(ξ_k ∈ A),   (1.1)

of a Markov process ξ = (ξ_k)_{k≥0}, valued in R^1, with a fixed initial point ξ₀ = x. It takes place under the following assumptions:

(F) ξ = (ξ_k)_{k≥0} is a Feller process;

(I) there exists a unique invariant measure α(dx): α(Γ) = ∫ π(x, Γ) α(dx), Γ ∈ B(R^1);

(H₁) there exists a nonnegative measurable function v(x) such that sup_{|x|≤N} v(x) < ∞ for every N > 0 and the function

  w(x) = log [ e^{v(x)} / ∫ e^{v(y)} π(x, dy) ]
has the following properties:

  inf_x w(x) = w∗ > −∞,   lim_{i→∞} inf_{|x|>i} [w(x) − w∗] = ∞;

(M) there exists a σ-finite measure l = l(dx) such that π(x, dy) = p(y|x) l(dy), α-a.s., and for every x, p(y|x) > 0, l-a.s.

Introduce the metric space (S, ρ), where S is the space of probability measures on R^1 and ρ is the Levy-Prokhorov metric.

Theorem (Donsker, Varadhan). Assume (F), (I), (H₁), and (M). Then the family (π_n, n ≥ 1) obeys the LDP in (S, ρ) with the rate function

  J(µ) = sup_{u ∈ N} ∫ log [ e^{u(x)} / ∫ e^{u(y)} π(x, dy) ] µ(dx),   (1.2)

where N is the set of compactly supported continuous functions. The level sets of J(µ), µ ∈ S, are compact in (S, ρ).

2. Here and in the sequel the following notations are used:

  λ⊗γ(dx, dy) = π(x, dy) γ(dx),  γ ∈ S (a measure on R²),

and S_γγ denotes the set of probability measures on R² with both marginals equal to γ. Let λ ∈ S_µµ. Following Donsker and Varadhan, the value

  H(λ | λ⊗µ) = ∫_{R²} log (dλ/dλ⊗µ)(x, y) λ(dx, dy) if λ ≪ λ⊗µ, and H(λ | λ⊗µ) = ∞ otherwise,   (1.3)

is called the conditional entropy of λ with respect to λ⊗µ.

A difficult part of the Donsker-Varadhan proof, concerning the lower bound, is the identity

  J(µ) = inf_{λ ∈ S_µµ} H(λ | λ⊗µ) (=: J₁(µ)).   (1.4)

It should be noted that the inequality J(µ) ≤ J₁(µ) is obvious, while the proof of the opposite one, even in the compact case (see Donsker and Varadhan [2]), is rather complicated. Later, Donsker and Varadhan established the LDP avoiding the identity J(µ) ≥ J₁(µ), using Varadhan's contraction principle [3] and the LDP of occupation measures for the so-called third level [4].
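As a purely illustrative aside (this paragraph and the sketch below are additions, not part of the original argument), the conditional entropy (1.3) and the infimum in (1.4) are easy to compute when the state space is finite, since all measures are then matrices and vectors. In the following Python sketch the two-state transition kernel, the choice of µ, and the grid search are assumptions made only for illustration.

import numpy as np

# Illustrative two-state transition kernel (an assumption; the paper works on R^1).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def cond_entropy(lam, mu, P):
    """Conditional entropy (1.3): H(lam | lam_mu), where lam_mu(x, y) = mu(x) P(x, y)."""
    ref = mu[:, None] * P
    mask = lam > 0
    if np.any(mask & (ref == 0)):
        return np.inf                      # lam is not absolutely continuous w.r.t. lam_mu
    return float(np.sum(lam[mask] * np.log(lam[mask] / ref[mask])))

def J1(mu, P, grid=2001):
    """Numerical version of J_1(mu) = inf over lam in S_mumu of H(lam | lam_mu).
    On {0,1}, pair measures with both marginals mu form a one-parameter family."""
    best = np.inf
    for t in np.linspace(0.0, min(mu), grid):
        lam = np.array([[mu[0] - t, t],
                        [t, mu[1] - t]])
        best = min(best, cond_entropy(lam, mu, P))
    return best

print("J_1 at mu = (0.5, 0.5):      ", J1(np.array([0.5, 0.5]), P))   # strictly positive
print("J_1 at the invariant measure:", J1(np.array([4/7, 3/7]), P))   # approximately zero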
3. The aim of this paper is to obtain the LDP without applying either the identity J(µ) ≥ J₁(µ) or the result for the third level. Our proof is based on a new version of the Dupuis-Ellis large deviation principle for two-dimensional occupation measures. In the present paper, instead of (I) and (M) we introduce the assumptions:

(I′) there exists a probability measure λ′ = λ′(dx, dy) on R², with both marginals equal to some α = α(dx), such that λ′ ∼ λ⊗α (that is, λ′ and λ⊗α are mutually absolutely continuous) and H(λ′ | λ⊗α) < ∞ (it should be noted that (I) implies (I′) with λ′ = λ⊗α);

(M′) the transition probabilities π(x, dy) and π′(x, dy) (where λ′(dx, dy) = π′(x, dy) α(dx)) possess densities with respect to some σ-finite measure l = l(dx), say p(y|x) and p′(y|x), such that for every x

  π(x, dy) = p(y|x) l(dy),  π′(x, dy) = p′(y|x) l(dy),  α-a.s.,

and p′(y|x) > 0, (l⊗α)-a.s.

In this paper, ξ₀ is a random variable distributed according to α₀ = α₀(dx), for which the following condition is assumed:

(H₀) Let v(x) be from (H₁) and u(x) = v(x) + (1/2)[w(x) − w∗]. There exists b > 0 such that

  ∫ e^{b u(x)} α₀(dx) < ∞  and  ∫ p(y|x) α₀(dx) > 0, l-a.s.

We give a new proof of a Donsker-Varadhan type theorem.

Theorem 1.1. Under (F), (I′), (H₁), (M′), and (H₀), the family (π_n, n ≥ 1) obeys the LDP in the metric space (S, ρ) with the rate function

  J₁(µ) = inf_{λ ∈ S_µµ} H(λ | λ⊗µ),  µ ∈ S.

4. We derive the statement of this theorem by using Varadhan's contraction principle and the LDP for the two-dimensional occupation measures π²_n(dx, dy), n ≥ 1, where
π(a 2 B) = I(ξ k A, ξ k B). (.5) k= Followig Varadha [3], the LDP for the family (π 2, ) i the metric space (S 2, ρ 2 ) (S 2 is a space of probabilistic measures o 2 ad ρ 2 is Levy-Prokhorov s metric) is defied as: (0) there exists (rate) fuctio J 2 (λ), λ S 2 values i [0, ] level sets of which are compacts; () for every closed (ope) i the metric ρ 2 set F 2 (G 2 ) from S 2 lim sup lim if log P (π2 F 2 ) if (λ); λ F 2 log P (π2 G 2 ) if (λ). λ G 2 Theorem.2 (comp. Ellis [5] ad Dupui ad Ellis [6], [9]) Uder (F), (I ), (H ), (M ), ad (H 0 ), the family (π 2, ) obeys the LDP i (S 2, ρ 2 ) with the rate fuctio {, margials of λ are differet J 2 (λ) = H(λ λ γ), margials of λ are the same (= γ). 5. Despite Theorem.2 is closed to correspodig results from [5] [6] its proof is esseially differet. It requires the followig otios. Defiitio. ([7]) The family (π 2, ) is said to be expoetially tight i (S 2, ρ 2 ), if there exists a sequece of compacts K 2 j S 2, j such that lim sup log P (π2 S 2 \ K 2 j ) = (.6) Defiitio 2. ([8]) The family (π 2, ) is said to be LD relatively compact i (S 2, ρ 2 ), if ay of ifiite subsequece (π 2 ) of (π2 ) cotais further subsequece (π 2 ñ), which satisfies the LDP i (S 2, ρ 2 ) with some rate fuctio J 2 = J 2 (λ). 4
Definition 3. (comp. [9]) The family (π²_n, n ≥ 1) is said to satisfy the local LDP in (S², ρ²) with local rate function I = I(λ) if for every λ ∈ S²

  lim_{δ→0} lim inf_n (1/n) log P(ρ²(π²_n, λ) ≤ δ) = lim_{δ→0} lim sup_n (1/n) log P(ρ²(π²_n, λ) ≤ δ) = −I(λ).   (1.7)

The exponential tightness of the family (π²_n, n ≥ 1) under (H₁) and (H₀) is established in Theorem 2.1 (Section 2). Theorems 3.1 and 4.1 (Sections 3 and 4 respectively) give the local LDP for this family with I(λ) ≡ J²(λ). The next link in the proof of Theorem 1.2 is Puhalskii's theorem [8] or, more exactly, its straightforward consequence:

  exponential tightness ⟹ LD relative compactness.   (1.8)

Thus the scheme of the proof of Theorem 1.2 is the following. By Theorem 2.1 the family (π²_n, n ≥ 1) is exponentially tight. By (1.8) one can take a subsequence (π²_ñ) obeying the LDP with some rate function J̃²(λ). These facts imply the local LDP for (π²_ñ): for every λ ∈ S²

  lim_{δ→0} lim inf_ñ (1/ñ) log P(ρ²(π²_ñ, λ) ≤ δ) = lim_{δ→0} lim sup_ñ (1/ñ) log P(ρ²(π²_ñ, λ) ≤ δ) = −J̃²(λ).   (1.9)

On the other hand, by virtue of the local LDP for the original sequence with the local rate function I(λ) from (1.7) and the obvious inequalities lim inf_n ≤ lim inf_ñ ≤ lim sup_ñ ≤ lim sup_n, we arrive at the identity J̃²(λ) ≡ J²(λ), which in turn implies that J²(λ) is the rate function (the same method has been used in [10]). The upper and lower bounds from (1) and (2) are also checked by using Puhalskii's theorem (see (1.8)). Other approaches to the LDP for (π_n) and (π²_n) can be found in Acosta [11], Gärtner [12], Orey and Pelikan [13], Veretennikov [14].

6. As was mentioned above, the local LDP (1.7) is provided by Theorems 3.1 and 4.1, which give the upper and lower bounds respectively. The proof of the lower bound requires only conditions (I′), (M′). It uses a change of probability
measures proposed by Donsker and Varadhan in [1], [2] and a regularization method borrowed from the recent paper of Wu [15]. Another approach can be found in Jain [16]. The proof of the upper bound in the local LDP requires condition (F) and the exponential tightness of the family (π²_n, n ≥ 1).

2. Exponential tightness.

Due to Definition 1, (1.6) has to be verified. To this end, we need a few auxiliary results. Letting γ = γ(y), y ≥ 0, be a positive decreasing function such that lim_{y→∞} γ(y) = 0, put

  K²_j = { λ ∈ S² : ∫∫_{(|x|>i)∪(|y|>i)} λ(dx, dy) ≤ γ(i) for all i ≥ j }.   (2.1)

The set K²_j is tight and, by virtue of Prokhorov's theorem (see e.g. [17]), is relatively compact. Since {(x, y) : (|x| > i) ∪ (|y| > i)} is an open set, the limit of every converging sequence from K²_j belongs to K²_j too, i.e. K²_j is compact. Evidently K²_j ⊆ K²_{j+1}. For j ≥ 1 and λ ∈ S², put

  L(j, λ) = min { i ≥ j : ∫∫_{(|x|>i)∪(|y|>i)} λ(dx, dy) > γ(i) }   (2.2)

(with min ∅ = ∞).

Lemma 2.1. The family (π²_n, n ≥ 1) is exponentially tight in (S², ρ²) if

  lim_j lim sup_n (1/n) log P(L(j, π²_n) < ∞) = −∞.   (2.3)

Proof: Taking the compact K²_j defined in (2.1) and noticing that {π²_n ∈ S² \ K²_j} = {L(j, π²_n) < ∞}, we derive (1.6) from (2.3).

Lemma 2.2. For any measurable sets A_n and B_{n,i}, n, i ≥ 1, obeying the property lim_i lim sup_n (1/n) log P(B_{n,i}) = −∞, the following equality holds:

  lim sup_n (1/n) log P(A_n) = lim sup_i lim sup_n (1/n) log P(A_n ∩ (Ω \ B_{n,i})).
Proof: The required statement follows from the obvious inequalities P(A_n) ≥ P(A_n ∩ (Ω \ B_{n,i})) and P(A_n) ≤ 2 [ P(A_n ∩ (Ω \ B_{n,i})) ∨ P(B_{n,i}) ].

Theorem 2.1. Under (H₁) and (H₀) the family (π²_n, n ≥ 1) is exponentially tight in (S², ρ²).

Proof: Put

  γ(y) = ( inf_{|x|>y} [w(x) − w∗] )^{−1/2},   (2.4)

where w(x) is the function from (H₁). We show that (2.3) holds with this γ(y). In fact, by virtue of (H₀) and the Chernoff inequality,

  P(u(ξ₀) > ni) ≤ e^{−nib} ∫ e^{b u(x)} α₀(dx).

Consequently, for B_{n,i} = {u(ξ₀) > ni} we get lim sup_n (1/n) log P(B_{n,i}) ≤ −ib → −∞, i → ∞. Therefore, by virtue of Lemmas 2.1 and 2.2, the exponential tightness of (π²_n, n ≥ 1) takes place provided that

  lim_i lim sup_j lim sup_n (1/n) log P( L(j, π²_n) < ∞, u(ξ₀) ≤ ni ) = −∞.   (2.5)

To verify (2.5), use the chain of inclusions (recall that L(j, π²_n) ≥ j; we write L = L(j, π²_n) for brevity):

  {L(j, π²_n) < ∞} ⊆ { ∫∫_{(|x|>L)∪(|y|>L)} π²_n(dx, dy) > γ(L) }
   ⊆ { ∫∫_{|x|>L} π²_n(dx, dy) + ∫∫_{|y|>L} π²_n(dx, dy) > γ(L) }
   ⊆ { 2 ∫_{|x|>L} π_n(dx) + (1/n) I(|ξ₀| > L) > γ(L) }.   (2.6)

Then (2.5) holds provided that

  lim sup_j lim sup_n (1/n) log P( ∫_{|x|>L} π_n(dx) + (1/(2n)) I(|ξ₀| > L) > (1/2) γ(L), u(ξ₀) ≤ ni ) = −∞.   (2.7)
Put

  Z_n = Π_{k=0}^{n−1} e^{v(ξ_{k+1})} / E( e^{v(ξ_{k+1})} | ξ_k ),

where v(x) is from (H₁). Due to the Markov property, E(e^{v(ξ_{k+1})} | ξ_k) = E(e^{v(ξ_{k+1})} | ξ₀, ..., ξ_k), P-a.s., and so EZ_n = 1, which implies the obvious inequality

  E [ I( ∫_{|x|>L} π_n(dx) + (1/(2n)) I(|ξ₀| > L) > (1/2) γ(L), u(ξ₀) ≤ ni ) Z_n ] ≤ 1.   (2.8)

From the definition of Z_n and of w(x) (see (H₁)) it follows that

  (1/n) log Z_n = (1/n) Σ_{k=1}^{n} [ v(ξ_k) − log ∫ e^{v(y)} π(ξ_{k−1}, dy) ]
   = (1/n) [ v(ξ_n) − v(ξ₀) ] + (1/n) Σ_{k=1}^{n} w(ξ_{k−1})
   = (1/n) [ v(ξ_n) − v(ξ₀) ] + w∗ + ∫ [w(x) − w∗] π_n(dx).

Let us now bound (1/n) log Z_n from below on the set { ∫_{|x|>L} π_n(dx) + (1/(2n)) I(|ξ₀| > L) > (1/2) γ(L), u(ξ₀) ≤ ni }. Since v ≥ 0, w − w∗ ≥ 0 and L ≥ j, we find

  (1/n) log Z_n ≥ −(1/n) v(ξ₀) + w∗ + inf_{|y|>L} [w(y) − w∗] ∫_{|x|>L} π_n(dx)
   ≥ −(1/n) v(ξ₀) + w∗ + inf_{|y|>L} [w(y) − w∗] ( (1/2) γ(L) − (1/(2n)) I(|ξ₀| > L) )
   ≥ −(1/n) v(ξ₀) − (1/(2n)) [w(ξ₀) − w∗] + w∗ + (1/2) γ(L) inf_{|y|>L} [w(y) − w∗]
   = −(1/n) u(ξ₀) + w∗ + (1/2) ( inf_{|y|>L} [w(y) − w∗] )^{1/2}
   ≥ −i + w∗ + (1/2) ( inf_{|y|>j} [w(y) − w∗] )^{1/2}

(in the third inequality we used that on {|ξ₀| > L} one has inf_{|y|>L}[w(y) − w∗] ≤ w(ξ₀) − w∗). Hence, by (2.8),

  (1/n) log P( ∫_{|x|>L} π_n(dx) + (1/(2n)) I(|ξ₀| > L) > (1/2) γ(L), u(ξ₀) ≤ ni )
   ≤ i − w∗ − (1/2) ( inf_{|y|>j} [w(y) − w∗] )^{1/2} → −∞, j → ∞,
that is, (2.7) holds.

Corollary. For every q > 0,

  lim_i lim sup_n (1/n) log P( ∫∫_{(|x|>i)∪(|y|>i)} π²_n(dx, dy) > q ) = −∞.

3. Upper bound for the local LDP.

Theorem 3.1. Assume (F), (H₁), and (H₀). Then for every λ ∈ S²

  lim sup_{δ→0} lim sup_n (1/n) log P(ρ²(π²_n, λ) ≤ δ) ≤ −∞ if the marginals of λ are different,
  lim sup_{δ→0} lim sup_n (1/n) log P(ρ²(π²_n, λ) ≤ δ) ≤ −H(λ | λ⊗µ) if the marginals of λ coincide (= µ),

where H(λ | λ⊗µ) is the conditional entropy defined in (1.3).

The proof of this theorem is based on a sequence of auxiliary results formulated below as lemmas. Let P and Q be probability measures on a measurable space (Ω, F) and let

  U_m(Q) = { u(ω) ≥ 0 : ∫_Ω u(ω) dQ = 1 }

be the corresponding set of nonnegative F-measurable functions. Due to (1.3), the conditional entropy H(P | Q) of the measure P with respect to the measure Q is defined as

  H(P | Q) = ∫_Ω log (dP/dQ)(ω) dP if P ≪ Q, and H(P | Q) = ∞ otherwise.

Put V(x) = x log x − x + 1 for x > 0 and V(0) = 1. Clearly V(x) is a convex nonnegative continuous function and the following formula holds:

  H(P | Q) = ∫_Ω V( (dP/dQ)(ω) ) dQ if P ≪ Q, and H(P | Q) = ∞ otherwise.   (3.1)

For u ∈ U_m(Q), let G(u) = ∫_Ω log u(ω) dP. It is well known (see Donsker and Varadhan [2]) that H(P | Q) = sup_{u ∈ U_m(Q)} G(u). In fact, for P ≪ Q (with h = dP/dQ) and any function u ∈ U_m(Q) we have G(u) = ∫_Ω h log u dQ = ∫_Ω (h log u − u + 1) dQ and, since for x ≥ 0, sup_{y>0} [x log y − y + 1] = V(x), the inequality G(u) ≤ ∫_Ω V(h) dQ = G(h) holds, i.e. sup_{u ∈ U_m(Q)} G(u) is attained at the point h ∈ U_m(Q). If P ≪ Q fails, denote by P^s the
singular part of P with respect to Q. Then one can choose a set Γ such that Q(Γ) = 0 and P^s(Ω \ Γ) = 0. Taking u_N(ω) = 1 + N I_Γ(ω) from U_m(Q), we find

  G(u_N) = ∫_Ω log(1 + N I_Γ(ω)) dP ≥ ∫_Ω log(1 + N I_Γ(ω)) dP^s = P^s(Γ) log(1 + N) → ∞, N → ∞.

Lemma 3.1. Let Ω = R². Then

  H(P | Q) = sup_{u ∈ U_c(Q)} G(u),

where U_c(Q) is the subset of U_m(Q) formed by continuous functions.

Proof: Assume P ≪ Q. Put h(ω) = (dP/dQ)(ω) and

  f_N(ω) = h(ω) I_{(1/N, N]}(h(ω)) + ( 1 − I_{(1/N, N]}(h(ω)) ),  c_N = ∫_Ω f_N dQ,  h_N(ω) = c_N^{−1} f_N(ω).

Evidently h_N ∈ U_m(Q) and also

  G(h_N) = ∫_Ω log h_N(ω) dP = ∫_Ω h(ω) log h_N(ω) dQ
   = −log c_N + ∫_{1/N < h(ω) ≤ N} h(ω) log h(ω) dQ
   = −log c_N + ∫_{1/N < h(ω) ≤ 1} h(ω) log h(ω) dQ + ∫_{1 < h(ω) ≤ N} h(ω) log h(ω) dQ.

Since lim_N c_N = 1, by the Lebesgue dominated convergence theorem we have

  lim_N ∫_{1/N < h(ω) ≤ 1} h(ω) log h(ω) dQ = ∫_{0 < h(ω) ≤ 1} h(ω) log h(ω) dQ,

and by the Beppo Levi theorem

  lim_N ∫_{1 < h(ω) ≤ N} h(ω) log h(ω) dQ = ∫_{h(ω) > 1} h(ω) log h(ω) dQ.

Therefore lim_N G(h_N) = G(h).

Let N be fixed. Choose a sequence of continuous functions u_{N,n}, n ≥ 1, such that 1/(2N) ≤ u_{N,n} ≤ 2N and Q-lim_n u_{N,n} = h_N (and so lim_n ∫_Ω u_{N,n} dQ = 1). Put ũ_{N,n} =
u_{N,n} / ∫_Ω u_{N,n} dQ and note that ũ_{N,n} ∈ U_c(Q). Due to P ≪ Q, P-lim_n ũ_{N,n} = h_N. Then, by the Lebesgue dominated convergence theorem, lim_n G(ũ_{N,n}) = G(h_N). Hence, for any ε > 0 there exist N(ε) and n(ε) such that G(h_{N(ε)}) + ε ≥ G(h) and G(ũ_{N(ε),n(ε)}) + ε ≥ G(h_{N(ε)}). Thus, for any ε > 0 there exists a function u_ε = ũ_{N(ε),n(ε)} ∈ U_c(Q) such that G(u_ε) + 2ε ≥ G(h) = H(P | Q). This means that under P ≪ Q the required result holds.

If P ≪ Q fails, then for fixed N choose a sequence u_{N,n}, n ≥ 1, from U_c(Q) such that 1/2 ≤ u_{N,n} ≤ 2N and P^s-lim_n u_{N,n} = u_N, where u_N(ω) = 1 + N I_Γ(ω). Then

  G(u_{N,n}) = ∫_Ω log u_{N,n} dP = ∫_Ω log[ (u_{N,n} + 1/2) − 1/2 ] dP
   ≥ ∫_Ω log(u_{N,n} + 1/2) dP − log 2 ≥ ∫_Ω log(u_{N,n} + 1/2) dP^s − log 2,

and so lim inf_n G(u_{N,n}) ≥ P^s(Γ) log(3/2 + N) − log 2 → ∞, N → ∞, that is sup_{u ∈ U_c(Q)} G(u) = ∞.

Lemma 3.2. Let λ_{αβ} and λ_{µν} be probability measures from S² with marginals α, β and µ, ν respectively. Then ρ²(λ_{αβ}, λ_{µν}) ≥ max[ ρ(α, µ), ρ(β, ν) ] and for any δ > 0

  {ρ²(λ_{αβ}, λ_{µν}) ≤ δ} ⊆ {ρ(µ, ν) ≤ 2δ + ρ(α, β)}.

Proof: Let F_{αβ} and F_{µν} be the distribution functions corresponding to the measures λ_{αβ} and λ_{µν}. By the definition of the Levy-Prokhorov metric,

  ρ²(λ_{αβ}, λ_{µν}) = sup_{x,y} min { u : F_{αβ}(x − u, y − u) − u ≤ F_{µν}(x, y) ≤ F_{αβ}(x + u, y + u) + u }
   ≥ sup_x min { u : F_{αβ}(x − u, ∞) − u ≤ F_{µν}(x, ∞) ≤ F_{αβ}(x + u, ∞) + u } = ρ(α, µ),

and analogously ρ²(λ_{αβ}, λ_{µν}) ≥ ρ(β, ν). Consequently {ρ²(λ_{αβ}, λ_{µν}) ≤ δ} ⊆ {ρ(α, µ) ≤ δ} ∩ {ρ(β, ν) ≤ δ}. By the triangle inequality, ρ(β, µ) ≤ ρ(α, µ) + ρ(α, β) and ρ(µ, ν) ≤ ρ(µ, β) + ρ(β, ν), and so

  {ρ²(λ_{αβ}, λ_{µν}) ≤ δ} ⊆ {ρ(β, µ) ≤ δ + ρ(α, β)} ∩ {ρ(β, ν) ≤ δ} ⊆ {ρ(µ, ν) ≤ 2δ + ρ(α, β)}.
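The variational formula behind Lemma 3.1, H(P | Q) = sup G(u) over u ≥ 0 with ∫ u dQ = 1, attained at u = dP/dQ, can be checked numerically on a finite space. The sketch below is an added illustration under the assumption of a five-point space with P ≪ Q; Lemma 3.1 itself concerns Ω = R² and continuous u.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative probability vectors on a five-point space with P << Q.
Q = np.array([0.10, 0.20, 0.30, 0.25, 0.15])
P = np.array([0.05, 0.30, 0.20, 0.35, 0.10])

H = float(np.sum(P * np.log(P / Q)))          # H(P|Q) = int log(dP/dQ) dP

def G(u):
    """G(u) = int log(u) dP for u >= 0 with int u dQ = 1 (the class U_m(Q))."""
    return float(np.sum(P * np.log(u)))

print("H(P|Q)   =", H)
print("G(dP/dQ) =", G(P / Q))                 # the supremum is attained at h = dP/dQ

# Random feasible u never exceed the entropy.
best = -np.inf
for _ in range(10_000):
    u = rng.random(5)
    u /= np.sum(u * Q)                        # enforce int u dQ = 1
    best = max(best, G(u))
print("max G(u) over random feasible u:", best, "<= H(P|Q)")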
Let ν, ν 2 be probability measures from S 2 ad F, F 2 their distributio fuctios respectively. Put a = ρ 2 (ν, ν 2 ) ad x = (x, x 2 ). Lemma 3.3. For compactly supported ad cotiuously differetiable (by 2 x x 2 ) fuctio f = f(x) f(x)[ν (dx) ν 2 (dx)] 2 2 2 a f(x) dx + f(x) [F (x + a) F (x a)]dx, 2 x x 2 2 x x 2 where F (x) is ay of F (x), F 2 (x). Proof: Itegratig by parts, we obtai 2 f(x)df i(x) = 2 2 x x 2 f(x)f i (x)dx. Thereby f(x)[ν (dx) ν 2 (dx)] 2 2 2 x x 2 f(x) [F (x) F 2 (x)]dx. Due to the property of Levy-Prokhorov s metric, we have F (x a) a F 2 (x) ad F 2 (x) F (x+a)+a, that is the required result holds with F = F : F (x) F 2 (x) a + F (x + a) F (x a). For F = F 2, the proof is similar. Corollary. If f = f(x) is compactly supported ad cotiuous oly, the, approximatig it sup x f(x) f ε (x) ε/2, where for each ε fuctio f ε (x) satisfies assumptios of the lemma, we get f(x)[ν (dx) ν 2 (dx)] 2 2 ε + a f ε (x) dx + 2 x x 2 2 2 x x 2 f ε (x) [F (x + a) F (x a)]dx The ext lemma plays substatial role i provig Theorem 3.. Lemma 3.4. Assume (F), (H ), ad (H 0 ). The, for every λ from S µµ, lim lim sup lim sup lim sup q 0 i δ 0 log P (ρ2 (π, 2 λ) δ, H(λ λ µ). ( x >i) ( y >i) π 2 (dx, dy) q) 2
Proof: Put A_n(i, q) = { ρ²(π²_n, λ) ≤ δ, ∫∫_{(|x|>i)∪(|y|>i)} π²_n(dx, dy) ≤ q }. Taking a compactly supported continuous function v(x, y), put

  u(x, y) = e^{v(x,y)} / ∫ e^{v(x,z)} π(x, dz).

By virtue of (F), u(x, y) is a continuous bounded function such that ∫_{R²} u(x, y) λ⊗µ(dx, dy) = 1, which means that u(x, y) ∈ U_c(λ⊗µ). Let Z_n = Π_{k=1}^{n} u(ξ_{k−1}, ξ_k). By the Markov property, E(u(ξ_{k−1}, ξ_k) | ξ_{k−1}) = E(u(ξ_{k−1}, ξ_k) | ξ_{k−1}, ..., ξ₀), P-a.s., which implies EZ_n = 1 and in turn the obvious inequality

  E I_{A_n(i,q)} Z_n ≤ 1.   (3.2)

Let us now bound from below, on the set A_n(i, q), the value of (1/n) log Z_n:

  (1/n) log Z_n = (1/n) Σ_{k=1}^{n} log u(ξ_{k−1}, ξ_k) = ∫_{R²} log u(x, y) π²_n(dx, dy)
   = ∫_{R²} log u(x, y) λ(dx, dy) − ∫_{R²} log u(x, y) [λ(dx, dy) − π²_n(dx, dy)].

Choosing a nonnegative continuous function φ_i(x, y) such that φ_i(x, y) = 1 on {|x| ≤ i} ∩ {|y| ≤ i} and φ_i(x, y) = 0 on {|x| ≥ i + 1} ∪ {|y| ≥ i + 1}, we get

  (1/n) log Z_n ≥ ∫_{R²} log u(x, y) λ(dx, dy) − | ∫_{R²} φ_i(x, y) log u(x, y) [λ(dx, dy) − π²_n(dx, dy)] |
   − sup_{x,y} |log u(x, y)| [ ∫∫_{(|x|>i)∪(|y|>i)} λ(dx, dy) + ∫∫_{(|x|>i)∪(|y|>i)} π²_n(dx, dy) ]
   = ∫_{R²} log u(x, y) λ(dx, dy) − r_{1,n}(i) − r₂(i) − r_{3,n}(i).

Denote by F(x, y) the distribution function corresponding to λ(dx, dy). By the corollary to Lemma 3.3, the following estimates hold on A_n(i, q):

  r_{1,n}(i) ≤ ε + δ ∫_{R²} |∂²f_ε(x, y)/∂x∂y| dx dy + ∫_{R²} |∂²f_ε(x, y)/∂x∂y| [ F(x + δ, y + δ) − F(x − δ, y − δ) ] dx dy (=: ε + q₁(i, ε, δ)),

where f_ε(x, y) is an ε/2-approximation of φ_i(x, y) log u(x, y);
  r₂(i) = sup_{x,y} |log u(x, y)| ∫∫_{(|x|>i)∪(|y|>i)} λ(dx, dy) (=: q₂(i));
  r_{3,n}(i) ≤ sup_{x,y} |log u(x, y)| q.

These estimates and (3.2) imply the inequality

  (1/n) log P(A_n(i, q)) ≤ − ∫_{R²} log u(x, y) λ(dx, dy) + ε + q₁(i, ε, δ) + q₂(i) + q sup_{x,y} |log u(x, y)|,   (3.3)

and consequently

  lim_{q→0} lim sup_i lim sup_{δ→0} lim sup_n (1/n) log P(A_n(i, q)) ≤ ε − ∫_{R²} log u(x, y) λ(dx, dy).

Thus, by virtue of sup_{u ∈ U_c(λ⊗µ)} ∫_{R²} log u(x, y) λ(dx, dy) = H(λ | λ⊗µ) (Lemma 3.1) and the arbitrariness of ε, the required result holds.

The proof of Theorem 3.1: Assume λ from S² has different marginals, say µ and ν. Denote by π̃_n and π_n the marginals of π²_n:

  π̃_n(A) = (1/n) Σ_{k=1}^{n} I(ξ_{k−1} ∈ A),  π_n(A) = (1/n) Σ_{k=1}^{n} I(ξ_k ∈ A).

Since the total variation distance satisfies ||π̃_n − π_n|| ≤ 2/n, we get ρ(π̃_n, π_n) ≤ 2/n. By Lemma 3.2, {ρ²(π²_n, λ) ≤ δ} ⊆ {ρ(µ, ν) ≤ 2δ + 2/n} and so, for 2δ + 2/n < ρ(µ, ν), (1/n) log P(ρ²(π²_n, λ) ≤ δ) = −∞.

Assume now that λ from S² has coinciding marginals (= µ). Put

  A_n(δ) = {ρ²(π²_n, λ) ≤ δ},  B_n(i, q) = { ∫∫_{(|x|>i)∪(|y|>i)} π²_n(dx, dy) > q }.

The required result follows from Lemma 3.4, the corollary to Theorem 2.1, and Lemma 2.2, which, under lim_i lim sup_n (1/n) log P(B_n(i, q)) = −∞ and A_n(δ) ∩ (Ω \ B_n(i, q)) = A_n(i, q),
can be reformulated as

  lim sup_{δ→0} lim sup_n (1/n) log P(A_n(δ)) = lim_{q→0} lim sup_i lim sup_{δ→0} lim sup_n (1/n) log P( A_n(δ) ∩ (Ω \ B_n(i, q)) ).

4. Lower bound for the local LDP.

Theorem 4.1. Assume (I′) and (M′). Then for every λ from S² and δ > 0

  lim inf_n (1/n) log P(ρ²(π²_n, λ) ≤ δ) ≥ −J²(λ).

The proof of this theorem consists of a few steps formulated below as lemmas.

Lemma 4.1. Assume (I′) and (M′). Then α ∼ l.

Proof: By virtue of (I′) and (M′) we get

  α(dy) = l(dy) ∫ p′(y|x) α(dx),   (4.1)

that is α ≪ l. Note also that (dα/dl)(y) = ∫ p′(y|x) α(dx) > 0, l-a.s., which implies, by the Lebesgue decomposition of l with respect to α, that l ≪ α.

Lemma 4.2. Assume (I′) and (M′). Then

  p(y|x) = (dλ⊗α/dλ′)(x, y) p′(y|x), (l⊗α)-a.s.,  and  p(y|x) > 0, (l⊗α)-a.s.

Proof: By virtue of (I′), λ⊗α ≪ λ′ and so

  π(x, dy) α(dx) = (dλ⊗α/dλ′)(x, y) p′(y|x) l(dy) α(dx).

The first statement holds since for every x we have π(x, dy) = p(y|x) l(dy). To prove the second, we show that (dλ⊗α/dλ′)(x, y) > 0, (l⊗α)-a.s. To this end, use

  (dλ⊗α/dλ′)(x, y) > 0, λ′-a.s.,
  λ′(dx, dy) = p′(y|x) l(dy) α(dx) with p′(y|x) > 0, (l⊗α)-a.s.

Thereby λ′(A) = 0 implies (l⊗α)(A) = 0, that is l⊗α ≪ λ′, so that λ′-a.s. properties hold (l⊗α)-a.s., and the required inequality holds.
Lemma 4.3. Assume (I′) and (M′). If H(λ | λ⊗µ) < ∞, λ ∈ S_µµ, then µ ≪ l.

Proof: Due to (1.3), λ ≪ λ⊗µ is implied by H(λ | λ⊗µ) < ∞. In turn, (M′) gives as a consequence λ⊗µ(dx, dy) = p(y|x) l(dy) µ(dx). Let l(A) = 0. Then λ⊗µ(R¹ × A) = ∫∫_{R¹×A} p(y|x) l(dy) µ(dx) = 0. Consequently µ(A) = λ(R¹ × A) = 0.

Lemma 4.4. Let η = (η_k)_{k≥0} be a stationary Markov process valued in R¹, with marginal measure µ(dx) and transition probability π_µ(x, dy). Assume

  π_µ(x, dy) = p_µ(y|x) l(dy), µ-a.s.,  and  p_µ(y|x) > 0, (l⊗µ)-a.s.

Then η is a µ-ergodic process.

Proof: By Theorem 4 (see Ch. 4 in [18]), the µ-ergodicity of η holds provided that the equation

  f(x) = ∫ f(y) π_µ(x, dy), µ-a.s.,   (4.2)

has a unique (up to a multiplicative constant) bounded solution, µ-a.s. Let f(x) be some bounded solution of (4.2). We show that

  µ(x : sign f(x) = const) = 1.   (4.3)

Since µ is an invariant measure, we have

  µ(dy) = l(dy) ∫ p_µ(y|x) µ(dx).   (4.4)

Now (4.2) and (4.4) imply

  ∫ |f(x)| µ(dx) = ∫ | ∫ f(y) p_µ(y|x) l(dy) | µ(dx) ≤ ∫∫ |f(y)| p_µ(y|x) l(dy) µ(dx) = ∫ |f(y)| µ(dy),
and therefore

  ∫ [ ∫ |f(y)| p_µ(y|x) l(dy) − | ∫ f(y) p_µ(y|x) l(dy) | ] µ(dx) = 0.

The latter means that

  ( ∫_{f₊(y)>0} f₊(y) p_µ(y|x) l(dy) ) ∧ ( ∫_{f₋(y)>0} f₋(y) p_µ(y|x) l(dy) ) = 0, µ-a.s.,   (4.5)

where a₊ = max(0, a) and a₋ = −min(a, 0). By the argument of Lemma 4.1, l ∼ µ. Thereby a violation of (4.3) would contradict (4.5): if both µ(f > 0) > 0 and µ(f < 0) > 0, then, since p_µ(y|x) > 0 (l⊗µ)-a.s., both integrals in (4.5) would be positive for µ-almost all x. Thus (4.3) holds.

If f₁(x) and f₂(x) are solutions of (4.2), then for any constants c₁ and c₂ the function f(x) = c₁ f₁(x) + c₂ f₂(x) is a solution of (4.2) too. Hence, if (4.2) had a bounded solution not µ-a.s. proportional to the constant one, the constants c₁ and c₂ could be chosen so as to violate (4.3). Thus (4.2) has only the desired solution.

Lemma 4.5. Assume (I′) and (M′). For every λ from S_µµ with H(λ | λ⊗µ) < ∞ there exist families λ^ε and µ(ε), ε ∈ (0, 1), such that λ^ε ∈ S_{µ(ε)µ(ε)} and

  λ^ε ∼ λ⊗µ(ε),   (4.6)
  H(λ^ε | λ⊗µ(ε)) < ∞,   (4.7)
  lim_{ε→0} ρ²(λ^ε, λ) = 0,   (4.8)
  lim_{ε→0} H(λ^ε | λ⊗µ(ε)) = H(λ | λ⊗µ).   (4.9)

Proof: Let λ′ be the measure involved in (I′), with both marginals equal to α. Put

  µ(ε) = (1 − ε) µ + ε α,  λ^ε = (1 − ε) λ + ε λ′,  λ⊗µ(ε) = (1 − ε) λ⊗µ + ε λ⊗α,

and note that λ^ε has both marginals equal to (1 − ε) µ + ε α = µ(ε).

Proof of (4.6): Let λ⊗µ(ε)(A) = 0. Then λ⊗µ(A) = 0 and λ⊗α(A) = 0. Since H(λ | λ⊗µ) < ∞, we get λ ≪ λ⊗µ and so λ(A) = 0. By virtue of (I′), λ′ ≪ λ⊗α, which implies λ′(A) = 0. Consequently λ^ε(A) = 0. On the other hand, if λ^ε(A) = 0, then λ(A) = 0 and λ′(A) = 0, while the equivalence of λ′ and λ⊗α has as a consequence λ⊗α(A) = 0.
Therefore λ⊗µ(ε)(A) = 0 holds provided that λ⊗µ(A) = 0. The latter follows from λ⊗α(A) = 0 by virtue of λ⊗α(dx, dy) = p(y|x) l(dy) α(dx), λ⊗µ(dx, dy) = p(y|x) l(dy) µ(dx) and µ ≪ l ∼ α (Lemmas 4.3 and 4.1), which give λ⊗µ ≪ λ⊗α. Thus λ^ε ∼ λ⊗µ(ε).

Proof of (4.7): Evidently µ ≪ µ(ε) and α ≪ µ(ε). Denote

  h(x, y) = (dλ/dλ⊗µ)(x, y),  h_α(x, y) = (dλ′/dλ⊗α)(x, y),
  g_µ(x) = (1 − ε) (dµ/dµ(ε))(x),  g_α(x) = ε (dα/dµ(ε))(x).

Then

  (dλ^ε/dλ⊗µ(ε))(x, y) = h(x, y) g_µ(x) + h_α(x, y) g_α(x) (=: h^ε(x, y)).

Due to the definition (3.1) of the conditional entropy we obtain

  H(λ^ε | λ⊗µ(ε)) = ∫_{R²} V(h^ε(x, y)) λ⊗µ(ε)(dx, dy).   (4.10)

The functions g_µ(x) and g_α(x) satisfy g_µ(x) + g_α(x) = 1, µ(ε)-a.s. (and λ⊗µ(ε)-a.s.). Since V(x) is a convex function, Jensen's inequality gives

  V(h^ε(x, y)) ≤ g_µ(x) V(h(x, y)) + g_α(x) V(h_α(x, y)), λ⊗µ(ε)-a.s.   (4.11)

This and (3.1) imply

  H(λ^ε | λ⊗µ(ε)) ≤ (1 − ε) ∫_{R²} V(h(x, y)) (dµ/dµ(ε))(x) λ⊗µ(ε)(dx, dy) + ε ∫_{R²} V(h_α(x, y)) (dα/dµ(ε))(x) λ⊗µ(ε)(dx, dy)
   = (1 − ε) ∫_{R²} V(h(x, y)) λ⊗µ(dx, dy) + ε ∫_{R²} V(h_α(x, y)) λ⊗α(dx, dy)
   = (1 − ε) H(λ | λ⊗µ) + ε H(λ′ | λ⊗α) < ∞.   (4.12)

Proof of (4.8): It is clear that the total variation distance satisfies ||λ^ε − λ|| ≤ 2ε, i.e. ρ²(λ^ε, λ) → 0 as ε → 0.
Proof of (4.9): Due to (4.12), lim sup_{ε→0} H(λ^ε | λ⊗µ(ε)) ≤ H(λ | λ⊗µ). The opposite inequality lim inf_{ε→0} H(λ^ε | λ⊗µ(ε)) ≥ H(λ | λ⊗µ) is derived from (4.10) by Fatou's lemma.

The proof of Theorem 4.1: It is clear that only the case J²(λ) < ∞ has to be checked. We use the fact that in this case J²(λ) = H(λ | λ⊗µ), λ ∈ S_µµ, and λ ≪ λ⊗µ. Presuppose first that

  λ ∼ λ⊗µ  and  α ≪ α₀,   (4.13)

and denote

  h(x, y) = (dλ/dλ⊗µ)(x, y).   (4.14)

Let us define, on the measurable space (R^∞, B(R^∞)) (R^∞ = {(x₀, x₁, ...)}, B(R^∞) is the Borel σ-algebra), probability measures Q and Q_µ, where Q is the distribution of the original Markov process ξ and Q_µ corresponds to a stationary Markov process with marginal distribution µ and transition probability

  π_µ(x, dy) = h(x, y) π(x, dy) (= h(x, y) p(y|x) l(dy)).   (4.15)

Due to (4.13) and Lemmas 4.1 and 4.3, we have µ ≪ α₀ and p_µ(y|x) = h(x, y) p(y|x) > 0, (l⊗µ)-a.s. Thereby, applying Lemma 4.4, one can conclude that the Markov process (x_k, Q_µ)_{k≥0} is µ-ergodic. Denote by Q^n and Q_µ^n the restrictions of Q and Q_µ to the σ-algebra B(R^{n+1}). Since µ ≪ α₀, we get Q_µ^n ≪ Q^n, n ≥ 0, and the process of local densities Z_n(x₀^n) = dQ_µ^n/dQ^n, n = 0, 1, ..., is given by the formula

  Z_n(x₀^n) = (dµ/dα₀)(x₀) exp( Σ_{k=1}^{n} log h(x_{k−1}, x_k) ).   (4.16)

The Lebesgue decomposition of Q^n with respect to Q_µ^n implies the inequality: for each Λ from B(R^{n+1}),
  Q(Λ) ≥ ∫_Λ ( Z_n(x₀^n) )^{−1} dQ_µ.   (4.17)

Introduce now the occupation measures π²_n(x₀^n):

  π²_n(A × B)(x₀^n) = (1/n) Σ_{k=1}^{n} I(x_{k−1} ∈ A, x_k ∈ B),

and define the sets

  Λ¹_n = { ρ²(π²_n(x₀^n), λ) ≤ δ },
  Λ²_n = { | (1/n) Σ_{k=1}^{n} log h(x_{k−1}, x_k) − H(λ | λ⊗µ) | ≤ β },
  Λ_c = { (dµ/dα₀)(x₀) < c }.   (4.18)

Put Λ_{n,c} = Λ¹_n ∩ Λ²_n ∩ Λ_c. Then, taking into account (4.17), we find

  P(ρ²(π²_n, λ) ≤ δ) = Q(ρ²(π²_n(x₀^n), λ) ≤ δ) ≥ ∫_{Λ_{n,c}} ( Z_n(x₀^n) )^{−1} dQ_µ ≥ c^{−1} Q_µ(Λ_{n,c}) exp( −n [H(λ | λ⊗µ) + β] ).

Hence

  lim inf_n (1/n) log P(ρ²(π²_n, λ) ≤ δ) ≥ −H(λ | λ⊗µ) − β + lim inf_n (1/n) log Q_µ(Λ_{n,c}).

Assume that

  lim_{c→∞} µ( x₀ : (dµ/dα₀)(x₀) < c ) = lim_n Q_µ(Λ¹_n) = lim_n Q_µ(Λ²_n) = 1.   (4.19)

Then lim inf_n (1/n) log Q_µ(Λ_{n,c}) = 0 for c large enough and, due to the arbitrariness of β, the required lower bound holds. The first part of (4.19) takes place since, as was mentioned above, µ ≪ α₀. The second part is implied by the ergodicity of (x_k, Q_µ)_{k≥0}: its stationary two-dimensional distribution is λ and so, due to the Birkhoff-Khinchin theorem, lim_n ρ²(π²_n(x₀^n), λ) = 0, Q_µ-a.s. By the assumption ∞ > H(λ | λ⊗µ) = ∫_{R²} log h(x, y) λ(dx, dy) and so, due to the Birkhoff-Khinchin theorem, the last part of (4.19) holds too. Thus, under (4.13), the lower bound holds.
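The mechanism of this change-of-measure argument can be seen numerically on a finite state space. The sketch below is an added illustration only: the two-state kernel, the target pair measure λ, and the horizon are assumptions, and the finite setting replaces the paper's R¹. It builds the tilted kernel (4.15), simulates under Q_µ, and checks that the pair occupation measure approaches λ while the ergodic average of log h approaches H(λ | λ⊗µ), the exponent in the lower bound.

import numpy as np

rng = np.random.default_rng(2)

# Illustrative original kernel and a target pair measure lam with equal marginals mu.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
lam = np.array([[0.30, 0.20],
                [0.20, 0.30]])
mu = lam.sum(axis=1)                          # = lam.sum(axis=0) = (0.5, 0.5)

# h(x, y) = d lam / d lam_mu with lam_mu(x, y) = mu(x) P(x, y); tilted kernel (4.15).
h = lam / (mu[:, None] * P)
P_mu = h * P                                  # P_mu(x, y) = lam(x, y) / mu(x), a stochastic matrix
H = float(np.sum(lam * np.log(h)))            # H(lam | lam_mu)

# Simulate the stationary chain under Q_mu and average log h along the path (cf. (4.16) and Lambda^2_n).
n = 200_000
x = rng.choice(2, p=mu)
log_h_sum, pair_counts = 0.0, np.zeros((2, 2))
for _ in range(n):
    y = rng.choice(2, p=P_mu[x])
    log_h_sum += np.log(h[x, y])
    pair_counts[x, y] += 1
    x = y

print("H(lam | lam_mu) =", H)
print("(1/n) sum log h =", log_h_sum / n)     # -> H by the Birkhoff-Khinchin theorem
print("empirical pair measure:")
print(pair_counts / n)                        # -> lam
# Heuristically, P(pi2_n is near lam) behaves like exp(-n * H), which is the lower bound above.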
Now let only the second condition (α ≪ α₀) of (4.13) be assumed, and let λ^ε and µ(ε) be the measures from Lemma 4.5. For these measures the first part of (4.13) is valid. Let us show that µ(ε) ≪ α₀. By virtue of Lemmas 4.1 and 4.3, µ ≪ α, which implies µ(ε) ≪ α. The latter, due to the second part of (4.13), implies the required absolute continuity. By what has been proved above,

  lim inf_n (1/n) log P(ρ²(π²_n, λ^ε) ≤ δ) ≥ −H(λ^ε | λ⊗µ(ε)).

According to Lemma 4.5, one can choose ε₀ such that for fixed δ and every ε ≤ ε₀ the inequality ρ²(λ, λ^ε) ≤ δ/2 takes place. Then, taking into account Lemma 4.5, we arrive at

  lim inf_n (1/n) log P(ρ²(π²_n, λ) ≤ δ) ≥ lim inf_n (1/n) log P(ρ²(π²_n, λ^ε) ≤ δ/2) ≥ −H(λ^ε | λ⊗µ(ε)) → −H(λ | λ⊗µ), ε → 0,

that is, the required inequality takes place provided that the second part of (4.13) holds. To relinquish it, introduce, following Donsker and Varadhan [1], a new Markov process ξ* = (ξ*_k)_{k≥0} with ξ*_k = ξ_{k+1}. It has the same transition probability π(x, dy) and the initial distribution

  α*₀(dy) = ∫ π(x, dy) α₀(dx) = l(dy) ∫ p(y|x) α₀(dx).

Hence, and by virtue of (H₀), it follows that α*₀ ∼ l and so, by Lemma 4.1, α ≪ α*₀. Therefore, by the previous proof, the lower bound holds for π²_n(ξ*), where

  π²_n(A × B)(ξ*) = (1/n) Σ_{k=1}^{n} I(ξ*_{k−1} ∈ A, ξ*_k ∈ B) = (1/n) Σ_{k=1}^{n} I(ξ_k ∈ A, ξ_{k+1} ∈ B),

that is,

  lim inf_n (1/n) log P(ρ²(π²_n(ξ*), λ) ≤ δ) ≥ −H(λ | λ⊗µ).   (4.20)

The required result now follows from (4.20) and the fact that the total variation distance satisfies ||π²_n − π²_n(ξ*)|| ≤ 2/n.

5. Proof of Theorem 1.2. By Theorem 2.1 the family (π²_n) is exponentially tight. Then by (1.8) (Puhalskii's theorem) there exists a subsequence (π²_ñ) obeying the LDP with a rate function J̃²(λ), and so for δ > 0 the upper and lower bounds hold:
  lim inf_ñ (1/ñ) log P(ρ²(π²_ñ, λ) ≤ δ) ≥ lim inf_ñ (1/ñ) log P(ρ²(π²_ñ, λ) < δ) ≥ − inf_{ν : ρ²(ν,λ)<δ} J̃²(ν) ≥ −J̃²(λ)   (5.1)

and

  lim sup_ñ (1/ñ) log P(ρ²(π²_ñ, λ) ≤ δ) ≤ − inf_{ν : ρ²(ν,λ)≤δ} J̃²(ν).   (5.2)

From the definition of the infimum it follows that for every ε > 0 and δ > 0 there exists a measure ν^δ_ε such that ρ²(ν^δ_ε, λ) ≤ δ and

  inf_{ν : ρ²(ν,λ)≤δ} J̃²(ν) ≥ J̃²(ν^δ_ε) − ε.   (5.3)

Since J̃²(λ) is a rate function (see condition (0) from the definition of the LDP), it is lower semicontinuous, which implies lim inf_{δ→0} J̃²(ν^δ_ε) ≥ J̃²(λ). So, according to (5.2) and (5.3),

  lim sup_{δ→0} lim sup_ñ (1/ñ) log P(ρ²(π²_ñ, λ) ≤ δ) ≤ lim sup_{δ→0} [ ε − J̃²(ν^δ_ε) ] = ε − lim inf_{δ→0} J̃²(ν^δ_ε) ≤ ε − J̃²(λ) → −J̃²(λ), ε → 0.   (5.4)

Thus, (5.1) and (5.4) have as a consequence the local LDP (1.9) for (π²_ñ). As was mentioned in the introduction, the identity J̃²(λ) ≡ J²(λ) then takes place. Using this fact, one can establish the LDP upper and lower bounds for (π²_n). For a closed set F² ⊆ S², choose a subsequence (π²_{n'}) such that

  lim sup_n (1/n) log P(π²_n ∈ F²) = lim_{n'} (1/n') log P(π²_{n'} ∈ F²).

By (1.8), one can choose a further subsequence (π²_ñ) of (π²_{n'}) such that (π²_ñ) obeys the LDP with rate function J̃²(λ) ≡ J²(λ). Then
  lim sup_n (1/n) log P(π²_n ∈ F²) = lim_{n'} (1/n') log P(π²_{n'} ∈ F²) = lim_ñ (1/ñ) log P(π²_ñ ∈ F²) ≤ − inf_{λ ∈ F²} J²(λ).

The lower bound is proved in the same way.

6. Proof of Theorem 1.1. Let λ₁, λ₂ be from S² with marginals (µ₁, ν₁) and (µ₂, ν₂) respectively. By Lemma 3.2, ρ²(λ₁, λ₂) ≥ max[ ρ(µ₁, µ₂), ρ(ν₁, ν₂) ]. Therefore the mapping λ₁ ↦ (µ₁, ν₁) is uniformly continuous in the Levy-Prokhorov metric. Hence, by the contraction principle of Varadhan [3], the family (π_n, n ≥ 1) obeys the LDP in (S, ρ) with the rate function J₁(µ) = inf J²(λ), where the infimum is taken over all measures λ from S² having µ as a marginal. On the other hand, since J²(λ) = ∞ for any λ with different marginals, we arrive at J₁(µ) = inf_{λ ∈ S_µµ} H(λ | λ⊗µ).

7. Example. Let ξ = (ξ_k)_{k≥0} be defined by the recursion ξ_{k+1} = f(ξ_k) + g(ξ_k) ζ_{k+1}, where (ζ_k)_{k≥1} is an i.i.d. sequence of random variables independent of ξ₀. Assume:

1. f(x) and g(x) are continuous functions such that |f(x)| ≤ a|x| with a < 1 and 1/r ≤ |g(x)| ≤ r for some r > 0;

2. the distribution of ζ₁ has a density p_ζ(y) with respect to Lebesgue measure and, moreover, p_ζ(y) is a continuous function such that

  r₁ exp(−r₂ |y|^q) ≤ p_ζ(y) ≤ r₃ exp(−r₄ |y|),  r_i > 0, i = 1, ..., 4, q ≥ 1;

3. there exists b > 0 such that E e^{b|ξ₀|} < ∞.

We show that, under the above conditions, both LDPs, in (S², ρ²) and in (S, ρ), take place. Due to Theorems 1.1 and 1.2, only the following implication has to be checked:

  {1., 2., 3.} ⟹ {(F), (I′), (H₁), (M′), (H₀)}.
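Before this implication is verified below, a minimal simulation sketch of the example may be helpful. It is an added illustration, and the concrete choices f(x) = 0.5 x cos x (so a = 0.5), g(x) = 1 + 0.5 cos x (so 1/2 ≤ g(x) ≤ 3/2 and r = 2 works), standard Gaussian noise (which satisfies 2. with q = 2), and a bounded ξ₀ (which satisfies 3.) are assumptions made only here.

import numpy as np

rng = np.random.default_rng(3)

f = lambda x: 0.5 * x * np.cos(x)       # |f(x)| <= 0.5 |x|: assumption 1 with a = 0.5
g = lambda x: 1.0 + 0.5 * np.cos(x)     # 0.5 <= g(x) <= 1.5: assumption 1 with r = 2

n = 200_000
xi = np.empty(n + 1)
xi[0] = 0.3                             # bounded initial condition: assumption 3
for k in range(n):
    xi[k + 1] = f(xi[k]) + g(xi[k]) * rng.standard_normal()

# Occupation measure pi_n evaluated on a coarse partition of [-6, 6]; by Theorem 1.1
# the family (pi_n) obeys the LDP, so pi_n concentrates exponentially fast.
bins = np.linspace(-6.0, 6.0, 13)
hist, _ = np.histogram(xi[1:], bins=bins)
for left, right, mass in zip(bins[:-1], bins[1:], hist / n):
    print(f"pi_n(({left:+.1f}, {right:+.1f}]) = {mass:.4f}")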
By 2., for every x the transition probability π(x, dy) has a density with respect to Lebesgue measure:

  p(y|x) = p_ζ( (y − f(x)) / g(x) ) / |g(x)|.

Thereby {1., 2.} ⟹ {(F)}. As λ′, one can take the two-dimensional distribution of the stationary Gaussian Markov process (ξ′_k)_{k≥0} defined by the linear recursion ξ′_{k+1} = a ξ′_k + ζ′_{k+1}, where (ζ′_k)_{k≥1} is an i.i.d. sequence of (0, 1)-Gaussian random variables, independent of ξ′₀, which is a (0, 1/(1 − a²))-Gaussian random variable. Then

  (dα/dx)(x) = sqrt( (1 − a²)/(2π) ) exp( −x² (1 − a²)/2 )

and

  λ′(dx, dy) = (1/sqrt(2π)) exp( −(y − ax)²/2 ) dy α(dx).

On the other hand, λ⊗α(dx, dy) = p_ζ( (y − f(x))/g(x) ) |g(x)|^{−1} dy α(dx), and so λ′ ≪ λ⊗α with

  (dλ′/dλ⊗α)(x, y) = (1/sqrt(2π)) exp( −(y − ax)²/2 ) |g(x)| / p_ζ( (y − f(x))/g(x) ).

Then, according to 1. and 2.,

  (dλ′/dλ⊗α)(x, y) ≤ (r/r₁) exp( r₂ | (y − f(x))/g(x) |^q ) ≤ (r/r₁) exp( r₂ r^q [ |y| + a|x| ]^q ),

which has as a consequence

  H(λ′ | λ⊗α) ≤ log(r/r₁) + r₂ r^q ∫_{R²} [ |y| + a|x| ]^q λ′(dx, dy) < ∞,

i.e. {1., 2.} ⟹ {(I′)}. To verify (H₁), take v(x) = c|x| with c ∈ (a r₄/r, r₄/r) and find

  w(x) = c|x| − log ∫ exp(c|y|) p_ζ( (y − f(x))/g(x) ) |g(x)|^{−1} dy.

Hence, and due to 1. and 2., we arrive at the inequality

  w(x) ≥ c|x| − log(r₃ r) − log ∫ exp( c|y| − (r₄/r) |y − f(x)| ) dy
   ≥ (c − a r₄/r) |x| − log(r₃ r) − log ∫ exp( −[ (r₄/r) − c ] |y| ) dy → ∞, |x| → ∞,

i.e. {1., 2.} ⟹ {(H₁)}; what is more, {2.} ⟹ {(M′)} and {2., 3.} ⟹ {(H₀)}.

References

[1] M.D. Donsker and S.R.S. Varadhan, Asymptotic evaluation of certain Markov process expectations for large time, III. Communications on Pure and Applied Mathematics, XXIX, 1976, pp. 389-461.

[2] M.D. Donsker and S.R.S. Varadhan, Asymptotic evaluation of certain Markov process expectations for large time, I. Communications on Pure and Applied Mathematics, XXVIII, 1975, pp. 1-47.
[3] S.R.S. Varadhan, Large Deviations and Applications. SIAM, Philadelphia, 1984.

[4] M.D. Donsker and S.R.S. Varadhan, Asymptotic evaluation of certain Markov process expectations for large time, IV. Communications on Pure and Applied Mathematics, XXXVI, 1983, pp. 183-212.

[5] R.S. Ellis, Large deviations for the empirical measure of a Markov chain with an application to the multivariate empirical measure. Annals of Probability, 16, 1988, pp. 1496-1508.

[6] P. Dupuis and R.S. Ellis, A stochastic optimal control approach to the theory of large deviations. Preprint, 1992.

[7] J.D. Deuschel and D.W. Stroock, Large Deviations. Academic Press, New York, 1989.

[8] A.A. Puhalskii, On functional principle of large deviations. In: New Trends in Probability and Statistics, Vilnius, Lithuania, VSP/Mokslas, 1991, pp. 198-218.

[9] M.I. Freidlin and A.D. Wentzell, Random Perturbations of Dynamical Systems. Springer, New York, 1984.

[10] R.S. Liptser and A.A. Pukhalskii, Limit theorems on large deviations for semimartingales. Stochastics and Stochastics Reports, 38, 1992, pp. 201-249.

[11] A. de Acosta, Large deviations for empirical measures of Markov chains. Journal of Theoretical Probability, 3, 1990, pp. 395-431.

[12] J. Gärtner, On large deviations for the invariant measure. Theory of Probability and its Applications, 22, No. 1, 1977, pp. 27-42.

[13] S. Orey and S. Pelikan, Large deviation principles for stationary processes. Annals of Probability, 16, 1988, pp. 1481-1495.

[14] A.Yu. Veretennikov, On large deviations for ergodic process empirical measures. Advances in Soviet Mathematics, 12, 1992, pp. 125-133.

[15] L. Wu, Grandes déviations pour les processus de Markov essentiellement irréductibles. I. Temps discret. C. R. Acad. Sci. Paris, 312, Série I, 1991, pp. 609-614.

[16] N.C. Jain, Large deviation lower bounds for additive functionals of Markov processes. Annals of Probability, 18, No. 3, 1990, pp. 1071-1098.
[17] P. Billingsley, Convergence of Probability Measures. John Wiley and Sons, New York-London-Sydney-Toronto, 1968.

[18] A.V. Skorokhod, Asymptotic Methods in the Theory of Stochastic Differential Equations. Naukova Dumka, Kiev, 1987.

[19] P. Dupuis and R.S. Ellis, A Weak Convergence Approach to the Theory of Large Deviations. John Wiley and Sons, to be published.