Limit theorems under sublinear expectations and probabilities. Xinpeng Li, Shandong University & Université Paris 1. Young Researchers Meeting on BSDEs, Numerics and Finance, 4 July, Oxford. 1 / 25
Outline
Introduction
Law of large numbers
Central limit theorem
2 / 25
Introduction

Capacity and Choquet expectation:
- Marinacci, M. (1999) Limit laws for non-additive probabilities and their frequentist interpretation. J. Econom. Theory, 84, 145-195.
- Maccheroni, F. and Marinacci, M. (2005) A strong law of large numbers for capacities. The Annals of Probability, 33, 1171-1178.
- etc.

Sublinear expectation:
- Peng, S. (2008) A new central limit theorem under sublinear expectations. arXiv:0803.2656v1.
- Chen, Z. (2010) Strong laws of large numbers for capacities. arXiv:1006.0749v1.
- etc.

3 / 25
Let $(\Omega, \mathcal{F})$ be a measurable space and $\mathcal{M}$ the set of all probability measures on $\Omega$. For each non-empty subset $\mathcal{P} \subseteq \mathcal{M}$, we can define:

Upper probability: $V(A) := \sup_{P \in \mathcal{P}} P(A)$, $A \in \mathcal{F}$.

Lower probability: $v(A) := \inf_{P \in \mathcal{P}} P(A)$, $A \in \mathcal{F}$.

Sublinear expectation: $\hat{\mathbb{E}}[X] := \sup_{P \in \mathcal{P}} E_P[X]$, which satisfies
(1) Monotonicity: $\hat{\mathbb{E}}[X] \geq \hat{\mathbb{E}}[Y]$ if $X \geq Y$;
(2) Constant preserving: $\hat{\mathbb{E}}[c] = c$, $c \in \mathbb{R}$;
(3) Subadditivity: $\hat{\mathbb{E}}[X+Y] \leq \hat{\mathbb{E}}[X] + \hat{\mathbb{E}}[Y]$;
(4) Positive homogeneity: $\hat{\mathbb{E}}[\lambda X] = \lambda \hat{\mathbb{E}}[X]$, $\lambda \geq 0$.

4 / 25
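Properties (1)-(4) can be checked concretely by taking the supremum over a finite family of measures. A minimal Python sketch, assuming a hypothetical three-point sample space and a toy family of three probability vectors (both are illustrative choices, not from the slides):

```python
import numpy as np

# Toy example: Omega has three points; P is a hypothetical finite family
# of probability vectors on Omega (any non-empty family would do).
P = [np.array([0.2, 0.3, 0.5]),
     np.array([0.5, 0.3, 0.2]),
     np.array([1/3, 1/3, 1/3])]

def E_hat(X):
    """Sublinear expectation E^[X] = sup over the family of E_P[X]."""
    return max(float(p @ X) for p in P)

def V(A):
    """Upper probability of an event, encoded as a 0/1 indicator vector."""
    return E_hat(A)

def v(A):
    """Lower probability v(A) = 1 - V(A^c)."""
    return 1.0 - V(1 - A)

X = np.array([1.0, -2.0, 0.5])
Y = np.array([0.3, 1.0, -1.0])

assert abs(E_hat(np.full(3, 7.0)) - 7.0) < 1e-12      # (2) E^[c] = c
assert E_hat(X + Y) <= E_hat(X) + E_hat(Y) + 1e-12    # (3) subadditivity
assert abs(E_hat(2.5 * X) - 2.5 * E_hat(X)) < 1e-12   # (4) homogeneity
```

Monotonicity (1) holds as well, since each linear expectation $E_P$ is monotone and the supremum preserves this. Note also $v(A) \leq V(A)$ for every event, with equality for all $A$ exactly when the family reduces to a single measure.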
Definition (Independence). $X_n$ is said to be independent of $(X_1, \dots, X_{n-1})$ if for each $\varphi \in C_{b,Lip}(\mathbb{R}^n)$ (the bounded Lipschitz functions on $\mathbb{R}^n$),
$\hat{\mathbb{E}}[\varphi(X_1, \dots, X_n)] = \hat{\mathbb{E}}\big[\hat{\mathbb{E}}[\varphi(x_1, \dots, x_{n-1}, X_n)]_{(x_1, \dots, x_{n-1}) = (X_1, \dots, X_{n-1})}\big]$.

$X_n$ is said to be product independent of $(X_1, \dots, X_{n-1})$ if for all nonnegative bounded Lipschitz functions $\varphi_k$,
$\hat{\mathbb{E}}\big[\prod_{k=1}^n \varphi_k(X_k)\big] = \prod_{k=1}^n \hat{\mathbb{E}}[\varphi_k(X_k)]$.

$X_n$ is said to be sum independent of $(X_1, \dots, X_{n-1})$ if for each $\varphi \in C_{b,Lip}(\mathbb{R})$,
$\hat{\mathbb{E}}\big[\varphi\big(\textstyle\sum_{k=1}^n X_k\big)\big] = \hat{\mathbb{E}}\big[\hat{\mathbb{E}}[\varphi(x + X_n)]_{x = \sum_{k=1}^{n-1} X_k}\big]$.

5 / 25
Example: Consider $X$ and $Y$ such that $-\hat{\mathbb{E}}[-X] < \hat{\mathbb{E}}[X] = 0$ and $-\hat{\mathbb{E}}[-Y] < \hat{\mathbb{E}}[Y] = 0$ (mean uncertainty, with upper mean zero).

Independent case ($Y$ independent of $X$):
$\hat{\mathbb{E}}[XY] = \hat{\mathbb{E}}\big[\hat{\mathbb{E}}[xY]_{x=X}\big] = \hat{\mathbb{E}}\big[(x^+ \hat{\mathbb{E}}[Y] + x^- \hat{\mathbb{E}}[-Y])_{x=X}\big] = \hat{\mathbb{E}}\big[X\hat{\mathbb{E}}[Y] + X^-(\hat{\mathbb{E}}[Y] + \hat{\mathbb{E}}[-Y])\big] = \hat{\mathbb{E}}[-Y]\,\hat{\mathbb{E}}[X^-] > 0$.

6 / 25
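The strict positivity can be checked numerically. A sketch assuming a hypothetical model where the law of $X$ ranges over the family $\{N(m,1) : m \in [-1/2, 0]\}$, so that $\hat{\mathbb{E}}[X] = 0$ and $\hat{\mathbb{E}}[-X] = 1/2$, with the same mean uncertainty assumed for $Y$; it uses the closed form $E[W^+] = a\Phi(a) + \phi(a)$ for $W \sim N(a,1)$:

```python
from math import erf, exp, pi, sqrt

import numpy as np

Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))    # standard normal cdf
phi = lambda t: exp(-t * t / 2) / sqrt(2 * pi)  # standard normal pdf

# Hypothetical family for X: {N(m,1) : m in [-1/2, 0]},
# giving upper mean E^[X] = 0 and E^[-X] = 1/2.
means = np.linspace(-0.5, 0.0, 51)

# E^[X^-] = sup_m E_{N(m,1)}[max(-X, 0)]; for W ~ N(a,1), E[W^+] = a*Phi(a) + phi(a).
E_hat_Xneg = max((-m) * Phi(-m) + phi(-m) for m in means)

E_Y, E_negY = 0.0, 0.5   # assumed upper means of Y and -Y
# Independence of Y from X gives
#   E^[XY] = E^[ X*E^[Y] + X^- * (E^[Y] + E^[-Y]) ] = (E^[Y] + E^[-Y]) * E^[X^-].
E_XY = (E_Y + E_negY) * E_hat_Xneg   # strictly positive
```

So $\hat{\mathbb{E}}[XY] \approx 0.35 > 0$ in this toy model, even though both variables have upper mean zero.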
Definition (Identical distribution). $X_2$ is said to be identically distributed with $X_1$, denoted $X_1 \stackrel{d}{=} X_2$, if for each $\varphi \in C_{b,Lip}(\mathbb{R})$, $\hat{\mathbb{E}}[\varphi(X_1)] = \hat{\mathbb{E}}[\varphi(X_2)]$.

7 / 25
Definition (G-normal distribution). $\xi$ is said to be G-normally distributed, denoted $\xi \sim \mathcal{N}(0, [\underline{\sigma}^2, \overline{\sigma}^2])$, where $\overline{\sigma}^2 = \hat{\mathbb{E}}[\xi^2]$ and $\underline{\sigma}^2 = -\hat{\mathbb{E}}[-\xi^2]$, if for all $a, b \geq 0$,
$a\xi + b\bar{\xi} \stackrel{d}{=} \sqrt{a^2 + b^2}\,\xi$,
where $\bar{\xi}$ is independent of $\xi$ and $\bar{\xi} \stackrel{d}{=} \xi$.

Remark. If $\xi + \bar{\xi} \stackrel{d}{=} \sqrt{2}\,\xi$, then $\xi$ is G-normally distributed.

8 / 25
Proposition. If $\xi$ is G-normally distributed with $\overline{\sigma}^2 = \hat{\mathbb{E}}[\xi^2]$ and $\underline{\sigma}^2 = -\hat{\mathbb{E}}[-\xi^2]$, then for each $\varphi \in C_{b,Lip}(\mathbb{R})$, the function
$u(t,x) = \hat{\mathbb{E}}[\varphi(x + \sqrt{t}\,\xi)]$, $(t,x) \in [0,\infty) \times \mathbb{R}$,
is the unique viscosity solution of the G-heat equation
$\partial_t u - G(\partial^2_{xx} u) = 0$, $u|_{t=0} = \varphi$,
where $G(\alpha) = \frac{1}{2}(\overline{\sigma}^2 \alpha^+ - \underline{\sigma}^2 \alpha^-)$, $\alpha \in \mathbb{R}$.

Proposition. Let $\xi \sim \mathcal{N}(0, [\underline{\sigma}^2, \overline{\sigma}^2])$. If $\varphi$ is a convex function, then $\hat{\mathbb{E}}[\varphi(\xi)] = E_Z[\varphi(\overline{\sigma} Z)]$, where $Z \sim N(0, 1)$.

9 / 25
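The G-heat equation can be solved by an explicit finite-difference scheme; a minimal sketch, with ad hoc grid and step sizes (these numerical choices are not from the slides). For the convex test function $\varphi(x) = x^2$, the computed $u(1,0) = \hat{\mathbb{E}}[\varphi(\xi)]$ should match the second proposition's closed form $E[\varphi(\overline{\sigma} Z)] = \overline{\sigma}^2$:

```python
import numpy as np

def g_heat_value(phi, sig_lo, sig_hi, T=1.0, L=6.0, nx=241):
    """Explicit finite differences for  u_t = G(u_xx),  u(0,.) = phi,
    with G(a) = 0.5*(sig_hi^2 * a^+ - sig_lo^2 * a^-).
    Returns u(T, 0), a numerical value for E^[phi(xi)],
    xi ~ N(0, [sig_lo^2, sig_hi^2]).  Grid sizes are ad hoc choices."""
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    nt = int(np.ceil(T * sig_hi**2 / (0.4 * dx**2)))  # CFL-stable step count
    dt = T / nt
    u = phi(x)
    for _ in range(nt):
        uxx = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        G = 0.5 * (sig_hi**2 * np.maximum(uxx, 0.0)
                   - sig_lo**2 * np.maximum(-uxx, 0.0))
        u = u + dt * G        # boundary nodes stay frozen (uxx = 0 there)
    return float(u[nx // 2])  # value at x = 0

# Convex phi(x) = x^2: the second proposition gives E^[phi(xi)] = sig_hi^2 = 1.
val = g_heat_value(lambda x: x**2, sig_lo=0.5, sig_hi=1.0)
```

The boundary treatment is crude (values frozen at $\varphi$), but for $T = 1$ the resulting error stays confined near $x = \pm L$ and does not reach the evaluation point $x = 0$.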
Definition (Maximal distribution). $\eta$ is said to be maximally distributed if for all $a, b \geq 0$,
$a\eta + b\bar{\eta} \stackrel{d}{=} (a+b)\eta$,
where $\bar{\eta}$ is independent of $\eta$ and $\bar{\eta} \stackrel{d}{=} \eta$.

Remark. If $\eta + \bar{\eta} \stackrel{d}{=} 2\eta$, then $\eta$ is maximally distributed.

Proposition. If $\eta$ is maximally distributed with $\overline{\mu} = \hat{\mathbb{E}}[\eta]$ and $\underline{\mu} = -\hat{\mathbb{E}}[-\eta]$, then for each $\varphi \in C_{b,Lip}(\mathbb{R})$,
$\hat{\mathbb{E}}[\varphi(\eta)] = \max_{\underline{\mu} \leq v \leq \overline{\mu}} \varphi(v)$.

10 / 25
Law of large numbers

Weak law of large numbers (Peng). Let $\{X_n\}$ be a sequence of i.i.d. random variables with finite means $\overline{\mu} = \hat{\mathbb{E}}[X_1]$ and $\underline{\mu} = -\hat{\mathbb{E}}[-X_1]$. Suppose $\hat{\mathbb{E}}[|X_1|^{1+\alpha}] < \infty$ for some $\alpha > 0$, and let $S_n = \sum_{k=1}^n X_k$. Then for each $\varphi \in C_{b,Lip}(\mathbb{R})$,
$\lim_{n\to\infty} \hat{\mathbb{E}}\big[\varphi\big(\tfrac{S_n}{n}\big)\big] = \max_{\underline{\mu} \leq v \leq \overline{\mu}} \varphi(v)$.

11 / 25
Strong law of large numbers (Chen). Let $\{X_n\}$ be a sequence of i.i.d. random variables with finite means $\overline{\mu} = \hat{\mathbb{E}}[X_1]$ and $\underline{\mu} = -\hat{\mathbb{E}}[-X_1]$. Suppose $\hat{\mathbb{E}}[|X_1|^{1+\alpha}] < \infty$ for some $\alpha > 0$, and let $S_n = \sum_{k=1}^n X_k$. Then:

(I) $v\big(\underline{\mu} \leq \liminf_n \tfrac{S_n}{n} \leq \limsup_n \tfrac{S_n}{n} \leq \overline{\mu}\big) = 1$.

(II) Furthermore, if $V$ is upper continuous, i.e., $V(A_n) \downarrow V(A)$ whenever $A_n \downarrow A$, then
$V\big(\limsup_n \tfrac{S_n}{n} = \overline{\mu}\big) = 1$ and $V\big(\liminf_n \tfrac{S_n}{n} = \underline{\mu}\big) = 1$.

(III) Suppose $V$ is upper continuous, and let $C(\{x_n\})$ denote the cluster set of a sequence $\{x_n\}$ in $\mathbb{R}$, i.e., $C(\{x_n\}) = \{x :$ there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ with $x_{n_k} \to x\}$. Then
$V\big(C(\{\tfrac{S_n}{n}\}) = [\underline{\mu}, \overline{\mu}]\big) = 1$.

12 / 25
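A Monte Carlo sketch of the mechanism behind (II)-(III), assuming a toy model in which nature draws $X_k \sim N(m_k, 1)$ with means $m_k \in [\underline{\mu}, \overline{\mu}] = [-1, 1]$: a constant mean $v$ realizes $S_n/n \to v$ under one measure, while alternating the mean over fast-growing blocks keeps $S_n/n$ oscillating between values near $\underline{\mu}$ and $\overline{\mu}$:

```python
import numpy as np

rng = np.random.default_rng(42)
mu_lo, mu_hi = -1.0, 1.0        # assumed lower and upper means
n = 200_000

# Each v in [mu_lo, mu_hi] is attained as an a.s. limit under some scenario:
# nature draws every X_k with mean v, so S_n/n -> v (classical SLLN).
for v in (-1.0, -0.3, 0.5, 1.0):
    x = rng.normal(loc=v, scale=1.0, size=n)
    assert abs(x.mean() - v) < 0.02

# Alternating the mean over fast-growing blocks makes S_n/n oscillate,
# spreading its cluster set out toward the whole interval [mu_lo, mu_hi].
bounds = np.array([m**m for m in range(1, 8)])    # 1, 4, 27, ..., 823543
block = np.searchsorted(bounds, np.arange(1, n + 1), side="right")
means = np.where(block % 2 == 0, mu_hi, mu_lo)
x = rng.normal(loc=means, scale=1.0)
avg = np.cumsum(x) / np.arange(1, n + 1)
spread = float(avg[1000:].max() - avg[1000:].min())  # close to mu_hi - mu_lo
```

Each superlinearly growing block dominates everything before it, so the running average swings back and forth with amplitude approaching $\overline{\mu} - \underline{\mu}$, consistent with the cluster-set statement (III).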
Law of large numbers

We assume $\mathcal{P}$ is weakly compact. Let $\{X_k\}_{k=1}^\infty$ be a sequence of random variables satisfying
$\sup_{k \geq 1} \hat{\mathbb{E}}[|X_k|^{1+\alpha}] < \infty$ for some $\alpha > 0$, and $\hat{\mathbb{E}}[X_k] = \overline{\mu}$, $-\hat{\mathbb{E}}[-X_k] = \underline{\mu}$, $k = 1, 2, \dots$. Set $S_n = \sum_{k=1}^n X_k$.

(i) If $\{X_k\}_{k=1}^\infty$ is product independent, then $v\big(\underline{\mu} \leq \liminf_n \tfrac{S_n}{n} \leq \limsup_n \tfrac{S_n}{n} \leq \overline{\mu}\big) = 1$.

(ii) If $\{X_k\}_{k=1}^\infty$ is product and sum independent, then $\lim_n \hat{\mathbb{E}}\big[\varphi\big(\tfrac{S_n}{n}\big)\big] = \max_{\underline{\mu} \leq v \leq \overline{\mu}} \varphi(v)$.

(iii) If $\{X_k\}_{k=1}^\infty$ is sum independent and $V(\cdot)$ is upper continuous, then for each $\mu \in [\underline{\mu}, \overline{\mu}]$,
$V\big(\lim_n \tfrac{S_n}{n} = \mu\big) = 1$.

13 / 25
Lemma. Assume $\mathcal{P}$ is weakly compact. Let $P$ be a probability measure such that for each $\varphi \in C_{b,Lip}(\mathbb{R}^n)$,
$E_P[\varphi(X_1, \dots, X_n)] \leq \hat{\mathbb{E}}[\varphi(X_1, \dots, X_n)]$. (*)
Then (*) also holds for each bounded measurable function $\varphi$.

Proof: (*) holds for every bounded u.s.c. function $\varphi$, since there exist $\varphi_k \in C_{b,Lip}(\mathbb{R}^n)$ with $\varphi_k \downarrow \varphi$ (take $\varphi_k(x) = \sup_{y \in \mathbb{R}^n}\{\varphi(y) - k|x - y|\}$). Because $\mathcal{P}$ is weakly compact, $\hat{\mathbb{E}}[\varphi_k(X_1, \dots, X_n)] \downarrow \hat{\mathbb{E}}[\varphi(X_1, \dots, X_n)]$ (Theorem 31 in Denis, Hu, Peng (2008)), hence
$E_P[\varphi(X_1, \dots, X_n)] = \lim_k E_P[\varphi_k(X_1, \dots, X_n)] \leq \hat{\mathbb{E}}[\varphi(X_1, \dots, X_n)]$.
If $\varphi$ is a bounded measurable function, then
$E_P[\varphi(X_1, \dots, X_n)] = \sup\{E_P[\tilde{\varphi}(X_1, \dots, X_n)] : \tilde{\varphi}$ bounded u.s.c., $\tilde{\varphi} \leq \varphi\} \leq \hat{\mathbb{E}}[\varphi(X_1, \dots, X_n)]$.

14 / 25
Lemma. If for all nonnegative bounded Lipschitz functions $\varphi_i$, $i = 1, \dots, n$,
$\hat{\mathbb{E}}\big[\prod_{i=1}^n \varphi_i(X_i)\big] = \prod_{i=1}^n \hat{\mathbb{E}}[\varphi_i(X_i)]$,
then the same identity holds for all nonnegative bounded measurable functions $\varphi_i$, $i = 1, \dots, n$.

15 / 25
Sketch of proof of the main theorem:

(i) Similar to the proof in Chen (2010). Consider the truncation
$Y_k = X_k \mathbf{1}_{\{|X_k| \leq k^\beta\}} - \hat{\mathbb{E}}\big[X_k \mathbf{1}_{\{|X_k| \leq k^\beta\}}\big]$, where $\tfrac{1}{1+\alpha} < \beta < 1$.

Elementary inequality: $e^x \leq 1 + x + |x|^{1+\alpha} e^{2|x|}$, $x \in \mathbb{R}$, $0 < \alpha < 1$. Hence, since $|Y_k| \leq 2k^\beta \leq 2n^\beta$ for $k \leq n$,
$e^{Y_k/n^\beta} \leq 1 + \tfrac{Y_k}{n^\beta} + \tfrac{|Y_k|^{1+\alpha}}{n^{\beta(1+\alpha)}} e^{2|Y_k|/n^\beta} \leq 1 + \tfrac{Y_k}{n^\beta} + \tfrac{|Y_k|^{1+\alpha}}{n^{\beta(1+\alpha)}} e^4$,
so $\hat{\mathbb{E}}[e^{Y_k/n^\beta}] \leq 1 + \tfrac{C}{n}$.

By product independence,
$\hat{\mathbb{E}}\big[\prod_{k=1}^n e^{Y_k/n^\beta}\big] = \prod_{k=1}^n \hat{\mathbb{E}}[e^{Y_k/n^\beta}] \leq \big(1 + \tfrac{C}{n}\big)^n \leq e^C$.

Chebyshev's inequality then gives
$V\big(\tfrac{\sum_{k=1}^n Y_k}{n} \geq \varepsilon\big) = V\big(e^{\sum_{k=1}^n Y_k/n^\beta} \geq e^{\varepsilon n^{1-\beta}}\big) \leq e^{-\varepsilon n^{1-\beta}}\, \hat{\mathbb{E}}\big[\prod_{k=1}^n e^{Y_k/n^\beta}\big] \leq e^{-\varepsilon n^{1-\beta}} e^C$.

16 / 25
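The elementary exponential inequality in the truncation step can be sanity-checked numerically on a grid (a sketch; the grid and the set of $\alpha$ values are arbitrary choices):

```python
import numpy as np

# Check of the elementary bound used after truncation:
#   e^x <= 1 + x + |x|^(1+alpha) * e^(2|x|)   for all real x, 0 < alpha < 1.
x = np.linspace(-10.0, 10.0, 100001)
ok = True
for alpha in (0.1, 0.25, 0.5, 0.75, 0.9):
    lhs = np.exp(x)
    rhs = 1 + x + np.abs(x) ** (1 + alpha) * np.exp(2 * np.abs(x))
    ok &= bool(np.all(lhs <= rhs + 1e-12))
```

Applied with $x = Y_k/n^\beta$ and followed by $\hat{\mathbb{E}}$ (which kills the linear term, as $\hat{\mathbb{E}}[Y_k] = 0$), this bound yields the key estimate $\hat{\mathbb{E}}[e^{Y_k/n^\beta}] \leq 1 + C/n$.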
Corollary. Let $\{X_n\}$ be a sequence of bounded and sum independent random variables satisfying $\sup_{k \geq 1} \hat{\mathbb{E}}[|X_k|^{1+\alpha}] < \infty$ for some $\alpha > 0$, and $\hat{\mathbb{E}}[X_k] = \overline{\mu}$, $-\hat{\mathbb{E}}[-X_k] = \underline{\mu}$, $k = 1, 2, \dots$. Set $S_n = \sum_{k=1}^n X_k$.

(i) $v\big(\underline{\mu} \leq \liminf_n \tfrac{S_n}{n} \leq \limsup_n \tfrac{S_n}{n} \leq \overline{\mu}\big) = 1$.

(ii) $\lim_n \hat{\mathbb{E}}\big[\varphi\big(\tfrac{S_n}{n}\big)\big] = \max_{\underline{\mu} \leq v \leq \overline{\mu}} \varphi(v)$.

(iii) Furthermore, if $V$ is upper continuous, then for each $\mu \in [\underline{\mu}, \overline{\mu}]$,
$V\big(\lim_n \tfrac{S_n}{n} = \mu\big) = 1$.

17 / 25
Central limit theorem

Central limit theorem (Peng). Let $\{X_n\}$ be a sequence of i.i.d. random variables such that $\hat{\mathbb{E}}[X_1] = \hat{\mathbb{E}}[-X_1] = 0$ and $\hat{\mathbb{E}}[|X_1|^q] < \infty$ for some $q > 2$. Let $S_n = \sum_{k=1}^n X_k$. Then for all $\varphi \in C(\mathbb{R})$ satisfying a quadratic growth condition,
$\lim_{n\to\infty} \hat{\mathbb{E}}\big[\varphi\big(\tfrac{S_n}{\sqrt{n}}\big)\big] = \hat{\mathbb{E}}[\varphi(\xi)]$,
where $\xi \sim \mathcal{N}(0, [\underline{\sigma}^2, \overline{\sigma}^2])$ with $\overline{\sigma}^2 = \hat{\mathbb{E}}[X_1^2]$ and $\underline{\sigma}^2 = -\hat{\mathbb{E}}[-X_1^2]$.

18 / 25
Let $M_n^q([\underline{\sigma}^2, \overline{\sigma}^2], K)$ denote the set of $n$-stage martingales $S = (S_k)$ with filtration $\mathcal{F} = (\mathcal{F}_k)$ such that for all $k$, the following relations hold:
$E[S_k] = 0$,
$\underline{\sigma}^2 \leq E[|S_{k+1} - S_k|^2 \mid \mathcal{F}_k] \leq \overline{\sigma}^2$,
$E[|S_{k+1} - S_k|^q \mid \mathcal{F}_k] \leq K^q$.
Let
$V_n[\varphi] := \sup_{S \in M_n^q([\underline{\sigma}^2, \overline{\sigma}^2], K)} E\big[\varphi\big(\tfrac{S_n}{\sqrt{n}}\big)\big]$.

Theorem (Central limit theorem). Assume $q > 2$. Then for all $\varphi \in C(\mathbb{R})$ satisfying a quadratic growth condition,
$\lim_{n\to\infty} V_n[\varphi] = \hat{\mathbb{E}}[\varphi(\xi)]$,
where $\xi \sim \mathcal{N}(0, [\underline{\sigma}^2, \overline{\sigma}^2])$.

19 / 25
Maximal $L^p$-variation problem

Let $M_n(\mu)$ be the set of $n$-stage martingales whose terminal distribution is $\mu$. We define the $L^p$-variation of length $n$ of the martingale $(L_k)_{k=1,\dots,n}$ as
$V_n^p(L) = E\Big[\sum_{k=1}^n \big(E[|L_k - L_{k-1}|^p \mid (L_i,\, i \leq k-1)]\big)^{1/p}\Big]$.
The value function is denoted by
$V_n(\mu) = \frac{1}{\sqrt{n}} \sup_{L \in M_n(\mu)} V_n^p(L)$.

20 / 25
Theorem. Let $p \in [1, 2)$ and $\mu \in \Delta(K)$, where $K$ is a compact subset of $\mathbb{R}$. Then
$\lim_{n\to\infty} V_n(\mu) = E[f_\mu(Z) Z]$,
where $Z \sim N(0, 1)$ and $f_\mu : \mathbb{R} \to \mathbb{R}$ is the nondecreasing function with $f_\mu(Z) \sim \mu$, namely $f_\mu(x) = F_\mu^{-1}(F_N(x))$ with $F_\mu^{-1}(y) = \inf\{s : F_\mu(s) > y\}$.

21 / 25
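The quantile transform $f_\mu = F_\mu^{-1} \circ F_N$ and the limit $E[f_\mu(Z)Z]$ can be illustrated for a discrete $\mu$. A Monte Carlo sketch, assuming $\mu$ uniform on $\{-1, 1\}$ (an illustrative choice), for which $f_\mu(Z)$ takes the value $-1$ for $Z < 0$ and $+1$ otherwise, so $E[f_\mu(Z)Z] = E[|Z|] = \sqrt{2/\pi}$:

```python
from math import erf, pi, sqrt

import numpy as np

def F_N(x):
    """Standard normal cdf."""
    return 0.5 * (1 + np.vectorize(erf)(x / sqrt(2)))

def quantile(support, probs, y):
    """F_mu^{-1}(y) = inf{s : F_mu(s) > y} for a discrete mu."""
    cdf = np.cumsum(probs)
    idx = np.searchsorted(cdf, y, side="right")
    return support[np.minimum(idx, len(support) - 1)]

# mu uniform on {-1, 1}: f_mu(Z) = -1 for Z < 0 and +1 otherwise,
# so E[f_mu(Z) Z] = E[|Z|] = sqrt(2/pi).
support = np.array([-1.0, 1.0])
probs = np.array([0.5, 0.5])

rng = np.random.default_rng(7)
Z = rng.standard_normal(500_000)
fZ = quantile(support, probs, F_N(Z))   # f_mu(Z) = F_mu^{-1}(F_N(Z))

est = float(np.mean(fZ * Z))            # Monte Carlo for E[f_mu(Z) Z]
exact = sqrt(2 / pi)
```

By construction $f_\mu(Z) \sim \mu$ (the sample mean of `fZ` is near $0$), and the estimate `est` is close to $\sqrt{2/\pi} \approx 0.798$.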
References:
- Mertens, J.F. and Zamir, S. (1976) The normal distribution and repeated games. International Journal of Game Theory, 5, 187-197.
- Mertens, J.F. and Zamir, S. (1977) The maximal variation of a bounded martingale. Israel Journal of Mathematics, 27, 252-276.
- De Meyer, B. (1998) The maximal variation of a bounded martingale and the central limit theorem. Ann. Inst. Henri Poincaré, 34, 49-59.
- De Meyer, B. (2010) Price dynamics on a stock market with asymmetric information. Games and Economic Behavior, 69, 42-71.
- Gensbittel, F. Covariance control problems over martingales with fixed terminal distribution arising from game theory. Preprint.

22 / 25
Sketch of proof of the theorem:

Sandwich $V_n(\mu)$ between two correlation quantities:
$\sup_{L \sim \mu,\, S \in M_n} E\big[L \tfrac{S_n}{\sqrt{n}}\big] \leq V_n(\mu) \leq \sup_{L \sim \mu,\, S \in M_n^q([0,1],2)} E\big[L \tfrac{S_n}{\sqrt{n}}\big]$.

By Fenchel duality,
$\sup_{L \sim \mu} E\big[L \tfrac{S_n}{\sqrt{n}}\big] = \inf_{\varphi \in \mathrm{Conv}(K)} E\big[\varphi(L) + \varphi^*\big(\tfrac{S_n}{\sqrt{n}}\big)\big]$,
and hence
$\sup_{L \sim \mu,\, S \in M_n^q([0,1],2)} E\big[L \tfrac{S_n}{\sqrt{n}}\big] = \sup_{S \in M_n^q([0,1],2)} \inf_{\varphi \in \mathrm{Conv}(K)} E\big[\varphi(L) + \varphi^*\big(\tfrac{S_n}{\sqrt{n}}\big)\big] = \inf_{\varphi \in \mathrm{Conv}(K)} \Big\{ E[\varphi(L)] + \sup_{S \in M_n^q([0,1],2)} E\big[\varphi^*\big(\tfrac{S_n}{\sqrt{n}}\big)\big] \Big\}$.

23 / 25
Since $\varphi^*$ is a convex function, we have
$\lim_{n\to\infty} \sup_{S \in M_n^q([0,1],2)} E\big[\varphi^*\big(\tfrac{S_n}{\sqrt{n}}\big)\big] = E[\varphi^*(Z)]$, $Z \sim N(0,1)$.
Therefore
$\lim_{n\to\infty} V_n(\mu) = \inf_{\varphi \in \mathrm{Conv}(K)} \Big\{ E[\varphi(L)] + \lim_n \sup_{S \in M_n^q([0,1],2)} E\big[\varphi^*\big(\tfrac{S_n}{\sqrt{n}}\big)\big] \Big\} = \inf_{\varphi \in \mathrm{Conv}(K)} E[\varphi(L) + \varphi^*(Z)] = E[f_\mu(Z) Z]$.

24 / 25
Thank you for your attention! 25 / 25