
Deep factorisation of the stable process
Part I: Lagrange lecture, Torino University. Part II: Seminar, Collegio Carlo Alberto.
Andreas E. Kyprianou, University of Bath, UK.

Stable processes

Definition: A Lévy process X is called (strictly) α-stable if it satisfies the scaling property: for every c > 0 and x ∈ R, the law of (cX_{c^{−α}t} : t ≥ 0) under P_x is P_{cx}. Necessarily α ∈ (0, 2]. [α = 2 is Brownian motion; we exclude this case.] The positivity parameter ρ = P_0(X_t ≥ 0) (which, by scaling, does not depend on t) will frequently appear, as will ˆρ = 1 − ρ. The characteristic exponent Ψ(θ) := −t^{−1} log E(e^{iθX_t}) satisfies

Ψ(θ) = |θ|^α ( e^{πiα(1/2 − ρ)} 1_{(θ>0)} + e^{−πiα(1/2 − ρ)} 1_{(θ<0)} ),  θ ∈ R.

We assume throughout that X has jumps in both directions.
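As a quick orientation, here is a minimal numerical sketch of this exponent (Python; the values α = 1.3, ρ = 0.6 and the name stable_exponent are assumptions for illustration). The two assertions check the Hermitian symmetry forced by X being real-valued, and the scaling relation, which is the Fourier-side expression of the scaling property in the definition.

import numpy as np

def stable_exponent(theta, alpha, rho):
    """Characteristic exponent Psi(theta) of the strictly alpha-stable
    process in the parametrisation above (normalising constants ignored)."""
    theta = np.asarray(theta, dtype=float)
    return np.abs(theta) ** alpha * np.where(
        theta > 0,
        np.exp(1j * np.pi * alpha * (0.5 - rho)),
        np.exp(-1j * np.pi * alpha * (0.5 - rho)),
    )

alpha, rho = 1.3, 0.6                      # assumed sample parameters
theta = np.array([-2.0, -0.5, 0.7, 3.1])

# Hermitian symmetry Psi(-theta) = conj(Psi(theta)): X is real-valued.
assert np.allclose(stable_exponent(-theta, alpha, rho),
                   np.conj(stable_exponent(theta, alpha, rho)))

# Scaling: Psi(c*theta) = c^alpha * Psi(theta) for c > 0.
c = 2.5
assert np.allclose(stable_exponent(c * theta, alpha, rho),
                   c ** alpha * stable_exponent(theta, alpha, rho))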

The Wiener-Hopf factorisation

For a given characteristic exponent Ψ of a Lévy process, there exist unique Bernstein functions κ and ˆκ such that, up to a multiplicative constant,

Ψ(θ) = ˆκ(iθ) κ(−iθ),  θ ∈ R.

As Bernstein functions, κ and ˆκ can be seen as the Laplace exponents of (killed) subordinators. The probabilistic significance of these subordinators is that their ranges correspond precisely to the ranges of the running maximum of X and of −X respectively.

The Wiener-Hopf factorisation

Explicit Wiener-Hopf factorisations are extremely rare! For the stable processes we are interested in we have

κ(λ) = λ^{αρ} and ˆκ(λ) = λ^{αˆρ},  λ ≥ 0,

where 0 < αρ, αˆρ < 1. Hypergeometric Lévy processes are another recently discovered family of Lévy processes for which the factorisation is known explicitly: for appropriate parameters (β, γ, ˆβ, ˆγ),

Ψ(z) = [ Γ(1 − β + γ − iz) / Γ(1 − β − iz) ] × [ Γ(ˆβ + ˆγ + iz) / Γ(ˆβ + iz) ].
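Because κ, ˆκ and Ψ are all explicit here, the stable factorisation can be verified numerically. A minimal sketch (Python; parameter values assumed; complex powers taken on the principal branch):

import numpy as np

alpha, rho = 0.8, 0.55                     # assumed sample parameters
rho_hat = 1.0 - rho

def Psi(theta):
    # Characteristic exponent of the stable process (previous slide).
    return abs(theta) ** alpha * np.exp(
        1j * np.pi * alpha * (0.5 - rho) * np.sign(theta))

kappa     = lambda lam: lam ** (alpha * rho)      # kappa(lambda)     = lambda^(alpha*rho)
kappa_hat = lambda lam: lam ** (alpha * rho_hat)  # kappa_hat(lambda) = lambda^(alpha*rho_hat)

# Psi(theta) = kappa_hat(i*theta) * kappa(-i*theta) for theta != 0; with
# principal-branch powers the identity holds exactly.
for theta in (-3.0, -0.4, 0.9, 2.2):
    assert np.isclose(Psi(theta), kappa_hat(1j * theta) * kappa(-1j * theta))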

Deep factorisation of the stable process

Another factorisation also exists, one which is more deeply embedded in the stable process. It is based around the representation of the stable process as a real-valued self-similar Markov process (rssmp): an R-valued regular strong Markov process (X_t : t ≥ 0) with probabilities P_x, x ∈ R, is a rssmp if there is a stability index α > 0 such that, for all c > 0 and x ∈ R, the law of (cX_{c^{−α}t} : t ≥ 0) under P_x is P_{cx}.

Markov additive processes (MAPs)

Let E be a finite state space and (J(t))_{t≥0} a continuous-time, irreducible Markov chain on E. A process (ξ, J) in R × E is called a Markov additive process (MAP) with probabilities P_{x,i}, x ∈ R, i ∈ E, if, for any i ∈ E and s, t ≥ 0: given {J(t) = i}, the pair (ξ(t + s) − ξ(t), J(t + s)) is independent of {(ξ(u), J(u)) : u ≤ t} and

(ξ(t + s) − ξ(t), J(t + s)) =_d (ξ(s), J(s)) with (ξ(0), J(0)) = (0, i).

Pathwise description of a MAP

The pair (ξ, J) is a Markov additive process if and only if, for each i, j ∈ E, there exist a sequence of iid Lévy processes (ξ^n_i)_{n≥0} and a sequence of iid random variables (U^n_{ij})_{n≥0}, independent of the chain J, such that, if T_0 = 0 and (T_n)_{n≥1} are the jump times of J, the process ξ has the representation

ξ(t) = 1_{(n>0)} ( ξ(T_n−) + U^n_{J(T_n−),J(T_n)} ) + ξ^n_{J(T_n)}(t − T_n),  for t ∈ [T_n, T_{n+1}), n ≥ 0.
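This pathwise description is also a simulation recipe. Below is a minimal sketch (Python); the concrete ingredients, two states, Brownian components with drift, unit-mean exponential switch jumps with alternating sign, are invented purely for illustration:

import numpy as np

rng = np.random.default_rng(1)

q  = np.array([0.7, 1.2])        # rate of leaving state 0 and state 1
mu = np.array([+0.5, -1.0])      # drift of the Levy component in each state
T, dt = 10.0, 1e-3

t, i, xi = 0.0, 0, 0.0
next_switch = rng.exponential(1.0 / q[i])
while t < T:
    if t >= next_switch:                    # chain switches: add U^n_{i,j}
        i = 1 - i
        xi += rng.exponential(1.0) * (1 if i == 1 else -1)
        next_switch = t + rng.exponential(1.0 / q[i])
    xi += mu[i] * dt + np.sqrt(dt) * rng.normal()   # run current Levy piece
    t += dt
print(f"xi({T}) = {xi:.3f}, terminal state {i}")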

rssmps, MAPs, Lamperti-Kiu (Chaumont, Panti, Rivero)

Take the state space of the MAP to be E = {1, −1}. Let

X_t = |x| e^{ξ(τ(t))} J(τ(t)),  0 ≤ t < T_0,

where

τ(t) = inf{ s > 0 : ∫_0^s exp(αξ(u)) du > t|x|^{−α} }

and

T_0 = |x|^α ∫_0^∞ e^{αξ(u)} du.

Then X_t is a real-valued self-similar Markov process in the sense that the law of (cX_{c^{−α}t} : t ≥ 0) under P_x is P_{cx}. The converse (within a special class of rssmps) is also true.
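The time change is easy to implement numerically. Here is a minimal sketch (Python) of the Lamperti-Kiu construction on a grid; the MAP path (ξ, J) used below is deterministic toy data, not the stable MAP, and the parameter values are assumptions:

import numpy as np

alpha, x = 1.5, 2.0              # assumed stability index and starting point
dt = 1e-3
s = np.arange(0.0, 5.0, dt)
xi = 0.3 * np.sin(s) - 0.1 * s                 # toy path of xi
J  = np.where(np.sin(3 * s) > -0.2, 1, -1)     # toy modulating chain

# A(s) = int_0^s exp(alpha * xi(u)) du, by left Riemann sums.
A = np.concatenate([[0.0], np.cumsum(np.exp(alpha * xi[:-1]) * dt)])

def X(t):
    """X_t = |x| e^{xi(tau(t))} J(tau(t)), tau(t) = inf{s : A(s) > t|x|^-alpha}."""
    k = np.searchsorted(A, t * abs(x) ** (-alpha), side="right")
    if k >= len(s):
        raise ValueError("t is beyond T_0 for this simulated horizon")
    return abs(x) * np.exp(xi[k]) * J[k]

print([round(X(t), 4) for t in (0.0, 0.5, 1.0)])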

Characteristics of a MAP

Denote the transition rate matrix of the chain J by Q = (q_{ij})_{i,j∈E}. For each i ∈ E, the Laplace exponent of the Lévy process ξ_i will be written ψ_i (when it exists). For each pair i, j ∈ E, define the Laplace transform G_{ij}(z) = E(e^{zU_{ij}}) of the jump distribution U_{ij} (when it exists). Write G(z) for the N × N matrix whose (i, j)th element is G_{ij}(z). Let

F(z) = diag(ψ_1(z), ..., ψ_N(z)) + Q ∘ G(z)

(when it exists), where ∘ indicates elementwise multiplication. The matrix exponent of the MAP (ξ, J) is given by

E_i(e^{zξ(t)}; J(t) = j) = ( e^{F(z)t} )_{i,j},  i, j ∈ E

(when it exists).
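As a concrete instance of this recipe, the following sketch (Python) assembles F(z) for an invented two-state MAP, Brownian motion with drift in each state and an Exp(3) jump when switching from state 1 to state 2; all parameter values are assumptions for illustration:

import numpy as np
from scipy.linalg import expm

def F(z):
    psi = np.array([0.5 * z**2 + 0.4 * z,       # psi_1: BM with drift +0.4
                    0.5 * z**2 - 1.0 * z])      # psi_2: BM with drift -1.0
    Q = np.array([[-0.7, 0.7],
                  [ 1.2, -1.2]])                # transition rate matrix of J
    G = np.array([[1.0, 3.0 / (3.0 - z)],       # G_12(z) = E[e^{zU_12}], U_12 ~ Exp(3)
                  [1.0, 1.0]])                  # U_21 = U_ii = 0
    return np.diag(psi) + Q * G                 # '*' is elementwise, i.e. Q ∘ G(z)

z, t = 0.25, 2.0
print(expm(t * F(z)))   # the (i, j) entry is E_i[e^{z xi(t)}; J(t) = j]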

An α-stable process is a rssmp

An α-stable process is a rssmp. Remarkably (thanks to the work of Chaumont, Panti and Rivero), we can compute its matrix exponent precisely. Denoting the underlying MAP by (ξ, J) and ordering the states as (1, −1), we prefer to give the matrix exponent of (ξ, J) as follows:

F(z) =
( −Γ(α − z)Γ(1 + z) / [Γ(αˆρ − z)Γ(1 − αˆρ + z)]      Γ(α − z)Γ(1 + z) / [Γ(αˆρ)Γ(1 − αˆρ)]          )
(  Γ(α − z)Γ(1 + z) / [Γ(αρ)Γ(1 − αρ)]               −Γ(α − z)Γ(1 + z) / [Γ(αρ − z)Γ(1 − αρ + z)]    )

for Re(z) ∈ (−1, α).
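The displayed matrix is easy to probe numerically. A minimal sketch (Python, with assumed α = 1.5, ρ = 0.6) checking two structural facts: F(0) is the conservative rate matrix of J (rows sum to zero), and det F(z) vanishes at z = α − 1, a root used later in the talk:

import numpy as np
from scipy.special import gamma

alpha, rho = 1.5, 0.6            # assumed parameters, alpha in (1, 2)
rho_hat = 1.0 - rho

def F(z):
    """Matrix exponent of the MAP underlying the stable process, states
    ordered (1, -1), as displayed above; real z in (-1, alpha) here."""
    num = gamma(alpha - z) * gamma(1.0 + z)
    return np.array([
        [-num / (gamma(alpha*rho_hat - z) * gamma(1 - alpha*rho_hat + z)),
          num / (gamma(alpha*rho_hat)     * gamma(1 - alpha*rho_hat))],
        [ num / (gamma(alpha*rho)         * gamma(1 - alpha*rho)),
         -num / (gamma(alpha*rho - z)     * gamma(1 - alpha*rho + z))],
    ])

# F(0) = Q must be a conservative rate matrix: rows sum to zero.
assert np.allclose(F(0.0).sum(axis=1), 0.0)

# det F(z) = 0 has a root at z = alpha - 1 (see the Esscher transform slide).
assert np.isclose(np.linalg.det(F(alpha - 1.0)), 0.0)
print("F(0) conservative; det F(alpha - 1) = 0")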

Ascending ladder MAP

Observe the process (ξ, J) only at the times when ξ attains new maxima. This gives a MAP, say (H^+(t), J^+(t))_{t≥0}, with the property that H^+ is non-decreasing with the same range as the running maximum of ξ. Its matrix exponent can be identified as −κ(−z), where

κ(λ) = diag(Φ_1(λ), ..., Φ_N(λ)) − Λ ∘ K(λ),  λ ≥ 0.

Here, for i = 1, ..., N, the Φ_i are Bernstein functions (exponents of subordinators), Λ = (Λ_{i,j})_{i,j∈E} is the intensity matrix of J^+ and K(λ)_{i,j} = E[e^{−λU^+_{i,j}}], where U^+_{i,j} ≥ 0 are the additional discontinuities added to the path of ξ each time the chain J^+ switches from i to j, and U^+_{i,i} := 0, i ∈ E.

MAP Wiener-Hopf factorisation

Theorem. For θ ∈ R, up to a multiplicative factor,

−F(iθ) = Δ_π^{−1} ˆκ(iθ)^T Δ_π κ(−iθ),

where Δ_π = diag(π), π is the stationary distribution of Q, and ˆκ plays the role of κ, but for the dual MAP to (ξ, J). The dual process, or time-reversed process, is equal in law to the MAP with exponent

ˆF(z) = Δ_π^{−1} F(−z)^T Δ_π.
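For the stable MAP, π is explicit: solving πQ = 0 with Q = F(0) from the previous slide, together with the reflection formula Γ(s)Γ(1 − s) = π/sin(πs), gives π proportional to (sin(παρ), sin(παˆρ)). A sketch of the check (Python, assumed parameters):

import numpy as np
from scipy.special import gamma

alpha, rho = 0.7, 0.45           # assumed sample parameters
rho_hat = 1.0 - rho

# Q = F(0) for the stable MAP: off-diagonal rates a (leave state 1) and b.
a = gamma(alpha) / (gamma(alpha*rho_hat) * gamma(1 - alpha*rho_hat))
b = gamma(alpha) / (gamma(alpha*rho)     * gamma(1 - alpha*rho))
Q = np.array([[-a, a],
              [ b, -b]])

# Stationary distribution of a two-state chain: pi proportional to (b, a).
pi = np.array([b, a]) / (a + b)
assert np.allclose(pi @ Q, 0.0)

# Reflection formula turns (b, a) into (sin(pi alpha rho), sin(pi alpha rho_hat)).
target = np.array([np.sin(np.pi*alpha*rho), np.sin(np.pi*alpha*rho_hat)])
assert np.allclose(pi, target / target.sum())
print("pi =", pi)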

α ∈ (0, 1]

Define the family of Bernstein functions

κ_{q+i,p+j}(λ) := ∫_0^∞ (1 − e^{−λx}) [ ((q + i) ∨ (p + j) − 1) / ( (1 − e^{−x})^{q+i} (1 + e^{−x})^{p+j} ) ] e^{−αx} dx,

where q, p ∈ {αρ, αˆρ} and i, j ∈ {0, 1} are such that q + p = α and i + j = 1.
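These Bernstein functions can be evaluated by quadrature. The sketch below (Python) does so under the reading of the integrand reconstructed above; the extracted slide is ambiguous here, so the ∨ and the placement of the factor e^{−αx} are assumptions, and the output should be treated as illustrative only:

import numpy as np
from scipy.integrate import quad

alpha, rho = 0.8, 0.55           # assumed parameters, alpha in (0, 1]
rho_hat = 1.0 - rho

def kappa_bernstein(lam, q, p, i, j):
    """kappa_{q+i,p+j}(lambda) by quadrature, under the reconstruction above."""
    c = max(q + i, p + j) - 1.0
    def integrand(x):
        return ((1.0 - np.exp(-lam * x)) * c * np.exp(-alpha * x)
                / ((1.0 - np.exp(-x))**(q + i) * (1.0 + np.exp(-x))**(p + j)))
    val, _ = quad(integrand, 0.0, np.inf)
    return val

# One admissible index choice: q = alpha*rho, p = alpha*rho_hat, i = 1, j = 0,
# so that q + p = alpha and i + j = 1 as required.
print(kappa_bernstein(2.0, alpha*rho, alpha*rho_hat, 1, 0))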

Deep Factorisation, α ∈ (0, 1]

Theorem. Fix α ∈ (0, 1]. Up to a multiplicative constant, the ascending ladder MAP exponent κ is given, with states ordered (1, −1), by the 2 × 2 matrix with entries

κ(λ)_{1,1} = κ_{αρ+1,αˆρ}(λ) + [sin(παˆρ)/sin(παρ)] κ_{αˆρ,αρ+1}(0+),
κ(λ)_{1,−1} = [sin(παˆρ)/sin(παρ)] κ_{αˆρ,αρ+1}(λ)/λ,
κ(λ)_{−1,1} = [sin(παρ)/sin(παˆρ)] κ_{αρ,αˆρ+1}(λ)/λ,
κ(λ)_{−1,−1} = κ_{αˆρ+1,αρ}(λ) + [sin(παρ)/sin(παˆρ)] κ_{αρ,αˆρ+1}(0+).

Up to a multiplicative constant, the dual ascending ladder MAP exponent ˆκ is given by the entries

ˆκ(λ)_{1,1} = κ_{αˆρ+1,αρ}(λ + 1 − α) + [sin(παρ)/sin(παˆρ)] κ_{αρ,αˆρ+1}(0+),
ˆκ(λ)_{1,−1} = κ_{αˆρ,αρ+1}(λ + 1 − α) / (λ + 1 − α),
ˆκ(λ)_{−1,1} = κ_{αρ,αˆρ+1}(λ + 1 − α) / (λ + 1 − α),
ˆκ(λ)_{−1,−1} = κ_{αρ+1,αˆρ}(λ + 1 − α) + [sin(παˆρ)/sin(παρ)] κ_{αˆρ,αρ+1}(0+).

α ∈ (1, 2)

Define the family of Bernstein functions

φ_{q+i,p+j}(λ) = ∫_0^∞ (1 − e^{−λu}) { ((q + i) ∨ (p + j) − 1) / ( (1 − e^{−u})^{q+i} (1 + e^{−u})^{p+j} ) − (α − 1) / ( 2 (1 − e^{−u})^{q} (1 + e^{−u})^{p} ) } e^{−u} du,

for q, p ∈ {αρ, αˆρ} and i, j ∈ {0, 1} such that q + p = α and i + j = 1.

Deep Factorisation, α ∈ (1, 2)

Theorem. Fix α ∈ (1, 2). Up to a multiplicative constant, the ascending ladder MAP exponent κ is given, for λ ≥ 0 and with states ordered (1, −1), by the 2 × 2 matrix with entries

κ(λ)_{1,1} = sin(παρ) φ_{αρ+1,αˆρ}(λ + α − 1) + sin(παρ) φ_{αˆρ,αρ+1}(0+),
κ(λ)_{1,−1} = sin(παρ) φ_{αρ,αˆρ+1}(λ + α − 1) / (λ + α − 1),
κ(λ)_{−1,1} = sin(παˆρ) φ_{αˆρ,αρ+1}(λ + α − 1) / (λ + α − 1),
κ(λ)_{−1,−1} = sin(παˆρ) φ_{αˆρ+1,αρ}(λ + α − 1) + sin(παˆρ) φ_{αρ,αˆρ+1}(0+).

Up to a multiplicative constant, the dual ascending ladder MAP exponent ˆκ is given, for λ ≥ 0, by the entries

ˆκ(λ)_{1,1} = sin(παˆρ) φ_{αˆρ+1,αρ}(λ) + sin(παˆρ) φ_{αρ,αˆρ+1}(0+),
ˆκ(λ)_{1,−1} = sin(παˆρ) φ_{αρ,αˆρ+1}(λ) / λ,
ˆκ(λ)_{−1,1} = sin(παρ) φ_{αˆρ,αρ+1}(λ) / λ,
ˆκ(λ)_{−1,−1} = sin(παρ) φ_{αρ+1,αˆρ}(λ) + sin(παρ) φ_{αˆρ,αρ+1}(0+).

Tools: 1

Recall that

κ(λ) = diag(Φ_1(λ), Φ_{−1}(λ)) −
( Λ_{1,1}                                     Λ_{1,−1} ∫_0^∞ e^{−λx} F^+_{1,−1}(dx) )
( Λ_{−1,1} ∫_0^∞ e^{−λx} F^+_{−1,1}(dx)       Λ_{−1,−1}                             ).

In general, we can write

Φ_i(λ) = n_i(ζ = ∞) + ∫_0^∞ (1 − e^{−λx}) n_i( ε(ζ) ∈ dx, J(ζ) = i, ζ < ∞ ),

where ζ = inf{s > 0 : ε(s) > 0} for the canonical excursion ε of ξ from its maximum (so ε(ζ) is the overshoot with which the excursion terminates).

Tools: 1

Lemma. Let T_a = inf{t > 0 : ξ(t) > a}. Suppose that lim sup_{t→∞} ξ(t) = ∞ (i.e. the ladder height process (H^+, J^+) does not experience killing). Then for x > 0 we have, up to a constant,

lim_{a→∞} P_{0,i}( ξ(T_a) − a ∈ dx, J(T_a) = 1 ) = [ π_1 n_1( ε(ζ) > x, J(ζ) = 1, ζ < ∞ ) + π_{−1} Λ_{−1,1} (1 − F^+_{−1,1}(x)) ] dx.

(π_1, π_{−1}) is easily derived by solving πQ = 0. We can work with the LHS in the above lemma, e.g. via

lim_{a→∞} P_{0,1}( ξ(T_a) − a > u, J(T_a) = 1 ) = lim_{a→∞} P_{e^{−a}}( X_{τ^+_1} > e^u ; τ^+_1 < τ^−_{−1} ),

where τ^+_1 = inf{t > 0 : X_t > 1} and τ^−_{−1} = inf{t > 0 : X_t < −1}.

Tools: 2

The problem with applying Markov additive renewal theory in the case α ∈ (1, 2) is that (H^+, J^+) does experience killing. It turns out that det F(z) = 0 has a root at z = α − 1. Moreover, the Esscher transform of F,

F°(z) = Δ_{π°}^{−1} F(z + α − 1) Δ_{π°},

is again the matrix exponent of a MAP, where Δ_{π°} = diag(π°) and π° = (sin(παˆρ), sin(παρ)), up to normalisation, is the stationary distribution associated with F°(0). The corresponding ascending ladder exponent

κ°(λ) = Δ_{π°}^{−1} κ(λ − α + 1) Δ_{π°}

does not experience killing. However, in order to use Markov additive renewal theory to compute κ, we need to know something about the rssmp to which the MAP with exponent F° corresponds.
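A numerical sketch of this step (Python, assumed α = 1.5, ρ = 0.6): building F° from the stable F(z) of the earlier slide and checking that F°(0) has zero row sums, i.e. that the transformed MAP is conservative (no killing):

import numpy as np
from scipy.special import gamma

alpha, rho = 1.5, 0.6            # assumed parameters, alpha in (1, 2)
rho_hat = 1.0 - rho

def F(z):
    # Matrix exponent of the stable MAP, states ordered (1, -1), as before.
    num = gamma(alpha - z) * gamma(1.0 + z)
    return np.array([
        [-num / (gamma(alpha*rho_hat - z) * gamma(1 - alpha*rho_hat + z)),
          num / (gamma(alpha*rho_hat)     * gamma(1 - alpha*rho_hat))],
        [ num / (gamma(alpha*rho)         * gamma(1 - alpha*rho)),
         -num / (gamma(alpha*rho - z)     * gamma(1 - alpha*rho + z))],
    ])

pi_circ = np.array([np.sin(np.pi*alpha*rho_hat), np.sin(np.pi*alpha*rho)])
D = np.diag(pi_circ)

def F_circ(z):
    # Esscher transform of F at alpha - 1.
    return np.linalg.inv(D) @ F(z + alpha - 1.0) @ D

# Rows of F_circ(0) sum to zero: the Esscher-transformed MAP is not killed.
assert np.allclose(F_circ(0.0).sum(axis=1), 0.0)
print("F_circ(0) row sums:", F_circ(0.0).sum(axis=1))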

Riesz-Bogdan-Zak transform

Theorem (Riesz-Bogdan-Zak transform). Suppose that X is a stable process as outlined in the introduction. Define

η(t) = inf{ s > 0 : ∫_0^s |X_u|^{−2α} du > t },  t ≥ 0.

Then, for all x ∈ R\{0}, (−1/X_{η(t)})_{t≥0} under P_x is equal in law to (X, P°_{−1/x}), where

dP°_x/dP_x |_{F_t} = [ ( sin(παρ) + sin(παˆρ) − (sin(παρ) − sin(παˆρ)) sgn(X_t) ) |X_t|^{α−1} ] / [ ( sin(παρ) + sin(παˆρ) − (sin(παρ) − sin(παˆρ)) sgn(x) ) |x|^{α−1} ] 1_{(t < τ^{{0}})},

with τ^{{0}} = inf{t > 0 : X_t = 0} and F_t := σ(X_s : s ≤ t), t ≥ 0. (For x > 0 the h-function appearing here reduces to 2 sin(παˆρ)|x|^{α−1}, and for x < 0 to 2 sin(παρ)|x|^{α−1}.) Moreover, the process (X, P°_x), x ∈ R\{0}, is a self-similar Markov process whose underlying MAP, via the Lamperti-Kiu transform, has matrix exponent F°(z).

(X, P)  --- Riesz-Bogdan-Zak (Doob h-transform) --->  (X, P°)
  |                                                      |
  | Lamperti-Kiu                                         | Lamperti-Kiu
  v                                                      v
MAP: F(z)  ------------- Esscher ---------------->  MAP: F°(z)

Computing Φ°_1(λ) from κ°(λ)

If we write X̄_t = sup_{s≤t} X_s and X̲_t = inf_{s≤t} X_s, t ≥ 0, then we also have

π°_1 n°_1( ε(ζ) > u, J(ζ) = 1, ζ < ∞ )
  = d/du lim_{x↓0} P°_x( X_{τ^+_1} > e^u, −X̲_{τ^+_1} < X̄_{τ^+_1}, τ^+_1 < τ^−_{−1} )
  = lim_{x↓0} ∫_0^1 d/dy [ d/du P°_x( X_{τ^+_1} > e^u, −X̲_{τ^+_1} ≤ y, τ^+_1 < τ^−_{−z} ) ]_{y=z} dz
  = lim_{x↓0} ∫_0^1 d/dy [ d/du P°_x( X_{τ^+_y} > e^u, τ^+_y < τ^−_{−z} ) ]_{y=z} dz.

Computing Φ°_1(λ) from κ°(λ)

For 0 < x < y ≤ 1, 0 < z ≤ 1 and u > 0,

d/du lim_{x↓0} P°_x( X_{τ^+_y} > e^u, τ^+_y < τ^−_{−z} )
  = d/du lim_{x↓0} P_{−1/x}( X_{τ^{(−1/y,1/z)}} ∈ (−e^{−u}, 0) )
  = d/du lim_{x↓0} ˆP_{1/x}( X_{τ^{(−1/z,1/y)}} ∈ (0, e^{−u}) )
  = ˆp^±( (2yz e^{−u} − z + y)/(y + z) ) [2yz/(y + z)] e^{−u},

where τ^{(a,b)} = inf{t > 0 : X_t ∈ (a, b)} and

ˆp^±(v) = 2^{α−1} [ Γ(2 − α) / (Γ(1 − αˆρ)Γ(1 − αρ)) ] (1 + v)^{−αˆρ} (1 − v)^{−αρ}

was computed recently in a paper by Kyprianou, Pardo and Watson (2014). (The first equality is the Riesz-Bogdan-Zak transform; the second passes to the dual process −X.)
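As a consistency check on ˆp^± as reconstructed here, it should be a probability density on (−1, 1). A sketch (Python, assumed parameters), supplying the algebraic endpoint singularities (1 + v)^{−αˆρ}(1 − v)^{−αρ} to the quadrature as a weight:

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, rho = 1.5, 0.6                      # assumed sample parameters
rho_hat = 1.0 - rho

# Constant prefactor of p_hat; weight='alg' with wvar = (-alpha*rho_hat,
# -alpha*rho) multiplies the integrand by (v+1)^(-alpha*rho_hat)*(1-v)^(-alpha*rho).
const = (2.0 ** (alpha - 1.0) * gamma(2.0 - alpha)
         / (gamma(1.0 - alpha * rho_hat) * gamma(1.0 - alpha * rho)))

total, err = quad(lambda v: const, -1.0, 1.0,
                  weight="alg", wvar=(-alpha * rho_hat, -alpha * rho))
assert np.isclose(total, 1.0)
print("integral of p_hat over (-1, 1):", total)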

Computing Φ°_1(λ) from κ°(λ)

Putting the pieces together we have, up to a constant,

Φ°_1(λ) = ∫_0^∞ (1 − e^{−λx}) n°_1( ε(ζ) ∈ dx, J(ζ) = 1, ζ < ∞ )
        = λ ∫_0^∞ e^{−λx} n°_1( ε(ζ) > x, J(ζ) = 1, ζ < ∞ ) dx
        = φ_{αρ+1,αˆρ}(λ),

the second equality being integration by parts.

Thank you!