Stochastic Target Games with Controlled Loss

Stochastic Target Games with Controlled Loss
B. Bouchard (Ceremade, Univ. Paris-Dauphine, and Crest, Ensae-ParisTech)
USC, February 2013
Joint work with L. Moreau (ETH Zürich) and M. Nutz (Columbia)

Plan of the talk
- Problem formulation
- Examples
- Assumptions for this talk
- Geometric dynamic programming
- Application to the monotone case

Problem formulation and Motivations

Problem formulation

Provide a PDE characterization of the viability sets
$$\Lambda(t) := \big\{(z, m) : \exists\, u \in \mathcal{U} \text{ s.t. } \mathbb{E}\big[\ell\big(Z^{u[\vartheta],\vartheta}_{t,z}(T)\big)\big] \ge m \ \ \forall\, \vartheta \in \mathcal{V}\big\},$$
in which:
- $\mathcal{V}$ is a set of admissible adverse controls,
- $\mathcal{U}$ is a set of admissible strategies,
- $Z^{u[\vartheta],\vartheta}_{t,z}$ is an adapted $\mathbb{R}^d$-valued process such that $Z^{u[\vartheta],\vartheta}_{t,z}(t) = z$,
- $\ell$ is a given loss/utility function,
- $m$ is a threshold.

Application in finance

$Z^{u[\vartheta],\vartheta}_{t,x,y} = \big(X^{u[\vartheta],\vartheta}_{t,x}, Y^{u[\vartheta],\vartheta}_{t,x,y}\big)$, where
- $X^{u[\vartheta],\vartheta}_{t,x}$ models financial assets or factors, with dynamics depending on $\vartheta$,
- $Y^{u[\vartheta],\vartheta}_{t,x,y}$ models a wealth process,
- $\vartheta$ is the control of the market: parameter uncertainty (e.g. volatility), adverse players, etc.,
- $u[\vartheta]$ is the financial strategy given the past observations of $\vartheta$.

Flexible enough to embed constraints, transaction costs, market impact, etc.
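For concreteness, here is a minimal instance of this setup, a sketch rather than the talk's own example: one risky asset with uncertain volatility, where the adverse control picks the volatility level, $W$ is a Brownian motion, and the strategy is the number of shares held (the dynamics and the band $[\underline\sigma, \overline\sigma]$ below are my illustrative assumptions):

% Illustrative assumption: V = [\underline\sigma, \overline\sigma] is a volatility band,
% u[\vartheta] is the (predictable, observation-based) number of shares held.
\begin{align*}
dX(s) &= X(s)\,\vartheta_s\, dW_s, && \vartheta_s \in [\underline\sigma, \overline\sigma],\\
dY(s) &= u[\vartheta]_s\, dX(s), && Y(t) = y,
\end{align*}
% so Z = (X, Y) fits the controlled diffusion below, and requiring
% \mathbb{E}[\ell(Z^{u[\vartheta],\vartheta}_{t,z}(T))] \ge m for every \vartheta is a
% guaranteed expected-loss constraint under volatility uncertainty.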

Examples

$$\Lambda(t) := \big\{(z, m) : \exists\, u \in \mathcal{U} \text{ s.t. } \mathbb{E}\big[\ell\big(Z^{u[\vartheta],\vartheta}_{t,z}(T)\big)\big] \ge m \ \ \forall\, \vartheta \in \mathcal{V}\big\}.$$

Expected loss control, for $\ell(z) = -[y - g(x)]^-$: gives sense to problems that would be degenerate under $\mathbb{P}$-a.s. constraints; B. and Dang (guaranteed VWAP pricing).

Examples

Constraint in probability:
$$\Lambda(t) := \big\{(z, m) : \exists\, u \in \mathcal{U} \text{ s.t. } \mathbb{P}\big[Z^{u[\vartheta],\vartheta}_{t,z}(T) \in O\big] \ge m \ \ \forall\, \vartheta \in \mathcal{V}\big\},$$
i.e. $\ell(z) = \mathbf{1}_{z \in O}$, $m \in (0, 1)$. Quantile-hedging in finance for $O := \{(x, y) : y \ge g(x)\}$.
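In the singleton case $\mathcal{V} = \{\bar\vartheta\}$ (my reduction; with no adverse player the strategy $u$ is just a control), this recovers the classical quantile-hedging problem of Föllmer and Leukert:

% Assumption: \mathcal{V} = \{\bar\vartheta\}, Z = (X, Y) as on the finance slide.
$$(x, y, m) \in \Lambda(t) \iff \exists\, u \text{ s.t. } \mathbb{P}\big[Y^{u}_{t,x,y}(T) \ge g\big(X^{u}_{t,x}(T)\big)\big] \ge m,$$
% and the smallest such y is the quantile-hedging price at confidence level m.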

Examples

Matching a P&L distribution = multiple constraints in probability:
$$\big\{\big(z, (m_i)_{i \in I}\big) : \exists\, u \in \mathcal{U} \text{ s.t. } \mathbb{P}\big[\mathrm{dist}\big(Z^{u[\vartheta],\vartheta}_{t,z}(T), O\big) \le \gamma_i\big] \ge m_i \ \ \forall\, i \in I,\ \forall\, \vartheta \in \mathcal{V}\big\}$$
(see B. and Thanh Nam).

Examples

Almost sure constraint:
$$\Lambda(t) := \big\{z : \exists\, u \in \mathcal{U} \text{ s.t. } Z^{u[\vartheta],\vartheta}_{t,z}(T) \in O \ \ \mathbb{P}\text{-a.s. } \forall\, \vartheta \in \mathcal{V}\big\},$$
i.e. $\ell(z) = \mathbf{1}_{z \in O}$, $m = 1$. Super-hedging in finance for $O := \{(x, y) : y \ge g(x)\}$. Unfortunately not covered by our assumptions...

Setting for this talk (see the paper for an abstract version)

Brownian diffusion setting

State process: $Z^{u[\vartheta],\vartheta}$ solves ($\mu$ and $\sigma$ continuous, uniformly Lipschitz in space)
$$Z(s) = z + \int_t^s \mu\big(Z(r), u[\vartheta]_r, \vartheta_r\big)\, dr + \int_t^s \sigma\big(Z(r), u[\vartheta]_r, \vartheta_r\big)\, dW_r.$$

Controls and strategies:
- $\mathcal{V}$ is the set of predictable processes with values in $V \subset \mathbb{R}^d$.
- $\mathcal{U}$ is the set of non-anticipating maps $u : \vartheta \in \mathcal{V} \mapsto u[\vartheta]$, with $u[\vartheta]$ a predictable process with values in $U \subset \mathbb{R}^d$; non-anticipating means
$$\{\vartheta^1 =_{(0,s]} \vartheta^2\} \subset \{u[\vartheta^1] =_{(0,s]} u[\vartheta^2]\} \quad \forall\, \vartheta^1, \vartheta^2 \in \mathcal{V},\ s \le T.$$

The loss function $\ell$ is continuous and has polynomial growth.
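To illustrate the non-anticipativity condition (a toy example of mine, not from the talk): any strategy that reads only the running past of the adverse control qualifies, e.g.

% Toy example (assumption: \phi bounded measurable with values in U):
$$u[\vartheta]_s := \phi\Big(s, \int_0^s \vartheta_r\, dr\Big),$$
% since \vartheta^1 = \vartheta^2 on (0, s] implies u[\vartheta^1] = u[\vartheta^2] on (0, s].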

The game problem

Worst expected loss for a given strategy:
$$J(t, z, u) := \operatorname*{ess\,inf}_{\vartheta \in \mathcal{V}} \mathbb{E}\big[\ell\big(Z^{u[\vartheta],\vartheta}_{t,z}(T)\big) \,\big|\, \mathcal{F}_t\big].$$

The viability sets are given by
$$\Lambda(t) := \big\{(z, m) : \exists\, u \in \mathcal{U} \text{ s.t. } J(t, z, u) \ge m \ \ \mathbb{P}\text{-a.s.}\big\}.$$

Compare with the formulation of games in Buckdahn and Li (2008).

Geometric dynamic programming principle

How are the properties $(z, m) \in \Lambda(t)$ and $\big(Z^{u[\vartheta],\vartheta}_{t,z}(\theta),\ ?\big) \in \Lambda(\theta)$ related?

First direction: take $(z, m) \in \Lambda(t)$ and $u \in \mathcal{U}$ such that
$$\operatorname*{ess\,inf}_{\vartheta \in \mathcal{V}} \mathbb{E}\big[\ell\big(Z^{u[\vartheta],\vartheta}_{t,z}(T)\big) \,\big|\, \mathcal{F}_t\big] \ge m.$$

To take care of the evolution of the worst-case-scenario conditional expectation, we introduce
$$S^{\vartheta}_r := \operatorname*{ess\,inf}_{\bar\vartheta \in \mathcal{V}} \mathbb{E}\big[\ell\big(Z^{u[\vartheta \oplus_r \bar\vartheta],\, \vartheta \oplus_r \bar\vartheta}_{t,z}(T)\big) \,\big|\, \mathcal{F}_r\big],$$
where $\vartheta \oplus_r \bar\vartheta$ coincides with $\vartheta$ on $(0, r]$ and with $\bar\vartheta$ after $r$.

Then $S^{\vartheta}$ is a submartingale and $S^{\vartheta}_t \ge m$ for all $\vartheta \in \mathcal{V}$, and we can find a martingale $M^{\vartheta}$ such that $S^{\vartheta} \ge M^{\vartheta}$ and $M^{\vartheta}_t = S^{\vartheta}_t \ge m$.
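Why $S^{\vartheta}$ is a submartingale, in one heuristic line (my sketch; it suppresses the measurable-selection issues that the actual proof handles with a covering/pasting argument): for $r \le r'$, restricting the infimum defining $S^{\vartheta}_r$ to continuations that coincide with $\vartheta$ on $(r, r']$ can only increase its value, and such continuations are exactly the time-$r'$ continuations, so

% Heuristic derivation of the submartingale property (assumption: the exchange of
% ess inf and conditional expectation in the last step is where the real work lies).
$$S^{\vartheta}_r \;\le\; \operatorname*{ess\,inf}_{\tilde\vartheta \in \mathcal{V}} \mathbb{E}\Big[\,\mathbb{E}\big[\ell\big(Z^{u[\vartheta \oplus_{r'} \tilde\vartheta],\, \vartheta \oplus_{r'} \tilde\vartheta}_{t,z}(T)\big) \,\big|\, \mathcal{F}_{r'}\big] \,\Big|\, \mathcal{F}_r\Big] \;\approx\; \mathbb{E}\big[S^{\vartheta}_{r'} \,\big|\, \mathcal{F}_r\big].$$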

We have, for all stopping times $\theta$ (which may depend on $u$ and $\vartheta$),
$$\operatorname*{ess\,inf}_{\bar\vartheta \in \mathcal{V}} \mathbb{E}\big[\ell\big(Z^{u[\vartheta \oplus_\theta \bar\vartheta],\, \vartheta \oplus_\theta \bar\vartheta}_{t,z}(T)\big) \,\big|\, \mathcal{F}_\theta\big] = S^{\vartheta}_\theta \ge M^{\vartheta}_\theta.$$

If the left-hand side is usc in space, uniformly in the strategies, and $\theta$ takes finitely many values, we can find a covering formed of balls $B_i \ni (t_i, z_i)$, $i \ge 1$, such that
$$J(t_i, z_i; u) \ge M^{\vartheta}_\theta - \varepsilon \ \text{ on } A_i := \big\{\big(\theta, Z^{u[\vartheta],\vartheta}_{t,z}(\theta)\big) \in B_i\big\}.$$

Hence
$$K(t_i, z_i) \ge M^{\vartheta}_\theta - \varepsilon \ \text{ on } A_i,$$
where
$$K(t_i, z_i) := \operatorname*{ess\,sup}_{u \in \mathcal{U}}\ \operatorname*{ess\,inf}_{\vartheta \in \mathcal{V}} \mathbb{E}\big[\ell\big(Z^{u[\vartheta],\vartheta}_{t_i,z_i}(T)\big) \,\big|\, \mathcal{F}_{t_i}\big]$$
is deterministic.

If $K$ is lsc,
$$K\big(\theta(\omega), Z^{u[\vartheta],\vartheta}_{t,z}(\theta)(\omega)\big) \ge K(t_i, z_i) - \varepsilon \ge M^{\vartheta}_\theta(\omega) - 2\varepsilon \ \text{ on } A_i,$$
so that
$$\big(Z^{u[\vartheta],\vartheta}_{t,z}(\theta(\omega)),\ M^{\vartheta}_\theta(\omega) - 3\varepsilon\big) \in \Lambda(\theta(\omega)).$$

To get rid of $\varepsilon$, and for non-regular cases (in terms of $K$ and $J$), we work by approximation: one needs to start from $(z, m + \iota) \in \Lambda(t)$ and obtain
$$\big(Z^{u[\vartheta],\vartheta}_{t,z}(\theta),\ M^{\vartheta}_\theta\big) \in \bar\Lambda(\theta) \quad \mathbb{P}\text{-a.s. } \forall\, \vartheta \in \mathcal{V},$$
where
$$\bar\Lambda(t) := \big\{(z, m) : \text{there exist } (t_n, z_n, m_n) \to (t, z, m) \text{ s.t. } (z_n, m_n) \in \Lambda(t_n) \text{ and } t_n \ge t \text{ for all } n \ge 1\big\}.$$

Remark: $M^{\vartheta} = M^{\alpha^\vartheta}_{t,m} := m + \int_t^{\cdot} \alpha^{\vartheta}_s\, dW_s$ with $\alpha^{\vartheta} \in \mathcal{A}$, the set of predictable processes for which the above is a martingale.

Remark: The bounded variation part of $S$ is useless: the optimal adverse control should turn $S$ into a martingale.

Reverse direction: assume that $\{\theta^\vartheta, \vartheta \in \mathcal{V}\}$ takes finitely many values and that
$$\big(Z^{u[\vartheta],\vartheta}_{t,z}(\theta^\vartheta),\ M^{\alpha^\vartheta}_{t,m}(\theta^\vartheta)\big) \in \Lambda(\theta^\vartheta) \quad \mathbb{P}\text{-a.s. } \forall\, \vartheta \in \mathcal{V}.$$

Then
$$K\big(\theta^\vartheta, Z^{u[\vartheta],\vartheta}_{t,z}(\theta^\vartheta)\big) \ge M^{\alpha^\vartheta}_{t,m}(\theta^\vartheta).$$

Play again with balls + concatenation of strategies (assuming smoothness) to obtain $\bar u \in \mathcal{U}$ such that
$$\mathbb{E}\big[\ell\big(Z^{(u \oplus_{\theta^\vartheta} \bar u)[\vartheta],\vartheta}_{t,z}(T)\big) \,\big|\, \mathcal{F}_{\theta^\vartheta}\big] \ge M^{\alpha^\vartheta}_{t,m}(\theta^\vartheta) - \varepsilon \quad \mathbb{P}\text{-a.s. } \forall\, \vartheta \in \mathcal{V}.$$

By taking expectations (using that $M^{\alpha^\vartheta}_{t,m}$ is a martingale starting from $m$ at time $t$),
$$\mathbb{E}\big[\ell\big(Z^{(u \oplus_{\theta^\vartheta} \bar u)[\vartheta],\vartheta}_{t,z}(T)\big) \,\big|\, \mathcal{F}_t\big] \ge m - \varepsilon \quad \mathbb{P}\text{-a.s. } \forall\, \vartheta \in \mathcal{V},$$
so that $(z, m - \varepsilon) \in \Lambda(t)$.

To cover the general case by approximations, we need to start with
$$\big(Z^{u[\vartheta],\vartheta}_{t,z}(\theta^\vartheta),\ M^{\alpha^\vartheta}_{t,m}(\theta^\vartheta)\big) \in \Lambda^\iota(\theta^\vartheta) \quad \mathbb{P}\text{-a.s. } \forall\, \vartheta \in \mathcal{V},$$
where
$$\Lambda^\iota(t) := \big\{(z, p) : (t', z', p') \in B_\iota(t, z, p) \text{ implies } (z', p') \in \Lambda(t')\big\}.$$

To concatenate while keeping the non-anticipating feature:
- Martingale strategies: $\alpha^\vartheta$ is replaced by $a : \vartheta \in \mathcal{V} \mapsto a[\vartheta] \in \mathcal{A}$ in a non-anticipating way (corresponding set $\mathfrak{A}$).
- Non-anticipating stopping times: $\theta[\vartheta]$ (typically the first exit time of $(Z^{u[\vartheta],\vartheta}, M^{a[\vartheta]})$ from a ball).

The geometric dynamic programming principle

(GDP1): If $(z, m + \iota) \in \Lambda(t)$ for some $\iota > 0$, then there exist $u \in \mathcal{U}$ and $\{\alpha^\vartheta, \vartheta \in \mathcal{V}\} \subset \mathcal{A}$ such that
$$\big(Z^{u[\vartheta],\vartheta}_{t,z}(\theta),\ M^{\alpha^\vartheta}_{t,m}(\theta)\big) \in \bar\Lambda(\theta) \quad \mathbb{P}\text{-a.s. } \forall\, \vartheta \in \mathcal{V}.$$

(GDP2): If $(u, a) \in \mathcal{U} \times \mathfrak{A}$ and $\iota > 0$ are such that
$$\big(Z^{u[\vartheta],\vartheta}_{t,z}(\theta[\vartheta]),\ M^{a[\vartheta]}_{t,m}(\theta[\vartheta])\big) \in \Lambda^\iota(\theta[\vartheta]) \quad \mathbb{P}\text{-a.s. } \forall\, \vartheta \in \mathcal{V}$$
for some family $(\theta[\vartheta],\ \vartheta \in \mathcal{V})$ of non-anticipating stopping times, then $(z, m - \varepsilon) \in \Lambda(t)$ for all $\varepsilon > 0$.

Rem: Relaxed version of Soner and Touzi (2002) and B., Elie and Touzi (2009).

Application to the monotone case

Monotone case: $Z^{u[\vartheta],\vartheta}_{t,x,y} = \big(X^{u[\vartheta],\vartheta}_{t,x}, Y^{u[\vartheta],\vartheta}_{t,x,y}\big)$ with values in $\mathbb{R}^d \times \mathbb{R}$, with $X^{u[\vartheta],\vartheta}_{t,x}$ independent of $y$ and $Y^{u[\vartheta],\vartheta}_{t,x,y}$ non-decreasing in $y$.

The value function is
$$\varpi(t, x, m) := \inf\{y : (x, y, m) \in \Lambda(t)\}.$$

Remark:
- If $\varphi$ is lsc and $\varpi \ge \varphi$, then $\Lambda(t) \subset \{(x, y, m) : y \ge \varphi(t, x, m)\}$.
- If $\varphi$ is usc and $\varphi \ge \varpi$, then $\Lambda^\iota(t) \supset \{(x, y, m) : y \ge \varphi(t, x, m) + \eta_\iota\}$ for some $\eta_\iota > 0$.
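What monotonicity buys, spelled out (my one-line sketch; it also uses that the criterion is non-decreasing in the wealth variable, and it ignores whether the infimum defining $\varpi$ is attained):

% Sketch: Y^{u[\vartheta],\vartheta}_{t,x,y} non-decreasing in y, together with a criterion
% that is non-decreasing in the wealth variable, gives
$$(x, y, m) \in \Lambda(t) \ \text{and}\ y' \ge y \ \Longrightarrow\ (x, y', m) \in \Lambda(t),$$
% so each y-section of \Lambda(t) is a half-line, and \Lambda(t) agrees with the epigraph
% \{(x, y, m) : y \ge \varpi(t, x, m)\} up to its boundary.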

Thm:

(GDP1): Let $\varphi \in C^0$ be such that $\arg\min(\varpi - \varphi) = (t, x, m)$. Assume that $y > \varpi(t, x, m + \iota)$ for some $\iota > 0$. Then, there exists $(u, a) \in \mathcal{U} \times \mathfrak{A}$ such that
$$Y^{u[\vartheta],\vartheta}_{t,x,y}(\theta^\vartheta) \ge \varphi\big(\theta^\vartheta, X^{u[\vartheta],\vartheta}_{t,x}(\theta^\vartheta), M^{a[\vartheta]}_{t,m}(\theta^\vartheta)\big) \quad \mathbb{P}\text{-a.s. } \forall\, \vartheta \in \mathcal{V}.$$

(GDP2): Let $\varphi \in C^0$ be such that $\arg\max(\varpi - \varphi) = (t, x, m)$. Assume that $(u, a) \in \mathcal{U} \times \mathfrak{A}$ and $\eta > 0$ are such that
$$Y^{u[\vartheta],\vartheta}_{t,x,y}(\theta[\vartheta]) \ge \varphi\big(\theta[\vartheta], X^{u[\vartheta],\vartheta}_{t,x}(\theta[\vartheta]), M^{a[\vartheta]}_{t,m}(\theta[\vartheta])\big) + \eta \quad \mathbb{P}\text{-a.s. } \forall\, \vartheta \in \mathcal{V}.$$
Then, $y \ge \varpi(t, x, m - \varepsilon)$ for all $\varepsilon > 0$.

Remark: In the spirit of the weak dynamic programming principle (B. and Touzi; B. and Nutz).

PDE characterization - waving hands version

Assuming smoothness, existence of optimal strategies...

$y = \varpi(t, x, m)$ implies
$$Y^{u[\vartheta],\vartheta}(t+) \ge \varpi\big(t+, X^{u[\vartheta],\vartheta}(t+), M^{a[\vartheta]}(t+)\big) \quad \text{for all } \vartheta.$$

This implies
$$dY^{u[\vartheta],\vartheta}(t) \ge d\varpi\big(t, X^{u[\vartheta],\vartheta}(t), M^{a[\vartheta]}(t)\big) \quad \text{for all } \vartheta.$$

Hence, for all $\vartheta$, with $y = \varpi(t, x, m)$:
$$\mu_Y(x, y, u[\vartheta]_t, \vartheta_t) \ge \mathcal{L}^{u[\vartheta]_t, \vartheta_t, a[\vartheta]_t}_{X,M} \varpi(t, x, m),$$
$$\sigma_Y(x, y, u[\vartheta]_t, \vartheta_t) = \sigma_X(x, u[\vartheta]_t, \vartheta_t) D_x \varpi(t, x, m) + a[\vartheta]_t D_m \varpi(t, x, m).$$
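Where the drift inequality and the diffusion constraint come from (a sketch of the Itô computation; the explicit form of $\mathcal{L}^{u,v,a}_{X,M}$ below is my spelling-out, assuming it denotes the Dynkin operator of the pair $(X, M)$, with $W$, $a$, $D_x\varpi$ $d$-dimensional and all vectors written as columns):

% Sketch, up to transpose conventions (assumptions as in the lead-in):
\begin{align*}
d\varpi(t, X(t), M(t))
  &= \mathcal{L}^{u,v,a}_{X,M}\varpi\, dt
   + \big(\sigma_X(\cdot,u,v)^{\top} D_x\varpi + a\, D_m\varpi\big)^{\top} dW_t,\\
\mathcal{L}^{u,v,a}_{X,M}\varpi
  &:= \partial_t\varpi + \mu_X(\cdot,u,v)^{\top} D_x\varpi
   + \tfrac12 \operatorname{Tr}\big[\sigma_X\sigma_X^{\top} D_x^2\varpi\big]
   + \tfrac12 |a|^2 D_m^2\varpi
   + a^{\top}\sigma_X^{\top} D_{xm}^2\varpi.
\end{align*}
% Matching the dW-terms of dY and d\varpi forces the constraint defining N_v\varpi below;
% the dt-terms then yield \mu_Y \ge \mathcal{L}^{u,v,a}_{X,M}\varpi.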

PDE characterization - waving hands version

Supersolution property:
$$\inf_{v \in V}\ \sup_{(u,a) \in N_v \varpi} \big(\mu_Y(\cdot, \varpi, u, v) - \mathcal{L}^{u,v,a}_{X,M}\varpi\big) \ge 0,$$
where
$$N_v \varpi := \big\{(u, a) \in U \times \mathbb{R}^d : \sigma_Y(\cdot, \varpi, u, v) = \sigma_X(\cdot, u, v) D_x\varpi + a\, D_m\varpi\big\}.$$

Subsolution property:
$$\sup_{(u[\cdot], a[\cdot]) \in N[\cdot]\varpi}\ \inf_{v \in V} \big(\mu_Y(\cdot, \varpi, u[v], v) - \mathcal{L}^{u[v],v,a[v]}_{X,M}\varpi\big) \le 0,$$
where
$$N[\cdot]\varpi := \big\{\text{loc. Lipschitz } (u[\cdot], a[\cdot]) \text{ s.t. } (u[v], a[v]) \in N_v\varpi(\cdot)\ \forall\, v \in V\big\}.$$
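As a consistency check (my observation, under the assumption that $V = \{\bar v\}$ is a singleton, i.e. no adverse player): the infimum over $v$ disappears, the super- and subsolution properties collapse, and one recovers the PDE of the stochastic target problem with controlled loss of B., Elie and Touzi (2009):

% Assumption: V = \{\bar v\}; equation understood in the viscosity sense,
% with the usual semicontinuous envelopes.
$$\sup_{(u,a) \in N_{\bar v}\varpi} \big(\mu_Y(\cdot, \varpi, u, \bar v) - \mathcal{L}^{u,\bar v,a}_{X,M}\varpi\big) = 0.$$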

References

- R. Buckdahn and J. Li (2008). Stochastic differential games and viscosity solutions of Hamilton-Jacobi-Bellman-Isaacs equations. SIAM J. Control Optim., 47(1):444-475.
- H. M. Soner and N. Touzi (2002). Dynamic programming for stochastic target problems and geometric flows. J. Eur. Math. Soc., 4:201-236.
- B., R. Elie and N. Touzi (2009). Stochastic target problems with controlled loss. SIAM J. Control Optim., 48(5):3123-3150.
- B. and M. Nutz (2012). Weak dynamic programming for generalized state constraints. SIAM J. Control Optim., to appear.
- L. Moreau (2011). Stochastic target problems with controlled loss in a jump diffusion model. SIAM J. Control Optim., 49:2577-2607.
- B. and N. M. Dang (2010). Generalized stochastic target problems for pricing and partial hedging under loss constraints: application in optimal book liquidation. Finance and Stochastics, online.
- B. and V. Thanh Nam (2011). A stochastic target approach for P&L matching problems. Math. Oper. Res., 37(3):526-558.