
Tables for Exam C

The reading material for Exam C includes a variety of textbooks. Each text has a set of probability distributions that are used in its readings. For those distributions used in more than one text, the choices of parameterization may not be the same in all of the books. This may be of educational value while you study, but could add a layer of uncertainty in the examination. For this latter reason, we have adopted one set of parameterizations to be used in examinations. This set will be based on Appendices A & B of Loss Models: From Data to Decisions by Klugman, Panjer and Willmot. A slightly revised version of these appendices is included in this note. A copy of this note will also be distributed to each candidate at the examination.

Each text also has its own system of dedicated notation and terminology. Sometimes these may conflict. If alternative meanings could apply in an examination question, the symbols will be defined.

For Exam C, in addition to the abridged table from Loss Models, sets of values from the standard normal and chi-square distributions will be available for use in examinations. These are also included in this note.

When using the normal distribution, choose the nearest z-value to find the probability, or, if the probability is given, choose the nearest probability to find the z-value. No interpolation should be used. Example: if the given z-value is 0.759 and you need to find Pr(Z < 0.759) from the normal distribution table, then choose the probability for z-value = 0.76: Pr(Z < 0.76) = 0.7764.

When using the normal approximation to a discrete distribution, use the continuity correction.

The density function for the standard normal distribution is
$$\varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}.$$
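
As an illustration of the continuity correction mentioned above, here is a minimal sketch (hypothetical helper names, not part of the official tables) that approximates a Poisson tail probability with the normal distribution, adding 0.5 to the discrete cutoff before standardizing.

```python
# Sketch: normal approximation to a discrete distribution with the
# continuity correction. Example: N ~ Poisson(lambda), approximate Pr(N <= n).
from math import erf, sqrt

def std_normal_cdf(z):
    # Phi(z) for the standard normal distribution
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def poisson_prob_le_normal_approx(n, lam):
    # With the continuity correction, Pr(N <= n) ~ Phi((n + 0.5 - lam) / sqrt(lam)),
    # since Poisson(lam) has mean lam and variance lam.
    return std_normal_cdf((n + 0.5 - lam) / sqrt(lam))

if __name__ == "__main__":
    # e.g. lambda = 25, approximate Pr(N <= 30)
    print(round(poisson_prob_le_normal_approx(30, 25.0), 4))
```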

Excerpts from the Appendices to Loss Models: From Data to Decisions, 3rd edition August 27, 2009

Appendix A
An Inventory of Continuous Distributions

A.1 Introduction

The incomplete gamma function is given by
$$\Gamma(\alpha; x) = \frac{1}{\Gamma(\alpha)} \int_0^x t^{\alpha-1} e^{-t}\,dt, \qquad \alpha > 0,\ x > 0,$$
with
$$\Gamma(\alpha) = \int_0^\infty t^{\alpha-1} e^{-t}\,dt, \qquad \alpha > 0.$$

Also, define
$$G(\alpha; x) = \int_x^\infty t^{\alpha-1} e^{-t}\,dt, \qquad x > 0.$$

At times we will need this integral for nonpositive values of $\alpha$. Integration by parts produces the relationship
$$G(\alpha; x) = -\frac{x^{\alpha} e^{-x}}{\alpha} + \frac{1}{\alpha}\,G(\alpha+1; x).$$
This can be repeated until the first argument of $G$ is $\alpha + k$, a positive number. Then it can be evaluated from
$$G(\alpha+k; x) = \Gamma(\alpha+k)\,[1 - \Gamma(\alpha+k; x)].$$

The incomplete beta function is given by
$$\beta(a, b; x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} \int_0^x t^{a-1}(1-t)^{b-1}\,dt, \qquad a > 0,\ b > 0,\ 0 < x < 1.$$
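
A small sketch of the integration-by-parts recursion above (helper name `G` assumed, using scipy for the positive-argument case; it assumes $\alpha$ is not zero or a negative integer):

```python
# Sketch: evaluate G(alpha; x) = integral_x^inf t^(alpha-1) e^(-t) dt,
# including nonpositive alpha, via the recursion from A.1.
from math import exp
from scipy.special import gamma, gammaincc  # gammaincc: regularized upper incomplete gamma

def G(alpha, x):
    if alpha > 0:
        # G(alpha; x) = Gamma(alpha) * [1 - Gamma(alpha; x)]
        return gamma(alpha) * gammaincc(alpha, x)
    # G(alpha; x) = -x^alpha e^(-x) / alpha + G(alpha+1; x) / alpha
    # (assumes alpha is not zero or a negative integer)
    return -(x ** alpha) * exp(-x) / alpha + G(alpha + 1.0, x) / alpha

if __name__ == "__main__":
    print(G(-0.5, 2.0))  # example with a nonpositive first argument
```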

A.2 Transformed beta family

A.2.2 Three-parameter distributions

A.2.2.1 Generalized Pareto (beta of the second kind), parameters α, θ, τ

$$f(x) = \frac{\Gamma(\alpha+\tau)}{\Gamma(\alpha)\Gamma(\tau)}\,\frac{\theta^{\alpha}x^{\tau-1}}{(x+\theta)^{\alpha+\tau}}, \qquad F(x) = \beta(\tau,\alpha;u), \quad u = \frac{x}{x+\theta}$$
$$E[X^k] = \frac{\theta^k\,\Gamma(\tau+k)\Gamma(\alpha-k)}{\Gamma(\alpha)\Gamma(\tau)}, \quad -\tau < k < \alpha; \qquad E[X^k] = \frac{\theta^k\,\tau(\tau+1)\cdots(\tau+k-1)}{(\alpha-1)\cdots(\alpha-k)} \ \text{if } k \text{ is a positive integer}$$
$$E[(X\wedge x)^k] = \frac{\theta^k\,\Gamma(\tau+k)\Gamma(\alpha-k)}{\Gamma(\alpha)\Gamma(\tau)}\,\beta(\tau+k,\alpha-k;u) + x^k[1-F(x)], \qquad k > -\tau$$
$$\text{mode} = \theta\,\frac{\tau-1}{\alpha+1}, \ \tau > 1, \text{ else } 0$$

A.2.2.2 Burr (Burr Type XII, Singh-Maddala), parameters α, θ, γ

$$f(x) = \frac{\alpha\gamma\,(x/\theta)^{\gamma}}{x\,[1+(x/\theta)^{\gamma}]^{\alpha+1}}, \qquad F(x) = 1 - u^{\alpha}, \quad u = \frac{1}{1+(x/\theta)^{\gamma}}$$
$$E[X^k] = \frac{\theta^k\,\Gamma(1+k/\gamma)\Gamma(\alpha-k/\gamma)}{\Gamma(\alpha)}, \qquad -\gamma < k < \alpha\gamma$$
$$\mathrm{VaR}_p(X) = \theta\,[(1-p)^{-1/\alpha} - 1]^{1/\gamma}$$
$$E[(X\wedge x)^k] = \frac{\theta^k\,\Gamma(1+k/\gamma)\Gamma(\alpha-k/\gamma)}{\Gamma(\alpha)}\,\beta(1+k/\gamma,\alpha-k/\gamma;1-u) + x^k u^{\alpha}, \qquad k > -\gamma$$
$$\text{mode} = \theta\left(\frac{\gamma-1}{\alpha\gamma+1}\right)^{1/\gamma}, \ \gamma > 1, \text{ else } 0$$

A.2.2.3 Inverse Burr (Dagum), parameters τ, θ, γ

$$f(x) = \frac{\tau\gamma\,(x/\theta)^{\gamma\tau}}{x\,[1+(x/\theta)^{\gamma}]^{\tau+1}}, \qquad F(x) = u^{\tau}, \quad u = \frac{(x/\theta)^{\gamma}}{1+(x/\theta)^{\gamma}}$$
$$E[X^k] = \frac{\theta^k\,\Gamma(\tau+k/\gamma)\Gamma(1-k/\gamma)}{\Gamma(\tau)}, \qquad -\tau\gamma < k < \gamma$$
$$\mathrm{VaR}_p(X) = \theta\,(p^{-1/\tau} - 1)^{-1/\gamma}$$
$$E[(X\wedge x)^k] = \frac{\theta^k\,\Gamma(\tau+k/\gamma)\Gamma(1-k/\gamma)}{\Gamma(\tau)}\,\beta(\tau+k/\gamma,1-k/\gamma;u) + x^k[1-u^{\tau}], \qquad k > -\tau\gamma$$
$$\text{mode} = \theta\left(\frac{\tau\gamma-1}{\gamma+1}\right)^{1/\gamma}, \ \tau\gamma > 1, \text{ else } 0$$
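
The Burr VaR formula above inverts the closed-form cdf directly. A minimal sketch (hypothetical helper names) with a round-trip check against the cdf:

```python
# Sketch: Value-at-Risk for the Burr distribution, VaR_p(X) = theta * [(1-p)^(-1/alpha) - 1]^(1/gamma),
# and its cdf F(x) = 1 - u^alpha with u = 1 / (1 + (x/theta)^gamma).
def burr_var(p, alpha, theta, gamma):
    return theta * ((1.0 - p) ** (-1.0 / alpha) - 1.0) ** (1.0 / gamma)

def burr_cdf(x, alpha, theta, gamma):
    u = 1.0 / (1.0 + (x / theta) ** gamma)
    return 1.0 - u ** alpha

if __name__ == "__main__":
    x = burr_var(0.99, alpha=2.0, theta=1000.0, gamma=1.5)
    # Round-trip check: F(VaR_p) should recover p = 0.99.
    print(x, burr_cdf(x, 2.0, 1000.0, 1.5))
```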

A.2.3 Two-parameter distributions

A.2.3.1 Pareto (Pareto Type II, Lomax), parameters α, θ

$$f(x) = \frac{\alpha\theta^{\alpha}}{(x+\theta)^{\alpha+1}}, \qquad F(x) = 1 - \left(\frac{\theta}{x+\theta}\right)^{\alpha}$$
$$E[X^k] = \frac{\theta^k\,\Gamma(k+1)\Gamma(\alpha-k)}{\Gamma(\alpha)}, \quad -1 < k < \alpha; \qquad E[X^k] = \frac{\theta^k\,k!}{(\alpha-1)\cdots(\alpha-k)} \ \text{if } k \text{ is a positive integer}$$
$$\mathrm{VaR}_p(X) = \theta\,[(1-p)^{-1/\alpha} - 1], \qquad \mathrm{TVaR}_p(X) = \mathrm{VaR}_p(X) + \frac{\theta(1-p)^{-1/\alpha}}{\alpha-1}, \quad \alpha > 1$$
$$E[X\wedge x] = \frac{\theta}{\alpha-1}\left[1 - \left(\frac{\theta}{x+\theta}\right)^{\alpha-1}\right], \ \alpha \neq 1; \qquad E[X\wedge x] = -\theta\ln\left(\frac{\theta}{x+\theta}\right), \ \alpha = 1$$
$$E[(X\wedge x)^k] = \frac{\theta^k\,\Gamma(k+1)\Gamma(\alpha-k)}{\Gamma(\alpha)}\,\beta[k+1,\alpha-k;x/(x+\theta)] + x^k\left(\frac{\theta}{x+\theta}\right)^{\alpha}, \qquad \text{all } k$$
$$\text{mode} = 0$$

A.2.3.2 Inverse Pareto, parameters τ, θ

$$f(x) = \frac{\tau\theta x^{\tau-1}}{(x+\theta)^{\tau+1}}, \qquad F(x) = \left(\frac{x}{x+\theta}\right)^{\tau}$$
$$E[X^k] = \frac{\theta^k\,\Gamma(\tau+k)\Gamma(1-k)}{\Gamma(\tau)}, \quad -\tau < k < 1; \qquad E[X^k] = \frac{\theta^k\,(-k)!}{(\tau-1)\cdots(\tau+k)} \ \text{if } k \text{ is a negative integer}$$
$$\mathrm{VaR}_p(X) = \theta\,[p^{-1/\tau} - 1]^{-1}$$
$$E[(X\wedge x)^k] = \theta^k\tau\int_0^{x/(x+\theta)} y^{\tau+k-1}(1-y)^{-k}\,dy + x^k\left[1 - \left(\frac{x}{x+\theta}\right)^{\tau}\right], \qquad k > -\tau$$
$$\text{mode} = \theta\,\frac{\tau-1}{2}, \ \tau > 1, \text{ else } 0$$

A.2.3.3 Loglogistic (Fisk), parameters γ, θ

$$f(x) = \frac{\gamma\,(x/\theta)^{\gamma}}{x\,[1+(x/\theta)^{\gamma}]^{2}}, \qquad F(x) = u, \quad u = \frac{(x/\theta)^{\gamma}}{1+(x/\theta)^{\gamma}}$$
$$E[X^k] = \theta^k\,\Gamma(1+k/\gamma)\Gamma(1-k/\gamma), \qquad -\gamma < k < \gamma$$
$$\mathrm{VaR}_p(X) = \theta\,(p^{-1} - 1)^{-1/\gamma}$$
$$E[(X\wedge x)^k] = \theta^k\,\Gamma(1+k/\gamma)\Gamma(1-k/\gamma)\,\beta(1+k/\gamma,1-k/\gamma;u) + x^k(1-u), \qquad k > -\gamma$$
$$\text{mode} = \theta\left(\frac{\gamma-1}{\gamma+1}\right)^{1/\gamma}, \ \gamma > 1, \text{ else } 0$$
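
A short sketch (assumed helper names) of the Pareto (Lomax) VaR and TVaR closed forms above:

```python
# Sketch: VaR and TVaR for the two-parameter Pareto (Lomax) distribution.
def pareto_var(p, alpha, theta):
    # VaR_p(X) = theta * [(1-p)^(-1/alpha) - 1]
    return theta * ((1.0 - p) ** (-1.0 / alpha) - 1.0)

def pareto_tvar(p, alpha, theta):
    # TVaR_p(X) = VaR_p(X) + theta * (1-p)^(-1/alpha) / (alpha - 1), valid for alpha > 1
    if alpha <= 1.0:
        raise ValueError("TVaR requires alpha > 1")
    return pareto_var(p, alpha, theta) + theta * (1.0 - p) ** (-1.0 / alpha) / (alpha - 1.0)

if __name__ == "__main__":
    print(pareto_var(0.95, alpha=3.0, theta=2000.0))
    print(pareto_tvar(0.95, alpha=3.0, theta=2000.0))
```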

A.2.3.4 Paralogistic, parameters α, θ

This is a Burr distribution with γ = α.

$$f(x) = \frac{\alpha^{2}(x/\theta)^{\alpha}}{x\,[1+(x/\theta)^{\alpha}]^{\alpha+1}}, \qquad F(x) = 1 - u^{\alpha}, \quad u = \frac{1}{1+(x/\theta)^{\alpha}}$$
$$E[X^k] = \frac{\theta^k\,\Gamma(1+k/\alpha)\Gamma(\alpha-k/\alpha)}{\Gamma(\alpha)}, \qquad -\alpha < k < \alpha^{2}$$
$$\mathrm{VaR}_p(X) = \theta\,[(1-p)^{-1/\alpha} - 1]^{1/\alpha}$$
$$E[(X\wedge x)^k] = \frac{\theta^k\,\Gamma(1+k/\alpha)\Gamma(\alpha-k/\alpha)}{\Gamma(\alpha)}\,\beta(1+k/\alpha,\alpha-k/\alpha;1-u) + x^k u^{\alpha}, \qquad k > -\alpha$$
$$\text{mode} = \theta\left(\frac{\alpha-1}{\alpha^{2}+1}\right)^{1/\alpha}, \ \alpha > 1, \text{ else } 0$$

A.2.3.5 Inverse paralogistic, parameters τ, θ

This is an inverse Burr distribution with γ = τ.

$$f(x) = \frac{\tau^{2}(x/\theta)^{\tau^{2}}}{x\,[1+(x/\theta)^{\tau}]^{\tau+1}}, \qquad F(x) = u^{\tau}, \quad u = \frac{(x/\theta)^{\tau}}{1+(x/\theta)^{\tau}}$$
$$E[X^k] = \frac{\theta^k\,\Gamma(\tau+k/\tau)\Gamma(1-k/\tau)}{\Gamma(\tau)}, \qquad -\tau^{2} < k < \tau$$
$$\mathrm{VaR}_p(X) = \theta\,(p^{-1/\tau} - 1)^{-1/\tau}$$
$$E[(X\wedge x)^k] = \frac{\theta^k\,\Gamma(\tau+k/\tau)\Gamma(1-k/\tau)}{\Gamma(\tau)}\,\beta(\tau+k/\tau,1-k/\tau;u) + x^k[1-u^{\tau}], \qquad k > -\tau^{2}$$
$$\text{mode} = \theta\,(\tau-1)^{1/\tau}, \ \tau > 1, \text{ else } 0$$

A.3 Transformed gamma family

A.3.2 Two-parameter distributions

A.3.2.1 Gamma, parameters α, θ

$$f(x) = \frac{(x/\theta)^{\alpha} e^{-x/\theta}}{x\,\Gamma(\alpha)}, \qquad F(x) = \Gamma(\alpha; x/\theta)$$
$$M(t) = (1-\theta t)^{-\alpha}, \qquad t < 1/\theta$$
$$E[X^k] = \frac{\theta^k\,\Gamma(\alpha+k)}{\Gamma(\alpha)}, \quad k > -\alpha; \qquad E[X^k] = \theta^k\,\alpha(\alpha+1)\cdots(\alpha+k-1) \ \text{if } k \text{ is a positive integer}$$
$$E[(X\wedge x)^k] = \frac{\theta^k\,\Gamma(\alpha+k)}{\Gamma(\alpha)}\,\Gamma(\alpha+k; x/\theta) + x^k[1-\Gamma(\alpha; x/\theta)], \qquad k > -\alpha$$
$$\phantom{E[(X\wedge x)^k]} = \alpha(\alpha+1)\cdots(\alpha+k-1)\,\theta^k\,\Gamma(\alpha+k; x/\theta) + x^k[1-\Gamma(\alpha; x/\theta)], \qquad k \text{ a positive integer}$$
$$\text{mode} = \theta(\alpha-1), \ \alpha > 1, \text{ else } 0$$
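
A minimal sketch (assumed helper, using scipy) of the gamma limited expected value formula above with k = 1, where Γ(α; y) is the regularized incomplete gamma (scipy.special.gammainc):

```python
# Sketch: E[X ^ x] for Gamma(alpha, theta),
# E[X ^ x] = alpha*theta*Gamma(alpha+1; x/theta) + x*[1 - Gamma(alpha; x/theta)].
from scipy.special import gammainc  # regularized lower incomplete gamma

def gamma_limited_ev(x, alpha, theta):
    y = x / theta
    return alpha * theta * gammainc(alpha + 1.0, y) + x * (1.0 - gammainc(alpha, y))

if __name__ == "__main__":
    # As the limit x grows, E[X ^ x] should approach the mean alpha*theta.
    print(gamma_limited_ev(1e6, alpha=2.0, theta=500.0))   # close to 1000
    print(gamma_limited_ev(750.0, alpha=2.0, theta=500.0))
```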

A.3.2.2 Inverse gamma (Vinci), parameters α, θ

$$f(x) = \frac{(\theta/x)^{\alpha} e^{-\theta/x}}{x\,\Gamma(\alpha)}, \qquad F(x) = 1 - \Gamma(\alpha; \theta/x)$$
$$E[X^k] = \frac{\theta^k\,\Gamma(\alpha-k)}{\Gamma(\alpha)}, \quad k < \alpha; \qquad E[X^k] = \frac{\theta^k}{(\alpha-1)\cdots(\alpha-k)} \ \text{if } k \text{ is a positive integer}$$
$$E[(X\wedge x)^k] = \frac{\theta^k\,\Gamma(\alpha-k)}{\Gamma(\alpha)}\,[1-\Gamma(\alpha-k; \theta/x)] + x^k\,\Gamma(\alpha; \theta/x) = \frac{\theta^k\,G(\alpha-k; \theta/x)}{\Gamma(\alpha)} + x^k\,\Gamma(\alpha; \theta/x), \qquad \text{all } k$$
$$\text{mode} = \theta/(\alpha+1)$$

A.3.2.3 Weibull, parameters θ, τ

$$f(x) = \frac{\tau\,(x/\theta)^{\tau} e^{-(x/\theta)^{\tau}}}{x}, \qquad F(x) = 1 - e^{-(x/\theta)^{\tau}}$$
$$E[X^k] = \theta^k\,\Gamma(1+k/\tau), \qquad k > -\tau$$
$$\mathrm{VaR}_p(X) = \theta\,[-\ln(1-p)]^{1/\tau}$$
$$E[(X\wedge x)^k] = \theta^k\,\Gamma(1+k/\tau)\,\Gamma[1+k/\tau;(x/\theta)^{\tau}] + x^k e^{-(x/\theta)^{\tau}}, \qquad k > -\tau$$
$$\text{mode} = \theta\left(\frac{\tau-1}{\tau}\right)^{1/\tau}, \ \tau > 1, \text{ else } 0$$

A.3.2.4 Inverse Weibull (log Gompertz), parameters θ, τ

$$f(x) = \frac{\tau\,(\theta/x)^{\tau} e^{-(\theta/x)^{\tau}}}{x}, \qquad F(x) = e^{-(\theta/x)^{\tau}}$$
$$E[X^k] = \theta^k\,\Gamma(1-k/\tau), \qquad k < \tau$$
$$\mathrm{VaR}_p(X) = \theta\,(-\ln p)^{-1/\tau}$$
$$E[(X\wedge x)^k] = \theta^k\,\Gamma(1-k/\tau)\{1-\Gamma[1-k/\tau;(\theta/x)^{\tau}]\} + x^k\left[1-e^{-(\theta/x)^{\tau}}\right] = \theta^k\,G[1-k/\tau;(\theta/x)^{\tau}] + x^k\left[1-e^{-(\theta/x)^{\tau}}\right], \qquad \text{all } k$$
$$\text{mode} = \theta\left(\frac{\tau}{\tau+1}\right)^{1/\tau}$$
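
The Weibull and inverse Weibull quantiles above also invert their cdfs in closed form; a brief sketch (helper names assumed):

```python
# Sketch: closed-form quantiles (VaR) for the Weibull and inverse Weibull
# parameterizations given above.
from math import log

def weibull_var(p, theta, tau):
    # VaR_p(X) = theta * [-ln(1-p)]^(1/tau)
    return theta * (-log(1.0 - p)) ** (1.0 / tau)

def inverse_weibull_var(p, theta, tau):
    # VaR_p(X) = theta * (-ln p)^(-1/tau)
    return theta * (-log(p)) ** (-1.0 / tau)

if __name__ == "__main__":
    print(weibull_var(0.99, theta=1000.0, tau=0.5))
    print(inverse_weibull_var(0.99, theta=1000.0, tau=2.0))
```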

A.3.3 One-parameter distributions

A.3.3.1 Exponential, parameter θ

$$f(x) = \frac{e^{-x/\theta}}{\theta}, \qquad F(x) = 1 - e^{-x/\theta}$$
$$M(t) = (1-\theta t)^{-1}$$
$$E[X^k] = \theta^k\,\Gamma(k+1), \quad k > -1; \qquad E[X^k] = \theta^k\,k! \ \text{if } k \text{ is a positive integer}$$
$$\mathrm{VaR}_p(X) = -\theta\ln(1-p), \qquad \mathrm{TVaR}_p(X) = -\theta\ln(1-p) + \theta$$
$$E[X\wedge x] = \theta(1-e^{-x/\theta})$$
$$E[(X\wedge x)^k] = \theta^k\,\Gamma(k+1)\,\Gamma(k+1;x/\theta) + x^k e^{-x/\theta}, \quad k > -1; \qquad = \theta^k\,k!\,\Gamma(k+1;x/\theta) + x^k e^{-x/\theta}, \quad k \text{ a positive integer}$$
$$\text{mode} = 0$$

A.3.3.2 Inverse exponential, parameter θ

$$f(x) = \frac{\theta e^{-\theta/x}}{x^{2}}, \qquad F(x) = e^{-\theta/x}$$
$$E[X^k] = \theta^k\,\Gamma(1-k), \qquad k < 1$$
$$\mathrm{VaR}_p(X) = \theta\,(-\ln p)^{-1}$$
$$E[(X\wedge x)^k] = \theta^k\,G(1-k; \theta/x) + x^k(1-e^{-\theta/x}), \qquad \text{all } k$$
$$\text{mode} = \theta/2$$

A.5 Other distributions

A.5.1.1 Lognormal, parameters μ, σ (μ can be negative)

$$f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\exp(-z^{2}/2) = \varphi(z)/(\sigma x), \qquad z = \frac{\ln x - \mu}{\sigma}, \qquad F(x) = \Phi(z)$$
$$E[X^k] = \exp(k\mu + k^{2}\sigma^{2}/2)$$
$$E[(X\wedge x)^k] = \exp(k\mu + k^{2}\sigma^{2}/2)\,\Phi\!\left(\frac{\ln x - \mu - k\sigma^{2}}{\sigma}\right) + x^k[1-F(x)]$$
$$\text{mode} = \exp(\mu - \sigma^{2})$$
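
A small sketch (helper name assumed, using scipy's standard normal cdf for Φ) of the lognormal limited moment formula above:

```python
# Sketch: lognormal limited moment E[(X ^ x)^k] from the closed form above.
from math import exp, log
from scipy.stats import norm

def lognormal_limited_moment(x, mu, sigma, k=1):
    z = (log(x) - mu) / sigma
    raw = exp(k * mu + 0.5 * (k * sigma) ** 2)        # E[X^k]
    shifted = (log(x) - mu - k * sigma ** 2) / sigma  # argument of Phi in the formula
    return raw * norm.cdf(shifted) + x ** k * (1.0 - norm.cdf(z))

if __name__ == "__main__":
    # With a very large limit, E[X ^ x] approaches the mean exp(mu + sigma^2/2).
    print(lognormal_limited_moment(1e9, mu=7.0, sigma=1.0))
    print(exp(7.0 + 0.5))
```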

A.5.1.2 Inverse Gaussian, parameters μ, θ

$$f(x) = \left(\frac{\theta}{2\pi x^{3}}\right)^{1/2}\exp\!\left(-\frac{\theta z^{2}}{2x}\right), \qquad z = \frac{x-\mu}{\mu}$$
$$F(x) = \Phi\!\left[z\left(\frac{\theta}{x}\right)^{1/2}\right] + \exp\!\left(\frac{2\theta}{\mu}\right)\Phi\!\left[-y\left(\frac{\theta}{x}\right)^{1/2}\right], \qquad y = \frac{x+\mu}{\mu}$$
$$M(t) = \exp\!\left[\frac{\theta}{\mu}\left(1 - \sqrt{1 - \frac{2t\mu^{2}}{\theta}}\right)\right], \quad t < \frac{\theta}{2\mu^{2}}; \qquad E[X] = \mu, \quad \mathrm{Var}[X] = \mu^{3}/\theta$$
$$E[X\wedge x] = x - \mu z\,\Phi\!\left[z\left(\frac{\theta}{x}\right)^{1/2}\right] - \mu y\exp\!\left(\frac{2\theta}{\mu}\right)\Phi\!\left[-y\left(\frac{\theta}{x}\right)^{1/2}\right]$$

A.5.1.3 log-t, parameters r, μ, σ (μ can be negative)

Let Y have a t distribution with r degrees of freedom. Then X = exp(σY + μ) has the log-t distribution. Positive moments do not exist for this distribution. Just as the t distribution has a heavier tail than the normal distribution, this distribution has a heavier tail than the lognormal distribution.

$$f(x) = \frac{\Gamma\!\left(\frac{r+1}{2}\right)}{x\sigma\sqrt{\pi r}\,\Gamma\!\left(\frac{r}{2}\right)}\left[1 + \frac{1}{r}\left(\frac{\ln x - \mu}{\sigma}\right)^{2}\right]^{-(r+1)/2}$$
$$F(x) = F_r\!\left(\frac{\ln x - \mu}{\sigma}\right) \text{ with } F_r(t) \text{ the cdf of a t distribution with } r \text{ d.f.},$$
$$F(x) = \frac{1}{2}\,\beta\!\left(\frac{r}{2}, \frac{1}{2};\ \frac{r}{r + \left(\frac{\ln x - \mu}{\sigma}\right)^{2}}\right), \quad 0 < x \le e^{\mu}; \qquad F(x) = 1 - \frac{1}{2}\,\beta\!\left(\frac{r}{2}, \frac{1}{2};\ \frac{r}{r + \left(\frac{\ln x - \mu}{\sigma}\right)^{2}}\right), \quad x \ge e^{\mu}.$$

A.5.1.4 Single-parameter Pareto, parameters α, θ

$$f(x) = \frac{\alpha\theta^{\alpha}}{x^{\alpha+1}}, \quad x > \theta, \qquad F(x) = 1 - (\theta/x)^{\alpha}, \quad x > \theta$$
$$\mathrm{VaR}_p(X) = \theta(1-p)^{-1/\alpha}, \qquad \mathrm{TVaR}_p(X) = \frac{\alpha\theta(1-p)^{-1/\alpha}}{\alpha-1}, \quad \alpha > 1$$
$$E[X^k] = \frac{\alpha\theta^k}{\alpha-k}, \quad k < \alpha; \qquad E[(X\wedge x)^k] = \frac{\alpha\theta^k}{\alpha-k} - \frac{k\theta^{\alpha}}{(\alpha-k)x^{\alpha-k}}, \quad x \ge \theta$$
$$\text{mode} = \theta$$

Note: Although there appear to be two parameters, only α is a true parameter. The value of θ must be set in advance.
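
A brief sketch (assumed helper names) of the single-parameter Pareto raw and limited moments above, with θ fixed in advance:

```python
# Sketch: single-parameter Pareto moments.
def sp_pareto_moment(k, alpha, theta):
    # E[X^k] = alpha * theta^k / (alpha - k), valid for k < alpha
    if k >= alpha:
        raise ValueError("moment exists only for k < alpha")
    return alpha * theta ** k / (alpha - k)

def sp_pareto_limited_moment(k, x, alpha, theta):
    # E[(X ^ x)^k] = alpha*theta^k/(alpha-k) - k*theta^alpha/((alpha-k)*x^(alpha-k)), x >= theta
    return alpha * theta ** k / (alpha - k) - k * theta ** alpha / ((alpha - k) * x ** (alpha - k))

if __name__ == "__main__":
    print(sp_pareto_moment(1, alpha=2.5, theta=100.0))        # the mean
    print(sp_pareto_limited_moment(1, 500.0, 2.5, 100.0))     # E[X ^ 500]
```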

A.6 Distributions with finite support

For these two distributions, the scale parameter θ is assumed known.

A.6.1.1 Generalized beta, parameters a, b, θ, τ

$$f(x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,u^{a}(1-u)^{b-1}\,\frac{\tau}{x}, \qquad 0 < x < \theta, \quad u = (x/\theta)^{\tau}$$
$$F(x) = \beta(a, b; u)$$
$$E[X^k] = \frac{\theta^k\,\Gamma(a+b)\Gamma(a+k/\tau)}{\Gamma(a)\Gamma(a+b+k/\tau)}, \qquad k > -a\tau$$
$$E[(X\wedge x)^k] = \frac{\theta^k\,\Gamma(a+b)\Gamma(a+k/\tau)}{\Gamma(a)\Gamma(a+b+k/\tau)}\,\beta(a+k/\tau, b; u) + x^k[1-\beta(a, b; u)]$$

A.6.1.2 Beta, parameters a, b, θ

$$f(x) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\,u^{a}(1-u)^{b-1}\,\frac{1}{x}, \qquad 0 < x < \theta, \quad u = x/\theta$$
$$F(x) = \beta(a, b; u)$$
$$E[X^k] = \frac{\theta^k\,\Gamma(a+b)\Gamma(a+k)}{\Gamma(a)\Gamma(a+b+k)}, \quad k > -a; \qquad E[X^k] = \frac{\theta^k\,a(a+1)\cdots(a+k-1)}{(a+b)(a+b+1)\cdots(a+b+k-1)} \ \text{if } k \text{ is a positive integer}$$
$$E[(X\wedge x)^k] = \frac{\theta^k\,a(a+1)\cdots(a+k-1)}{(a+b)(a+b+1)\cdots(a+b+k-1)}\,\beta(a+k, b; u) + x^k[1-\beta(a, b; u)]$$
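
A sketch (helper names assumed) for the scaled beta distribution on (0, θ), using scipy.special.betainc for the regularized incomplete beta β(a, b; u):

```python
# Sketch: cdf and k = 1 limited expected value for the beta distribution with scale theta.
from scipy.special import betainc  # regularized incomplete beta function

def beta_cdf(x, a, b, theta):
    return betainc(a, b, x / theta)

def beta_limited_ev(x, a, b, theta):
    # E[X ^ x] = theta*a/(a+b) * beta(a+1, b; u) + x*[1 - beta(a, b; u)], u = x/theta
    u = x / theta
    return theta * a / (a + b) * betainc(a + 1.0, b, u) + x * (1.0 - betainc(a, b, u))

if __name__ == "__main__":
    print(beta_cdf(60.0, a=2.0, b=3.0, theta=100.0))
    print(beta_limited_ev(60.0, a=2.0, b=3.0, theta=100.0))
```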

Appendix B
An Inventory of Discrete Distributions

B.1 Introduction

The 16 models fall into three classes. The divisions are based on the algorithm by which the probabilities are computed. For some of the more familiar distributions these formulas will look different from the ones you may have learned, but they produce the same probabilities. After each name, the parameters are given. All parameters are positive unless otherwise indicated. In all cases, $p_k$ is the probability of observing k losses.

For finding moments, the most convenient form is to give the factorial moments. The jth factorial moment is $\mu_{(j)} = E[N(N-1)\cdots(N-j+1)]$. We have $E[N] = \mu_{(1)}$ and $\mathrm{Var}(N) = \mu_{(2)} + \mu_{(1)} - \mu_{(1)}^2$.

The estimators which are presented are not intended to be useful estimators but rather to provide starting values for maximizing the likelihood (or other) function. For determining starting values, the following quantities are used [where $n_k$ is the observed frequency at k (if, for the last entry, $n_k$ represents the number of observations at k or more, assume it was at exactly k) and n is the sample size]:
$$\hat\mu = \frac{1}{n}\sum_{k=1}^{\infty} k\,n_k, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{k=1}^{\infty} k^2 n_k - \hat\mu^2.$$
When the method of moments is used to determine the starting value, a circumflex (e.g., $\hat\lambda$) is used. For any other method, a tilde (e.g., $\tilde\lambda$) is used. When the starting value formulas do not provide admissible parameter values, a truly crude guess is to set the product of all λ and β parameters equal to the sample mean and set all other parameters equal to 1. If there are two λ and/or β parameters, an easy choice is to set each to the square root of the sample mean.

The last item presented is the probability generating function,
$$P(z) = E[z^N].$$

B.2 The (a, b, 0) class

B.2.1.1 Poisson, parameter λ

$$p_0 = e^{-\lambda}, \qquad a = 0, \qquad b = \lambda$$
$$p_k = \frac{e^{-\lambda}\lambda^{k}}{k!}$$
$$E[N] = \lambda, \qquad \mathrm{Var}[N] = \lambda$$
$$P(z) = e^{\lambda(z-1)}$$
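
Every member of the (a, b, 0) class satisfies the recursion $p_k = (a + b/k)\,p_{k-1}$; a minimal sketch (assumed helper) illustrated with the Poisson case (a = 0, b = λ, $p_0 = e^{-\lambda}$):

```python
# Sketch: (a, b, 0)-class probabilities via the recursion p_k = (a + b/k) * p_{k-1}.
from math import exp

def ab0_probs(a, b, p0, kmax):
    probs = [p0]
    for k in range(1, kmax + 1):
        probs.append((a + b / k) * probs[-1])
    return probs

if __name__ == "__main__":
    lam = 2.0
    # Poisson(2): should match e^{-2} * 2^k / k! for each k
    print(ab0_probs(a=0.0, b=lam, p0=exp(-lam), kmax=5))
```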

B.2.1.2 Geometric, parameter β

$$p_0 = \frac{1}{1+\beta}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = 0$$
$$p_k = \frac{\beta^{k}}{(1+\beta)^{k+1}}$$
$$E[N] = \beta, \qquad \mathrm{Var}[N] = \beta(1+\beta)$$
$$P(z) = [1-\beta(z-1)]^{-1}.$$
This is a special case of the negative binomial with r = 1.

B.2.1.3 Binomial, parameters q, m (0 < q < 1, m an integer)

$$p_0 = (1-q)^{m}, \qquad a = \frac{-q}{1-q}, \qquad b = \frac{(m+1)q}{1-q}$$
$$p_k = \binom{m}{k} q^{k}(1-q)^{m-k}, \qquad k = 0, 1, \ldots, m$$
$$E[N] = mq, \qquad \mathrm{Var}[N] = mq(1-q)$$
$$P(z) = [1+q(z-1)]^{m}.$$

B.2.1.4 Negative binomial, parameters β, r

$$p_0 = (1+\beta)^{-r}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = \frac{(r-1)\beta}{1+\beta}$$
$$p_k = \frac{r(r+1)\cdots(r+k-1)\,\beta^{k}}{k!\,(1+\beta)^{r+k}}$$
$$E[N] = r\beta, \qquad \mathrm{Var}[N] = r\beta(1+\beta)$$
$$P(z) = [1-\beta(z-1)]^{-r}.$$

B.3 The (a, b, 1) class

To distinguish this class from the (a, b, 0) class, the probabilities are denoted $\Pr(N = k) = p_k^M$ or $\Pr(N = k) = p_k^T$ depending on which subclass is being represented. For this class, $p_0^M$ is arbitrary (that is, it is a parameter) and then $p_1^M$ or $p_1^T$ is a specified function of the parameters a and b. Subsequent probabilities are obtained recursively as in the (a, b, 0) class: $p_k^M = (a + b/k)\,p_{k-1}^M$, k = 2, 3, ..., with the same recursion for $p_k^T$.

There are two subclasses of this class. When discussing their members, we often refer to the corresponding member of the (a, b, 0) class. This refers to the member of that class with the same values for a and b. The notation $p_k$ will continue to be used for probabilities for the corresponding (a, b, 0) distribution.

B.3.1 The zero-truncated subclass

The members of this class have $p_0^T = 0$, and therefore it need not be estimated. These distributions should only be used when a value of zero is impossible. The first factorial moment is $\mu_{(1)} = (a+b)/[(1-a)(1-p_0)]$, where $p_0$ is the value for the corresponding member of the (a, b, 0) class. For the logarithmic distribution (which has no corresponding member), $\mu_{(1)} = \beta/\ln(1+\beta)$. Higher factorial moments are obtained recursively with the same formula as with the (a, b, 0) class. The variance is $(a+b)[1-(a+b+1)p_0]/[(1-a)(1-p_0)]^2$. For those members of the subclass which have corresponding (a, b, 0) distributions, $p_k^T = p_k/(1-p_0)$.
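
The relation $p_k^T = p_k/(1-p_0)$ at the end of the paragraph above can be applied directly; a sketch (assumed helper names) for a zero-truncated negative binomial:

```python
# Sketch: zero-truncated probabilities from the corresponding (a, b, 0) member,
# p_k^T = p_k / (1 - p_0), shown for the negative binomial.
def neg_bin_prob(k, r, beta):
    # p_k = r(r+1)...(r+k-1) * beta^k / (k! * (1+beta)^(r+k))
    coef = 1.0
    for j in range(k):
        coef *= (r + j) / (j + 1)
    return coef * beta ** k / (1.0 + beta) ** (r + k)

def zt_neg_bin_prob(k, r, beta):
    if k == 0:
        return 0.0
    p0 = (1.0 + beta) ** (-r)
    return neg_bin_prob(k, r, beta) / (1.0 - p0)

if __name__ == "__main__":
    print([round(zt_neg_bin_prob(k, r=1.5, beta=2.0), 4) for k in range(5)])
```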

B.3.1.1 Zero-truncated Poisson, parameter λ

$$p_1^T = \frac{\lambda}{e^{\lambda}-1}, \qquad a = 0, \qquad b = \lambda$$
$$p_k^T = \frac{\lambda^{k}}{k!\,(e^{\lambda}-1)}$$
$$E[N] = \lambda/(1-e^{-\lambda}), \qquad \mathrm{Var}[N] = \lambda[1-(\lambda+1)e^{-\lambda}]/(1-e^{-\lambda})^{2}$$
$$\tilde\lambda = \ln(n\hat\mu/n_1)$$
$$P(z) = \frac{e^{\lambda z}-1}{e^{\lambda}-1}.$$

B.3.1.2 Zero-truncated geometric, parameter β

$$p_1^T = \frac{1}{1+\beta}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = 0$$
$$p_k^T = \frac{\beta^{k-1}}{(1+\beta)^{k}}$$
$$E[N] = 1+\beta, \qquad \mathrm{Var}[N] = \beta(1+\beta)$$
$$\hat\beta = \hat\mu - 1$$
$$P(z) = \frac{[1-\beta(z-1)]^{-1} - (1+\beta)^{-1}}{1 - (1+\beta)^{-1}}.$$
This is a special case of the zero-truncated negative binomial with r = 1.

B.3.1.3 Logarithmic, parameter β

$$p_1^T = \frac{\beta}{(1+\beta)\ln(1+\beta)}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = -\frac{\beta}{1+\beta}$$
$$p_k^T = \frac{\beta^{k}}{k(1+\beta)^{k}\ln(1+\beta)}$$
$$E[N] = \beta/\ln(1+\beta), \qquad \mathrm{Var}[N] = \frac{\beta[1+\beta-\beta/\ln(1+\beta)]}{\ln(1+\beta)}$$
$$\tilde\beta = \frac{n\hat\mu}{n_1} - 1 \quad \text{or} \quad 2(\hat\mu - 1)$$
$$P(z) = 1 - \frac{\ln[1-\beta(z-1)]}{\ln(1+\beta)}.$$
This is a limiting case of the zero-truncated negative binomial as r → 0.
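
A quick sketch (helper names assumed) of the zero-truncated Poisson pmf and mean given above, with a check that the probabilities sum to one:

```python
# Sketch: zero-truncated Poisson, p_k^T = lambda^k / (k! (e^lambda - 1)), k >= 1.
from math import exp, factorial

def zt_poisson_pmf(k, lam):
    if k == 0:
        return 0.0
    return lam ** k / (factorial(k) * (exp(lam) - 1.0))

def zt_poisson_mean(lam):
    # E[N] = lambda / (1 - e^{-lambda})
    return lam / (1.0 - exp(-lam))

if __name__ == "__main__":
    lam = 1.2
    print(sum(zt_poisson_pmf(k, lam) for k in range(1, 50)))  # approximately 1.0
    print(zt_poisson_mean(lam))
```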

B.3.1.4 Zero-truncated binomial, parameters q, m (0 < q < 1, m an integer)

$$p_1^T = \frac{m(1-q)^{m-1}q}{1-(1-q)^{m}}, \qquad a = \frac{-q}{1-q}, \qquad b = \frac{(m+1)q}{1-q}$$
$$p_k^T = \frac{\binom{m}{k} q^{k}(1-q)^{m-k}}{1-(1-q)^{m}}, \qquad k = 1, 2, \ldots, m$$
$$E[N] = \frac{mq}{1-(1-q)^{m}}, \qquad \mathrm{Var}[N] = \frac{mq[(1-q)-(1-q+mq)(1-q)^{m}]}{[1-(1-q)^{m}]^{2}}$$
$$\tilde q = \frac{\hat\mu}{m}$$
$$P(z) = \frac{[1+q(z-1)]^{m} - (1-q)^{m}}{1-(1-q)^{m}}.$$

B.3.1.5 Zero-truncated negative binomial, parameters β, r (r > -1, r ≠ 0)

$$p_1^T = \frac{r\beta}{(1+\beta)^{r+1} - (1+\beta)}, \qquad a = \frac{\beta}{1+\beta}, \qquad b = \frac{(r-1)\beta}{1+\beta}$$
$$p_k^T = \frac{r(r+1)\cdots(r+k-1)}{k!\,[(1+\beta)^{r}-1]}\left(\frac{\beta}{1+\beta}\right)^{k}$$
$$E[N] = \frac{r\beta}{1-(1+\beta)^{-r}}, \qquad \mathrm{Var}[N] = \frac{r\beta[(1+\beta)-(1+\beta+r\beta)(1+\beta)^{-r}]}{[1-(1+\beta)^{-r}]^{2}}$$
$$\tilde\beta = \frac{\hat\sigma^{2}}{\hat\mu} - 1, \qquad \tilde r = \frac{\hat\mu^{2}}{\hat\sigma^{2}-\hat\mu}$$
$$P(z) = \frac{[1-\beta(z-1)]^{-r} - (1+\beta)^{-r}}{1-(1+\beta)^{-r}}.$$
This distribution is sometimes called the extended truncated negative binomial distribution because the parameter r can extend below 0.

B.3.2 The zero-modified subclass

A zero-modified distribution is created by starting with a truncated distribution and then placing an arbitrary amount of probability at zero. This probability, $p_0^M$, is a parameter. The remaining probabilities are adjusted accordingly. Values of $p_k^M$ can be determined from the corresponding zero-truncated distribution as $p_k^M = (1-p_0^M)\,p_k^T$, or from the corresponding (a, b, 0) distribution as $p_k^M = (1-p_0^M)\,p_k/(1-p_0)$. The same recursion used for the zero-truncated subclass applies.

The mean is $(1-p_0^M)$ times the mean for the corresponding zero-truncated distribution. The variance is $(1-p_0^M)$ times the zero-truncated variance plus $p_0^M(1-p_0^M)$ times the square of the zero-truncated mean. The probability generating function is $P^M(z) = p_0^M + (1-p_0^M)P(z)$, where $P(z)$ is the probability generating function for the corresponding zero-truncated distribution.

The maximum likelihood estimator of $p_0^M$ is always the sample relative frequency at 0.
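
A closing sketch (assumed helper names) of building a zero-modified distribution from a zero-truncated one, as described above, using the zero-modified Poisson as the example:

```python
# Sketch: zero-modified probabilities from a zero-truncated distribution,
# p_k^M = (1 - p0M) * p_k^T for k >= 1, with probability p0M placed at zero.
from math import exp, factorial

def zt_poisson_pmf(k, lam):
    # zero-truncated Poisson: p_k^T = lambda^k / (k! (e^lambda - 1)), k >= 1
    return 0.0 if k == 0 else lam ** k / (factorial(k) * (exp(lam) - 1.0))

def zm_poisson_pmf(k, lam, p0m):
    # arbitrary probability p0m at zero; remaining mass rescaled by (1 - p0m)
    if k == 0:
        return p0m
    return (1.0 - p0m) * zt_poisson_pmf(k, lam)

if __name__ == "__main__":
    total = sum(zm_poisson_pmf(k, lam=1.2, p0m=0.3) for k in range(50))
    print(total)  # approximately 1.0
```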