Properties of Matrix Variate Hypergeometric Function Distribution


Applied Mathematical Sciences, Vol. 11, 2017, no. 14, 677-692
HIKARI Ltd, www.m-hikari.com
https://doi.org/10.12988/ams.2017.7254

Properties of Matrix Variate Hypergeometric Function Distribution

Daya K. Nagar and Juan Carlos Mosquera-Benítez

Instituto de Matemáticas, Universidad de Antioquia
Calle 67, No. 53-108, Medellín, Colombia (S.A.)

Copyright © 2017 Daya K. Nagar and Juan Carlos Mosquera-Benítez. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this article, we study several properties of the matrix variate hypergeometric function distribution, which is a generalization of the matrix variate beta distribution. We also define a bimatrix-variate hypergeometric function distribution and study its properties.

Mathematics Subject Classification: 62H99, 60E05

Keywords: Beta function; gamma function; Gauss hypergeometric function; matrix variate; probability distribution; zonal polynomial

1 Introduction

The random variable $X$ is said to have a hypergeometric function type I distribution, denoted as $X \sim H^{I}(\nu, \alpha, \beta, \gamma)$, if its p.d.f. (probability density function) is given by (Gupta and Nagar [2], Nagar and Alvarez [10, 11])
$$\frac{\Gamma(\gamma+\nu-\alpha)\Gamma(\gamma+\nu-\beta)}{\Gamma(\gamma)\Gamma(\nu)\Gamma(\gamma+\nu-\alpha-\beta)}\, x^{\nu-1}(1-x)^{\gamma-1}\,{}_2F_1(\alpha,\beta;\gamma;1-x), \quad 0<x<1, \tag{1}$$
where $\nu>0$, $\gamma>0$, $\gamma+\nu-\alpha-\beta>0$, and ${}_2F_1$ is the Gauss hypergeometric function (Luke [8]). The hypergeometric function type I distribution occurs as the distribution of the product of two independent beta variables (Gupta and Nagar [2], Nagar and Alvarez [10]). For $\alpha=\gamma$, the density (1) reduces
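As a quick numerical illustration (not part of the paper itself), the density (1) can be evaluated with SciPy's `hyp2f1`; the sketch below checks that it integrates to one and that the $\alpha=\gamma$ case collapses to a beta type 1 density. The parameter values are arbitrary choices satisfying the stated constraints $\nu>0$, $\gamma>0$, $\gamma+\nu-\alpha-\beta>0$.

```python
from scipy.integrate import quad
from scipy.special import gamma, hyp2f1

def hyp_type1_pdf(x, nu, alpha, beta, gam):
    """Density (1) of the hypergeometric function type I distribution."""
    c = (gamma(gam + nu - alpha) * gamma(gam + nu - beta)
         / (gamma(gam) * gamma(nu) * gamma(gam + nu - alpha - beta)))
    return c * x**(nu - 1) * (1 - x)**(gam - 1) * hyp2f1(alpha, beta, gam, 1 - x)

# Illustrative parameters: nu = 3, alpha = beta = 1/2, gamma = 2.
nu, alpha, beta, gam = 3.0, 0.5, 0.5, 2.0
total, _ = quad(hyp_type1_pdf, 0, 1, args=(nu, alpha, beta, gam))
print(total)  # the density integrates to one
```

For $\alpha=\gamma$, ${}_2F_1(\gamma,\beta;\gamma;1-x)=x^{-\beta}$, so `hyp_type1_pdf(x, nu, gam, beta, gam)` agrees with the Beta$(\nu-\beta,\gamma)$ density, which can be confirmed pointwise.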

to a beta type 1 density with parameters $\nu-\beta$ and $\gamma$. Likewise, for $\beta=\gamma$, the hypergeometric function type I density reduces to a beta type 1 density with parameters $\nu-\alpha$ and $\gamma$. Further, for $\alpha=0$ or $\beta=0$, the hypergeometric function type I density simplifies to a beta type 1 density with parameters $\nu$ and $\gamma$. Nagar and Alvarez [10, 11] have studied several properties and stochastic representations of the hypergeometric function type I distribution. They have also derived the density function of the product of two independent random variables, each having a hypergeometric function type I distribution.

The bivariate generalization of the hypergeometric function type I distribution, denoted by $(X_1, X_2) \sim H^{I}(\nu_1, \nu_2; \alpha, \beta, \gamma)$, is defined by the density
$$C(\nu_1,\nu_2;\alpha,\beta,\gamma)\, x_1^{\nu_1-1} x_2^{\nu_2-1} (1-x_1-x_2)^{\gamma-1}\,{}_2F_1(\alpha,\beta;\gamma;1-x_1-x_2), \tag{2}$$
where $x_1>0$, $x_2>0$, $x_1+x_2<1$, $\nu_1>0$, $\nu_2>0$, $\gamma>0$, $\nu_1+\nu_2+\gamma-\alpha-\beta>0$, and
$$C(\nu_1,\nu_2;\alpha,\beta,\gamma)=\frac{\Gamma(\nu_1+\nu_2+\gamma-\alpha)\Gamma(\nu_1+\nu_2+\gamma-\beta)}{\Gamma(\nu_1)\Gamma(\nu_2)\Gamma(\gamma)\Gamma(\nu_1+\nu_2+\gamma-\alpha-\beta)}.$$
For $\alpha=0$ or $\beta=0$, the density (2) reduces to a Dirichlet type 1 density with parameters $\nu_1$, $\nu_2$ and $\gamma$. Nagar and Bran-Cardona [12] showed that if $(X_1,X_2)\sim H^{I}(\nu_1,\nu_2;\alpha,\beta,\gamma)$, then $X_1\sim H^{I}(\nu_1,\alpha,\beta,\nu_2+\gamma)$ and $X_2\sim H^{I}(\nu_2,\alpha,\beta,\nu_1+\gamma)$. Further, they have shown that $X_1+X_2$ and $X_1/(X_1+X_2)$ are independent, $X_1+X_2\sim H^{I}(\nu_1+\nu_2,\alpha,\beta,\gamma)$, and $X_1/(X_1+X_2)$ has a beta type 1 distribution with parameters $\nu_1$ and $\nu_2$. They have also derived the density of the product $X_1X_2$.

In this article, we study matrix variate generalizations of (1) and (2). The matrix variate generalization of (1) is defined in Gupta and Nagar [2]. The matrix variate generalization of (2) is defined in this article.

2 Some Known Results and Definitions

We begin with a brief review of some definitions and notations. We adhere to standard notations (cf. Gupta and Nagar [2]).
Let $A=(a_{ij})$ be an $m\times m$ matrix. Then, $A'$ denotes the transpose of $A$; $\operatorname{tr}(A)=a_{11}+\cdots+a_{mm}$; $\operatorname{etr}(A)=\exp(\operatorname{tr}(A))$; $\det(A)$ denotes the determinant of $A$; $\|A\|$ denotes the norm of $A$; $A>0$ means that $A$ is symmetric positive definite; $0<A<I_m$ indicates that both $A$ and $I_m-A$ are symmetric positive definite; and $A^{1/2}$ denotes the unique symmetric positive definite square root of $A>0$. The multivariate gamma function, which

Properties of matrix variate hypergeometric function distribution 679 is frequently used in multivariate statistical analysis is defined by Γ m (a) = etr( X) det(x) a (m+1)/2 dx X>0 = π m(m 1)/4 m i=1 ( Γ a i 1 ), Re(a) > m 1. (3) 2 2 The multivariate generalization of the beta function is defined by B m (a, b) = Im 0 = Γ m(a)γ m (b) Γ m (a + b) det(x) a (m+1)/2 det(i m X) b (m+1)/2 dx = B m (b, a), (4) where Re(a) > (m 1)/2 and Re(b) > (m 1)/2. The generalized hypergeometric coefficient (a) ρ is defined by (a) ρ = m i=1 ( a i 1 ), (5) 2 r i where ρ is the ordered partition of r defined as ρ = (r 1,..., r m ), r 1 r m 0, r 1 + + r m = r and (a) k = a(a + 1) (a + k 1), k = 1, 2,... with (a) 0 = 1. The generalized hypergeometric function of one matrix is defined by pf q (a 1,..., a p ; b 1,..., b q ; X) = (a 1 ) κ (a p ) κ C κ (X), (6) (b 1 ) κ (b q ) κ k! k=0 κ k where a i, i = 1,..., p, b j, j = 1,..., q are arbitrary complex numbers, X is a complex symmetric matrix of order m and κ k denotes summation over all ordered partitions κ of k. Conditions for convergence of the series in (6) are available in the literature. From (6) it follows that 2F 1 (a, b; c; X) = k=0 κ k (a) κ (b) κ (c) κ C κ (X), X < 1. (7) k! The integral representations of the Gauss hypergeometric function 2 F 1 is given by 1 Im det(r) a (m+1)/2 det(i m R) c a (m+1)/2 2F 1 (a, b; c; X) = dr B m (a, c a) 0 det(i m XR) b (8) where Re(a) > (m 1)/2 and Re(c a) > (m 1)/2.

From (8), it is easy to see that
$${}_2F_1(a,b;c;I_m)=\frac{\Gamma_m(c)\Gamma_m(c-a-b)}{\Gamma_m(c-a)\Gamma_m(c-b)}. \tag{9}$$
Further, for $\operatorname{Re}(\alpha)>(m-1)/2$ and $\operatorname{Re}(\beta)>(m-1)/2$, we have
$$\int_{0}^{I_m}\det(R)^{\alpha-(m+1)/2}\det(I_m-R)^{\beta-(m+1)/2}\,{}_pF_q(a_1,\ldots,a_p;b_1,\ldots,b_q;XR)\,dR = B_m(\alpha,\beta)\,{}_{p+1}F_{q+1}(a_1,\ldots,a_p,\alpha;b_1,\ldots,b_q,\alpha+\beta;X), \tag{10}$$
which can be obtained by expanding ${}_pF_q$ in the integrand in a series involving zonal polynomials and integrating term by term using Constantine [1, Eq. 22]. For properties and further results on these functions, the reader is referred to Herz [4], Constantine [1], James [5], and Gupta and Nagar [2].

Consider the following integral involving the Gauss hypergeometric function of matrix argument:
$$f(Z)=\int_{0}^{I_m}\det(X)^{d-(m+1)/2}\det(I_m-X)^{\sigma-(m+1)/2}\,C_\lambda(Z(I_m-X))\,{}_2F_1(a,b;d;X)\,dX.$$
Replacing $X$ by $I_m-X$, the above integral can also be written as
$$f(Z)=\int_{0}^{I_m}\det(X)^{\sigma-(m+1)/2}\det(I_m-X)^{d-(m+1)/2}\,C_\lambda(ZX)\,{}_2F_1(a,b;d;I_m-X)\,dX.$$
It can easily be seen that $f(Z)=f(HZH')$ for any $H\in O(m)$. Thus, integrating $f(HZH')$ over the orthogonal group $O(m)$, we obtain
$$f(Z)=\frac{C_\lambda(Z)}{C_\lambda(I_m)}\,f(I_m).$$
Subrahmaniam [13] conjectured that
$$f(I_m)=\frac{\Gamma_m(d)\Gamma_m(\sigma,\lambda)\Gamma_m(d+\sigma-a-b,\lambda)}{\Gamma_m(d+\sigma-a,\lambda)\Gamma_m(d+\sigma-b,\lambda)}\,C_\lambda(I_m), \tag{11}$$
which was proved by Kabe [6].

Definition 2.1. An $m\times m$ random symmetric positive definite matrix $U$ is said to have a matrix variate beta type 1 distribution with parameters $(\alpha,\beta)$, denoted as $U\sim B1(m,\alpha,\beta)$, if its p.d.f. is given by
$$\frac{\det(U)^{\alpha-(m+1)/2}\det(I_m-U)^{\beta-(m+1)/2}}{B_m(\alpha,\beta)},\quad 0<U<I_m,$$
where $\alpha>(m-1)/2$ and $\beta>(m-1)/2$.

Definition 2.2. An $m\times m$ random symmetric positive definite matrix $V$ is said to have a matrix variate beta type 2 distribution with parameters $(\alpha,\beta)$, denoted as $V\sim B2(m,\alpha,\beta)$, if its p.d.f. is given by
$$\frac{\det(V)^{\alpha-(m+1)/2}\det(I_m+V)^{-(\alpha+\beta)}}{B_m(\alpha,\beta)},\quad V>0,$$
where $\alpha>(m-1)/2$ and $\beta>(m-1)/2$.

From the definitions of matrix variate beta type 1 and type 2 distributions, it follows that if $V\sim B2(m,\alpha,\beta)$, then $(I_m+V)^{-1}V\sim B1(m,\alpha,\beta)$ and $(I_m+V)^{-1}\sim B1(m,\beta,\alpha)$.

Definition 2.3. The $m\times m$ random symmetric positive definite matrices $U_1,\ldots,U_n$ are said to have a matrix variate Dirichlet type 1 distribution with parameters $(\alpha_1,\ldots,\alpha_n;\beta)$, denoted as $(U_1,\ldots,U_n)\sim D1(m,\alpha_1,\ldots,\alpha_n;\beta)$, if their joint p.d.f. is given by
$$\frac{\prod_{i=1}^{n}\det(U_i)^{\alpha_i-(m+1)/2}\,\det\!\left(I_m-\sum_{i=1}^{n}U_i\right)^{\beta-(m+1)/2}}{B_m(\alpha_1,\ldots,\alpha_n,\beta)}, \tag{12}$$
where
$$B_m(\alpha_1,\ldots,\alpha_n,\beta)=\frac{\prod_{i=1}^{n}\Gamma_m(\alpha_i)\,\Gamma_m(\beta)}{\Gamma_m\!\left(\sum_{i=1}^{n}\alpha_i+\beta\right)},$$
with $\sum_{i=1}^{n}U_i<I_m$, $\alpha_i>(m-1)/2$, $i=1,\ldots,n$, and $\beta>(m-1)/2$.

For further results on matrix variate beta and Dirichlet distributions, the reader is referred to Gupta and Nagar [2, 3].

3 Matrix Variate Hypergeometric Function Distribution

In this section, we study properties of the matrix variate hypergeometric function type I distribution. We begin by providing the definition of this distribution, which is due to Gupta and Nagar [2].

Definition 3.1. An $m\times m$ random symmetric matrix $X$ is said to have a hypergeometric function distribution of type I, denoted by $X\sim H^{I}_m(\nu,\alpha,\beta,\gamma)$, if its p.d.f. is given by
$$C_m(\nu,\alpha,\beta,\gamma)\det(X)^{\nu-(m+1)/2}\det(I_m-X)^{\gamma-(m+1)/2}\,{}_2F_1(\alpha,\beta;\gamma;I_m-X), \tag{13}$$
where $0<X<I_m$, $\gamma>(m-1)/2$, $\nu>(m-1)/2$, $\gamma+\nu-\alpha-\beta>(m-1)/2$, and
$$C_m(\nu,\alpha,\beta,\gamma)=\frac{\Gamma_m(\gamma+\nu-\alpha)\Gamma_m(\gamma+\nu-\beta)}{\Gamma_m(\gamma)\Gamma_m(\nu)\Gamma_m(\gamma+\nu-\alpha-\beta)}.$$

From (13), we can see that for $\alpha=\gamma$, $X\sim B1(m,\nu-\beta,\gamma)$, and for $\beta=\gamma$, $X\sim B1(m,\nu-\alpha,\gamma)$.

Theorem 3.1. If $U\sim B1(m,a,b)$ and $V\sim B1(m,c,d)$ are independent, then $Z=U^{1/2}VU^{1/2}\sim H^{I}_m(c,b,c+d-a,b+d)$.

Proof. See Gupta and Nagar [2].

For $c=a+b$, the above theorem gives $Z\sim B1(m,a,b+d)$.

The m.g.f. of $X\sim H^{I}_m(\nu,\alpha,\beta,\gamma)$, as derived in Gupta and Nagar [2], is given by
$$M_X(Z)={}_2F_2(\nu,\gamma+\nu-\alpha-\beta;\gamma+\nu-\alpha,\gamma+\nu-\beta;Z), \tag{14}$$
where $Z=(z_{ij}(1+\delta_{ij})/2)$ is a symmetric matrix of order $m$. The expected value of $\det(X)^h$ is derived as
$$E[\det(X)^h]=\frac{C_m(\nu,\alpha,\beta,\gamma)}{C_m(\nu+h,\alpha,\beta,\gamma)}.$$
Now, substituting for $C_m(\nu,\alpha,\beta,\gamma)$ and $C_m(\nu+h,\alpha,\beta,\gamma)$ in the above expression, we get
$$E[\det(X)^h]=\frac{\Gamma_m(\gamma+\nu-\alpha)\Gamma_m(\gamma+\nu-\beta)\Gamma_m(\nu+h)\Gamma_m(\gamma+\nu-\alpha-\beta+h)}{\Gamma_m(\gamma+\nu-\alpha+h)\Gamma_m(\gamma+\nu-\beta+h)\Gamma_m(\nu)\Gamma_m(\gamma+\nu-\alpha-\beta)}. \tag{15}$$
For $m=1$, we have the univariate case and the above moment expression reduces to
$$E(X^h)=\frac{\Gamma(\gamma+\nu-\alpha)\Gamma(\gamma+\nu-\beta)}{\Gamma(\nu)\Gamma(\gamma+\nu-\alpha-\beta)}\,\frac{\Gamma(\nu+h)\Gamma(\gamma+\nu-\alpha-\beta+h)}{\Gamma(\gamma+\nu-\alpha+h)\Gamma(\gamma+\nu-\beta+h)}. \tag{16}$$

Theorem 3.2. If $X\sim H^{I}_m(\nu,\alpha,\beta,\gamma)$, then $\det(X)$ is distributed as $\prod_{i=1}^{m}z_i$, where $z_1,\ldots,z_m$ are independent, $z_i\sim H^{I}(\nu-(i-1)/2,\alpha,\beta,\gamma)$, $i=1,\ldots,m$.

Proof. Writing the multivariate gamma functions in terms of ordinary gamma functions, (15) is re-written as
$$E[\det(X)^h]=\prod_{i=1}^{m}\left[\frac{\Gamma[\gamma+\nu-\alpha-(i-1)/2]\,\Gamma[\gamma+\nu-\beta-(i-1)/2]}{\Gamma[\nu-(i-1)/2]\,\Gamma[\gamma+\nu-\alpha-\beta-(i-1)/2]}\,\frac{\Gamma[\nu-(i-1)/2+h]\,\Gamma[\gamma+\nu-\alpha-\beta-(i-1)/2+h]}{\Gamma[\gamma+\nu-\alpha-(i-1)/2+h]\,\Gamma[\gamma+\nu-\beta-(i-1)/2+h]}\right]. \tag{17}$$
Now, comparing (17) with (16), we get $E[\det(X)^h]=\prod_{i=1}^{m}E(z_i^h)$.

Corollary 3.2.1. If $X\sim H^{I}_2(\nu,\alpha,\beta,\gamma)$, then $\sqrt{\det(X)}\sim H^{I}(2\nu-1,2\alpha,2\beta,2\gamma)$.
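The reduction (16) can be checked numerically for $m=1$ by integrating the density (1) against $x^h$. The sketch below (illustrative parameter values, assuming SciPy is available) compares the quadrature result with the closed form:

```python
import math
from scipy.integrate import quad
from scipy.special import hyp2f1

g = math.gamma
nu, alpha, beta, gam, h = 3.0, 0.5, 0.5, 2.0, 2.0

# h-th moment by numerical integration of density (1)
const = g(gam + nu - alpha) * g(gam + nu - beta) / (g(gam) * g(nu) * g(gam + nu - alpha - beta))
num, _ = quad(lambda x: const * x**(nu - 1 + h) * (1 - x)**(gam - 1)
              * hyp2f1(alpha, beta, gam, 1 - x), 0, 1)

# closed form (16)
closed = (g(gam + nu - alpha) * g(gam + nu - beta) / (g(nu) * g(gam + nu - alpha - beta))
          * g(nu + h) * g(gam + nu - alpha - beta + h)
          / (g(gam + nu - alpha + h) * g(gam + nu - beta + h)))
print(num, closed)
```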

Proof. Substituting $m=2$ in (17) and using the duplication formula for the gamma function, namely,
$$\Gamma(2z)=\frac{2^{2z-1}\Gamma(z)\Gamma(z+1/2)}{\sqrt{\pi}},$$
the $h$th moment of $\sqrt{\det(X)}$ is written as
$$E[(\det(X))^{h/2}]=\frac{\Gamma(2\gamma+2\nu-2\alpha-1)\Gamma(2\gamma+2\nu-2\beta-1)}{\Gamma(2\nu-1)\Gamma(2\gamma+2\nu-2\alpha-2\beta-1)}\,\frac{\Gamma(2\nu-1+h)\Gamma(2\gamma+2\nu-2\alpha-2\beta-1+h)}{\Gamma(2\gamma+2\nu-2\alpha-1+h)\Gamma(2\gamma+2\nu-2\beta-1+h)}. \tag{18}$$
Now, comparing (18) with (16), we can get the desired result.

Theorem 3.3. Let $X\sim H^{I}_m(\nu,\alpha,\beta,\gamma)$ and let $A$ be an $m\times m$ constant nonsingular matrix. Then, the p.d.f. of $Y=AXA'$ is given by
$$\frac{\Gamma_m(\gamma+\nu-\alpha)\Gamma_m(\gamma+\nu-\beta)}{\Gamma_m(\gamma)\Gamma_m(\nu)\Gamma_m(\gamma+\nu-\alpha-\beta)}\,\det(AA')^{-(\nu+\gamma)+(m+1)/2}\det(Y)^{\nu-(m+1)/2}\det(AA'-Y)^{\gamma-(m+1)/2}\,{}_2F_1(\alpha,\beta;\gamma;I_m-(AA')^{-1}Y),\quad 0<Y<AA'. \tag{19}$$

Proof. By making the transformation $Y=AXA'$ with the Jacobian $J(X\to Y)=\det(AA')^{-(m+1)/2}$ in (13), the density of $Y$ is obtained.

Theorem 3.4. Let $X\sim H^{I}_m(\nu,\alpha,\beta,\gamma)$ and let $H$ be an $m\times m$ orthogonal matrix whose elements are either constants or random variables distributed independently of $X$. Then, the distribution of $X$ is invariant under the transformation $X\to HXH'$ if $H$ is a matrix of constants. Further, if $H$ is a random matrix, then $H$ and $HXH'$ are independent.

Proof. First, let $H$ be a constant matrix. Then, from Theorem 3.3, $HXH'\sim H^{I}_m(\nu,\alpha,\beta,\gamma)$ since $HH'=I_m$. If, however, $H$ is a random orthogonal matrix, then $HXH'\mid H\sim H^{I}_m(\nu,\alpha,\beta,\gamma)$. Since this distribution does not depend on $H$, $HXH'\sim H^{I}_m(\nu,\alpha,\beta,\gamma)$.

A consequence of the above result is that the marginal distributions of the diagonal elements $x_{11},\ldots,x_{mm}$ are identical. Further, if $X_{i_1,\ldots,i_q}$ is a $q\times q$ submatrix of $X$ obtained by taking the $(i_1,\ldots,i_q)$ rows and $(i_1,\ldots,i_q)$ columns of $X$, then $X_{1,\ldots,q}$ and $X_{i_1,\ldots,i_q}$ are identically distributed (Khatri, Khattree and Gupta [7]).

Let $A$ be a $q\times m$ constant matrix of rank $q$. Further, let $T=(t_{ij}(1+\delta_{ij})/2)$ be a symmetric matrix of order $q$. Then, by using (14), the moment generating

function of $AXA'$ is derived as
$$M_{AXA'}(T)=E[\operatorname{etr}(AXA'T)]=E[\operatorname{etr}(XA'TA)]={}_2F_2^{(m)}(\nu,\gamma+\nu-\alpha-\beta;\gamma+\nu-\alpha,\gamma+\nu-\beta;A'TA)={}_2F_2^{(q)}(\nu,\gamma+\nu-\alpha-\beta;\gamma+\nu-\alpha,\gamma+\nu-\beta;(AA')^{1/2}T(AA')^{1/2}),$$
where the last line has been obtained by observing that the non-zero eigenvalues of $A'TA$ and $(AA')^{1/2}T(AA')^{1/2}$ are the same. Now, from the above expression, it is easy to see that $(AA')^{-1/2}AXA'(AA')^{-1/2}\sim H^{I}_q(\nu,\alpha,\beta,\gamma)$. Further, by specifying $A$, it is straightforward to conclude that $X_{1,\ldots,q}\sim H^{I}_q(\nu,\alpha,\beta,\gamma)$ and each diagonal element of $X$ has a hypergeometric function type I distribution with parameters $\nu$, $\alpha$, $\beta$ and $\gamma$.

If $X\sim H^{I}_m(\nu,\alpha,\beta,\gamma)$, then
$$E[C_\kappa(X)]=C_m(\nu,\alpha,\beta,\gamma)\int_{0}^{I_m}C_\kappa(X)\det(X)^{\nu-(m+1)/2}\det(I_m-X)^{\gamma-(m+1)/2}\,{}_2F_1(\alpha,\beta;\gamma;I_m-X)\,dX=\frac{\Gamma_m(\gamma+\nu-\alpha)\Gamma_m(\gamma+\nu-\beta)}{\Gamma_m(\gamma)\Gamma_m(\nu)\Gamma_m(\gamma+\nu-\alpha-\beta)}\,\frac{\Gamma_m(\gamma)\Gamma_m(\nu,\kappa)\Gamma_m(\nu+\gamma-\alpha-\beta,\kappa)}{\Gamma_m(\nu+\gamma-\alpha,\kappa)\Gamma_m(\nu+\gamma-\beta,\kappa)}\,C_\kappa(I_m),$$
where the last line has been obtained by using (11). Now, simplifying the above expression, we get
$$E[C_\kappa(X)]=\frac{(\nu)_\kappa(\nu+\gamma-\alpha-\beta)_\kappa}{(\nu+\gamma-\alpha)_\kappa(\nu+\gamma-\beta)_\kappa}\,C_\kappa(I_m).$$
For $\kappa=(1)$, $\kappa=(2)$ and $\kappa=(1^2)$, the above expression simplifies to
$$E[C_{(1)}(X)]=\frac{m\nu(\nu+\gamma-\alpha-\beta)}{(\nu+\gamma-\alpha)(\nu+\gamma-\beta)},$$
$$E[C_{(2)}(X)]=\frac{m(m+2)\nu(\nu+1)(\nu+\gamma-\alpha-\beta)(\nu+\gamma-\alpha-\beta+1)}{3(\nu+\gamma-\alpha)(\nu+\gamma-\alpha+1)(\nu+\gamma-\beta)(\nu+\gamma-\beta+1)},$$
$$E[C_{(1^2)}(X)]=\frac{2m(m-1)\nu(\nu-1/2)(\nu+\gamma-\alpha-\beta)(\nu+\gamma-\alpha-\beta-1/2)}{3(\nu+\gamma-\alpha)(\nu+\gamma-\alpha-1/2)(\nu+\gamma-\beta)(\nu+\gamma-\beta-1/2)},$$
where we have used the results $C_{(1)}(I_m)=m$, $C_{(2)}(I_m)=m(m+2)/3$ and $C_{(1^2)}(I_m)=2m(m-1)/3$. Now, by noting that $C_{(1)}(X)=\operatorname{tr}(X)$, $C_{(2)}(X)-C_{(1^2)}(X)/2=\operatorname{tr}(X^2)$ and $C_{(2)}(X)+C_{(1^2)}(X)=(\operatorname{tr}X)^2$, we obtain
$$E[\operatorname{tr}(X)]=\frac{m\nu(\nu+\gamma-\alpha-\beta)}{(\nu+\gamma-\alpha)(\nu+\gamma-\beta)},$$

$$E(\operatorname{tr}X^2)=\frac{m\nu(\nu+\gamma-\alpha-\beta)}{3(\nu+\gamma-\alpha)(\nu+\gamma-\beta)}\left[\frac{(m+2)(\nu+1)(\nu+\gamma-\alpha-\beta+1)}{(\nu+\gamma-\alpha+1)(\nu+\gamma-\beta+1)}-\frac{(m-1)(\nu-1/2)(\nu+\gamma-\alpha-\beta-1/2)}{(\nu+\gamma-\alpha-1/2)(\nu+\gamma-\beta-1/2)}\right],$$
$$E[(\operatorname{tr}X)^2]=\frac{m\nu(\nu+\gamma-\alpha-\beta)}{3(\nu+\gamma-\alpha)(\nu+\gamma-\beta)}\left[\frac{(m+2)(\nu+1)(\nu+\gamma-\alpha-\beta+1)}{(\nu+\gamma-\alpha+1)(\nu+\gamma-\beta+1)}+\frac{2(m-1)(\nu-1/2)(\nu+\gamma-\alpha-\beta-1/2)}{(\nu+\gamma-\alpha-1/2)(\nu+\gamma-\beta-1/2)}\right].$$
Since, for any $m\times m$ orthogonal matrix $H$, the random matrices $X$ and $HXH'$ have the same distribution, we have $E(X)=c_1I_m$, $E(X^2)=c_2I_m$ and $E[(\operatorname{tr}X)X]=dI_m$, and hence $E(\operatorname{tr}X)=c_1m$, $E(\operatorname{tr}X^2)=c_2m$ and $E[(\operatorname{tr}X)^2]=dm$. Thus, the coefficients of $m$ in the expressions for $E(\operatorname{tr}X)$, $E(\operatorname{tr}X^2)$ and $E[(\operatorname{tr}X)^2]$ are $c_1$, $c_2$ and $d$, respectively, and we have
$$E(X)=\frac{\nu(\nu+\gamma-\alpha-\beta)}{(\nu+\gamma-\alpha)(\nu+\gamma-\beta)}\,I_m,$$
$$E(X^2)=\frac{\nu(\nu+\gamma-\alpha-\beta)}{3(\nu+\gamma-\alpha)(\nu+\gamma-\beta)}\left[\frac{(m+2)(\nu+1)(\nu+\gamma-\alpha-\beta+1)}{(\nu+\gamma-\alpha+1)(\nu+\gamma-\beta+1)}-\frac{(m-1)(\nu-1/2)(\nu+\gamma-\alpha-\beta-1/2)}{(\nu+\gamma-\alpha-1/2)(\nu+\gamma-\beta-1/2)}\right]I_m,$$
$$E[(\operatorname{tr}X)X]=\frac{\nu(\nu+\gamma-\alpha-\beta)}{3(\nu+\gamma-\alpha)(\nu+\gamma-\beta)}\left[\frac{(m+2)(\nu+1)(\nu+\gamma-\alpha-\beta+1)}{(\nu+\gamma-\alpha+1)(\nu+\gamma-\beta+1)}+\frac{2(m-1)(\nu-1/2)(\nu+\gamma-\alpha-\beta-1/2)}{(\nu+\gamma-\alpha-1/2)(\nu+\gamma-\beta-1/2)}\right]I_m.$$

The following result presents the joint distribution of the eigenvalues of a random matrix having a matrix variate hypergeometric function distribution.

Theorem 3.5. If $X\sim H^{I}_m(\nu,\alpha,\beta,\gamma)$, then the joint p.d.f. of the eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_m$ of $X$ is given by
$$\frac{\pi^{m^2/2}}{\Gamma_m(m/2)}\,C_m(\nu,\alpha,\beta,\gamma)\det(L)^{\nu-(m+1)/2}\det(I_m-L)^{\gamma-(m+1)/2}\left[\prod_{i<j}^{m}(\lambda_i-\lambda_j)\right]{}_2F_1(\alpha,\beta;\gamma;I_m-L),\quad 0<\lambda_m<\cdots<\lambda_1<1,$$
where $L=\operatorname{diag}(\lambda_1,\lambda_2,\ldots,\lambda_m)$.

Proof. The p.d.f. of $X$ is given by (13). Now, a direct application of Theorem 3.2.17 of Muirhead [9] yields the desired result.
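For $m>1$ these moment formulas can be exercised by simulation: Theorem 3.1 gives a sampler for $H^{I}_m$ once matrix variate beta draws are available, and a standard Wishart construction of $B1(m,a,b)$ (assumed here, not stated in the paper) uses two independent Wisharts with $2a$ and $2b$ degrees of freedom. The sketch below compares the Monte Carlo mean of $\operatorname{tr}(Z)$ with the formula for $E[\operatorname{tr}(X)]$ when $m=2$; parameter values are arbitrary choices with $2a$, $2b$, $2c$, $2d$ integers.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(7)

def sym_sqrt_inv(M):
    """Inverse symmetric square root of a positive definite matrix."""
    w, Q = np.linalg.eigh(M)
    return (Q / np.sqrt(w)) @ Q.T

def beta1_rvs(m, a, b, n, rng):
    """B1(m, a, b) draws as (W1+W2)^(-1/2) W1 (W1+W2)^(-1/2) with Wishart W1, W2."""
    W1 = wishart.rvs(df=int(2 * a), scale=np.eye(m), size=n, random_state=rng)
    W2 = wishart.rvs(df=int(2 * b), scale=np.eye(m), size=n, random_state=rng)
    out = np.empty((n, m, m))
    for k in range(n):
        S = sym_sqrt_inv(W1[k] + W2[k])
        out[k] = S @ W1[k] @ S
    return out

m, a, b, c, d, n = 2, 2.0, 1.5, 2.5, 2.0, 10000
U = beta1_rvs(m, a, b, n, rng)
V = beta1_rvs(m, c, d, n, rng)

# Theorem 3.1: Z = U^{1/2} V U^{1/2} ~ H_m^I(c, b, c+d-a, b+d); note tr(Z) = tr(UV)
tr_mean = np.mean([np.trace(U[k] @ V[k]) for k in range(n)])
nu_, al_, be_, ga_ = c, b, c + d - a, b + d
expected = m * nu_ * (nu_ + ga_ - al_ - be_) / ((nu_ + ga_ - al_) * (nu_ + ga_ - be_))
print(tr_mean, expected)
```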

4 Bimatrix Variate Hypergeometric Function Distribution

First we define the bimatrix-variate hypergeometric function type I distribution.

Definition 4.1. The bimatrix variate hypergeometric function type I distribution, denoted by $(X_1,X_2)\sim H^{I}_m(\nu_1,\nu_2;\alpha,\beta,\gamma)$, is defined by the p.d.f.
$$C_m(\nu_1,\nu_2;\alpha,\beta,\gamma)\det(X_1)^{\nu_1-(m+1)/2}\det(X_2)^{\nu_2-(m+1)/2}\det(I_m-X_1-X_2)^{\gamma-(m+1)/2}\,{}_2F_1(\alpha,\beta;\gamma;I_m-X_1-X_2), \tag{20}$$
where $X_1>0$, $X_2>0$, $X_1+X_2<I_m$, $\nu_1>(m-1)/2$, $\nu_2>(m-1)/2$, $\gamma>(m-1)/2$, $\nu_1+\nu_2+\gamma-\alpha-\beta>(m-1)/2$, and $C_m(\nu_1,\nu_2;\alpha,\beta,\gamma)$ is the normalizing constant.

By integrating the density (20) over its support set, the normalizing constant $C_m(\nu_1,\nu_2;\alpha,\beta,\gamma)$ is evaluated as
$$C_m(\nu_1,\nu_2;\alpha,\beta,\gamma)=\frac{\Gamma_m(\nu_1+\nu_2+\gamma-\alpha)\Gamma_m(\nu_1+\nu_2+\gamma-\beta)}{\Gamma_m(\nu_1)\Gamma_m(\nu_2)\Gamma_m(\gamma)\Gamma_m(\nu_1+\nu_2+\gamma-\alpha-\beta)}. \tag{21}$$

5 Properties

In this section we study several properties of the bimatrix variate hypergeometric function type I distribution defined in the previous section. In the next theorem, we derive the marginal distributions.

Theorem 5.1. If $(X_1,X_2)\sim H^{I}_m(\nu_1,\nu_2;\alpha,\beta,\gamma)$, then $X_1\sim H^{I}_m(\nu_1,\alpha,\beta,\nu_2+\gamma)$ and $X_2\sim H^{I}_m(\nu_2,\alpha,\beta,\nu_1+\gamma)$.

Proof. To find the marginal p.d.f. of $X_1$, we integrate (20) with respect to $X_2$ to get
$$C_m(\nu_1,\nu_2;\alpha,\beta,\gamma)\det(X_1)^{\nu_1-(m+1)/2}\int_{0}^{I_m-X_1}\det(X_2)^{\nu_2-(m+1)/2}\det(I_m-X_1-X_2)^{\gamma-(m+1)/2}\,{}_2F_1(\alpha,\beta;\gamma;I_m-X_1-X_2)\,dX_2.$$
Substituting $Z=(I_m-X_1)^{-1/2}X_2(I_m-X_1)^{-1/2}$ with the Jacobian $J(X_2\to Z)=\det(I_m-X_1)^{(m+1)/2}$ above, one obtains
$$C_m(\nu_1,\nu_2;\alpha,\beta,\gamma)\det(X_1)^{\nu_1-(m+1)/2}\det(I_m-X_1)^{\nu_2+\gamma-(m+1)/2}\int_{0}^{I_m}\det(Z)^{\nu_2-(m+1)/2}\det(I_m-Z)^{\gamma-(m+1)/2}\,{}_2F_1(\alpha,\beta;\gamma;(I_m-X_1)(I_m-Z))\,dZ.$$
Now, the desired result is obtained by using (10).
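For $m=1$, (20)-(21) reduce to the bivariate density (2). The following sketch (illustrative, assuming SciPy) verifies numerically that the density with the constant (21) integrates to one over the triangle $x_1>0$, $x_2>0$, $x_1+x_2<1$:

```python
from math import gamma
from scipy.integrate import dblquad
from scipy.special import hyp2f1

# Arbitrary parameters satisfying the constraints of Definition 4.1 with m = 1
nu1, nu2, al, be, ga = 2.0, 1.5, 0.5, 0.5, 2.0

# Normalizing constant (21) for m = 1
C = (gamma(nu1 + nu2 + ga - al) * gamma(nu1 + nu2 + ga - be)
     / (gamma(nu1) * gamma(nu2) * gamma(ga) * gamma(nu1 + nu2 + ga - al - be)))

# Integrate the density (20) over the simplex x1 > 0, x2 > 0, x1 + x2 < 1
total, _ = dblquad(
    lambda x2, x1: C * x1**(nu1 - 1) * x2**(nu2 - 1)
    * (1 - x1 - x2)**(ga - 1) * hyp2f1(al, be, ga, 1 - x1 - x2),
    0, 1, lambda x1: 0, lambda x1: 1 - x1)
print(total)
```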

By using (20) and (21), the joint $(r,s)$-th moment of $\det(X_1)$ and $\det(X_2)$ is obtained as
$$E[\det(X_1)^r\det(X_2)^s]=\frac{C_m(\nu_1,\nu_2;\alpha,\beta,\gamma)}{C_m(\nu_1+r,\nu_2+s;\alpha,\beta,\gamma)}=\frac{\Gamma_m(\nu_1+\nu_2+\gamma-\alpha)\Gamma_m(\nu_1+\nu_2+\gamma-\beta)}{\Gamma_m(\nu_1)\Gamma_m(\nu_2)\Gamma_m(\nu_1+\nu_2+\gamma-\alpha-\beta)}\,\frac{\Gamma_m(\nu_1+r)\Gamma_m(\nu_2+s)\Gamma_m(\nu_1+\nu_2+\gamma-\alpha-\beta+r+s)}{\Gamma_m(\nu_1+\nu_2+\gamma-\alpha+r+s)\Gamma_m(\nu_1+\nu_2+\gamma-\beta+r+s)}.$$
By writing the multivariate gamma functions in terms of ordinary gamma functions, the above expression is re-written as
$$E[\det(X_1)^r\det(X_2)^s]=\prod_{i=1}^{m}\left[\frac{\Gamma[\nu_1+\nu_2+\gamma-\alpha-(i-1)/2]\,\Gamma[\nu_1+\nu_2+\gamma-\beta-(i-1)/2]}{\Gamma[\nu_1-(i-1)/2]\,\Gamma[\nu_2-(i-1)/2]\,\Gamma[\nu_1+\nu_2+\gamma-\alpha-\beta-(i-1)/2]}\,\frac{\Gamma[\nu_1-(i-1)/2+r]\,\Gamma[\nu_2-(i-1)/2+s]\,\Gamma[\nu_1+\nu_2+\gamma-\alpha-\beta-(i-1)/2+r+s]}{\Gamma[\nu_1+\nu_2+\gamma-\alpha-(i-1)/2+r+s]\,\Gamma[\nu_1+\nu_2+\gamma-\beta-(i-1)/2+r+s]}\right]. \tag{22}$$
For $m=1$, we have the case of two scalar random variables and the above moment expression reduces to
$$E(X_1^rX_2^s)=\frac{\Gamma(\nu_1+\nu_2+\gamma-\alpha)\Gamma(\nu_1+\nu_2+\gamma-\beta)}{\Gamma(\nu_1)\Gamma(\nu_2)\Gamma(\nu_1+\nu_2+\gamma-\alpha-\beta)}\,\frac{\Gamma(\nu_1+r)\Gamma(\nu_2+s)\Gamma(\nu_1+\nu_2+\gamma-\alpha-\beta+r+s)}{\Gamma(\nu_1+\nu_2+\gamma-\alpha+r+s)\Gamma(\nu_1+\nu_2+\gamma-\beta+r+s)}. \tag{23}$$
Substituting $m=2$ in (22) and using the duplication formula for the gamma function, the $(r,s)$th joint moment of $\sqrt{\det(X_1)}$ and $\sqrt{\det(X_2)}$ is written as
$$E[(\det(X_1))^{r/2}(\det(X_2))^{s/2}]=\frac{\Gamma(2\nu_1+2\nu_2+2\gamma-2\alpha-1)\Gamma(2\nu_1+2\nu_2+2\gamma-2\beta-1)}{\Gamma(2\nu_1-1)\Gamma(2\nu_2-1)\Gamma(2\nu_1+2\nu_2+2\gamma-2\alpha-2\beta-1)}\,\frac{\Gamma(2\nu_1-1+r)\Gamma(2\nu_2-1+s)\Gamma(2\nu_1+2\nu_2+2\gamma-2\alpha-2\beta-1+r+s)}{\Gamma(2\nu_1+2\nu_2+2\gamma-2\alpha-1+r+s)\Gamma(2\nu_1+2\nu_2+2\gamma-2\beta-1+r+s)}. \tag{24}$$
Now, comparing (24) with (23), it is easy to see that $(\sqrt{\det(X_1)},\sqrt{\det(X_2)})\sim H^{I}(2\nu_1-1,2\nu_2-1;2\alpha,2\beta,2\gamma+1)$.

Substituting appropriately in (22), we obtain
$$E[\det(X_j)]=\prod_{i=1}^{m}\frac{[\nu_j-(i-1)/2][\nu_1+\nu_2+\gamma-\alpha-\beta-(i-1)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-1)/2][\nu_1+\nu_2+\gamma-\beta-(i-1)/2]},$$
$$E[\det(X_j)^2]=\prod_{i=1}^{m}\frac{[\nu_j-(i-1)/2][\nu_1+\nu_2+\gamma-\alpha-\beta-(i-1)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-1)/2][\nu_1+\nu_2+\gamma-\beta-(i-1)/2]}\cdot\frac{[\nu_j-(i-3)/2][\nu_1+\nu_2+\gamma-\alpha-\beta-(i-3)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-3)/2][\nu_1+\nu_2+\gamma-\beta-(i-3)/2]},$$
$$E[\det(X_1X_2)]=\prod_{i=1}^{m}\frac{[\nu_1-(i-1)/2][\nu_1+\nu_2+\gamma-\alpha-\beta-(i-1)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-1)/2][\nu_1+\nu_2+\gamma-\beta-(i-1)/2]}\cdot\frac{[\nu_2-(i-1)/2][\nu_1+\nu_2+\gamma-\alpha-\beta-(i-3)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-3)/2][\nu_1+\nu_2+\gamma-\beta-(i-3)/2]},$$
$$\operatorname{Var}(\det(X_j))=\prod_{i=1}^{m}\frac{[\nu_j-(i-1)/2][\nu_1+\nu_2+\gamma-\alpha-\beta-(i-1)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-1)/2][\nu_1+\nu_2+\gamma-\beta-(i-1)/2]}\left\{\prod_{i=1}^{m}\frac{[\nu_j-(i-3)/2][\nu_1+\nu_2+\gamma-\alpha-\beta-(i-3)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-3)/2][\nu_1+\nu_2+\gamma-\beta-(i-3)/2]}-\prod_{i=1}^{m}\frac{[\nu_j-(i-1)/2][\nu_1+\nu_2+\gamma-\alpha-\beta-(i-1)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-1)/2][\nu_1+\nu_2+\gamma-\beta-(i-1)/2]}\right\},$$
$$\operatorname{Cov}(\det(X_1),\det(X_2))=\prod_{i=1}^{m}\frac{[\nu_1-(i-1)/2][\nu_2-(i-1)/2][\nu_1+\nu_2+\gamma-\alpha-\beta-(i-1)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-1)/2][\nu_1+\nu_2+\gamma-\beta-(i-1)/2]}\left\{\prod_{i=1}^{m}\frac{[\nu_1+\nu_2+\gamma-\alpha-\beta-(i-3)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-3)/2][\nu_1+\nu_2+\gamma-\beta-(i-3)/2]}-\prod_{i=1}^{m}\frac{[\nu_1+\nu_2+\gamma-\alpha-\beta-(i-1)/2]}{[\nu_1+\nu_2+\gamma-\alpha-(i-1)/2][\nu_1+\nu_2+\gamma-\beta-(i-1)/2]}\right\}.$$

In the next theorem we derive the bimatrix variate hypergeometric function type I distribution using independent beta and Dirichlet matrices.

Theorem 5.2. Let $Z\sim B1(m,a,b)$ and $(U_1,U_2)\sim D1(m,c_1,c_2;d)$ be independent. Then,
$$(Z^{1/2}U_1Z^{1/2},\,Z^{1/2}U_2Z^{1/2})\sim H^{I}_m(c_1,c_2;b,c_1+c_2+d-a,b+d).$$

Proof. The joint density of $Z$ and $(U_1,U_2)$ is given by
$$K\det(U_1)^{c_1-(m+1)/2}\det(U_2)^{c_2-(m+1)/2}\det(I_m-U_1-U_2)^{d-(m+1)/2}\det(Z)^{a-(m+1)/2}\det(I_m-Z)^{b-(m+1)/2}, \tag{25}$$

where $U_1>0$, $U_2>0$, $U_1+U_2<I_m$, $0<Z<I_m$ and
$$K=\frac{\Gamma_m(c_1+c_2+d)\Gamma_m(a+b)}{\Gamma_m(c_1)\Gamma_m(c_2)\Gamma_m(d)\Gamma_m(a)\Gamma_m(b)}.$$
Transforming $X_i=Z^{1/2}U_iZ^{1/2}$, $i=1,2$, with the Jacobian $J(U_1,U_2,Z\to X_1,X_2,Z)=\det(Z)^{-(m+1)}$ in (25) and integrating out $Z$, we get the marginal p.d.f. of $(X_1,X_2)$ as
$$K\det(X_1)^{c_1-(m+1)/2}\det(X_2)^{c_2-(m+1)/2}\int_{X_1+X_2}^{I_m}\frac{\det(Z-X_1-X_2)^{d-(m+1)/2}\det(I_m-Z)^{b-(m+1)/2}}{\det(Z)^{c_1+c_2+d-a}}\,dZ, \tag{26}$$
where $X_1>0$, $X_2>0$ and $X_1+X_2<I_m$. Now, making the substitution $V=(I_m-X_1-X_2)^{-1/2}(I_m-Z)(I_m-X_1-X_2)^{-1/2}$ with the Jacobian $J(Z\to V)=\det(I_m-X_1-X_2)^{(m+1)/2}$ in (26), we obtain
$$K\det(X_1)^{c_1-(m+1)/2}\det(X_2)^{c_2-(m+1)/2}\det(I_m-X_1-X_2)^{b+d-(m+1)/2}\int_{0}^{I_m}\frac{\det(V)^{b-(m+1)/2}\det(I_m-V)^{d-(m+1)/2}}{\det(I_m-(I_m-X_1-X_2)V)^{c_1+c_2+d-a}}\,dV.$$
Finally, evaluation of the above integral using (8) yields the desired result.

Corollary 5.2.1. Let $Z\sim B1(m,a,b)$ and $(U_1,U_2)\sim D1(m,c_1,c_2;d)$ be independent. Then,
$$\left((I_m-Z)^{1/2}U_1(I_m-Z)^{1/2},\,(I_m-Z)^{1/2}U_2(I_m-Z)^{1/2}\right)\sim H^{I}_m(c_1,c_2;a,c_1+c_2+d-b,a+d).$$
Further, $(Z^{1/2}U_1Z^{1/2},Z^{1/2}U_2Z^{1/2})\sim D1(m,c_1,c_2;b+d)$ if $a=c_1+c_2+d$, and $\left((I_m-Z)^{1/2}U_1(I_m-Z)^{1/2},\,(I_m-Z)^{1/2}U_2(I_m-Z)^{1/2}\right)\sim D1(m,c_1,c_2;a+d)$ if $b=c_1+c_2+d$.

Corollary 5.2.2. Let $V\sim B2(m,a,b)$ and $(U_1,U_2)\sim D1(m,c_1,c_2;d)$ be independent. Then,
$$\left((I_m+V^{-1})^{-1/2}U_1(I_m+V^{-1})^{-1/2},\,(I_m+V^{-1})^{-1/2}U_2(I_m+V^{-1})^{-1/2}\right)\sim H^{I}_m(c_1,c_2;b,c_1+c_2+d-a,b+d)$$
and
$$\left((I_m+V)^{-1/2}U_1(I_m+V)^{-1/2},\,(I_m+V)^{-1/2}U_2(I_m+V)^{-1/2}\right)\sim H^{I}_m(c_1,c_2;a,c_1+c_2+d-b,a+d).$$

Further,
$$\left((I_m+V^{-1})^{-1/2}U_1(I_m+V^{-1})^{-1/2},\,(I_m+V^{-1})^{-1/2}U_2(I_m+V^{-1})^{-1/2}\right)\sim D1(m,c_1,c_2;b+d)$$
if $a=c_1+c_2+d$, and
$$\left((I_m+V)^{-1/2}U_1(I_m+V)^{-1/2},\,(I_m+V)^{-1/2}U_2(I_m+V)^{-1/2}\right)\sim D1(m,c_1,c_2;a+d)$$
if $b=c_1+c_2+d$.

6 Distributions of Sum and Quotients

It is well known that if $(X_1,X_2)\sim D1(m,\nu_1,\nu_2;\nu_3)$, then $X_2^{-1/2}X_1X_2^{-1/2}$ and $(X_1+X_2)^{-1/2}X_1(X_1+X_2)^{-1/2}$ are independent of $X_1+X_2$. Further, (i) $X_2^{-1/2}X_1X_2^{-1/2}\sim B2(m,\nu_1,\nu_2)$, (ii) $(X_1+X_2)^{-1/2}X_1(X_1+X_2)^{-1/2}\sim B1(m,\nu_1,\nu_2)$, and (iii) $X_1+X_2\sim B1(m,\nu_1+\nu_2,\nu_3)$. In this section, we derive similar results when $X_1$ and $X_2$ have a bimatrix-variate hypergeometric function type I distribution.

Theorem 6.1. Let $(X_1,X_2)\sim H^{I}_m(\nu_1,\nu_2;\alpha,\beta,\gamma)$. Define $S=X_1+X_2$ and $Z=S^{-1/2}X_1S^{-1/2}$. Then, $Z$ and $S$ are independent, $Z\sim B1(m,\nu_1,\nu_2)$ and $S\sim H^{I}_m(\nu_1+\nu_2,\alpha,\beta,\gamma)$.

Proof. Transforming $S=X_1+X_2$ and $Z=S^{-1/2}X_1S^{-1/2}$ with the Jacobian $J(X_1,X_2\to Z,S)=\det(S)^{(m+1)/2}$ in (20), we obtain the joint p.d.f. of $Z$ and $S$ as
$$C_m(\nu_1,\nu_2;\alpha,\beta,\gamma)\det(Z)^{\nu_1-(m+1)/2}\det(I_m-Z)^{\nu_2-(m+1)/2}\det(S)^{\nu_1+\nu_2-(m+1)/2}\det(I_m-S)^{\gamma-(m+1)/2}\,{}_2F_1(\alpha,\beta;\gamma;I_m-S),$$
where $0<Z<I_m$ and $0<S<I_m$. Now, from the above factorization, it is clear that $Z$ and $S$ are independent, $Z\sim B1(m,\nu_1,\nu_2)$ and $S\sim H^{I}_m(\nu_1+\nu_2,\alpha,\beta,\gamma)$.

Corollary 6.1.1. Let $(X_1,X_2)\sim H^{I}_m(\nu_1,\nu_2;\alpha,\beta,\gamma)$. Then, $X_2^{-1/2}X_1X_2^{-1/2}\sim B2(m,\nu_1,\nu_2)$ and is independent of $X_1+X_2$.

Acknowledgements. The research work of DKN was supported by the Sistema Universitaria de Investigación, Universidad de Antioquia [project no. 2015-5323].
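For $m=1$, Theorems 5.2 and 6.1 can be illustrated together by simulation: sampling $Z$ from a beta and $(U_1,U_2)$ from a Dirichlet gives $(X_1,X_2)\sim H^{I}(c_1,c_2;b,c_1+c_2+d-a,b+d)$, after which $S=X_1+X_2$ and $R=X_1/S$ should be independent with the laws stated in Theorem 6.1. The sketch below (arbitrary parameter values, illustrative only) checks the sample correlation and the means:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c1, c2, d, n = 2.0, 1.0, 2.0, 3.0, 1.5, 200_000

# Theorem 5.2 with m = 1: (Z*U1, Z*U2) ~ H^I(c1, c2; b, c1+c2+d-a, b+d)
Z = rng.beta(a, b, n)
U = rng.dirichlet((c1, c2, d), n)
X1, X2 = Z * U[:, 0], Z * U[:, 1]

S = X1 + X2          # Theorem 6.1: S ~ H^I(c1+c2, b, c1+c2+d-a, b+d)
R = X1 / S           # ~ Beta(c1, c2), independent of S

# E[S] from the univariate mean formula of Section 3
nu_, al_, be_, ga_ = c1 + c2, b, c1 + c2 + d - a, b + d
es = nu_ * (nu_ + ga_ - al_ - be_) / ((nu_ + ga_ - al_) * (nu_ + ga_ - be_))
print(np.corrcoef(S, R)[0, 1], R.mean(), S.mean(), es)
```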

References

[1] A. G. Constantine, Some non-central distribution problems in multivariate analysis, Annals of Mathematical Statistics, 34 (1963), 1270-1285. https://doi.org/10.1214/aoms/1177703863

[2] A. K. Gupta and D. K. Nagar, Matrix Variate Distributions, Chapman & Hall/CRC, Boca Raton, 2000.

[3] A. K. Gupta and D. K. Nagar, Matrix-variate beta distribution, International Journal of Mathematics and Mathematical Sciences, 24 (2000), no. 7, 449-459. https://doi.org/10.1155/s0161171200002398

[4] Carl S. Herz, Bessel functions of matrix argument, Annals of Mathematics, 61 (1955), 474-523. https://doi.org/10.2307/1969810

[5] Alan T. James, Distributions of matrix variates and latent roots derived from normal samples, Annals of Mathematical Statistics, 35 (1964), 475-501. https://doi.org/10.1214/aoms/1177703550

[6] D. G. Kabe, On Subrahmaniam's conjecture for an integral involving zonal polynomials, Utilitas Mathematica, 15 (1979), 245-248.

[7] C. G. Khatri, Ravindra Khattree and Rameshwar D. Gupta, On a class of orthogonal invariant and residual independent matrix distributions, Sankhya Ser. B, 53 (1991), no. 1, 1-10.

[8] Y. L. Luke, The Special Functions and their Approximations, Vol. 1, Academic Press, New York, 1969.

[9] Robb J. Muirhead, Aspects of Multivariate Statistical Theory, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, Inc., New York, 1982. https://doi.org/10.1002/9780470316559

[10] Daya K. Nagar and José A. Alvarez, Properties of the hypergeometric function type I distribution, Advances and Applications in Statistics, 5 (2005), no. 3, 341-351.

[11] Daya K. Nagar and José A. Alvarez, Distribution of the product of independent hypergeometric function type I variables, International Journal of Applied Mathematics & Statistics, 13 (2008), no. M08, 46-56.

[12] Daya K. Nagar and Paula A. Bran-Cardona, Bivariate generalization of the hypergeometric function type I distribution, Far East Journal of Theoretical Statistics, 26 (2008), no. 1, 97-107.

[13] Kocherlakota Subrahmaniam, On some functions of matrix argument, Utilitas Mathematica, 3 (1973), 83-106.

Received: February 21, 2017; Published: March 11, 2017