A Study on the Correlation of Bivariate And Trivariate Normal Models


Florida International University, FIU Digital Commons
FIU Electronic Theses and Dissertations, University Graduate School

A Study on the Correlation of Bivariate And Trivariate Normal Models
Maria del Pilar Orjuela, Florida International University, pily.org@gmail.com

Follow this and additional works at: Part of the Multivariate Analysis Commons, and the Statistical Theory Commons.

Recommended Citation: Orjuela, Maria del Pilar, "A Study on the Correlation of Bivariate And Trivariate Normal Models" (2013). FIU Electronic Theses and Dissertations. Paper.

This work is brought to you for free and open access by the University Graduate School at FIU Digital Commons. It has been accepted for inclusion in FIU Electronic Theses and Dissertations by an authorized administrator of FIU Digital Commons. For more information, please contact dcc@fiu.edu.

FLORIDA INTERNATIONAL UNIVERSITY
Miami, Florida

A STUDY ON THE CORRELATION OF BIVARIATE AND TRIVARIATE NORMAL MODELS

A thesis submitted in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE in STATISTICS by Maria del Pilar Orjuela Garavito
2013

To: Dean Kenneth Furton, College of Arts and Sciences

This thesis, written by Maria del Pilar Orjuela Garavito, and entitled A Study on the Correlation of Bivariate and Trivariate Normal Models, having been approved in respect to style and intellectual content, is referred to you for judgment. We have read this thesis and recommend that it be approved.

Florence George
Kai Huang, Co-Major Professor
Jie Mi, Co-Major Professor

Date of Defense: November 1, 2013

The thesis of Maria del Pilar Orjuela Garavito is approved.

Dean Kenneth Furton, College of Arts and Sciences
Dean Lakshmi N. Reddi, University Graduate School

Florida International University, 2013

© Copyright 2013 by Maria del Pilar Orjuela Garavito. All rights reserved.

DEDICATION

To my parents.

ACKNOWLEDGMENTS

Thanks, everyone!

ABSTRACT OF THE THESIS

A STUDY ON THE CORRELATION OF BIVARIATE AND TRIVARIATE NORMAL MODELS

by Maria del Pilar Orjuela Garavito, Florida International University, 2013, Miami, Florida
Professor Jie Mi, Co-Major Professor; Professor Kai Huang, Co-Major Professor

Suppose two or more variables are jointly normally distributed. If there is a common relationship among these variables, it is important to quantify it by a parameter called the correlation coefficient, which measures its strength; from it one can develop an equation for prediction and ultimately draw testable conclusions about the parent population.

My research focused on the correlation coefficient ρ of the bivariate and trivariate normal distributions when equal variances and equal covariances are assumed. In particular, we derived the Maximum Likelihood Estimators (MLEs) of the distribution parameters, assuming all of them are unknown, and we studied the properties and asymptotic distribution of ρ̂. Having shown the asymptotic normality of the distribution of ρ̂, we were able to construct confidence intervals for the correlation coefficient ρ and to test hypotheses about ρ. With a series of simulations, the performance of our new estimators was studied and compared with that of estimators that already exist in the literature. The results indicated that the MLE performs as well as or better than the others.

Keywords: Bivariate Normal Distribution, Trivariate Normal Distribution, Correlation coefficient, MLE, Pearson Correlation Coefficient.

TABLE OF CONTENTS

1. INTRODUCTION
   Motivation
   Background and Theory
2. BIVARIATE NORMAL DISTRIBUTION
   General Discussion
   Case of Equal Variances
      Data and Model
      Likelihood Function
      Derivation of MLEs of Covariance Matrix Symmetric Bivariate Normal Parameters
      Properties of the MLEs of the Covariance Matrix Symmetric Bivariate Normal Parameters
      Sampling Distributions of MLEs of the Covariance Matrix Symmetric Bivariate Normal Parameters
      Discussion about the correlation coefficient ρ̂
      Variance Stabilizing transformation: The limiting distribution of the z-transformation of ρ̂
3. TRIVARIATE NORMAL DISTRIBUTION
   General Discussion
   Covariance Matrix Symmetric
      Model
      Likelihood Function
      Derivation of Maximum Likelihood Estimators
      Properties of the MLE of ρ
      Sampling Distribution of ρ̂ of Trivariate Normal Distribution
      The Limiting Distribution of ρ̂
      Variance-stabilizing transformation: The limiting distribution of the z-transformation of ρ̂
4. NUMERICAL STUDY ON THE CASE OF BIVARIATE NORMAL DISTRIBUTION
   Comparison between r̂n and ρ̂n
      Comparison Point Estimators of the Correlation Coefficient ρ
      Study on the Performance of the Estimators
   Confidence Interval for ρ
      Study on the Denominator for the Confidence Intervals
      Confidence Intervals
   Testing hypothesis
      4.3.1 Probability of a Type I error
      Power of the Test
5. NUMERICAL STUDY ON THE CASE OF TRIVARIATE NORMAL DISTRIBUTION
   Comparison Estimators of ρ
      Point Estimators
      Study on the performance of the estimators
   Confidence Intervals
      Study on the denominators
      Confidence Intervals
   Test of Hypothesis
      Probability Type I error
      Power of the test
Conclusion

LIST OF TABLES

4.1 Comparison Point Estimators
Comparison Bias Estimators
Comparison MSE Estimators
Comparison Denominators MLE using Confidence Interval in Expression (4.3)
Cont. Comparison Denominators MLE using Confidence Interval in Expression (4.3)
Comparison Denominators Pearson using Confidence Interval in Expression (4.3)
Cont. Comparison Denominators Pearson using Confidence Interval in Expression (4.3)
Comparison Denominators MLE using Fisher Transformation
Cont. Comparison Denominators MLE using Fisher Transformation
Comparison Denominators Pearson using Fisher Dist.
Cont. Comparison Denominators Pearson using Fisher Dist.
Comparison Confidence Intervals
Comparison Average Width CI
Comparison Type I Error
Power MLE when Ha: ρ1 = ρ ±
Power Pearson when Ha: ρ1 = ρ ±
Power MLE when Ha: ρ1 = ρ ±
Power Pearson when Ha: ρ1 = ρ ±
Comparison Point Estimators Trivariate
Comparison Bias of Estimators Trivariate
Comparison MSE of Estimators Trivariate
Comparison Denominators MLE Using Normal Trivariate
5.5 Comparison Denominators MLE Using Variance Stabilizing Transformation Trivariate
Confidence Interval for ρ Trivariate
Average Width of Confidence Interval for ρ Trivariate
Type I error Trivariate
Comparison Power when Ha: ρ1 = ρ0 ± 0.10 using MLE Trivariate
Comparison Power when Ha: ρ = ρ ± 0.05 using MLE Trivariate

LIST OF FIGURES

4.1 Comparison Point Estimators
Comparison Bias of Point Estimators
Comparison Absolute Value of Bias
Comparison MSE
Comparison Denominators for MLE using Confidence Interval in Expression (4.3)
Comparison Denominators for Pearson Estimator using Confidence Interval in Expression (4.3)
Comparison Denominators for MLE using Fisher Transformation
Comparison Denominators for Pearson Estimator using Fisher Transformation
Comparison Confidence Interval
Comparison Average Width of the 95% Confidence Interval
Comparison Type I error
Comparison Power Ha: ρ = ρ
Comparison Power Ha: ρ = ρ
Comparison Power Ha: ρ = ρ
Comparison Power Ha: ρ = ρ
Comparison Point Estimators Trivariate Normal Distribution
Comparison Bias Trivariate Normal Distribution
Comparison MSE Trivariate Normal Distribution
Comparison denominators using MLE in Expression
Comparison denominator MLE using variance stabilizing transformation
Comparison of CI using Fisher
Comparison Width 95% CI using Fisher
Comparison Type I error
5.9 Comparison Power when Ha: ρ1 = ρ
Comparison Power when Ha: ρ1 = ρ
Comparison Power when Ha: ρ1 = ρ
Comparison Power when Ha: ρ1 = ρ

CHAPTER 1
INTRODUCTION

1.1 Motivation

The Normal Distribution is one of the most common distributions used in a variety of fields, not only because it appears in theoretical work as an approximation of the distribution of many kinds of measurements, but also because many phenomena in nature follow the normal distribution. For instance, in the natural sciences, quantities such as people's heights and intelligence coefficients are approximately normally distributed (Clauser [2007]); in physics, the velocities of the molecules in an ideal gas, the position of a particle undergoing diffusion, and the ground state of a quantum harmonic oscillator are also normally distributed.

The Multivariate case of the Normal Distribution and, in particular, the Bivariate and Trivariate cases, play a fundamental role in Multivariate Analysis. Bivariate normal distributions have been shown to be important from the theoretical point of view and have been widely studied because of their several useful and elegant properties. However, the Trivariate case, although also important, has received much less attention.

Sometimes there is a relationship and dependence among the variables of a Multivariate Normal Distribution. For this reason, it would be very useful to quantify this relationship in a number called the correlation coefficient, measure its strength, develop an equation for prediction, and ultimately draw testable conclusions about the parent population. My thesis focuses on this measure of correlation, assuming that there is a common relationship in the Bivariate and Trivariate models when all the variables have a common variance.

1.2 Background and Theory

In the first half of the nineteenth century, scientists such as Laplace, Plana, Gauss and Bravais studied the Bivariate Normal distribution, and Seal and Lancaster contributed to these developments as well. But it was not until 1888 that Francis Galton, who first introduced the concept of a measure of correlation in bivariate data, re-described and recognized its mathematical definition and found some applications.

Inference regarding the correlation coefficient of two random variables from the Bivariate Normal Distribution has been investigated by many authors. In particular, Fisher (1915) and Hotelling (1953) derived exact forms of the density of the sample correlation coefficient, but obtaining confidence intervals from them was computationally intensive (1988). Fisher also derived his famous large-sample transformation for the limiting distribution of the bivariate correlation coefficient and, in 1921, obtained results for the correlation coefficient in small samples (1921). Other authors, such as Ruben (1966), established the distribution of the correlation coefficient in the form of a normal approximation, and Kraemer (1973) and Samiuddin (1970) in the form of t approximations. More recently, Sun and Wong (2007) used a likelihood-based higher-order asymptotic method to obtain confidence intervals that are very accurate even when the sample size is small.

Some other authors, such as Goria (1980), Masry (2011) and Paul (1988), studied the correlation coefficient when specific assumptions are imposed on the Bivariate Normal distribution. For example, Goria (1980) deduced two tests for the correlation coefficient in a bivariate normal population assuming that the ratio of the variances is known; Masry (2011) worked with the estimation of the correlation coefficient of bivariate data under dependence; and Paul (1988) proposed a method for testing the equality of several correlation coefficients for more than two independent random samples from bivariate normal populations.

Since very little attempt has been made in the literature to develop and evaluate statistics for testing the validity of the assumption of a common correlation with equal variances, commonly called covariance permutation-symmetric for the multivariate normal distribution, our task is to explore and study the correlation coefficient under this assumption in the bivariate and trivariate normal distributions.

The idea of our study is that, on the basis of a random sample $(X_1, Y_1),\dots,(X_n, Y_n)$ of n observations from a Bivariate Normal population with equal variances, we will find the estimators of the parameters assuming that all of them are unknown. In particular, we will derive the Maximum Likelihood Estimators of the distribution parameters, and we will study the asymptotic distribution of the correlation coefficient. Having shown the asymptotic normality of this distribution, we will then be able to construct confidence intervals for it and test hypotheses. The performance of our new estimator will be studied and compared with that of estimators that already exist in the literature.

In the case of the Trivariate Normal distribution, we derive the maximum likelihood estimator of the correlation coefficient from a random sample of n observations $(X_{11}, X_{12}, X_{13}),\dots,(X_{n1}, X_{n2}, X_{n3})$ as well, assuming that this sample comes from a covariance permutation-symmetric Trivariate Normal population. We will compare it with the estimator obtained from the average of pairwise bivariate Pearson correlation coefficients and with the average of pairwise MLEs of ρ.

The thesis is organized as follows. The derivation of the Maximum Likelihood Estimators of the parameters of the Bivariate Normal distribution under the assumption of equal variances is presented in Chapter 2. Moreover, a discussion about the correlation coefficient is given: its properties are stated and its sampling distribution is found. In Chapter 3 a similar procedure is carried out for the Trivariate Normal distribution under the same assumption of equal variances. Chapters 4 and 5 focus on numerical computations using the MLE of the correlation coefficient for the Bivariate and Trivariate cases respectively, comparing the performance of confidence regions, the probability of Type I error, and the power of tests for its true value.

CHAPTER 2
BIVARIATE NORMAL DISTRIBUTION

In Chapter 2 we study a particular case of the Bivariate Normal Distribution. Using the method of Maximum Likelihood Estimation, we find the estimators of the parameters of this special case, discuss their properties, and focus on the MLE of the correlation coefficient. The outline of the chapter is as follows. In Section 2.1 we introduce the probability density function of the Bivariate Normal Distribution and its MLEs. In Section 2.2 we discuss the probability density function of the Bivariate Normal Distribution with equal variances. In particular, in Section 2.2.2 we find the likelihood function and in Section 2.2.3 we derive the MLEs of the parameters. Later, in Section 2.2.4 we show properties of those MLEs, and finally, in Section 2.2.6, we focus on the study of the estimator of the correlation coefficient $\hat\rho$.

2.1 General Discussion

Let X and Y be two random variables. It is said that (X, Y) has a Bivariate Normal Distribution if the joint probability density function f(x, y) is of the form

$$f(x,y) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp\left\{-\frac{1}{2}Q(x,y)\right\}, \qquad -\infty < x, y < \infty \qquad (2.1)$$

where

$$Q(x,y) = \frac{1}{1-\rho^2}\left[\left(\frac{x-\mu_1}{\sigma_1}\right)^2 - 2\rho\left(\frac{x-\mu_1}{\sigma_1}\right)\left(\frac{y-\mu_2}{\sigma_2}\right) + \left(\frac{y-\mu_2}{\sigma_2}\right)^2\right] \qquad (2.2)$$

It can be shown that $E(X)=\mu_1$, $E(Y)=\mu_2$, $Var(X)=\sigma_1^2$, $Var(Y)=\sigma_2^2$

and ρ is the correlation between X and Y, where $-\infty<\mu_1,\mu_2<\infty$, $\sigma_1^2,\sigma_2^2>0$ and $-1\le\rho\le 1$.

Suppose that n observations $(x_1,y_1),(x_2,y_2),\dots,(x_n,y_n)$ are taken from this Bivariate Normal Distribution. The Maximum Likelihood Estimators, MLEs, of the parameters $\mu_1,\mu_2,\sigma_1,\sigma_2$ and ρ are:

$$\hat\mu_1 = \frac{1}{n}\sum_{i=1}^n x_i = \bar x \qquad (2.3)$$

$$\hat\mu_2 = \frac{1}{n}\sum_{i=1}^n y_i = \bar y \qquad (2.4)$$

$$\hat\sigma_1^2 = \frac{1}{n}\sum_{i=1}^n (x_i-\bar x)^2 \qquad (2.5)$$

$$\hat\sigma_2^2 = \frac{1}{n}\sum_{i=1}^n (y_i-\bar y)^2 \qquad (2.6)$$

$$r_n = \frac{\sum_{i=1}^n (x_i-\bar x)(y_i-\bar y)}{\sqrt{\sum_{i=1}^n (x_i-\bar x)^2}\sqrt{\sum_{i=1}^n (y_i-\bar y)^2}} \qquad (2.7)$$

It can be shown that the estimator $r_n$ of the correlation coefficient for the general case of the Bivariate Normal distribution has the following properties:

Properties of $r_n$

1. $\sqrt n\,(r_n-\rho) \to N\big(0,(1-\rho^2)^2\big)$
2. $\sqrt n\left(\dfrac12\ln\dfrac{1+r_n}{1-r_n} - \dfrac12\ln\dfrac{1+\rho}{1-\rho}\right) \to N(0,1)$. This is called Fisher's z-transformation.

2.2 Case of Equal Variances

2.2.1 Data and Model

Suppose (X, Y) is a random vector following a Bivariate Normal distribution $N_2(\mu,\Sigma)$, where $\mu=(\mu_1,\mu_2)$ and Σ is the $2\times 2$ variance-covariance matrix. In the present section we assume that $Var(X)=Var(Y)=\sigma^2$ and $Cov(X,Y)=\sigma^2\rho$. Then the joint probability density function for (X, Y) is given by

$$f(x,y;\mu,\sigma^2,\rho) = \frac{1}{2\pi\sigma^2\sqrt{1-\rho^2}}\exp\left\{-\frac{W(x,y)}{2\sigma^2(1-\rho^2)}\right\} \qquad (2.8)$$

where

$$W(x,y) = (x-\mu_1)^2 - 2\rho(x-\mu_1)(y-\mu_2) + (y-\mu_2)^2 \qquad (2.9)$$

provided that $-1<\rho<1$.

2.2.2 Likelihood Function

We want to find the MLEs of the unknown parameters of the distribution $\mu_1,\mu_2,\tau$ and ρ, where $\tau=\sigma^2$. For this purpose, let us assume $(x_1,y_1),(x_2,y_2),\dots,(x_n,y_n)$ is a random sample from this population. Since the observations are independent and identically distributed, we can write the likelihood function as

$$L(\mu,\tau,\rho;(\mathbf x,\mathbf y)) = \prod_{i=1}^n f(x_i,y_i;\mu,\tau,\rho) \qquad (2.10)$$

$$= \prod_{i=1}^n \frac{1}{2\pi\tau\sqrt{1-\rho^2}}\exp\left\{-\frac{Q(x_i,y_i)}{2\tau(1-\rho^2)}\right\} \qquad (2.11)$$

where $\mathbf x=(x_1,x_2,\dots,x_n)$, $\mathbf y=(y_1,y_2,\dots,y_n)$ and

$$Q(x_i,y_i) = (x_i-\mu_1)^2 - 2\rho(x_i-\mu_1)(y_i-\mu_2) + (y_i-\mu_2)^2. \qquad (2.12)$$

The likelihood function can be written as

$$L(\mu,\tau,\rho;(\mathbf x,\mathbf y)) = \frac{1}{(2\pi)^n\tau^n(1-\rho^2)^{n/2}}\exp\left\{-\frac{Q^*(\mathbf x,\mathbf y)}{2\tau(1-\rho^2)}\right\} \qquad (2.13)$$

where

$$Q^*(\mathbf x,\mathbf y) = \sum_{i=1}^n\left[(x_i-\mu_1)^2 - 2\rho(x_i-\mu_1)(y_i-\mu_2) + (y_i-\mu_2)^2\right]. \qquad (2.14)$$

2.2.3 Derivation of MLEs of Covariance Matrix Symmetric Bivariate Normal Parameters

Using the likelihood function we want to find the values $\hat\mu_1,\hat\mu_2,\hat\tau,\hat\rho$ that maximize L. For this purpose it is convenient to work with the natural logarithm of the likelihood function (since the logarithm is a monotone function),

$$\ln L = -n\ln(2\pi) - n\ln\tau - \frac n2\ln(1-\rho^2) - \frac{1}{2\tau(1-\rho^2)}\sum_{i=1}^n\left[(x_i-\mu_1)^2 - 2\rho(x_i-\mu_1)(y_i-\mu_2) + (y_i-\mu_2)^2\right] \qquad (2.15)$$

and solving the system of equations

$$\frac{\partial\ln L}{\partial\mu_1} = \frac{\partial\ln L}{\partial\mu_2} = \frac{\partial\ln L}{\partial\tau} = \frac{\partial\ln L}{\partial\rho} = 0 \qquad (2.16)$$

we can obtain the desired estimators. The equations in (2.16) can be written as

$$\frac{\partial\ln L}{\partial\mu_1} = -\frac{1}{2\tau(1-\rho^2)}\sum_{i=1}^n\left[-2(x_i-\mu_1)+2\rho(y_i-\mu_2)\right] = 0 \qquad (2.17)$$

$$\frac{\partial\ln L}{\partial\mu_2} = -\frac{1}{2\tau(1-\rho^2)}\sum_{i=1}^n\left[2\rho(x_i-\mu_1)-2(y_i-\mu_2)\right] = 0 \qquad (2.18)$$

$$\frac{\partial\ln L}{\partial\tau} = -\frac n\tau + \frac{\sum_{i=1}^n\left[(x_i-\mu_1)^2+(y_i-\mu_2)^2-2\rho(x_i-\mu_1)(y_i-\mu_2)\right]}{2(1-\rho^2)\tau^2} = 0 \qquad (2.19)$$

$$\frac{\partial\ln L}{\partial\rho} = \frac{n\rho}{1-\rho^2} + \frac{\left[2\sum_{i=1}^n(x_i-\mu_1)(y_i-\mu_2)\right]2\tau(1-\rho^2) - 2\tau(2\rho)\sum_{i=1}^n\left[(x_i-\mu_1)^2+(y_i-\mu_2)^2-2\rho(x_i-\mu_1)(y_i-\mu_2)\right]}{4\tau^2(1-\rho^2)^2} = 0 \qquad (2.20)$$

Simplifying equations (2.17) and (2.18), we have

$$2\sum_{i=1}^n(x_i-\mu_1) - 2\rho\sum_{i=1}^n(y_i-\mu_2) = 0 \qquad (2.21)$$

$$-2\rho\sum_{i=1}^n(x_i-\mu_1) + 2\sum_{i=1}^n(y_i-\mu_2) = 0 \qquad (2.22)$$

Multiplying equation (2.21) by ρ and adding it to equation (2.22), we obtain

$$2(1-\rho^2)\sum_{i=1}^n(y_i-\mu_2) = 0 \qquad (2.23)$$

from where, since $\rho\ne\pm1$,

$$\sum_{i=1}^n(y_i-\mu_2) = 0 \qquad (2.24)$$

which implies that

$$\hat\mu_2 = \frac{\sum_{i=1}^n y_i}{n} = \bar y \qquad (2.25)$$

In a similar way, we obtain

$$\hat\mu_1 = \frac{\sum_{i=1}^n x_i}{n} = \bar x \qquad (2.26)$$

Now, in order to obtain $\hat\rho$ and $\hat\tau$, let us use equations (2.19) and (2.20). Putting (2.19) over a common denominator and replacing $\mu_1$ by $\bar x$ and $\mu_2$ by $\bar y$, we get

$$\frac{-2n\tau(1-\rho^2) + \sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right] - 2\rho\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)}{2\tau^2(1-\rho^2)} = 0 \qquad (2.27)$$

and multiplying it by $2\tau^2(1-\rho^2)$ we obtain

$$-2n\tau(1-\rho^2) + \sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right] - 2\rho\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y) = 0 \qquad (2.28)$$

Similarly, simplifying equation (2.20) and replacing $\mu_1$ and $\mu_2$ by their estimators, we have

$$\frac{n\rho}{1-\rho^2} + \frac{\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)}{\tau(1-\rho^2)} - \frac{\rho\sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right]}{\tau(1-\rho^2)^2} + \frac{2\rho^2\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)}{\tau(1-\rho^2)^2} = 0 \qquad (2.29)$$

and multiplying it by $\tau(1-\rho^2)^2$ and combining like terms, equation (2.29) becomes

$$n\tau\rho(1-\rho^2) - \rho\sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right] + (1-\rho^2+2\rho^2)\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y) = 0 \qquad (2.30)$$

which is the same as

$$n\tau\rho(1-\rho^2) - \rho\sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right] + (1+\rho^2)\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y) = 0 \qquad (2.31)$$

Multiplying equation (2.28) by ρ and equation (2.31) by 2 and adding them up, we get

$$-\rho\sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right] + 2\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y) = 0 \qquad (2.32)$$

from where

$$\hat\rho = \frac{2\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)}{\sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right]} \qquad (2.33)$$

which is different from $\hat r_n$ obtained in (2.7). Finally, solving for τ we obtain

$$\hat\tau = \frac{\sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2-2\hat\rho(x_i-\bar x)(y_i-\bar y)\right]}{2n(1-\hat\rho^2)} \qquad (2.34)$$

which is equivalent to

$$\hat\tau = \frac{\sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right]}{2n} \qquad (2.35)$$

and thus

$$\hat\sigma = \sqrt{\frac{\sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right]}{2n}} \qquad (2.36)$$

24 from where ˆρ = which is different from ˆr n obtained in (2.7). 2 n (x i x)(y i y) n [ (xi x) 2 + (y i y) 2] (2.33) Finally, solving for τ we obtain that ] n [ (x i x) 2 + (y i y) 2 2ˆρ(x i x)(y i y) ˆτ = 2n(1 ˆρ 2 ) (2.34) which is equivalent to ˆτ = n [ (xi x) 2 + (y i y) 2] 2n (2.35) and thus ˆσ = n [ (xi x) 2 + (y i y) 2] 2n (2.36) Properties of the MLEs of the Covariance Matrix Symmetric Bivariate Normal Parameters 1. Mean of ˆµ 1, ˆµ 2 : The estimators of ˆµ 1 and ˆµ 2 in our model are unbiased estimators. It can be easily proven: Proof. [ n E [ˆµ 1 ] = E X ] i n [ ] = 1 n E X i = 1 n E [X i ] 11

25 and using the fact that E [X i ] = µ 1 then E [ˆµ 1 ] = 1 n µ 1 = 1 n nµ 1 = µ 1 Therefore E [ˆµ 1 ] = µ 1 which means that ˆµ 1 is an unbiased estimator of µ 1 Similarly, ˆµ 2 is an unbiased estimator of µ Mean of ˆµ 1 ˆµ 2 : It is easy to show that the mean of ˆµ 1 ˆµ 2 is just µ 1 µ Variance of ˆµ 1, ˆµ 2 : The variance of ˆµ 1 and ˆµ 1 is σ2 n Proof. V ar (ˆµ ) ( n 1 = V ar X ) i n = 1 ( ) n V ar X 2 i Since X i, i = 1,..., n are independent, then Therefore V ar(ˆµ 1 ) = σ2 n V ar (ˆµ 1 ) = 1 n 2 V ar(x i ) = 1 n 2 = 1 n 2 nσ2 = σ2 n σ 2 12

4. Variance of $\hat\mu_1-\hat\mu_2$: The variance of $\hat\mu_1-\hat\mu_2$ is given by $\dfrac{2\sigma^2(1-\rho)}{n}$.

Proof.

$$Var(\hat\mu_1-\hat\mu_2) = Var(\hat\mu_1)+Var(\hat\mu_2)-2\,Cov(\hat\mu_1,\hat\mu_2) = \frac{\sigma^2}n+\frac{\sigma^2}n - 2\,Cov\left(\frac{\sum_{i=1}^n X_i}{n},\frac{\sum_{j=1}^n Y_j}{n}\right) = \frac{2\sigma^2}n - \frac{2}{n^2}\sum_{i=1}^n\sum_{j=1}^n Cov(X_i,Y_j)$$

Since $Cov(X_i,Y_j)=\sigma^2\rho$ if $i=j$ and $Cov(X_i,Y_j)=0$ if $i\ne j$,

$$Var(\hat\mu_1-\hat\mu_2) = \frac{2\sigma^2}n - \frac{2}{n^2}\,n\sigma^2\rho = \frac{2\sigma^2}n - \frac{2\sigma^2\rho}n = \frac{2\sigma^2(1-\rho)}n$$

2.2.5 Sampling Distributions of MLEs of the Covariance Matrix Symmetric Bivariate Normal Parameters

1. The distribution of $\hat\mu_1-\hat\mu_2$: Since $\hat\mu_1-\hat\mu_2$ is a linear combination of $(X_i,Y_i)$, $i=1,\dots,n$, and those come from a Bivariate Normal population, the linear combination is also normally distributed. From the properties seen in Section 2.2.4 we can then conclude that $\hat\mu_1-\hat\mu_2 \sim N\left(\mu_1-\mu_2,\ \dfrac{2\sigma^2(1-\rho)}{n}\right)$.
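The formula $Var(\hat\mu_1-\hat\mu_2)=2\sigma^2(1-\rho)/n$ can be checked by simulation. The sketch below is ours, with arbitrary parameter values; it generates correlated normal pairs by mixing two independent standard normals so that $Cov(X,Y)=\sigma^2\rho$.

```python
import math
import random

def var_of_mean_diff(n=50, reps=4000, sigma=1.0, rho=0.6, seed=1):
    """Monte Carlo estimate of Var(mu1_hat - mu2_hat) in the equal-variance
    bivariate normal model; pairs are built as X = sigma*Z1 and
    Y = sigma*(rho*Z1 + sqrt(1 - rho^2)*Z2) with Z1, Z2 independent N(0, 1)."""
    rng = random.Random(seed)
    c = math.sqrt(1 - rho ** 2)
    diffs = []
    for _ in range(reps):
        sx = sy = 0.0
        for _ in range(n):
            z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
            sx += sigma * z1
            sy += sigma * (rho * z1 + c * z2)
        diffs.append(sx / n - sy / n)
    m = sum(diffs) / reps
    return sum((d - m) ** 2 for d in diffs) / (reps - 1)

# theoretical value for these defaults: 2 * 1.0 * (1 - 0.6) / 50 = 0.016
```

With the default settings the simulated variance should land close to 0.016, and it grows as ρ decreases, exactly as the factor (1 - ρ) predicts.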

2.2.6 Discussion about the correlation coefficient $\hat\rho$

In this subsection we discuss some properties and the limiting distribution of the estimator of the correlation coefficient ρ that was found in Section 2.2.3. Recall that for the general case of the Bivariate Normal Distribution, the estimator $\hat r_n$ of the correlation coefficient ρ is given by

$$\hat r_n = \frac{\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)}{\sqrt{\left(\sum_{i=1}^n(x_i-\bar x)^2\right)\left(\sum_{i=1}^n(y_i-\bar y)^2\right)}} \qquad (2.37)$$

while the estimator of the correlation coefficient in the case of equal variances was found to be

$$\hat\rho_n = \frac{2\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)}{\sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right]} \qquad (2.38)$$

Properties of $\hat\rho_n$

1. $-1\le\hat\rho_n\le 1$

Proof. The inequality $\hat\rho_n\le 1$ is equivalent to

$$2\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y) \le \sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2\right]$$

that is, to

$$\sum_{i=1}^n\left[(x_i-\bar x)^2+(y_i-\bar y)^2-2(x_i-\bar x)(y_i-\bar y)\right] \ge 0$$

i.e.

$$\sum_{i=1}^n\left[(x_i-\bar x)-(y_i-\bar y)\right]^2 \ge 0$$

which certainly holds. In a similar way, $\hat\rho_n\ge-1$ follows from $\sum_{i=1}^n\left[(x_i-\bar x)+(y_i-\bar y)\right]^2\ge 0$.

2. $P(-1<\hat\rho_n<1) = 1$

Proof.

$$P(\hat\rho_n=-1) = P\left(\sum_{i=1}^n\left[(X_i-\bar X)+(Y_i-\bar Y)\right]^2 = 0\right) \le P\left((X_1-\bar X)+(Y_1-\bar Y) = 0\right) = 0$$

since the observations are jointly continuous; the case $\hat\rho_n=1$ is handled in the same way.

3. $\hat\rho_n$ and $\hat r_n$ have the same sign, and $|\hat\rho_n|\le|\hat r_n|$.

Proof. Both estimators equal the numerator $\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)$ multiplied by a positive quantity, so they share its sign. The desired inequality is equivalent to

$$\frac{2\left|\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)\right|}{\sum_{i=1}^n(x_i-\bar x)^2+\sum_{i=1}^n(y_i-\bar y)^2} \le \frac{\left|\sum_{i=1}^n(x_i-\bar x)(y_i-\bar y)\right|}{\sqrt{\left(\sum_{i=1}^n(x_i-\bar x)^2\right)\left(\sum_{i=1}^n(y_i-\bar y)^2\right)}}$$

that is, to

$$4\left(\sum_{i=1}^n(x_i-\bar x)^2\right)\left(\sum_{i=1}^n(y_i-\bar y)^2\right) \le \left[\sum_{i=1}^n(x_i-\bar x)^2+\sum_{i=1}^n(y_i-\bar y)^2\right]^2$$

which reduces to

$$\left(\sum_{i=1}^n(x_i-\bar x)^2-\sum_{i=1}^n(y_i-\bar y)^2\right)^2 \ge 0$$

which is certainly true. Therefore $|\hat\rho_n|\le|\hat r_n|$.

The limiting distribution of $\hat\rho_n$

In the present subsection we will show that as the sample size goes to infinity ($n\to\infty$) it holds that

$$\sqrt n\,(\hat\rho_n-\rho) \to N(0,\gamma^2)$$

in distribution, where the constant $\gamma^2$ will be determined later. To show this asymptotic normality the following results are needed.

Result 2.2.1. $\sqrt n\,(\bar X_n-\mu_1)^2 \to 0$ in probability as $n\to\infty$.

Proof. For any fixed $\epsilon>0$, by Chebychev's inequality it holds that

$$P\left(\sqrt n\,(\bar X_n-\mu_1)^2 > \epsilon\right) \le \frac{E\left[\sqrt n\,(\bar X_n-\mu_1)^2\right]}{\epsilon} = \frac{\sqrt n\,Var(\bar X_n)}{\epsilon} = \frac{\sqrt n\,\sigma^2}{n\epsilon} = \frac{\sigma^2}{\sqrt n\,\epsilon} \to 0 \quad\text{as } n\to\infty$$

Result 2.2.2. $\sqrt n\,(\bar Y_n-\mu_2)^2 \to 0$ in probability as $n\to\infty$. It can be shown in the same way as Result 2.2.1.

Result 2.2.3. $\sqrt n\,(\bar X_n-\mu_1)(\bar Y_n-\mu_2) \to 0$ in probability as $n\to\infty$.

Proof. For any given $\epsilon>0$ we have

$$P\left(\sqrt n\left|(\bar X_n-\mu_1)(\bar Y_n-\mu_2)\right| > \epsilon\right) \le \frac{\sqrt n}{\epsilon}\,E\left|(\bar X_n-\mu_1)(\bar Y_n-\mu_2)\right| \le \frac{\sqrt n}{\epsilon}\sqrt{E(\bar X_n-\mu_1)^2\,E(\bar Y_n-\mu_2)^2} = \frac{\sqrt n}{\epsilon}\cdot\frac{\sigma^2}{n} = \frac{\sigma^2}{\epsilon\sqrt n} \to 0$$

as $n\to\infty$, where the first inequality follows from Chebychev's inequality and the second one from the Cauchy-Schwarz inequality.

Lemma 2.2.4. As $n\to\infty$, the limiting distribution of

$$\left(\sqrt n\left[\frac1n\sum_{i=1}^n(X_i-\bar X)(Y_i-\bar Y)-Cov(X,Y)\right],\ \sqrt n\left[\frac1n\sum_{i=1}^n(X_i-\bar X)^2-\sigma^2\right],\ \sqrt n\left[\frac1n\sum_{i=1}^n(Y_i-\bar Y)^2-\sigma^2\right]\right)$$

is the same as that of

$$\left(\sqrt n\left[\frac1n\sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2)-Cov(X,Y)\right],\ \sqrt n\left[\frac1n\sum_{i=1}^n(X_i-\mu_1)^2-\sigma^2\right],\ \sqrt n\left[\frac1n\sum_{i=1}^n(Y_i-\mu_2)^2-\sigma^2\right]\right)$$

Proof. It is easy to see that

$$\sum_{i=1}^n(X_i-\bar X)(Y_i-\bar Y) = \sum_{i=1}^n\left[(X_i-\mu_1)-(\bar X-\mu_1)\right]\left[(Y_i-\mu_2)-(\bar Y-\mu_2)\right] = \sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2) - n(\bar X-\mu_1)(\bar Y-\mu_2). \qquad (2.39)$$

It yields

$$\sqrt n\left[\frac1n\sum_{i=1}^n(X_i-\bar X)(Y_i-\bar Y)-Cov(X,Y)\right] = \sqrt n\left[\frac1n\sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2)-Cov(X,Y)\right] - \sqrt n\,(\bar X-\mu_1)(\bar Y-\mu_2)$$

In the same manner we can obtain

$$\sqrt n\left[\frac1n\sum_{i=1}^n(X_i-\bar X)^2-\sigma^2\right] = \sqrt n\left[\frac1n\sum_{i=1}^n(X_i-\mu_1)^2-\sigma^2\right] - \sqrt n\,(\bar X-\mu_1)^2$$

and

$$\sqrt n\left[\frac1n\sum_{i=1}^n(Y_i-\bar Y)^2-\sigma^2\right] = \sqrt n\left[\frac1n\sum_{i=1}^n(Y_i-\mu_2)^2-\sigma^2\right] - \sqrt n\,(\bar Y-\mu_2)^2$$

Using the Results obtained above, we can conclude that the desired result is true.

Lemma 2.2.5. The limiting distribution of $\sqrt n\,(\hat\rho_n-\rho)$ is the same as that of $\sqrt n\,(\tilde\rho_n-\rho)$, where

$$\tilde\rho_n = \frac{2\sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2)}{\sum_{i=1}^n(X_i-\mu_1)^2+\sum_{i=1}^n(Y_i-\mu_2)^2} \qquad (2.40)$$

Proof. Obviously

$$\frac{\sqrt n\,(\hat\rho_n-\tilde\rho_n)}{2} = \sqrt n\left[\frac{\sum_{i=1}^n(X_i-\bar X)(Y_i-\bar Y)}{\sum_{i=1}^n(X_i-\bar X)^2+\sum_{i=1}^n(Y_i-\bar Y)^2} - \frac{\sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2)}{\sum_{i=1}^n(X_i-\mu_1)^2+\sum_{i=1}^n(Y_i-\mu_2)^2}\right]$$

Adding and subtracting the quantity

$$\frac{\sqrt n\,\sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2)}{\sum_{i=1}^n(X_i-\bar X)^2+\sum_{i=1}^n(Y_i-\bar Y)^2},$$

then we get

$$\frac{\sqrt n\,(\hat\rho_n-\tilde\rho_n)}{2} = \frac{\sqrt n\left[\sum_{i=1}^n(X_i-\bar X)(Y_i-\bar Y) - \sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2)\right]}{\sum_{i=1}^n(X_i-\bar X)^2+\sum_{i=1}^n(Y_i-\bar Y)^2} + \sqrt n\,\sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2)\left[\frac{1}{\sum_{i=1}^n(X_i-\bar X)^2+\sum_{i=1}^n(Y_i-\bar Y)^2} - \frac{1}{\sum_{i=1}^n(X_i-\mu_1)^2+\sum_{i=1}^n(Y_i-\mu_2)^2}\right]$$

after grouping and factoring the two differences.

Thus, applying the equality proved above in Equation (2.39),

$$\frac{\sqrt n\,(\hat\rho_n-\tilde\rho_n)}{2} = -\frac{\sqrt n\,n(\bar X-\mu_1)(\bar Y-\mu_2)}{\sum_{i=1}^n(X_i-\bar X)^2+\sum_{i=1}^n(Y_i-\bar Y)^2} + \frac{n\sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2)}{\left[\sum_{i=1}^n(X_i-\bar X)^2+\sum_{i=1}^n(Y_i-\bar Y)^2\right]\left[\sum_{i=1}^n(X_i-\mu_1)^2+\sum_{i=1}^n(Y_i-\mu_2)^2\right]}\cdot\sqrt n\left[(\bar X-\mu_1)^2+(\bar Y-\mu_2)^2\right] \qquad (2.41)$$

Note that as $n\to\infty$,

$$\frac1n\sum_{i=1}^n(X_i-\bar X)^2 = \frac1n\sum_{i=1}^n(X_i-\mu_1)^2 - (\bar X-\mu_1)^2 \to \sigma^2$$

with probability one, and likewise

$$\frac1n\sum_{i=1}^n(Y_i-\bar Y)^2 = \frac1n\sum_{i=1}^n(Y_i-\mu_2)^2 - (\bar Y-\mu_2)^2 \to \sigma^2$$

with probability one. Thus the first term in (2.41),

$$\frac{\sqrt n\,n(\bar X-\mu_1)(\bar Y-\mu_2)}{\sum_{i=1}^n(X_i-\bar X)^2+\sum_{i=1}^n(Y_i-\bar Y)^2} = \frac{\sqrt n\,(\bar X-\mu_1)(\bar Y-\mu_2)}{\frac1n\left[\sum_{i=1}^n(X_i-\bar X)^2+\sum_{i=1}^n(Y_i-\bar Y)^2\right]} \to 0$$

in probability by Result 2.2.3. In the same way, it can be shown that as $n\to\infty$ the second term in (2.41) converges to zero in probability.

Summarizing the above, we have shown that $\sqrt n\,(\tilde\rho_n-\hat\rho_n)\to 0$ in probability as $n\to\infty$. This further implies that the limiting distribution of $\sqrt n\,(\hat\rho_n-\rho)$ is the same as that of $\sqrt n\,(\tilde\rho_n-\rho)$.

Lemma 2.2.6. As $n\to\infty$ it is true that

$$\left(\sqrt n\,(\xi_n-\rho\sigma^2),\ \sqrt n\,(\eta_n-\sigma^2),\ \sqrt n\,(\zeta_n-\sigma^2)\right) \to N(0,\Sigma)$$

in distribution, where

$$\Sigma = \sigma^4\begin{pmatrix} 1+\rho^2 & 2\rho & 2\rho \\ 2\rho & 2 & 2\rho^2 \\ 2\rho & 2\rho^2 & 2 \end{pmatrix}$$

and $\xi_n$, $\eta_n$ and $\zeta_n$ are three random variables defined as follows:

$$\xi_n = \frac1n\sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2), \qquad \eta_n = \frac1n\sum_{i=1}^n(X_i-\mu_1)^2, \qquad \zeta_n = \frac1n\sum_{i=1}^n(Y_i-\mu_2)^2$$

Proof. The Multivariate Central Limit Theorem implies that $\left(\sqrt n(\xi_n-\rho\sigma^2),\sqrt n(\eta_n-\sigma^2),\sqrt n(\zeta_n-\sigma^2)\right)\to N(0,\Sigma)$ in distribution, where Σ is the covariance matrix of the random vector $\left((X_i-\mu_1)(Y_i-\mu_2),\ (X_i-\mu_1)^2,\ (Y_i-\mu_2)^2\right)$. Therefore it is sufficient to obtain the matrix $\Sigma=(\sigma_{jk})_{1\le j,k\le 3}$. It can be shown that

$$\sigma_{11} = (1+\rho^2)\sigma^4, \qquad \sigma_{12} = \sigma_{13} = 2\rho\sigma^4, \qquad \sigma_{22} = \sigma_{33} = 2\sigma^4, \qquad \sigma_{23} = 2\rho^2\sigma^4$$

and so

$$\Sigma = \sigma^4\begin{pmatrix} 1+\rho^2 & 2\rho & 2\rho \\ 2\rho & 2 & 2\rho^2 \\ 2\rho & 2\rho^2 & 2 \end{pmatrix}$$

Theorem 2.2.7. As $n\to\infty$, it holds that $\sqrt n\,(\hat\rho_n-\rho)\to N(0,\gamma^2)$ in distribution, where $\gamma^2=(1-\rho^2)^2$.

Proof. According to Lemma 2.2.5, it suffices to show that as $n\to\infty$, $\sqrt n\,(\tilde\rho_n-\rho)\to N(0,\gamma^2)$ in distribution. Clearly $\tilde\rho_n$ can be expressed in terms of $\xi_n,\eta_n,\zeta_n$ as

$$\tilde\rho_n = \frac{\frac2n\sum_{i=1}^n(X_i-\mu_1)(Y_i-\mu_2)}{\frac1n\sum_{i=1}^n(X_i-\mu_1)^2+\frac1n\sum_{i=1}^n(Y_i-\mu_2)^2} = \frac{2\xi_n}{\eta_n+\zeta_n}$$

Define the function $g(u,v,w)$ by

$$g(u,v,w) = \frac{2u}{v+w}$$

then obviously $\tilde\rho_n = g(\xi_n,\eta_n,\zeta_n)$. It is easy to see that

$$\left(\frac{\partial g}{\partial u},\frac{\partial g}{\partial v},\frac{\partial g}{\partial w}\right) = \left(\frac{2}{v+w},\ -\frac{2u}{(v+w)^2},\ -\frac{2u}{(v+w)^2}\right)$$

Evaluating it at $u=\rho\sigma^2$, $v=\sigma^2$, $w=\sigma^2$ we have

$$\left(\frac{\partial g}{\partial u},\frac{\partial g}{\partial v},\frac{\partial g}{\partial w}\right)\bigg|_{(\rho\sigma^2,\sigma^2,\sigma^2)} = \left(\frac{1}{\sigma^2},\ -\frac{\rho}{2\sigma^2},\ -\frac{\rho}{2\sigma^2}\right) = \frac{1}{\sigma^2}\left(1,\ -\frac\rho2,\ -\frac\rho2\right)$$

By the delta method it follows that

$$\sqrt n\left(g(\xi_n,\eta_n,\zeta_n) - g(\rho\sigma^2,\sigma^2,\sigma^2)\right) \to N(0,\gamma^2)$$

in distribution, where

$$\gamma^2 = \frac{1}{\sigma^4}\left(1,-\frac\rho2,-\frac\rho2\right)\Sigma\begin{pmatrix}1\\-\frac\rho2\\-\frac\rho2\end{pmatrix} = \left(1,-\frac\rho2,-\frac\rho2\right)\begin{pmatrix} 1+\rho^2 & 2\rho & 2\rho \\ 2\rho & 2 & 2\rho^2 \\ 2\rho & 2\rho^2 & 2 \end{pmatrix}\begin{pmatrix}1\\-\frac\rho2\\-\frac\rho2\end{pmatrix} = (1-\rho^2)^2$$

Therefore, as $n\to\infty$, $\sqrt n\,(\tilde\rho_n-\rho)\to N(0,\gamma^2)$ and consequently $\sqrt n\,(\hat\rho_n-\rho)\to N(0,\gamma^2)$ in distribution.

The result of Theorem 2.2.7 can be expressed as

$$\hat\rho_n \sim N\left(\rho,\ \frac{(1-\rho^2)^2}{n}\right) \quad\text{approximately.}$$

Notice that the variance of this distribution involves the parameter ρ. How can we find a distribution whose variance is parameter-free? In the next section we address this problem.
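The limiting law of the Theorem suggests a simple Wald-type interval, obtained by plugging the estimate into the asymptotic standard deviation $(1-\rho^2)/\sqrt n$. The sketch below is ours, not from the thesis, and is only a large-sample approximation.

```python
import math

def rho_ci_wald(rho_hat, n, z=1.96):
    """Approximate CI for rho from sqrt(n)(rho_hat - rho) -> N(0, (1 - rho^2)^2),
    estimating the unknown variance by (1 - rho_hat^2)^2.
    Endpoints are clipped to the admissible range [-1, 1]."""
    half = z * (1 - rho_hat ** 2) / math.sqrt(n)
    return max(-1.0, rho_hat - half), min(1.0, rho_hat + half)

lo, hi = rho_ci_wald(0.5, 100)   # half-width 1.96 * 0.75 / 10 = 0.147
```

Unlike the variance-stabilized interval of the next section, this one can hit the boundary of [-1, 1] when the estimate is close to 1 in magnitude, which is one motivation for the z-transformation.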

2.2.7 Variance Stabilizing Transformation: The limiting distribution of the z-transformation of $\hat\rho$

The Variance Stabilizing Transformation is used when the constant-variance assumption is violated, that is, when $Var(W)$, where W is a random variable, is a function of $E[W]=\mu$. In particular, the transformation assumes that $Var(W)=g^2(\mu)$, where the function $g(\cdot)$ is known. The idea is to find a (differentiable) function $h(\cdot)$ so that the transformed random variable $h(W)$ will have a variance which is approximately independent of its mean $E[h(W)]$. The delta method yields the following approximation:

$$Var(h(W)) \approx [h'(\mu)]^2 g^2(\mu) \qquad (2.42)$$

Then the desired function $h(\cdot)$ is the solution of the differential equation above. For our case, we need to find a function h such that

$$\sqrt n\left(h(\hat\rho_n)-h(\rho)\right) \to N(0,1)$$

Therefore, applying the Variance Stabilizing Transformation we have

$$\left(h'(\rho)\right)^2(1-\rho^2)^2 = 1 \qquad (2.43)$$

from where, solving for h,

$$h(\rho) = \int\frac{1}{1-\rho^2}\,d\rho = \int\left[\frac{1}{2(1-\rho)}+\frac{1}{2(1+\rho)}\right]d\rho = -\frac12\ln(1-\rho)+\frac12\ln(1+\rho)$$

so that

$$h(\rho) = \frac12\ln\left(\frac{1+\rho}{1-\rho}\right) \qquad (2.44)$$

Thus, we can conclude that

$$\sqrt n\left(\frac12\ln\left(\frac{1+\hat\rho}{1-\hat\rho}\right) - \frac12\ln\left(\frac{1+\rho}{1-\rho}\right)\right) \to N(0,1) \qquad (2.45)$$

The transformation expressed in Equation (2.44) is commonly called the Fisher Transformation.
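As a sanity check (ours, not in the thesis), the defining property (2.43) of the transformation h in (2.44) can be verified numerically with a central difference: $h'(\rho)(1-\rho^2)$ should equal 1 at every ρ in (-1, 1).

```python
import math

def h(rho):
    """Variance-stabilizing (Fisher) transformation of Eq. (2.44)."""
    return 0.5 * math.log((1 + rho) / (1 - rho))

def check_ode(rho, eps=1e-6):
    """Numerically verify Eq. (2.43): h'(rho) * (1 - rho^2) should be 1."""
    deriv = (h(rho + eps) - h(rho - eps)) / (2 * eps)   # central difference
    return deriv * (1 - rho ** 2)

# the product should be 1 across the whole admissible range
values = [check_ode(r) for r in (-0.9, -0.5, 0.0, 0.5, 0.9)]
```

The constant result across very different ρ values is exactly what "variance stabilizing" means: after the transformation, the asymptotic variance no longer depends on ρ.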

CHAPTER 3
TRIVARIATE NORMAL DISTRIBUTION

In this chapter we introduce the Trivariate Normal Distribution and describe its probability density function. In particular, making the assumptions of common variance and common correlation coefficient, we derive its Maximum Likelihood Estimators and focus our study on the estimation of the correlation coefficient ρ. Chapter 3 is developed as follows. In Section 3.1 we introduce the Trivariate Normal Distribution and its probability density function. In Section 3.2 we describe our particular case of the Covariance Matrix Symmetric Trivariate Normal Distribution, giving its probability density function and deriving its estimators using the method of Maximum Likelihood.

3.1 General Discussion

A three-dimensional random variable $(Y_1,Y_2,Y_3)$ is said to have a Trivariate Normal Distribution if the joint probability density function can be represented in matrix form as

$$f(\mathbf y) = \frac{1}{(2\pi)^{3/2}|\Sigma|^{1/2}}\exp\left\{-\frac{(\mathbf y-\boldsymbol\mu)^T\Sigma^{-1}(\mathbf y-\boldsymbol\mu)}{2}\right\} \qquad (3.1)$$

where $\mathbf y=(y_1,y_2,y_3)$, $\boldsymbol\mu=(\mu_1,\mu_2,\mu_3)$, and Σ is a $3\times3$, positive definite and symmetric variance-covariance matrix of the form

$$\Sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{12} & \sigma_{22} & \sigma_{23} \\ \sigma_{13} & \sigma_{23} & \sigma_{33} \end{pmatrix} \qquad (3.2)$$

and $|\Sigma|>0$ is the determinant of Σ. The density function of a Trivariate Normal Distribution can also be written as

$$f(y_1,y_2,y_3) = \frac{1}{(2\pi)^{3/2}\sqrt{\sigma_{11}\sigma_{22}\sigma_{33}+2\sigma_{12}\sigma_{13}\sigma_{23}-\sigma_{12}^2\sigma_{33}-\sigma_{13}^2\sigma_{22}-\sigma_{23}^2\sigma_{11}}}\exp\left\{-\frac{W(y_1,y_2,y_3)}{2\left(\sigma_{11}\sigma_{22}\sigma_{33}+2\sigma_{12}\sigma_{13}\sigma_{23}-\sigma_{12}^2\sigma_{33}-\sigma_{13}^2\sigma_{22}-\sigma_{23}^2\sigma_{11}\right)}\right\} \qquad (3.3)$$

where

$$W(y_1,y_2,y_3) = (y_1-\mu_1)^2(\sigma_{22}\sigma_{33}-\sigma_{23}^2) + (y_2-\mu_2)^2(\sigma_{11}\sigma_{33}-\sigma_{13}^2) + (y_3-\mu_3)^2(\sigma_{11}\sigma_{22}-\sigma_{12}^2) + 2\left[(y_1-\mu_1)(y_2-\mu_2)(\sigma_{13}\sigma_{23}-\sigma_{12}\sigma_{33}) + (y_1-\mu_1)(y_3-\mu_3)(\sigma_{12}\sigma_{23}-\sigma_{13}\sigma_{22}) + (y_2-\mu_2)(y_3-\mu_3)(\sigma_{12}\sigma_{13}-\sigma_{23}\sigma_{11})\right] \qquad (3.4)$$

provided that $\sigma_{11}\sigma_{22}\sigma_{33}+2\sigma_{12}\sigma_{13}\sigma_{23}-\sigma_{12}^2\sigma_{33}-\sigma_{13}^2\sigma_{22}-\sigma_{23}^2\sigma_{11} \ne 0$.

3.2 Covariance Matrix Symmetric

We focus our study on a special form of the variance-covariance matrix Σ: a symmetric matrix in which all the variances are assumed to be equal, and all the correlation coefficients as well. It is discussed below.

Model

For our study let us consider the case of equal variances, i.e. $\sigma_{11}=\sigma_{22}=\sigma_{33}=\sigma^2$, and equal correlation coefficients for each pair of variables, that is $\rho_{12}=\rho_{13}=\rho_{23}=\rho$; thus the variance-covariance matrix Σ has the form

$$\Sigma = \begin{pmatrix} \sigma^2 & \sigma^2\rho & \sigma^2\rho \\ \sigma^2\rho & \sigma^2 & \sigma^2\rho \\ \sigma^2\rho & \sigma^2\rho & \sigma^2 \end{pmatrix} = \sigma^2\begin{pmatrix} 1 & \rho & \rho \\ \rho & 1 & \rho \\ \rho & \rho & 1 \end{pmatrix} \qquad (3.5)$$

It can be shown that |\Sigma| = (\sigma^2)^3 (2\rho^3 - 3\rho^2 + 1), and thus

    \Sigma^{-1} = \frac{1}{\sigma^2 (2\rho^3 - 3\rho^2 + 1)} \begin{pmatrix} 1-\rho^2 & \rho^2-\rho & \rho^2-\rho \\ \rho^2-\rho & 1-\rho^2 & \rho^2-\rho \\ \rho^2-\rho & \rho^2-\rho & 1-\rho^2 \end{pmatrix}    (3.6)

provided that |\Sigma| = (\sigma^2)^3 (2\rho^3 - 3\rho^2 + 1) = (\sigma^2)^3 (2\rho+1)(\rho-1)^2 \neq 0.

Therefore, the probability density function of this Trivariate Normal Distribution can be written as

    f(y_1, y_2, y_3) = \frac{1}{(2\pi)^{3/2} (\sigma^2)^{3/2} \sqrt{2\rho^3 - 3\rho^2 + 1}} \exp\left\{ \frac{w}{2\sigma^2 (2\rho^2 - \rho - 1)} \right\}    (3.7)

where

    w = (1+\rho)[(y_1-\mu_1)^2 + (y_2-\mu_2)^2 + (y_3-\mu_3)^2] - 2\rho[(y_1-\mu_1)(y_2-\mu_2) + (y_1-\mu_1)(y_3-\mu_3) + (y_2-\mu_2)(y_3-\mu_3)]    (3.8)

and -\frac{1}{2} < \rho < 1. Note that 2\rho^2 - \rho - 1 = (2\rho+1)(\rho-1) < 0 on this interval, so the exponent in (3.7) is negative.

3.2.2 Likelihood Function

To obtain the likelihood function of the Covariance Matrix Symmetric Trivariate Normal Distribution, let us take a random sample of n 3 \times 1 vectors \mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_n, where \mathbf{y}_k = (y_{k1}, y_{k2}, y_{k3})^T for 1 \le k \le n. Thus the likelihood function L(\mu, \sigma^2, \rho; \mathbf{y}_1, \ldots, \mathbf{y}_n) is

    L(\mu, \sigma^2, \rho; \mathbf{y}_1, \ldots, \mathbf{y}_n) = \prod_{i=1}^n f(\mathbf{y}_i; \mu, \sigma^2, \rho)    (3.9)

        = \prod_{i=1}^n \frac{1}{(2\pi)^{3/2} (\sigma^2)^{3/2} \sqrt{2\rho^3 - 3\rho^2 + 1}} \exp\left\{ \frac{w_i}{2\sigma^2 (2\rho^2 - \rho - 1)} \right\}    (3.10)
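The equivalence between the closed form (3.7)-(3.8) and the matrix form (3.1) can be checked numerically. The following Python/NumPy sketch (not part of the original thesis; function names are illustrative) evaluates both expressions for the equicorrelated model:

```python
import numpy as np

def symm_trivariate_pdf(y, mu, sigma2, rho):
    """Closed form (3.7)-(3.8) for the equicorrelated trivariate normal."""
    d = y - mu
    sq = np.sum(d**2)
    cross = d[0]*d[1] + d[0]*d[2] + d[1]*d[2]
    w = (1 + rho) * sq - 2 * rho * cross
    det = (sigma2**3) * (2*rho**3 - 3*rho**2 + 1)      # |Sigma|
    norm = (2*np.pi)**1.5 * np.sqrt(det)
    return np.exp(w / (2 * sigma2 * (2*rho**2 - rho - 1))) / norm

def matrix_form_pdf(y, mu, sigma2, rho):
    """General matrix form (3.1) evaluated directly from Sigma."""
    Sigma = sigma2 * ((1 - rho) * np.eye(3) + rho * np.ones((3, 3)))
    d = y - mu
    q = d @ np.linalg.solve(Sigma, d)                  # (y-mu)^T Sigma^{-1} (y-mu)
    return np.exp(-q / 2) / np.sqrt((2*np.pi)**3 * np.linalg.det(Sigma))
```

For any y and any -1/2 < rho < 1 the two functions agree to machine precision, which confirms the sign conventions used in (3.7) and (3.8).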

where

    w_i = (1+\rho)[(y_{i1}-\mu_1)^2 + (y_{i2}-\mu_2)^2 + (y_{i3}-\mu_3)^2] - 2\rho[(y_{i1}-\mu_1)(y_{i2}-\mu_2) + (y_{i1}-\mu_1)(y_{i3}-\mu_3) + (y_{i2}-\mu_2)(y_{i3}-\mu_3)]    (3.11)

Taking \tau = \sigma^2, the likelihood function becomes

    L(\mu, \tau, \rho; \mathbf{y}_1, \ldots, \mathbf{y}_n) = \frac{1}{(2\pi)^{3n/2} \tau^{3n/2} (2\rho^3 - 3\rho^2 + 1)^{n/2}} \exp\left\{ \frac{w}{2\tau(2\rho^2 - \rho - 1)} \right\}    (3.12)

where \mu = (\mu_1, \mu_2, \mu_3) and

    w = (1+\rho) \sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\mu_j)^2 - 2\rho \sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\mu_j)(y_{ik}-\mu_k)    (3.13)

and the natural logarithm of the likelihood is

    \ln L(\mu, \tau, \rho; \mathbf{y}_1, \ldots, \mathbf{y}_n) = \frac{(1+\rho) \sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\mu_j)^2 - 2\rho \sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\mu_j)(y_{ik}-\mu_k)}{2\tau(2\rho^2 - \rho - 1)}
        - \frac{3n}{2}\ln(2\pi) - \frac{3n}{2}\ln\tau - \frac{n}{2}\ln(2\rho^3 - 3\rho^2 + 1)    (3.14)

3.2.3 Derivation of Maximum Likelihood Estimators

Since \mu_1, \mu_2, \mu_3, \tau and \rho are unknown, we find the MLEs of these parameters by solving the system of equations

    \frac{\partial \ln L}{\partial \mu_i} = 0, \ i = 1, 2, 3; \quad \frac{\partial \ln L}{\partial \tau} = 0; \quad \frac{\partial \ln L}{\partial \rho} = 0

that is:

    \frac{\partial \ln L}{\partial \mu_1} = \frac{-2(1+\rho)\sum_{i=1}^n (y_{i1}-\mu_1) + 2\rho \sum_{i=1}^n [(y_{i2}-\mu_2) + (y_{i3}-\mu_3)]}{2\tau(2\rho^2-\rho-1)} = 0    (3.15)

    \frac{\partial \ln L}{\partial \mu_2} = \frac{-2(1+\rho)\sum_{i=1}^n (y_{i2}-\mu_2) + 2\rho \sum_{i=1}^n [(y_{i1}-\mu_1) + (y_{i3}-\mu_3)]}{2\tau(2\rho^2-\rho-1)} = 0    (3.16)

    \frac{\partial \ln L}{\partial \mu_3} = \frac{-2(1+\rho)\sum_{i=1}^n (y_{i3}-\mu_3) + 2\rho \sum_{i=1}^n [(y_{i1}-\mu_1) + (y_{i2}-\mu_2)]}{2\tau(2\rho^2-\rho-1)} = 0    (3.17)

    \frac{\partial \ln L}{\partial \tau} = -\frac{(1+\rho) \sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\mu_j)^2 - 2\rho \sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\mu_j)(y_{ik}-\mu_k)}{2\tau^2(2\rho^2-\rho-1)} - \frac{3n}{2\tau} = 0    (3.18)

    \frac{\partial \ln L}{\partial \rho} = (2\rho^2-\rho-1)\,\psi \left[ \sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\mu_j)^2 - 2 \sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\mu_j)(y_{ik}-\mu_k) \right]
        - (4\rho-1)\,\psi \left[ (1+\rho) \sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\mu_j)^2 - 2\rho \sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\mu_j)(y_{ik}-\mu_k) \right] - \frac{n(6\rho^2-6\rho)}{2(2\rho^3-3\rho^2+1)} = 0    (3.19)

where

    \psi = \frac{2\tau}{4\tau^2(2\rho^2-\rho-1)^2} = \frac{1}{2\tau(2\rho^2-\rho-1)^2}

In order to solve (3.15)-(3.17) for \mu_1, \mu_2 and \mu_3, we multiply each of these equations by -\tau(2\rho^2-\rho-1) and simplify, obtaining

    (1+\rho) \sum_{i=1}^n (y_{i1}-\mu_1) - \rho \sum_{i=1}^n [(y_{i2}-\mu_2) + (y_{i3}-\mu_3)] = 0    (3.20)

    (1+\rho) \sum_{i=1}^n (y_{i2}-\mu_2) - \rho \sum_{i=1}^n [(y_{i1}-\mu_1) + (y_{i3}-\mu_3)] = 0    (3.21)

    (1+\rho) \sum_{i=1}^n (y_{i3}-\mu_3) - \rho \sum_{i=1}^n [(y_{i1}-\mu_1) + (y_{i2}-\mu_2)] = 0    (3.22)

Now, adding equation (3.21) to equation (3.22) and simplifying the sum, we get

    \sum_{i=1}^n [(y_{i2}-\mu_2) + (y_{i3}-\mu_3)] - 2\rho \sum_{i=1}^n (y_{i1}-\mu_1) = 0    (3.23)

Multiplying (3.23) by \rho and adding it to equation (3.20), we obtain

    (-2\rho^2 + \rho + 1) \sum_{i=1}^n (y_{i1}-\mu_1) = 0,  that is,  -(2\rho+1)(\rho-1) \sum_{i=1}^n (y_{i1}-\mu_1) = 0.

Recall that \rho \neq -\frac{1}{2} and \rho \neq 1; therefore it follows that \sum_{i=1}^n (y_{i1}-\mu_1) = 0, which implies that the MLE \hat\mu_1 of \mu_1 is

    \hat\mu_1 = \frac{1}{n}\sum_{i=1}^n y_{i1} = \bar{y}_1    (3.24)

In a similar way, we can obtain

    \hat\mu_2 = \frac{1}{n}\sum_{i=1}^n y_{i2} = \bar{y}_2    (3.25)

and

    \hat\mu_3 = \frac{1}{n}\sum_{i=1}^n y_{i3} = \bar{y}_3    (3.26)

Now, in order to find the estimators of \rho and \tau, let us simplify equations (3.18) and (3.19), substituting the estimators of \mu_1, \mu_2 and \mu_3 found in (3.24)-(3.26). Multiplying equation (3.18) by -2\tau^2(2\rho^2-\rho-1), we obtain

    (1+\rho) \sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\bar{y}_j)^2 - 2\rho \sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\bar{y}_j)(y_{ik}-\bar{y}_k) + 3n\tau(2\rho^2-\rho-1) = 0    (3.27)

and multiplying equation (3.19) by 2\tau(2\rho^2-\rho-1)^2 and simplifying, we get

    (2\rho^2-\rho-1)\left[ S - 2C \right] - (4\rho-1)\left[ (1+\rho)S - 2\rho C \right] - 6n\rho\tau(2\rho^2-\rho-1) = 0    (3.28)

where, for brevity, S = \sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\bar{y}_j)^2 and C = \sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\bar{y}_j)(y_{ik}-\bar{y}_k). Grouping common terms, equation (3.28) becomes

    \left[ (2\rho^2-\rho-1) - (4\rho-1)(1+\rho) \right] S + \left[ -2(2\rho^2-\rho-1) + 2\rho(4\rho-1) \right] C - 6n\rho\tau(2\rho^2-\rho-1) = 0    (3.29)

and finally, combining like terms, we get

    -2\rho(\rho+2)\, S + 2(2\rho^2+1)\, C - 6n\rho\tau(2\rho^2-\rho-1) = 0    (3.30)

Now, adding equation (3.30) to the product of equation (3.27) by 2\rho, the \tau terms cancel and we obtain

    -2\rho\, S + 2C = 0    (3.31)

and solving for \rho we get

    \hat\rho = \frac{\sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\bar{y}_j)(y_{ik}-\bar{y}_k)}{\sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\bar{y}_j)^2}    (3.32)

which is equivalent to

    \hat\rho = \frac{\sum_{i=1}^n \left[ (y_{i1}-\bar{y}_1)(y_{i2}-\bar{y}_2) + (y_{i1}-\bar{y}_1)(y_{i3}-\bar{y}_3) + (y_{i2}-\bar{y}_2)(y_{i3}-\bar{y}_3) \right]}{\sum_{i=1}^n \left[ (y_{i1}-\bar{y}_1)^2 + (y_{i2}-\bar{y}_2)^2 + (y_{i3}-\bar{y}_3)^2 \right]}    (3.33)

Replacing \rho in (3.27) by \hat\rho from (3.33) and solving (3.27) for \tau, we obtain the MLE \hat\tau of \tau:

    \hat\tau = \frac{\sum_{i=1}^n \left[ (y_{i1}-\bar{y}_1)^2 + (y_{i2}-\bar{y}_2)^2 + (y_{i3}-\bar{y}_3)^2 \right]}{3n}    (3.34)

and thus \hat\sigma is

    \hat\sigma = \sqrt{ \frac{\sum_{i=1}^n \left[ (y_{i1}-\bar{y}_1)^2 + (y_{i2}-\bar{y}_2)^2 + (y_{i3}-\bar{y}_3)^2 \right]}{3n} }    (3.35)

3.2.4 Properties of the MLE of \rho

Theorem. It holds that -\frac{1}{2} < \hat\rho < 1 with probability one.

Proof. We first show that \hat\rho \ge -\frac{1}{2}. By (3.33), this holds if and only if

    \sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\bar{y}_j)(y_{ik}-\bar{y}_k) \ge -\frac{1}{2} \sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\bar{y}_j)^2,

that is, if and only if

    \sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\bar{y}_j)^2 + 2 \sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\bar{y}_j)(y_{ik}-\bar{y}_k) \ge 0.
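Equations (3.24)-(3.26), (3.33) and (3.34) are straightforward to compute from an n \times 3 data matrix. A minimal Python/NumPy sketch (illustrative, not from the thesis):

```python
import numpy as np

def mle_symm_trivariate(Y):
    """MLEs under the equicorrelated trivariate normal model.
    Y is an (n, 3) array; returns (mu_hat, rho_hat, tau_hat)
    per equations (3.24)-(3.26), (3.33) and (3.34)."""
    mu_hat = Y.mean(axis=0)                        # column means (3.24)-(3.26)
    D = Y - mu_hat                                 # centered observations
    sq = np.sum(D**2)                              # pooled sum of squares
    cross = np.sum(D[:, 0]*D[:, 1] + D[:, 0]*D[:, 2] + D[:, 1]*D[:, 2])
    rho_hat = cross / sq                           # (3.33)
    tau_hat = sq / (3 * len(Y))                    # (3.34), tau = sigma^2
    return mu_hat, rho_hat, tau_hat
```

By the theorem above, `rho_hat` always falls in the open interval (-1/2, 1) with probability one.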

The left-hand side equals

    \sum_{i=1}^n \left[ (y_{i1}-\bar{y}_1) + (y_{i2}-\bar{y}_2) + (y_{i3}-\bar{y}_3) \right]^2 \ge 0,

which is true for all y_{ij}, i = 1, \ldots, n; j = 1, 2, 3.

Now, let us prove that \hat\rho \le 1. This holds if and only if

    \sum_{i=1}^n \sum_{j=1}^3 (y_{ij}-\bar{y}_j)^2 - \sum_{i=1}^n \sum_{1\le j<k\le 3} (y_{ij}-\bar{y}_j)(y_{ik}-\bar{y}_k) \ge 0,

and the left-hand side equals

    \frac{1}{2} \sum_{i=1}^n \left[ \left( (y_{i1}-\bar{y}_1) - (y_{i2}-\bar{y}_2) \right)^2 + \left( (y_{i1}-\bar{y}_1) - (y_{i3}-\bar{y}_3) \right)^2 + \left( (y_{i2}-\bar{y}_2) - (y_{i3}-\bar{y}_3) \right)^2 \right] \ge 0,

which is also true for all y_{ij}, i = 1, \ldots, n; j = 1, 2, 3.

Note that P\left(\hat\rho = -\frac{1}{2} \text{ or } \hat\rho = 1\right) = 0; therefore, we can conclude that -\frac{1}{2} < \hat\rho < 1 with probability one.

3.3 Sampling Distribution of \hat\rho of the Trivariate Normal Distribution

In Section 3.2 we derived the maximum likelihood estimator \hat\rho of the correlation coefficient \rho. Now we will derive its limiting distribution.

3.3.1 The Limiting Distribution of \hat\rho

Mi has shown that under the Covariance Matrix Symmetric Trivariate Normal Distribution the MLE \hat\rho_n of the correlation coefficient \rho is asymptotically normal, that is,

    \sqrt{n}\,(\hat\rho_n - \rho) \to N\!\left( 0, \frac{(1-\rho)^2 (1+2\rho)^2}{3} \right)    (3.36)

in distribution. Since the variance of the limiting distribution depends on the unknown parameter \rho, we want to find a function h(\cdot) such that the limiting distribution of \sqrt{n}\,(h(\hat\rho_n) - h(\rho)) is parameter-free, that is,

    \sqrt{n}\,\left( h(\hat\rho_n) - h(\rho) \right) \to N(0, 1)    (3.37)

For this purpose, we will use the Variance-Stabilizing Transformation in the next section.

3.3.2 Variance-Stabilizing Transformation: The Limiting Distribution of the z-Transformation of \hat\rho

Using the variance-stabilizing transformation explained in Section 2.2.7, we have to find a function h(\cdot) such that

    \sqrt{n}\,\left( h(\hat\theta_n) - h(\theta) \right) \to N\!\left( 0, [h'(\theta)]^2 \tau^2(\theta) \right),

which in our case requires [h'(\theta)]^2 \tau^2(\theta) = 1, where \tau^2(\rho) = \frac{(1-\rho)^2 (1+2\rho)^2}{3}. Solving for h'(\rho) in the previous equation,

    h'(\rho) = \frac{\sqrt{3}}{(1-\rho)(1+2\rho)}

and thus, by partial fractions,

    h(\rho) = \int h'(\rho)\,d\rho = \int \frac{\sqrt{3}}{(1-\rho)(1+2\rho)}\,d\rho = \frac{1}{\sqrt{3}} \int \left( \frac{2}{1+2\rho} + \frac{1}{1-\rho} \right) d\rho = \frac{1}{\sqrt{3}} \left[ \ln(1+2\rho) - \ln(1-\rho) \right] = \frac{1}{\sqrt{3}} \ln\!\left( \frac{1+2\rho}{1-\rho} \right)

Therefore, applying the previous result, we can conclude that

    \sqrt{\frac{n}{3}} \left[ \ln\!\left( \frac{1+2\hat\rho_n}{1-\hat\rho_n} \right) - \ln\!\left( \frac{1+2\rho}{1-\rho} \right) \right] \to N(0, 1)    (3.38)

in distribution.
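Result (3.38) can be inverted to yield an approximate confidence interval for \rho: bound the transformed value \hat z = h(\hat\rho_n) by \pm z_{\alpha/2}/\sqrt{n} and map the endpoints back through the inverse transform h^{-1}(z) = (e^{\sqrt{3}z} - 1)/(e^{\sqrt{3}z} + 2). A hedged Python sketch (the helper name and interface are ours, not the thesis's):

```python
import numpy as np
from statistics import NormalDist

def rho_confidence_interval(rho_hat, n, level=0.95):
    """Approximate CI for rho obtained by inverting (3.38).
    With z = ln((1+2r)/(1-r))/sqrt(3), sqrt(n)*(z_hat - z) ~ N(0,1),
    so we bound z_hat and map both endpoints back to the rho scale."""
    z_hat = np.log((1 + 2*rho_hat) / (1 - rho_hat)) / np.sqrt(3)
    half = NormalDist().inv_cdf(1 - (1 - level) / 2) / np.sqrt(n)
    def inverse(z):
        # solve z = ln((1+2r)/(1-r))/sqrt(3) for r
        e = np.exp(np.sqrt(3) * z)
        return (e - 1) / (e + 2)
    return inverse(z_hat - half), inverse(z_hat + half)
```

Because the inverse transform maps the real line onto (-1/2, 1), both endpoints automatically respect the parameter space of \rho, unlike a naive Wald interval built directly from (3.36).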

CHAPTER 4

NUMERICAL STUDY ON THE CASE OF BIVARIATE NORMAL DISTRIBUTION

In Chapter 4 a simulation study is conducted to compare the performance of the estimators \hat\rho_n and r_n of the correlation coefficient in the case of equal variances for the Bivariate Normal Distribution discussed in Chapter 2. We compare these two point estimators in Section 4.1.1, and their performance is evaluated using the Bias and the Mean Square Error (MSE) in Section 4.1.2. Since the limiting distribution of \hat\rho and a parameter-free transformation of it were presented in Chapter 2, a study of confidence intervals for \rho is made in Section 4.2 and of tests of hypotheses in Section 4.3.

4.1 Comparison between r_n and \hat\rho_n

In this section we conduct simulations to compare (a) the Pearson Correlation Coefficient r_n defined in Equation (2.7) and (b) the MLE \hat\rho_n defined in Equation (2.33). Without loss of generality, we assume \mu^T = (0, 0) and \sigma^2 = 4. We picked seven different sample sizes n, ranging from 20 to 50 in increments of 5, and 19 values of \rho ranging from -0.9 to 0.9 in increments of 0.1. For each combination the simulation was run 1000 times.
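The simulation design described above can be sketched as follows. Equation (2.33) is not reproduced in this excerpt; the estimator used below, \hat\rho_n = 2\sum(x_i-\bar x)(y_i-\bar y) / \sum[(x_i-\bar x)^2 + (y_i-\bar y)^2], is the standard equal-variance bivariate analogue of (3.33) and is assumed here rather than taken from Chapter 2. Python sketch:

```python
import numpy as np

def mle_equal_var(x, y):
    """Assumed form of the equal-variance bivariate MLE of rho (cf. Eq. (2.33))."""
    dx, dy = x - x.mean(), y - y.mean()
    return 2 * np.sum(dx * dy) / np.sum(dx**2 + dy**2)

def simulate(rho, n, reps=1000, sigma2=4.0, seed=0):
    """One cell of the study in Section 4.1: bias and MSE of the MLE
    and the Pearson coefficient for a given (rho, n)."""
    rng = np.random.default_rng(seed)
    cov = sigma2 * np.array([[1.0, rho], [rho, 1.0]])
    mle, pearson = np.empty(reps), np.empty(reps)
    for i in range(reps):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        mle[i] = mle_equal_var(x, y)
        pearson[i] = np.corrcoef(x, y)[0, 1]
    return {"bias_mle": mle.mean() - rho,
            "bias_pearson": pearson.mean() - rho,
            "mse_mle": np.mean((mle - rho)**2),
            "mse_pearson": np.mean((pearson - rho)**2)}
```

Running `simulate` over the grid of seven sample sizes and nineteen values of \rho reproduces the kind of comparison summarized in Tables 4.1-4.3.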

4.1.1 Comparison of Point Estimators of the Correlation Coefficient \rho

Figure 4.1 shows the comparison of the true value of \rho and the two estimators considered here: (a) the MLE \hat\rho_n and (b) the Pearson Correlation Coefficient r_n. Table 4.1 shows the results of the simulation. In this figure we can see that the values of the estimators are close to the true value of \rho; however, the Pearson Correlation Coefficient is closer to \rho than the MLE, except when \rho = 0, where both are the same. Note also that as the sample size increases, the difference between the estimators and \rho decreases.

Table 4.1: Comparison of the point estimators, (a) the MLE \hat\rho_n and (b) the Pearson Correlation Coefficient r_n, by sample size. [Tabulated values not recovered in this transcription.]

Figure 4.1: Comparison of the point estimators of \rho (true value, MLE \hat\rho_n, and Pearson r_n), with one panel for each value of \rho from -0.9 to 0.9.

4.1.2 Study of the Performance of the Estimators

In order to compare the MLE and the Pearson Correlation Coefficient, we evaluate two properties of the estimators: the Bias and the Mean Square Error. In Table 4.2 and in Figure 4.2 the bias of both estimators is shown. The Pearson Correlation Coefficient always has smaller bias than the MLE, except when \rho = 0. In Figure 4.3 the absolute value of the bias can be observed; from this figure we can see that the absolute value of the bias decreases as the sample size increases. Since the bias alone is not enough to compare the performance of the two estimators, the MSE of each estimator was also computed. Table 4.3 reports those values, which are represented in Figure 4.4. It can be observed that there is no noticeable difference between the MSE of the MLE and that of the Pearson Correlation Coefficient.

Table 4.2: Comparison of the bias of the estimators, (a) the MLE \hat\rho_n and (b) the Pearson Correlation Coefficient r_n, by sample size. [Tabulated values not recovered in this transcription.]


Διαβάστε περισσότερα

Tridiagonal matrices. Gérard MEURANT. October, 2008

Tridiagonal matrices. Gérard MEURANT. October, 2008 Tridiagonal matrices Gérard MEURANT October, 2008 1 Similarity 2 Cholesy factorizations 3 Eigenvalues 4 Inverse Similarity Let α 1 ω 1 β 1 α 2 ω 2 T =......... β 2 α 1 ω 1 β 1 α and β i ω i, i = 1,...,

Διαβάστε περισσότερα

Figure A.2: MPC and MPCP Age Profiles (estimating ρ, ρ = 2, φ = 0.03)..

Figure A.2: MPC and MPCP Age Profiles (estimating ρ, ρ = 2, φ = 0.03).. Supplemental Material (not for publication) Persistent vs. Permanent Income Shocks in the Buffer-Stock Model Jeppe Druedahl Thomas H. Jørgensen May, A Additional Figures and Tables Figure A.: Wealth and

Διαβάστε περισσότερα

Example Sheet 3 Solutions

Example Sheet 3 Solutions Example Sheet 3 Solutions. i Regular Sturm-Liouville. ii Singular Sturm-Liouville mixed boundary conditions. iii Not Sturm-Liouville ODE is not in Sturm-Liouville form. iv Regular Sturm-Liouville note

Διαβάστε περισσότερα

CHAPTER 48 APPLICATIONS OF MATRICES AND DETERMINANTS

CHAPTER 48 APPLICATIONS OF MATRICES AND DETERMINANTS CHAPTER 48 APPLICATIONS OF MATRICES AND DETERMINANTS EXERCISE 01 Page 545 1. Use matrices to solve: 3x + 4y x + 5y + 7 3x + 4y x + 5y 7 Hence, 3 4 x 0 5 y 7 The inverse of 3 4 5 is: 1 5 4 1 5 4 15 8 3

Διαβάστε περισσότερα

Fourier Series. MATH 211, Calculus II. J. Robert Buchanan. Spring Department of Mathematics

Fourier Series. MATH 211, Calculus II. J. Robert Buchanan. Spring Department of Mathematics Fourier Series MATH 211, Calculus II J. Robert Buchanan Department of Mathematics Spring 2018 Introduction Not all functions can be represented by Taylor series. f (k) (c) A Taylor series f (x) = (x c)

Διαβάστε περισσότερα

SCHOOL OF MATHEMATICAL SCIENCES G11LMA Linear Mathematics Examination Solutions

SCHOOL OF MATHEMATICAL SCIENCES G11LMA Linear Mathematics Examination Solutions SCHOOL OF MATHEMATICAL SCIENCES GLMA Linear Mathematics 00- Examination Solutions. (a) i. ( + 5i)( i) = (6 + 5) + (5 )i = + i. Real part is, imaginary part is. (b) ii. + 5i i ( + 5i)( + i) = ( i)( + i)

Διαβάστε περισσότερα

Potential Dividers. 46 minutes. 46 marks. Page 1 of 11

Potential Dividers. 46 minutes. 46 marks. Page 1 of 11 Potential Dividers 46 minutes 46 marks Page 1 of 11 Q1. In the circuit shown in the figure below, the battery, of negligible internal resistance, has an emf of 30 V. The pd across the lamp is 6.0 V and

Διαβάστε περισσότερα

The Simply Typed Lambda Calculus

The Simply Typed Lambda Calculus Type Inference Instead of writing type annotations, can we use an algorithm to infer what the type annotations should be? That depends on the type system. For simple type systems the answer is yes, and

Διαβάστε περισσότερα

Lecture 2: Dirac notation and a review of linear algebra Read Sakurai chapter 1, Baym chatper 3

Lecture 2: Dirac notation and a review of linear algebra Read Sakurai chapter 1, Baym chatper 3 Lecture 2: Dirac notation and a review of linear algebra Read Sakurai chapter 1, Baym chatper 3 1 State vector space and the dual space Space of wavefunctions The space of wavefunctions is the set of all

Διαβάστε περισσότερα

w o = R 1 p. (1) R = p =. = 1

w o = R 1 p. (1) R = p =. = 1 Πανεπιστήµιο Κρήτης - Τµήµα Επιστήµης Υπολογιστών ΗΥ-570: Στατιστική Επεξεργασία Σήµατος 205 ιδάσκων : Α. Μουχτάρης Τριτη Σειρά Ασκήσεων Λύσεις Ασκηση 3. 5.2 (a) From the Wiener-Hopf equation we have:

Διαβάστε περισσότερα

CRASH COURSE IN PRECALCULUS

CRASH COURSE IN PRECALCULUS CRASH COURSE IN PRECALCULUS Shiah-Sen Wang The graphs are prepared by Chien-Lun Lai Based on : Precalculus: Mathematics for Calculus by J. Stuwart, L. Redin & S. Watson, 6th edition, 01, Brooks/Cole Chapter

Διαβάστε περισσότερα

Section 9.2 Polar Equations and Graphs

Section 9.2 Polar Equations and Graphs 180 Section 9. Polar Equations and Graphs In this section, we will be graphing polar equations on a polar grid. In the first few examples, we will write the polar equation in rectangular form to help identify

Διαβάστε περισσότερα

Study on Bivariate Normal Distribution

Study on Bivariate Normal Distribution Florida International University FIU Digital Commons FIU Electronic Theses and Dissertations University Graduate School 11-9-2012 Study on Bivariate Normal Distribution Yipin Shi Florida International

Διαβάστε περισσότερα

SOLUTIONS TO MATH38181 EXTREME VALUES AND FINANCIAL RISK EXAM

SOLUTIONS TO MATH38181 EXTREME VALUES AND FINANCIAL RISK EXAM SOLUTIONS TO MATH38181 EXTREME VALUES AND FINANCIAL RISK EXAM Solutions to Question 1 a) The cumulative distribution function of T conditional on N n is Pr T t N n) Pr max X 1,..., X N ) t N n) Pr max

Διαβάστε περισσότερα

FORMULAS FOR STATISTICS 1

FORMULAS FOR STATISTICS 1 FORMULAS FOR STATISTICS 1 X = 1 n Sample statistics X i or x = 1 n x i (sample mean) S 2 = 1 n 1 s 2 = 1 n 1 (X i X) 2 = 1 n 1 (x i x) 2 = 1 n 1 Xi 2 n n 1 X 2 x 2 i n n 1 x 2 or (sample variance) E(X)

Διαβάστε περισσότερα

Variational Wavefunction for the Helium Atom

Variational Wavefunction for the Helium Atom Technische Universität Graz Institut für Festkörperphysik Student project Variational Wavefunction for the Helium Atom Molecular and Solid State Physics 53. submitted on: 3. November 9 by: Markus Krammer

Διαβάστε περισσότερα

PARTIAL NOTES for 6.1 Trigonometric Identities

PARTIAL NOTES for 6.1 Trigonometric Identities PARTIAL NOTES for 6.1 Trigonometric Identities tanθ = sinθ cosθ cotθ = cosθ sinθ BASIC IDENTITIES cscθ = 1 sinθ secθ = 1 cosθ cotθ = 1 tanθ PYTHAGOREAN IDENTITIES sin θ + cos θ =1 tan θ +1= sec θ 1 + cot

Διαβάστε περισσότερα

DERIVATION OF MILES EQUATION FOR AN APPLIED FORCE Revision C

DERIVATION OF MILES EQUATION FOR AN APPLIED FORCE Revision C DERIVATION OF MILES EQUATION FOR AN APPLIED FORCE Revision C By Tom Irvine Email: tomirvine@aol.com August 6, 8 Introduction The obective is to derive a Miles equation which gives the overall response

Διαβάστε περισσότερα

Second Order RLC Filters

Second Order RLC Filters ECEN 60 Circuits/Electronics Spring 007-0-07 P. Mathys Second Order RLC Filters RLC Lowpass Filter A passive RLC lowpass filter (LPF) circuit is shown in the following schematic. R L C v O (t) Using phasor

Διαβάστε περισσότερα

Lecture 12: Pseudo likelihood approach

Lecture 12: Pseudo likelihood approach Lecture 12: Pseudo likelihood approach Pseudo MLE Let X 1,...,X n be a random sample from a pdf in a family indexed by two parameters θ and π with likelihood l(θ,π). The method of pseudo MLE may be viewed

Διαβάστε περισσότερα

2. THEORY OF EQUATIONS. PREVIOUS EAMCET Bits.

2. THEORY OF EQUATIONS. PREVIOUS EAMCET Bits. EAMCET-. THEORY OF EQUATIONS PREVIOUS EAMCET Bits. Each of the roots of the equation x 6x + 6x 5= are increased by k so that the new transformed equation does not contain term. Then k =... - 4. - Sol.

Διαβάστε περισσότερα

Finite Field Problems: Solutions

Finite Field Problems: Solutions Finite Field Problems: Solutions 1. Let f = x 2 +1 Z 11 [x] and let F = Z 11 [x]/(f), a field. Let Solution: F =11 2 = 121, so F = 121 1 = 120. The possible orders are the divisors of 120. Solution: The

Διαβάστε περισσότερα

Math 6 SL Probability Distributions Practice Test Mark Scheme

Math 6 SL Probability Distributions Practice Test Mark Scheme Math 6 SL Probability Distributions Practice Test Mark Scheme. (a) Note: Award A for vertical line to right of mean, A for shading to right of their vertical line. AA N (b) evidence of recognizing symmetry

Διαβάστε περισσότερα

ΚΥΠΡΙΑΚΗ ΕΤΑΙΡΕΙΑ ΠΛΗΡΟΦΟΡΙΚΗΣ CYPRUS COMPUTER SOCIETY ΠΑΓΚΥΠΡΙΟΣ ΜΑΘΗΤΙΚΟΣ ΔΙΑΓΩΝΙΣΜΟΣ ΠΛΗΡΟΦΟΡΙΚΗΣ 19/5/2007

ΚΥΠΡΙΑΚΗ ΕΤΑΙΡΕΙΑ ΠΛΗΡΟΦΟΡΙΚΗΣ CYPRUS COMPUTER SOCIETY ΠΑΓΚΥΠΡΙΟΣ ΜΑΘΗΤΙΚΟΣ ΔΙΑΓΩΝΙΣΜΟΣ ΠΛΗΡΟΦΟΡΙΚΗΣ 19/5/2007 Οδηγίες: Να απαντηθούν όλες οι ερωτήσεις. Αν κάπου κάνετε κάποιες υποθέσεις να αναφερθούν στη σχετική ερώτηση. Όλα τα αρχεία που αναφέρονται στα προβλήματα βρίσκονται στον ίδιο φάκελο με το εκτελέσιμο

Διαβάστε περισσότερα

Queensland University of Technology Transport Data Analysis and Modeling Methodologies

Queensland University of Technology Transport Data Analysis and Modeling Methodologies Queensland University of Technology Transport Data Analysis and Modeling Methodologies Lab Session #7 Example 5.2 (with 3SLS Extensions) Seemingly Unrelated Regression Estimation and 3SLS A survey of 206

Διαβάστε περισσότερα

ω ω ω ω ω ω+2 ω ω+2 + ω ω ω ω+2 + ω ω+1 ω ω+2 2 ω ω ω ω ω ω ω ω+1 ω ω2 ω ω2 + ω ω ω2 + ω ω ω ω2 + ω ω+1 ω ω2 + ω ω+1 + ω ω ω ω2 + ω

ω ω ω ω ω ω+2 ω ω+2 + ω ω ω ω+2 + ω ω+1 ω ω+2 2 ω ω ω ω ω ω ω ω+1 ω ω2 ω ω2 + ω ω ω2 + ω ω ω ω2 + ω ω+1 ω ω2 + ω ω+1 + ω ω ω ω2 + ω 0 1 2 3 4 5 6 ω ω + 1 ω + 2 ω + 3 ω + 4 ω2 ω2 + 1 ω2 + 2 ω2 + 3 ω3 ω3 + 1 ω3 + 2 ω4 ω4 + 1 ω5 ω 2 ω 2 + 1 ω 2 + 2 ω 2 + ω ω 2 + ω + 1 ω 2 + ω2 ω 2 2 ω 2 2 + 1 ω 2 2 + ω ω 2 3 ω 3 ω 3 + 1 ω 3 + ω ω 3 +

Διαβάστε περισσότερα

Math 446 Homework 3 Solutions. (1). (i): Reverse triangle inequality for metrics: Let (X, d) be a metric space and let x, y, z X.

Math 446 Homework 3 Solutions. (1). (i): Reverse triangle inequality for metrics: Let (X, d) be a metric space and let x, y, z X. Math 446 Homework 3 Solutions. (1). (i): Reverse triangle inequalit for metrics: Let (X, d) be a metric space and let x,, z X. Prove that d(x, z) d(, z) d(x, ). (ii): Reverse triangle inequalit for norms:

Διαβάστε περισσότερα

Problem Set 9 Solutions. θ + 1. θ 2 + cotθ ( ) sinθ e iφ is an eigenfunction of the ˆ L 2 operator. / θ 2. φ 2. sin 2 θ φ 2. ( ) = e iφ. = e iφ cosθ.

Problem Set 9 Solutions. θ + 1. θ 2 + cotθ ( ) sinθ e iφ is an eigenfunction of the ˆ L 2 operator. / θ 2. φ 2. sin 2 θ φ 2. ( ) = e iφ. = e iφ cosθ. Chemistry 362 Dr Jean M Standard Problem Set 9 Solutions The ˆ L 2 operator is defined as Verify that the angular wavefunction Y θ,φ) Also verify that the eigenvalue is given by 2! 2 & L ˆ 2! 2 2 θ 2 +

Διαβάστε περισσότερα

10.7 Performance of Second-Order System (Unit Step Response)

10.7 Performance of Second-Order System (Unit Step Response) Lecture Notes on Control Systems/D. Ghose/0 57 0.7 Performance of Second-Order System (Unit Step Response) Consider the second order system a ÿ + a ẏ + a 0 y = b 0 r So, Y (s) R(s) = b 0 a s + a s + a

Διαβάστε περισσότερα

Practice Exam 2. Conceptual Questions. 1. State a Basic identity and then verify it. (a) Identity: Solution: One identity is csc(θ) = 1

Practice Exam 2. Conceptual Questions. 1. State a Basic identity and then verify it. (a) Identity: Solution: One identity is csc(θ) = 1 Conceptual Questions. State a Basic identity and then verify it. a) Identity: Solution: One identity is cscθ) = sinθ) Practice Exam b) Verification: Solution: Given the point of intersection x, y) of the

Διαβάστε περισσότερα

An Introduction to Signal Detection and Estimation - Second Edition Chapter II: Selected Solutions

An Introduction to Signal Detection and Estimation - Second Edition Chapter II: Selected Solutions An Introduction to Signal Detection Estimation - Second Edition Chapter II: Selected Solutions H V Poor Princeton University March 16, 5 Exercise : The likelihood ratio is given by L(y) (y +1), y 1 a With

Διαβάστε περισσότερα