Lecture 10: The EM algorithm (contd.)
Transcript
Slide 1: Lecture 10: The EM algorithm (contd.)
STAT 545: Intro. to Computational Statistics
Vinayak Rao, Purdue University
September 24, 2018
Slides 2–4: Exponential family models
Consider a space $\mathcal{X}$, e.g. $\mathbb{R}$, $\mathbb{R}^d$ or $\mathbb{N}$.
$\phi(x) = [\phi_1(x), \ldots, \phi_D(x)]$: (feature) vector of sufficient statistics
$\theta = [\theta_1, \ldots, \theta_D]$: vector of natural parameters
Exponential family distribution:
$p(x \mid \theta) = \frac{1}{Z(\theta)} h(x) \exp(\theta^\top \phi(x))$
$h(x)$ is the base measure or base distribution.
$Z(\theta) = \int h(x) \exp(\theta^\top \phi(x))\,dx$ is the normalization constant.
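As a concrete sketch (not part of the slides; the helper names are mine), the Poisson distribution can be written in this generic form with $h(x) = 1/x!$, $\phi(x) = x$, $\theta = \log \lambda$, and $Z(\theta) = \exp(e^\theta)$:

```python
import numpy as np
from scipy.stats import poisson

def expfam_logpdf(x, theta, phi, log_h, log_Z):
    """Generic exponential-family log density:
    log p(x | theta) = theta . phi(x) + log h(x) - log Z(theta)."""
    return np.dot(theta, phi(x)) + log_h(x) - log_Z(theta)

# Poisson(lam): h(x) = 1/x!, phi(x) = x, theta = log(lam), Z(theta) = exp(e^theta).
lam = 3.5
theta = np.array([np.log(lam)])
phi = lambda x: np.array([x])
log_h = lambda x: -np.sum(np.log(np.arange(1, x + 1)))  # -log(x!)
log_Z = lambda th: np.exp(th[0])                        # log(exp(e^theta)) = e^theta

for x in range(6):
    assert np.isclose(expfam_logpdf(x, theta, phi, log_h, log_Z),
                      poisson.logpmf(x, lam))
```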
Slides 5–6: Examples
The normal distribution:
$p(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{1}{2\sigma^2}(x - \mu)^2\right) = \frac{\exp(-\mu^2/2\sigma^2)}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{1}{2\sigma^2} x^2 + \frac{\mu}{\sigma^2} x\right)$
The Poisson distribution:
$p(x \mid \lambda) = \frac{\lambda^x \exp(-\lambda)}{x!} = \exp(-\lambda)\, \frac{1}{x!} \exp(\log(\lambda)\, x)$
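Reading off the Gaussian exponent above, the natural parameters are $\theta = [\mu/\sigma^2,\ -1/(2\sigma^2)]$ with features $\phi(x) = [x, x^2]$. A quick numerical check of this mapping (a sketch of my own, not from the slides):

```python
import numpy as np
from scipy.stats import norm

mu, sigma2 = 1.3, 0.7
theta = np.array([mu / sigma2, -1.0 / (2.0 * sigma2)])  # natural parameters

# log Z(theta) collects exp(mu^2 / 2 sigma^2) * sqrt(2 pi sigma^2).
log_Z = mu**2 / (2 * sigma2) + 0.5 * np.log(2 * np.pi * sigma2)

for x in np.linspace(-2, 3, 7):
    lp = theta[0] * x + theta[1] * x**2 - log_Z   # here h(x) = 1
    assert np.isclose(lp, norm.logpdf(x, loc=mu, scale=np.sqrt(sigma2)))
```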
Slides 7–8: Minimal exponential family
Minimal: the sufficient statistics are linearly independent.
Consider a K-component discrete distribution $\pi = (\pi_1, \ldots, \pi_K)$:
$p(x) = \prod_{c=1}^K \pi_c^{\delta(x=c)} = \exp\left(\sum_{c=1}^K \delta(x=c) \log \pi_c\right)$
Is it minimal? No: the indicators $\delta(x = c)$ sum to 1, so the K features are linearly dependent. Rewriting with $K - 1$ features gives a minimal representation:
$p(x) = \pi_K \exp\left(\sum_{c=1}^{K-1} \delta(x=c) \log(\pi_c/\pi_K)\right) = \frac{1}{Z} \exp\left(\sum_{c=1}^{K-1} \delta(x=c)\, \theta_c\right)$
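A small sketch of this reparameterization (helper names are mine): map $\pi$ to the $K-1$ minimal natural parameters $\theta_c = \log(\pi_c/\pi_K)$, and recover $\pi$ by appending a unit weight and normalizing.

```python
import numpy as np

def to_minimal(pi):
    """Natural parameters theta_c = log(pi_c / pi_K), c = 1..K-1."""
    return np.log(pi[:-1] / pi[-1])

def from_minimal(theta):
    """Recover pi from unnormalized weights (exp(theta_1..theta_{K-1}), 1)."""
    w = np.append(np.exp(theta), 1.0)
    return w / w.sum()

pi = np.array([0.5, 0.3, 0.2])
assert np.allclose(from_minimal(to_minimal(pi)), pi)
```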
Slides 9–11: Maximum-likelihood estimation
Given N i.i.d. observations $X = \{x_1, \ldots, x_N\}$, the likelihood is
$L(X \mid \theta) = \prod_{i=1}^N \frac{1}{Z(\theta)} h(x_i) \exp(\theta^\top \phi(x_i)) = \left(\frac{1}{Z(\theta)}\right)^N \left(\prod_{i=1}^N h(x_i)\right) \exp\left(\theta^\top \sum_{i=1}^N \phi(x_i)\right)$
The log-likelihood $\ell(X \mid \theta) = \log L(X \mid \theta)$ is
$\ell(X \mid \theta) = \theta^\top \left(\sum_{i=1}^N \phi(x_i)\right) - N \log Z(\theta) + \sum_{i=1}^N \log h(x_i)$
To calculate a maximum-likelihood estimate, we only need the sum of the sufficient statistics.
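To see the sufficiency claim concretely, here is a sketch (data and names mine): reflecting a dataset about its mean leaves $\sum_i x_i$ and $\sum_i x_i^2$ unchanged, so the Gaussian MLE is identical even though the datasets differ.

```python
import numpy as np

x = np.array([0.3, 1.7, 2.4, 4.1])
x_reflected = 2 * x.mean() - x        # different data, same sum(x) and sum(x^2)

for data in (x, x_reflected):
    suff = (data.sum(), (data**2).sum())        # sums of sufficient statistics
    mu_hat = suff[0] / len(data)
    var_hat = suff[1] / len(data) - mu_hat**2   # MLE of the variance
    print(suff, mu_hat, var_hat)                # identical for both datasets
```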
Slides 12–15: MLE for exponential families
$\ell(X \mid \theta) = \theta^\top \left(\sum_{i=1}^N \phi(x_i)\right) - N \log Z(\theta) + \sum_{i=1}^N \log h(x_i)$
At the MLE, for each component $\theta_d$ of $\theta$: $\frac{\partial \ell(X \mid \theta)}{\partial \theta_d} = 0$, i.e.
$\sum_{i=1}^N \phi_d(x_i) = N \frac{\partial \log Z(\theta)}{\partial \theta_d}$
Expanding the right-hand side:
$\frac{1}{N} \sum_{i=1}^N \phi_d(x_i) = \frac{\partial \log Z(\theta)}{\partial \theta_d} = \frac{1}{Z(\theta)} \frac{\partial Z(\theta)}{\partial \theta_d} = \frac{1}{Z(\theta)} \frac{\partial}{\partial \theta_d} \int h(x) \exp(\theta^\top \phi(x))\,dx = \frac{1}{Z(\theta)} \int h(x) \exp(\theta^\top \phi(x))\, \phi_d(x)\,dx$
Slides 16–18: MLE for exponential families (contd.)
Match empirical and population averages of $\phi(x)$:
$\frac{1}{N} \sum_{i=1}^N \phi_d(x_i) = E_{\theta_{MLE}}[\phi_d(x)]$
RHS: the moment parameters of the exponential family distribution. Thus $\theta_{MLE}$ are the natural parameters corresponding to the empirical moment parameters ("moment matching").
Is this a maximum? Is the second derivative (Hessian) negative (negative definite)? We can show $\frac{\partial^2}{\partial \theta_i \partial \theta_j} \log Z(\theta) = \mathrm{Cov}(\phi_i, \phi_j)$, so the Hessian of $\ell(X \mid \theta)$ is $-N$ times the feature covariance matrix.
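A sketch of moment matching for the 1-d Gaussian (names and data mine): equate the empirical averages of $\phi(x) = [x, x^2]$ to $E[x] = \mu$ and $E[x^2] = \sigma^2 + \mu^2$, then convert back to natural parameters.

```python
import numpy as np

x = np.random.default_rng(0).normal(2.0, 1.5, size=10_000)

# Empirical moment parameters: averages of phi(x) = [x, x^2].
m1, m2 = x.mean(), (x**2).mean()

# Solve E[x] = mu, E[x^2] = sigma^2 + mu^2 for the usual parameters...
mu_hat, var_hat = m1, m2 - m1**2
# ...and map them to natural parameters theta = [mu/sigma^2, -1/(2 sigma^2)].
theta_hat = np.array([mu_hat / var_hat, -1.0 / (2.0 * var_hat)])
print(mu_hat, var_hat, theta_hat)   # close to (2.0, 2.25) and its image
```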
Slides 19–20: Example
The 1-d Gaussian: $\phi(x) = [x, x^2]$
Moment parameters are the mean and the mean square.
Easy to find the corresponding natural parameters.
Quite often this is not the case; however, we will restrict ourselves to cases where it is.
Can you do it for the Poisson? $p(x \mid \lambda) = \frac{1}{x!} \lambda^x \exp(-\lambda)$
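For the Poisson, $\phi(x) = x$ and $E_\lambda[x] = \lambda$, so moment matching gives $\hat\lambda_{MLE} = \bar{x}$, with natural parameter $\hat\theta = \log \hat\lambda$. A quick check against direct numerical maximization (a sketch; data and settings mine):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

x = np.random.default_rng(1).poisson(3.5, size=500)

lam_mle = x.mean()   # moment matching: E[x] = lambda

# Numerically maximize the log-likelihood to confirm.
res = minimize_scalar(lambda lam: -poisson.logpmf(x, lam).sum(),
                      bounds=(0.1, 20.0), method="bounded")
assert np.isclose(res.x, lam_mle, atol=1e-4)
```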
Slides 21–24: Missing data in exponential family distributions
Let samples from the exponential family have two parts: $[x\ y]$. Feature vector $\phi([x\ y]) := \phi(x, y)$.
$P(x, y \mid \theta) := P([x\ y] \mid \theta) = \frac{h(x, y)}{Z(\theta)} \exp(\theta^\top \phi(x, y))$
We observe only $x$. What is the posterior over $y$?
$P(y \mid x, \theta) = \frac{P(x, y \mid \theta)}{P(x \mid \theta)} = \frac{h(x, y)}{P(x \mid \theta)\, Z(\theta)} \exp(\theta^\top \phi(x, y))$
This is an exponential family distribution over $y$ (remember $x$ is fixed) with:
feature vector $\phi_x(y) = \phi(x, y)$
base distribution $h_x(y) = h(x, y)$
Not necessarily easy to work with, but we will restrict to this case.
$y_i$ with different $x_i$ belong to different exponential family distributions.
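For instance, in a Gaussian mixture the missing part $y$ is the component label, and the posterior over $y$ given $x$ is the vector of "responsibilities". A small sketch (the parameter values are arbitrary):

```python
import numpy as np
from scipy.stats import norm

weights = np.array([0.6, 0.4])
means, sds = np.array([0.0, 5.0]), np.array([1.0, np.sqrt(2.0)])

def posterior_over_y(x):
    """P(y = c | x, theta) for a two-component Gaussian mixture."""
    joint = weights * norm.pdf(x, means, sds)   # P(x, y = c | theta)
    return joint / joint.sum()                  # normalize by P(x | theta)

print(posterior_over_y(1.0))   # most mass on the first component
```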
Slides 25–28: MLE with missing data
What about the marginal likelihood $P(x \mid \theta) = \int P(x, y \mid \theta)\,dy$?
Not an exponential family distribution!
Calculating derivatives is messy.
No nice closed-form expression for the MLE.
One approach: the EM algorithm, an algorithm for MLE in exponential family distributions with missing data.
Problem: given i.i.d. observations $X = \{x_1, \ldots, x_N\}$ from $P(x \mid \theta)$, where $P(x, y \mid \theta)$ is exponential family, maximize w.r.t. $\theta$
$\ell(X \mid \theta) = \log P(X \mid \theta) = \sum_{i=1}^N \log P(x_i \mid \theta)$
Slides 29–34: MLE in latent variable models
$\ell(X \mid \theta) = \sum_{i=1}^N \log P(x_i \mid \theta) = \sum_{i=1}^N \log \int P(x_i, y_i \mid \theta)\,dy_i$
$= \sum_{i=1}^N \log \int q_i(y_i) \frac{P(x_i, y_i \mid \theta)}{q_i(y_i)}\,dy_i$  (for arbitrary densities $q_i(y_i)$)
$\geq \sum_{i=1}^N \int q_i(y_i) \log \frac{P(x_i, y_i \mid \theta)}{q_i(y_i)}\,dy_i$  (Jensen's inequality)
$= \sum_{i=1}^N \int q_i(y_i) \log P(x_i, y_i \mid \theta)\,dy_i - \sum_{i=1}^N \int q_i(y_i) \log q_i(y_i)\,dy_i$
$= \sum_{i=1}^N \int q_i(y_i) \log P(x_i \mid \theta)\,dy_i + \sum_{i=1}^N \int q_i(y_i) \log \frac{P(y_i \mid x_i, \theta)}{q_i(y_i)}\,dy_i$
$= \ell(X \mid \theta) - \sum_{i=1}^N KL(q_i(y_i) \,\|\, P(y_i \mid x_i, \theta))$
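A numerical sanity check of this bound on a toy discrete joint of my own: $\sum_y q(y) \log \frac{P(x, y)}{q(y)} \leq \log P(x)$ for any $q$, with equality when $q(y) = P(y \mid x)$.

```python
import numpy as np

rng = np.random.default_rng(2)
joint = rng.random((2, 3)); joint /= joint.sum()   # P(x, y): x in {0,1}, y in {0,1,2}

x = 0
log_marginal = np.log(joint[x].sum())              # log P(x)

def lower_bound(q):
    return np.sum(q * (np.log(joint[x]) - np.log(q)))

q_random = rng.dirichlet(np.ones(3))
q_posterior = joint[x] / joint[x].sum()            # P(y | x)

assert lower_bound(q_random) <= log_marginal + 1e-12
assert np.isclose(lower_bound(q_posterior), log_marginal)
```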
Slides 35–37: Jensen's inequality
Let $f(x)$ be a concave real-valued function defined on $\mathcal{X}$.
Concave: non-positive second derivative (non-increasing derivative).
A chord always lies below the function. E.g. the logarithm (defined on $\mathbb{R}_+$).
Jensen: for any probability vector $p = (p_1, \ldots, p_K)$ and any set of points $(x_1, \ldots, x_K)$,
$f\left(\sum_{i=1}^K p_i x_i\right) \geq \sum_{i=1}^K p_i f(x_i)$
In fact, for a probability density $p(x)$, $f\left(\int_{\mathcal{X}} x\, p(x)\,dx\right) \geq \int_{\mathcal{X}} f(x)\, p(x)\,dx$
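A two-line numerical check of the finite form with $f = \log$ (the values are arbitrary):

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])
x = np.array([1.0, 4.0, 9.0])
assert np.log(p @ x) >= p @ np.log(x)   # f(E[x]) >= E[f(x)] for concave f
```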
Slides 38–39: MLE in latent variable models (contd.)
Defining $Q(Y) = \prod_{i=1}^N q_i(y_i)$,
$\ell(X \mid \theta) \geq \sum_{i=1}^N \int q_i(y_i) \log P(x_i, y_i \mid \theta)\,dy_i + \sum_{i=1}^N H(q_i)$
$= \sum_{i=1}^N E_{q_i}[\log P(x_i, y_i \mid \theta)] + \sum_{i=1}^N H(q_i)$
$= \ell(X \mid \theta) - \sum_{i=1}^N KL(q_i(y_i) \,\|\, P(y_i \mid x_i, \theta)) := F_X(\theta, Q(\cdot))$
$F_X(\theta, Q(\cdot))$ is a lower bound on the log-likelihood $\ell(X \mid \theta)$. It is sometimes called the variational free energy and is a function of $\theta$ and the variational distribution $Q(Y)$ ($X$ is fixed).
Slide 40: Optimizing the variational lower bound
Our original goal was to maximize the log-likelihood: $\theta_{MLE} = \mathrm{argmax}_\theta\, \ell(X \mid \theta)$
EM algorithm: maximize the lower bound instead: $(\theta^*, Q^*) = \mathrm{argmax}_{\theta, Q}\, F_X(\theta, Q(\cdot))$
Hopefully easier, since all summations are outside the logarithms.
Strategy: coordinate ascent. Alternately maximize w.r.t. $Q$ and $\theta$:
First find the best lower bound given the current $\theta^s$.
Then optimize this lower bound to find $\theta^{s+1}$.
Slides 41–42: The EM algorithm
Maximizing $F_X(\theta, Q)$ with $\theta$ fixed:
Recall $F_X(\theta, Q) = \ell(X \mid \theta) - \sum_{i=1}^N KL(q_i(y_i) \,\|\, P(y_i \mid x_i, \theta))$.
Solution: set $q_i(y_i) = P(y_i \mid x_i, \theta)$ for $i = 1, \ldots, N$.
Recall: $P(\cdot \mid x_i, \theta)$ is an exponential family distribution with natural parameters $\theta$ and feature vector $\phi(x_i, \cdot)$.
Slides 43–45: The EM algorithm (contd.)
Maximizing $F_X(\theta, Q)$ with $Q$ fixed:
$F_X(\theta, Q) = \sum_{i=1}^N E_{q_i}[\log P(x_i, y_i \mid \theta)] + \sum_{i=1}^N H(q_i)$
The entropy terms $H(q_i)$ don't depend on $\theta$; ignore them.
$\log P(x_i, y_i \mid \theta) = \theta^\top \phi(x_i, y_i) + \log h(x_i, y_i) - \log Z(\theta)$
$F_X(\theta, Q) = \theta^\top \sum_{i=1}^N E_{q_i}[\phi(x_i, y_i)] - N \log Z(\theta) + \mathrm{const}$
Slides 46–49: The Expectation-Maximization algorithm
$F_X(\theta, Q) = \theta^\top \sum_{i=1}^N E_{q_i}[\phi(x_i, y_i)] - N \log Z(\theta) + \mathrm{const}$
To maximize $F_X$ w.r.t. $\theta_d$, solve $\frac{\partial}{\partial \theta_d} F_X(\theta, Q) = 0$.
Solution: set $\theta$ to match moments (compare with the fully observed case):
$E_\theta[\phi_d(x, y)] = \frac{1}{N} \sum_{i=1}^N E_{q_i}[\phi_d(x_i, y_i)]$
Here $q_i(y_i) = P(y_i \mid x_i, \theta_{old})$, an exponential family distribution whose moment parameters can be calculated (by assumption).
Slides 50–52: Step s of the EM algorithm
Current parameters: $\theta^s$, $Q^s(Y) = \prod_{i=1}^N q_i^s(y_i)$
E-step (Expectation): for $i = 1, \ldots, N$:
Set $q_i^{s+1}(y_i) = P(y_i \mid x_i, \theta^s)$: an exponential family distribution with sufficient statistics $\phi(x_i, \cdot)$ and natural parameters $\theta^s$.
Calculate $E_{q_i^{s+1}}[\phi(x_i, y_i)]$.
M-step (Maximization):
Set $\theta^{s+1}$ so that $E_{\theta^{s+1}}[\phi(x, y)] = \frac{1}{N} \sum_{i=1}^N E_{q_i^{s+1}}[\phi(x_i, y_i)]$.
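Putting the two steps together: below is a minimal EM sketch for a two-component Gaussian mixture in which only the component means are unknown (weights and variances held fixed for brevity; all names, data and settings are mine, not from the slides). The printed log-likelihood trace should never decrease, as the next slides prove.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
weights = np.array([0.6, 0.4])            # known mixing weights
true_means = np.array([0.0, 5.0])
sds = np.array([1.0, np.sqrt(2.0)])       # known standard deviations

# Simulate data from the mixture.
z = rng.choice(2, size=400, p=weights)
x = rng.normal(true_means[z], sds[z])

means = np.array([-1.0, 8.0])             # initial guess theta^0
for step in range(30):
    # E-step: responsibilities q_i(y_i = c) = P(y_i = c | x_i, theta^s).
    joint = weights * norm.pdf(x[:, None], means, sds)   # shape (N, 2)
    loglik = np.log(joint.sum(axis=1)).sum()             # l(X | theta^s)
    resp = joint / joint.sum(axis=1, keepdims=True)
    # M-step: with weights and variances fixed, moment matching reduces
    # to responsibility-weighted sample means.
    means = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    print(step, loglik, means)            # loglik never decreases
```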
Slides 53–60: The EM algorithm never decreases the log-likelihood
Recall $F_X(\theta^{s-1}, Q^{s-1}) = \ell(X \mid \theta^{s-1}) - KL(Q^{s-1}(Y) \,\|\, P(Y \mid X, \theta^{s-1}))$.
After the E-step, $Q^s(Y) = P(Y \mid X, \theta^{s-1})$, so
$\ell(X \mid \theta^{s-1}) = F_X(\theta^{s-1}, Q^s) \leq F_X(\theta^s, Q^s) \leq F_X(\theta^s, Q^{s+1}) = \ell(X \mid \theta^s)$
Can also show that local maxima of $F_X$ are local maxima of $\ell$.
Variants of the E- and M-steps:
Partial M-step: update $\theta$ to increase (rather than maximize) $F_X$.
Partial E-step: update $Q$ to decrease $KL(Q(Y) \,\|\, P(Y \mid X, \theta))$ (rather than reduce it to 0), e.g. update just one or a few $q_i$.
Slides 61–66: A toy example
A mixture of two Gaussians, $\mathcal{N}(x \mid m, 1)$ and $\mathcal{N}(x \mid 5 - m, 2)$. The first has probability 0.6, the second 0.4.
We observe a single data point $x = 1$. What is the ML estimate of $m$?
What is the hidden variable?
Is the overall model exponential family?
What is the posterior distribution over the hidden variable?
If we knew the hidden variable, what would the MLE be?
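One way to answer the first question numerically (a sketch of my own: I read the garbled second component as $\mathcal{N}(x \mid 5 - m, 2)$ with variance 2, and the closed-form M-step below comes from differentiating the expected complete log-likelihood in $m$):

```python
import numpy as np
from scipy.stats import norm

x = 1.0
w1, w2 = 0.6, 0.4
var1, var2 = 1.0, 2.0          # assuming the "2" is a variance

m = 2.9                        # initialization used on the slides
for step in range(20):
    p1 = w1 * norm.pdf(x, m, np.sqrt(var1))
    p2 = w2 * norm.pdf(x, 5.0 - m, np.sqrt(var2))
    q = p1 / (p1 + p2)                       # E-step: P(component 1 | x, m)
    print(step, m, q, np.log(p1 + p2))       # log-likelihood never decreases
    # M-step: maximize q*(-(x-m)^2/2) + (1-q)*(-(x-5+m)^2/4) over m;
    # setting the derivative to zero gives:
    m = ((3 * q - 1) * x + 5 * (1 - q)) / (1 + q)
```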
Slides 67–73: The EM algorithm (coordinate ascent, illustrated)
[Figure: the free energy plotted against q (curves f_q, top panel) and against m (curves f_m, bottom panel), traced over successive updates.]
Initialize m = 2.9.
Set q = … (E-step). Set m = … (M-step). Set q = … Set m = …
Repeat till convergence.