University of Crete - Department of Computer Science
HY-570: Statistical Signal Processing, 2015
Instructor: A. Mouchtaris

Third Problem Set - Solutions

Exercise 3.

5.2 (a) From the Wiener-Hopf equation we have:
\[
w_o = R^{-1} p. \qquad (1)
\]
We are given
\[
R = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}, \qquad
p = \begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}.
\]
Hence, the inverse matrix \(R^{-1}\) is
\[
R^{-1} = \begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}^{-1}
= \frac{1}{0.75} \begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}.
\]
Using Eq. (1), we therefore get
\[
w_o = \frac{1}{0.75} \begin{bmatrix} 1 & -0.5 \\ -0.5 & 1 \end{bmatrix}
\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}
= \frac{1}{0.75} \begin{bmatrix} 0.375 \\ 0 \end{bmatrix}
= \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}.
\]
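As a quick numerical check, the same Wiener-Hopf system can be solved with NumPy:

    import numpy as np

    # Second-order statistics given in Problem 5.2
    R = np.array([[1.0, 0.5],
                  [0.5, 1.0]])   # correlation matrix of the tap inputs
    p = np.array([0.5, 0.25])    # cross-correlation vector

    # Wiener-Hopf solution w_o = R^{-1} p; solving the linear system is
    # preferable to forming the inverse explicitly
    w_o = np.linalg.solve(R, p)
    print(w_o)                   # expected output: [0.5  0. ]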
(b) The minimum mean-square error is:
\[
J_{\min} = \sigma_d^2 - p^H w_o
= \sigma_d^2 - \left[\, 0.5, \; 0.25 \,\right] \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}
= \sigma_d^2 - 0.25.
\]
(c) The eigenvalues of the matrix R are the roots of the characteristic equation:
\[
(1 - \lambda)^2 - 0.5^2 = 0.
\]
That is, the two roots are \(\lambda_1 = 0.5\) and \(\lambda_2 = 1.5\). The associated eigenvectors are defined by:
\[
R q = \lambda q.
\]
For \(\lambda_1 = 0.5\) we have:
\[
\begin{bmatrix} 1 & 0.5 \\ 0.5 & 1 \end{bmatrix}
\begin{bmatrix} q_1 \\ q_2 \end{bmatrix}
= 0.5 \begin{bmatrix} q_1 \\ q_2 \end{bmatrix}.
\]
Expanding:
\[
q_1 + 0.5\, q_2 = 0.5\, q_1, \qquad
0.5\, q_1 + q_2 = 0.5\, q_2.
\]
Therefore, \(q_1 = -q_2\). Normalizing the eigenvector \(q_1\) to unit length, we therefore have
\[
q_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix}.
\]
Similarly, for the eigenvalue \(\lambda_2 = 1.5\), we may show that
\[
q_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}.
\]
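These eigen-pairs can be confirmed with NumPy's symmetric eigensolver (the returned eigenvectors may differ from those above by a sign):

    import numpy as np

    R = np.array([[1.0, 0.5],
                  [0.5, 1.0]])

    # Eigendecomposition of the symmetric correlation matrix R
    lam, Q = np.linalg.eigh(R)
    print(lam)   # expected: [0.5  1.5]
    print(Q)     # columns proportional to [1, -1]/sqrt(2) and [1, 1]/sqrt(2)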
Accordingly, we may express the Wiener filter in terms of its eigenvalues and eigenvectors as follows:
\[
w_o = \left( \sum_{i=1}^{2} \frac{1}{\lambda_i}\, q_i q_i^H \right) p
= \left( \frac{1}{0.5}\cdot\frac{1}{2}
\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}
+ \frac{1}{1.5}\cdot\frac{1}{2}
\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \right)
\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}
= \begin{bmatrix} 4/3 & -2/3 \\ -2/3 & 4/3 \end{bmatrix}
\begin{bmatrix} 0.5 \\ 0.25 \end{bmatrix}
= \begin{bmatrix} 0.5 \\ 0 \end{bmatrix}.
\]

5.4 The optimum filtering solution is defined by the Wiener-Hopf equation as:
\[
R w_o = p, \qquad (2)
\]
for which the minimum mean-square error equals:
\[
J_{\min} = \sigma_d^2 - p^H w_o. \qquad (3)
\]
Combining Eqs. (2) and (3) into a single relation, we get
\[
\begin{bmatrix} \sigma_d^2 & p^H \\ p & R \end{bmatrix}
\begin{bmatrix} 1 \\ -w_o \end{bmatrix}
= \begin{bmatrix} J_{\min} \\ 0 \end{bmatrix}.
\]
Define
\[
A = \begin{bmatrix} \sigma_d^2 & p^H \\ p & R \end{bmatrix}. \qquad (4)
\]
Since \(\sigma_d^2 = E[d(n)d^*(n)]\), \(p = E[u(n)d^*(n)]\) and \(R = E[u(n)u^H(n)]\), we may rewrite Eq. (4) as
\[
A = \begin{bmatrix} E[d(n)d^*(n)] & E[d(n)u^H(n)] \\ E[u(n)d^*(n)] & E[u(n)u^H(n)] \end{bmatrix}
= E\left[ \begin{bmatrix} d(n) \\ u(n) \end{bmatrix}
\left[ d^*(n), \; u^H(n) \right] \right].
\]
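As a numerical illustration of this block relation, the sketch below reuses the R and p of Problem 5.2 together with an assumed value σ_d² = 1, since no specific σ_d² is prescribed there:

    import numpy as np

    # Numbers from Problem 5.2; sigma_d2 is an assumed value for illustration
    sigma_d2 = 1.0
    R = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
    p = np.array([0.5, 0.25])

    w_o = np.linalg.solve(R, p)          # Wiener-Hopf solution
    J_min = sigma_d2 - p @ w_o           # minimum mean-square error

    # Augmented matrix A acting on the augmented vector [1, -w_o]
    A = np.block([[np.array([[sigma_d2]]), p[None, :]],
                  [p[:, None],             R          ]])
    x = np.concatenate(([1.0], -w_o))

    print(A @ x)    # expected: [J_min, 0, 0]
    print(J_min)    # 0.75 for this choice of sigma_d2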
The minimum mean-square error equals:
\[
J_{\min} = \sigma_d^2 - p^H w_o. \qquad (5)
\]
For an arbitrary tap-weight vector \(w\), the mean-square error is
\[
J(w) = \sigma_d^2 - p^H w - w^H p + w^H R w.
\]
Eliminating \(\sigma_d^2\) between this expression and Eq. (5):
\[
J(w) = J_{\min} + p^H w_o - p^H w - w^H p + w^H R w. \qquad (6)
\]
Eliminating \(p\) between Eqs. (2) and (6):
\[
J(w) = J_{\min} + w_o^H R w_o - w_o^H R w - w^H R w_o + w^H R w, \qquad (7)
\]
where we have used the property \(R^H = R\). We may rewrite Eq. (7) simply as
\[
J(w) = J_{\min} + (w - w_o)^H R (w - w_o), \qquad (8)
\]
which clearly shows that \(J(w_o) = J_{\min}\).

5.5 The minimum mean-square error equals
\[
J_{\min} = \sigma_d^2 - p^H R^{-1} p. \qquad (9)
\]
Using the spectral theorem, we may express the correlation matrix R as
\[
R = Q \Lambda Q^H = \sum_{k=1}^{M} \lambda_k\, q_k q_k^H.
\]
Hence, the inverse of R equals:
\[
R^{-1} = \sum_{k=1}^{M} \frac{1}{\lambda_k}\, q_k q_k^H. \qquad (10)
\]
Substituting Eq. (10) into (9):
\[
J_{\min} = \sigma_d^2 - \sum_{k=1}^{M} \frac{p^H q_k q_k^H p}{\lambda_k}
= \sigma_d^2 - \sum_{k=1}^{M} \frac{\left| p^H q_k \right|^2}{\lambda_k}.
\]
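A short numerical check of the equivalence between Eq. (9) and the eigen-expansion above, again with the R and p of Problem 5.2 and an assumed σ_d² = 1:

    import numpy as np

    sigma_d2 = 1.0                                   # assumed value
    R = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
    p = np.array([0.5, 0.25])

    lam, Q = np.linalg.eigh(R)                       # R = Q diag(lam) Q^H
    J_direct = sigma_d2 - p @ np.linalg.solve(R, p)  # sigma_d^2 - p^H R^{-1} p
    J_eigen  = sigma_d2 - np.sum(np.abs(Q.T @ p)**2 / lam)

    print(J_direct, J_eigen)                         # both equal 0.75 here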
5.7 (a) The correlation matrix R is
\[
R = E[u(n) u^H(n)]
= E[|A|^2]
\begin{bmatrix} e^{-j\omega_1 n} \\ e^{-j\omega_1 (n-1)} \\ \vdots \\ e^{-j\omega_1 (n-M+1)} \end{bmatrix}
\left[ e^{+j\omega_1 n}, \; e^{+j\omega_1 (n-1)}, \; \ldots, \; e^{+j\omega_1 (n-M+1)} \right]
+ \sigma_v^2 I
= \sigma_1^2\, s(\omega_1) s^H(\omega_1) + \sigma_v^2 I,
\]
where \(\sigma_1^2 = E[|A|^2]\), \(s(\omega_1)\) is the sinusoidal vector at frequency \(\omega_1\), and \(I\) is the identity matrix.

(b) The tap-weight vector of the Wiener filter is \(w_o = R^{-1} p\). From part (a),
\[
R = \sigma_v^2 I + \sigma_1^2\, s(\omega_1) s^H(\omega_1).
\]
We are given \(p = \sigma_0^2\, s(\omega_0)\). To invert the matrix R, we use the matrix inversion lemma, as described here: if
\[
A = B^{-1} + C D^{-1} C^H,
\]
then
\[
A^{-1} = B - B C \left( D + C^H B C \right)^{-1} C^H B.
\]
In our case,
\[
A = R, \qquad B^{-1} = \sigma_v^2 I, \qquad D^{-1} = \sigma_1^2, \qquad C = s(\omega_1).
\]
Hence,
\[
R^{-1} = \frac{1}{\sigma_v^2}
\left[ I - \frac{s(\omega_1) s^H(\omega_1)}{\dfrac{\sigma_v^2}{\sigma_1^2} + s^H(\omega_1) s(\omega_1)} \right].
\]
The corresponding value of the Wiener tap-weight vector is
\[
w_o = R^{-1} p
= \frac{\sigma_0^2}{\sigma_v^2}
\left[ s(\omega_0) - \frac{s(\omega_1) s^H(\omega_1) s(\omega_0)}{\dfrac{\sigma_v^2}{\sigma_1^2} + s^H(\omega_1) s(\omega_1)} \right].
\]
We note that \(s^H(\omega_1) s(\omega_1) = M\) and \(s^H(\omega_1) s(\omega_0)\) is a scalar. Hence,
\[
w_o = \frac{\sigma_0^2}{\sigma_v^2}\, s(\omega_0)
- \frac{\sigma_0^2}{\sigma_v^2}\,
\frac{s^H(\omega_1) s(\omega_0)}{\dfrac{\sigma_v^2}{\sigma_1^2} + M}\, s(\omega_1).
\]
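The lemma-based expression for \(R^{-1}\) can be spot-checked numerically; the filter length, frequency, and variances used below are arbitrary assumed values chosen only for the check:

    import numpy as np

    # Assumed values for the check only
    M, w1 = 4, 0.3 * np.pi
    sigma_v2, sigma_1_sq = 0.5, 2.0

    s1 = np.exp(1j * w1 * np.arange(M))        # sinusoidal vector at omega_1
    R = sigma_v2 * np.eye(M) + sigma_1_sq * np.outer(s1, s1.conj())

    # R^{-1} according to the matrix-inversion-lemma expression above
    R_inv_lemma = (1.0 / sigma_v2) * (
        np.eye(M)
        - np.outer(s1, s1.conj()) / (sigma_v2 / sigma_1_sq + s1.conj() @ s1)
    )

    print(np.allclose(R_inv_lemma, np.linalg.inv(R)))   # expected: True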
5.12 (a) Under hypothesis \(H_1\), we have \(u = s + v\). The correlation matrix of u equals
\[
R = E[u u^T] = s s^T + R_v,
\]
where \(R_v = E[v v^T]\). The tap-weight vector \(w_k\) is chosen so that \(w_k^T u\) yields an optimum estimate of the kth element of s. Thus, with \(s(k)\) treated as the desired response, the cross-correlation vector between u and \(s(k)\) equals:
\[
p_k = E[u\, s(k)] = s\, s(k), \qquad k = 1, 2, \ldots, M.
\]
Hence, the Wiener-Hopf equation yields the optimum value of \(w_k\) as
\[
w_{ko} = R^{-1} p_k = \left( s s^T + R_v \right)^{-1} s\, s(k), \qquad k = 1, 2, \ldots, M. \qquad (11)
\]
To apply the matrix inversion lemma, we let
\[
A = R, \qquad B^{-1} = R_v, \qquad C = s, \qquad D = 1.
\]
Hence,
\[
R^{-1} = R_v^{-1} - \frac{R_v^{-1} s s^T R_v^{-1}}{1 + s^T R_v^{-1} s}. \qquad (12)
\]
Substituting Eq. (12) into Eq. (11):
\[
w_{ko} = \left( R_v^{-1} - \frac{R_v^{-1} s s^T R_v^{-1}}{1 + s^T R_v^{-1} s} \right) s\, s(k)
= \frac{R_v^{-1} s \left( 1 + s^T R_v^{-1} s \right) - R_v^{-1} s\, s^T R_v^{-1} s}{1 + s^T R_v^{-1} s}\, s(k)
= \frac{s(k)}{1 + s^T R_v^{-1} s}\, R_v^{-1} s.
\]
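The closed form for \(w_{ko}\) can be checked against a direct solve of Eq. (11); the vector s, the covariance \(R_v\), and the index k below are randomly generated assumptions rather than problem data:

    import numpy as np

    rng = np.random.default_rng(0)
    M = 5
    s = rng.standard_normal(M)             # assumed signal vector
    B = rng.standard_normal((M, M))
    R_v = B @ B.T + M * np.eye(M)          # assumed positive-definite noise covariance
    s_k = s[2]                             # k-th element of s as desired response

    R = np.outer(s, s) + R_v
    w_direct = np.linalg.solve(R, s * s_k) # (s s^T + R_v)^{-1} s s(k)

    Rv_inv_s = np.linalg.solve(R_v, s)
    w_closed = s_k / (1.0 + s @ Rv_inv_s) * Rv_inv_s

    print(np.allclose(w_direct, w_closed)) # expected: True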
(b) The output signal-to-noise ratio equals
\[
\mathrm{SNR} = \frac{E[(w^T s)^2]}{E[(w^T v)^2]}
= \frac{w^T s s^T w}{w^T E[v v^T] w}
= \frac{w^T s s^T w}{w^T R_v w}. \qquad (13)
\]
Since \(R_v\) is positive definite, we may write \(R_v = R_v^{1/2} R_v^{1/2}\). Define the vector:
\[
a = R_v^{1/2} w, \quad \text{or equivalently} \quad w = R_v^{-1/2} a. \qquad (14)
\]
Accordingly, we may rewrite Eq. (13) as follows:
\[
\mathrm{SNR} = \frac{a^T R_v^{-1/2} s s^T R_v^{-1/2} a}{a^T a}, \qquad (15)
\]
where we have used the symmetric property of \(R_v\). Define the normalized vector
\[
\bar{a} = \frac{a}{\|a\|},
\]
where \(\|a\|\) is the norm of \(a\). Then we may rewrite Eq. (15) as
\[
\mathrm{SNR} = \bar{a}^T R_v^{-1/2} s s^T R_v^{-1/2} \bar{a}
= \left| \bar{a}^T R_v^{-1/2} s \right|^2.
\]
Thus the output signal-to-noise ratio SNR equals the squared magnitude of the inner product of the two vectors \(\bar{a}\) and \(R_v^{-1/2} s\). This inner product is maximized when \(\bar{a}\) is aligned with \(R_v^{-1/2} s\); that is, we may take
\[
a_S = R_v^{-1/2} s. \qquad (16)
\]
Let \(w_S\) denote the value of the tap-weight vector that corresponds to Eq. (16). Hence, the use of Eq. (14) in Eq. (16) yields:
\[
w_S = R_v^{-1/2} \left( R_v^{-1/2} s \right) = R_v^{-1} s.
\]
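A small numerical experiment, with randomly generated s and \(R_v\), illustrates that \(w_S = R_v^{-1} s\) attains the largest output SNR among randomly drawn tap-weight vectors:

    import numpy as np

    rng = np.random.default_rng(1)
    M = 5
    s = rng.standard_normal(M)             # assumed signal vector
    B = rng.standard_normal((M, M))
    R_v = B @ B.T + M * np.eye(M)          # assumed positive-definite noise covariance

    def snr(w):
        """Output SNR of Eq. (13) for a given tap-weight vector w."""
        return (w @ s) ** 2 / (w @ R_v @ w)

    w_S = np.linalg.solve(R_v, s)          # the solution R_v^{-1} s derived above
    best = snr(w_S)                        # equals s^T R_v^{-1} s
    random_best = max(snr(rng.standard_normal(M)) for _ in range(1000))

    print(best >= random_best)             # expected: True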
(c)
Exercise 4.

6.1
6.4
6.5
6.9