EXAMENSARBETEN I MATEMATIK
MATEMATISKA INSTITUTIONEN, STOCKHOLMS UNIVERSITET, STOCKHOLM

Operator theory in finite-dimensional vector spaces

by Kharema Ebshesh

No 12
Operator theory in finite-dimensional vector spaces

Kharema Ebshesh

Degree project in mathematics (Examensarbete i matematik), 15 higher-education credits, continuation course
Supervisors: Andrzej Szulkin and Yishao Zhou
2008
Abstract

The theory of linear operators is an extensive area. This thesis is about linear operators in finite-dimensional vector spaces. We study symmetric, unitary, isometric and normal operators, and orthogonal projections in unitary space, together with the eigenvalue problem and the resolvent. We give a proof of the minimax principle at the end.
Acknowledgements

I have the pleasure to thank Prof. Andrzej Szulkin and Prof. Yishao Zhou for their help. I enjoyed working with them and learned a great deal during the preparation of this thesis. I would also like to thank all the teachers I had in the Mathematics Department, and Prof. Mikael Passare for inviting me to Sweden. Finally, I will not forget my teachers in Libya, who supported me during my studies.

/Kharema
Contents

Introduction
1 Operators in vector spaces
  1.1 Vector spaces and adjoint vector spaces
  1.2 Linear operators
  1.3 Projections
  1.4 The adjoint operator
  1.5 The eigenvalue problem
  1.6 The resolvent
2 Operators in unitary spaces
  2.1 Unitary spaces
  2.2 Symmetric operators
  2.3 Unitary, isometric and normal operators
  2.4 Orthogonal projections
  2.5 The eigenvalue problem
  2.6 The minimax principle
References
Introduction

This report contains two sections. In Section 1 we study linear operators in finite-dimensional vector spaces; projections and adjoint operators are introduced, and in particular we study the eigenvalue problem and some properties of the resolvent. In Section 2, operators in unitary spaces, such as symmetric, unitary, isometric and normal operators, are considered, and we study the eigenvalue problem for these operators. Finally we prove the minimax principle for eigenvalues. The results in this report are primarily taken from Chapter 1 of [1].
1 Operators in vector spaces

1.1 Vector spaces and adjoint vector spaces

Let X be a vector space and dim X its dimension. In this thesis we shall always assume that dim X < ∞. A subset M of X is a subspace if M is itself a vector space. We define the codimension of M in X by setting codim M = dim X − dim M.

Example 1. The set X = C^N of all ordered N-tuples u = (ξ_j) = (ξ_1, …, ξ_N) of complex numbers is an N-dimensional vector space.

Let dim X = N. If x_1, …, x_N are linearly independent, then they form a basis of X, and each u ∈ X can be uniquely represented as

    u = Σ_{j=1}^N ξ_j x_j,    (1)

where the scalars ξ_j are called the coefficients of u with respect to this basis.

Example 2. In C^N the N vectors x_j = (0, …, 0, 1, 0, …, 0), with 1 in the j-th place, j = 1, …, N, form a basis (the canonical basis). The coefficients of u = (ξ_j) with respect to the canonical basis are the ξ_j themselves.

If {x_k'} is another basis of X, there is a system of linear relations

    x_k' = Σ_j γ_jk x_j,    k = 1, …, N.    (2)

When ξ_j, ξ_j' are the coefficients of the same vector u with respect to the bases {x_j} and {x_j'} respectively, they are related to each other by

    ξ_j = Σ_k γ_jk ξ_k',    j = 1, …, N.    (3)

The inverse transformations to (2) and (3) are

    x_j = Σ_k γ̂_kj x_k',    ξ_k' = Σ_j γ̂_kj ξ_j,    (4)

where (γ̂_jk) is the inverse of the matrix (γ_jk):
    Σ_i γ̂_ji γ_ik = Σ_i γ_ji γ̂_ik = δ_jk = { 1, j = k; 0, j ≠ k }.    (5)

Let M_1, M_2 be subspaces of X. We define the subspace M_1 + M_2 by

    M_1 + M_2 = {x_1 + x_2 : x_1 ∈ M_1, x_2 ∈ M_2}.    (6)

Theorem 1. If M_1, M_2 are subspaces of X, then

    dim(M_1 + M_2) = dim M_1 + dim M_2 − dim(M_1 ∩ M_2).    (7)

Proof. We can see that M_1 ∩ M_2 is a subspace of both M_1 and M_2. Suppose that dim M_1 = m_1, dim M_2 = m_2 and dim(M_1 ∩ M_2) = m. We have to prove that dim(M_1 + M_2) = m_1 + m_2 − m. Suppose that {x_1, …, x_m} is a basis of M_1 ∩ M_2. We can extend this basis to a basis of M_1 and to a basis of M_2; let, for example,

    {x_1, …, x_m, x_1^(1), …, x_{m_1−m}^(1)},    {x_1, …, x_m, x_1^(2), …, x_{m_2−m}^(2)}

be two such bases of M_1 and M_2 respectively, and let

    B = {x_1, …, x_m, x_1^(1), …, x_{m_1−m}^(1), x_1^(2), …, x_{m_2−m}^(2)}.

Then B generates M_1 + M_2. Now we will show that the vectors in B are linearly independent. Suppose that

    α_1 x_1 + … + α_m x_m + β_1 x_1^(1) + … + β_{m_1−m} x_{m_1−m}^(1) + γ_1 x_1^(2) + … + γ_{m_2−m} x_{m_2−m}^(2) = 0.    (8)

Set

    u = α_1 x_1 + … + α_m x_m + β_1 x_1^(1) + … + β_{m_1−m} x_{m_1−m}^(1),    (9)

so that by (8) we also have

    u = −γ_1 x_1^(2) − … − γ_{m_2−m} x_{m_2−m}^(2);    (10)
thus u ∈ M_1 by (9) and u ∈ M_2 by (10). Hence u ∈ M_1 ∩ M_2. Therefore there are ζ_i, i = 1, …, m, such that u = Σ ζ_i x_i, and

    ζ_1 x_1 + … + ζ_m x_m + γ_1 x_1^(2) + … + γ_{m_2−m} x_{m_2−m}^(2) = 0.

Since {x_1, …, x_m, x_1^(2), …, x_{m_2−m}^(2)} is a basis of M_2, it follows that γ_1 = … = γ_{m_2−m} = 0. Substituting this in (8), we obtain

    α_1 x_1 + … + α_m x_m + β_1 x_1^(1) + … + β_{m_1−m} x_{m_1−m}^(1) = 0,

thus α_1 = … = α_m = 0 and β_1 = … = β_{m_1−m} = 0. Hence B is a basis of M_1 + M_2. Since B has m_1 + m_2 − m vectors, we get the required result. ∎

Definition 1. (Direct sum) Let M_1, …, M_s be subspaces of X. X is a direct sum of them if each u ∈ X has a unique expression of the form

    u = Σ_j u_j,    u_j ∈ M_j, j = 1, …, s.    (11)

Then we write X = M_1 ⊕ … ⊕ M_s.

Proposition 1. If X = M_1 ⊕ … ⊕ M_s, then
1. X = M_1 + … + M_s;
2. M_i ∩ M_j = {0}, where i ≠ j;
3. dim X = Σ_j dim M_j.

Proof. If X = M_1 ⊕ … ⊕ M_s, then each u ∈ X can be uniquely expressed as u = u_1 + … + u_s, where u_j ∈ M_j. Hence X = M_1 + … + M_s. Now let u ∈ M_i ∩ M_j, where i ≠ j. Then u has the two expressions u = 0 + … + u_i + … + 0 (with u_i = u in the i-th place) and u = 0 + … + u_j + … + 0 (with u_j = u in the j-th place). Since the expression (11) for u is unique, u_i = u_j = 0, and hence u = 0. To show the last equality in the proposition we assume that dim X = N and dim M_j = m_j, and we have to show that N = Σ_j m_j.
Let {x_i^(j) : i = 1, …, m_j} be a basis of M_j, j = 1, …, s. Suppose that

    Σ_{i=1}^{m_1} α_{1i} x_i^(1) + Σ_{i=1}^{m_2} α_{2i} x_i^(2) + … + Σ_{i=1}^{m_s} α_{si} x_i^(s) = 0.    (12)

Each sum Σ_i α_{ji} x_i^(j) belongs to M_j, and since X is the direct sum of the M_j, the representation (12) of 0 is unique; hence Σ_{i=1}^{m_j} α_{ji} x_i^(j) = 0 for j = 1, …, s. Since the x_i^(j) are linearly independent for each fixed j, α_{ji} = 0 for i = 1, …, m_j, j = 1, …, s. Hence the vectors x_i^(j), i = 1, …, m_j, j = 1, …, s, are linearly independent. Now suppose u ∈ X, u = u_1 + … + u_s, u_j ∈ M_j. Since each u_j is expressed in a unique way as a linear combination of the x_i^(j), u is expressed in a unique way as a linear combination of all the x_i^(j). Hence these vectors form a basis of X, and

    dim X = Σ_j dim M_j. ∎

Definition 2. We call ‖u‖ a norm of u ∈ X if
(i) ‖u‖ ≥ 0 for all u ∈ X, and ‖u‖ = 0 iff u = 0;
(ii) ‖αu‖ = |α| ‖u‖ for all α ∈ C, u ∈ X;
(iii) ‖u + v‖ ≤ ‖u‖ + ‖v‖ for all u, v ∈ X.

Example 3. With ξ_j the coefficients of u with respect to the basis {x_j} of X,

    ‖u‖ = max_j |ξ_j|,    ‖u‖ = Σ_j |ξ_j|,    ‖u‖ = (Σ_j |ξ_j|²)^{1/2}

are three different norms.

We show only that the last expression is a norm. Let ξ_j, η_j be the coefficients of u, v respectively. Then:

– ‖u‖ = (Σ |ξ_j|²)^{1/2} ≥ 0, and ‖u‖ = 0 iff ξ_j = 0 for all j.

– ‖αu‖ = (Σ |αξ_j|²)^{1/2} = (|α|² Σ |ξ_j|²)^{1/2} = |α| (Σ |ξ_j|²)^{1/2}.

– It follows from the Schwarz inequality (see e.g. [2]) that

    (Σ |ξ_j| |η_j|)² ≤ Σ |ξ_j|² · Σ |η_j|²,

therefore

    Σ |ξ_j| |η_j| ≤ (Σ |ξ_j|²)^{1/2} (Σ |η_j|²)^{1/2}.

Multiplying by 2 and adding Σ |ξ_j|² + Σ |η_j|² to both sides gives

    Σ (|ξ_j|² + 2|ξ_j||η_j| + |η_j|²) ≤ Σ |ξ_j|² + 2 (Σ |ξ_j|²)^{1/2} (Σ |η_j|²)^{1/2} + Σ |η_j|² = [(Σ |ξ_j|²)^{1/2} + (Σ |η_j|²)^{1/2}]².

Since |ξ_j + η_j| ≤ |ξ_j| + |η_j|, this yields

    (Σ |ξ_j + η_j|²)^{1/2} ≤ (Σ |ξ_j|²)^{1/2} + (Σ |η_j|²)^{1/2},

that is, ‖u + v‖ ≤ ‖u‖ + ‖v‖.

The adjoint space

Definition 3. (Linear forms and semilinear forms) A complex-valued function f[u] defined for u ∈ X is called a linear form if

    f[αu + βv] = α f[u] + β f[v]    (13)

for all u, v ∈ X and all scalars α, β, and a semilinear form if

    f[αu + βv] = ᾱ f[u] + β̄ f[v].    (14)

Example 4. Let x_1, …, x_N be a fixed basis of X. It follows from (13) that a linear form on X can be expressed in the form

    f[u] = Σ_{j=1}^N α_j ξ_j,    where u = (ξ_j) and α_j = f[x_j],
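The three norms of Example 3 are easy to experiment with numerically. The sketch below is a minimal pure-Python illustration on C²; the function names are ours, not the thesis's, and it checks norm axioms (ii) and (iii) of Definition 2 for the Euclidean norm.

```python
# Sketch: the three norms of Example 3 on C^2, with a numerical check of
# homogeneity and the triangle inequality for the Euclidean norm.
# Helper names are illustrative only.

def norm_max(u):
    """max_j |xi_j|"""
    return max(abs(x) for x in u)

def norm_sum(u):
    """sum_j |xi_j|"""
    return sum(abs(x) for x in u)

def norm_euclid(u):
    """(sum_j |xi_j|^2)^(1/2)"""
    return sum(abs(x) ** 2 for x in u) ** 0.5

u = [3 + 4j, 1.0]
v = [1.0, -2.0]

# axiom (ii): ||alpha u|| = |alpha| ||u||
alpha = 2 - 1j
au = [alpha * x for x in u]
assert abs(norm_euclid(au) - abs(alpha) * norm_euclid(u)) < 1e-12

# axiom (iii): ||u + v|| <= ||u|| + ||v||
w = [x + y for x, y in zip(u, v)]
assert norm_euclid(w) <= norm_euclid(u) + norm_euclid(v)
```

The three norms really are different: for this u they give 5, 6 and √26 respectively.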
and similarly, by (14), a semilinear form on X can be expressed in the form

    f[u] = Σ_j α_j ξ̄_j.

f[u] is a semilinear form if and only if conj(f[u]) is a linear form.

Definition 4. The set of all semilinear forms on X is a vector space, called the adjoint (or conjugate) space of X and denoted by X*.

Let us denote f[u] by (f, u), where f is a semilinear form. It follows from the definition that (f, u) is linear in f and semilinear in u:

    (αf + βg, u) = α(f, u) + β(g, u),    (15)
    (f, αu + βv) = ᾱ(f, u) + β̄(f, v).    (16)

Example 5. For X = C^N, X* may be regarded as the set of all row vectors f = (α_j), whereas X is the set of all column vectors u = (ξ_j), and (f, u) = Σ α_j ξ̄_j.

The adjoint basis

The principal content of this part is the following theorem.

Theorem 2. Suppose {x_j} is a basis of X, and let e_1, …, e_N be the vectors in X* defined by

    (e_j, x_k) = δ_jk = { 1, j = k; 0, j ≠ k }.    (17)

Then {e_j} is a basis of X*.

Proof. First we show that e_j satisfying (17) exist. Define e_j, j = 1, …, N, as the semilinear form of Example 4 with coefficients α_k = δ_jk, that is, (e_j, u) = ξ̄_j for u = Σ ξ_k x_k. Then (e_j, x_k) = δ_jk.

Next we show that the {e_j} generate X*. Let f ∈ X*, and suppose (f, x_1) = α_1, (f, x_2) = α_2, …, (f, x_N) = α_N. Put g = Σ α_j e_j. Then

    (g, x_1) = (Σ α_j e_j, x_1) = α_1 (e_1, x_1) + α_2 (e_2, x_1) + … + α_N (e_N, x_1) = α_1,

and similarly (g, x_j) = α_j for j = 2, …, N, so that f[x_j] = g[x_j], j = 1, …, N. Since f and g are equal on the vectors of the basis {x_j},

    f = g = α_1 e_1 + … + α_N e_N,

i.e., f is a linear combination of e_1, …, e_N. It remains to show that e_1, …, e_N are linearly independent. Suppose that α_1 e_1 + … + α_N e_N = 0. Then

    0 = (0, x_1) = (α_1 e_1 + … + α_N e_N, x_1) = α_1 (e_1, x_1) + … + α_N (e_N, x_1) = α_1,

and similarly α_k = 0 for k = 2, …, N. Thus e_1, …, e_N are linearly independent, and {e_j} is a basis of X*. ∎

Let {x_j} be a basis of X, and let {e_1, …, e_N}, {e_1', …, e_N'} be vectors in X* satisfying (17). By Theorem 2 both {e_j} and {e_j'} are bases of X*, so that

    e_1' = Σ_{j=1}^N α_j^(1) e_j

for some coefficients α_j^(1), and

    (e_1', x_1) = α_1^(1) (e_1, x_1) + α_2^(1) (e_2, x_1) + … + α_N^(1) (e_N, x_1) = α_1^(1) = 1,

while (e_1', x_2) = α_2^(1) = 0, …, (e_1', x_N) = α_N^(1) = 0. Hence α_j^(1) = δ_j1 and e_1' = e_1. Similarly e_j' = e_j for j = 2, …, N. Hence the basis {e_j} of X* that satisfies (17) is unique. It is called the basis adjoint to the basis {x_j} of X.

Theorem 2 shows that

    dim X* = dim X.    (18)

Let {x_j} and {x_j'} be two bases of X related to each other by (2). Then the corresponding adjoint bases {e_j} and {e_j'} of X* are related to each other by the formulas

    e_j = Σ_k γ̄_jk e_k',    e_k' = Σ_j conj(γ̂_kj) e_j.    (19)

Furthermore we have

    γ̄_jk = (e_j, x_k'),    conj(γ̂_kj) = (e_k', x_j)    (20)

(the conjugates appear because the pairing is semilinear in its second argument).

Definition 5. Let f ∈ X*. The norm ‖f‖ is defined by

    ‖f‖ = sup_{0 ≠ u ∈ X} |(f, u)| / ‖u‖ = sup_{‖u‖=1} |(f, u)|.    (21)
1.2 Linear operators

Definition 6. Let X, Y be two vector spaces. A function T that sends every vector u of X into a vector v = Tu of Y is called a linear transformation, or a linear operator, on X to Y if

    T(α_1 u_1 + α_2 u_2) = α_1 T u_1 + α_2 T u_2    (22)

for all u_1, u_2 ∈ X and all scalars α_1, α_2. If Y = X we say T is a linear operator in X.

If M is a subspace of X, then T(M) is a subspace of Y. The subspace T(X) of Y is called the range of T and is denoted by R(T); dim R(T) is called the rank of T. The codimension of R(T) with respect to Y is called the deficiency of T and is denoted by def T; hence

    rank T + def T = dim Y.    (23)

The set of all u ∈ X such that Tu = 0 is a subspace of X, called the kernel or null space of T and denoted by N(T). dim N(T) is denoted by nul T, and we have

    rank T + nul T = dim X.    (24)

If both nul T and def T are zero, then T is one-to-one and onto; in this case the inverse operator T^{-1} is defined.

Let {x_k} be a basis of X. Each u ∈ X has the expansion (1), so that

    Tu = Σ_{k=1}^N ξ_k T x_k,    N = dim X.    (25)

Thus an operator T on X to Y is determined by giving the values of T x_k, k = 1, …, N. If {y_j} is a basis of Y, each T x_k has the expansion

    T x_k = Σ_{j=1}^M τ_jk y_j,    M = dim Y.    (26)

Substituting (26) into (25), we see that the coefficients η_j of v = Tu are given by

    η_j = Σ_{k=1}^N τ_jk ξ_k,    j = 1, …, M.    (27)
In this way an operator T on X to Y is represented by an M × N matrix (τ_jk) with respect to the bases {x_k}, {y_j} of X, Y respectively. When (τ_jk') is the matrix representing the same operator T with respect to a new pair of bases {x_k'}, {y_j'}, we can find the relationship between the matrices (τ_jk) and (τ_jk') by combining (26), the analogous expansion of T x_k' in terms of {y_j'}, and the formulas (2), (4):

    T x_k = T(Σ_h γ̂_hk x_h')
          = Σ_h γ̂_hk T x_h'
          = Σ_h γ̂_hk Σ_{i=1}^M τ_ih' y_i'
          = Σ_{i,h} τ_ih' γ̂_hk y_i'
          = Σ_{i,h} τ_ih' γ̂_hk Σ_j γ_ji y_j
          = Σ_j (Σ_{i,h} γ_ji τ_ih' γ̂_hk) y_j
          = Σ_j τ_jk y_j,

where x_k = Σ_h γ̂_hk x_h' and y_i' = Σ_j γ_ji y_j. Thus the matrix (τ_jk) is the product of the three matrices (γ_jk), (τ_jk') and (γ̂_jk):

    (τ_jk) = (γ_jk)(τ_jk')(γ̂_jk).    (28)

In particular, when T is an operator on X to itself, det(τ_jk) and the trace of (τ_jk), i.e. Σ_j τ_jj, are determined by the operator T itself. More precisely, we shall show that det(τ_jk) and tr(τ_jk) are the same for each choice of the basis of X. In this case (28) becomes

    (τ_jk') = (γ_jk)(τ_jk)(γ̂_jk).    (29)

We first show that tr(γτ) = tr(τγ). Let γτ = (a_jk) and τγ = (b_jk); then

    tr(γτ) = Σ_j a_jj = Σ_j Σ_k γ_jk τ_kj = Σ_k Σ_j τ_kj γ_jk = Σ_k b_kk = tr(τγ).

Since (γ̂_jk) is
the inverse of the matrix (γ_jk), and we know that det(γτγ̂) = det(γ) det(τ) det(γ̂), we have det(τ') = det(τ), and

    tr(τ') = tr(γτγ̂) = tr(τγ̂γ) = tr(τ).    (30)

Example 6. If {f_j} is the basis of Y* adjoint to {y_j}, then

    (f_j, T x_k) = τ̄_jk.    (31)

Proof. Since T x_k = Σ_i τ_ik y_i and the pairing (f, u) is semilinear in u,

    (f_j, T x_k) = (f_j, Σ_i τ_ik y_i) = Σ_i τ̄_ik (f_j, y_i) = Σ_i τ̄_ik δ_ij = τ̄_jk. ∎

Example 7. Let {x_j} and {e_j} be the bases of X and X*, respectively, which are adjoint to each other. If T is an operator on X to itself, we have

    conj(tr T) = Σ_j (e_j, T x_j),    equivalently tr T = Σ_j conj (e_j, T x_j).    (32)

Proof. Similarly as in the last example, T x_j = Σ_i τ_ij x_i, therefore

    Σ_j (e_j, T x_j) = Σ_j (e_j, Σ_i τ_ij x_i) = Σ_j Σ_i τ̄_ij (e_j, x_i) = Σ_j Σ_i τ̄_ij δ_ij = Σ_j τ̄_jj = conj(tr T). ∎
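The invariance just proved can be watched numerically: conjugating the matrix of T by an invertible change-of-basis matrix, as in (29), leaves the trace and determinant unchanged. A 2×2 pure-Python sketch (matrices and helper names are ours):

```python
# Sketch: tr and det of the matrix of T do not depend on the basis,
# cf. (29)-(30), checked by conjugating with a change-of-basis matrix.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(g):
    d = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    return [[g[1][1] / d, -g[0][1] / d], [-g[1][0] / d, g[0][0] / d]]

def trace(a):
    return a[0][0] + a[1][1]

def det2(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

tau = [[2.0, 1.0], [0.0, 3.0]]        # matrix of T in one basis
gamma = [[1.0, 2.0], [1.0, 1.0]]      # an invertible (gamma_jk)
tau_new = matmul(matmul(gamma, tau), inv2(gamma))   # gamma tau gamma^{-1}

assert abs(trace(tau_new) - trace(tau)) < 1e-12   # tr is basis-independent
assert abs(det2(tau_new) - det2(tau)) < 1e-12     # det is basis-independent
```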
If T and S are two linear operators on X to Y, we define their linear combinations by

    (αS + βT)u = α(Su) + β(Tu).

If S maps X to Y and T maps Y to Z, then we set (TS)u = T(Su).

Example 8.
1. rank(S + T) ≤ rank S + rank T.
2. rank(TS) ≤ min(rank T, rank S).

Proof. (1) Let R(S) = M_1, R(T) = M_2, R(S + T) = M. Each v ∈ M can be expressed in the form v = v_1 + v_2 with v_1 ∈ M_1, v_2 ∈ M_2, so M ⊂ M_1 + M_2, and by Theorem 1

    dim M ≤ dim M_1 + dim M_2 − dim(M_1 ∩ M_2) ≤ dim M_1 + dim M_2.

Hence rank(S + T) ≤ rank S + rank T.

(2) Regard S : X → Y and TS : X → Z. By (24), rank S + nul S = dim X and rank(TS) + nul(TS) = dim X. Since nul S ≤ nul(TS), we get rank(TS) ≤ rank S. Now regard T : Y → Z and TS : X → Z. By (23), rank T + def T = dim Z and rank(TS) + def(TS) = dim Z. Since def T ≤ def(TS), rank(TS) ≤ rank T. Thus rank(TS) ≤ min(rank T, rank S). ∎

Let us denote by L(X, Y) the set of all operators on X to Y; it is a vector space. Let L(X) = L(X, X). Then we have:

    T0 = 0T = 0,    T1 = 1T = T  (1 is the identity operator);
    T^m T^n = T^{m+n},    (T^m)^n = T^{mn},    m, n = 0, 1, ….

If T ∈ L(X) is nonsingular, then T^{-1}T = TT^{-1} = 1 and we set T^{-n} = (T^{-1})^n; if S, T ∈ L(X) are nonsingular, then (TS)^{-1} = S^{-1}T^{-1}. For any polynomial P(z) = α_0 + α_1 z + … + α_n z^n in the indeterminate z, we define the operator

    P(T) = α_0 + α_1 T + … + α_n T^n.

Example 9. If S ∈ L(X, Y) and T ∈ L(Y, X), then ST ∈ L(Y) and TS ∈ L(X).
1.3 Projections

Let M, N be two complementary subspaces of X, X = M ⊕ N. Thus each u ∈ X can be uniquely expressed in the form

    u = u' + u'',    u' ∈ M, u'' ∈ N.

u' is called the projection of u on M along N. If v = v' + v'', then αu + βv has the projection αu' + βv' on M along N. If we set u' = Pu, it follows that P is a linear operator in X, called the projection operator, or projection, on M along N. 1 − P is the projection on N along M, and we have Pu = u if and only if u ∈ M, and Pu = 0 if and only if u ∈ N, that is,

    R(P) = N(1 − P) = M,    N(P) = R(1 − P) = N.

Furthermore, PPu = Pu' = u' = Pu, that is, P is idempotent: P² = P.

Remark 1. Any idempotent operator P is a projection. To show this, let M = R(P) and N = R(1 − P). If u' ∈ M, there is u such that Pu = u', and therefore Pu' = P²u = Pu = u'. Similarly, if u'' ∈ N, then u'' = (1 − P)u for some u, so Pu'' = (P − P²)u = 0. Now let u ∈ M ∩ N. Then u = Pu = 0, so M ∩ N = {0}. Thus each u ∈ X has the expression u = u' + u'' with u' = Pu ∈ M and u'' = (1 − P)u ∈ N, proving that P is the projection on M along N.

Example 10. If P is a projection, then tr P = dim R(P).

Proof. Since P is an operator in X, it can be represented by an n × n (n = dim X) matrix (τ_jk) with respect to a basis {x_j} of X, and Pu = u when u ∈ M = R(P). The basis can be chosen so that x_1, …, x_m ∈ M and x_{m+1}, …, x_n ∈ N, where N = (1 − P)X. Then

    P(α_1 x_1 + … + α_n x_n) = α_1 x_1 + … + α_m x_m.

Hence (τ_jk) is diagonal with τ_11 = … = τ_mm = 1 and τ_{m+1,m+1} = … = τ_nn = 0. So tr P = m = dim R(P). ∎

In general, if X = M_1 ⊕ … ⊕ M_s, then each u ∈ X can be uniquely expressed in the form u = u_1 + … + u_s, u_j ∈ M_j, j = 1, …, s. Then the operator P_j defined by P_j u = u_j ∈ M_j is a projection on M_j along N_j = M_1 ⊕ … ⊕ M_{j−1} ⊕ M_{j+1} ⊕ … ⊕ M_s, and we have

    Σ_j P_j = 1,    (33)
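Remark 1 and Example 10 can be illustrated with a concrete 2×2 projection. The matrix below (our choice, not from the thesis) projects onto M = span{(1, 0)} along N = span{(1, 1)}; it is idempotent, its trace equals dim R(P) = 1, and it splits any u into u' + u''.

```python
# Sketch: P projects onto M = span{(1,0)} along N = span{(1,1)}:
# for u = (a, b) we have u'' = (b, b) in N and u' = (a - b, 0) in M.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(m, u):
    return [m[0][0] * u[0] + m[0][1] * u[1], m[1][0] * u[0] + m[1][1] * u[1]]

P = [[1.0, -1.0], [0.0, 0.0]]

assert matmul(P, P) == P            # idempotent: P^2 = P (Remark 1)
assert P[0][0] + P[1][1] == 1.0     # tr P = 1 = dim R(P) (Example 10)

u = [5.0, 2.0]
u1 = apply(P, u)                    # u' = Pu
u2 = [u[0] - u1[0], u[1] - u1[1]]   # u'' = (1 - P)u
assert u1 == [3.0, 0.0]             # u' lies in M = span{(1,0)}
assert u2 == [2.0, 2.0]             # u'' lies in N = span{(1,1)}
```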
for

    (Σ_j P_j)(Σ_i u_i) = Σ_j P_j(Σ_i u_i) = Σ_j u_j = u,

and

    P_k P_j = δ_jk P_j,    (34)

because P_k P_j (Σ_i u_i) = P_k u_j = δ_kj u_j = δ_kj P_j u.

Note that, conversely, if operators P_j satisfy (33) and (34), then X is the direct sum of the subspaces R(P_j). To show this, let M_j = R(P_j). For u ∈ X, by (33) we have u = Σ_j P_j u ∈ M_1 + … + M_s. Moreover, M_i ∩ M_j = {0} for i ≠ j, because if u ∈ M_i ∩ M_j, then u = P_i u_1 = P_j u_2 for some u_1, u_2, and by (34)

    u = P_i u_1 = P_i² u_1 = P_i P_j u_2 = 0.

Hence X = M_1 ⊕ … ⊕ M_s. Since each P_j is idempotent, it follows from Remark 1 that P_j is the projection on M_j along N_j.

1.4 The adjoint operator

Definition 7. Let T ∈ L(X, Y). A function T* on Y* to X* is called the adjoint operator of T if

    (T*g, u) = (g, Tu),    g ∈ Y*, u ∈ X.    (35)

Then

    (T*(α_1 g_1 + α_2 g_2), u) = (α_1 g_1 + α_2 g_2, Tu) = α_1(g_1, Tu) + α_2(g_2, Tu) = α_1(T*g_1, u) + α_2(T*g_2, u),

so that T*(α_1 g_1 + α_2 g_2) = α_1 T*g_1 + α_2 T*g_2. Therefore T* is a linear operator on Y* to X*, that is, T* ∈ L(Y*, X*). The operation * has the following properties:

1. (αS + βT)* = ᾱS* + β̄T*, for S, T ∈ L(X, Y) and α, β ∈ C.
2. (TS)* = S*T*, for T ∈ L(Y, Z) and S ∈ L(X, Y). Note that S* ∈ L(Y*, X*) and T* ∈ L(Z*, Y*), so that S*T* ∈ L(Z*, X*). Then

    ((TS)*h, u) = (h, TSu) = (T*h, Su) = (S*T*h, u),    h ∈ Z*, u ∈ X.

Hence 2 holds.

Example 11. If T ∈ L(X), we have 0* = 0 and 1* = 1.

If {x_k}, {y_j} are bases in X, Y respectively, T ∈ L(X, Y) is represented by a matrix (τ_jk) in these bases, and {e_k}, {f_j} are the adjoint bases of X*, Y* respectively, then the operator T* ∈ L(Y*, X*) can be represented by a matrix (τ*_kj). These matrices are given by (f_j, T x_k) = τ̄_jk, according to (31), and (T*f_j, x_k) = τ*_kj (see the argument below); since (T*f_j, x_k) = (f_j, T x_k) by (35), we obtain

    τ*_kj = τ̄_jk,    k = 1, …, N = dim X, j = 1, …, M = dim Y.    (36)
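The defining relation (35) can be checked numerically on C² with the pairing of Example 5. In this sketch the matrix of T* is taken as the conjugate transpose of the matrix of T, in line with (36); the sample matrix and all names are ours.

```python
# Sketch: on X = Y = C^2 with the pairing (f, u) = sum_j f_j * conj(u_j)
# of Example 5, the conjugate-transpose matrix satisfies the defining
# relation (35) of the adjoint: (T* g, u) = (g, T u).
import random

def pair(f, u):
    return sum(fj * uj.conjugate() for fj, uj in zip(f, u))

def apply(m, x):
    return [sum(m[j][k] * x[k] for k in range(2)) for j in range(2)]

T = [[1 + 2j, -1j], [0.5 + 0j, 3 - 1j]]
Tstar = [[T[j][i].conjugate() for j in range(2)] for i in range(2)]

random.seed(0)
for _ in range(20):
    g = [complex(random.random(), random.random()) for _ in range(2)]
    u = [complex(random.random(), random.random()) for _ in range(2)]
    assert abs(pair(apply(Tstar, g), u) - pair(g, apply(T, u))) < 1e-12
```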
To show that (T*f_j, x_k) = τ*_kj, we first write T*f_j = Σ_i τ*_ij e_i and then compute:

    (T*f_j, x_k) = (Σ_i τ*_ij e_i, x_k) = Σ_i τ*_ij (e_i, x_k) = Σ_i τ*_ij δ_ik = τ*_kj.

Example 12. If T ∈ L(X), we have

    det T* = conj(det T),    tr T* = conj(tr T)    (37)

and

    (T*)^{-1} = (T^{-1})*.    (38)

Since det(τ_jk) and tr(τ_jk) are the same for each choice of the basis of X, and similarly with (τ*_kj), (37) is satisfied according to (36). To prove (38), note that T*(T^{-1})* = (T^{-1}T)* = 1* = 1 and (T^{-1})*T* = (TT^{-1})* = 1.

Definition 8. (Norm of T) The norm of T is defined by

    ‖T‖ = sup_{0 ≠ u ∈ X} ‖Tu‖ / ‖u‖ = sup_{‖u‖=1} ‖Tu‖,    T ∈ L(X, Y).    (39)

Example 13.

    ‖T‖ = sup_{0 ≠ u ∈ X, 0 ≠ f ∈ Y*} |(f, Tu)| / (‖f‖ ‖u‖) = sup_{‖u‖=1, ‖f‖=1} |(f, Tu)|.    (40)

Proof. We first prove that the expression for ‖T‖ given by (40) is a norm on L(X, Y):

– ‖T‖ = sup_{‖u‖=‖f‖=1} |(f, Tu)| ≥ 0, and ‖T‖ = 0 iff T = 0.

– ‖αT‖ = sup |(f, αTu)| = sup |α| |(f, Tu)| = |α| ‖T‖.

– ‖T + S‖ = sup |(f, Tu + Su)| = sup |(f, Tu) + (f, Su)| ≤ sup (|(f, Tu)| + |(f, Su)|) ≤ sup |(f, Tu)| + sup |(f, Su)| = ‖T‖ + ‖S‖.

Hence ‖T‖ defined in (40) is a norm.
We have to show that (39) and (40) are equivalent. To see this we recall the definition (21) of ‖f‖, and note that |(f, u)| ≤ ‖f‖ ‖u‖. This implies that

    ‖u‖ = sup_{0 ≠ f ∈ X*} |(f, u)| / ‖f‖ = sup_{‖f‖=1} |(f, u)|

(see Section I.2.5 in [1]). It follows that ‖Tu‖ = sup_{‖f‖=1} |(f, Tu)|, and hence

    sup_{‖u‖=1} ‖Tu‖ = sup_{‖u‖=1} sup_{‖f‖=1} |(f, Tu)|,

so the two expressions agree.

Since ‖T‖ = sup_{u ≠ 0} ‖Tu‖/‖u‖, we have ‖Tu‖ ≤ ‖T‖ ‖u‖. Hence ‖TSu‖ ≤ ‖T‖ ‖Su‖ ≤ ‖T‖ ‖S‖ ‖u‖, thus

    ‖TS‖ ≤ ‖T‖ ‖S‖    (41)

for T ∈ L(Y, Z) and S ∈ L(X, Y). If T ∈ L(X, Y), then T* ∈ L(Y*, X*) and

    ‖T*‖ = ‖T‖.    (42)

This follows from (40), according to which

    ‖T*‖ = sup |(T*f, u)| = sup |(f, Tu)| = ‖T‖,

where u ∈ X, ‖u‖ = 1, and f ∈ Y*, ‖f‖ = 1.

1.5 The eigenvalue problem

Definition 9. Let T ∈ L(X). A complex number λ is called an eigenvalue of T if there is a non-zero vector u ∈ X such that

    Tu = λu.    (43)

u is called an eigenvector of T belonging to the eigenvalue λ. The set N_λ of all u ∈ X such that Tu = λu is a subspace of X, called the eigenspace of T for the eigenvalue λ, and dim N_λ is called the multiplicity of λ.

Example 14. λ is an eigenvalue of T if and only if λ − ξ is an eigenvalue of T − ξ.

Proposition 2. Eigenvectors of T belonging to different eigenvalues are linearly independent.

Proof. We will use induction. We shall first show that any two eigenvectors of T belonging to different
eigenvalues are linearly independent; assuming the claim is true for k eigenvectors, we shall then prove it for k + 1 eigenvectors.

Let Tu_1 = λ_1 u_1, Tu_2 = λ_2 u_2 with λ_1 ≠ λ_2, and suppose α_1 u_1 + α_2 u_2 = 0. Applying T gives α_1 λ_1 u_1 + α_2 λ_2 u_2 = 0. Multiplying the first equation by λ_1 and subtracting it from the second, we obtain (λ_2 − λ_1)α_2 u_2 = 0. Hence α_2 u_2 = 0, but u_2 ≠ 0, so α_2 = 0. Now α_1 u_1 = 0, which implies α_1 = 0 since u_1 ≠ 0. Hence α_1 = α_2 = 0, that is, u_1, u_2 are linearly independent.

Now assume Tu_1 = λ_1 u_1, …, Tu_k = λ_k u_k, λ_i ≠ λ_j for i ≠ j, and u_1, …, u_k linearly independent. We shall show that u_1, …, u_k, u_{k+1} are linearly independent, where Tu_{k+1} = λ_{k+1} u_{k+1}, u_{k+1} ≠ 0 and λ_{k+1} ≠ λ_i, i = 1, …, k. We have two cases: λ_{k+1} = 0 and λ_{k+1} ≠ 0.

If λ_{k+1} = 0, then λ_i ≠ 0 for i = 1, …, k. If u_1, …, u_k, u_{k+1} were linearly dependent, we could write, say,

    u_1 = α_2 u_2 + … + α_k u_k + α_{k+1} u_{k+1},

and then

    λ_1 u_1 = Tu_1 = α_2 λ_2 u_2 + … + α_k λ_k u_k,

so u_1, …, u_k would be linearly dependent, contradicting our assumption.

If λ_{k+1} ≠ 0, suppose that

    u_{k+1} = α_1 u_1 + … + α_k u_k

with some α_i ≠ 0, say α_1 ≠ 0. Then

    λ_{k+1} u_{k+1} = Tu_{k+1} = α_1 λ_1 u_1 + … + α_k λ_k u_k = λ_{k+1} ((α_1 λ_1 / λ_{k+1}) u_1 + … + (α_k λ_k / λ_{k+1}) u_k).

Comparing the two expressions for u_{k+1} and using the linear independence of u_1, …, u_k, we obtain λ_i = λ_{k+1} for every i with α_i ≠ 0, in particular λ_1 = λ_{k+1}, which is again a contradiction. ∎

It follows from this proposition that there are at most N eigenvalues of T, where N is the dimension of X.
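For a concrete instance of Definition 9, the eigenvalues of a 2×2 matrix can be read off from its characteristic polynomial λ² − (tr T)λ + det T = 0. The sketch below (pure Python, matrix and names ours) also checks that eigenvectors for the two distinct eigenvalues are linearly independent, as Proposition 2 guarantees.

```python
# Sketch: eigenvalues of a 2x2 matrix T from the characteristic polynomial
# lambda^2 - (tr T) lambda + det T = 0, plus an independence check.
import cmath

def eig2(t):
    tr = t[0][0] + t[1][1]
    det = t[0][0] * t[1][1] - t[0][1] * t[1][0]
    d = cmath.sqrt(tr * tr - 4 * det)
    return (tr + d) / 2, (tr - d) / 2

def apply(t, u):
    return [t[0][0] * u[0] + t[0][1] * u[1], t[1][0] * u[0] + t[1][1] * u[1]]

T = [[2.0, 1.0], [0.0, 3.0]]
l1, l2 = eig2(T)                    # eigenvalues 3 and 2
u1, u2 = [1.0, 1.0], [1.0, 0.0]     # T u1 = 3 u1, T u2 = 2 u2
assert apply(T, u1) == [l1 * x for x in u1]
assert apply(T, u2) == [l2 * x for x in u2]
# Proposition 2: eigenvectors for distinct eigenvalues are independent,
# i.e. the determinant of the matrix with columns u1, u2 is nonzero.
assert u1[0] * u2[1] - u1[1] * u2[0] != 0
```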
Proposition 3. lim_{n→∞} ‖T^n‖^{1/n} exists and is equal to inf_{n=1,2,…} ‖T^n‖^{1/n}.

Proof. It follows from (41) that

    ‖T^{m+n}‖ ≤ ‖T^m‖ ‖T^n‖,    ‖T^n‖ ≤ ‖T‖^n,    m, n = 0, 1, ….    (44)

Set a_n = log ‖T^n‖. What is to be proved is that

    lim_{n→∞} a_n/n = inf_{n=1,2,…} a_n/n.    (45)

The first inequality of (44) gives

    a_{m+n} ≤ a_m + a_n.

Fix m and write n = mq + r, where q, r are nonnegative integers with 0 ≤ r < m. The last inequality gives a_n ≤ a_{mq} + a_r, and since a_{mq} = log ‖T^{mq}‖ ≤ log ‖T^m‖^q = q a_m by (44), we get

    a_n ≤ q a_m + a_r,

hence

    a_n/n ≤ (q/n) a_m + a_r/n.

As n → ∞ (with m fixed), q/n → 1/m, and a_r/n → 0 since r takes only the finitely many values 0, …, m − 1. Therefore

    limsup_{n→∞} a_n/n ≤ a_m/m.

Since this holds for every fixed m,

    limsup_{n→∞} a_n/n ≤ inf_m a_m/m.

On the other hand, obviously a_n/n ≥ inf_m a_m/m for every n, so

    liminf_{n→∞} a_n/n ≥ inf_m a_m/m.

Hence the limit exists and

    lim_{n→∞} a_n/n = inf_{n=1,2,…} a_n/n. ∎
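Proposition 3 can be watched numerically: the quantities ‖T^n‖^{1/n} settle down toward their infimum, which for the matrix below is the largest eigenvalue modulus, 3. The sketch uses the maximum-row-sum norm (the operator norm induced by the max norm of Example 3); the matrix and names are ours.

```python
# Sketch: the sequence ||T^n||^(1/n) of Proposition 3, computed with the
# maximum-row-sum operator norm. T has eigenvalues 3 and 2, and the
# values approach spr T = 3.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def opnorm(a):
    return max(abs(a[i][0]) + abs(a[i][1]) for i in range(2))

T = [[3.0, 1.0], [0.0, 2.0]]
P = [[1.0, 0.0], [0.0, 1.0]]
vals = []
for n in range(1, 31):
    P = matmul(P, T)                 # P = T^n
    vals.append(opnorm(P) ** (1.0 / n))

assert vals[0] == 4.0                # ||T|| itself
assert vals[-1] < vals[0]            # decreasing toward the infimum
assert abs(vals[-1] - 3.0) < 0.1     # close to spr T = 3
```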
Now we define:

Definition 10. (Spectral radius of T)

    spr T = lim_{n→∞} ‖T^n‖^{1/n} = inf_{n=1,2,…} ‖T^n‖^{1/n}.

1.6 The resolvent

Let T ∈ L(X) and consider the equation

    (T − ξ)u = v,

where ξ is a given complex number, v ∈ X is given and u ∈ X is to be found. This equation has a solution u for every v if and only if T − ξ is nonsingular, that is, if and only if ξ is different from every eigenvalue λ_h of T. Then the inverse (T − ξ)^{-1} exists and the solution u is given by

    u = (T − ξ)^{-1} v.

The operator

    R(ξ) = R(ξ, T) = (T − ξ)^{-1}    (46)

is called the resolvent of T. The set Σ(T) of all eigenvalues of T is called the spectrum of T; the complementary set of the spectrum Σ(T) is called the resolvent set of T and will be denoted by P(T). The resolvent R(ξ) is thus defined for ξ ∈ P(T).

Example 15. R(ξ) commutes with T, and R(ξ) has exactly the eigenvalues (λ_h − ξ)^{-1}, where the λ_h are the eigenvalues of T.

We show first that R(ξ) commutes with T:

    T = T·1 = T(T − ξ)(T − ξ)^{-1} = (T − ξ)T(T − ξ)^{-1}.

Hence

    (T − ξ)^{-1}T = (T − ξ)^{-1}(T − ξ)T(T − ξ)^{-1} = T(T − ξ)^{-1}.

Now we show that if λ is an eigenvalue of T, that is Tu = λu with u ≠ 0, then (λ − ξ)^{-1} is an eigenvalue of R(ξ). Clearly we have

    (T − ξ)u = (λ − ξ)u,
or equivalently

    (λ − ξ)^{-1}(T − ξ)u = u.

Then

    (T − ξ)^{-1}((λ − ξ)^{-1}(T − ξ)u) = (T − ξ)^{-1}u,

i.e.

    (λ − ξ)^{-1}((T − ξ)^{-1}(T − ξ)u) = (T − ξ)^{-1}u.

Hence

    (λ − ξ)^{-1}u = (T − ξ)^{-1}u = R(ξ)u.

Note that the resolvent satisfies the (first) resolvent equation

    R(ξ_1) − R(ξ_2) = (ξ_1 − ξ_2) R(ξ_1) R(ξ_2),    (47)

since

    (T − ξ_1)^{-1} − (T − ξ_2)^{-1} = (T − ξ_1)^{-1}(T − ξ_2)^{-1} (T − ξ_2)(T − ξ_1) [(T − ξ_1)^{-1} − (T − ξ_2)^{-1}]
        = (T − ξ_1)^{-1}(T − ξ_2)^{-1} [(T − ξ_2) − (T − ξ_1)]
        = (ξ_1 − ξ_2)(T − ξ_1)^{-1}(T − ξ_2)^{-1}
        = (ξ_1 − ξ_2) R(ξ_1) R(ξ_2).

Here we have used the identity (T − ξ_2)(T − ξ_1) = (T − ξ_1)(T − ξ_2).

We shall show that for each ξ_0 ∈ P(T), R(ξ) is holomorphic in some disk around ξ_0.

Proposition 4. The series

    R(ξ) = Σ_{n=0}^∞ (ξ − ξ_0)^n R(ξ_0)^{n+1}

is absolutely convergent for |ξ − ξ_0| < (spr R(ξ_0))^{-1}, where ξ_0 ∈ P(T) is given.

To prove this we have to study the following lemmas.

Lemma 1. (Neumann series) The series Σ_{n=0}^∞ T^n is absolutely convergent if ‖T‖ < 1. Moreover,

    (1 − T)^{-1} = Σ_{n=0}^∞ T^n,    and    ‖(1 − T)^{-1}‖ ≤ (1 − ‖T‖)^{-1},    (48)

where T ∈ L(X).
Proof. The series is absolutely convergent because ‖T^n‖ ≤ ‖T‖^n and ‖T‖ < 1. Set S = Σ_{n=0}^∞ T^n. Then

    TS = Σ_{n=0}^∞ T^{n+1} = Σ_{n=1}^∞ T^n = S − 1,

so that TS = ST = S − 1. Hence (1 − T)S = S(1 − T) = 1 and S = (1 − T)^{-1}. Now we have

    ‖(1 − T)^{-1}‖ = ‖Σ T^n‖ ≤ Σ ‖T^n‖ ≤ Σ ‖T‖^n = (1 − ‖T‖)^{-1}. ∎

Lemma 2. The series (48) is absolutely convergent if ‖T^m‖ < 1 for some positive integer m or, equivalently, if spr T < 1, and the sum is again equal to (1 − T)^{-1} (see the proof of Proposition 3).

Proof. Since spr T = inf_{n=1,2,…} ‖T^n‖^{1/n} = lim_{n→∞} ‖T^n‖^{1/n} and ‖T^m‖^{1/m} < 1, it follows that lim_{n→∞} ‖T^n‖^{1/n} < 1, and the series is absolutely convergent by comparison with a convergent geometric series. The proof that the sum equals (1 − T)^{-1} is the same as above. ∎

Lemma 3.

    S(t) = (1 − tT)^{-1} = Σ_{n=0}^∞ t^n T^n,    (49)

where t is a complex number. The convergence radius r of (49) is exactly equal to 1/spr T.

Proof. By Lemma 2, (49) holds if spr(tT) < 1, i.e. if |t| < 1/spr T, so r ≥ 1/spr T. If |t| > 1/spr T, then spr(tT) > 1, so lim_{n→∞} ‖t^n T^n‖^{1/n} > 1 and the series diverges. Hence r = 1/spr T. ∎
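Lemma 1 can be tried out numerically: for a matrix with ‖T‖ < 1, the partial sums of Σ T^n converge to (1 − T)^{-1}. A pure-Python 2×2 sketch (the sample matrix and helpers are ours):

```python
# Sketch: partial sums of the Neumann series sum_{n>=0} T^n against the
# directly computed inverse (1 - T)^{-1}, for ||T|| = 0.5 < 1.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(a):
    d = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / d, -a[0][1] / d], [-a[1][0] / d, a[0][0] / d]]

T = [[0.0, 0.5], [0.25, 0.0]]        # max-row-sum norm 0.5 < 1
S = [[0.0, 0.0], [0.0, 0.0]]         # running partial sum
P = [[1.0, 0.0], [0.0, 1.0]]         # current power T^n, starting at T^0 = 1
for _ in range(60):
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    P = matmul(P, T)

inv = inv2([[1.0, -0.5], [-0.25, 1.0]])   # (1 - T)^{-1} computed directly
assert all(abs(S[i][j] - inv[i][j]) < 1e-10
           for i in range(2) for j in range(2))
```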
Now we can complete the proof of Proposition 4. By (47), with ξ_1 = ξ and ξ_2 = ξ_0, we have

    R(ξ) = R(ξ_0)(1 − (ξ − ξ_0)R(ξ_0))^{-1}.

Applying Lemma 3 with tT replaced by (ξ − ξ_0)R(ξ_0), we obtain

    R(ξ) = Σ_{n=0}^∞ (ξ − ξ_0)^n R(ξ_0)^{n+1},

absolutely convergent for |ξ − ξ_0| < (spr R(ξ_0))^{-1}. ∎

By Proposition 4 we obtain:

Proposition 5. R(ξ) is holomorphic at each ξ_0 ∈ P(T), in the disk |ξ − ξ_0| < (spr R(ξ_0))^{-1}.

Proposition 6. R(ξ) is holomorphic at ∞.

Proof. R(ξ) has the expansion

    R(ξ) = −ξ^{-1}(1 − ξ^{-1}T)^{-1} = −Σ_{n=0}^∞ ξ^{-n-1} T^n,    (50)

which is convergent if and only if |ξ| > spr T; thus R(ξ) is holomorphic at infinity. ∎

Example 16. ‖R(ξ)‖ ≤ (|ξ| − ‖T‖)^{-1} and ‖R(ξ) + ξ^{-1}‖ ≤ |ξ|^{-1}(|ξ| − ‖T‖)^{-1}‖T‖, for |ξ| > ‖T‖.

Proof. It follows from (50) that

    ‖R(ξ)‖ = ‖Σ_{n=0}^∞ ξ^{-n-1} T^n‖ ≤ Σ_{n=0}^∞ |ξ|^{-n-1} ‖T‖^n = |ξ|^{-1}(1 − |ξ|^{-1}‖T‖)^{-1} = (|ξ| − ‖T‖)^{-1},
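The first resolvent equation (47) is also easy to check numerically for a small matrix. In this sketch (2×2, complex arithmetic, helper names ours), R(ξ) = (T − ξ)^{-1} and ξ_1, ξ_2 are chosen away from the eigenvalues 2 and 3:

```python
# Sketch: numerical check of the resolvent equation (47),
# R(xi1) - R(xi2) = (xi1 - xi2) R(xi1) R(xi2), with R(xi) = (T - xi)^{-1}.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(a):
    d = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / d, -a[0][1] / d], [-a[1][0] / d, a[0][0] / d]]

def resolvent(t, xi):
    return inv2([[t[0][0] - xi, t[0][1]], [t[1][0], t[1][1] - xi]])

T = [[2.0, 1.0], [0.0, 3.0]]         # eigenvalues 2 and 3
x1, x2 = 1.0 + 1.0j, -2.0 + 0.0j     # both in the resolvent set P(T)
R1, R2 = resolvent(T, x1), resolvent(T, x2)
prod = matmul(R1, R2)
assert all(abs((R1[i][j] - R2[i][j]) - (x1 - x2) * prod[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```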
and

    ‖R(ξ) + ξ^{-1}‖ = ‖−Σ_{n=0}^∞ ξ^{-n-1} T^n + ξ^{-1}‖ = ‖Σ_{n=1}^∞ ξ^{-n-1} T^n‖
        ≤ Σ_{n=1}^∞ |ξ|^{-n-1} ‖T‖^n
        = |ξ|^{-1} [(1 − |ξ|^{-1}‖T‖)^{-1} − 1]
        = |ξ|^{-1} ‖T‖ / (|ξ| − ‖T‖)
        = |ξ|^{-1} (|ξ| − ‖T‖)^{-1} ‖T‖. ∎

The spectrum Σ(T) is never empty: T has at least one eigenvalue. Otherwise R(ξ) would be an entire function such that R(ξ) → 0 as ξ → ∞, and then we would have R(ξ) ≡ 0 by Liouville's theorem (see [3]). But this results in the contradiction 1 = (T − ξ)R(ξ) = 0.

We can see that each eigenvalue of T is a singularity of the analytic function R(ξ). Since there is at least one singularity of R(ξ) on the circle of convergence |ξ| = spr T of (50), spr T coincides with the largest (in absolute value) eigenvalue of T:

    spr T = max_h |λ_h|.    (51)

This shows that spr T is independent of the norm used in its definition.

2 Operators in unitary spaces

2.1 Unitary spaces

A normed space X is a special case of a linear metric space, in which the distance between any two points u, v ∈ X is defined by ‖u − v‖.
Definition 11. (The complex inner product) Let u, v ∈ X, and let (u, v) be a complex number. We say that the function ( , ) is a complex inner product if it satisfies:

(i) (αu_1 + βu_2, v) = α(u_1, v) + β(u_2, v);
(ii) (u, v) = conj(v, u);
(iii) (u, u) > 0 if u ≠ 0.

From condition (ii) we can obtain (u, kv) = k̄(u, v), since

    (u, kv) = conj(kv, u) = conj(k(v, u)) = k̄ conj(v, u) = k̄(u, v).

Definition 12. A vector space H is called a unitary space if an inner product (u, v) is defined for all vectors u, v ∈ H; it then becomes a normed space with the norm of Definition 13.

Definition 13. In a unitary space the function

    ‖u‖ = (u, u)^{1/2}    (52)

is a norm, called the unitary norm.

We shall verify the conditions of Definition 2:

– The first condition follows directly from the definition of the inner product.

– ‖αu‖ = (αu, αu)^{1/2} = [α(u, αu)]^{1/2} = [αᾱ(u, u)]^{1/2} = |α| ‖u‖.

– ‖u + v‖ = (u + v, u + v)^{1/2} = [(u, u + v) + (v, u + v)]^{1/2} = [(u, u) + (u, v) + (v, u) + (v, v)]^{1/2} ≤ [(u, u) + |(u, v)| + |(v, u)| + (v, v)]^{1/2}. By the Schwarz inequality

    |(u, v)| ≤ ‖u‖ ‖v‖,    (53)

hence

    ‖u + v‖ ≤ [‖u‖² + |(u, v)| + |(v, u)| + ‖v‖²]^{1/2} ≤ [‖u‖² + 2‖u‖‖v‖ + ‖v‖²]^{1/2} = ‖u‖ + ‖v‖.

Example 17. For numerical vectors u = (ξ_1, …, ξ_N) and v = (η_1, …, η_N), set

    (u, v) = Σ_j ξ_j η̄_j,    ‖u‖² = Σ_j |ξ_j|².

With this inner product the space C^N becomes a unitary space.
Remark 2. A characteristic property of a unitary space H is that the adjoint space H* can be identified with H itself. To see this, let f ∈ H be fixed; the form u ↦ (f, u) given by the inner product is linear in f and semilinear in u, as in (15) and (16). Hence each f ∈ H determines an element of H*, and since dim H* = dim H and the map f ↦ (f, ·) is one-to-one, every element of H* arises in this way. Thus f can be considered as a vector in H or as a vector in H*, and H and H* can be identified.

Definition 14. (Orthogonality) If (u, v) = 0, we write u ⊥ v and say that u, v are mutually orthogonal. If S, S' are subsets of H, we say u ⊥ S if u ⊥ v for all v ∈ S (where u ∈ H), and S ⊥ S' if u ⊥ v for all u ∈ S, v ∈ S'. The set of all u ∈ H such that u ⊥ S is denoted by S^⊥.

Example 18. u ⊥ S implies u ⊥ M, where M is the span of S.

Proof. Let v ∈ M. Then there are v_1, …, v_k ∈ S and α_1, …, α_k ∈ C such that v = α_1 v_1 + … + α_k v_k. So

    (u, v) = (u, α_1 v_1 + … + α_k v_k) = ᾱ_1(u, v_1) + … + ᾱ_k(u, v_k) = ᾱ_1 · 0 + … + ᾱ_k · 0 = 0,

thus u ⊥ M. ∎

Let dim H = N. If x_1, …, x_N ∈ H have the property

    (x_j, x_k) = δ_jk,    (54)

then they form a basis of H, called an orthonormal basis. Indeed, α_1 x_1 + … + α_N x_N = 0 implies (α_1 x_1 + … + α_N x_N, x_j) = 0, hence α_j (x_j, x_j) = α_j = 0 for all j = 1, …, N, showing that x_1, …, x_N are linearly independent.
2.2 Symmetric operators

Definition 15. (Sesquilinear form) Let H, H' be two unitary spaces. A complex-valued function t[u, u'] defined for u ∈ H and u' ∈ H' is called a sesquilinear form on H × H' if it is linear in u and semilinear in u'. If H' = H we speak of a sesquilinear form on H.

Let T be a linear operator on H to H'. The function

    t[u, u'] = (Tu, u')    (55)

is a sesquilinear form on H × H'. Conversely, an arbitrary sesquilinear form t[u, u'] on H × H' can be expressed in this form by a suitable choice of an operator T on H to H'. Since t[u, u'] is a semilinear form in u' for a fixed u, there exists (by Remark 2 applied to H') a unique w ∈ H' such that t[u, u'] = (w, u') for all u' ∈ H'. Since w is determined by u, we define a function T by setting w = Tu; T is a linear operator on H to H'. In the same way, t[u, u'] can also be expressed in the form

    t[u, u'] = (u, T*u').    (56)

Since H*, H'* can be identified with H, H' respectively, T* can be considered as the adjoint of T, acting on H' to H.

T*T is a linear operator on H to itself. The relation

    (u, T*Tv) = (T*Tu, v) = (Tu, Tv)    (57)

shows that T*T is the operator associated with the sesquilinear form (Tu, Tv) on H. Note that the first two members of (57) are inner products in H while the last is an inner product in H'. It follows from (57) and (40) that

    ‖T*T‖ = sup_{u,v ≠ 0} |(Tu, Tv)| / (‖u‖ ‖v‖) ≥ sup_{u ≠ 0} ‖Tu‖² / ‖u‖² = ‖T‖².

By (41) and (42) we have ‖T*T‖ ≤ ‖T*‖ ‖T‖ = ‖T‖². Hence

    ‖T*T‖ = ‖T‖².    (58)

Example 19. If T is an operator on H to itself, then (Tu, u) = 0 for all u implies T = 0.
Proof. We have (T(u+v), u+v) = 0. On the other hand,

    (T(u+v), u+v) = (Tu, u) + (Tu, v) + (Tv, u) + (Tv, v) = (Tu, v) + (Tv, u),

so (Tu, v) + (Tv, u) = 0 for all u, v. Since v is an arbitrary vector in H, we may replace v by iv:

    (Tu, iv) + (T(iv), u) = −i(Tu, v) + i(Tv, u) = 0,

that is, −(Tu, v) + (Tv, u) = 0. Combining this equality with (Tu, v) + (Tv, u) = 0 yields (Tv, u) = 0 for all u, v. Hence T = 0. ∎

Remark 3. This property is not true when T is defined on a real space. To show this we can take T represented by the matrix

    (τ_jk) = (  0  1
               −1  0 );

then (Tu, u) = 0 for all u, but T ≠ 0.

Definition 16. A sesquilinear form t[u, v] (or t in short) on a unitary space H is said to be symmetric if

    t[v, u] = conj(t[u, v])    for all u, v ∈ H.    (59)

If t is symmetric, t[u, u] is real-valued and is denoted by t[u]. t is nonnegative if t[u] ≥ 0 for all u, and positive if t[u] > 0 for all u ≠ 0.

The operator T associated with a symmetric form t[u, v] according to (55), (56) has the property that

    T* = T.    (60)

Indeed, by (55) and (56), t[v, u] = (Tv, u) = (v, T*u), while by symmetry t[v, u] = conj(t[u, v]) = conj(Tu, v) = (v, Tu). Hence (v, T*u) = (v, Tu) for all u, v, and T* = T.

Definition 17. An operator T on H to itself satisfying (60) is said to be symmetric.

(Tu, u) is real for all u ∈ H if and only if T is symmetric. Indeed, if (Tu, u) is real for all u, then (Tu, u) = conj(Tu, u) = (u, Tu) = (T*u, u), hence ((T − T*)u, u) = 0 for all u, and by Example 19 T* = T. Conversely, if T* = T, then conj(Tu, u) = (u, Tu) = (T*u, u) = (Tu, u), thus (Tu, u) is real.
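Remark 3's counterexample can be verified directly: over the reals, the matrix in the remark satisfies (Tu, u) = 0 for every u even though T ≠ 0, so the complex scalars in Example 19 are essential. A small sketch:

```python
# Sketch of Remark 3: over R, T = [[0, 1], [-1, 0]] satisfies (Tu, u) = 0
# for every real u although T != 0.
import random

def apply(t, u):
    return (t[0][0] * u[0] + t[0][1] * u[1], t[1][0] * u[0] + t[1][1] * u[1])

T = ((0.0, 1.0), (-1.0, 0.0))
random.seed(1)
for _ in range(100):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    Tu = apply(T, u)
    assert abs(Tu[0] * u[0] + Tu[1] * u[1]) < 1e-12   # (Tu, u) = 0
assert any(x != 0 for row in T for x in row)          # yet T != 0
```

Here (Tu, u) = u_2 u_1 − u_1 u_2 vanishes identically, which is exactly why the argument of Example 19 needs the substitution v ↦ iv.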
A symmetric operator T is nonnegative (positive) if the associated form is nonnegative (positive), and we write T ≥ 0 to denote that T is nonnegative symmetric. More generally, we write T ≥ S or S ≤ T if S, T are symmetric operators such that T − S ≥ 0.

Example 20. If T is symmetric, P(T) is symmetric for any polynomial P with real coefficients. Since (αTS)* = ᾱS*T*, it follows that (Tⁿ)* = (T*)ⁿ = Tⁿ, thus

(α_nTⁿ + ··· + α_0)* = ᾱ_nTⁿ + ··· + ᾱ_0 = α_nTⁿ + ··· + α_0.

Example 21. For any linear operator T on H to H′, T*T and TT* are nonnegative symmetric operators in H and H′, respectively. We have (T*T)* = T*(T*)* = T*T and (T*Tu, u) = (Tu, Tu) ≥ 0; similarly (TT*)* = (T*)*T* = TT* and (TT*u′, u′) = (T*u′, T*u′) ≥ 0.

Example 22. If T is symmetric, then T² ≥ 0, and T² = 0 if and only if T = 0. Indeed, (T²u, u) = (Tu, T*u) = (Tu, Tu) ≥ 0; and if T² = 0, then (T²u, u) = (Tu, Tu) = ‖Tu‖² = 0 for all u, which is equivalent to T = 0.

Example 23. R ≤ S and S ≤ T imply R ≤ T; S ≤ T and S ≥ T imply S = T. Indeed, R ≤ S and S ≤ T is equivalent to S − R ≥ 0 and T − S ≥ 0, therefore ((S − R)u, u) ≥ 0 and ((T − S)u, u) ≥ 0, hence (Su, u) − (Ru, u) ≥ 0 and (Tu, u) − (Su, u) ≥ 0. Adding the last two inequalities, we obtain (Tu, u) − (Ru, u) ≥ 0. Hence ((T − R)u, u) ≥ 0, that is, T − R ≥ 0. Next, S ≤ T and S ≥ T is equivalent to T − S ≥ 0 and S − T ≥ 0, and this implies that ((S − T)u, u) = 0, that is, (Su, u) − (Tu, u) = 0 for all u ∈ H. Thus S = T by Example 19.
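Examples 21–22 and the norm identity (58) can be verified numerically for matrices. A minimal NumPy sketch (the matrix T is an arbitrary complex example; ‖·‖ is the operator norm, computed as the largest singular value):

```python
import numpy as np

rng = np.random.default_rng(1)
# An arbitrary complex matrix standing in for T in L(H, H').
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

A = T.conj().T @ T            # T*T

# Example 21: T*T is symmetric (A* = A) and nonnegative.
assert np.allclose(A.conj().T, A)
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)

# (58): ||T*T|| = ||T||^2 in the operator (spectral) norm.
assert np.isclose(np.linalg.norm(A, 2), np.linalg.norm(T, 2) ** 2)
```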
2.3 Unitary, isometric and normal operators

Definition 18. Let H and H′ be unitary spaces. An operator T on H to H′ is said to be isometric if

‖Tu‖ = ‖u‖ for every u ∈ H. (61)

This is equivalent to (T*Tu, u) = (Tu, Tu) = (u, u) for all u, and thus to

T*T = 1. (62)

This implies that

(Tu, Tv) = (u, v) for every u, v ∈ H. (63)

Definition 19. An isometric operator T is said to be unitary if the range of T is the whole space H′.

Example 24. T ∈ L(H, H′) is unitary if and only if T⁻¹ ∈ L(H′, H) exists and

T⁻¹ = T*. (64)

If T is unitary, ‖Tu‖ = ‖u‖ implies that the mapping T is one-to-one, and since its range is the whole of H′, T⁻¹ exists. Since T*T = 1, T⁻¹ = T*. Conversely, if T⁻¹ exists and T⁻¹ = T*, then T*T = 1, hence T is unitary.

Example 25. T is unitary if and only if T* is. We have (T⁻¹)* = (T*)⁻¹, and by Example 24 T being unitary is equivalent to (T*)⁻¹ = (T*)*, which by Example 24 again is equivalent to T* being unitary.

Example 26. If T ∈ L(H′, H′′) and S ∈ L(H, H′) are isometric, then TS ∈ L(H, H′′) is isometric. The same is true if "isometric" is replaced by "unitary". We have S*S = 1 and T*T = 1, therefore (TS)*TS = S*(T*T)S = S*S = 1; and if the range of S is the whole space H′ and the range of T is the whole space H′′, then the range of TS is the whole of H′′.

Definition 20. T ∈ L(H) is said to be normal if T and T* commute, i.e.

T*T = TT*. (65)
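Definitions 18–20 and Example 24 translate directly into matrix checks. A small NumPy sketch (the unitary matrix Q is produced by a QR factorization of an arbitrary complex matrix, an assumption made only to get a concrete example):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Q, _ = np.linalg.qr(X)        # a unitary matrix: square with orthonormal columns

I = np.eye(3)
assert np.allclose(Q.conj().T @ Q, I)               # (62): T*T = 1
assert np.allclose(np.linalg.inv(Q), Q.conj().T)    # (64): T^{-1} = T*
assert np.allclose(Q @ Q.conj().T, Q.conj().T @ Q)  # (65): Q is normal

# (63): the inner product is preserved.
u, v = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(np.vdot(v, u), np.vdot(Q @ v, Q @ u))
```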
Symmetric operators and unitary operators on a unitary space into itself are special cases of normal operators.

Example 27. Let T_1, T_2, T_3 be operators represented, for instance, by the matrices

T_1 = ( 1   i )     T_2 = ( 0 1 0 )     T_3 = ( i 0 )
      ( −i  1 ),          ( 0 0 1 )           ( 0 0 ),
                          ( 1 0 0 ),

respectively. Then T_1 is normal and symmetric but not unitary, T_2 is normal and unitary but not symmetric, and T_3 is normal but neither symmetric nor unitary.

An important property of a normal operator T is that

‖Tⁿ‖ = ‖T‖ⁿ, n = 1, 2, .... (66)

This implies that

spr T = ‖T‖. (67)

We shall prove (66). If T is symmetric, by (58) ‖T²‖ = ‖T‖². Since T² is symmetric we have ‖T⁴‖ = ‖T²‖² = ‖T‖⁴. Proceeding in this manner we obtain ‖Tⁿ‖ = ‖T‖ⁿ for n = 2^m, m = 1, 2, .... If T is normal but not necessarily symmetric, again by (58) we have ‖Tⁿ‖² = ‖(Tⁿ)*Tⁿ‖. Since (Tⁿ)*Tⁿ = (T*T)ⁿ because T is normal, and T*T is symmetric, we obtain ‖Tⁿ‖² = ‖(T*T)ⁿ‖ = ‖T*T‖ⁿ = ‖T‖²ⁿ for n = 2^m. Now if 2^m − n = r ≥ 0, then

‖T‖^(n+r) = ‖T^(n+r)‖ = ‖TⁿTʳ‖ ≤ ‖Tⁿ‖ ‖Tʳ‖ ≤ ‖Tⁿ‖ ‖T‖ʳ.

Thus ‖T‖ⁿ ≤ ‖Tⁿ‖. The opposite inequality is obvious. This proves (66).

Example 28. If T is normal, then:
(i) Tⁿ = 0 for some integer n implies T = 0.
(ii) If T is nonsingular, then T⁻¹ is normal.

(i) Tⁿ = 0 implies ‖T‖ⁿ = ‖Tⁿ‖ = 0, hence ‖T‖ = 0, that is, T = 0.
(ii) We have T*T = TT*, therefore (T*T)⁻¹ = (TT*)⁻¹, that is, T⁻¹(T*)⁻¹ = (T*)⁻¹T⁻¹. Since (T*)⁻¹ = (T⁻¹)*, this gives T⁻¹(T⁻¹)* = (T⁻¹)*T⁻¹.

2.4 Orthogonal projections

Definition 21. Let M be a subspace of H and H = M ⊕ M^⊥. The projection operator P = P_M on M along M^⊥ is called the orthogonal projection on M.
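In coordinates, the orthogonal projection of Definition 21 onto a column span M can be written down explicitly. A minimal NumPy sketch (the spanning matrix A is an arbitrary full-rank example, and the formula P = A(AᵀA)⁻¹Aᵀ is the standard construction, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2))               # M = column span of A in H = R^4
P = A @ np.linalg.inv(A.T @ A) @ A.T          # orthogonal projection on M

assert np.allclose(P, P.T)                      # symmetric: P* = P
assert np.allclose(P @ P, P)                    # idempotent: P^2 = P
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)  # nonnegative

# Pu lies in M, and u - Pu is orthogonal to M.
u = rng.standard_normal(4)
assert np.allclose(A.T @ (u - P @ u), 0)
```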
P is a symmetric, nonnegative and idempotent operator, for

(Pu, u) = (u′, u′ + u′′) = (u′, u′) ≥ 0, (68)

where

u = u′ + u′′, u ∈ H, u′ ∈ M, u′′ ∈ M^⊥, (69)

and u′ ⊥ u′′. (68) shows also that (Pu, u) is real; hence P is symmetric, i.e. P* = P. Idempotence follows from P being a projection.

Example 29. If P is an orthogonal projection, then 1 − P is symmetric, nonnegative and idempotent, and we have 0 ≤ P ≤ 1 and ‖P‖ = 1 if P ≠ 0. Indeed, (1 − P)* = 1 − P* = 1 − P, ((1 − P)u, u) = (u − u′, u) = (u′′, u′ + u′′) = (u′′, u′′) ≥ 0, and (1 − P)² = 1 − 2P + P² = 1 − 2P + P = 1 − P. Now ((1 − P)u, u) ≥ 0 gives 1 − P ≥ 0, and (Pu, u) ≥ 0 gives P ≥ 0; hence 0 ≤ P ≤ 1. Since P² = P and P is symmetric, ‖P‖ = ‖P²‖ = ‖P‖² by (58), or equivalently ‖P‖(1 − ‖P‖) = 0. But ‖P‖ ≠ 0 for P ≠ 0, therefore ‖P‖ = 1.

Example 30. ‖(1 − P_M)u‖ = dist(u, M) for u ∈ H, where dist(u, M) = inf_{v ∈ M} ‖u − v‖. Since

‖u − v‖² = ‖(u′ − v) + u′′‖² = ‖u′ − v‖² + ‖u′′‖²,

the infimum is attained at v = u′ and is equal to ‖u′′‖ = ‖(1 − P_M)u‖.

Example 31. 1. M ⊥ N if and only if P_M P_N = 0. 2. The following three conditions are equivalent: (i) M ⊃ N, (ii) P_M ≥ P_N, (iii) P_M P_N = P_N.

Proof. (1) Suppose M ⊥ N. Then for each u ∈ H, 0 = (P_M u, P_N u) = (u, P_M P_N u), and hence P_M P_N = 0 by Example 19. Conversely, if P_M P_N = 0, then P_M u = P_M P_N u = 0 for all u ∈ N. Thus M ⊥ N.

(2) Now we show that (i) ⟺ (ii) and (i) ⟺ (iii). If M ⊃ N, then H = N ⊕ N_1 ⊕ M^⊥, where N_1 is the orthogonal complement of N in M. Let

u = u′ + u′′ + u′′′, u′ ∈ N, u′′ ∈ N_1, u′′′ ∈ M^⊥.
(i) ⟹ (ii): ((P_M − P_N)u, u) = (P_M u, u) − (P_N u, u) = (u′ + u′′, u) − (u′, u) = (u′′, u) = (u′′, u′′) ≥ 0.

(ii) ⟹ (i): Let u ∈ N. Then (P_M u, u) ≥ (P_N u, u) = (u, u), hence ((I − P_M)u, u) ≤ 0. Since I − P_M is the orthogonal projection on M^⊥, I − P_M ≥ 0, so ((I − P_M)u, u) = 0, which means that u ∈ M.

(i) ⟹ (iii): Since P_N u ∈ N ⊂ M, we have P_M P_N u = P_N u.

(iii) ⟹ (i): Let u ∈ N. Then P_M P_N u = P_N u by (iii), and P_N u = u. So P_M u = u, i.e., u ∈ M.

2.5 The eigenvalue problem

Example 32. A symmetric operator has only real eigenvalues. We have proved that T = T* is equivalent to (Tu, u) being real for all u; in particular (Tu, u) = (λu, u) = λ‖u‖² is real for an eigenvector u with eigenvalue λ. Thus λ is real.

Example 33. Each eigenvalue of a unitary operator has absolute value one. A normal operator with this property is unitary.

Since ‖Tu‖ = ‖u‖ for all u ∈ H, Tu = λu implies |λ| ‖u‖ = ‖u‖, that is, |λ| = 1. Now let T be normal with all eigenvalues of absolute value one. T being normal is equivalent to (T*Tu, u) = (TT*u, u) for all u, which is equivalent to (Tu, Tu) = (T*u, T*u). For an eigenvector u with Tu = λu we have T*u = λ̄u (see Example 34), therefore (T*u, T*u) = (λ̄u, λ̄u) = |λ|²(u, u) = (u, u). On the other hand (T*u, T*u) = (TT*u, u); since H has an orthonormal basis of eigenvectors of T, it follows that TT* = 1. Since T ∈ L(H) and is nonsingular, the range of T is the whole space H. Hence T is unitary.

Example 34. If T is normal, then:
1. Tu = 0 if and only if T*u = 0.
2. T − λI is normal.
3. The eigenvalues of T* are the conjugates of the eigenvalues of T.
4. Any two eigenvectors that belong to different eigenvalues are orthogonal.

Proof. 1. We show that (Tu, Tu) = (T*u, T*u): (Tu, Tu) = (u, T*Tu) = (u, TT*u) = (T*u, T*u), so that Tu = 0 is equivalent to T*u = 0.

2. We have to show that T − λI commutes with its adjoint:

(T − λI)(T − λI)* = (T − λI)(T* − λ̄I) = TT* − λ̄T − λT* + λλ̄I = T*T − λ̄T − λT* + λλ̄I = (T* − λ̄I)(T − λI) = (T − λI)*(T − λI),
hence T − λI is normal.

3. If Tu = λu, then (T − λI)u = 0. Since T − λI is normal, by 1 we have (T − λI)*u = 0, that is, (T* − λ̄I)u = 0. Thus T*u = λ̄u. Hence any eigenvector of T is also an eigenvector of T*, and the corresponding eigenvalues are conjugate to each other.

4. Let Tu_1 = λ_1u_1, Tu_2 = λ_2u_2 and λ_1 ≠ λ_2. Then

λ_1(u_1, u_2) = (λ_1u_1, u_2) = (Tu_1, u_2) = (u_1, T*u_2) = (u_1, λ̄_2u_2) = λ_2(u_1, u_2).

Since λ_1 ≠ λ_2, (u_1, u_2) = 0.

2.6 The minimax principle

Let T be a symmetric operator in H. T is diagonalizable (see [2]) and has only real eigenvalues. For a subspace M of H, set

µ[M] = µ[T, M] = min_{0 ≠ u ∈ M} (Tu, u)/‖u‖² = min_{u ∈ M, ‖u‖ = 1} (Tu, u). (70)

Arrange the eigenvalues of T in nondecreasing order µ_1 ≤ µ_2 ≤ ... ≤ µ_N, where each µ_i is repeated according to its multiplicity. The minimax (or maximin) principle is

µ_n = max_{codim M = n−1} µ[M] = max_{codim M ≤ n−1} µ[M]. (71)

This is equivalent to the following two propositions:

µ_n ≥ µ[M] for any M with codim M ≤ n − 1; (72)

µ_n ≤ µ[M_0] for some M_0 with codim M_0 = n − 1. (73)

Let us prove these separately.
Let {ϕ_n} be an orthonormal basis with the property

Tϕ_n = µ_nϕ_n, n = 1, ..., N. (74)

Each u has the expansion

u = Σ_{n=1}^N ξ_nϕ_n, ξ_n = (u, ϕ_n), ‖u‖² = Σ_{n=1}^N |ξ_n|²,

in this basis. Then

Tu = Σ_{n=1}^N ξ_nTϕ_n = Σ_{n=1}^N µ_nξ_nϕ_n, (Tu, u) = Σ_{n=1}^N µ_n|ξ_n|².

Let M be any subspace with codim M ≤ n − 1. The n-dimensional subspace M′ spanned by ϕ_1, ..., ϕ_n contains a nonzero vector u in common with M by (7). This u has the coefficients ξ_{n+1}, ξ_{n+2}, ... equal to zero, so that

(Tu, u) = Σ_{k=1}^N µ_k|ξ_k|² = Σ_{k=1}^n µ_k|ξ_k|² ≤ µ_n Σ_{k=1}^n |ξ_k|² = µ_n‖u‖².

Hence µ[M] = min_{0 ≠ u ∈ M} (Tu, u)/‖u‖² ≤ µ_n. This proves (72).

Let M_0 be the subspace consisting of all vectors orthogonal to ϕ_1, ..., ϕ_{n−1}, so that codim M_0 = n − 1. Each u ∈ M_0 has the coefficients ξ_1, ..., ξ_{n−1} equal to zero. Hence

(Tu, u) = Σ_{k=n}^N µ_k|ξ_k|² ≥ µ_n Σ_{k=n}^N |ξ_k|² = µ_n‖u‖²,

which implies (73).
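Both halves of the proof can be checked numerically for a concrete symmetric matrix. A small NumPy sketch (the matrix and the choice n = 3 are arbitrary examples): `eigh` returns the eigenvalues in nondecreasing order µ_1 ≤ ... ≤ µ_N with orthonormal eigenvectors ϕ_k, and sampling Rayleigh quotients over M_0 = span{ϕ_n, ..., ϕ_N} confirms that their minimum is µ_n, attained at u = ϕ_n.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((5, 5))
T = (X + X.T) / 2                     # a symmetric operator on R^5

mu, Phi = np.linalg.eigh(T)           # mu[0] <= ... <= mu[4]; columns are phi_k
assert np.all(np.isreal(mu))          # Example 32: only real eigenvalues

n = 3                                 # check (73) for this n (1-based, as in the text)
M0 = Phi[:, n - 1:]                   # M0 = span{phi_n, ..., phi_N}, codim n-1

# Rayleigh quotients over M0 never go below mu_n ...
for _ in range(1000):
    u = M0 @ rng.standard_normal(M0.shape[1])
    assert u @ T @ u / (u @ u) >= mu[n - 1] - 1e-10

# ... and the bound is attained at u = phi_n, so mu[M0] = mu_n.
phi_n = Phi[:, n - 1]
assert np.isclose(phi_n @ T @ phi_n, mu[n - 1])
```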
References

[1] T. Kato, Perturbation Theory for Linear Operators, Springer, Berlin.
[2] G. Strang, Linear Algebra and its Applications, Academic Press, New York.
[3] G. Jones, D. Singerman, Complex Functions, University Press, New York.
Διαβάστε περισσότερα= {{D α, D α }, D α }. = [D α, 4iσ µ α α D α µ ] = 4iσ µ α α [Dα, D α ] µ.
PHY 396 T: SUSY Solutions for problem set #1. Problem 2(a): First of all, [D α, D 2 D α D α ] = {D α, D α }D α D α {D α, D α } = {D α, D α }D α + D α {D α, D α } (S.1) = {{D α, D α }, D α }. Second, {D
Διαβάστε περισσότεραOn Numerical Radius of Some Matrices
International Journal of Mathematical Analysis Vol., 08, no., 9-8 HIKARI Ltd, www.m-hikari.com https://doi.org/0.988/ijma.08.75 On Numerical Radius of Some Matrices Shyamasree Ghosh Dastidar Department
Διαβάστε περισσότεραTrigonometric Formula Sheet
Trigonometric Formula Sheet Definition of the Trig Functions Right Triangle Definition Assume that: 0 < θ < or 0 < θ < 90 Unit Circle Definition Assume θ can be any angle. y x, y hypotenuse opposite θ
Διαβάστε περισσότεραSrednicki Chapter 55
Srednicki Chapter 55 QFT Problems & Solutions A. George August 3, 03 Srednicki 55.. Use equations 55.3-55.0 and A i, A j ] = Π i, Π j ] = 0 (at equal times) to verify equations 55.-55.3. This is our third
Διαβάστε περισσότεραParametrized Surfaces
Parametrized Surfaces Recall from our unit on vector-valued functions at the beginning of the semester that an R 3 -valued function c(t) in one parameter is a mapping of the form c : I R 3 where I is some
Διαβάστε περισσότεραMATRICES
MARICES 1. Matrix: he arrangement of numbers or letters in the horizontal and vertical lines so that each horizontal line contains same number of elements and each vertical row contains the same numbers
Διαβάστε περισσότεραHomomorphism in Intuitionistic Fuzzy Automata
International Journal of Fuzzy Mathematics Systems. ISSN 2248-9940 Volume 3, Number 1 (2013), pp. 39-45 Research India Publications http://www.ripublication.com/ijfms.htm Homomorphism in Intuitionistic
Διαβάστε περισσότεραOrbital angular momentum and the spherical harmonics
Orbital angular momentum and the spherical harmonics March 8, 03 Orbital angular momentum We compare our result on representations of rotations with our previous experience of angular momentum, defined
Διαβάστε περισσότεραDIRECT PRODUCT AND WREATH PRODUCT OF TRANSFORMATION SEMIGROUPS
GANIT J. Bangladesh Math. oc. IN 606-694) 0) -7 DIRECT PRODUCT AND WREATH PRODUCT OF TRANFORMATION EMIGROUP ubrata Majumdar, * Kalyan Kumar Dey and Mohd. Altab Hossain Department of Mathematics University
Διαβάστε περισσότεραProblem Set 9 Solutions. θ + 1. θ 2 + cotθ ( ) sinθ e iφ is an eigenfunction of the ˆ L 2 operator. / θ 2. φ 2. sin 2 θ φ 2. ( ) = e iφ. = e iφ cosθ.
Chemistry 362 Dr Jean M Standard Problem Set 9 Solutions The ˆ L 2 operator is defined as Verify that the angular wavefunction Y θ,φ) Also verify that the eigenvalue is given by 2! 2 & L ˆ 2! 2 2 θ 2 +
Διαβάστε περισσότεραGenerating Set of the Complete Semigroups of Binary Relations
Applied Mathematics 06 7 98-07 Published Online January 06 in SciRes http://wwwscirporg/journal/am http://dxdoiorg/036/am067009 Generating Set of the Complete Semigroups of Binary Relations Yasha iasamidze
Διαβάστε περισσότερα