Multilinear Algebra 1


Tin-Yau Tam
Department of Mathematics and Statistics
221 Parker Hall
Auburn University
AL 36849, USA
tamtiny@auburn.edu
November 30,

Some portions are from B.Y. Wang's Foundation of Multilinear Algebra (1985, in Chinese)


Chapter 1  Review of Linear Algebra

1.1 Linear extension

In this course, U, V, W are finite dimensional vector spaces over C, unless specified otherwise. All bases are ordered bases.

Denote by Hom(V, W) the set of all linear maps from V to W, and End V := Hom(V, V) the set of all linear operators on V. Notice that Hom(V, W) is a vector space under the usual addition and scalar multiplication. For T ∈ Hom(V, W), the image and kernel are

Im T = {Tv : v ∈ V} ⊆ W,   Ker T = {v ∈ V : Tv = 0} ⊆ V,

which are subspaces. It is known that T is injective if and only if Ker T = 0. The rank of T is rank T := dim Im T.

Denote by C^{m×n} the space of m×n complex matrices. Each A ∈ C^{m×n} can be viewed as an element of Hom(C^n, C^m) in the obvious way. So we have the concepts of rank, inverse, image, kernel, etc. for matrices.

Theorem 1.1.1 Let E = {e_1, ..., e_n} be a basis of V and let w_1, ..., w_n ∈ W. Then there exists a unique T ∈ Hom(V, W) such that T(e_i) = w_i, i = 1, ..., n.

Proof. For each v = Σ_{i=1}^n a_i e_i, define Tv = Σ_{i=1}^n a_i Te_i = Σ_{i=1}^n a_i w_i. Such T is clearly linear. If S, T ∈ Hom(V, W) both satisfy S(e_i) = T(e_i) = w_i, i = 1, ..., n, then Sv = Σ a_i Se_i = Σ a_i Te_i = Tv for all v = Σ a_i e_i ∈ V, so that S = T.

In other words, a linear map is completely determined by the images of the basis elements of V.

A bijective T ∈ Hom(V, W) is said to be invertible, and its inverse T^{-1} (satisfying T^{-1}T = I_V and TT^{-1} = I_W) is linear, i.e., T^{-1} ∈ Hom(W, V): T^{-1}(α_1 w_1 + α_2 w_2) = v means that Tv = α_1 w_1 + α_2 w_2 = T(α_1 T^{-1}w_1 + α_2 T^{-1}w_2), i.e., v = α_1 T^{-1}w_1 + α_2 T^{-1}w_2, for all w_1, w_2 ∈ W and v ∈ V. We simply write ST for S ∘ T, where S ∈ Hom(W, U) and T ∈ Hom(V, W). Two vector spaces V and W are said to be isomorphic if there is an invertible T ∈ Hom(V, W).

Theorem 1.1.2 Let T ∈ Hom(V, W) and dim V = n.
1. rank T = k if and only if there is a basis {v_1, ..., v_k, v_{k+1}, ..., v_n} of V such that Tv_1, ..., Tv_k are linearly independent and Tv_{k+1} = ··· = Tv_n = 0.
2. dim V = dim Im T + dim Ker T.

Proof. Since rank T = k, there is a basis {Tv_1, ..., Tv_k} of Im T. Let {v_{k+1}, ..., v_{k+l}} be a basis of Ker T. Set E = {v_1, ..., v_k, v_{k+1}, ..., v_{k+l}}. For each v ∈ V, Tv = Σ_{i=1}^k a_i Tv_i since Tv ∈ Im T. So T(v − Σ_{i=1}^k a_i v_i) = 0, i.e., v − Σ_{i=1}^k a_i v_i ∈ Ker T. Thus v − Σ_{i=1}^k a_i v_i = Σ_{i=k+1}^{k+l} a_i v_i, i.e., v = Σ_{i=1}^{k+l} a_i v_i, so that E spans V. So it suffices to show that E is linearly independent. Suppose Σ_{i=1}^{k+l} a_i v_i = 0. Applying T to both sides, Σ_{i=1}^k a_i Tv_i = 0, so that a_1 = ··· = a_k = 0. Hence Σ_{i=k+1}^{k+l} a_i v_i = 0, and a_{k+1} = ··· = a_{k+l} = 0 since v_{k+1}, ..., v_{k+l} are linearly independent. Thus E is linearly independent and hence a basis of V; so k + l = n.

Theorem 1.1.3 Let A ∈ C^{m×n}.
1. rank A*A = rank A.
2. rank A = rank A* = rank A^T, i.e., the column rank and the row rank of A are the same.

Proof. (1) Notice that Ax = 0 if and only if A*Ax = 0: indeed A*Ax = 0 implies (Ax)*(Ax) = x*A*Ax = 0. So Ax_1, ..., Ax_k are linearly independent if and only if A*Ax_1, ..., A*Ax_k are linearly independent. Hence rank A*A = rank A. (2) By (1) and Problem 3, rank A = rank A*A ≤ rank A*, and thus rank A* ≤ rank A since (A*)* = A. Hence rank A = rank A* = rank A^T (why?).

Problems
1. Show that dim Hom(V, W) = dim V dim W.
2. Let T ∈ Hom(V, W). Prove that rank T ≤ min{dim V, dim W}.
3. Show that if T ∈ Hom(V, U), S ∈ Hom(U, W), then rank ST ≤ min{rank S, rank T}.
4. Show that the inverse of T ∈ Hom(V, W) is unique, if it exists. Moreover, T is invertible if and only if rank T = dim V = dim W.
5. Show that V and W are isomorphic if and only if dim V = dim W.

6. Show that if T ∈ Hom(V, U), S ∈ Hom(U, W) are invertible, then ST is also invertible; in this case (ST)^{-1} = T^{-1}S^{-1}.
7. Show that A ∈ C^{m×n} is invertible only if m = n, i.e., A has to be square. Show that A ∈ C^{n×n} is invertible if and only if the columns of A are linearly independent.
8. Prove that if A, B ∈ C^{n×n} are such that AB = I_n, then BA = I_n.

Solutions to Problems
1. Let {e_1, ..., e_n} and {f_1, ..., f_m} be bases of V and W respectively. Then the maps ξ_{ij} ∈ Hom(V, W) defined by ξ_{ij}(e_k) = δ_{ik} f_j, where i, k = 1, ..., n and j = 1, ..., m, form a basis of Hom(V, W). Thus dim Hom(V, W) = dim V dim W.
2. By definition rank T ≤ dim W. By Theorem 1.1.2, rank T ≤ dim V.
3. Since Im ST ⊆ Im S, rank ST ≤ rank S. By Theorem 1.1.2, rank ST = dim V − dim Ker ST. But Ker T ⊆ Ker ST, so that rank ST ≤ dim V − dim Ker T = rank T.
4. Suppose that S and S' ∈ Hom(W, V) are inverses of T ∈ Hom(V, W). Then S = S(TS') = (ST)S' = S'; the inverse is unique. T ∈ Hom(V, W) is invertible if and only if T is bijective (check!). Now T is injective ⇔ Ker T = 0 ⇔ rank T = dim V (Theorem 1.1.2), and T is surjective ⇔ rank T = dim W.
5. One implication follows from Problem 4. If dim V = dim W, let E = {e_1, ..., e_n} and F = {f_1, ..., f_n} be bases of V and W, and define T ∈ Hom(V, W) by Te_i = f_i for all i; T is clearly invertible since T^{-1}f_i = e_i.
6. (T^{-1}S^{-1})(ST) = I_V and (ST)(T^{-1}S^{-1}) = I_W.
7. From Problem 4, a matrix A is invertible only if A is square. The second statement follows from the fact that rank A is the dimension of the column space of A.
8. If AB = I, then rank AB = n, so rank A = n (= rank B) by Problem 3. So Ker A = 0 and Ker B = 0 by Theorem 1.1.2. Thus A and B are invertible. Then I = A^{-1}(AB)B^{-1} = A^{-1}B^{-1} = (BA)^{-1}, so that BA = I. (Or, without using Theorem 1.1.2, note that B = B(AB) = (BA)B, so that (I − BA)B = 0. Since Im B = C^n, I − BA = 0, i.e., BA = I.)

1.2 Matrix representations of linear maps

In this section, U, V, W are finite dimensional vector spaces. Matrices A ∈ C^{m×n} can be viewed as elements of Hom(C^n, C^m). On the other hand, each T ∈ Hom(V, W) can be realized as a matrix once we fix bases for V and W.
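Theorem 1.1.3's rank identities are easy to probe numerically. The sketch below (numpy, with an arbitrarily chosen rank-2 complex matrix; the specific matrix is made up for illustration) checks that rank A*A = rank A = rank A^T:

```python
import numpy as np

# A hypothetical 4x3 complex matrix of rank 2: its third column is the
# sum of the first two, so the columns span a 2-dimensional subspace.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))
A = np.hstack([A, A[:, [0]] + A[:, [1]]])            # 4x3, rank 2

# Theorem 1.1.3: rank A*A = rank A, and rank A = rank A^T.
rank_A = np.linalg.matrix_rank(A)
rank_AstarA = np.linalg.matrix_rank(A.conj().T @ A)  # A* = conjugate transpose
rank_AT = np.linalg.matrix_rank(A.T)
```

Note that `matrix_rank` computes a numerical rank via singular values, so the agreement is up to floating-point tolerance.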

Let T ∈ Hom(V, W) with bases E = {e_1, ..., e_n} for V and F = {f_1, ..., f_m} for W. Since Te_j ∈ W, we have

Te_j = Σ_{i=1}^m a_{ij} f_i,   j = 1, ..., n.

The matrix A = (a_{ij}) ∈ C^{m×n} is called the matrix representation of T with respect to the bases E and F, denoted by [T]^F_E = A. From Theorem 1.1.1, [S]^F_E = [T]^F_E if and only if S = T, where S, T ∈ Hom(V, W).

The coordinate vector of v ∈ V with respect to the basis E = {e_1, ..., e_n} of V is denoted by [v]_E := (a_1, ..., a_n)^T ∈ C^n, where v = Σ_{i=1}^n a_i e_i. Indeed we can view v ∈ Hom(C, V), with {1} as a basis of C. Then v(1) = Σ_{i=1}^n a_i e_i, so that [v]_E is indeed the matrix representation [v]^E_{{1}}.

Theorem 1.2.1 Let T ∈ Hom(V, W) and S ∈ Hom(W, U), with bases E = {e_1, ..., e_n} for V, F = {f_1, ..., f_m} for W and G = {g_1, ..., g_l} for U. Then [ST]^G_E = [S]^G_F [T]^F_E, and in particular [Tv]_F = [T]^F_E [v]_E for all v ∈ V.

Proof. Let A = [T]^F_E and B = [S]^G_F, i.e., Te_j = Σ_{i=1}^m a_{ij} f_i, j = 1, ..., n, and Sf_i = Σ_{k=1}^l b_{ki} g_k, i = 1, ..., m. So

STe_j = Σ_{i=1}^m a_{ij} Sf_i = Σ_{i=1}^m a_{ij} (Σ_{k=1}^l b_{ki} g_k) = Σ_{k=1}^l (Σ_{i=1}^m b_{ki} a_{ij}) g_k = Σ_{k=1}^l (BA)_{kj} g_k.

So [ST]^G_E = BA = [S]^G_F [T]^F_E.

Consider I := I_V ∈ End V and bases E, E' of V. The matrix [I]^{E'}_E is called the transition matrix from E to E', since

[v]_{E'} = [Iv]_{E'} = [I]^{E'}_E [v]_E,   v ∈ V,

i.e., [I]^{E'}_E transforms the coordinate vector [v]_E with respect to E into [v]_{E'} with respect to E'. From Theorem 1.2.1, [I]^{E'}_E = ([I]^E_{E'})^{-1}.

Two operators S, T ∈ End V are said to be similar if there is an invertible P ∈ End V such that S = P^{-1}TP. Similarity is an equivalence relation, denoted by ∼.

Theorem 1.2.2 Let T ∈ End V with dim V = n and let E and E' be bases of V. Then A := [T]^E_E and B := [T]^{E'}_{E'} are similar, i.e., there is an invertible P ∈ C^{n×n} such that B = P^{-1}AP. Conversely, similar matrices are matrix representations of the same operator with respect to different bases.

Proof. Let I = I_V. By Theorem 1.2.1, [I]^{E'}_E [I]^E_{E'} = [I]^{E'}_{E'} = I_n, where I_n is the n×n identity matrix. Denote P := [I]^E_{E'} (the transition matrix from E' to E), so that P^{-1} = [I]^{E'}_E. Thus

[T]^{E'}_{E'} = [ITI]^{E'}_{E'} = [I]^{E'}_E [T]^E_E [I]^E_{E'} = P^{-1} [T]^E_E P.

Suppose that A and B are similar, i.e., B = R^{-1}AR. Let E be a basis of V. By Theorem 1.1.1, A uniquely determines T ∈ End V such that [T]^E_E = A. By Theorem 1.2.1, the invertible R ∈ C^{n×n} uniquely determines a basis E' of V such that [I]^E_{E'} = R, so that [T]^{E'}_{E'} = [I]^{E'}_E [T]^E_E [I]^E_{E'} = R^{-1} [T]^E_E R = B.

So functions φ : C^{n×n} → C that take constant values on similarity orbits of C^{n×n}, i.e., φ(A) = φ(P^{-1}AP) for all invertible P ∈ C^{n×n}, are well defined for operators; examples are the determinant and the trace of an operator.

Problems
1. Let E = {e_1, ..., e_n} and F = {f_1, ..., f_m} be bases for V and W. Show that the matrix representation φ := [·]^F_E : Hom(V, W) → C^{m×n} is an isomorphism.
2. Let E and F be bases of V and W respectively. Show that if T ∈ Hom(V, W) is invertible, then [T^{-1}]^E_F = ([T]^F_E)^{-1}.
3. Let E and F be bases of V and W respectively. Let T ∈ Hom(V, W) and A = [T]^F_E. Show that rank A = rank T.

Solutions to Problems
1. It is straightforward to show that φ is linear by comparing the (i, j) entries of [αS + βT]^F_E and α[S]^F_E + β[T]^F_E. Since dim Hom(V, W) = mn = dim C^{m×n}, it suffices to show that φ is injective, and that follows from Theorem 1.1.1.
2. From Theorem 1.2.1, [T^{-1}]^E_F [T]^F_E = [T^{-1}T]^E_E = [I_V]^E_E = I_n, where dim V = n. Then use Problem 1.8 to conclude [T^{-1}]^E_F = ([T]^F_E)^{-1}, or show that [T]^F_E [T^{-1}]^E_F = I_n similarly.
3. (Roy) Let rank T = k and let E = {v_1, ..., v_n} and F = {w_1, ..., w_m} be bases for V and W. Viewing each v ∈ V as an element of Hom(C, V), by Problem 1 the coordinate maps [·]_E : V → C^n and [·]_F : W → C^m are isomorphisms. So

rank T = dim Im T = dim ⟨Tv_1, ..., Tv_n⟩ = dim ⟨[Tv_1]_F, ..., [Tv_n]_F⟩ = dim ⟨[T]^F_E [v_1]_E, ..., [T]^F_E [v_n]_E⟩ = dim Im A = rank A.
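Theorem 1.2.2 and the remark after it can be checked on small matrices. A minimal sketch (numpy; both A and the basis change P below are made up for illustration) verifies that trace and determinant, being constant on similarity orbits, agree for [T]^E_E and [T]^{E'}_{E'}:

```python
import numpy as np

# [T]^E_E for a hypothetical operator T on C^3 in a basis E.
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 1.]])

# Columns of P express a second basis E' in terms of E, i.e. P = [I]^E_{E'};
# P is unit upper triangular, hence invertible.
P = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])

B = np.linalg.inv(P) @ A @ P   # [T]^{E'}_{E'} = P^{-1} [T]^E_E P
```

Since trace and determinant are similarity invariants, `np.trace(B)` and `np.linalg.det(B)` must match those of A up to rounding.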

1.3 Inner product spaces

Let V be a vector space. An inner product on V is a function (·,·) : V × V → C such that
1. (u, v) = \overline{(v, u)} for all u, v ∈ V;
2. (α_1 v_1 + α_2 v_2, u) = α_1 (v_1, u) + α_2 (v_2, u) for all v_1, v_2, u ∈ V and α_1, α_2 ∈ C;
3. (v, v) ≥ 0 for all v ∈ V, and (v, v) = 0 if and only if v = 0.

The space V is then called an inner product space. The norm induced by the inner product is defined as ‖v‖ = √(v, v), v ∈ V. Vectors v satisfying ‖v‖ = 1 are called unit vectors. Two vectors u, v ∈ V are said to be orthogonal if (u, v) = 0, denoted by u ⊥ v. A basis E = {e_1, ..., e_n} is called an orthogonal basis if its vectors are pairwise orthogonal. It is said to be orthonormal if (e_i, e_j) = δ_{ij}, i, j = 1, ..., n, where δ_{ij} = 1 if i = j and δ_{ij} = 0 if i ≠ j is the Kronecker delta.

Theorem 1.3.1 (Cauchy-Schwarz inequality) Let V be an inner product space. Then

|(u, v)| ≤ ‖u‖ ‖v‖,   u, v ∈ V.

Equality holds if and only if u and v are linearly dependent, i.e., one is a scalar multiple of the other.

Proof. It is trivial when v = 0. Suppose v ≠ 0. Let w = u − ((u, v)/‖v‖²) v. Clearly (w, v) = 0, so that

0 ≤ (w, w) = (w, u) = (u, u) − ((u, v)/‖v‖²)(v, u) = ‖u‖² − |(u, v)|²/‖v‖².

Equality holds if and only if w = 0, i.e., u and v are linearly dependent.

Theorem 1.3.2 (Triangle inequality) Let V be an inner product space. Then

‖u + v‖ ≤ ‖u‖ + ‖v‖,   u, v ∈ V.

Proof. ‖u + v‖² = (u + v, u + v) = ‖u‖² + 2 Re (u, v) + ‖v‖² ≤ ‖u‖² + 2|(u, v)| + ‖v‖². By Theorem 1.3.1, we have ‖u + v‖² ≤ ‖u‖² + 2‖u‖‖v‖ + ‖v‖² = (‖u‖ + ‖v‖)².
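Theorem 1.3.1 and its equality case can be sanity-checked numerically. A minimal sketch, with made-up random vectors and the standard inner product on C^5:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(5) + 1j * rng.standard_normal(5)
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# Standard inner product (x, y) = sum_i x_i conj(y_i); np.vdot conjugates
# its FIRST argument, so (x, y) is np.vdot(y, x).
ip = lambda x, y: np.vdot(y, x)

lhs = abs(ip(u, v))
rhs = np.linalg.norm(u) * np.linalg.norm(v)

# Equality case: w = 2u is a scalar multiple of u, so |(u, w)| = ||u|| ||w||.
w = 2.0 * u
eq_lhs = abs(ip(u, w))
eq_rhs = np.linalg.norm(u) * np.linalg.norm(w)
```

For generic (linearly independent) u and v the inequality is strict; equality is recovered exactly when one vector is a scalar multiple of the other.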

Theorem 1.3.3 Let E = {e_1, ..., e_n} be an orthonormal basis of V. For any u, v ∈ V,

u = Σ_{i=1}^n (u, e_i) e_i,   (u, v) = Σ_{i=1}^n (u, e_i)(e_i, v).

Proof. Let u = Σ_{j=1}^n a_j e_j. Then (u, e_i) = (Σ_{j=1}^n a_j e_j, e_i) = a_i, i = 1, ..., n. Now (u, v) = (Σ_{i=1}^n (u, e_i) e_i, v) = Σ_{i=1}^n (u, e_i)(e_i, v).

Denote by ⟨v_1, ..., v_k⟩ the span of the vectors v_1, ..., v_k.

Theorem 1.3.4 (Gram-Schmidt orthogonalization) Let V be an inner product space with basis {v_1, ..., v_n}. Then there is an orthonormal basis {e_1, ..., e_n} such that ⟨v_1, ..., v_k⟩ = ⟨e_1, ..., e_k⟩, k = 1, ..., n.

Proof. Let

e_1 = v_1 / ‖v_1‖,
e_2 = (v_2 − (v_2, e_1)e_1) / ‖v_2 − (v_2, e_1)e_1‖,
...
e_n = (v_n − (v_n, e_1)e_1 − ··· − (v_n, e_{n-1})e_{n-1}) / ‖v_n − (v_n, e_1)e_1 − ··· − (v_n, e_{n-1})e_{n-1}‖.

It is a direct computation to check that {e_1, ..., e_n} is the desired orthonormal basis.

The inner product on C^n defined, for x = (x_1, ..., x_n)^T and y = (y_1, ..., y_n)^T ∈ C^n, by

(x, y) := Σ_{i=1}^n x_i \overline{y_i}

is called the standard inner product on C^n. The induced norm is called the 2-norm and is denoted by ‖x‖_2 := (x, x)^{1/2}.

Problems
1. Let E = {e_1, ..., e_n} be a basis of V and v ∈ V. Prove that v = 0 if and only if (v, e_i) = 0 for all i = 1, ..., n.

2. Show that each orthogonal set of nonzero vectors is linearly independent.
3. Let E = {e_1, ..., e_n} be an orthonormal basis of V. If u = Σ_{i=1}^n a_i e_i and v = Σ_{i=1}^n b_i e_i, then (u, v) = Σ_{i=1}^n a_i \overline{b_i}.
4. Show that an inner product is completely determined by an orthonormal basis, i.e., if the inner products (·,·) and ⟨·,·⟩ have a common orthonormal basis, then (·,·) and ⟨·,·⟩ are the same.
5. Show that (A, B) = tr (B*A) defines an inner product on C^{m×n}, where B* denotes the complex conjugate transpose of B.
6. Prove that |tr (A*B)|² ≤ tr (A*A) tr (B*B) for all A, B ∈ C^{m×n}.

Solutions to Problems
1. v = 0 ⇒ (v, u) = 0 for all u ∈ V ⇒ (v, e_i) = 0 for all i. Conversely, each u ∈ V is of the form Σ a_i e_i, so that (v, e_i) = 0 for all i implies (v, u) = 0 for all u ∈ V; in particular (v, v) = 0, i.e., v = 0.
2. Suppose that S = {v_1, ..., v_n} is an orthogonal set of nonzero vectors. If Σ a_i v_i = 0, then a_j (v_j, v_j) = (Σ a_i v_i, v_j) = 0, so that a_j = 0 for all j since (v_j, v_j) ≠ 0. Thus S is linearly independent.
3. Follows from Theorem 1.3.3.
4. Follows from Problem 3.
5. Straightforward computation.
6. Apply the Cauchy-Schwarz inequality to the inner product defined in Problem 5.

1.4 Adjoints

Let V, W be inner product spaces. For each T ∈ Hom(V, W), the adjoint of T is the map S ∈ Hom(W, V) such that (Tv, w)_W = (v, Sw)_V for all v ∈ V, w ∈ W; it is denoted by T*. Clearly (T*)* = T.

Theorem 1.4.1 Let W, V be inner product spaces. Each T ∈ Hom(V, W) has a unique adjoint.

Proof. By Theorem 1.3.4, let E = {e_1, ..., e_n} be an orthonormal basis of V. For any w ∈ W, define S ∈ Hom(W, V) by

Sw := Σ_{i=1}^n (w, Te_i)_W e_i.

For any v ∈ V, by Theorem 1.3.3, v = Σ_{i=1}^n (v, e_i)_V e_i, so that

(v, Sw)_V = (v, Σ_{i=1}^n (w, Te_i)_W e_i)_V = Σ_{i=1}^n (Te_i, w)_W (v, e_i)_V = (T Σ_{i=1}^n (v, e_i)_V e_i, w)_W = (Tv, w)_W.

Uniqueness follows from Problem 1.

Theorem 1.4.2 Let E = {e_1, ..., e_n} and F = {f_1, ..., f_m} be orthonormal bases of the inner product spaces V and W respectively. Let T ∈ Hom(V, W). Then [T*]^E_F = ([T]^F_E)*, where the second * denotes the complex conjugate transpose.

Proof. Let A := [T]^F_E, i.e., Te_j = Σ_{k=1}^m a_{kj} f_k, j = 1, ..., n. So (Te_j, f_i)_W = a_{ij}. By the proof of Theorem 1.4.1,

T*f_j = Σ_{i=1}^n (f_j, Te_i)_W e_i = Σ_{i=1}^n \overline{(Te_i, f_j)_W} e_i = Σ_{i=1}^n \overline{a_{ji}} e_i.

So [T*]^E_F = A* = ([T]^F_E)*.

Notice that if E and F are not orthonormal, then [T*]^E_F = ([T]^F_E)* may not hold.

Problems
1. Let S, T ∈ Hom(V, W), where V, W are inner product spaces. Prove that
(a) (Tv, w) = 0 for all v ∈ V and w ∈ W if and only if T = 0;
(b) (Sv, w) = (Tv, w) for all v ∈ V and w ∈ W if and only if S = T.
2. Show that (ST)* = T*S*, where T ∈ Hom(V, W), S ∈ Hom(W, U) and U, V, W are inner product spaces.
3. Let E and F be orthonormal bases of an inner product space V. Prove that ([I]^F_E)* = ([I]^F_E)^{-1} = [I]^E_F.
4. Let G be a basis of the inner product space V. Let T ∈ End V. Prove that [T*]^G_G and ([T]^G_G)* are similar.
5. Let V, W be inner product spaces. Prove that if T ∈ Hom(V, W), then rank T* = rank T.
6. Show that the adjoint map * : Hom(V, W) → Hom(W, V) is an isomorphism.
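Theorem 1.4.2 is concrete when V = C^3 and W = C^2 carry the standard inner products and standard (orthonormal) bases: T is a matrix A and T* is its conjugate transpose A*. A minimal sketch (made-up random data) verifies the defining identity (Tv, w) = (v, T*w):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))  # T : C^3 -> C^2
Astar = A.conj().T                                                  # T* : C^2 -> C^3

# Arbitrary test vectors.
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Standard inner product (x, y) = sum_i x_i conj(y_i) = np.vdot(y, x).
lhs = np.vdot(w, A @ v)        # (Tv, w)_W
rhs = np.vdot(Astar @ w, v)    # (v, T*w)_V
```

The two sides agree up to rounding, for any choice of v and w, which is exactly the uniqueness statement of Theorem 1.4.1 in matrix form.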

Solutions to Problems
1. (a) (Tv, w) = 0 for all v ∈ V and w ∈ W ⇔ Tv = 0 for all v ∈ V, i.e., T = 0. (b) Apply (a) to S − T.
2. Since (v, T*S*w)_V = (Tv, S*w)_W = (STv, w)_U, by the uniqueness of the adjoint, (ST)* = T*S*.
3. Notice that I* = I, where I := I_V. So from Theorem 1.4.2, ([I]^F_E)* = [I*]^E_F = [I]^E_F. Then by Problem 2.2, ([I]^F_E)^{-1} = [I]^E_F.
4. (Roy and Alex) Let E be an orthonormal basis of V. Let P = [I_V]^E_G (the transition matrix from G to E). Then [T]^G_G = P^{-1}[T]^E_E P and [T]^E_E = P[T]^G_G P^{-1}. By Theorem 1.4.2, [T*]^E_E = ([T]^E_E)*. Together with Problem 2,

[T*]^G_G = P^{-1}[T*]^E_E P = P^{-1}([T]^E_E)* P = P^{-1}(P[T]^G_G P^{-1})* P = (P*P)^{-1} ([T]^G_G)* (P*P),

i.e., [T*]^G_G and ([T]^G_G)* are similar. Remark: if G is an orthonormal basis, then P is unitary and thus [T*]^G_G = ([T]^G_G)*, a special case of Theorem 1.4.2.
5. Follows from Theorem 1.4.2, Problem 2.3 and rank A = rank A*, where A is a matrix.
6. The map * : Hom(V, W) → Hom(W, V) satisfies (S + T)* = S* + T* and (αT)* = \overline{α} T*, i.e., it is conjugate-linear, and it is straightforward to check these identities. Since dim Hom(V, W) = dim Hom(W, V), it remains to show that * is injective. First show that (T*)* = T: for all v ∈ V, w ∈ W, (T*w, v)_V = \overline{(v, T*w)_V} = \overline{(Tv, w)_W} = (w, Tv)_W. Then for any S, T ∈ Hom(V, W), S* = T* implies S = (S*)* = (T*)* = T.

1.5 Normal operators and matrices

An operator T ∈ End V on an inner product space V is
• normal if T*T = TT*;
• Hermitian if T* = T;
• positive semi-definite, abbreviated as psd or T ≥ 0, if (Tx, x) ≥ 0 for all x ∈ V;
• positive definite, abbreviated as pd or T > 0, if (Tx, x) > 0 for all 0 ≠ x ∈ V;
• unitary if T*T = I.

Unitary operators form a group. When V = C^n is equipped with the standard inner product and the standard orthonormal basis, linear operators are viewed as matrices in C^{n×n} and the adjoint is simply the complex conjugate transpose. Thus a matrix A ∈ C^{n×n} is said to be normal if A*A = AA*; Hermitian if A* = A; psd if (Ax, x) ≥ 0 for all x ∈ C^n; pd if (Ax, x) > 0 for all 0 ≠ x ∈ C^n; unitary if A*A = I. Unitary matrices in C^{n×n} form a group, denoted by U_n(C). One sees immediately that A is unitary if and only if A has orthonormal columns (equivalently orthonormal rows, since AA* = I as well, by Problem 1.8).

Theorem 1.5.1 (Schur triangularization theorem) Let A ∈ C^{n×n}. There is U ∈ U_n(C) such that U*AU is upper triangular.

Proof. Let λ_1 be an eigenvalue of A with unit eigenvector x_1, i.e., Ax_1 = λ_1 x_1 with ‖x_1‖_2 = 1. Extend via Theorem 1.3.4 to an orthonormal basis {x_1, ..., x_n} of C^n and set Q = [x_1 ··· x_n], which is unitary. Since Q*Ax_1 = λ_1 Q*x_1 = λ_1 (1, 0, ..., 0)^T,

Â := Q*AQ = [ λ_1  * ]
            [ 0   A_1 ]

where A_1 ∈ C^{(n−1)×(n−1)}. By induction, there is U_1 ∈ U_{n−1}(C) such that U_1* A_1 U_1 is upper triangular. Set

P := [ 1  0  ]
     [ 0  U_1 ]

which is unitary. So

P* Â P = [ λ_1  *          ]
         [ 0    U_1* A_1 U_1 ]

is upper triangular; set U = QP.

Theorem 1.5.2 Let T ∈ End V, where V is an inner product space. Then there is an orthonormal basis E of V such that [T]^E_E is upper triangular.

Proof. For any orthonormal basis E' = {e'_1, ..., e'_n} of V let A := [T]^{E'}_{E'}. By Theorem 1.5.1 there is U ∈ U_n(C), where n = dim V, such that U*AU is upper triangular. Since U is unitary, E = {e_1, ..., e_n} is also an orthonormal basis, where e_j = Σ_{i=1}^n u_{ij} e'_i, j = 1, ..., n (check!), and [I]^{E'}_E = U. Hence [T]^E_E = [I]^E_{E'} [T]^{E'}_{E'} [I]^{E'}_E = U*AU, since U* = U^{-1}.

Lemma 1.5.3 Let A ∈ C^{n×n} be normal and let r be fixed. Then a_{rj} = 0 for all j ≠ r if and only if a_{ir} = 0 for all i ≠ r.

Proof. Suppose a_{rj} = 0 for all j ≠ r. From AA* = A*A,

|a_{rr}|² = Σ_{j=1}^n |a_{rj}|² = (AA*)_{rr} = (A*A)_{rr} = |a_{rr}|² + Σ_{i≠r} |a_{ir}|².

So a_{ir} = 0 for all i ≠ r. For the converse, apply the same argument to the normal matrix A*.
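A small numerical illustration of Theorem 1.5.1's content (numpy only; the triangular factor below is built by hand rather than computed, so this checks the consequence that a matrix of the form U T U* has its trace and determinant readable off the diagonal of T, as a Schur form guarantees):

```python
import numpy as np

rng = np.random.default_rng(3)

# Build A = U T U* with T upper triangular and U unitary (here real
# orthogonal, obtained from the QR factorization of a generic matrix).
T = np.triu(rng.standard_normal((4, 4)))          # upper triangular
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # orthogonal, so U* = U^T
A = U @ T @ U.conj().T

# Trace and determinant are unitary-similarity invariants, so they equal
# the sum and product of the diagonal entries of the triangular factor.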

Theorem 1.5.4 Let A ∈ C^{n×n}. Then
1. A is normal if and only if A is unitarily similar to a diagonal matrix.
2. A is Hermitian (psd, pd) if and only if A is unitarily similar to a real (nonnegative, positive) diagonal matrix.
3. A is unitary if and only if A is unitarily similar to a diagonal unitary matrix.

Proof. Notice that if A is normal, Hermitian, psd, pd or unitary, so is U*AU for any U ∈ U_n(C). By Theorem 1.5.1, there is U ∈ U_n(C) such that U*AU is upper triangular, and it is still normal. By Lemma 1.5.3 (applied row by row), U*AU is diagonal. The rest follows similarly.

From Gram-Schmidt orthogonalization (Theorem 1.3.4), one has the QR decomposition of an invertible matrix.

Theorem 1.5.5 (QR decomposition) Each invertible A ∈ C^{n×n} can be decomposed as A = QR, where Q ∈ U_n(C) and R is upper triangular. The diagonal entries of R may be chosen positive; in this case the decomposition is unique.

Problems
1. Show that "upper triangular" in Theorem 1.5.1 may be replaced by "lower triangular".
2. Prove that A ∈ End V is unitary if and only if ‖Av‖ = ‖v‖ for all v ∈ V.
3. Show that A ∈ C^{n×n} is psd (pd) if and only if A = B*B for some (invertible) matrix B. In particular, B may be chosen lower or upper triangular.
4. Let V be an inner product space. Prove that (i) (Av, v) = 0 for all v ∈ V if and only if A = 0, and (ii) (Av, v) ∈ R for all v ∈ V if and only if A is Hermitian. What happens if the underlying field C is replaced by R?
5. Show that if A is psd, then A is Hermitian. What happens if the underlying field C is replaced by R?
6. Prove Theorem 1.5.5.
7. Prove that if T ∈ Hom(V, W), where V and W are inner product spaces, then T*T ≥ 0 and TT* ≥ 0. If T is invertible, then T*T > 0 and TT* > 0.

Solutions to Problems
1. Apply Theorem 1.5.1 to A* and take the complex conjugate transpose back.
2. A ∈ End V is unitary means A*A = I_V. So unitary A implies ‖Av‖² = (Av, Av) = (A*Av, v) = (v, v) = ‖v‖². Conversely, ‖Av‖ = ‖v‖ for all v ∈ V implies ((A*A − I)v, v) = 0, where A*A − I is Hermitian. Apply Problem 4.
3. If A = B*B, then x*Ax = x*B*Bx = ‖Bx‖² ≥ 0 for all x ∈ C^n. Conversely, if A is psd, we can define a psd square root A^{1/2} via Theorem 1.5.4. Apply QR to A^{1/2} to get A^{1/2} = QR, where Q is unitary and R is upper triangular. Hence A = A^{1/2}A^{1/2} = (A^{1/2})*A^{1/2} = R*Q*QR = R*R. Set B := R. For the lower-triangular choice, let A^{1/2} = PL, where P is unitary and L is lower triangular (a lower-triangular variant of the QR decomposition). Then A = L*L.
4. (i) It suffices to consider Hermitian A, because of the Hermitian decomposition A = H + iK, where H := (A + A*)/2 and K := (A − A*)/(2i) are Hermitian, and (Av, v) = (Hv, v) + i(Kv, v) with (Hv, v), (Kv, v) ∈ R. So (Av, v) = 0 for all v if and only if (Hv, v) = (Kv, v) = 0 for all v ∈ V. Suppose A is Hermitian and (Av, v) = 0 for all v ∈ V. Then there is an orthonormal basis of eigenvectors {v_1, ..., v_n} of A with corresponding eigenvalues λ_1, ..., λ_n, but all the eigenvalues are zero because λ_i (v_i, v_i) = (Av_i, v_i) = 0. When C is replaced by R, we cannot conclude that A = 0, since for any real skew-symmetric matrix A ∈ R^{n×n}, x^T A x = 0 (why?).
(Brice) Alternatively, notice that A is psd (because (Av, v) = 0 ≥ 0). From Problem 3 via the matrix-operator approach, A = B*B for some B ∈ End V. Then ‖Bv‖² = (Av, v) = 0 for all v, so B = 0 and hence A = 0.
(ii) 2[(Av, w) + (Aw, v)] = (A(v+w), v+w) − (A(v−w), v−w) ∈ R, so that Im [(Av, w) + (Aw, v)] = 0 for all v, w. Similarly 2i[−(Av, w) + (Aw, v)] = (A(v+iw), v+iw) − (A(v−iw), v−iw) ∈ R, so that Re [−(Av, w) + (Aw, v)] = 0 for all v, w. So (Av, w) = \overline{(Aw, v)} = (v, Aw), i.e., A* = A. When C is replaced by R, we cannot conclude that A is symmetric, since for any real skew-symmetric matrix A ∈ R^{n×n}, x^T A x = 0.
5. Follows from Problem 4(ii).
6. Express the columns of A (a basis of C^n) in terms of the (orthonormal) columns of Q obtained by Gram-Schmidt.
7. (T*Tv, v) = (Tv, Tv) ≥ 0 for all v ∈ V. So T*T ≥ 0, and similarly TT* ≥ 0. When T is invertible, Tv ≠ 0 for all v ≠ 0, so (T*Tv, v) = (Tv, Tv) > 0.
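The normalization in Theorem 1.5.5 (diagonal of R positive) can be carried out explicitly on top of a library QR factorization. A minimal sketch with a made-up real invertible matrix; `np.linalg.qr` does not fix signs, so we rescale (for complex A one would divide by the phases of the diagonal instead of taking signs):

```python
import numpy as np

# An invertible (here real symmetric, but that is incidental) 3x3 matrix.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

Q, R = np.linalg.qr(A)

# Rescale so that diag(R) > 0, as in Theorem 1.5.5. D is diagonal with
# entries ±1 and D^2 = I, so the product QR is unchanged.
D = np.diag(np.sign(np.diag(R)))
Q, R = Q @ D, D @ R
```

With the positive-diagonal convention the factorization is unique, so any two QR routines normalized this way must return the same Q and R for an invertible A.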

1.6 Inner product and positive operators

Theorem 1.6.1 Let V be an inner product space with inner product (·,·) and let T ∈ End V. Then

⟨u, v⟩ := (Tu, v),   u, v ∈ V,

defines an inner product if and only if T is pd with respect to (·,·).

Proof. Suppose ⟨u, v⟩ := (Tu, v), u, v ∈ V, defines an inner product, where (·,·) is an inner product for V. So (Tu, v) = ⟨u, v⟩ = \overline{⟨v, u⟩} = \overline{(Tv, u)} = (u, Tv) for all u, v ∈ V. So T* = T, i.e., T is self-adjoint. For any v ≠ 0, 0 < ⟨v, v⟩ = (Tv, v), so that T is pd with respect to (·,·). The other implication is trivial.

The next theorem shows that, in the above manner, inner products are in one-to-one correspondence with pd operators.

Theorem 1.6.2 Let (·,·) and ⟨·,·⟩ be inner products on V. Then there exists a unique T ∈ End V such that

⟨u, v⟩ = (Tu, v),   u, v ∈ V.

Moreover, T is positive definite with respect to both inner products.

Proof. Let E = {e_1, ..., e_n} be an orthonormal basis of V with respect to (·,·), so each v ∈ V can be written as v = Σ_{i=1}^n (v, e_i) e_i. For each v ∈ V define T ∈ End V by

Tv := Σ_{i=1}^n ⟨v, e_i⟩ e_i.

Clearly

(Tu, v) = Σ_{i=1}^n ⟨u, e_i⟩ (e_i, v) = ⟨u, Σ_{i=1}^n (v, e_i) e_i⟩ = ⟨u, v⟩.

Since ⟨·,·⟩ is an inner product, from Theorem 1.6.1, T is pd with respect to (·,·). When v ≠ 0, ⟨Tv, v⟩ = (T²v, v) = (Tv, Tv) > 0, so that T is pd with respect to ⟨·,·⟩. The uniqueness follows from Problem 4.1.

Theorem 1.6.3 Let F = {f_1, ..., f_n} be a basis of V. There exists a unique inner product (·,·) on V such that F is an orthonormal basis.

Proof. Let (·,·)' be an inner product on V with orthonormal basis E = {e_1, ..., e_n}. By Problem 1.4, S ∈ End V defined by Sf_i = e_i is invertible. Set T := S*S, which is pd with respect to (·,·)' (Problem 5.7). So ⟨u, v⟩ := (Tu, v)' is an inner product by Theorem 1.6.1, and f_1, ..., f_n are orthonormal with respect to ⟨·,·⟩, since ⟨f_i, f_j⟩ = (S*Sf_i, f_j)' = (Sf_i, Sf_j)' = (e_i, e_j)' = δ_{ij}. It is straightforward to show the uniqueness.

Problems
1. Let E = {e_1, ..., e_n} be a basis of V. For any u = Σ a_i e_i and v = Σ b_i e_i, show that (u, v) := Σ_{i=1}^n a_i \overline{b_i} is the unique inner product on V such that E is an orthonormal basis.
2. Find an example of inner products (·,·), ⟨·,·⟩ on V and T ∈ End V such that T is pd with respect to one but not the other.
3. Let V be an inner product space. Show that for each A ∈ C^{n×n} there are u_1, ..., u_n, v_1, ..., v_n ∈ C^n such that a_{ij} = (u_i, v_j), 1 ≤ i, j ≤ n.
4. Suppose that (·,·) and ⟨·,·⟩ are inner products on V and T ∈ End V, and let T^{(*)} and T^{⟨*⟩} be the corresponding adjoints. Show that T^{(*)} and T^{⟨*⟩} are similar.

Solutions to Problems
1. Uniqueness follows from Theorem 1.6.3.
4. By Theorem 1.6.2 there is an S ∈ End V, pd with respect to both inner products, such that ⟨u, v⟩ = (Su, v) for all u, v ∈ V, and thus ⟨S^{-1}u, v⟩ = (u, v). So

⟨u, T^{⟨*⟩}v⟩ = ⟨Tu, v⟩ = (STu, v) = (u, T^{(*)}S^{(*)}v) = ⟨S^{-1}u, T^{(*)}S^{(*)}v⟩.

Since S^{(*)} = S^{⟨*⟩} = S, say, and (A^{-1})* = (A*)^{-1} for any invertible A ∈ End V ((A^{-1})*A* = (AA^{-1})* = I by Problem 4.2), we get ⟨u, T^{⟨*⟩}v⟩ = ⟨u, (S^{-1})^{⟨*⟩} T^{(*)} S v⟩ = ⟨u, S^{-1} T^{(*)} S v⟩. Hence T^{⟨*⟩} = S^{-1} T^{(*)} S, so the two adjoints are similar.

1.7 Invariant subspaces

Let W be a subspace of V. For any T ∈ Hom(V, U), the restriction of T to W, denoted by T|_W, is the unique T_1 ∈ Hom(W, U) such that T_1(w) = T(w) for all w ∈ W.

Let W be a subspace of V and T ∈ End V. Then W is said to be invariant under T if Tw ∈ W for all w ∈ W, i.e., T(W) ⊆ W. The trivial invariant subspaces are 0 and V. Other invariant subspaces are called nontrivial or proper invariant subspaces. If W is invariant under T, then the restriction T|_W ∈ Hom(W, V) of T induces an operator in End W (we use the same notation T|_W).
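Theorem 1.6.1's construction can be probed numerically: pick T = B*B with B invertible (pd by Problems 5.3 and 5.7) and check the inner-product axioms for ⟨u, v⟩ = (Tu, v) on sample vectors. A minimal sketch; the matrices below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
T = B.conj().T @ B               # Hermitian, and pd for invertible B

std = lambda x, y: np.vdot(y, x)   # standard inner product (x, y)
ip = lambda x, y: std(T @ x, y)    # <u, v> := (Tu, v)

u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

sym_err = abs(ip(u, v) - np.conj(ip(v, u)))  # conjugate symmetry
pos = ip(u, u)                               # should be real and > 0
```

Conjugate symmetry holds because T is Hermitian, and positivity because ⟨u, u⟩ = (B*Bu, u) = ‖Bu‖², exactly the mechanism of Problem 5.3.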

Theorem 1.7.1 Let V be an inner product space over C and T ∈ End V. If W is an invariant subspace under both T and T*, then
(a) (T|_W)* = T*|_W;
(b) if T is normal, Hermitian, psd, pd or unitary, so is T|_W.

Proof. (a) For any x, y ∈ W, T|_W x = Tx and T*|_W y = T*y. By the assumption,

(T|_W x, y)_W = (Tx, y)_V = (x, T*y)_V = (x, T*|_W y)_W.

So (T|_W)* = T*|_W. (b) Follows from Problem 7.3.

Problems
1. Prove that if W ⊆ V is a subspace and T_1 ∈ Hom(W, U), then there is T ∈ Hom(V, U) such that T|_W = T_1. Is T unique?
2. Show that the restriction to W ⊆ V of the sum of S, T ∈ Hom(V, U) is the sum of their restrictions. What about the restriction of a scalar multiple of T to W?
3. Show that if S, T ∈ End V and the subspace W is invariant under S and T, then (ST)|_W = (S|_W)(T|_W).
4. Let T ∈ End V, S ∈ End W and H ∈ Hom(V, W) satisfy SH = HT. Prove that Im H is invariant under S and Ker H is invariant under T.

Solutions to Problems
1. Let {e_1, ..., e_m} be a basis of W and extend it to a basis E = {e_1, ..., e_m, e_{m+1}, ..., e_n} of V. Define T ∈ Hom(V, U) by Te_i = T_1 e_i, i = 1, ..., m, and Te_j = w_j for j = m+1, ..., n, where w_{m+1}, ..., w_n ∈ U are arbitrary. Clearly T|_W = T_1, but T is not unique.
2. (S + T)|_W v = (S + T)v = Sv + Tv = S|_W v + T|_W v for all v ∈ W, so (S + T)|_W = S|_W + T|_W. Similarly (αT)|_W = α(T|_W).
3. (S|_W)(T|_W)v = S|_W(Tv) = STv = (ST)|_W v for all v ∈ W, since Tv ∈ W.
4. For any v ∈ Ker H, H(Tv) = SHv = 0, i.e., Tv ∈ Ker H, so that Ker H is invariant under T. For any Hv ∈ Im H (v ∈ V), S(Hv) = H(Tv) ∈ Im H.
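Theorem 1.7.1(b) can be illustrated with a block example: if A is Hermitian and W = span{e_1, e_2} is invariant under A (and automatically under A* = A), then the matrix of A|_W in an orthonormal basis of W is again Hermitian. A hypothetical sketch; the matrix is made up so that its (3,1) and (3,2) entries, and by Hermiticity the (1,3) and (2,3) entries, vanish:

```python
import numpy as np

# Hermitian A on C^3 leaving W = span{e1, e2} invariant.
A = np.array([[2.,      1.+1.j, 0.],
              [1.-1.j,  3.,     0.],
              [0.,      0.,     5.]])

E_W = np.eye(3)[:, :2]            # orthonormal basis of W, as columns
A_W = E_W.conj().T @ A @ E_W      # matrix of A|_W in that basis
```

The third row of A @ E_W vanishing is exactly the invariance T(W) ⊆ W, and A_W inherits A's Hermiticity, matching part (b).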

1.8 Projections and direct sums

The vector space V is said to be a direct sum of the subspaces W_1, ..., W_m if each v ∈ V can be uniquely expressed as v = Σ_{i=1}^m w_i, where w_i ∈ W_i, i = 1, ..., m; this is denoted by V = W_1 ⊕ ··· ⊕ W_m. In other words, V = W_1 + ··· + W_m, and if w_1 + ··· + w_m = w'_1 + ··· + w'_m, where w_i, w'_i ∈ W_i, then w_i = w'_i for all i.

Theorem 1.8.1 V = W_1 ⊕ ··· ⊕ W_m if and only if V = W_1 + ··· + W_m and W_i ∩ (W_1 + ··· + Ŵ_i + ··· + W_m) = 0 for all i = 1, ..., m. Here Ŵ_i denotes deletion. In particular, V = W_1 ⊕ W_2 if and only if W_1 + W_2 = V and W_1 ∩ W_2 = 0.

Proof. Problem 7.

In addition, if V is an inner product space and W_i ⊥ W_j whenever i ≠ j, i.e., w_i ⊥ w_j whenever w_i ∈ W_i and w_j ∈ W_j, we call V an orthogonal sum of W_1, ..., W_m, denoted by V = W_1 ∔ ··· ∔ W_m.

Suppose V = W_1 ⊕ ··· ⊕ W_m. Each P_i ∈ End V defined by P_i v = w_i, i = 1, ..., m, satisfies P_i² = P_i and Im P_i = W_i. In general, P ∈ End V is called a projection if P² = P. Notice that P is a projection if and only if I_V − P is a projection; in this case Im P = Ker(I_V − P).

Theorem 1.8.2 Let V be a vector space and let P ∈ End V be a projection. Then the eigenvalues of P are either 0 or 1, and rank P = tr P.

Proof. Let rank P = k. By Theorem 1.1.2, there are v_1, ..., v_n ∈ V such that {Pv_1, ..., Pv_k} is a basis of Im P and {v_{k+1}, ..., v_n} is a basis of Ker P. From P² = P, E = {Pv_1, ..., Pv_k, v_{k+1}, ..., v_n} is linearly independent (check) and thus a basis of V. So [P]^E_E = diag(1, ..., 1, 0, ..., 0), in which there are k ones. So we have the desired results.

The notions of direct sum and projection are closely related.

Theorem 1.8.3 Let V be a vector space and let P_1, ..., P_m be projections such that P_1 + ··· + P_m = I_V. Then V = Im P_1 ⊕ ··· ⊕ Im P_m. Conversely, if V = W_1 ⊕ ··· ⊕ W_m, then there are unique projections P_1, ..., P_m such that P_1 + ··· + P_m = I_V and Im P_i = W_i, i = 1, ..., m.

Proof. From P_1 + ··· + P_m = I_V, each v ∈ V can be written as v = P_1 v + ··· + P_m v, so that V = Im P_1 + ··· + Im P_m. By Theorem 1.8.2,

dim V = tr I_V = Σ_{i=1}^m tr P_i = Σ_{i=1}^m rank P_i = Σ_{i=1}^m dim Im P_i.

So V = Im P_1 ⊕ ··· ⊕ Im P_m (Problem 8.1). Conversely, if V = W_1 ⊕ ··· ⊕ W_m, then for any v ∈ V there is a unique factorization v = w_1 + ··· + w_m, w_i ∈ W_i, i = 1, ..., m. Define P_i ∈ End V by P_i v = w_i. It is easy to see that each P_i is a projection, Im P_i = W_i, and P_1 + ··· + P_m = I_V. For the uniqueness, if there are projections Q_1, ..., Q_m such that Q_1 + ··· + Q_m = I_V and Im Q_i = W_i for all i, then for each v ∈ V the unique factorization gives Q_i v = P_i v, so that P_i = Q_i for all i.
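The correspondence in Theorem 1.8.3 can be sketched for C^3 = W_1 ⊕ W_2 with hypothetical (made-up) bases: assembling the basis vectors into a matrix S, the projection onto W_1 along W_2 is S diag(1,1,0) S^{-1}, and rank P = tr P as in Theorem 1.8.2:

```python
import numpy as np

# Columns 1,2 span W1 (2-dimensional); column 3 spans W2 (1-dimensional).
# Together the columns form a basis of C^3, so S is invertible.
S = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 1.]])

P1 = S @ np.diag([1., 1., 0.]) @ np.linalg.inv(S)  # projection onto W1 along W2
P2 = np.eye(3) - P1                                # projection onto W2 along W1
```

Note these are (generally oblique) projections, P² = P, not orthogonal ones; orthogonality is the subject of Theorem 1.8.5 below in the section.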

Corollary 1.8.4 If P ∈ End V is a projection, then V = Im P ⊕ Ker P. Conversely, if V = W ⊕ W', then there is a projection P ∈ End V such that Im P = W and Ker P = W'.

We call P ∈ End V an orthogonal projection if P² = P = P*. Notice that P is an orthogonal projection if and only if I_V − P is an orthogonal projection. The notions of orthogonal sum and orthogonal projection are closely related.

Theorem 1.8.5 Let V be an inner product space and let P_1, ..., P_m be orthogonal projections such that P_1 + ··· + P_m = I_V. Then V = Im P_1 ∔ ··· ∔ Im P_m. Conversely, if V = W_1 ∔ ··· ∔ W_m, then there are unique orthogonal projections P_1, ..., P_m such that P_1 + ··· + P_m = I_V and Im P_i = W_i, i = 1, ..., m.

Proof. From Theorem 1.8.3, if P_1, ..., P_m are orthogonal projections such that P_1 + ··· + P_m = I_V, then V = Im P_1 ⊕ ··· ⊕ Im P_m. It suffices to show that the sum is indeed orthogonal. From the proof of Theorem 1.8.3, P_i v = v_i ∈ Im P_i for all i, where v ∈ V is expressed as v = Σ_{i=1}^m v_i. Thus for all u, v ∈ V, if i ≠ j, then

(P_i u, P_j v) = (P_i u, v_j) = (u, P_i* v_j) = (u, P_i v_j) = (u, 0) = 0,

since the component of v_j ∈ W_j in W_i is zero. Conversely, if V = W_1 ∔ ··· ∔ W_m, then from Theorem 1.8.3 there are unique projections P_1, ..., P_m such that P_1 + ··· + P_m = I_V and Im P_i = W_i for all i. It remains to show that each projection P_i is an orthogonal projection, i.e., P_i = P_i*. For u, v ∈ V, write u = Σ_{i=1}^m u_i, v = Σ_{i=1}^m v_i, where u_i, v_i ∈ W_i for all i. Then for all i,

(P_i u, v) = (u_i, v) = (u_i, v_i) = (Σ_{j=1}^m u_j, v_i) = (u, P_i v),

using the orthogonality of the sum to kill the cross terms.

Corollary 1.8.6 Let V be an inner product space. If P ∈ End V is an orthogonal projection, then V = Im P ∔ Ker P. Conversely, if V = W ∔ W', then there is an orthogonal projection P ∈ End V such that Im P = W and Ker P = W'.

Proof. Apply Theorem 1.8.5 to P and I_V − P.

Problems
1. Show that if V = W_1 + ··· + W_m, then V = W_1 ⊕ ··· ⊕ W_m if and only if dim V = dim W_1 + ··· + dim W_m.
2. Let P_1, ..., P_m ∈ End V be projections on V with P_1 + ··· + P_m = I_V. Show that P_i P_j = 0 whenever i ≠ j.
3. Let P_1, ..., P_m ∈ End V be projections on V. Show that P_1 + ··· + P_m is a projection if and only if P_i P_j = 0 whenever i ≠ j.

4. Show that if P ∈ End V is an orthogonal projection, then ‖P v‖ ≤ ‖v‖ for all v ∈ V.
5. Prove that if P_1, ..., P_m ∈ End V are projections on V with P_1 + ⋯ + P_m = I_V, then there is an inner product on V so that P_1, ..., P_m are orthogonal projections.
6. Show that if P ∈ End V is an orthogonal projection on the inner product space V and if W is a P-invariant subspace of V, then P|_W ∈ End W is an orthogonal projection.
7. Prove the theorem above.

Solutions to Problems

2. By Theorem 1.8.3, V = Im P_1 ⊕ ⋯ ⊕ Im P_m, so that for any v ∈ V, P_i P_j v = P_i v_j = 0 if i ≠ j.
3. P_1 + ⋯ + P_m being a projection means
Σ_i P_i + Σ_{i≠j} P_i P_j = Σ_i P_i² + Σ_{i≠j} P_i P_j = (P_1 + ⋯ + P_m)² = P_1 + ⋯ + P_m.
So if P_i P_j = 0 for i ≠ j, then P_1 + ⋯ + P_m is a projection. Conversely, if P_1 + ⋯ + P_m is a projection, so is I_V − (P_1 + ⋯ + P_m).
(Roy) Suppose that P := P_1 + ⋯ + P_m is a projection. Then the restriction of P to its image P(V) is the identity operator, and P_i|_{P(V)} is still a projection for each i = 1, ..., m. According to Theorem 1.8.3, P(V) = Σ_{i=1}^m Im (P_i|_{P(V)}) = ⊕_{i=1}^m Im P_i, which implies that P_i P_j = 0 whenever i ≠ j.
4. I − P is also an orthogonal projection. Write v = Pv + (I − P)v, so that ‖v‖² = ‖Pv‖² + ‖(I − P)v‖² ≥ ‖Pv‖².
5. By Theorem 1.8.3, V = Im P_1 ⊕ ⋯ ⊕ Im P_m. Let {e_{i1}, ..., e_{in_i}} be a basis of Im P_i for all i. Then there is an inner product on V making the basis {e_{11}, ..., e_{1n_1}, ..., e_{m1}, ..., e_{mn_m}} orthonormal. Thus P_1, ..., P_m are orthogonal projections.
6. Notice that (P|_W)² = P²|_W = P|_W since P² = P, and (P|_W)* = P*|_W = P|_W since P = P*.
7.
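As a quick sanity check on Problems 2 and 4 above, here is a small Python sketch (my own illustration) using the diagonal orthogonal projections P1 = diag(1, 1, 0) and P2 = diag(0, 0, 1) on R^3 with the standard inner product:

```python
# Sketch (my illustration): orthogonal projections P1 = diag(1,1,0) and
# P2 = diag(0,0,1) on R^3; P1 + P2 = I and Im P1 is orthogonal to Im P2.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply(A, v):
    return [sum(A[i][k] * v[k] for k in range(len(v))) for i in range(len(A))]

P1 = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
P2 = [[0, 0, 0], [0, 0, 0], [0, 0, 1]]

assert matmul(P1, P1) == P1                          # P1^2 = P1
assert matmul(P1, P2) == [[0] * 3 for _ in range(3)]  # Problem 2: P1 P2 = 0

v = [3.0, 4.0, 12.0]
norm = lambda u: sum(x * x for x in u) ** 0.5
assert norm(apply(P1, v)) <= norm(v)                 # Problem 4: |P v| <= |v|
```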

1.9 Dual spaces and Cartesian products

The space V* = Hom(V, C) is called the dual space of V. Elements of V* are called (linear) functionals.

Theorem Let E = {e_1, ..., e_n} be a basis of V. Define e_1*, ..., e_n* ∈ V* by e_i*(e_j) = δ_ij, i, j = 1, ..., n. Then E* = {e_1*, ..., e_n*} is a basis of V*.

Proof. Notice that each f ∈ V* can be written as f = Σ_{i=1}^n f(e_i) e_i*, since both sides take the same values on e_1, ..., e_n. Now if Σ_{i=1}^n a_i e_i* = 0, then 0 = Σ_{i=1}^n a_i e_i*(e_j) = a_j, j = 1, ..., n. So e_1*, ..., e_n* are linearly independent.

The basis E* is called the basis of V* dual to E, or simply the dual basis. If T ∈ Hom(V, W), then its dual map (or transpose) T* ∈ Hom(W*, V*) is defined by T*(φ) = φ ∘ T, φ ∈ W*. The functional T*(φ) is in V*, and is called the pullback of φ along T. Immediately the following identity holds for all φ ∈ W* and v ∈ V:
⟨T*(φ), v⟩ = ⟨φ, T(v)⟩,
where the bracket ⟨f, v⟩ := f(v) on the left is the duality pairing of V with its dual space, and that on the right is the duality pairing of W with its dual. This identity characterizes the dual map T*, and is formally similar to the definition of the adjoint (if V and W are inner product spaces). We remark that we use the same notation for the adjoint and the dual map of T ∈ Hom(V, W). But it should be clear from the context, since the adjoint requires inner products but the dual map does not.

Theorem The map * : Hom(V, W) → Hom(W*, V*) defined by *(T) = T* is an isomorphism.

Proof. Problem 10.

When V has an inner product, linear functionals have a nice representation.

Theorem (Riesz) Let V be an inner product space. For each f ∈ V*, there is a unique u ∈ V such that f(v) = (v, u) for all v ∈ V. Hence the map ξ : V → V* defined by ξ(u) = (·, u) is an isomorphism.

Proof. Let E = {e_1, ..., e_n} be an orthonormal basis of V. Then each v ∈ V can be written as v = Σ_{i=1}^n (v, e_i) e_i, so
f(v) = f(Σ_{i=1}^n (v, e_i) e_i) = Σ_{i=1}^n f(e_i)(v, e_i) = (v, Σ_{i=1}^n \overline{f(e_i)} e_i)

so that u := Σ_{i=1}^n \overline{f(e_i)} e_i works. Uniqueness follows from the positive definiteness of the inner product: if (v, u′) = (v, u) for all v ∈ V, then (v, u − u′) = 0 for all v. Pick v = u − u′ to get u = u′. The map ξ : V → V* defined by ξ(u) = (·, u) is clearly a linear map, bijective by the previous statement.

Let V_1, ..., V_m be m vector spaces over C. Then
×_{i=1}^m V_i = V_1 × ⋯ × V_m = {(v_1, ..., v_m) : v_i ∈ V_i, i = 1, ..., m}
is called the Cartesian product of V_1, ..., V_m. It is a vector space under the natural addition and scalar multiplication.

Remark: If V is not finite-dimensional but has a basis {e_α : α ∈ A} (the axiom of choice is needed), where A is the (infinite) index set, then the same construction as in the finite-dimensional case (Theorem 1.9.1) yields linearly independent elements {e_α* : α ∈ A} in V*, but they will not form a basis.

Example (for Alex's question): The space R^∞, whose elements are those sequences of real numbers which have only finitely many nonzero entries, has a basis {e_i : i = 1, 2, ...} where e_i = (0, ..., 0, 1, 0, ...), in which the only nonzero entry 1 is at the ith position. The dual space of R^∞ is R^{ℵ₀}, the space of all sequences of real numbers; such a sequence (a_n) is applied to an element (x_n) ∈ R^∞ to give Σ_n a_n x_n, which is a finite sum because there are only finitely many nonzero x_n. The dimension of R^∞ is countably infinite, but R^{ℵ₀} does not have a countable basis.

Problems

1. Show that the Riesz theorem remains true if the inner product is replaced by a nondegenerate bilinear form B(·, ·) on a real vector space. Nondegeneracy means that B(u, v) = 0 for all v ∈ V implies that u = 0.
2. Let {e_1, ..., e_n} be an orthonormal basis of the inner product space V. Show that {f_1, ..., f_n} is the dual basis if and only if f_j(v) = (v, e_j) for all v ∈ V and j = 1, ..., n.
3. Let {f_1, ..., f_n} be a basis of V*. Show that if v ∈ V is such that f_j(v) = 0 for all j = 1, ..., n, then v = 0.
4. Let {f_1, ..., f_n} be a basis of V*.
Prove that there is a basis E = {e_1, ..., e_n} of V such that f_i(e_j) = δ_ij, i, j = 1, ..., n.
5. Let E = {e_1, ..., e_n} and F = {f_1, ..., f_n} be two bases of V and let E* and F* be their dual bases. If [I_V]_E^F = P and [I_{V*}]_{E*}^{F*} = Q, prove that Q = (P^{−1})^T.
6. Show that if S ∈ Hom(W, U) and T ∈ Hom(V, W), then (ST)* = T*S*.

7. Show that ξ : V → V** defined by ξ(v)(φ) = φ(v), for all v ∈ V and φ ∈ V*, is a (canonical) isomorphism.
8. Suppose E and F are bases for V and W and let E* and F* be their dual bases. For any T ∈ Hom(V, W), what is the relation between [T]_E^F and [T*]_{F*}^{E*}?
9. Show that dim(V_1 × ⋯ × V_m) = dim V_1 + ⋯ + dim V_m.
10. Prove that * : Hom(V, W) → Hom(W*, V*), T ↦ T*, is an isomorphism.

Solutions to Problems

1. Let B(·, ·) be a nondegenerate bilinear form on a real vector space V. Define φ : V → V* by φ(v) = B(v, ·). Clearly φ is linear, and we need to show that φ is surjective. Since dim V = dim V*, it suffices to show that φ is injective (and thus bijective). Let v ∈ Ker φ, i.e., B(v, w) = 0 for all w ∈ V. By the nondegeneracy of B(·, ·), v = 0.
2. Notice that (e_i, e_j) = δ_ij. If F = {f_1, ..., f_n} is the dual basis, then for each v = Σ_{i=1}^n (v, e_i) e_i,
f_j(v) = f_j(Σ_{i=1}^n (v, e_i) e_i) = Σ_{i=1}^n (v, e_i) f_j(e_i) = (v, e_j).
On the other hand, if f_j(v) = (v, e_j) for all v, then f_j(e_i) = (e_i, e_j) = δ_ij, i.e., F is the dual basis.
3. The assumption implies that f(v) = 0 for all f ∈ V*. If v ≠ 0, extend it to a basis E = {v, v_2, ..., v_n} of V and define g ∈ V* by g(v) = 1 and g(v_i) = 0 for all i = 2, ..., n, a contradiction.
4. Introduce an inner product on V so that by the Riesz theorem we determine u_1, ..., u_n ∈ V via f_j(v) = (v, u_j). Then there is a positive definite T ∈ End V such that (T u_i, u_j) = δ_ij. Set e_i = T u_i for all i. Uniqueness is clear. Another approach: since {f_1, ..., f_n} is a basis, each f_i ≠ 0, so that dim Ker f_i = n − 1. So pick a nonzero v_i ∈ ∩_{j≠i} Ker f_j (why is this possible?) such that f_i(v_i) = 1. Then f_j(v_i) = 0 for all j ≠ i.
6. For any φ ∈ U*, (T*S*)φ = T*(φ ∘ S) = (φ ∘ S) ∘ T = φ ∘ (ST) = (ST)*φ. So (ST)* = T*S*.
7. Similar to Problem 10. Notice that ξ(e_i) = e_i**, since ξ(e_i)(e_j*) = e_j*(e_i) = δ_ij for all i, j. It is canonical since no choice of basis is involved in the definition of ξ.
8. Let A = [T]_E^F, i.e., T e_j = Σ_i a_ij f_i. Write v = Σ_i α_i e_i, so that
(T* f_j*) v = f_j*(T v) = f_j*(Σ_i α_i T e_i) = f_j*(Σ_{i,k} α_i a_ki f_k) = Σ_{i,k} α_i a_ki f_j*(f_k) = Σ_i α_i a_ji.

On the other hand,
(Σ_i a_ji e_i*)(v) = Σ_i a_ji e_i*(Σ_k α_k e_k) = Σ_i α_i a_ji.
So T* f_j* = Σ_i a_ji e_i*, i.e., [T*]_{F*}^{E*} = A^T, i.e., ([T]_E^F)^T = [T*]_{F*}^{E*}. This explains why T* is also called the transpose of T.
(Roy) Suppose E = {e_1, ..., e_n} and F = {f_1, ..., f_m}. Let [T]_E^F = (a_ij) ∈ C^{m×n} and [T*]_{F*}^{E*} = (b_pq) ∈ C^{n×m}. By definition, T e_j = Σ_{i=1}^m a_ij f_i and T* f_q* = Σ_{p=1}^n b_pq e_p*. Since e_j*(e_i) = δ_ij and f_j*(f_i) = δ_ij, the definition (T* f_j*) e_i = f_j*(T e_i) implies that b_ij = a_ji. Thus [T*]_{F*}^{E*} = ([T]_E^F)^T.
9. If E_i = {e_{i1}, ..., e_{in_i}} is a basis of V_i, i = 1, ..., m, then
E = {(e_{11}, 0, ..., 0), ..., (e_{1n_1}, 0, ..., 0), ..., (0, ..., 0, e_{m1}), ..., (0, ..., 0, e_{mn_m})}
is a basis of V_1 × ⋯ × V_m (check!).
10. * is linear since for any scalars α, β,
(αS + βT)* φ = φ ∘ (αS + βT) = α φ ∘ S + β φ ∘ T = α S* φ + β T* φ = (αS* + βT*) φ.
It is injective since if T* = 0, then φ ∘ T = T* φ = 0 for all φ ∈ W*; by Problem 3, T = 0. Since dim Hom(V, W) = dim Hom(W*, V*), the map * is an isomorphism.

1.10 Notations

Denote by S_m the symmetric group on {1, ..., m}, which is the group of all bijections on {1, ..., m}. Each σ ∈ S_m is represented by
σ = ( 1,    ...,  m
      σ(1), ..., σ(m) ).
The sign function ε on S_m is
ε(σ) = 1 if σ is even,  ε(σ) = −1 if σ is odd.
Let α, β, γ denote finite sequences whose components are natural numbers:
Γ(n_1, ..., n_m) := {α = (α(1), ..., α(m)) : 1 ≤ α(i) ≤ n_i, i = 1, ..., m}
Γ_{m,n} := {α = (α(1), ..., α(m)) : 1 ≤ α(i) ≤ n, i = 1, ..., m}
G_{m,n} := {α ∈ Γ_{m,n} : α(1) ≤ ⋯ ≤ α(m)}
D_{m,n} := {α ∈ Γ_{m,n} : α(i) ≠ α(j) whenever i ≠ j}
Q_{m,n} := {α ∈ Γ_{m,n} : α(1) < ⋯ < α(m)}.
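These index sets are small enough to enumerate by brute force, which also verifies the counting formulas of Problem 4 below. The Python sketch below is my own illustration, with hypothetical helper names Gamma, G, D, Q:

```python
# Sketch (my illustration): enumerating the index sets of Section 1.10.
from itertools import product

def Gamma(m, n):   # all sequences alpha with 1 <= alpha(i) <= n
    return [a for a in product(range(1, n + 1), repeat=m)]

def G(m, n):       # nondecreasing sequences
    return [a for a in Gamma(m, n) if all(a[i] <= a[i + 1] for i in range(m - 1))]

def D(m, n):       # sequences with pairwise distinct components
    return [a for a in Gamma(m, n) if len(set(a)) == m]

def Q(m, n):       # strictly increasing sequences
    return [a for a in Gamma(m, n) if all(a[i] < a[i + 1] for i in range(m - 1))]

m, n = 2, 4
assert len(Gamma(m, n)) == n ** m          # |Gamma_{m,n}| = n^m = 16
assert len(Q(m, n)) == 6                   # binom(4, 2)
assert len(D(m, n)) == len(Q(m, n)) * 2    # |D_{m,n}| = binom(n, m) m!
assert len(G(m, n)) == 10                  # binom(n+m-1, m) = binom(5, 2)
```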

For α ∈ Q_{m,n} and σ ∈ S_m, define
ασ := (α(σ(1)), ..., α(σ(m))).
Then ασ ∈ D_{m,n} and
D_{m,n} = {ασ : α ∈ Q_{m,n}, σ ∈ S_m} ≅ Q_{m,n} × S_m. (1.1)
Suppose n > m. For each ω ∈ Q_{m,n}, denote by ω′ the complementary sequence of ω, i.e., ω′ ∈ Q_{n−m,n} and the components of ω and ω′ together are 1, ..., n, and
σ = ( 1,    ...,  m,    m+1,    ...,  n
      ω(1), ..., ω(m), ω′(1), ..., ω′(n−m) ) ∈ S_n.
It is known that
ε(σ) = (−1)^{s(ω)+m(m+1)/2}, (1.2)
where s(ω) := ω(1) + ⋯ + ω(m). Similarly for ω ∈ Q_{m,n}, θ ∈ S_m and π ∈ S_{n−m}, the permutation
σ = ( 1,     ...,  m,     m+1,     ...,  n
      ωθ(1), ..., ωθ(m), ω′π(1), ..., ω′π(n−m) ) ∈ S_n
satisfies
ε(σ) = ε(θ)ε(π)(−1)^{s(ω)+m(m+1)/2}. (1.3)
Moreover
S_n = { ( 1, ..., m, m+1, ..., n ↦ ωθ(1), ..., ωθ(m), ω′π(1), ..., ω′π(n−m) ) : ω ∈ Q_{m,n}, θ ∈ S_m, π ∈ S_{n−m} }. (1.4)
Let A ∈ C^{n×k}. For any 1 ≤ m ≤ n, 1 ≤ l ≤ k, α ∈ Q_{m,n} and β ∈ Q_{l,k}, let A[α|β] denote the submatrix of A obtained by taking the rows α(1), ..., α(m) of A and the columns β(1), ..., β(l). So the (i, j) entry of A[α|β] is a_{α(i)β(j)}. The submatrix of A complementary to A[α|β] is denoted by A(α|β) := A[α′|β′] ∈ C^{(n−m)×(k−l)}. Similarly we define A(α|β] := A[α′|β] ∈ C^{(n−m)×l} and A[α|β) := A[α|β′] ∈ C^{m×(k−l)}.
Recall that if A ∈ C^{n×n}, the determinant function is given by
det A = Σ_{σ∈S_n} ε(σ) Π_{i=1}^n a_{iσ(i)}. (1.5)
It is easy to deduce that, for any fixed π ∈ S_n,
det A = Σ_{σ∈S_n} ε(π)ε(σ) Π_{i=1}^n a_{π(i),σ(i)}. (1.6)
Let A ∈ C^{n×k}. For any 1 ≤ m ≤ min{n, k}, α ∈ Q_{m,n} and β ∈ Q_{m,k},
det A[α|β] = Σ_{σ∈S_m} ε(σ) Π_{i=1}^m a_{α(i),βσ(i)}. (1.7)
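Formulas (1.5) and (1.7) translate directly into code. The following brute-force Python sketch is my own illustration (exponential in the size, for small checks only):

```python
# Sketch (my illustration): Leibniz-type formulas (1.5)/(1.7) for
# det A and det A[alpha|beta], via explicit sums over permutations.
from itertools import permutations

def sign(p):
    # sign of the permutation p of (0, ..., m-1), by counting inversions
    m = len(p)
    inv = sum(1 for i in range(m) for j in range(i + 1, m) if p[i] > p[j])
    return -1 if inv % 2 else 1

def prod(xs):
    r = 1
    for x in xs:
        r *= x
    return r

def det(A):
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def det_sub(A, alpha, beta):
    # det A[alpha|beta], with alpha, beta 1-based as in the notes
    return det([[A[a - 1][b - 1] for b in beta] for a in alpha])

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
assert det(A) == -3
assert det_sub(A, (1, 2), (1, 3)) == 1 * 6 - 3 * 4   # = -6
```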

For any θ ∈ S_m, we have
det A[αθ|β] = det A[α|βθ] = ε(θ) det A[α|β]. (1.8)
When α ∈ Γ_{m,n} and β ∈ Γ_{m,k}, A[α|β] ∈ C^{m×m} is defined in the same way: its (i, j) entry is a_{α(i)β(j)}. However if α ∈ Γ_{m,n} but α ∉ D_{m,n}, then A[α|β] has two identical rows, so that
det A[α|β] = 0, α ∈ Γ_{m,n} \ D_{m,n}. (1.9)

Theorem (Cauchy-Binet) Let A ∈ C^{r×n}, B ∈ C^{n×l} and C = AB. Then for any 1 ≤ m ≤ min{r, n, l}, α ∈ Q_{m,r} and β ∈ Q_{m,l},
det C[α|β] = Σ_{ω∈Q_{m,n}} det A[α|ω] det B[ω|β].

Proof.
det C[α|β] = Σ_{σ∈S_m} ε(σ) Π_{i=1}^m c_{α(i),βσ(i)}
= Σ_{σ∈S_m} ε(σ) Π_{i=1}^m Σ_{j=1}^n a_{α(i),j} b_{j,βσ(i)}
= Σ_{σ∈S_m} ε(σ) Σ_{γ∈Γ_{m,n}} Π_{i=1}^m a_{α(i),γ(i)} b_{γ(i),βσ(i)}
= Σ_{γ∈Γ_{m,n}} Π_{i=1}^m a_{α(i),γ(i)} Σ_{σ∈S_m} ε(σ) Π_{i=1}^m b_{γ(i),βσ(i)}
= Σ_{γ∈Γ_{m,n}} Π_{i=1}^m a_{α(i),γ(i)} det B[γ|β]
= Σ_{γ∈D_{m,n}} Π_{i=1}^m a_{α(i),γ(i)} det B[γ|β]   (by (1.9))
= Σ_{ω∈Q_{m,n}} Σ_{σ∈S_m} Π_{i=1}^m a_{α(i),ωσ(i)} det B[ωσ|β]   (by (1.1))
= Σ_{ω∈Q_{m,n}} ( Σ_{σ∈S_m} ε(σ) Π_{i=1}^m a_{α(i),ωσ(i)} ) det B[ω|β]   (by (1.8))
= Σ_{ω∈Q_{m,n}} det A[α|ω] det B[ω|β].

Theorem (Laplace) Let A ∈ C^{n×n}, 1 ≤ m ≤ n and α ∈ Q_{m,n}. Then
det A = Σ_{ω∈Q_{m,n}} (−1)^{s(α)+s(ω)} det A[α|ω] det A(α|ω).

Proof. The right-hand side of the formula is
(−1)^{s(α)} Σ_{ω∈Q_{m,n}} (−1)^{s(ω)} det A[α|ω] det A[α′|ω′]
= (−1)^{s(α)} Σ_{ω∈Q_{m,n}} Σ_{θ∈S_m} Σ_{π∈S_{n−m}} ε(θ)ε(π)(−1)^{s(ω)} Π_{i=1}^m a_{α(i)ωθ(i)} Π_{j=1}^{n−m} a_{α′(j)ω′π(j)}
= (−1)^{s(α)} (−1)^{m(m+1)/2} Σ_{σ∈S_n} ε(σ) Π_{i=1}^m a_{α(i)σ(i)} Π_{i=m+1}^n a_{α′(i−m)σ(i)}   (by (1.3) and (1.4))
= Σ_{σ∈S_n} ε(τ)ε(σ) Π_{i=1}^n a_{τ(i)σ(i)},
where τ ∈ S_n is the permutation (1, ..., m, m+1, ..., n ↦ α(1), ..., α(m), α′(1), ..., α′(n−m)), whose sign is ε(τ) = (−1)^{s(α)+m(m+1)/2} by (1.2). By (1.6) the last sum is det A.

Theorem Let U ∈ U_n(C). Then for α, β ∈ Q_{m,n} with 1 ≤ m ≤ n,
det U \overline{det U[α|β]} = (−1)^{s(α)+s(β)} det U(α|β). (1.10)
In particular
|det U[α|β]| = |det U(α|β)|. (1.11)

Proof. Because det I[β|ω] = δ_{βω} and U*[β|γ] = \overline{U[γ|β]}^T, applying the Cauchy-Binet theorem to I_n = U*U yields
δ_{βω} = Σ_{γ∈Q_{m,n}} \overline{det U[γ|β]} det U[γ|ω]. (1.12)
Using the Laplace theorem, we have
δ_{αγ} det U = Σ_{ω∈Q_{m,n}} (−1)^{s(α)+s(ω)} det U[γ|ω] det U(α|ω).
Multiplying both sides by \overline{det U[γ|β]} and summing over γ ∈ Q_{m,n}, the left side becomes
Σ_{γ∈Q_{m,n}} δ_{αγ} det U \overline{det U[γ|β]} = det U \overline{det U[α|β]}
and the right side becomes
Σ_{γ∈Q_{m,n}} Σ_{ω∈Q_{m,n}} \overline{det U[γ|β]} det U[γ|ω] (−1)^{s(α)+s(ω)} det U(α|ω)
= Σ_{ω∈Q_{m,n}} δ_{βω} (−1)^{s(α)+s(ω)} det U(α|ω)   (by (1.12))
= (−1)^{s(α)+s(β)} det U(α|β).
This proves (1.10); since |det U| = 1, taking moduli gives (1.11).
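Both expansion theorems can be spot-checked numerically on small matrices. The following brute-force Python sketch is my own illustration, not from the notes (indices 0-based in code, while s(ω) is computed with the notes' 1-based convention):

```python
# Sketch (my illustration): checking Cauchy-Binet and the Laplace expansion.
from itertools import combinations, permutations

def det(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = -1 if inv % 2 else 1
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

def sub(A, rows, cols):          # A[alpha|beta], 0-based index tuples
    return [[A[r][c] for c in cols] for r in rows]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Cauchy-Binet: det (AB)[alpha|beta] = sum over omega in Q_{m,n}.
A = [[1, 2, 0], [3, 1, 4]]            # 2 x 3
B = [[2, 1], [0, 1], [1, 3]]          # 3 x 2
C = matmul(A, B)
alpha = beta = (0, 1)
lhs = det(sub(C, alpha, beta))
rhs = sum(det(sub(A, alpha, w)) * det(sub(B, w, beta))
          for w in combinations(range(3), 2))
assert lhs == rhs

# Laplace expansion along rows alpha = (0, 1) of a 4 x 4 matrix.
M = [[1, 2, 0, 3], [0, 1, 4, 1], [2, 0, 1, 5], [1, 1, 2, 0]]
alpha = (0, 1)
comp = lambda w, n=4: tuple(i for i in range(n) if i not in w)
s = lambda w: sum(i + 1 for i in w)   # s(omega), using 1-based entries
expansion = sum((-1) ** (s(alpha) + s(w))
                * det(sub(M, alpha, w)) * det(sub(M, comp(alpha), comp(w)))
                for w in combinations(range(4), 2))
assert expansion == det(M)
```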

The permanent function of A is
per A = Σ_{σ∈S_n} Π_{i=1}^n a_{iσ(i)},
which is also known as the positive determinant.

Problems

1. Prove equation (1.2).
2. Prove that Π_{i=1}^m Σ_{j=1}^n a_ij = Σ_{γ∈Γ_{m,n}} Π_{i=1}^m a_{iγ(i)}.
3. Prove a general Laplace expansion theorem: let A ∈ C^{n×n}, α, β ∈ Q_{m,n}, and 1 ≤ m ≤ n. Then
δ_{αβ} det A = Σ_{ω∈Q_{m,n}} (−1)^{s(α)+s(ω)} det A[α|ω] det A(β|ω).
4. Show that
|Γ(n_1, ..., n_m)| = Π_{i=1}^m n_i,  |Γ_{m,n}| = n^m,  |G_{m,n}| = \binom{n+m−1}{m},  |D_{m,n}| = \binom{n}{m} m!,  |Q_{m,n}| = \binom{n}{m}.

Solutions to Problems

2. (Roy) If A = (a_ij) ∈ C^{m×n}, the left side of Π_{i=1}^m Σ_{j=1}^n a_ij = Σ_{γ∈Γ_{m,n}} Π_{i=1}^m a_{iγ(i)} is the product of the row sums of A. Fix n and use induction on m. If m = 1, then both sides are equal to the sum of all entries of A. Suppose that the identity holds for m − 1. We have
Σ_{γ∈Γ_{m,n}} Π_{i=1}^m a_{iγ(i)} = ( Σ_{γ∈Γ_{m−1,n}} Π_{i=1}^{m−1} a_{iγ(i)} ) Σ_{j=1}^n a_{mj} = ( Π_{i=1}^{m−1} Σ_{j=1}^n a_ij ) Σ_{j=1}^n a_{mj} = Π_{i=1}^m Σ_{j=1}^n a_ij,
by the induction hypothesis.

4. The ith slot of each sequence in Γ(n_1, ..., n_m) has n_i choices, so |Γ(n_1, ..., n_m)| = Π_{i=1}^m n_i, and |Γ_{m,n}| = n^m follows by taking n_i = n for all i. |G_{m,n}| = \binom{n+m−1}{m}, the number of multisets of size m taken from {1, ..., n}. Now Q_{m,n} amounts to picking m distinct numbers from 1, ..., n, so |Q_{m,n}| = \binom{n}{m} is clear, and |D_{m,n}| = |Q_{m,n}| · m! = m! \binom{n}{m} by (1.1).
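The permanent defined above is the Leibniz sum (1.5) with every sign ε(σ) replaced by 1, so it is a one-line change from the determinant code. The sketch below is my own illustration:

```python
# Sketch (my illustration): the permanent, i.e., formula (1.5) without signs.
from itertools import permutations

def per(A):
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= A[i][p[i]]
        total += term
    return total

# For a 0/1 matrix, per A counts the perfect matchings of the
# associated bipartite graph.
assert per([[1, 1], [1, 1]]) == 2               # both diagonals contribute 1
assert per([[1, 2], [3, 4]]) == 1 * 4 + 2 * 3   # = 10
```

Unlike the determinant, no polynomial-time algorithm for the permanent is known, so the factorial-time sum above is essentially what one computes for small n.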

Chapter 2

Group representation theory

2.1 Symmetric groups

Let S_n be the symmetric group on {1, ..., n}. A matrix P ∈ C^{n×n} is said to be a permutation matrix if p_ij = δ_{i,σ(j)}, i, j = 1, ..., n, where σ ∈ S_n; we write P(σ) for P because of its association with σ. Denote by P_n the set of all permutation matrices in C^{n×n}. Notice that φ : S_n → P_n, σ ↦ P(σ), is an isomorphism.

Theorem (Cayley) Each finite group G with n elements is isomorphic to a subgroup of S_n.

Proof. Let Sym(G) denote the group of all bijections of G. For each σ ∈ G, define the left translation ℓ_σ : G → G by ℓ_σ(x) = σx for all x ∈ G. It is easy to show that ℓ_{στ} = ℓ_σ ℓ_τ for all σ, τ ∈ G, so that ℓ : G → Sym(G) is a group homomorphism. The map ℓ is injective since ℓ_σ = ℓ_τ implies that σx = τx for any x ∈ G and thus σ = τ. So G is isomorphic to some subgroup of Sym(G) and thus to some subgroup of S_n.

The following is another proof in matrix terms. Let G = {σ_1, ..., σ_n}. Define the regular representation Q : G → GL_n(C) by
Q(σ) := (δ_{σ_i, σσ_j}) ∈ GL_n(C).
It is easy to see that Q(σ) is a permutation matrix and Q : σ ↦ Q(σ) is injective. So it suffices to show that Q is a homomorphism. Now
(Q(σ)Q(π))_{ij} = Σ_{k=1}^n Q(σ)_{ik} Q(π)_{kj} = Σ_{k=1}^n δ_{σ_i, σσ_k} δ_{σ_k, πσ_j} = δ_{σ_i, σπσ_j} = Q(σπ)_{ij}.
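The map σ ↦ P(σ) can be coded directly. The sketch below is my own illustration (0-based indices): it builds P(σ) with p_ij = δ_{i,σ(j)} and checks that the map is multiplicative, mirroring the computation above:

```python
# Sketch (my illustration): permutation matrices with p_ij = delta_{i, sigma(j)},
# and the homomorphism property P(sigma) P(pi) = P(sigma pi).

def P(sigma):
    # sigma is a tuple: sigma[j] is the image of j (0-based)
    n = len(sigma)
    return [[1 if i == sigma[j] else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def compose(sigma, pi):
    # (sigma pi)(j) = sigma(pi(j))
    return tuple(sigma[pi[j]] for j in range(len(pi)))

sigma = (1, 2, 0)    # the 3-cycle 0 -> 1 -> 2 -> 0
pi = (1, 0, 2)       # the transposition swapping 0 and 1
assert matmul(P(sigma), P(pi)) == P(compose(sigma, pi))
```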

So we may view each finite group as a subgroup of S_n for some n. The element σ ∈ S_n is called a cycle of length k (1 ≤ k ≤ n) if there exist 1 ≤ i_1, ..., i_k ≤ n such that
σ(i_t) = i_{t+1}, t = 1, ..., k−1,  σ(i_k) = i_1,  σ(i) = i for i ∉ {i_1, ..., i_k},
and we write σ = (i_1, ..., i_k). Two cycles (i_1, ..., i_k) and (j_1, ..., j_m) are said to be disjoint if there is no intersection between the two sets {i_1, ..., i_k} and {j_1, ..., j_m}. From the definition, we may express the cycle σ as (i_1, σ(i_1), ..., σ^{k−1}(i_1)). For any σ ∈ S_n and any i there is 1 ≤ k ≤ n such that σ^k(i) = i; let k be the smallest such integer. Then (i σ(i) ⋯ σ^{k−1}(i)) is a cycle but is not necessarily equal to σ. The following theorem asserts that σ can be reconstructed from these cycles.

Theorem Each element σ ∈ S_n can be written as a product of disjoint cycles. The decomposition is unique up to permutation of the cycles.

Proof. Let i ≤ n be a positive integer. Then there is a smallest positive integer r such that σ^r(i) = i. If r = n, then the cycle (i σ(i) ⋯ σ^{r−1}(i)) is σ. If r < n, then there is a positive integer j ≤ n such that j ∉ {i, σ(i), ..., σ^{r−1}(i)}. Then there is a smallest positive integer s such that σ^s(j) = j. Clearly the two cycles (i σ(i) ⋯ σ^{r−1}(i)) and (j σ(j) ⋯ σ^{s−1}(j)) are disjoint. Continuing the process, we have
σ = (i σ(i) ⋯ σ^{r−1}(i))(j σ(j) ⋯ σ^{s−1}(j)) ⋯ (k σ(k) ⋯ σ^{t−1}(k)).

For example, in S_7,
( 1 2 3 4 5 6 7
  2 3 5 7 1 6 4 ) = (1 2 3 5)(4 7)(6) = (1 2 3 5)(4 7),
so c(σ) = 3 (which includes the cycle of length 1), and in S_5,
( 1 2 3 4 5
  2 5 4 3 1 ) = (1 2 5)(3 4) = (3 4)(1 2 5) = (3 4)(2 5 1),
so c(σ) = 2.

Cycles of length 2, i.e., (i j), are called transpositions.

Theorem Each element σ ∈ S_n can be written (not uniquely) as a product of transpositions.

Proof. Use the disjoint cycle decomposition theorem together with
(i_1 ⋯ i_k) = (i_1 i_2)(i_2 i_3) ⋯ (i_{k−1} i_k) = (i_1 i_k)(i_1 i_{k−1}) ⋯ (i_1 i_2). (2.1)

Though the decomposition is not unique, the parity (even number or odd number of transpositions) of each σ ∈ S_n is unique (see Merris pp. 56-57).

Problems

1. Prove that each element of S_n is a product of transpositions. (Hint: (i_1 i_2 ⋯ i_k) = (i_1 i_2)(i_2 i_3) ⋯ (i_{k−1} i_k) = (i_1 i_k)(i_1 i_{k−1}) ⋯ (i_1 i_2).)
2. Show that S_n is generated by the transpositions (12), (23), ..., (n−1, n). (Hint: express each transposition as a product of (12), (23), ..., (n−1, n).)
3. Express the following permutations as products of disjoint cycles:
(a) (a b)(a i_1 ... i_r b j_1 ... j_s).
(b) (a b)(a i_1 ... i_r b j_1 ... j_s)(a b).
(c) (a b)(a i_1 ... i_r).
(d) (a b)(a i_1 ... i_r)(a b).
4. Express σ = (24)(1234)(34)(356)(56) ∈ S_7 as a product of disjoint cycles and find c(σ).

Solutions to Problems

1. Use the disjoint cycle decomposition theorem and the hint.
2. (i, i+2) = (i+1, i+2)(i, i+1)(i+1, i+2), and each (i j) is obtained by conjugating (i, i+1) by the product (j−1, j) ⋯ (i+1, i+2), where i < j. So S_n is generated by (12), (23), ..., (n−1, n).
(Zach) (i j) = (1 i)(1 j)(1 i). By Problem 1, every cycle is a product of transpositions, so S_n is generated by (12), (13), ..., (1n). Now (1 k) = (1, k−1)(k−1, k)(1, k−1) for all k > 2.
3. (a) (a b)(a i_1 ... i_r b j_1 ... j_s) = (a i_1 ... i_r)(b j_1 ... j_s).
(b) (a b)(a i_1 ... i_r b j_1 ... j_s)(a b) = (a i_1 ... i_r)(b j_1 ... j_s)(a b) = (a j_1 ... j_s b i_1 ... i_r).
(c) (a b)(a i_1 ... i_r) = (a i_1 ... i_r b).
(d) (a b)(a i_1 ... i_r)(a b) = (b i_1 ... i_r).
4. σ = (24)(1234)(34)(356)(56) = (1 4 2 3 5)(6)(7), so c(σ) = 3.
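The proof of the disjoint cycle theorem is effectively an algorithm: follow each point's orbit until it closes up. The Python sketch below is my own illustration; it extracts the disjoint cycles and recomputes Problem 4, where σ = (24)(1234)(34)(356)(56) in S_7 with factors applied right to left:

```python
# Sketch (my illustration): extracting the disjoint cycles of a permutation,
# exactly as in the proof above (follow i, sigma(i), sigma^2(i), ...).

def cycles(sigma):
    # sigma maps {1, ..., n} to itself, given as a dict
    seen, result = set(), []
    for i in sorted(sigma):
        if i not in seen:
            cyc = [i]
            j = sigma[i]
            while j != i:            # follow the orbit of i until it closes
                cyc.append(j)
                j = sigma[j]
            seen.update(cyc)
            result.append(tuple(cyc))
    return result

def from_cycle(cyc, n):
    sigma = {i: i for i in range(1, n + 1)}
    for a, b in zip(cyc, cyc[1:] + cyc[:1]):
        sigma[a] = b
    return sigma

def compose(f, g):                   # (f g)(i) = f(g(i))
    return {i: f[g[i]] for i in g}

# Build sigma = (24)(1234)(34)(356)(56) in S_7, rightmost factor first:
n = 7
sigma = {i: i for i in range(1, n + 1)}
for cyc in [(5, 6), (3, 5, 6), (3, 4), (1, 2, 3, 4), (2, 4)]:
    sigma = compose(from_cycle(cyc, n), sigma)

assert cycles(sigma) == [(1, 4, 2, 3, 5), (6,), (7,)]
assert len(cycles(sigma)) == 3       # c(sigma) = 3, as in Problem 4
```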


Διαβάστε περισσότερα

ω ω ω ω ω ω+2 ω ω+2 + ω ω ω ω+2 + ω ω+1 ω ω+2 2 ω ω ω ω ω ω ω ω+1 ω ω2 ω ω2 + ω ω ω2 + ω ω ω ω2 + ω ω+1 ω ω2 + ω ω+1 + ω ω ω ω2 + ω

ω ω ω ω ω ω+2 ω ω+2 + ω ω ω ω+2 + ω ω+1 ω ω+2 2 ω ω ω ω ω ω ω ω+1 ω ω2 ω ω2 + ω ω ω2 + ω ω ω ω2 + ω ω+1 ω ω2 + ω ω+1 + ω ω ω ω2 + ω 0 1 2 3 4 5 6 ω ω + 1 ω + 2 ω + 3 ω + 4 ω2 ω2 + 1 ω2 + 2 ω2 + 3 ω3 ω3 + 1 ω3 + 2 ω4 ω4 + 1 ω5 ω 2 ω 2 + 1 ω 2 + 2 ω 2 + ω ω 2 + ω + 1 ω 2 + ω2 ω 2 2 ω 2 2 + 1 ω 2 2 + ω ω 2 3 ω 3 ω 3 + 1 ω 3 + ω ω 3 +

Διαβάστε περισσότερα

New bounds for spherical two-distance sets and equiangular lines

New bounds for spherical two-distance sets and equiangular lines New bounds for spherical two-distance sets and equiangular lines Michigan State University Oct 8-31, 016 Anhui University Definition If X = {x 1, x,, x N } S n 1 (unit sphere in R n ) and x i, x j = a

Διαβάστε περισσότερα

SOME PROPERTIES OF FUZZY REAL NUMBERS

SOME PROPERTIES OF FUZZY REAL NUMBERS Sahand Communications in Mathematical Analysis (SCMA) Vol. 3 No. 1 (2016), 21-27 http://scma.maragheh.ac.ir SOME PROPERTIES OF FUZZY REAL NUMBERS BAYAZ DARABY 1 AND JAVAD JAFARI 2 Abstract. In the mathematical

Διαβάστε περισσότερα

Problem Set 3: Solutions

Problem Set 3: Solutions CMPSCI 69GG Applied Information Theory Fall 006 Problem Set 3: Solutions. [Cover and Thomas 7.] a Define the following notation, C I p xx; Y max X; Y C I p xx; Ỹ max I X; Ỹ We would like to show that C

Διαβάστε περισσότερα

Fourier Series. MATH 211, Calculus II. J. Robert Buchanan. Spring Department of Mathematics

Fourier Series. MATH 211, Calculus II. J. Robert Buchanan. Spring Department of Mathematics Fourier Series MATH 211, Calculus II J. Robert Buchanan Department of Mathematics Spring 2018 Introduction Not all functions can be represented by Taylor series. f (k) (c) A Taylor series f (x) = (x c)

Διαβάστε περισσότερα

Finite Field Problems: Solutions

Finite Field Problems: Solutions Finite Field Problems: Solutions 1. Let f = x 2 +1 Z 11 [x] and let F = Z 11 [x]/(f), a field. Let Solution: F =11 2 = 121, so F = 121 1 = 120. The possible orders are the divisors of 120. Solution: The

Διαβάστε περισσότερα

CHAPTER 25 SOLVING EQUATIONS BY ITERATIVE METHODS

CHAPTER 25 SOLVING EQUATIONS BY ITERATIVE METHODS CHAPTER 5 SOLVING EQUATIONS BY ITERATIVE METHODS EXERCISE 104 Page 8 1. Find the positive root of the equation x + 3x 5 = 0, correct to 3 significant figures, using the method of bisection. Let f(x) =

Διαβάστε περισσότερα

Jordan Form of a Square Matrix

Jordan Form of a Square Matrix Jordan Form of a Square Matrix Josh Engwer Texas Tech University josh.engwer@ttu.edu June 3 KEY CONCEPTS & DEFINITIONS: R Set of all real numbers C Set of all complex numbers = {a + bi : a b R and i =

Διαβάστε περισσότερα

Srednicki Chapter 55

Srednicki Chapter 55 Srednicki Chapter 55 QFT Problems & Solutions A. George August 3, 03 Srednicki 55.. Use equations 55.3-55.0 and A i, A j ] = Π i, Π j ] = 0 (at equal times) to verify equations 55.-55.3. This is our third

Διαβάστε περισσότερα

Other Test Constructions: Likelihood Ratio & Bayes Tests

Other Test Constructions: Likelihood Ratio & Bayes Tests Other Test Constructions: Likelihood Ratio & Bayes Tests Side-Note: So far we have seen a few approaches for creating tests such as Neyman-Pearson Lemma ( most powerful tests of H 0 : θ = θ 0 vs H 1 :

Διαβάστε περισσότερα

Space-Time Symmetries

Space-Time Symmetries Chapter Space-Time Symmetries In classical fiel theory any continuous symmetry of the action generates a conserve current by Noether's proceure. If the Lagrangian is not invariant but only shifts by a

Διαβάστε περισσότερα

Optimal Parameter in Hermitian and Skew-Hermitian Splitting Method for Certain Two-by-Two Block Matrices

Optimal Parameter in Hermitian and Skew-Hermitian Splitting Method for Certain Two-by-Two Block Matrices Optimal Parameter in Hermitian and Skew-Hermitian Splitting Method for Certain Two-by-Two Block Matrices Chi-Kwong Li Department of Mathematics The College of William and Mary Williamsburg, Virginia 23187-8795

Διαβάστε περισσότερα

DIRECT PRODUCT AND WREATH PRODUCT OF TRANSFORMATION SEMIGROUPS

DIRECT PRODUCT AND WREATH PRODUCT OF TRANSFORMATION SEMIGROUPS GANIT J. Bangladesh Math. oc. IN 606-694) 0) -7 DIRECT PRODUCT AND WREATH PRODUCT OF TRANFORMATION EMIGROUP ubrata Majumdar, * Kalyan Kumar Dey and Mohd. Altab Hossain Department of Mathematics University

Διαβάστε περισσότερα

derivation of the Laplacian from rectangular to spherical coordinates

derivation of the Laplacian from rectangular to spherical coordinates derivation of the Laplacian from rectangular to spherical coordinates swapnizzle 03-03- :5:43 We begin by recognizing the familiar conversion from rectangular to spherical coordinates (note that φ is used

Διαβάστε περισσότερα

Quadratic Expressions

Quadratic Expressions Quadratic Expressions. The standard form of a quadratic equation is ax + bx + c = 0 where a, b, c R and a 0. The roots of ax + bx + c = 0 are b ± b a 4ac. 3. For the equation ax +bx+c = 0, sum of the roots

Διαβάστε περισσότερα

HOMEWORK 4 = G. In order to plot the stress versus the stretch we define a normalized stretch:

HOMEWORK 4 = G. In order to plot the stress versus the stretch we define a normalized stretch: HOMEWORK 4 Problem a For the fast loading case, we want to derive the relationship between P zz and λ z. We know that the nominal stress is expressed as: P zz = ψ λ z where λ z = λ λ z. Therefore, applying

Διαβάστε περισσότερα

Lecture 21: Properties and robustness of LSE

Lecture 21: Properties and robustness of LSE Lecture 21: Properties and robustness of LSE BLUE: Robustness of LSE against normality We now study properties of l τ β and σ 2 under assumption A2, i.e., without the normality assumption on ε. From Theorem

Διαβάστε περισσότερα

6.3 Forecasting ARMA processes

6.3 Forecasting ARMA processes 122 CHAPTER 6. ARMA MODELS 6.3 Forecasting ARMA processes The purpose of forecasting is to predict future values of a TS based on the data collected to the present. In this section we will discuss a linear

Διαβάστε περισσότερα

Math221: HW# 1 solutions

Math221: HW# 1 solutions Math: HW# solutions Andy Royston October, 5 7.5.7, 3 rd Ed. We have a n = b n = a = fxdx = xdx =, x cos nxdx = x sin nx n sin nxdx n = cos nx n = n n, x sin nxdx = x cos nx n + cos nxdx n cos n = + sin

Διαβάστε περισσότερα

MATH1030 Linear Algebra Assignment 5 Solution

MATH1030 Linear Algebra Assignment 5 Solution 2015-16 MATH1030 Linear Algebra Assignment 5 Solution (Question No. referring to the 8 th edition of the textbook) Section 3.4 4 Note that x 2 = x 1, x 3 = 2x 1 and x 1 0, hence span {x 1, x 2, x 3 } has

Διαβάστε περισσότερα

Απόκριση σε Μοναδιαία Ωστική Δύναμη (Unit Impulse) Απόκριση σε Δυνάμεις Αυθαίρετα Μεταβαλλόμενες με το Χρόνο. Απόστολος Σ.

Απόκριση σε Μοναδιαία Ωστική Δύναμη (Unit Impulse) Απόκριση σε Δυνάμεις Αυθαίρετα Μεταβαλλόμενες με το Χρόνο. Απόστολος Σ. Απόκριση σε Δυνάμεις Αυθαίρετα Μεταβαλλόμενες με το Χρόνο The time integral of a force is referred to as impulse, is determined by and is obtained from: Newton s 2 nd Law of motion states that the action

Διαβάστε περισσότερα

12. Radon-Nikodym Theorem

12. Radon-Nikodym Theorem Tutorial 12: Radon-Nikodym Theorem 1 12. Radon-Nikodym Theorem In the following, (Ω, F) is an arbitrary measurable space. Definition 96 Let μ and ν be two (possibly complex) measures on (Ω, F). We say

Διαβάστε περισσότερα

de Rham Theorem May 10, 2016

de Rham Theorem May 10, 2016 de Rham Theorem May 10, 2016 Stokes formula and the integration morphism: Let M = σ Σ σ be a smooth triangulated manifold. Fact: Stokes formula σ ω = σ dω holds, e.g. for simplices. It can be used to define

Διαβάστε περισσότερα

DETERMINANT AND PFAFFIAN OF SUM OF SKEW SYMMETRIC MATRICES. 1. Introduction

DETERMINANT AND PFAFFIAN OF SUM OF SKEW SYMMETRIC MATRICES. 1. Introduction Unspecified Journal Volume 00, Number 0, Pages 000 000 S????-????(XX)0000-0 DETERMINANT AND PFAFFIAN OF SUM OF SKEW SYMMETRIC MATRICES TIN-YAU TAM AND MARY CLAIR THOMPSON Abstract. We completely describe

Διαβάστε περισσότερα

On the Galois Group of Linear Difference-Differential Equations

On the Galois Group of Linear Difference-Differential Equations On the Galois Group of Linear Difference-Differential Equations Ruyong Feng KLMM, Chinese Academy of Sciences, China Ruyong Feng (KLMM, CAS) Galois Group 1 / 19 Contents 1 Basic Notations and Concepts

Διαβάστε περισσότερα

The ε-pseudospectrum of a Matrix

The ε-pseudospectrum of a Matrix The ε-pseudospectrum of a Matrix Feb 16, 2015 () The ε-pseudospectrum of a Matrix Feb 16, 2015 1 / 18 1 Preliminaries 2 Definitions 3 Basic Properties 4 Computation of Pseudospectrum of 2 2 5 Problems

Διαβάστε περισσότερα

Exercise 1.1. Verify that if we apply GS to the coordinate basis Gauss form ds 2 = E(u, v)du 2 + 2F (u, v)dudv + G(u, v)dv 2

Exercise 1.1. Verify that if we apply GS to the coordinate basis Gauss form ds 2 = E(u, v)du 2 + 2F (u, v)dudv + G(u, v)dv 2 Math 209 Riemannian Geometry Jeongmin Shon Problem. Let M 2 R 3 be embedded surface. Then the induced metric on M 2 is obtained by taking the standard inner product on R 3 and restricting it to the tangent

Διαβάστε περισσότερα

arxiv: v1 [math.ra] 19 Dec 2017

arxiv: v1 [math.ra] 19 Dec 2017 TWO-DIMENSIONAL LEFT RIGHT UNITAL ALGEBRAS OVER ALGEBRAICALLY CLOSED FIELDS AND R HAHMED UBEKBAEV IRAKHIMOV 3 arxiv:7673v [mathra] 9 Dec 7 Department of Math Faculty of Science UPM Selangor Malaysia &

Διαβάστε περισσότερα

Section 7.6 Double and Half Angle Formulas

Section 7.6 Double and Half Angle Formulas 09 Section 7. Double and Half Angle Fmulas To derive the double-angles fmulas, we will use the sum of two angles fmulas that we developed in the last section. We will let α θ and β θ: cos(θ) cos(θ + θ)

Διαβάστε περισσότερα

forms This gives Remark 1. How to remember the above formulas: Substituting these into the equation we obtain with

forms This gives Remark 1. How to remember the above formulas: Substituting these into the equation we obtain with Week 03: C lassification of S econd- Order L inear Equations In last week s lectures we have illustrated how to obtain the general solutions of first order PDEs using the method of characteristics. We

Διαβάστε περισσότερα

Second Order Partial Differential Equations

Second Order Partial Differential Equations Chapter 7 Second Order Partial Differential Equations 7.1 Introduction A second order linear PDE in two independent variables (x, y Ω can be written as A(x, y u x + B(x, y u xy + C(x, y u u u + D(x, y

Διαβάστε περισσότερα

CHAPTER 48 APPLICATIONS OF MATRICES AND DETERMINANTS

CHAPTER 48 APPLICATIONS OF MATRICES AND DETERMINANTS CHAPTER 48 APPLICATIONS OF MATRICES AND DETERMINANTS EXERCISE 01 Page 545 1. Use matrices to solve: 3x + 4y x + 5y + 7 3x + 4y x + 5y 7 Hence, 3 4 x 0 5 y 7 The inverse of 3 4 5 is: 1 5 4 1 5 4 15 8 3

Διαβάστε περισσότερα

Generating Set of the Complete Semigroups of Binary Relations

Generating Set of the Complete Semigroups of Binary Relations Applied Mathematics 06 7 98-07 Published Online January 06 in SciRes http://wwwscirporg/journal/am http://dxdoiorg/036/am067009 Generating Set of the Complete Semigroups of Binary Relations Yasha iasamidze

Διαβάστε περισσότερα

SCITECH Volume 13, Issue 2 RESEARCH ORGANISATION Published online: March 29, 2018

SCITECH Volume 13, Issue 2 RESEARCH ORGANISATION Published online: March 29, 2018 Journal of rogressive Research in Mathematics(JRM) ISSN: 2395-028 SCITECH Volume 3, Issue 2 RESEARCH ORGANISATION ublished online: March 29, 208 Journal of rogressive Research in Mathematics www.scitecresearch.com/journals

Διαβάστε περισσότερα

ΚΥΠΡΙΑΚΗ ΕΤΑΙΡΕΙΑ ΠΛΗΡΟΦΟΡΙΚΗΣ CYPRUS COMPUTER SOCIETY ΠΑΓΚΥΠΡΙΟΣ ΜΑΘΗΤΙΚΟΣ ΔΙΑΓΩΝΙΣΜΟΣ ΠΛΗΡΟΦΟΡΙΚΗΣ 19/5/2007

ΚΥΠΡΙΑΚΗ ΕΤΑΙΡΕΙΑ ΠΛΗΡΟΦΟΡΙΚΗΣ CYPRUS COMPUTER SOCIETY ΠΑΓΚΥΠΡΙΟΣ ΜΑΘΗΤΙΚΟΣ ΔΙΑΓΩΝΙΣΜΟΣ ΠΛΗΡΟΦΟΡΙΚΗΣ 19/5/2007 Οδηγίες: Να απαντηθούν όλες οι ερωτήσεις. Αν κάπου κάνετε κάποιες υποθέσεις να αναφερθούν στη σχετική ερώτηση. Όλα τα αρχεία που αναφέρονται στα προβλήματα βρίσκονται στον ίδιο φάκελο με το εκτελέσιμο

Διαβάστε περισσότερα

Jordan Journal of Mathematics and Statistics (JJMS) 4(2), 2011, pp

Jordan Journal of Mathematics and Statistics (JJMS) 4(2), 2011, pp Jordan Journal of Mathematics and Statistics (JJMS) 4(2), 2011, pp.115-126. α, β, γ ORTHOGONALITY ABDALLA TALLAFHA Abstract. Orthogonality in inner product spaces can be expresed using the notion of norms.

Διαβάστε περισσότερα

Statistical Inference I Locally most powerful tests

Statistical Inference I Locally most powerful tests Statistical Inference I Locally most powerful tests Shirsendu Mukherjee Department of Statistics, Asutosh College, Kolkata, India. shirsendu st@yahoo.co.in So far we have treated the testing of one-sided

Διαβάστε περισσότερα

Practice Exam 2. Conceptual Questions. 1. State a Basic identity and then verify it. (a) Identity: Solution: One identity is csc(θ) = 1

Practice Exam 2. Conceptual Questions. 1. State a Basic identity and then verify it. (a) Identity: Solution: One identity is csc(θ) = 1 Conceptual Questions. State a Basic identity and then verify it. a) Identity: Solution: One identity is cscθ) = sinθ) Practice Exam b) Verification: Solution: Given the point of intersection x, y) of the

Διαβάστε περισσότερα

Trigonometric Formula Sheet

Trigonometric Formula Sheet Trigonometric Formula Sheet Definition of the Trig Functions Right Triangle Definition Assume that: 0 < θ < or 0 < θ < 90 Unit Circle Definition Assume θ can be any angle. y x, y hypotenuse opposite θ

Διαβάστε περισσότερα

Homomorphism in Intuitionistic Fuzzy Automata

Homomorphism in Intuitionistic Fuzzy Automata International Journal of Fuzzy Mathematics Systems. ISSN 2248-9940 Volume 3, Number 1 (2013), pp. 39-45 Research India Publications http://www.ripublication.com/ijfms.htm Homomorphism in Intuitionistic

Διαβάστε περισσότερα

Bounding Nonsplitting Enumeration Degrees

Bounding Nonsplitting Enumeration Degrees Bounding Nonsplitting Enumeration Degrees Thomas F. Kent Andrea Sorbi Università degli Studi di Siena Italia July 18, 2007 Goal: Introduce a form of Σ 0 2-permitting for the enumeration degrees. Till now,

Διαβάστε περισσότερα

If we restrict the domain of y = sin x to [ π, π ], the restrict function. y = sin x, π 2 x π 2

If we restrict the domain of y = sin x to [ π, π ], the restrict function. y = sin x, π 2 x π 2 Chapter 3. Analytic Trigonometry 3.1 The inverse sine, cosine, and tangent functions 1. Review: Inverse function (1) f 1 (f(x)) = x for every x in the domain of f and f(f 1 (x)) = x for every x in the

Διαβάστε περισσότερα

= {{D α, D α }, D α }. = [D α, 4iσ µ α α D α µ ] = 4iσ µ α α [Dα, D α ] µ.

= {{D α, D α }, D α }. = [D α, 4iσ µ α α D α µ ] = 4iσ µ α α [Dα, D α ] µ. PHY 396 T: SUSY Solutions for problem set #1. Problem 2(a): First of all, [D α, D 2 D α D α ] = {D α, D α }D α D α {D α, D α } = {D α, D α }D α + D α {D α, D α } (S.1) = {{D α, D α }, D α }. Second, {D

Διαβάστε περισσότερα

Homework 8 Model Solution Section

Homework 8 Model Solution Section MATH 004 Homework Solution Homework 8 Model Solution Section 14.5 14.6. 14.5. Use the Chain Rule to find dz where z cosx + 4y), x 5t 4, y 1 t. dz dx + dy y sinx + 4y)0t + 4) sinx + 4y) 1t ) 0t + 4t ) sinx

Διαβάστε περισσότερα

5. Choice under Uncertainty

5. Choice under Uncertainty 5. Choice under Uncertainty Daisuke Oyama Microeconomics I May 23, 2018 Formulations von Neumann-Morgenstern (1944/1947) X: Set of prizes Π: Set of probability distributions on X : Preference relation

Διαβάστε περισσότερα

From root systems to Dynkin diagrams

From root systems to Dynkin diagrams From root systems to Dynkin diagrams Heiko Dietrich Abstract. We describe root systems and their associated Dynkin diagrams; these notes follow closely the book of Erdman & Wildon ( Introduction to Lie

Διαβάστε περισσότερα

Parametrized Surfaces

Parametrized Surfaces Parametrized Surfaces Recall from our unit on vector-valued functions at the beginning of the semester that an R 3 -valued function c(t) in one parameter is a mapping of the form c : I R 3 where I is some

Διαβάστε περισσότερα

If we restrict the domain of y = sin x to [ π 2, π 2

If we restrict the domain of y = sin x to [ π 2, π 2 Chapter 3. Analytic Trigonometry 3.1 The inverse sine, cosine, and tangent functions 1. Review: Inverse function (1) f 1 (f(x)) = x for every x in the domain of f and f(f 1 (x)) = x for every x in the

Διαβάστε περισσότερα

MINIMAL CLOSED SETS AND MAXIMAL CLOSED SETS

MINIMAL CLOSED SETS AND MAXIMAL CLOSED SETS MINIMAL CLOSED SETS AND MAXIMAL CLOSED SETS FUMIE NAKAOKA AND NOBUYUKI ODA Received 20 December 2005; Revised 28 May 2006; Accepted 6 August 2006 Some properties of minimal closed sets and maximal closed

Διαβάστε περισσότερα

Homomorphism of Intuitionistic Fuzzy Groups

Homomorphism of Intuitionistic Fuzzy Groups International Mathematical Forum, Vol. 6, 20, no. 64, 369-378 Homomorphism o Intuitionistic Fuzz Groups P. K. Sharma Department o Mathematics, D..V. College Jalandhar Cit, Punjab, India pksharma@davjalandhar.com

Διαβάστε περισσότερα

Areas and Lengths in Polar Coordinates

Areas and Lengths in Polar Coordinates Kiryl Tsishchanka Areas and Lengths in Polar Coordinates In this section we develop the formula for the area of a region whose boundary is given by a polar equation. We need to use the formula for the

Διαβάστε περισσότερα

Lecture 2. Soundness and completeness of propositional logic

Lecture 2. Soundness and completeness of propositional logic Lecture 2 Soundness and completeness of propositional logic February 9, 2004 1 Overview Review of natural deduction. Soundness and completeness. Semantics of propositional formulas. Soundness proof. Completeness

Διαβάστε περισσότερα

Phys624 Quantization of Scalar Fields II Homework 3. Homework 3 Solutions. 3.1: U(1) symmetry for complex scalar

Phys624 Quantization of Scalar Fields II Homework 3. Homework 3 Solutions. 3.1: U(1) symmetry for complex scalar Homework 3 Solutions 3.1: U(1) symmetry for complex scalar 1 3.: Two complex scalars The Lagrangian for two complex scalar fields is given by, L µ φ 1 µ φ 1 m φ 1φ 1 + µ φ µ φ m φ φ (1) This can be written

Διαβάστε περισσότερα