SCHOOL OF MATHEMATICAL SCIENCES
GLMA Linear Mathematics 00- Examination Solutions.

1. (a) i. $(2+5i)(3-i) = (6+5) + (15-2)i = 11 + 13i$. The real part is $11$, the imaginary part is $13$.

ii. $\dfrac{2+5i}{3-i} = \dfrac{(2+5i)(3+i)}{(3-i)(3+i)} = \dfrac{1+17i}{10} = \dfrac{1}{10} + \dfrac{17}{10}i$. The real part is $\frac{1}{10}$, the imaginary part is $\frac{17}{10}$.

(b) [Argand diagram showing $-2+2i$ and $-2i$, with the angles $\pi/4$ and $\pi/2$ marked.]

$\operatorname{Arg}(-2+2i) = \pi - \dfrac{\pi}{4} = \dfrac{3\pi}{4}$, and $|-2+2i|^2 = (-2)^2 + 2^2 = 8$, so $-2+2i = \sqrt{8}\,e^{i3\pi/4}$.

$\operatorname{Arg}(-2i) = -\dfrac{\pi}{2}$, and $|-2i|^2 = 0^2 + (-2)^2 = 4 = 2^2$, so $-2i = 2e^{-i\pi/2}$.

(c) $(-2i)^9 = \left(2e^{-i\pi/2}\right)^9 = 2^9 e^{-i9\pi/2} = 2^9 e^{-i\pi/2} = 512(-i) = -512i$.
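The arithmetic in Question 1 can be confirmed numerically with Python's built-in complex type. The operands $(2+5i)$, $(3-i)$ and $-2i$ follow the reconstruction of this solution and are assumptions about the original paper.

```python
# Check of the Question 1 arithmetic. The operands (2+5i), (3-i) and -2i
# are reconstructed values (an assumption), not taken from a legible original.
z1 = 2 + 5j
z2 = 3 - 1j

product = z1 * z2      # 11 + 13j
quotient = z1 / z2     # 0.1 + 1.7j, i.e. 1/10 + (17/10)i, up to rounding
power = (-2j) ** 9     # -512j, up to rounding

print(product, quotient, power)
```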
2. (a) With $\mathbf{a} = (2,1,4)$ and $\mathbf{b} = (1,2,2)$:
$|\mathbf{a}| = \sqrt{2^2 + 1^2 + 4^2} = \sqrt{21}$, $\qquad |\mathbf{b}| = \sqrt{1^2 + 2^2 + 2^2} = \sqrt{9} = 3$,
$\mathbf{a}\cdot\mathbf{b} = 2(1) + 1(2) + 4(2) = 12$,
$\mathbf{a}\times\mathbf{b} = (1\cdot 2 - 4\cdot 2,\; 4\cdot 1 - 2\cdot 2,\; 2\cdot 2 - 1\cdot 1) = (2-8,\; 4-4,\; 4-1) = (-6, 0, 3)$.
$\mathbf{a}\cdot\mathbf{b} = |\mathbf{a}||\mathbf{b}|\cos\theta$ gives us
$\cos\theta = \dfrac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{a}||\mathbf{b}|} = \dfrac{12}{3\sqrt{21}} = \dfrac{4}{\sqrt{21}}$.

(b) [Diagram: the line $L$ through the origin with direction $(1,2,2)$, the point $\mathbf{a} = (-1,3,2)$, and its projection $\mathbf{p}$ onto $L$.]

$\mathbf{p}$ is the component of $\mathbf{a}$ along $(1,2,2)$. Set $\mathbf{b} = (1,2,2)$. So
$\mathbf{p} = \dfrac{\mathbf{a}\cdot\mathbf{b}}{\mathbf{b}\cdot\mathbf{b}}\,\mathbf{b} = \dfrac{(-1+6+4)}{9}\,\mathbf{b} = \dfrac{9}{9}\,\mathbf{b} = \mathbf{b} = (1,2,2)$.
The distance from $\mathbf{a}$ to $L$ is then $|\mathbf{a} - \mathbf{p}| = \sqrt{4 + 1 + 0} = \sqrt{5}$.

3. (a) [Two matrix products are computed here entry by entry: a real product and a product of $2\times 2$ complex matrices. The entries are not legible in this copy.]
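The vector computations in Question 2 can be verified with NumPy. The vectors $\mathbf{a}$, $\mathbf{b}$ and the point in part (b) (called `c` below to avoid a name clash) follow the reconstruction above and are assumptions about the original paper.

```python
import numpy as np

# Question 2 check. The vectors below are reconstructed values (assumptions).
a = np.array([2.0, 1.0, 4.0])
b = np.array([1.0, 2.0, 2.0])

dot = a @ b                     # 12
cross = np.cross(a, b)          # (-6, 0, 3)
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))   # 4/sqrt(21)

# Part (b): distance from the point c to the line through the origin
# with direction b, via the projection p of c onto b.
c = np.array([-1.0, 3.0, 2.0])
p = ((c @ b) / (b @ b)) * b     # (1, 2, 2)
dist = np.linalg.norm(c - p)    # sqrt(5)

print(dot, cross, cos_theta, p, dist)
```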
(b) Using row operations which do not change the value of the determinant (subtracting suitable multiples of each pivot row from the rows below it, e.g. $R_2 - R_1$, $R_3 - R_1$, $R_4 - R_1$, and then clearing the later columns in the same way), the matrix is reduced to upper triangular form. The determinant is then the product of the diagonal entries, as the last matrix is the determinant of a triangular matrix. [The individual entries and the final value are not legible in this copy.]

(c) We may take $A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$, $B = \begin{pmatrix} 0 & 0 \\ 0 & 5 \end{pmatrix}$. Then
$\det(A+B) = \det\begin{pmatrix} 1 & 0 \\ 0 & 5 \end{pmatrix} = 5$, while $\det(A) = \det(B) = 0$.
Thus $\det(A+B) \neq \det(A) + \det(B) = 0$, as required.

4. (a) We form the augmented matrix and row reduce to echelon form. [The entries and row operations are not legible in this copy.] The system is then in echelon form and we can see that the equations are consistent, with leading variables $x_1, x_2$ and free variables $x_3, x_4$. To find the general solution, set $x_3 = s$, $x_4 = t$. The second row of the echelon matrix expresses $x_2$ in terms of $x_4$, and the first row then expresses $x_1$ in terms of $x_2, x_3, x_4$, i.e. in terms of $s$ and $t$. The general solution thus has the form
$(x_1, x_2, x_3, x_4) = \mathbf{x}_0 + s\,\mathbf{u} + t\,\mathbf{v}$,
a particular solution $\mathbf{x}_0$ plus the general solution $s\mathbf{u} + t\mathbf{v}$ of the associated homogeneous system.
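The counterexample in 3(c) is easy to confirm numerically. The particular entries follow the reconstruction above (an assumption); any $A$, $B$ with $\det A = \det B = 0$ but $A+B$ invertible would serve equally well.

```python
import numpy as np

# Question 3(c): det is not additive. The entries follow the reconstruction
# above (an assumption about the original paper).
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 5.0]])

print(np.linalg.det(A + B))                # ~5.0
print(np.linalg.det(A), np.linalg.det(B))  # ~0.0 ~0.0
```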
(b) Using Gauss-Jordan inversion: adjoin the identity matrix and row reduce $[A \mid I]$ until the left-hand block becomes $I$; the right-hand block is then $A^{-1}$. [The row operations and entries are not legible in this copy.] (This may also be done using $\det$ and $\operatorname{adj}$.)

The second matrix is quickly inverted using the formula for inverting $2\times 2$ matrices. When $ad - bc \neq 0$, the inverse of $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is $\dfrac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$. Here $ad - bc$ is computed for the given complex matrix, and the required inverse follows by substituting into the formula. [The entries are not legible in this copy.]

5. A subset $W$ of $V$ is a subspace of $V$ if it is non-empty and $w_1 + kw_2 \in W$ for all $w_1, w_2 \in W$ and all scalars $k$. Here
$W = \{\, A \in M_{n,n}(\mathbb{R}) \mid A^{T} = A \,\}$,
the set of symmetric $n\times n$ real matrices. Let $w_1 = (a_{ij})$ and $w_2 = (b_{ij})$ be $n \times n$ matrices. Then
$w_1 \in W \iff a_{ij} = a_{ji}$ for all $i,j$, $\qquad w_2 \in W \iff b_{ij} = b_{ji}$ for all $i,j$.
Then $w_1 + kw_2 = (a_{ij}) + k(b_{ij}) = (a_{ij} + kb_{ij})$, since scalar multiplication and matrix addition act entrywise. If the $(i,j)$th element of this matrix is $c_{ij}$, then
$c_{ij} = a_{ij} + kb_{ij} = a_{ji} + kb_{ji} = c_{ji}$.
So $w_1 + kw_2 \in W$ for each $w_1, w_2 \in W$, $k \in \mathbb{R}$, and hence $W$ is a subspace.

Natural basis: writing $E_{ij}$ for the matrix with a $1$ in position $(i,j)$ and $0$ elsewhere, the natural basis of $W$ consists of the diagonal matrices $E_{11}, E_{22}, \ldots, E_{nn}$ together with the matrices $E_{ij} + E_{ji}$ for $i < j$ (a $1$ in each of the positions $(i,j)$ and $(j,i)$, $0$ elsewhere).
If $w \in W$, $w = (a_{ij})$ with $a_{ij} = a_{ji}$, then
$w = \sum_{i=1}^{n} a_{ii}\,E_{ii} + \sum_{i<j} a_{ij}\,(E_{ij} + E_{ji})$
(here $E_{ij}$ has a $1$ in position $(i,j)$ and $0$ elsewhere), and so this set certainly spans $W$. If we form the combination
$\sum_{i=1}^{n} k_{ii}\,E_{ii} + \sum_{i<j} k_{ij}\,(E_{ij} + E_{ji})$,
we get the symmetric matrix whose $(i,j)$ entry is $k_{ij}$ (with $k_{ji} = k_{ij}$ for $i<j$). If this is to sum to the zero matrix, then every entry must vanish,
i.e. each $k_{ij} = 0$, establishing linear independence. Hence, the stated set is a basis. The number of elements in this basis is (the number of strictly upper-diagonal positions) + (the number of diagonal positions) $= \frac{1}{2}(n^2 - n) + n = \frac{1}{2}(n^2 + n) = \frac{1}{2}n(n+1)$.

6. $R(T) = \{\, w \in W \mid w = T(v) \text{ for some } v \in V \,\}$, $\qquad \ker(T) = \{\, v \in V \mid T(v) = 0 \,\}$.

(a) Here $T\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x+y \\ x+y+z \\ y+z \end{pmatrix}$. For $T$ to be a linear transformation, $T(v_1 + kv_2) = T(v_1) + kT(v_2)$ for all $v_1, v_2 \in V$, $k \in \mathbb{R}$. With $v_1 = (x_1, y_1, z_1)^T$ and $v_2 = (x_2, y_2, z_2)^T$ we have $v_1 + kv_2 = (x_1 + kx_2,\; y_1 + ky_2,\; z_1 + kz_2)^T$, so
$T(v_1 + kv_2) = \begin{pmatrix} (x_1+kx_2) + (y_1+ky_2) \\ (x_1+kx_2) + (y_1+ky_2) + (z_1+kz_2) \\ (y_1+ky_2) + (z_1+kz_2) \end{pmatrix} = \begin{pmatrix} x_1+y_1 \\ x_1+y_1+z_1 \\ y_1+z_1 \end{pmatrix} + k\begin{pmatrix} x_2+y_2 \\ x_2+y_2+z_2 \\ y_2+z_2 \end{pmatrix} = T(v_1) + kT(v_2),$
i.e. $T(v_1 + kv_2) = T(v_1) + kT(v_2)$ and hence $T$ is a linear transformation.

Let $v = (x, y, z)^T \in \ker(T)$. Then $T(v) = 0$:
$x + y = 0 \quad (1)$
$x + y + z = 0 \quad (2)$
$y + z = 0 \quad (3)$
$(1) - (3)$: $z = x$. In $(2)$, $x + y + x = 0$, i.e. $2x + y = 0$ $(4)$. $2\times(1) - (4)$: $y = 0$. Hence, $x = z = 0$ also.
If v ker(t ), then v = 0 i.e. ker(t ) = {0} rank (T ) = dimension of k(t ) nullity (T ) = dimension of ker(t ) rank and nullity formula: rank (T ) + nullity (T ) = dim(v ) In this case, dim(v ) = dim(r ) = ; nullity (T ) = dim{0} = 0. rank (T ) = 0 =. If T (v ) = T (v ), then T (v ) T (v ) = 0. But since T is a linear transformation, T (v ) T (v ) = T (v v ). T (v v ) = 0, so v v ker(t ). But ker(t ) = {0}, so v v = 0 i.e. v = v. (b) Let u represent the function f and v represent g. T (u + kv) = (f + kg) (x) = f (x) + kg (x) = T (u) + kt (v). T is a linear transformation. { } ker(t ) = v V T (v) = 0 i.e. if v represents the function f, then f (x) 0. i.e. ker(t ) comprises all continuous, linear functions of x. 7. λ 0 det(a λi) λ 0 λ = 0 λ 0 C C C : λ ( λ) λ = 0 0 i.e. ( λ) 0 λ λ = 0. Expand by R: { [( ] } ( λ) λ)( λ) [0 + ] = 0 ( λ)(λ 5λ + 4) = 0 ( λ)(λ 4)(λ ) = 0. The eigenvalues are λ =, λ = 4, λ α 0 α α λ = : v = β β = β γ 0 γ γ 8
$2\alpha + \beta = 2\alpha \quad (4)$
$\alpha + 3\beta + \gamma = 2\beta \quad (5)$
$\beta + 2\gamma = 2\gamma \quad (6)$
$(4)$ & $(6)$ give $\beta = 0$; $(5)$ then gives $\gamma = -\alpha$. An eigenvector is $\alpha(1, 0, -1)^T$, $\alpha \neq 0$.

$\lambda = 4$:
$2\alpha + \beta = 4\alpha \quad (7)$
$\alpha + 3\beta + \gamma = 4\beta \quad (8)$
$\beta + 2\gamma = 4\gamma \quad (9)$
$(9)$ gives $\beta = 2\gamma$, $(7)$ gives $\beta = 2\alpha$, so $\alpha = \gamma$, $\beta = 2\alpha$ (and $(8)$ is then satisfied). An eigenvector is $\alpha(1, 2, 1)^T$, $\alpha \neq 0$.

$\lambda = 1$:
$2\alpha + \beta = \alpha \quad (10)$
$\alpha + 3\beta + \gamma = \beta \quad (11)$
$\beta + 2\gamma = \gamma \quad (12)$
$(10)$ gives $\beta = -\alpha$, $(12)$ gives $\beta = -\gamma$, and in $(11)$: $-\beta + 3\beta - \beta = \beta$, which holds. So $\alpha = \gamma$, $\beta = -\alpha$. An eigenvector is $\alpha(1, -1, 1)^T$, $\alpha \neq 0$.

Normalised eigenvectors are therefore
$v_1 = \begin{pmatrix} 1/\sqrt{2} \\ 0 \\ -1/\sqrt{2} \end{pmatrix};\quad v_2 = \begin{pmatrix} 1/\sqrt{6} \\ 2/\sqrt{6} \\ 1/\sqrt{6} \end{pmatrix};\quad v_3 = \begin{pmatrix} 1/\sqrt{3} \\ -1/\sqrt{3} \\ 1/\sqrt{3} \end{pmatrix},$
and
$P = \begin{pmatrix} 1/\sqrt{2} & 1/\sqrt{6} & 1/\sqrt{3} \\ 0 & 2/\sqrt{6} & -1/\sqrt{3} \\ -1/\sqrt{2} & 1/\sqrt{6} & 1/\sqrt{3} \end{pmatrix}$
will be such that
$P^{-1}AP = D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$
NB other column orders of $P$, $D$ are possible.

$Q = \mathbf{x}^T A \mathbf{x}$, $\quad \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$, $\quad A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix}.$
Check:
$\mathbf{x}^T A \mathbf{x} = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}\begin{pmatrix} 2x_1 + x_2 \\ x_1 + 3x_2 + x_3 \\ x_2 + 2x_3 \end{pmatrix} = x_1(2x_1 + x_2) + x_2(x_1 + 3x_2 + x_3) + x_3(x_2 + 2x_3)$
$= 2x_1^2 + 3x_2^2 + 2x_3^2 + 2x_1x_2 + 2x_2x_3.$
Set $\mathbf{x} = P\mathbf{y}$:
$\mathbf{x}^T A \mathbf{x} = (P\mathbf{y})^T A P\mathbf{y} = \mathbf{y}^T P^T A P\,\mathbf{y} = \mathbf{y}^T D \mathbf{y} = 2y_1^2 + 4y_2^2 + y_3^2.$
For this choice of $P$, $D$ (other choices are possible), $Q = \alpha y_1^2 + \beta y_2^2 + \gamma y_3^2$ with $\alpha = 2$, $\beta = 4$, $\gamma = 1$, and $\mathbf{y} = P^{-1}\mathbf{x} = P^T\mathbf{x}$ (since $P$ is orthogonal), i.e.
$y_1 = \tfrac{1}{\sqrt{2}}(x_1 - x_3),\qquad y_2 = \tfrac{1}{\sqrt{6}}(x_1 + 2x_2 + x_3),\qquad y_3 = \tfrac{1}{\sqrt{3}}(x_1 - x_2 + x_3).$
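The diagonalisation in Question 7 can be checked numerically. The matrix $A$ and the eigenvectors below follow the reconstruction above and are assumptions about the original paper.

```python
import numpy as np

# Reconstructed data for Question 7 (an assumption about the original paper):
# A should be symmetric with eigenvalues 2, 4, 1 and orthonormal eigenvectors
# (1,0,-1)/sqrt(2), (1,2,1)/sqrt(6), (1,-1,1)/sqrt(3).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

P = np.column_stack([
    np.array([1.0, 0.0, -1.0]) / np.sqrt(2),
    np.array([1.0, 2.0, 1.0]) / np.sqrt(6),
    np.array([1.0, -1.0, 1.0]) / np.sqrt(3),
])

D = P.T @ A @ P          # should be diag(2, 4, 1), since P is orthogonal
print(np.round(D, 10))

# The quadratic form in the new variables y = P^T x:
x = np.array([1.0, -2.0, 3.0])   # arbitrary test vector
y = P.T @ x
print(x @ A @ x, 2 * y[0]**2 + 4 * y[1]**2 + y[2]**2)   # equal up to rounding
```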
8. Suppose
$k_1\begin{pmatrix} 1 \\ 0 \\ i \end{pmatrix} + k_2\begin{pmatrix} 0 \\ 1 \\ i \end{pmatrix} + k_3\begin{pmatrix} i \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},$
i.e.
$k_1 + ik_3 = 0 \quad (1)$
$k_2 + k_3 = 0 \quad (2)$
$ik_1 + ik_2 - k_3 = 0 \quad (3)$
$(3)\times(-i)$: $k_1 + k_2 + ik_3 = 0$. Since $k_1 + ik_3 = 0$ (from $(1)$), this gives $k_2 = 0$. Hence, $k_3 = 0$ (from $(2)$) and then $k_1 = 0$ (from $(1)$). So $k_1 = k_2 = k_3 = 0$. Hence, the vectors $\{v_1, v_2, v_3\}$ are independent. Since $V$ is three-dimensional, any set of three independent vectors is a basis. Such a set is $\{v_1, v_2, v_3\}$, which is therefore a basis.

Define $w_1 = v_1$ and choose $w_2$ of the form $w_2 = v_2 - \lambda v_1$. Then
$\langle w_2, w_1 \rangle = \langle v_2 - \lambda v_1, v_1 \rangle = \langle v_2, v_1 \rangle - \lambda\,\langle v_1, v_1 \rangle,$
so choosing $\lambda = \dfrac{\langle v_2, v_1 \rangle}{\langle v_1, v_1 \rangle}$ will yield $\langle w_2, w_1 \rangle = 0$. Here
$\langle v_2, v_1 \rangle = \begin{pmatrix} 0 & 1 & i \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ -i \end{pmatrix} = 0 + 0 + i(-i) = 1, \qquad \langle v_1, v_1 \rangle = 1 + 0 + 1 = 2,$
so $\lambda = \dfrac{1}{2}$ and
$w_2 = v_2 - \tfrac{1}{2}v_1 = \begin{pmatrix} 0 \\ 1 \\ i \end{pmatrix} - \tfrac{1}{2}\begin{pmatrix} 1 \\ 0 \\ i \end{pmatrix} = \begin{pmatrix} -1/2 \\ 1 \\ i/2 \end{pmatrix}.$

Now set $w_3 = v_3 - \mu w_2 - \nu w_1$. Then
$\langle w_3, w_2 \rangle = \langle v_3, w_2 \rangle - \mu\,\langle w_2, w_2 \rangle - \nu\,\langle w_1, w_2 \rangle = \langle v_3, w_2 \rangle - \mu\,\langle w_2, w_2 \rangle,$
since $\langle w_1, w_2 \rangle = 0$ ($w_1$ and $w_2$ are orthogonal).
Choosing $\mu = \langle v_3, w_2 \rangle / \langle w_2, w_2 \rangle$ guarantees that $w_3$ and $w_2$ are orthogonal. Here
$\langle v_3, w_2 \rangle = \begin{pmatrix} i & 1 & -1 \end{pmatrix}\begin{pmatrix} -1/2 \\ 1 \\ -i/2 \end{pmatrix} = -\tfrac{i}{2} + 1 + \tfrac{i}{2} = 1,$
$\langle w_2, w_2 \rangle = \begin{pmatrix} -1/2 & 1 & i/2 \end{pmatrix}\begin{pmatrix} -1/2 \\ 1 \\ -i/2 \end{pmatrix} = \tfrac{1}{4} + 1 + \tfrac{1}{4} = \tfrac{3}{2},$
so $\mu = \dfrac{1}{3/2} = \dfrac{2}{3}$.

Similarly $\langle w_3, w_1 \rangle = \langle v_3, w_1 \rangle - \nu\,\langle w_1, w_1 \rangle$ (the $\mu$ term vanishes since $\langle w_2, w_1 \rangle = 0$), so choosing $\nu = \langle v_3, w_1 \rangle / \langle w_1, w_1 \rangle$ guarantees that $w_3$ and $w_1$ are orthogonal. Here
$\langle v_3, w_1 \rangle = \begin{pmatrix} i & 1 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \\ -i \end{pmatrix} = i + 0 + i = 2i, \qquad \langle w_1, w_1 \rangle = 1 + 0 + 1 = 2,$
so $\nu = \dfrac{2i}{2} = i$. Then
$w_3 = v_3 - \tfrac{2}{3}w_2 - i\,w_1 = \begin{pmatrix} i \\ 1 \\ -1 \end{pmatrix} - \tfrac{2}{3}\begin{pmatrix} -1/2 \\ 1 \\ i/2 \end{pmatrix} - i\begin{pmatrix} 1 \\ 0 \\ i \end{pmatrix} = \begin{pmatrix} 1/3 \\ 1/3 \\ -i/3 \end{pmatrix}.$
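The Gram-Schmidt computation in Question 8 can be checked numerically. The vectors $v_1, v_2, v_3$ follow the reconstruction above and are assumptions about the original paper; the Hermitian inner product $\langle u, v \rangle = \sum_j u_j \overline{v_j}$ matches the convention used in the solution.

```python
import numpy as np

# Question 8 check. The vectors are reconstructed values (assumptions).
v1 = np.array([1, 0, 1j])
v2 = np.array([0, 1, 1j])
v3 = np.array([1j, 1, -1])

def inner(u, v):
    # Hermitian inner product <u, v> = sum_j u_j * conj(v_j)
    return np.sum(u * np.conj(v))

w1 = v1
w2 = v2 - (inner(v2, w1) / inner(w1, w1)) * w1   # (-1/2, 1, i/2)
w3 = (v3
      - (inner(v3, w2) / inner(w2, w2)) * w2
      - (inner(v3, w1) / inner(w1, w1)) * w1)    # (1/3, 1/3, -i/3)

print(w2, w3)
print(abs(inner(w2, w1)), abs(inner(w3, w1)), abs(inner(w3, w2)))  # all 0
```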