SCHOOL OF MATHEMATICAL SCIENCES G11LMA Linear Mathematics Examination Solutions

Σχετικά έγγραφα
Matrices and Determinants

Chapter 6: Systems of Linear Differential. be continuous functions on the interval

Example Sheet 3 Solutions

Srednicki Chapter 55

Chapter 6: Systems of Linear Differential. be continuous functions on the interval

EE512: Error Control Coding

Section 8.3 Trigonometric Equations

Lecture 2: Dirac notation and a review of linear algebra Read Sakurai chapter 1, Baym chatper 3

Exercises 10. Find a fundamental matrix of the given system of equations. Also find the fundamental matrix Φ(t) satisfying Φ(0) = I. 1.

2 Composition. Invertible Mappings

Numerical Analysis FMN011

Reminders: linear functions

CHAPTER 48 APPLICATIONS OF MATRICES AND DETERMINANTS

Homework 3 Solutions

Jordan Form of a Square Matrix

Tridiagonal matrices. Gérard MEURANT. October, 2008

Finite Field Problems: Solutions

MATHEMATICS. 1. If A and B are square matrices of order 3 such that A = -1, B =3, then 3AB = 1) -9 2) -27 3) -81 4) 81

Inverse trigonometric functions & General Solution of Trigonometric Equations

MATH1030 Linear Algebra Assignment 5 Solution

Partial Differential Equations in Biology The boundary element method. March 26, 2013

CHAPTER 25 SOLVING EQUATIONS BY ITERATIVE METHODS

( ) 2 and compare to M.

forms This gives Remark 1. How to remember the above formulas: Substituting these into the equation we obtain with

Phys460.nb Solution for the t-dependent Schrodinger s equation How did we find the solution? (not required)

Second Order Partial Differential Equations

Concrete Mathematics Exercises from 30 September 2016

Solutions to Exercise Sheet 5

6.1. Dirac Equation. Hamiltonian. Dirac Eq.

2. THEORY OF EQUATIONS. PREVIOUS EAMCET Bits.

3.4 SUM AND DIFFERENCE FORMULAS. NOTE: cos(α+β) cos α + cos β cos(α-β) cos α -cos β

CRASH COURSE IN PRECALCULUS

C.S. 430 Assignment 6, Sample Solutions

Fourier Series. MATH 211, Calculus II. J. Robert Buchanan. Spring Department of Mathematics

= λ 1 1 e. = λ 1 =12. has the properties e 1. e 3,V(Y

w o = R 1 p. (1) R = p =. = 1

Section 7.6 Double and Half Angle Formulas

MATRICES

Space-Time Symmetries

5. Choice under Uncertainty

Congruence Classes of Invertible Matrices of Order 3 over F 2

The Jordan Form of Complex Tridiagonal Matrices

The Simply Typed Lambda Calculus

A Note on Intuitionistic Fuzzy. Equivalence Relation

Mock Exam 7. 1 Hong Kong Educational Publishing Company. Section A 1. Reference: HKDSE Math M Q2 (a) (1 + kx) n 1M + 1A = (1) =

Areas and Lengths in Polar Coordinates

x j (t) = e λ jt v j, 1 j n

Matrices and vectors. Matrix and vector. a 11 a 12 a 1n a 21 a 22 a 2n A = b 1 b 2. b m. R m n, b = = ( a ij. a m1 a m2 a mn. def

Trigonometric Formula Sheet

Solution Series 9. i=1 x i and i=1 x i.

PARTIAL NOTES for 6.1 Trigonometric Identities

Notes on the Open Economy

derivation of the Laplacian from rectangular to spherical coordinates

Uniform Convergence of Fourier Series Michael Taylor

Statistical Inference I Locally most powerful tests

Lecture 10 - Representation Theory III: Theory of Weights

Parametrized Surfaces

Areas and Lengths in Polar Coordinates

Math221: HW# 1 solutions

Approximation of distance between locations on earth given by latitude and longitude

( y) Partial Differential Equations

SOLUTIONS TO MATH38181 EXTREME VALUES AND FINANCIAL RISK EXAM

Practice Exam 2. Conceptual Questions. 1. State a Basic identity and then verify it. (a) Identity: Solution: One identity is csc(θ) = 1

ΚΥΠΡΙΑΚΗ ΕΤΑΙΡΕΙΑ ΠΛΗΡΟΦΟΡΙΚΗΣ CYPRUS COMPUTER SOCIETY ΠΑΓΚΥΠΡΙΟΣ ΜΑΘΗΤΙΚΟΣ ΔΙΑΓΩΝΙΣΜΟΣ ΠΛΗΡΟΦΟΡΙΚΗΣ 19/5/2007

Every set of first-order formulas is equivalent to an independent set

Homework 8 Model Solution Section

k A = [k, k]( )[a 1, a 2 ] = [ka 1,ka 2 ] 4For the division of two intervals of confidence in R +

Απόκριση σε Μοναδιαία Ωστική Δύναμη (Unit Impulse) Απόκριση σε Δυνάμεις Αυθαίρετα Μεταβαλλόμενες με το Χρόνο. Απόστολος Σ.

ANSWERSHEET (TOPIC = DIFFERENTIAL CALCULUS) COLLECTION #2. h 0 h h 0 h h 0 ( ) g k = g 0 + g 1 + g g 2009 =?

ECE Spring Prof. David R. Jackson ECE Dept. Notes 2

ST5224: Advanced Statistical Theory II

Lecture 13 - Root Space Decomposition II

Fractional Colorings and Zykov Products of graphs

SOLUTIONS TO MATH38181 EXTREME VALUES AND FINANCIAL RISK EXAM

ΚΥΠΡΙΑΚΗ ΕΤΑΙΡΕΙΑ ΠΛΗΡΟΦΟΡΙΚΗΣ CYPRUS COMPUTER SOCIETY ΠΑΓΚΥΠΡΙΟΣ ΜΑΘΗΤΙΚΟΣ ΔΙΑΓΩΝΙΣΜΟΣ ΠΛΗΡΟΦΟΡΙΚΗΣ 6/5/2006

Second Order RLC Filters

ES440/ES911: CFD. Chapter 5. Solution of Linear Equation Systems

Lecture 2. Soundness and completeness of propositional logic

dim(u) = n 1 and {v j } j i

= {{D α, D α }, D α }. = [D α, 4iσ µ α α D α µ ] = 4iσ µ α α [Dα, D α ] µ.

SOLVING CUBICS AND QUARTICS BY RADICALS

Bounding Nonsplitting Enumeration Degrees

Section 9.2 Polar Equations and Graphs

6.3 Forecasting ARMA processes

SCITECH Volume 13, Issue 2 RESEARCH ORGANISATION Published online: March 29, 2018

HOMEWORK 4 = G. In order to plot the stress versus the stretch we define a normalized stretch:

Differential equations

The ε-pseudospectrum of a Matrix

2. Let H 1 and H 2 be Hilbert spaces and let T : H 1 H 2 be a bounded linear operator. Prove that [T (H 1 )] = N (T ). (6p)

TMA4115 Matematikk 3

CHAPTER 101 FOURIER SERIES FOR PERIODIC FUNCTIONS OF PERIOD

Lecture 34 Bootstrap confidence intervals

Problem Set 9 Solutions. θ + 1. θ 2 + cotθ ( ) sinθ e iφ is an eigenfunction of the ˆ L 2 operator. / θ 2. φ 2. sin 2 θ φ 2. ( ) = e iφ. = e iφ cosθ.

4.6 Autoregressive Moving Average Model ARMA(1,1)

D Alembert s Solution to the Wave Equation

g-selberg integrals MV Conjecture An A 2 Selberg integral Summary Long Live the King Ole Warnaar Department of Mathematics Long Live the King

F19MC2 Solutions 9 Complex Analysis

Empirical best prediction under area-level Poisson mixed models

Other Test Constructions: Likelihood Ratio & Bayes Tests

ORDINAL ARITHMETIC JULIAN J. SCHLÖDER

Transcript:

SCHOOL OF MATHEMATICAL SCIENCES GLMA Linear Mathematics 00- Examination Solutions. (a) i. ( + 5i)( i) = (6 + 5) + (5 )i = + i. Real part is, imaginary part is. (b) ii. + 5i i ( + 5i)( + i) = ( i)( + i) = + 7i 0 = 0 + 7 0 7 i Real part is, imaginary part is 0 0. + i π /4 π / i Arg ( + i) = π + π 4 = π 4 Arg ( i) = π + i = + = 8 so + i = 8e i(π/4) i = i = e iπ/. + ( ) = 4 = so (c) ( i) 9 = (e i π ) 9 = 9 e i9 π = 9 e iπ = 5 ( ) = 5.

. (a) a = + + 4 = b = + + = 9 = a b = ( ) + ( ) + 4 ( ) = a b = ( 8, 4 + 4, 4 ) = ( 6, 0, ). a b = a b cos(θ) gives us cos(θ) = a b a b = = 4. (b) 0 (,, ) p a = (,, ) p is the component of a along (,, ). (a b) ( 6 + 4) Set b = (,, ). So p = b = b = b = (,, ). (b b) 9 Distance from a to L is then a p = + + 0 = 5.. (a) 0 0 0 4 [ i i 5 4 5 ] [ + i i = = 0 4 6 + 4 + 5 0 + 6 4 + 0 7 6 8 6 6 ] [ ] i + 4 + = + i i i i [ ] + i =. + i 4i

(b) Using row operations which do not change the value of the determinant, 6 6 6 0 7 = R R 0 0 R R 0 4 4 0 8 R 4 R 0 4 9 = R 4 R 0 0 0 4 4 = R R 0 0 0 0 4 0 0 0 5 0 0 0 5 = () ( ) 5 = 0 as the last is the determinant of a triangular matrix. [ ] [ ] ([ ]) 0 0 0 0 (c) We may take A =, B =. Then det(a + B) = det = 0 while 0 0 0 5 0 5 det(a) = det(b) = 0. Thus det(a + B) = det(a) + det(b) + 0. as required. 4. (a) We form the augmented matrix and row reduce to echelon form. 6 9 R R 0 0 4 8 R 7 R 0 0 R R 0 0 0 0 0 0 0 This is now in echelon form and we can see that the equations are consistent, with leading variables x, x and free variables x, x 4. To find the general solution, set x = s, x 4 = t. The second row of the echelon matrix gives x x 4 = so x = ( + x 4) = + t. The first row gives us x x + x + x 4 =, i.e. ( x s + ) + t + t =, giving ( x = s + 9 ) 5 t = s 5 t 5. The general solution is thus (x, x, x, x 4 ) = (s 5 t 5, s, + t ), t ( = 5, 0, ) (, 0 + s(,, 0, 0) + t 5, 0, ),.

(b) Using Gauss-Jordan inversion 0 0 4 0 0 R R 0 0 7 R 0 0 R 0 0 0 R R 0 0 0 0 0 0 0 R + R 0 0 0 0 0 0 0 5 R R 0 0 0 0 0 0 0 0 0 So the inverse to the original matrix is 5 0. (This may also be done using det and adj.) The second matrix is quickly[ inverted ] using the[ formula for ] inverting matrices. When a b d b ad bc 0, the inverse of is. Here ad bc is c d ad bc c a i( i) i( + i) =. Thus the required inverse matrix is [ ] [ i i = ( + i) i i i i i ]. 5. W V is a subset of V if w + kw W for all w, w W and all scalars k. { } W = A M n,n (R) A = A Let a a... a n b b... b n a a... a n w =...., w b b... b n =.... a n a n... a nn b n b n... b nn w W a ij = a ji, w W b ij = b ji 4

a a... a n b b... b n a a... a n w + kw =.... + k b b... b n.... a n a n... a nn b n b n... b nn a a... a n kb kb... kb n a a... a n =.... + kb kb... kb n.... a n a n... a nn kb n kb n... kb nn a + kb a + kb... a n + kb n a + kb a + kb... a n + kb n =.... a n + kb n a n + kb n... a nn + kb nn If the (i, j) th element of this matrix is c ij, then c ij = a ij + kb ij = a ji + kb ji = c ji. w + kw W for each w, w W, k R and hence W is a subspace. 0 0 0 0 0 0 0 0 0 Natural basis :.....,,..... 0 0 0 0 0. 0 0 0 0 0 0 0...., 0 0 0,.....,.... 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0....,,,......... 0 0 0 5

a a a... a n a a a... a n If w W, w = a a a... a N (a ij = a ji )..... a n a n a n... a nn 0 0 0 0 0 0 0 0 0 0 0 Then w = a + a......... 0 0 0 0 0 0 0 + + a n + a 0 0 0........ 0 0 0 0 0 0 0 0 0 + + a + a + + a nn............ 0 0 0 and so this set certainly spans W. If we form 0 0 0 0 0 0 0 0 0 0 0 k + k + +......... 0 0 0 0 k n n..... + k..... + + k nn.... 0 0 0 0 0 0 0 k k k... k n k k k... k n we get...... k n k n k n... k nn k k k... k n k k k... k n If this is to sum to 0, then..... =..... k n k n k n... k nn 6

i.e. each k ij = 0, establishing linear independence. Hence, the stated set is a basis. The number of elements in this basis is (the number of strictly upper diagonal elements) + (number of diagonal elements) = (n n) + n = (n + n) = n(n + ). 6. { R(T ) = w V { ker(t ) = v V w = T (v) for some v V T (v) = 0 } } (a) For T to be a linear transformation, T (v + kv ) = T (v ) + kt (v ) for all v, v V, k R. v = x y z, v = x y z x kx kv = k y = ky z kz x kx x + kx v + kv = y + ky = y + ky z kz z + kz x + kx (x + kx ) + (y + ky ) T (v + kv ) = T y + ky = (x + kx ) + (y + ky ) + (z + kz ) z + kz (y + ky ) + (z + kz ) x + y + k(x + y ) x + y k(x + y ) = x + y + z + k(x + y + z ) = x + y + z + k(x + y + z ) y + z + k(y + z ) y + z k(y + z ) x + y x + y x x = x + y + z + k x + y + z = T y + kt y. y + z y + z z z i.e. T (v + kv ) = T (v ) + kt (v ) and hence T is a linear transformation. Let v ker(t ). Then T (v) = 0. x x x + y 0 v = y T (v) = T y = x + y + z = 0 z z y + z 0 x + y = 0 () x + y + z = 0 () y + z = 0 () () () z = x. In (), x + y + x = 0 x + y = 0 (4). (4) () y = 0. Hence, x = z = 0 also. 7

If v ker(t ), then v = 0 i.e. ker(t ) = {0} rank (T ) = dimension of k(t ) nullity (T ) = dimension of ker(t ) rank and nullity formula: rank (T ) + nullity (T ) = dim(v ) In this case, dim(v ) = dim(r ) = ; nullity (T ) = dim{0} = 0. rank (T ) = 0 =. If T (v ) = T (v ), then T (v ) T (v ) = 0. But since T is a linear transformation, T (v ) T (v ) = T (v v ). T (v v ) = 0, so v v ker(t ). But ker(t ) = {0}, so v v = 0 i.e. v = v. (b) Let u represent the function f and v represent g. T (u + kv) = (f + kg) (x) = f (x) + kg (x) = T (u) + kt (v). T is a linear transformation. { } ker(t ) = v V T (v) = 0 i.e. if v represents the function f, then f (x) 0. i.e. ker(t ) comprises all continuous, linear functions of x. 7. λ 0 det(a λi) λ 0 λ = 0 λ 0 C C C : λ ( λ) λ = 0 0 i.e. ( λ) 0 λ λ = 0. Expand by R: { [( ] } ( λ) λ)( λ) [0 + ] = 0 ( λ)(λ 5λ + 4) = 0 ( λ)(λ 4)(λ ) = 0. The eigenvalues are λ =, λ = 4, λ α 0 α α λ = : v = β β = β γ 0 γ γ 8

α + β = α α + β + γ = β β + γ = γ (4) (5) (6) (4) & (6) β = 0, (5) α = γ An eigenvector is α 0, α 0. α 0 α 4α λ = 4 : v = β β = 4β γ 0 γ 4γ α + β = 4α α + β + γ = 4β β + γ = 4γ (7) (8) (9) (9) β = γ, (7) β = α so α = γ, β = α An eigenvector is α, α 0. α 0 α α λ = : v = β, β = β γ 0 γ γ α + β = α α + β + γ = β β + γ = γ (0) () () (0) β = α, () β = γ, in () β + β β = β α = γ, β = α An eigenvector is α, α 0. Normalised eigenvectors are therefore / / 6 v = 0 / ; v = / / 6 / ; v = / 6 / / / 6 / P = 0 / 6 / / / 6 / 9

will be such that 0 0 P AP = D = 0 4 0. 0 0 NB other column orders of P, D are possible. x 0 Q = x Ax, x = x, A = x 0 check : x Ax = [ ] 0 x x x x x 0 x = [ ] x + x x x x x + x + x x + x Set x = P y = x (x + x ) + x (x + x + x ) + x (x + x ) = x + x + x + x x + x x. x Ax = (P y) AP y = y P AP y = y Dy = [ ] 0 0 y y y y 0 4 0 y = y + 4y + y. 0 0 y For this choice of P, D (other choices are possible), Q = αy + βy + γy α =, β = 4, γ = y = P x = P x (since P is orthogonal) i.e. y y = y / 0 / / 6 / 6 / 6 / / / x x x y = (x x ) y = 6 (x + x + x ) y = (x x + x ). 0

8. k 0 i 0 0 + k + k = 0 i i 0 k + ik = 0 () k + k = 0 () ik + ik k = 0 () () ( i) k + k + ik = 0. Since k + ik = 0 (from ()), this gives k = 0. Hence, k = 0 (from ()) and then k = 0 (from (()). k = k = k = 0. Hence, the vectors {v, v, v } are independent. Since V is three-dimensional, any set of three independent vectors is a basis. Such a set is {v, v, v }, which is therefore a basis. Define w = v and choose w such that w = v λv w, w = v λv, v = v, v λ v, v. choosing λ = v, v v, v will yield w, w = 0. v, v = [0, + i] }{{} v v, v = [ 0 i ] 0 = + i v 0 i = λ = 0 w = / 0 = i i i/ w = v µw νw w, w = v, w µ w, w ν w, w w, w = 0 since w and w are othogonal. w, w = v, w µ w, w.

Choosing µ = v, w / w, w guarantees that w, w are orthogonal. v, w = [ i ] = i + + i = i/ w, w = [ / i/ ] / = 4 + + 4 = i/ µ = / (/) = / w, w = v, v λ w, w ν w, w = v, v ν w, w since w, w = 0. Choosing ν = v, v / w, w will guarantee that 0 = w, w, so w and w are orthogonal. v, v = [ i ] 0 = i + i = i i w, w = [ 0 i ] 0 = + = i ν = i = i. Then i w = / i 0 = /. i/ i i/