MATH1030 Linear Algebra Assignment 5 Solution

2015-16 MATH1030 Linear Algebra Assignment 5 Solution
(Question numbers refer to the 8th edition of the textbook.)

Section 3.4

4. Note that x2 = x1, x3 = 2x1 and x1 ≠ 0, hence span{x1, x2, x3} has dimension 1.

7. For any a, b, c ∈ R,

    (a + b, a − b + 2c, b, c)^T = a(1, 1, 0, 0)^T + b(1, −1, 1, 0)^T + c(0, 2, 0, 1)^T.

Note that X := {(1, 1, 0, 0)^T, (1, −1, 1, 0)^T, (0, 2, 0, 1)^T} is a linearly independent set. As span(X) = S, it follows that X is a basis for S.

8. Since the dimension of R^3 is 3, it takes at least three vectors to span R^3. Therefore x1 and x2 cannot span R^3.

The matrix X must be nonsingular, that is, det(X) ≠ 0. If x3 = (a, b, c)^T and X = (x1, x2, x3), then

    det(X) = | 1   3   a |
             | 1  −1   b |  = 5a − b − 4c.
             | 1   4   c |

If one chooses a, b and c so that 5a − b − 4c ≠ 0, then {x1, x2, x3} will be a basis for R^3.

10. We must find a subset of three vectors that are linearly independent. Clearly x1 and x2 are linearly independent, but x3 = x2 − x1, so x1, x2 and x3 are linearly dependent. Next, we consider {x1, x2, x4}. Let X = (x1, x2, x4); then

    det(X) = | 1  2  2 |
             | 2  5  7 |  = 0
             | 2  4  4 |

so these three vectors are also linearly dependent. If we pick Y = (x1, x2, x5), then

    det(Y) = | 1  2  1 |
             | 2  5  1 |  = −2,
             | 2  4  0 |

so the vectors x1, x2 and x5 are linearly independent and hence form a basis for R^3.

11. We claim that X = {x^2 + 2, x + 3} is a basis for S. To prove this, first note that if a, b ∈ R satisfy

    a(x^2 + 2) + b(x + 3) = 0,

then a = b = 0, so X is linearly independent. Next, for any element p ∈ S, there exist a_p, b_p ∈ R such that

    p = a_p x^2 + b_p x + (2a_p + 3b_p).

Since we can express p as p = a_p(x^2 + 2) + b_p(x + 3), p is spanned by X, i.e. p ∈ span(X). So S ⊆ span(X). By reversing the above argument we see that span(X) ⊆ S. Therefore S = span(X), and it follows that X is a basis for S.

14. The dimension of a finite-dimensional space equals the number of elements in a basis, so we just need to count the number of linearly independent elements of the given set. Note that

    x^2 − 1 = −2x + 2(x − 1) + (x^2 + 1),

so the four vectors are linearly dependent. Hence the dimension of the span of the four vectors is at most 3 (i.e. codimension 1 in P3). Next, suppose there exist a, b, c ∈ R such that

    a(x) + b(x − 1) + c(x^2 + 1) = 0.

Then c = 0, a + b = 0 and c − b = 0, which implies a = b = c = 0. Hence X = {x, x − 1, x^2 + 1} is linearly independent, so span(X ∪ {x^2 − 1}) = span(X) has dimension 3.

Clearly x^2 − x − 1 = x^2 − (x + 1), and it is also clear that {x^2, x + 1} is linearly independent. Therefore span({x^2, x^2 − x − 1, x + 1}) = span({x^2, x + 1}) has dimension 2.

(d) Clearly {2x, x − 2} is linearly independent. Therefore span({2x, x − 2}) has dimension 2.
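The determinant computations in Problems 8 and 10 above can be verified symbolically. The following sympy sketch is illustrative only and is not part of the assigned solution; the matrices are the ones written out above.

    from sympy import Matrix, symbols

    a, b, c = symbols('a b c')

    # Problem 8: det(X) as a function of the entries of x3 = (a, b, c)^T
    X = Matrix([[1, 3, a], [1, -1, b], [1, 4, c]])
    print(X.det())                                           # expected: 5*a - b - 4*c

    # Problem 10: X = (x1, x2, x4) is singular, Y = (x1, x2, x5) is not
    print(Matrix([[1, 2, 2], [2, 5, 7], [2, 4, 4]]).det())   # expected: 0
    print(Matrix([[1, 2, 1], [2, 5, 1], [2, 4, 0]]).det())   # expected: -2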

15. Note that S = {ax^3 + bx^2 + cx : a, b, c ∈ R}. Hence a basis of S is given by X = {x^3, x^2, x}.

Note that T = {ax^3 + bx^2 + cx + d : a, b, c, d ∈ R and a + b + c + d = 0}. Since a + b + c + d = 0 if and only if d = −a − b − c, we can also express T as

    T = {ax^3 + bx^2 + cx + (−a − b − c) : a, b, c ∈ R}.

Therefore a basis of T is given by Y = {x^3 − 1, x^2 − 1, x − 1}.

Note that

    S ∩ T = {ax^3 + bx^2 + cx : a, b, c ∈ R} ∩ {ax^3 + bx^2 + cx + (−a − b − c) : a, b, c ∈ R}
          = {ax^3 + bx^2 + cx : a, b, c ∈ R and a + b + c = 0}
          = {ax^3 + bx^2 + (−a − b)x : a, b ∈ R}.

So a basis of S ∩ T is given by {x^3 − x, x^2 − x}.

18. Let U and V be subspaces of R^n with the property that U ∩ V = {0}. If either U = {0} or V = {0}, then the result is obvious. So assume that both subspaces are nontrivial, with dim(U) = k > 0 and dim(V) = r > 0. Let {u1, ..., uk} be a basis for U and let {v1, ..., vr} be a basis for V. The vectors u1, ..., uk, v1, ..., vr span U + V. We claim that these vectors form a basis for U + V, and hence that dim(U + V) = dim(U) + dim(V) = k + r.

To show this we must show that the vectors are linearly independent, i.e. that if

    c1 u1 + ··· + ck uk + c_{k+1} v1 + ··· + c_{k+r} vr = 0,

then c1 = ··· = c_{k+r} = 0. Now, if we set

    u = c1 u1 + ··· + ck uk   and   v = c_{k+1} v1 + ··· + c_{k+r} vr,

then the previous equation becomes u + v = 0. This implies u = −v, and hence both u and v lie in U ∩ V = {0}. Thus we have

    u = c1 u1 + ··· + ck uk = 0,
    v = c_{k+1} v1 + ··· + c_{k+r} vr = 0.

So, by the independence of u1, ..., uk and the independence of v1, ..., vr, it follows that

    c1 = ··· = c_{k+r} = 0.
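The polynomial computations in Problems 14 and 15 above can also be checked by passing to coordinate vectors. The sketch below is illustrative only and not part of the assigned solution; each polynomial is encoded by its coefficients with respect to the monomial basis, and ranks are computed with sympy.

    from sympy import Matrix

    # Problem 14: x, x - 1, x^2 + 1, x^2 - 1 span a 3-dimensional space
    M = Matrix([[0, 1, 0],     # x
                [-1, 1, 0],    # x - 1
                [1, 0, 1],     # x^2 + 1
                [-1, 0, 1]])   # x^2 - 1   (coefficients of 1, x, x^2)
    print(M.rank())            # expected: 3

    # Problem 15: Y = {x^3 - 1, x^2 - 1, x - 1} is independent and lies in T
    Y = Matrix([[-1, 0, 0, 1],   # x^3 - 1  (coefficients of 1, x, x^2, x^3)
                [-1, 0, 1, 0],   # x^2 - 1
                [-1, 1, 0, 0]])  # x - 1
    print(Y.rank())                          # expected: 3, so Y is linearly independent
    print([sum(row) for row in Y.tolist()])  # each sum is 0, so a + b + c + d = 0 holds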

Section 3.6

1. The reduced row echelon form of the matrix is

    [ 1  0  2 ]
    [ 0  1  0 ]
    [ 0  0  0 ]

Thus (1, 0, 2) and (0, 1, 0) form a basis for the row space. The first and second columns of the original matrix form a basis for the column space:

    a1 = (1, 2, 4)^T   and   a2 = (3, 1, 7)^T.

The reduced row echelon form involves one free variable, and hence the nullspace will have dimension 1. Setting x3 = 1, we get x1 = −2 and x2 = 0. Thus (−2, 0, 1)^T is a basis for the nullspace.

The reduced row echelon form of the next matrix is

    [ 1  0  0  −10/7 ]
    [ 0  1  0   −2/7 ]
    [ 0  0  1     0  ]

Clearly then, the set {(1, 0, 0, −10/7), (0, 1, 0, −2/7), (0, 0, 1, 0)} is a basis for the row space. Since the reduced row echelon form of the matrix involves one free variable, the nullspace will have dimension 1. Setting the free variable x4 = 1, we get x1 = 10/7, x2 = 2/7, x3 = 0. Thus {(10/7, 2/7, 0, 1)^T} is a basis for the nullspace. The dimension of the column space equals the rank of the matrix, which is 3. Thus the column space must be R^3, and we can take as our basis the standard basis {e1, e2, e3}.

The reduced row echelon form of the last matrix is

    [ 1  0  0  −0.65 ]
    [ 0  1  0  −1.05 ]
    [ 0  0  1  −0.75 ]

The set {(1, 0, 0, −0.65), (0, 1, 0, −1.05), (0, 0, 1, −0.75)} is a basis for the row space. The set {(0.65, 1.05, 0.75, 1)^T} is a basis for the nullspace. As in the previous part, the column space is R^3 and we can take {e1, e2, e3} as our basis.

2. Note that

    (2, 2, 4)^T = 2(1, 1, 2)^T   and   (3, 2, 5)^T = (1, 1, 2)^T + (2, 1, 3)^T.

Also, it is obvious that {(1, 1, 2)^T, (2, 1, 3)^T} is a linearly independent set. Therefore the dimension of the space spanned by the given four vectors is 2.
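For Problem 1 above, the first matrix can be reconstructed from the solution itself: its first two columns are a1 = (1, 2, 4)^T and a2 = (3, 1, 7)^T, and the stated reduced row echelon form forces the third column to equal 2a1. A quick sympy check of that row reduction and nullspace (illustrative only, under this reconstruction):

    from sympy import Matrix

    A = Matrix([[1, 3, 2],
                [2, 1, 4],
                [4, 7, 8]])
    print(A.rref()[0])     # expected rows: (1, 0, 2), (0, 1, 0), (0, 0, 0)
    print(A.nullspace())   # expected: a single basis vector, (-2, 0, 1)^T
    print(A.rank())        # expected: 2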

3. The reduced row echelon form of A is given by

    U = [ 1  2  0  5  −3  0 ]
        [ 0  0  1 −1   2  0 ]
        [ 0  0  0  0   0  1 ]

The lead variables correspond to columns 1, 3 and 6, and the free variables to columns 2, 4 and 5. Thus a1, a3 and a6 form a basis for the column space of A. The remaining column vectors satisfy the following dependency relationships:

    a2 = 2a1,   a4 = 5a1 − a3,   a5 = −3a1 + 2a3.

8. Since null(A) = 0, by the dimension theorem rank(A) = n. Hence the column vectors of A are linearly independent. However, since m > n, the column vectors of A cannot span R^m.

If b is not in the column space of A, then {a1, a2, ..., an, b} is a linearly independent set, and therefore the system Ax = b has no solution. If b is in the column space of A, then there exist scalars x1, x2, ..., xn such that

    x1 a1 + ··· + xn an = b,

and therefore the system Ax = b has a solution.

10. If Ac = Ad, then A(c − d) = 0.

Suppose rank(A) = n. Then by the dimension theorem, null(A) = n − n = 0, so the nullspace of A is N(A) = {0}. Therefore c − d = 0 and hence c = d.

Suppose rank(A) < n. By the dimension theorem, null(A) = n − rank(A) > 0, so there exists at least one nonzero vector v1 such that span({v1}) ⊆ N(A). Therefore we cannot conclude that c − d = 0, and hence c and d may not be equal.

12. Since A and B differ only by a product of elementary matrices E = E1 ··· En (i.e. B = EA), the system of linear equations Ax = 0 is equivalent to Bx = EAx = E0 = 0. Therefore the nullspaces of A and B are equal, and hence null(A) = null(B). By the dimension theorem, rank(A) = rank(B). Therefore the dimension of the column space of A equals that of the column space of B.

Let

    A = [ 1  1 ]   and   B = [ 1  1 ]
        [ 1  1 ]             [ 0  0 ]

Then clearly A and B are row equivalent. Note that the column space of A is span((1, 1)^T), but the column space of B is span((1, 0)^T). Therefore the column spaces of A and B may not be the same.
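The counterexample in Problem 12 above can be checked directly; the sketch below is illustrative only, confirming that A and B have the same rank and nullspace but different column spaces.

    from sympy import Matrix

    A = Matrix([[1, 1], [1, 1]])
    B = Matrix([[1, 1], [0, 0]])
    print(A.rank(), B.rank())               # expected: 1 1
    print(A.nullspace() == B.nullspace())   # expected: True, both spanned by (-1, 1)^T
    print(A.columnspace())                  # [Matrix([[1], [1]])]
    print(B.columnspace())                  # [Matrix([[1], [0]])]  -- a different subspace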

14. We have

    a3 = 2a1 + a2 = (2, 7, 11, 13)^T   and   a4 = a1 + 4a2 = (1, 7, 30, 3)^T.

16. If A is 5 × 8 with rank 5, then the column space of A will be R^5. So, by the Consistency Theorem, the system Ax = b will be consistent for any b in R^5.

Since A has 8 columns, its reduced row echelon form will involve 3 free variables. A consistent system with free variables must have infinitely many solutions.

18. Since A is 5 × 3 with rank 3, its nullity is 0. Therefore N(A) = {0}.

If c1 y1 + c2 y2 + c3 y3 = 0, then

    c1 Ax1 + c2 Ax2 + c3 Ax3 = 0,   i.e.   A(c1 x1 + c2 x2 + c3 x3) = 0,

and it follows that c1 x1 + c2 x2 + c3 x3 is in N(A). However, we know from the first part that N(A) = {0}. Therefore

    c1 x1 + c2 x2 + c3 x3 = 0.

Since x1, x2, x3 are linearly independent, it follows that c1 = c2 = c3 = 0, and hence y1, y2, y3 are linearly independent.

Since dim(R^5) = 5, it takes 5 linearly independent vectors to span the vector space. The three vectors y1, y2, y3 therefore do not span R^5 and hence cannot form a basis for R^5.

19. Given that A is m × n with rank n and y = Ax where x ≠ 0: if y = 0, then

    x1 a1 + x2 a2 + ··· + xn an = 0,

which would imply that the column vectors of A are linearly dependent. Since A has rank n, we know that its column vectors must be linearly independent. Therefore y cannot equal 0.
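The argument in Problem 18 above can be illustrated numerically. The sketch below is not part of the assigned solution; it uses a made-up 5×3 matrix of rank 3 together with made-up linearly independent vectors x1, x2, x3, and checks that the images yi = A xi come out linearly independent, as the proof predicts.

    import numpy as np

    A = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [0, 0, 1],
                  [1, 1, 1],
                  [2, 3, 4]])     # a 5x3 matrix of rank 3 (example data)
    X = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [0, 1, 1]])     # columns are x1, x2, x3 (linearly independent)
    Y = A @ X                     # columns are y1 = A x1, y2 = A x2, y3 = A x3
    print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(X), np.linalg.matrix_rank(Y))
    # expected: 3 3 3, so y1, y2, y3 are linearly independent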

20. If the system Ax = b is consistent, then b is in the column space of A. Therefore the column space of (A | b) will equal the column space of A. Since the rank of a matrix equals the dimension of its column space, it follows that the rank of (A | b) equals the rank of A.

Conversely, if (A | b) and A have the same rank, then b must be in the column space of A: if b were not in the column space of A, then the rank of (A | b) would equal rank(A) + 1.

22. If x ∈ N(A), then BAx = B0 = 0 and hence x ∈ N(BA). Thus N(A) is a subspace of N(BA). On the other hand, if x ∈ N(BA), then B(Ax) = BAx = 0 and hence Ax ∈ N(B). But N(B) = {0} since B is nonsingular. Therefore Ax = 0 and hence x ∈ N(A). Thus BA and A have the same nullspace. It follows from the dimension theorem that

    rank(A) = n − dim(N(A)) = n − dim(N(BA)) = rank(BA).

By the first part, left multiplication by a nonsingular matrix does not alter the rank. Thus

    rank(A) = rank(A^T) = rank(C^T A^T) = rank((AC)^T) = rank(AC).

24. If N(A − B) = R^n, then the nullity of A − B is n, and consequently the rank of A − B must be 0. Therefore

    A − B = O,   that is,   A = B.

Section 4.1

3. For any α ≠ 1 and a ≠ 0,

    L(αx) = αx + a   while   αL(x) = αx + αa ≠ αx + a.

Hence the translation L(x) = x + a is not a linear operator.

Remark: students can also show that translation fails to preserve vector addition.

4. Let

    u1 = (1, 2)^T,   u2 = (1, −1)^T,   x = (7, 5)^T.
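The failure of homogeneity for the translation in Problem 3 above can be seen numerically; the sketch below is illustrative only, with an arbitrarily chosen translation vector a and input x.

    import numpy as np

    a = np.array([1.0, 1.0])      # a nonzero translation vector (sample choice)
    L = lambda v: v + a           # the translation L(x) = x + a
    x = np.array([2.0, 3.0])
    alpha = 2.0
    print(L(alpha * x))           # [5. 7.]
    print(alpha * L(x))           # [6. 8.]  -- different, so the translation is not linear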

To determine L(x) we must first express x as a linear combination

    x = c1 u1 + c2 u2.

To do this we must solve the system Uc = x for c, where U = (u1, u2). The solution is c = (4, 3)^T, and it follows that

    L(x) = L(4u1 + 3u2) = 4L(u1) + 3L(u2) = (7, 18)^T.

6. (a) For any x1, x2, y1, y2 ∈ R,

    L((x1, x2) + (y1, y2)) = L((x1 + y1, x2 + y2)) = (x1 + y1, x2 + y2, 1),

but

    L((x1, x2)) + L((y1, y2)) = (x1, x2, 1) + (y1, y2, 1) = (x1 + y1, x2 + y2, 2).

Therefore L is not a linear transformation.

(b) For any x1, x2, y1, y2 ∈ R,

    L((x1, x2) + (y1, y2)) = L((x1 + y1, x2 + y2)) = (x1 + y1, x2 + y2, (x1 + y1) + 2(x2 + y2))
                           = (x1, x2, x1 + 2x2) + (y1, y2, y1 + 2y2)
                           = L((x1, x2)) + L((y1, y2)).

Also, for any α ∈ R,

    αL((x1, x2)) = α(x1, x2, x1 + 2x2) = (αx1, αx2, αx1 + 2αx2) = L((αx1, αx2)) = L(α(x1, x2)).

Therefore L is a linear transformation.

(c) For any x1, x2, y1, y2 ∈ R,

    L((x1, x2) + (y1, y2)) = (x1 + y1, 0, 0) = (x1, 0, 0) + (y1, 0, 0) = L((x1, x2)) + L((y1, y2)).

Also, for any α ∈ R,

    αL((x1, x2)) = α(x1, 0, 0) = (αx1, 0, 0) = L(α(x1, x2)).

Therefore L is a linear transformation.

(d) For any α ∈ R \ {0, 1} and x1, x2 ∈ R \ {0},

    αL((x1, x2)) = α(x1, x2, x1^2 + x2^2) = (αx1, αx2, αx1^2 + αx2^2),

but

    L(α(x1, x2)) = (αx1, αx2, (αx1)^2 + (αx2)^2) = (αx1, αx2, α^2 x1^2 + α^2 x2^2).

Hence L is not a linear transformation.
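The coordinate vector c = (4, 3)^T in Problem 4 above can be recovered by solving Uc = x with U = (u1, u2); a minimal sympy sketch (illustrative only, using the vectors as written in the solution):

    from sympy import Matrix

    U = Matrix([[1, 1],
                [2, -1]])   # columns u1 = (1, 2)^T and u2 = (1, -1)^T
    x = Matrix([7, 5])
    c = U.solve(x)
    print(c)                # expected: Matrix([[4], [3]]), i.e. x = 4*u1 + 3*u2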

8. For L(A) = CA + AC: for any A ∈ R^{n×n} and α ∈ R,

    L(αA) = C(αA) + (αA)C = α(CA + AC) = αL(A),

and for any A, B ∈ R^{n×n},

    L(A + B) = C(A + B) + (A + B)C = CA + CB + AC + BC = (CA + AC) + (CB + BC) = L(A) + L(B).

Therefore L is a linear operator.

For L(A) = C^2 A: for any A, B ∈ R^{n×n} and α, β ∈ R,

    L(αA + βB) = C^2(αA + βB) = αC^2 A + βC^2 B = αL(A) + βL(B).

Therefore L is a linear operator.

For L(A) = A^2 C: if C ≠ O, then L is not a linear operator. For example,

    L(2I) = (2I)^2 C = 4C   but   2L(I) = 2C.

9. For L(p(x)) = xp(x): for any p, q ∈ P2 and any x ∈ R,

    L((p + q)(x)) = x(p + q)(x) = xp(x) + xq(x) = L(p(x)) + L(q(x)),

and for any α ∈ R,

    L(αp(x)) = x(αp)(x) = α·xp(x) = αL(p(x)).

Therefore L is a linear transformation.

For L(p(x)) = x^2 + p(x): for any p ∈ P2,

    L(2p(x)) = x^2 + (2p)(x)   but   2L(p(x)) = 2(x^2 + p(x)) = 2x^2 + (2p)(x),

and these are not equal. Therefore L is not a linear transformation.

For L(p(x)) = p(x) + xp(x) + x^2 p''(x): for any p, q ∈ P2 and any x ∈ R,

    L((p + q)(x)) = (p + q)(x) + x(p + q)(x) + x^2 (p + q)''(x)
                  = p(x) + xp(x) + x^2 p''(x) + q(x) + xq(x) + x^2 q''(x)
                  = L(p(x)) + L(q(x)),

and for any α ∈ R,

    L(αp(x)) = (αp)(x) + x(αp)(x) + x^2 (αp)''(x) = α[p(x) + xp(x) + x^2 p''(x)] = αL(p(x)).

Hence L is a linear transformation.
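The second map in Problem 9 above, L(p(x)) = x^2 + p(x), can be checked symbolically; the sketch below is illustrative only and uses the sample polynomial p(x) = x, an arbitrary choice made here for concreteness (any p exhibits the same failure).

    from sympy import symbols, simplify

    x = symbols('x')
    L = lambda p: x**2 + p            # the map p(x) |-> x^2 + p(x)
    p = x                             # sample polynomial
    print(simplify(L(2*p) - 2*L(p)))  # expected: -x**2, so L(2p) != 2*L(p)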

12. When n = 1, L(α1 v1) = α1 L(v1). Assume the result is true for any linear combination of k vectors, and apply L to a linear combination of k + 1 vectors:

    L(α1 v1 + ··· + αk vk + α_{k+1} v_{k+1}) = L([α1 v1 + ··· + αk vk] + [α_{k+1} v_{k+1}])
        = L(α1 v1 + ··· + αk vk) + L(α_{k+1} v_{k+1})
        = α1 L(v1) + ··· + αk L(vk) + α_{k+1} L(v_{k+1}).

The result then follows by mathematical induction.

13. If v is any element of V, then

    v = α1 v1 + α2 v2 + ··· + αn vn.

Since L1(vi) = L2(vi) for every i = 1, ..., n, it follows that

    L1(v) = α1 L1(v1) + α2 L1(v2) + ··· + αn L1(vn)
          = α1 L2(v1) + α2 L2(v2) + ··· + αn L2(vn)
          = L2(α1 v1 + α2 v2 + ··· + αn vn)
          = L2(v).

16. If v1, v2 ∈ V, then

    L(αv1 + βv2) = L2(L1(αv1 + βv2)) = L2(αL1(v1) + βL1(v2)) = αL2(L1(v1)) + βL2(L1(v2)) = αL(v1) + βL(v2).

Therefore L is a linear transformation.

17. For any x = (x1, x2, x3) ∈ R^3, L(x) = (x3, x2, x1) = (0, 0, 0) if and only if x1 = x2 = x3 = 0. Hence ker(L) = {0}. By the dimension theorem, null(L) = 0 implies rank(L) = dim(R^3) = 3. Therefore range(L) = R^3.

For any x = (x1, x2, x3) ∈ R^3, L(x) = (x1, x2, 0) = (0, 0, 0) if and only if x1 = x2 = 0 and x3 ∈ R. Therefore ker(L) = span({e3}). Then, for any (x1, x2, 0) ∈ R^3, we have (x1, x2, 1) ∈ R^3 and L(x1, x2, 1) = (x1, x2, 0). Hence range(L) = span({e1, e2}).

For any x = (x1, x2, x3) ∈ R^3, L(x) = (x1, x1, x1) = (0, 0, 0) if and only if x1 = 0 and x2, x3 ∈ R. Therefore ker(L) = span({e2, e3}). For any (x1, x1, x1) ∈ R^3, we have (x1, 0, 0) ∈ R^3 and L(x1, 0, 0) = (x1, x1, x1). Hence range(L) = span({(1, 1, 1)^T}).
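For the second operator in Problem 17 above, L(x) = (x1, x2, 0)^T, the kernel and range can be read off from its standard matrix; a small sympy check (illustrative only):

    from sympy import Matrix

    M = Matrix([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 0]])    # standard matrix of L(x) = (x1, x2, 0)^T
    print(M.nullspace())        # expected: [(0, 0, 1)^T],  i.e. ker(L) = span({e3})
    print(M.columnspace())      # expected: [e1, e2],       i.e. range(L) = span({e1, e2})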

19. For L(p(x)) = xp'(x): for any p ∈ P3, L(p(x)) = xp'(x) = 0 if and only if p'(x) = 0 for all x ∈ R. Therefore ker(L) = P0, the set of all constant polynomials.

For any q ∈ P3 such that q(0) = 0, there exist a, b, c ∈ R such that q(x) = ax^3 + bx^2 + cx. Let p(x) = (a/3)x^3 + (b/2)x^2 + cx; then p ∈ P3 and L(p) = q. On the other hand, for any q ∈ P3 such that q(0) ≠ 0, there exist d ∈ R \ {0} and q0 ∈ P3 such that q0(0) = 0 and q(x) = q0(x) + d. If there existed p ∈ P3 with L(p) = q, then q(0) = 0·p'(0) = 0, i.e. d = 0, a contradiction. Therefore range(L) = {q ∈ P3 : q(0) = 0}.

For L(p(x)) = p(x) − p'(x): if p(x) = ax^2 + bx + c is in ker(L), then

    L(p) = (ax^2 + bx + c) − (2ax + b) = ax^2 + (b − 2a)x + (c − b)

must equal the zero polynomial z(x) = 0x^2 + 0x + 0. Equating coefficients we see that a = b = c = 0, and hence ker(L) = {0}. The range of L is all of P3. To see this, note that if p(x) = ax^2 + bx + c is any vector in P3 and we define

    q(x) = ax^2 + (b + 2a)x + (c + b + 2a),

then

    L(q(x)) = (ax^2 + (b + 2a)x + c + b + 2a) − (2ax + b + 2a) = ax^2 + bx + c = p(x).

For L(p(x)) = p(0)x + p(1): clearly ker(L) = {p ∈ P3 : p(0) = p(1) = 0}. Also, for any a, b ∈ R, define p(x) = (b − a)x + a; then p ∈ P3, p(0) = a and p(1) = b. Hence L(p(x)) = p(0)x + p(1) = ax + b. Therefore range(L) = P1.

21. Suppose L is one-to-one and v ∈ ker(L). Then L(v) = 0_W and L(0_V) = 0_W. Since L is one-to-one, it follows that v = 0_V. Therefore ker(L) = {0_V}.

Conversely, suppose ker(L) = {0_V} and L(v1) = L(v2). Then

    L(v1 − v2) = L(v1) − L(v2) = 0_W.

Therefore v1 − v2 ∈ ker(L), and hence v1 − v2 = 0_V, i.e. v1 = v2. So L is one-to-one.

22. To show that L maps R^3 onto R^3, we must show that for any vector y ∈ R^3 there exists a vector x ∈ R^3 such that L(x) = y. This is equivalent to showing that the linear system

    x1 = y1
    x1 + x2 = y2
    x1 + x2 + x3 = y3

is consistent, and this system is consistent since the coefficient matrix is nonsingular.
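The coefficient matrix of the linear system in Problem 22 above is lower triangular with ones on the diagonal, so it is nonsingular; a one-line sympy check (illustrative only):

    from sympy import Matrix

    A = Matrix([[1, 0, 0],
                [1, 1, 0],
                [1, 1, 1]])   # coefficient matrix of the system in Problem 22
    print(A.det())             # expected: 1, so the system is consistent for every y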

25. If p = ax^2 + bx + c ∈ P3, then D(p) = 2ax + b. Thus

    D(P3) = span({1, x}) = P2.

The operator is not one-to-one, for if p1(x) = ax^2 + bx + c1 and p2(x) = ax^2 + bx + c2 where c2 ≠ c1, then D(p1) = D(p2).

The subspace S consists of all polynomials of the form ax^2 + bx. If p1 = a1 x^2 + b1 x, p2 = a2 x^2 + b2 x and D(p1) = D(p2), then

    2a1 x + b1 = 2a2 x + b2,

and it follows that a1 = a2 and b1 = b2. Thus p1 = p2, and hence D is one-to-one on S. D does not map S onto P3, since D(S) = P2.
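The claims in Problem 25 can be seen concretely with sympy's differentiation; the sketch below is illustrative only, and the sample polynomials are arbitrary choices that differ only in their constant terms (first pair) or lie in S (second pair).

    from sympy import symbols, diff

    x = symbols('x')
    p1 = x**2 + 3*x + 1
    p2 = x**2 + 3*x + 5                 # same derivative as p1, so D is not one-to-one on P3
    print(diff(p1, x) == diff(p2, x))   # expected: True

    q1 = x**2 + 3*x                     # elements of S have zero constant term
    q2 = 2*x**2 + 3*x
    print(diff(q1, x) == diff(q2, x))   # expected: False -- distinct elements of S map to distinct images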