
Matrices and vectors

Matrix and vector:
\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \stackrel{\text{def}}{=} (a_{ij}) \in \mathbb{R}^{m \times n}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix} \in \mathbb{R}^{m} \]

Matrix and vectors in linear equations: example

\[ \begin{aligned} E_1:&\;\; x_1 + x_2 \;\;\;\;\;\;\;\;\, + 3x_4 = 4, \\ E_2:&\;\; 2x_1 + x_2 - x_3 + x_4 = 1, \\ E_3:&\;\; 3x_1 - x_2 - x_3 + 2x_4 = -3, \\ E_4:&\;\; -x_1 + 2x_2 + 3x_3 - x_4 = 4 \end{aligned} \]

coefficient matrix, unknown vector, and right hand side vector (RHS):
\[ A \stackrel{\text{def}}{=} \begin{pmatrix} 1 & 1 & 0 & 3 \\ 2 & 1 & -1 & 1 \\ 3 & -1 & -1 & 2 \\ -1 & 2 & 3 & -1 \end{pmatrix}, \qquad x \stackrel{\text{def}}{=} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix}, \qquad b \stackrel{\text{def}}{=} \begin{pmatrix} 4 \\ 1 \\ -3 \\ 4 \end{pmatrix} \]
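Entered in Octave/MATLAB (the signs of the entries follow the reconstruction above; Octave's backslash solver, introduced later in these notes, recovers the solution):

    A = [ 1  1  0  3;
          2  1 -1  1;
          3 -1 -1  2;
         -1  2  3 -1 ];
    b = [4; 1; -3; 4];
    x = A\b              % gives [-1; 2; 0; 1]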

General linear equations

\[ \begin{aligned} E_1:&\;\; a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1, \\ E_2:&\;\; a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2, \\ &\;\;\vdots \\ E_n:&\;\; a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n = b_n, \end{aligned} \]

\[ A \stackrel{\text{def}}{=} \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad b \stackrel{\text{def}}{=} \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}, \qquad x \stackrel{\text{def}}{=} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \]

Equation E_j maps to row j of A:
\[ \begin{pmatrix} a_{j1} & a_{j2} & \cdots & a_{jn} \end{pmatrix}, \quad 1 \le j \le n \]
Variable x_k relates to column k of A:
\[ \begin{pmatrix} a_{1k} \\ a_{2k} \\ \vdots \\ a_{nk} \end{pmatrix}, \quad 1 \le k \le n \]

Gaussian Elimination with partial pivoting (I)

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \]

Equation E_j maps to row j of A; x_1 relates to column 1 of A: \( (a_{11}, a_{21}, \dots, a_{n1})^T \).

pivoting: choose the largest entry in absolute value:
\[ \mathrm{piv} \stackrel{\text{def}}{=} \arg\max_{1 \le j \le n} |a_{j1}|, \qquad \bigl( |a_{\mathrm{piv},1}| = \max_{1 \le j \le n} |a_{j1}| \bigr) \]
exchange equations E_1 and E_piv (E_1 <-> E_piv)

Gaussian Elimination with partial pivoting (II)

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \]

elimination of x_1 from E_2 through E_n:
\[ \Bigl( E_j - \frac{a_{j1}}{a_{11}}\, E_1 \Bigr) \to (E_j), \qquad 2 \le j \le n \]
new row j of A:
\[ \begin{pmatrix} a_{j1} & a_{j2} & \cdots & a_{jn} \end{pmatrix} - \frac{a_{j1}}{a_{11}} \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \end{pmatrix} = \begin{pmatrix} 0 & a_{j2} - \frac{a_{j1}}{a_{11}} a_{12} & \cdots & a_{jn} - \frac{a_{j1}}{a_{11}} a_{1n} \end{pmatrix}, \quad 2 \le j \le n \]
new j-th component of b:
\[ b_j - \frac{a_{j1}}{a_{11}}\, b_1 \]

Gaussian Elimination with partial pivoting (III)

\[ \text{new } A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ 0 & a_{n2} & \cdots & a_{nn} \end{pmatrix} \;\;\text{(overwrites } A\text{)}, \quad \text{where } a_{jk} = a_{jk} - \frac{a_{j1}}{a_{11}}\, a_{1k}, \quad 2 \le j \le n, \; 2 \le k \le n \]

\[ \text{new } b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} \;\;\text{(overwrites } b\text{)}, \quad \text{where } b_j = b_j - \frac{a_{j1}}{a_{11}}\, b_1, \quad 2 \le j \le n \]

(the bold-faced entries on the slide are those computed by the elimination process)

Gaussian Elimination with partial pivoting (IV)

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ 0 & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} \]

repeat the same process on E_s for s = 2, ..., n-1:

pivoting: choose the largest entry in absolute value:
\[ \mathrm{piv} \stackrel{\text{def}}{=} \arg\max_{s \le j \le n} |a_{js}|, \qquad \bigl( |a_{\mathrm{piv},s}| = \max_{s \le j \le n} |a_{js}| \bigr) \]
exchange equations E_s and E_piv (E_s <-> E_piv)

eliminating x_s from E_{s+1} through E_n:
\[ \Bigl( E_j - \frac{a_{js}}{a_{ss}}\, E_s \Bigr) \to (E_j), \qquad s+1 \le j \le n \]
\[ a_{jk} = a_{jk} - \frac{a_{js}}{a_{ss}}\, a_{sk} \;\;\text{(overwrite)}, \quad s+1 \le j \le n, \; s+1 \le k \le n, \qquad b_j = b_j - \frac{a_{js}}{a_{ss}}\, b_s \;\;\text{(overwrite)}, \quad s+1 \le j \le n \]

Gaussian Elimination with partial pivoting: summary

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} \]

for s = 1, 2, ..., n-1:
  pivoting: choose the largest entry in absolute value:
  \[ \mathrm{piv} \stackrel{\text{def}}{=} \arg\max_{s \le j \le n} |a_{js}|, \qquad E_s \leftrightarrow E_{\mathrm{piv}} \]
  eliminating x_s from E_{s+1} through E_n:
  \[ a_{jk} = a_{jk} - \frac{a_{js}}{a_{ss}}\, a_{sk} \;\;\text{(overwrite)}, \quad s+1 \le j \le n, \; s+1 \le k \le n, \qquad b_j = b_j - \frac{a_{js}}{a_{ss}}\, b_s \;\;\text{(overwrite)}, \quad s+1 \le j \le n \]
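A minimal Octave/MATLAB sketch of the forward phase summarized above (the function name gepp_forward is an illustrative choice; the in-place overwriting of A and b follows the slides):

    function [A, b] = gepp_forward(A, b)
      % Forward phase of Gaussian Elimination with partial pivoting.
      % Overwrites A with an upper-triangular matrix and updates b accordingly.
      n = length(b);
      for s = 1:n-1
        % pivoting: pick the row with the largest |a_js| for j = s..n
        [~, p] = max(abs(A(s:n, s)));
        piv = p + s - 1;
        if piv ~= s
          A([s piv], :) = A([piv s], :);   % E_s <-> E_piv
          b([s piv])    = b([piv s]);
        end
        % eliminate x_s from E_{s+1} through E_n
        for j = s+1:n
          r = A(j, s) / A(s, s);           % ratio a_js / a_ss
          A(j, s:n) = A(j, s:n) - r * A(s, s:n);
          b(j)      = b(j)      - r * b(s);
        end
      end
    end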

Equations after Gaussian Elimination, backward substitution

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & a_{nn} \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} \]

for s = n, n-1, ..., 1:
\[ x_s = \frac{b_s - \sum_{k=s+1}^{n} a_{sk}\, x_k}{a_{ss}} \]

matlab command: x = A\b
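A matching Octave/MATLAB sketch of the backward recurrence above (back_substitution is an illustrative name; it assumes A has already been reduced to upper-triangular form, for example by the gepp_forward sketch earlier):

    function x = back_substitution(A, b)
      % Backward substitution for an upper-triangular system A x = b.
      n = length(b);
      x = zeros(n, 1);
      for s = n:-1:1
        x(s) = (b(s) - A(s, s+1:n) * x(s+1:n)) / A(s, s);
      end
    end

    % usage sketch: forward elimination, then backward substitution,
    % compared against the built-in solver x = A\b
    n = 200;  A = randn(n);  b = randn(n, 1);
    [U, c] = gepp_forward(A, b);
    x = back_substitution(U, c);
    norm(x - A\b)              % small, of the order of rounding error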

Gaussian Elimination: cost analysis (I)

for s = 1, 2, ..., n-1:
  pivoting: (n - s + 1 comparisons)
  \[ \mathrm{piv} = \arg\max_{s \le j \le n} |a_{js}|, \qquad E_s \leftrightarrow E_{\mathrm{piv}} \]
  eliminating x_s from E_{s+1} through E_n:
  \[ a_{jk} = a_{jk} - \frac{a_{js}}{a_{ss}}\, a_{sk}, \quad s+1 \le j \le n, \; s+1 \le k \le n, \qquad b_j = b_j - \frac{a_{js}}{a_{ss}}\, b_s, \quad s+1 \le j \le n \]

Counting operations:
- compute piv for each s, and perform the swaps E_s <-> E_piv
- for each s, compute the ratios a_{js}/a_{ss} for each j >= s+1
- for each s, compute a_{jk} for each pair j, k >= s+1
- for each s, compute b_j for each j >= s+1

Do not count integer operations; do not count memory costs.

Gaussian Elimination: cost analysis (II)

for s = 1, 2, ..., n-1:
  pivoting: (n - s + 1 comparisons)
  \[ \mathrm{piv} = \arg\max_{s \le j \le n} |a_{js}|, \qquad E_s \leftrightarrow E_{\mathrm{piv}} \]
  (total comparisons: \( \sum_{s=1}^{n-1} (n - s + 1) = \frac{n(n+1)}{2} - 1 \))
  eliminating x_s from E_{s+1} through E_n:
  \[ a_{jk} = a_{jk} - \frac{a_{js}}{a_{ss}}\, a_{sk}, \quad s+1 \le j \le n, \; s+1 \le k \le n, \qquad b_j = b_j - \frac{a_{js}}{a_{ss}}\, b_s, \quad s+1 \le j \le n \]
  (total cost for computing the ratios \( \{a_{js}/a_{ss}\} \): \( \sum_{s=1}^{n-1} (n - s) = \frac{n(n-1)}{2} \))
  (total cost for computing \( \{a_{jk}\} \): \( \sum_{s=1}^{n-1} 2(n - s)^2 = \frac{n(n-1)(2n-1)}{3} \))
  (total cost for computing \( \{b_j\} \): \( \sum_{s=1}^{n-1} 2(n - s) = n(n-1) \))

about 2(n - s)^2 additions and multiplications for each s;
grand total, up to the n^3 term: \( \frac{2}{3} n^3 \) additions and multiplications
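A quick Octave/MATLAB check of the operation counts above (only the additions, multiplications and divisions of the elimination phase are tallied, matching the slides' convention; the counter names are illustrative):

    % count operations in the elimination phase for a given n
    n = 50;  flops_ratio = 0;  flops_elim = 0;  flops_rhs = 0;
    for s = 1:n-1
      flops_ratio = flops_ratio + (n - s);        % one division per ratio a_js/a_ss
      flops_elim  = flops_elim  + 2*(n - s)^2;    % one mult + one sub per updated a_jk
      flops_rhs   = flops_rhs   + 2*(n - s);      % one mult + one sub per updated b_j
    end
    [flops_ratio, n*(n-1)/2]                      % matches n(n-1)/2
    [flops_elim,  n*(n-1)*(2*n-1)/3]              % matches n(n-1)(2n-1)/3
    [flops_rhs,   n*(n-1)]                        % matches n(n-1)
    2/3*n^3                                       % leading-order total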

Gaussian Elimination: performance (runtime plot) — on a Mac desktop, in Octave, the measured GEPP runtime grows like n^3.

NO scaled partial pivoting

Gaussian Elimination with partial pivoting (plot)

blue: A in R^{n x n}, b in R^n random
red: A in R^{n x n} is the Wilkinson matrix
\[ A = \begin{pmatrix} 1 & & & & 1 \\ -1 & 1 & & & 1 \\ -1 & -1 & 1 & & 1 \\ \vdots & & \ddots & \ddots & \vdots \\ -1 & -1 & \cdots & -1 & 1 \end{pmatrix}, \qquad b \in \mathbb{R}^n \text{ random} \]
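A small Octave/MATLAB sketch constructing the matrix shown above (assuming the reconstruction with -1 strictly below the diagonal and +1 in the last column; wilkinson_growth is an illustrative name, distinct from Octave's built-in wilkinson function, which builds a different matrix):

    function A = wilkinson_growth(n)
      % 1 on the diagonal, -1 strictly below it, +1 in the last column:
      % the classical example of large element growth under partial pivoting.
      A = eye(n) - tril(ones(n), -1);
      A(:, n) = 1;
    end

    A = wilkinson_growth(5)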

Gaussian Elimination with complete pivoting, A in R^{2 x 2}

\[ A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \]

pivoting: choose the largest entry in absolute value:
\[ (ip, jp) \stackrel{\text{def}}{=} \arg\max_{1 \le i, j \le 2} |a_{ij}|, \qquad \bigl( |a_{ip,jp}| = \max_{1 \le i, j \le 2} |a_{ij}| \bigr) \]
exchange equations E_1 and E_ip (E_1 <-> E_ip)
exchange columns/variables 1 and jp
perform Gaussian Elimination without pivoting on the resulting 2 x 2 system

Gaussian Elimination with complete pivoting

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} \]

for s = 1, 2, ..., n-1:
  pivoting: choose the largest entry in absolute value:
  \[ (ip, jp) \stackrel{\text{def}}{=} \arg\max_{s \le i, j \le n} |a_{ij}|, \qquad E_s \leftrightarrow E_{ip}, \quad \text{swap matrix columns/variables } s \text{ and } jp \]
  eliminating x_s from E_{s+1} through E_n:
  \[ a_{jk} = a_{jk} - \frac{a_{js}}{a_{ss}}\, a_{sk} \;\;\text{(overwrite)}, \quad s+1 \le j \le n, \; s+1 \le k \le n, \qquad b_j = b_j - \frac{a_{js}}{a_{ss}}\, b_s \;\;\text{(overwrite)}, \quad s+1 \le j \le n \]

more numerically stable, but too expensive in practice
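An Octave/MATLAB sketch of the pivot search at stage s of complete pivoting (demonstrated on a random system; the permutation vector perm is extra bookkeeping, not on the slides, used to undo the variable swaps afterwards):

    % pivot search at stage s of complete pivoting (demo on a random system)
    n = 5;  A = randn(n);  b = randn(n, 1);  perm = 1:n;  s = 1;
    [colmax, imax] = max(abs(A(s:n, s:n)));      % column-wise maxima of the trailing block
    [~, jmax] = max(colmax);                     % column containing the overall maximum
    ip = imax(jmax) + s - 1;                     % row index of the largest |a_ij|, s <= i,j <= n
    jp = jmax + s - 1;                           % column index of the largest |a_ij|
    A([s ip], :) = A([ip s], :);  b([s ip]) = b([ip s]);        % E_s <-> E_ip
    A(:, [s jp]) = A(:, [jp s]);  perm([s jp]) = perm([jp s]);  % swap variables s and jp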

Matrix Algebra

Definition: Let
\[ A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,m} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,m} \\ \vdots & \vdots & & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,m} \end{pmatrix} \stackrel{\text{def}}{=} (a_{ij}) \in \mathbb{R}^{n \times m}, \qquad B = \begin{pmatrix} b_{1,1} & b_{1,2} & \cdots & b_{1,m} \\ b_{2,1} & b_{2,2} & \cdots & b_{2,m} \\ \vdots & \vdots & & \vdots \\ b_{n,1} & b_{n,2} & \cdots & b_{n,m} \end{pmatrix} \stackrel{\text{def}}{=} (b_{ij}) \in \mathbb{R}^{n \times m}, \]
then
\[ C \stackrel{\text{def}}{=} \alpha A + \beta B = \begin{pmatrix} \alpha a_{1,1} + \beta b_{1,1} & \alpha a_{1,2} + \beta b_{1,2} & \cdots & \alpha a_{1,m} + \beta b_{1,m} \\ \alpha a_{2,1} + \beta b_{2,1} & \alpha a_{2,2} + \beta b_{2,2} & \cdots & \alpha a_{2,m} + \beta b_{2,m} \\ \vdots & \vdots & & \vdots \\ \alpha a_{n,1} + \beta b_{n,1} & \alpha a_{n,2} + \beta b_{n,2} & \cdots & \alpha a_{n,m} + \beta b_{n,m} \end{pmatrix} \in \mathbb{R}^{n \times m} \]

Matrix Algebra: example

Let
\[ A = \begin{pmatrix} 2 & -1 & 7 \\ 3 & 1 & 0 \end{pmatrix}, \qquad B = \begin{pmatrix} 4 & 2 & -8 \\ 0 & 1 & 6 \end{pmatrix} \in \mathbb{R}^{2 \times 3}, \]
then
\[ C \stackrel{\text{def}}{=} A - 2B = \begin{pmatrix} -6 & -5 & 23 \\ 3 & -1 & -12 \end{pmatrix} \in \mathbb{R}^{2 \times 3} \]
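The same linear combination in Octave/MATLAB (matrices copied from the example above; the signs of the entries are part of that reconstruction):

    A = [2 -1 7; 3 1 0];
    B = [4  2 -8; 0 1 6];
    C = A - 2*B          % gives [-6 -5 23; 3 -1 -12]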

Matrix Algebra

Let A, B, C in R^{n x m}, and let λ, µ in R.

Matrix-vector product vs linear equations

\[ \begin{aligned} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1, \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2, \\ &\;\;\vdots \\ a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n &= b_n, \end{aligned} \]

\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \qquad b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}, \qquad x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \]

\[ A\,x \stackrel{\text{def}}{=} \begin{pmatrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n \\ \vdots \\ a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n \end{pmatrix} \]
(and analogously for any A in R^{n x m}, x in R^m)

the equations become: A x = b

Matrix-vector product is dot product

\[ A\,x \stackrel{\text{def}}{=} \begin{pmatrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n \\ \vdots \\ a_{n1} x_1 + a_{n2} x_2 + \cdots + a_{nn} x_n \end{pmatrix}, \qquad (A\,x)_j = a_{j1} x_1 + a_{j2} x_2 + \cdots + a_{jn} x_n = \begin{pmatrix} a_{j1} & a_{j2} & \cdots & a_{jn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \]

Matrix-vector product, example

\[ A = \begin{pmatrix} 3 & 2 \\ 1 & -1 \\ 6 & 4 \end{pmatrix}, \qquad x = \begin{pmatrix} 3 \\ -1 \end{pmatrix}, \qquad A\,x = \begin{pmatrix} 3 & 2 \\ 1 & -1 \\ 6 & 4 \end{pmatrix} \begin{pmatrix} 3 \\ -1 \end{pmatrix} = \begin{pmatrix} 7 \\ 4 \\ 14 \end{pmatrix} \]
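Checking the example in Octave/MATLAB (the entries follow the reconstruction above, whose signs are an assumption consistent with the stated result):

    A = [3 2; 1 -1; 6 4];
    x = [3; -1];
    A*x                  % gives [7; 4; 14]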

Matrix-matrix product

Let
\[ A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{pmatrix} \in \mathbb{R}^{n \times m}, \qquad B = \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1p} \\ b_{21} & b_{22} & \cdots & b_{2p} \\ \vdots & \vdots & & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mp} \end{pmatrix} \in \mathbb{R}^{m \times p} \]

Column partition:
\[ B = \begin{pmatrix} b_1 & b_2 & \cdots & b_p \end{pmatrix}, \qquad b_j \stackrel{\text{def}}{=} \begin{pmatrix} b_{1j} \\ b_{2j} \\ \vdots \\ b_{mj} \end{pmatrix} \in \mathbb{R}^m, \quad j = 1, \dots, p \]

Definition:
\[ A\,B = A \begin{pmatrix} b_1 & b_2 & \cdots & b_p \end{pmatrix} \stackrel{\text{def}}{=} \begin{pmatrix} A\,b_1 & A\,b_2 & \cdots & A\,b_p \end{pmatrix} \in \mathbb{R}^{n \times p} \]

entry-wise formula:
\[ (A\,B)_{jk} = (A\,b_k)_j = \begin{pmatrix} a_{j1} & a_{j2} & \cdots & a_{jm} \end{pmatrix} b_k = \sum_{i=1}^{m} a_{ji}\, b_{ik} \]

Matrix-matrix product, example

Let
\[ A = \begin{pmatrix} 3 & 2 \\ -1 & 1 \\ 1 & 4 \end{pmatrix} \in \mathbb{R}^{3 \times 2}, \qquad B = \begin{pmatrix} 2 & 1 & -1 \\ 3 & 1 & 2 \end{pmatrix} \in \mathbb{R}^{2 \times 3}, \]
then
\[ C = A\,B = \begin{pmatrix} 12 & 5 & 1 \\ 1 & 0 & 3 \\ 14 & 5 & 7 \end{pmatrix} \in \mathbb{R}^{3 \times 3} \]
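The same product in Octave/MATLAB (matrices copied from the example above, with the signs as reconstructed):

    A = [3 2; -1 1; 1 4];
    B = [2 1 -1; 3 1 2];
    C = A*B              % gives [12 5 1; 1 0 3; 14 5 7]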

Theorem: Let A in R^{n x m}, B in R^{m x p}, C in R^{p x k}. Then A (B C) = (A B) C.

Proof:
\[ (A(BC))_{st} = \sum_{i=1}^{m} a_{si}\, (BC)_{it} = \sum_{i=1}^{m} a_{si} \sum_{j=1}^{p} b_{ij}\, c_{jt} = \sum_{j=1}^{p} \Bigl( \sum_{i=1}^{m} a_{si}\, b_{ij} \Bigr) c_{jt} = \sum_{j=1}^{p} (AB)_{sj}\, c_{jt} = ((AB)C)_{st} \]
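A quick numerical illustration of the theorem in Octave/MATLAB (the dimensions are arbitrary choices; the two products agree up to rounding error):

    n = 4; m = 3; p = 5; k = 2;
    A = randn(n, m);  B = randn(m, p);  C = randn(p, k);
    norm(A*(B*C) - (A*B)*C)    % of the order of 1e-15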

Special matrices

square matrix: A in R^{n x n}

diagonal matrix:
\[ D = \begin{pmatrix} d_1 & & & \\ & d_2 & & \\ & & \ddots & \\ & & & d_n \end{pmatrix} \]

identity matrix:
\[ I = \begin{pmatrix} 1 & & & \\ & 1 & & \\ & & \ddots & \\ & & & 1 \end{pmatrix} \]

upper-triangular matrix:
\[ U = \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ & u_{22} & \cdots & u_{2n} \\ & & \ddots & \vdots \\ & & & u_{nn} \end{pmatrix} \]

lower-triangular matrix:
\[ L = \begin{pmatrix} l_{11} & & & \\ l_{21} & l_{22} & & \\ \vdots & \vdots & \ddots & \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{pmatrix} \]
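These special forms correspond to standard Octave/MATLAB constructors; a small sketch (magic(n) is just an arbitrary square matrix used for illustration):

    n = 4;
    D = diag([1 2 3 4]);       % diagonal matrix with d = (1, 2, 3, 4)
    I = eye(n);                % identity matrix
    M = magic(n);              % an arbitrary square matrix
    U = triu(M);               % upper-triangular part of M
    L = tril(M);               % lower-triangular part of M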