Tridiagonal matrices Gérard MEURANT October, 2008
1. Similarity
2. Cholesky factorizations
3. Eigenvalues
4. Inverse
Similarity

Let
\[
T_n = \begin{pmatrix}
\alpha_1 & \omega_1 & & & \\
\beta_1 & \alpha_2 & \omega_2 & & \\
 & \ddots & \ddots & \ddots & \\
 & & \beta_{n-2} & \alpha_{n-1} & \omega_{n-1}\\
 & & & \beta_{n-1} & \alpha_n
\end{pmatrix}
\]
with subdiagonal entries $\beta_i$ and superdiagonal entries $\omega_i$, $i = 1,\dots,n-1$.

Proposition
Assume that the coefficients $\omega_j$, $j = 1,\dots,n-1$ are different from zero and that the products $\beta_j \omega_j$ are positive. Then the matrix $T_n$ is similar to a symmetric tridiagonal matrix. Therefore, its eigenvalues are real.
Proof. Consider $D^{-1} T_n D$, which is similar to $T_n$, where $D$ is a diagonal matrix with diagonal elements $\delta_j$. Take
\[
\delta_1 = 1, \qquad \delta_j^2 = \frac{\beta_{j-1}\cdots\beta_1}{\omega_{j-1}\cdots\omega_1}, \quad j = 2,\dots,n.
\]

Let
\[
J_n = \begin{pmatrix}
\alpha_1 & \beta_1 & & & \\
\beta_1 & \alpha_2 & \beta_2 & & \\
 & \ddots & \ddots & \ddots & \\
 & & \beta_{n-2} & \alpha_{n-1} & \beta_{n-1}\\
 & & & \beta_{n-1} & \alpha_n
\end{pmatrix}
\]
where the values $\beta_j$, $j = 1,\dots,n-1$ are assumed to be nonzero.
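As a numerical illustration (the code and data below are mine, not from the slides): build $D$ from the recurrence $\delta_j = \delta_{j-1}\sqrt{\beta_{j-1}/\omega_{j-1}}$ and check that $D^{-1} T_n D$ is symmetric, with off-diagonal entries $\sqrt{\beta_j \omega_j}$.

```python
# Minimal sketch of the similarity transformation; data chosen so that
# omega_j != 0 and beta_j * omega_j > 0, as the proposition requires.
import numpy as np

alpha = np.array([4.0, 5.0, 6.0, 7.0])   # diagonal of T_n
beta  = np.array([1.0, 2.0, 0.5])        # subdiagonal
omega = np.array([2.0, 1.0, 2.0])        # superdiagonal

T = np.diag(alpha) + np.diag(beta, -1) + np.diag(omega, 1)

# delta_1 = 1, delta_j^2 = (beta_{j-1}...beta_1)/(omega_{j-1}...omega_1)
delta = np.ones(len(alpha))
for j in range(1, len(alpha)):
    delta[j] = delta[j - 1] * np.sqrt(beta[j - 1] / omega[j - 1])
D = np.diag(delta)

S = np.linalg.inv(D) @ T @ D             # similar to T_n
assert np.allclose(S, S.T)               # symmetric tridiagonal
assert np.allclose(np.diag(S, 1), np.sqrt(beta * omega))
```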
Proposition
\[
\det(J_{n+1}) = \alpha_{n+1}\det(J_n) - \beta_n^2\,\det(J_{n-1})
\]
with initial conditions $\det(J_1) = \alpha_1$, $\det(J_2) = \alpha_1\alpha_2 - \beta_1^2$.

The eigenvalues of $J_n$ are the zeros of $\det(J_n - \lambda I)$. The zeros do not depend on the signs of the coefficients $\beta_j$, $j = 1,\dots,n-1$. We suppose $\beta_j > 0$ and we have a Jacobi matrix.
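Applied to $J_n - \lambda I$, the recurrence evaluates the characteristic polynomial. A short sketch (the function name and data are mine):

```python
# Evaluate det(J_n - lam*I) by the three-term recurrence
# p_{k+1} = (alpha_{k+1} - lam) p_k - beta_k^2 p_{k-1}, with p_0 = 1.
import numpy as np

def charpoly(alpha, beta, lam):
    p_prev, p = 1.0, alpha[0] - lam
    for k in range(1, len(alpha)):
        p_prev, p = p, (alpha[k] - lam) * p - beta[k - 1] ** 2 * p_prev
    return p

alpha = [2.0, 2.0, 2.0]
beta = [1.0, 1.0]
J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
assert np.isclose(charpoly(alpha, beta, 0.5),
                  np.linalg.det(J - 0.5 * np.eye(3)))
```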
Cholesky factorizations

Let $\Delta$ be a diagonal matrix with diagonal elements $\delta_j$, $j = 1,\dots,n$ and
\[
L = \begin{pmatrix}
1 & & & \\
l_1 & 1 & & \\
 & \ddots & \ddots & \\
 & & l_{n-1} & 1
\end{pmatrix},
\qquad
J_n = L\,\Delta\,L^T,
\]
\[
\delta_1 = \alpha_1, \quad l_1 = \beta_1/\delta_1,
\]
\[
\delta_j = \alpha_j - \frac{\beta_{j-1}^2}{\delta_{j-1}}, \quad j = 2,\dots,n,
\qquad
l_j = \beta_j/\delta_j, \quad j = 2,\dots,n-1.
\]
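A direct transcription of the recurrence (a sketch; `ldlt` is my name, not from the slides):

```python
# Compute J_n = L Delta L^T for a symmetric tridiagonal J_n given by
# its diagonal alpha and off-diagonal beta (assumes n >= 2).
import numpy as np

def ldlt(alpha, beta):
    n = len(alpha)
    delta = np.empty(n)
    l = np.empty(n - 1)
    delta[0] = alpha[0]
    l[0] = beta[0] / delta[0]
    for j in range(1, n):
        delta[j] = alpha[j] - beta[j - 1] ** 2 / delta[j - 1]
        if j < n - 1:
            l[j] = beta[j] / delta[j]
    return delta, l

alpha = np.array([4.0, 4.0, 4.0, 4.0])
beta = np.array([1.0, 1.0, 1.0])
delta, l = ldlt(alpha, beta)
L = np.eye(4) + np.diag(l, -1)
J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
assert np.allclose(L @ np.diag(delta) @ L.T, J)
```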
The factorization can be completed if no $\delta_j$ is zero for $j = 1,\dots,n-1$. This does not happen if the matrix $J_n$ is positive definite: all the elements $\delta_j$ are positive and the genuine Cholesky factorization can be obtained from $J_n = L_C\,(L_C)^T$ with $L_C = L\,\Delta^{1/2}$, which is
\[
L_C = \begin{pmatrix}
\sqrt{\delta_1} & & & \\
\dfrac{\beta_1}{\sqrt{\delta_1}} & \sqrt{\delta_2} & & \\
 & \ddots & \ddots & \\
 & & \dfrac{\beta_{n-1}}{\sqrt{\delta_{n-1}}} & \sqrt{\delta_n}
\end{pmatrix}
\]
The factorization can also be written as $J_n = L_\Delta\,\Delta^{-1}\,(L_\Delta)^T$ with
\[
L_\Delta = \begin{pmatrix}
\delta_1 & & & \\
\beta_1 & \delta_2 & & \\
 & \ddots & \ddots & \\
 & & \beta_{n-1} & \delta_n
\end{pmatrix}
\]
Clearly, the only elements we have to compute and store are the diagonal elements $\delta_j$, $j = 1,\dots,n$. To solve a linear system $J_n x = c$, we successively solve
\[
L_\Delta\, y = c, \qquad (L_\Delta)^T x = \Delta\, y.
\]
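The two triangular solves in a short sketch (the function name and data are mine):

```python
# Solve J_n x = c storing only the pivots delta_j:
# forward solve L_Delta y = c, then back solve (L_Delta)^T x = Delta y.
import numpy as np

def solve_tridiag(alpha, beta, c):
    n = len(alpha)
    delta = np.empty(n)
    delta[0] = alpha[0]
    for j in range(1, n):                      # LU pivots, top to bottom
        delta[j] = alpha[j] - beta[j - 1] ** 2 / delta[j - 1]
    y = np.empty(n)
    y[0] = c[0] / delta[0]                     # L_Delta y = c
    for j in range(1, n):
        y[j] = (c[j] - beta[j - 1] * y[j - 1]) / delta[j]
    x = np.empty(n)
    x[-1] = y[-1]                              # (L_Delta)^T x = Delta y
    for j in range(n - 2, -1, -1):
        x[j] = y[j] - beta[j] * x[j + 1] / delta[j]
    return x

alpha = np.array([4.0, 4.0, 4.0]); beta = np.array([1.0, 1.0])
J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
c = np.array([1.0, 2.0, 3.0])
assert np.allclose(J @ solve_tridiag(alpha, beta, c), c)
```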
The previous factorizations proceed from top to bottom (LU). We can also proceed from bottom to top (UL):
\[
J_n = (\bar L)^T\,\bar D^{-1}\,\bar L
\]
with
\[
\bar L = \begin{pmatrix}
d_1^{(n)} & & & \\
\beta_1 & d_2^{(n)} & & \\
 & \ddots & \ddots & \\
 & & \beta_{n-1} & d_n^{(n)}
\end{pmatrix}
\]
and $\bar D$ a diagonal matrix with elements $d_j^{(n)}$,
\[
d_n^{(n)} = \alpha_n, \qquad d_j^{(n)} = \alpha_j - \frac{\beta_j^2}{d_{j+1}^{(n)}}, \quad j = n-1,\dots,1.
\]
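The bottom-to-top pivot recurrence in code (a sketch; `ul_pivots` is my name):

```python
# UL pivots: d_n = alpha_n, d_j = alpha_j - beta_j^2 / d_{j+1}.
import numpy as np

def ul_pivots(alpha, beta):
    n = len(alpha)
    d = np.empty(n)
    d[-1] = alpha[-1]
    for j in range(n - 2, -1, -1):
        d[j] = alpha[j] - beta[j] ** 2 / d[j + 1]
    return d

# The pivots multiply to det(J_n), whichever end we start from.
alpha = np.array([4.0, 4.0, 4.0]); beta = np.array([1.0, 1.0])
J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
assert np.isclose(np.prod(ul_pivots(alpha, beta)), np.linalg.det(J))
```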
From the LU and UL factorizations we can obtain all the so-called twisted factorizations of $J_n$,
\[
J_n = M\,\Omega\,M^T,
\]
where $M$ is lower bidiagonal at the top, for rows with index smaller than $l$, and upper bidiagonal at the bottom, for rows with index larger than $l$:
\[
\omega_1 = \alpha_1, \qquad \omega_j = \alpha_j - \frac{\beta_{j-1}^2}{\omega_{j-1}}, \quad j = 2,\dots,l-1,
\]
\[
\omega_n = \alpha_n, \qquad \omega_j = \alpha_j - \frac{\beta_j^2}{\omega_{j+1}}, \quad j = n-1,\dots,l+1,
\]
\[
\omega_l = \alpha_l - \frac{\beta_{l-1}^2}{\omega_{l-1}} - \frac{\beta_l^2}{\omega_{l+1}}.
\]
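A sketch of the twisted pivots for a fixed twist index $l$ (the helper name and the determinant check are mine):

```python
# Twisted pivots: LU recurrence above row l, UL recurrence below it,
# and both corrections at the twist row l (l is 1-based here).
import numpy as np

def twisted_pivots(alpha, beta, l):
    n = len(alpha)
    omega = np.empty(n)
    omega[0] = alpha[0]
    for j in range(1, l - 1):                  # top part, LU direction
        omega[j] = alpha[j] - beta[j - 1] ** 2 / omega[j - 1]
    omega[n - 1] = alpha[n - 1]
    for j in range(n - 2, l - 1, -1):          # bottom part, UL direction
        omega[j] = alpha[j] - beta[j] ** 2 / omega[j + 1]
    k = l - 1                                  # twist row, 0-based
    omega[k] = alpha[k]
    if k > 0:
        omega[k] -= beta[k - 1] ** 2 / omega[k - 1]
    if k < n - 1:
        omega[k] -= beta[k] ** 2 / omega[k + 1]
    return omega

# Whatever the twist index, the pivots multiply to det(J_n).
alpha = np.array([4.0, 4.0, 4.0]); beta = np.array([1.0, 1.0])
J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
for l in (1, 2, 3):
    assert np.isclose(np.prod(twisted_pivots(alpha, beta, l)),
                      np.linalg.det(J))
```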
Eigenvalues

The eigenvalues of $J_n$ are the zeros of $\det(J_n - \lambda I)$. Writing $\delta_j(\lambda)$ and $d_j^{(n)}(\lambda)$ for the LU and UL pivots of $J_n - \lambda I$,
\[
\det(J_n - \lambda I) = \delta_1(\lambda)\cdots\delta_n(\lambda) = d_1^{(n)}(\lambda)\cdots d_n^{(n)}(\lambda).
\]
This shows that
\[
\delta_n(\lambda) = \frac{\det(J_n - \lambda I)}{\det(J_{n-1} - \lambda I)},
\qquad
d_1^{(n)}(\lambda) = \frac{\det(J_n - \lambda I)}{\det(J_{2,n} - \lambda I)},
\]
where $J_{2,n}$ is the submatrix of rows and columns $2,\dots,n$.

Theorem (Cauchy interlacing theorem)
The eigenvalues $\theta_i^{(n+1)}$ of $J_{n+1}$ strictly interlace the eigenvalues of $J_n$:
\[
\theta_1^{(n+1)} < \theta_1^{(n)} < \theta_2^{(n+1)} < \theta_2^{(n)} < \cdots < \theta_n^{(n)} < \theta_{n+1}^{(n+1)}.
\]
Proof. Let $x = (y\;\;\zeta)^T$ be an eigenvector of $J_{n+1}$ corresponding to $\theta$:
\[
J_n y + \beta_n \zeta\, e^n = \theta y,
\qquad
\beta_n (e^n)^T y + \alpha_{n+1}\zeta = \theta\zeta.
\]
Eliminating $y$ from these relations, we obtain
\[
\left(\alpha_{n+1} - \beta_n^2\,(e^n)^T (J_n - \theta I)^{-1} e^n\right)\zeta = \theta\zeta,
\]
that is,
\[
\alpha_{n+1} - \beta_n^2 \sum_{j=1}^n \frac{\xi_j^2}{\theta_j^{(n)} - \theta} - \theta = 0,
\]
where $\xi_j$ is the last component of the $j$th eigenvector of $J_n$. The zeros of this function interlace the poles $\theta_j^{(n)}$.
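A numerical check of the secular equation (data and code are mine): the eigenvalues of $J_{n+1}$ are exactly the zeros of the function above.

```python
# Check that every eigenvalue of J_{n+1} zeroes the secular function
# f(t) = alpha_{n+1} - beta_n^2 * sum_j xi_j^2 / (theta_j - t) - t.
import numpy as np

alpha = np.array([2.0, 3.0, 4.0])            # J_3
beta = np.array([1.0, 1.0])
alpha_next, beta_n = 5.0, 1.5                # border entries of J_4

Jn = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
theta, X = np.linalg.eigh(Jn)                # theta_j^{(n)} and eigenvectors
xi = X[-1, :]                                # last components xi_j

def f(t):
    return alpha_next - beta_n ** 2 * np.sum(xi ** 2 / (theta - t)) - t

J4 = (np.diag([2.0, 3.0, 4.0, 5.0])
      + np.diag([1.0, 1.0, 1.5], 1) + np.diag([1.0, 1.0, 1.5], -1))
for mu in np.linalg.eigvalsh(J4):
    assert abs(f(mu)) < 1e-8
```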
Inverse

Theorem
There exist two sequences of numbers $\{u_i\}$, $\{v_i\}$, $i = 1,\dots,n$ such that
\[
J_n^{-1} = \begin{pmatrix}
u_1 v_1 & u_1 v_2 & u_1 v_3 & \dots & u_1 v_n\\
u_1 v_2 & u_2 v_2 & u_2 v_3 & \dots & u_2 v_n\\
u_1 v_3 & u_2 v_3 & u_3 v_3 & \dots & u_3 v_n\\
\vdots & \vdots & \vdots & & \vdots\\
u_1 v_n & u_2 v_n & u_3 v_n & \dots & u_n v_n
\end{pmatrix}
\]
Moreover, $u_1$ can be chosen arbitrarily, for instance $u_1 = 1$; see Baranger and Duc-Jacquet, and Meurant.

Hence, we just have to compute $J_n^{-1} e^1$ and $J_n^{-1} e^n$.
$J_n v = e^1$: use the UL factorization of $J_n$,
\[
v_1 = \frac{1}{d_1^{(n)}}, \qquad
v_j = (-1)^{j-1}\,\frac{\beta_1\cdots\beta_{j-1}}{d_1^{(n)}\cdots d_j^{(n)}}, \quad j = 2,\dots,n.
\]
For $J_n u = e^n$, use the LU factorization,
\[
u_n = \frac{1}{\delta_n v_n}, \qquad
u_j = (-1)^{n-j}\,\frac{\beta_j\cdots\beta_{n-1}}{(\delta_j\cdots\delta_n)\,v_n}, \quad j = 1,\dots,n-1;
\]
the scaling by $v_n$ enforces the normalization $u_1 = 1$.
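A sketch computing both sequences and checking them against a dense inverse (the helper name and data are mine):

```python
# v solves J v = e^1 (UL pivots); u is the scaled solution of
# J u' = e^n (LU pivots), normalized so that u_1 = 1.
import numpy as np

def inverse_uv(alpha, beta):
    n = len(alpha)
    delta = np.empty(n); d = np.empty(n)
    delta[0] = alpha[0]; d[-1] = alpha[-1]
    for j in range(1, n):
        delta[j] = alpha[j] - beta[j - 1] ** 2 / delta[j - 1]   # LU pivots
        k = n - 1 - j
        d[k] = alpha[k] - beta[k] ** 2 / d[k + 1]               # UL pivots
    v = np.empty(n); v[0] = 1.0 / d[0]
    for j in range(1, n):
        v[j] = -beta[j - 1] * v[j - 1] / d[j]
    u = np.empty(n); u[-1] = 1.0 / (delta[-1] * v[-1])
    for j in range(n - 2, -1, -1):
        u[j] = -beta[j] * u[j + 1] / delta[j]
    return u, v

alpha = np.array([4.0, 5.0, 4.0]); beta = np.array([1.0, 2.0])
u, v = inverse_uv(alpha, beta)
J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
Jinv = np.linalg.inv(J)
assert np.isclose(u[0], 1.0)                 # normalization u_1 = 1
assert np.allclose(Jinv[:, 0], v)            # J v = e^1
for i in range(3):
    for j in range(i, 3):                    # (J^{-1})_{ij} = u_i v_j
        assert np.isclose(Jinv[i, j], u[i] * v[j])
```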
Theorem
The inverse of the symmetric tridiagonal matrix $J_n$ is characterized as
\[
(J_n^{-1})_{i,j} = (-1)^{j-i}\,\beta_i\cdots\beta_{j-1}\,
\frac{d_{j+1}^{(n)}\cdots d_n^{(n)}}{\delta_i\cdots\delta_n}, \quad j > i,
\]
\[
(J_n^{-1})_{i,i} = \frac{d_{i+1}^{(n)}\cdots d_n^{(n)}}{\delta_i\cdots\delta_n}, \quad \forall i.
\]
Proof.
\[
u_i = (-1)^{i+1}\,\frac{d_1^{(n)}\cdots d_n^{(n)}}{(\beta_1\cdots\beta_{i-1})(\delta_i\cdots\delta_n)},
\]
and the result follows by forming the products $u_i v_j$.
The diagonal elements of $J_n^{-1}$ can also be obtained using twisted factorizations.

Theorem
Let $l$ be a fixed index and $\omega_j$ be the diagonal elements of the corresponding twisted factorization. Then
\[
(J_n^{-1})_{l,l} = \frac{1}{\omega_l}.
\]
In the sequel we will be interested in $(J_n^{-1})_{1,1}$:
\[
(J_n^{-1})_{1,1} = \frac{1}{d_1^{(n)}}.
\]
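A numerical check (mine): assemble the twist pivot $\omega_l$ from the LU pivots above $l$ and the UL pivots below $l$, and compare $1/\omega_l$ with the diagonal of a dense inverse.

```python
# (J^{-1})_{l,l} = 1 / omega_l for every twist index l.
import numpy as np

alpha = np.array([4.0, 5.0, 4.0, 5.0]); beta = np.array([1.0, 2.0, 1.0])
n = len(alpha)
J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
Jinv = np.linalg.inv(J)

delta = np.empty(n); d = np.empty(n)         # LU and UL pivots
delta[0] = alpha[0]; d[-1] = alpha[-1]
for j in range(1, n):
    delta[j] = alpha[j] - beta[j - 1] ** 2 / delta[j - 1]
    k = n - 1 - j
    d[k] = alpha[k] - beta[k] ** 2 / d[k + 1]

for l in range(n):                           # 0-based twist index
    omega_l = alpha[l]
    if l > 0:
        omega_l -= beta[l - 1] ** 2 / delta[l - 1]
    if l < n - 1:
        omega_l -= beta[l] ** 2 / d[l + 1]
    assert np.isclose(Jinv[l, l], 1.0 / omega_l)
```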
Can we compute $(J_n^{-1})_{1,1}$ incrementally?

Theorem
\[
(J_{n+1}^{-1})_{1,1} = (J_n^{-1})_{1,1} + \frac{(\beta_1\cdots\beta_n)^2}{(\delta_1\cdots\delta_n)^2\,\delta_{n+1}}.
\]
Proof.
\[
J_{n+1} = \begin{pmatrix} J_n & \beta_n e^n\\ \beta_n (e^n)^T & \alpha_{n+1} \end{pmatrix}.
\]
The upper left block of $J_{n+1}^{-1}$ is the inverse of the Schur complement
\[
J_n - \frac{\beta_n^2}{\alpha_{n+1}}\, e^n (e^n)^T,
\]
the inverse of a rank-1 modification of $J_n$.
Use the Sherman–Morrison formula
\[
(A + \alpha\, x y^T)^{-1} = A^{-1} - \alpha\,\frac{A^{-1} x\, y^T A^{-1}}{1 + \alpha\, y^T A^{-1} x}.
\]
Let $l = J_n^{-1} e^n$. This gives
\[
\left(J_n - \frac{\beta_n^2}{\alpha_{n+1}}\, e^n (e^n)^T\right)^{-1}
= J_n^{-1} + \frac{\beta_n^2}{\alpha_{n+1} - \beta_n^2\, l_n}\,(J_n^{-1} e^n)\,((e^n)^T J_n^{-1}),
\]
\[
(J_{n+1}^{-1})_{1,1} = (J_n^{-1})_{1,1} + \frac{\beta_n^2\,(l_1)^2}{\alpha_{n+1} - \beta_n^2\, l_n}.
\]
We have
\[
l_1 = (-1)^{n-1}\,\frac{\beta_1\cdots\beta_{n-1}}{\delta_1\cdots\delta_n}, \qquad l_n = \frac{1}{\delta_n}.
\]
To simplify the formulas, we note that
\[
\alpha_{n+1} - \beta_n^2\, l_n = \alpha_{n+1} - \frac{\beta_n^2}{\delta_n} = \delta_{n+1}.
\]
We start with $(J_1^{-1})_{1,1} = \pi_1 = 1/\alpha_1$ and $c_1 = 1$, and compute
\[
t_n = \beta_n^2\,\pi_n, \qquad \delta_{n+1} = \alpha_{n+1} - t_n, \qquad
\pi_{n+1} = \frac{1}{\delta_{n+1}}, \qquad c_{n+1} = t_n\, c_n\, \pi_n.
\]
This gives
\[
(J_{n+1}^{-1})_{1,1} = (J_n^{-1})_{1,1} + c_{n+1}\,\pi_{n+1}.
\]
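The whole incremental algorithm in a few lines (a sketch; the function name and test data are mine):

```python
# Incremental computation of (J_k^{-1})_{1,1} for k = 1,...,n using
# t_k = beta_k^2 pi_k, delta_{k+1} = alpha_{k+1} - t_k,
# pi_{k+1} = 1/delta_{k+1}, c_{k+1} = t_k c_k pi_k.
import numpy as np

def first_entries(alpha, beta):
    out = np.empty(len(alpha))
    pi, c = 1.0 / alpha[0], 1.0                # pi_1 = 1/alpha_1, c_1 = 1
    out[0] = entry = pi
    for k in range(1, len(alpha)):
        t = beta[k - 1] ** 2 * pi
        delta = alpha[k] - t
        c, pi = t * c * pi, 1.0 / delta
        entry += c * pi                        # (J_{k+1}^{-1})_{1,1}
        out[k] = entry
    return out

alpha = np.array([4.0, 5.0, 4.0, 5.0]); beta = np.array([1.0, 2.0, 1.0])
vals = first_entries(alpha, beta)
for k in range(1, 5):
    Jk = (np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1)
          + np.diag(beta[:k - 1], -1))
    assert np.isclose(vals[k - 1], np.linalg.inv(Jk)[0, 0])
```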
References

J. Baranger and M. Duc-Jacquet, Matrices tridiagonales symétriques et matrices factorisables, RIRO, v 3 (1971), pp. 61-66.

G.H. Golub and C.F. Van Loan, Matrix Computations, Third Edition, Johns Hopkins University Press, (1996).

G. Meurant, A review on the inverse of symmetric tridiagonal and block tridiagonal matrices, SIAM J. Matrix Anal. Appl., v 13 n 3 (1992), pp. 707-728.