A STABILITY ANALYSIS OF THE (k) JACOBI MATRIX INVERSE EIGENVALUE PROBLEM


NUMERICAL MATHEMATICS, A Journal of Chinese Universities (English Series), Vol. 14, No. 2, May 2005

A STABILITY ANALYSIS OF THE (k) JACOBI MATRIX INVERSE EIGENVALUE PROBLEM

Hou Wenyuan    Jiang Erxiong

Received: Mar. 15, 2003.

Abstract  In this paper we analyze the perturbation behaviour of a new algorithm for the (k) Jacobi matrix inverse eigenvalue problem.

Key words  eigenvalue, Jacobi matrix, (k) inverse problem.

AMS(2000) subject classifications  65F18

1 Introduction

Let

$$T_n=\begin{pmatrix} \alpha_1 & \beta_1 & & & 0\\ \beta_1 & \alpha_2 & \beta_2 & & \\ & \beta_2 & \ddots & \ddots & \\ & & \ddots & \ddots & \beta_{n-1}\\ 0 & & & \beta_{n-1} & \alpha_n \end{pmatrix}$$

be an $n\times n$ unreduced symmetric tridiagonal matrix, and for $p<q$ denote its submatrix $T_{p,q}$ by

$$T_{p,q}=\begin{pmatrix} \alpha_p & \beta_p & & & 0\\ \beta_p & \alpha_{p+1} & \beta_{p+1} & & \\ & \beta_{p+1} & \ddots & \ddots & \\ & & \ddots & \ddots & \beta_{q-1}\\ 0 & & & \beta_{q-1} & \alpha_q \end{pmatrix}.$$

We call an unreduced symmetric tridiagonal matrix with $\beta_i>0$ a Jacobi matrix, and we take $T_{1,n}$ and $T_{p,q}$ to be Jacobi matrices. The matrix

$$W_k=\begin{pmatrix} T_{1,k-1} & 0\\ 0 & T_{k+1,n} \end{pmatrix}$$

is obtained by deleting the $k$-th row and the $k$-th column ($k=1,2,\ldots,n$) from $T_n$.
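For concreteness, here is a small NumPy sketch of the objects just defined; the helper names `jacobi_matrix` and `delete_kth` are illustrative, not from the paper.

```python
import numpy as np

def jacobi_matrix(alpha, beta):
    """Unreduced symmetric tridiagonal T_n with diagonal alpha (length n)
    and off-diagonal beta (length n-1, all positive for a Jacobi matrix)."""
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

def delete_kth(T, k):
    """W_k = diag(T_{1,k-1}, T_{k+1,n}): delete the k-th row and column (k is 1-based)."""
    keep = [i for i in range(T.shape[0]) if i != k - 1]
    return T[np.ix_(keep, keep)]

# Example: the 9x9 matrix used in Section 4 (diagonal 1..9, off-diagonals 1)
T = jacobi_matrix(np.arange(1.0, 10.0), np.ones(8))
W5 = delete_kth(T, 5)    # block diagonal: T_{1,4} and T_{6,9}
```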

Problem  Suppose we do not know the matrix $T_{1,n}$, but we know all eigenvalues of $T_{1,k-1}$, all eigenvalues of $T_{k+1,n}$, and all eigenvalues of $T_{1,n}$. Can we construct the matrix $T_{1,n}$?

Let $S1=(\mu_1,\mu_2,\ldots,\mu_{k-1})$, $S2=(\mu_k,\mu_{k+1},\ldots,\mu_{n-1})$ and $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_n)$ be the eigenvalues of the matrices $T_{1,k-1}$, $T_{k+1,n}$ and $T_{1,n}$, respectively. The problem is to recover from these $2n-1$ data the other $2n-1$ data: $\alpha_1,\alpha_2,\ldots,\alpha_n$ and $\beta_1,\beta_2,\ldots,\beta_{n-1}$.

When $k=1$ or $k=n$ this problem has been solved and there are many algorithms for constructing $T_{1,n}$ [3][6]; a stability analysis can be found in [2]. For $k=2,3,\ldots,n-1$, a new algorithm for constructing $T_{1,n}$ was put forward in [1]. In this paper we give some stability properties of the new algorithm in the case $k=2,3,\ldots,n-1$.

2 Basic Theorem

Theorem 2.1 [1]  If there is no common number between $\mu_1,\mu_2,\ldots,\mu_{k-1}$ and $\mu_k,\mu_{k+1},\ldots,\mu_{n-1}$, then the necessary and sufficient condition for the (k) problem to have a solution is

$$\lambda_1<\mu_{j_1}<\lambda_2<\mu_{j_2}<\cdots<\mu_{j_{n-1}}<\lambda_n, \qquad (2.1)$$

where $\mu=(\mu_{j_1},\mu_{j_2},\ldots,\mu_{j_{n-1}})$ and the $\mu_i$ $(i=1,2,\ldots,n-1)$ are relabelled as $\mu_{j_i}$ $(i=1,2,\ldots,n-1)$ so that

$$\mu_{j_1}<\mu_{j_2}<\cdots<\mu_{j_{n-1}}. \qquad (2.2)$$

Furthermore, if a given (k) problem has a solution, then the solution is unique.
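Condition (2.1) is easy to test numerically. A minimal sketch, assuming NumPy; the helper name `check_interlacing` is illustrative:

```python
import numpy as np

def check_interlacing(lam, S1, S2):
    """Test the solvability condition (2.1): the merged, sorted spectrum of W_k
    must strictly interlace the spectrum lam of T_{1,n}."""
    lam = np.sort(np.asarray(lam, dtype=float))
    mu = np.sort(np.concatenate([S1, S2]))      # the relabelling (2.2)
    if len(mu) != len(lam) - 1:
        return False
    # strict interlacing: lam[i] < mu[i] < lam[i+1] for every i
    return bool(np.all(lam[:-1] < mu) and np.all(mu < lam[1:]))
```

Note that strict interlacing in particular forces all the $\mu_i$ to be distinct, so the "no common number" hypothesis of Theorem 2.1 is then automatic.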

Algorithm 2.2 [1]  Given three vectors $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_n)^T$, $S1=(\mu_1,\mu_2,\ldots,\mu_{k-1})^T$ and $S2=(\mu_k,\mu_{k+1},\ldots,\mu_{n-1})^T$ satisfying (2.1), the (k) problem can be solved by the following algorithm.

Step 1  Find $\alpha_k$ as

$$\alpha_k=\operatorname{trace}(T_{1,n})-\operatorname{trace}(W_k)=\sum_{i=1}^{n}\lambda_i-\sum_{i=1}^{n-1}\mu_i. \qquad (2.3)$$

Step 2  Find $x=(x_1,x_2,\ldots,x_{n-1})$ as

$$x_j=-\frac{\prod_{i=1}^{n}(\mu_j-\lambda_i)}{\prod_{i=1,i\neq j}^{n-1}(\mu_j-\mu_i)},\qquad x_j>0,\quad j=1,2,\ldots,n-1. \qquad (2.4)$$

Step 3  Compute $\beta_{k-1}$ and $\beta_k$ as

$$\beta_{k-1}=\Big(\sum_{i=1}^{k-1}x_i\Big)^{1/2}, \qquad (2.5)$$

$$\beta_k=\Big(\sum_{i=k}^{n-1}x_i\Big)^{1/2}. \qquad (2.6)$$

Step 4  Compute $S^{(1)}_{k-1,i}$, $i=1,2,\ldots,k-1$, and $S^{(2)}_{1,i}$, $i=k,k+1,\ldots,n-1$, as

$$S^{(1)}_{k-1,j}=\sqrt{x_j}/\beta_{k-1},\quad j=1,2,\ldots,k-1, \qquad (2.7)$$

$$S^{(2)}_{1,i}=\sqrt{x_i}/\beta_k,\quad i=k,k+1,\ldots,n-1, \qquad (2.8)$$

where $S^{(1)}_{k-1,j}$ is the last element of the unit eigenvector of $T_{1,k-1}$ corresponding to the eigenvalue $\mu_j$, and $S^{(2)}_{1,i}$ is the first element of the unit eigenvector of $T_{k+1,n}$ corresponding to the eigenvalue $\mu_i$.

Step 5  Compute $T_{1,k-1}$ from $S^{(1)}_{k-1,i}$, $i=1,2,\ldots,k-1$, and $\mu_1,\mu_2,\ldots,\mu_{k-1}$ by the Lanczos process or the Givens orthogonal reduction process [3],[6],[7]. Compute $T_{k+1,n}$ from $S^{(2)}_{1,i}$, $i=k,k+1,\ldots,n-1$, and $\mu_k,\mu_{k+1},\ldots,\mu_{n-1}$ in the same way [3],[6],[7].
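The following is a compact NumPy sketch of Algorithm 2.2, using a Lanczos process with full reorthogonalization for Step 5. The function names (`lanczos_from_spectrum`, `solve_k_problem`), the 0-based index handling and the flip trick for the first block are my own illustration, not the paper's code; the sketch assumes $2\le k\le n-1$ and data satisfying (2.1).

```python
import numpy as np

def lanczos_from_spectrum(mu, q):
    """Jacobi matrix with eigenvalues mu whose normalized eigenvectors have
    first components q (Lanczos process on diag(mu) with starting vector q)."""
    m = len(mu)
    D = np.diag(mu)
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    Q = np.zeros((m, m))
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(m):
        w = D @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

def solve_k_problem(lam, S1, S2, k):
    """Algorithm 2.2: reconstruct T_{1,n} from lambda, S1, S2 (2 <= k <= n-1)."""
    lam = np.asarray(lam, dtype=float)
    mu = np.concatenate([S1, S2]).astype(float)
    n = len(lam)
    # Step 1: alpha_k, eq. (2.3)
    alpha_k = lam.sum() - mu.sum()
    # Step 2: x_j, eq. (2.4); interlacing (2.1) makes every x_j positive
    x = np.empty(n - 1)
    for j in range(n - 1):
        x[j] = -np.prod(mu[j] - lam) / np.prod(np.delete(mu[j] - mu, j))
    # Step 3: beta_{k-1} and beta_k, eqs. (2.5), (2.6)
    beta_km1 = np.sqrt(x[:k - 1].sum())
    beta_k = np.sqrt(x[k - 1:].sum())
    # Step 4: eigenvector components, eqs. (2.7), (2.8)
    q1 = np.sqrt(x[:k - 1]) / beta_km1   # last components for T_{1,k-1}
    q2 = np.sqrt(x[k - 1:]) / beta_k     # first components for T_{k+1,n}
    # Step 5: recover the two blocks; flip the first block because q1 holds
    # *last* eigenvector components rather than first ones.
    T_top = np.flip(lanczos_from_spectrum(np.asarray(S1, dtype=float), q1))
    T_bot = lanczos_from_spectrum(np.asarray(S2, dtype=float), q2)
    # Assemble T_{1,n}
    T = np.zeros((n, n))
    T[:k - 1, :k - 1] = T_top
    T[k:, k:] = T_bot
    T[k - 1, k - 1] = alpha_k
    T[k - 2, k - 1] = T[k - 1, k - 2] = beta_km1
    T[k - 1, k] = T[k, k - 1] = beta_k
    return T
```

With exact data this sketch reproduces $T_{1,n}$ up to rounding errors, in line with the uniqueness statement of Theorem 2.1.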

3 Stability Analysis

Let

$$\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_n)^T,\quad S1=(\mu_1,\mu_2,\ldots,\mu_{k-1})^T,\quad S2=(\mu_k,\mu_{k+1},\ldots,\mu_{n-1})^T \qquad (3.1)$$

and

$$\tilde\lambda=(\tilde\lambda_1,\tilde\lambda_2,\ldots,\tilde\lambda_n)^T,\quad \widetilde{S1}=(\tilde\mu_1,\tilde\mu_2,\ldots,\tilde\mu_{k-1})^T,\quad \widetilde{S2}=(\tilde\mu_k,\tilde\mu_{k+1},\ldots,\tilde\mu_{n-1})^T \qquad (3.2)$$

both satisfy (2.1), that is,

$$\lambda_1<\mu_{j_1}<\lambda_2<\mu_{j_2}<\cdots<\mu_{j_{n-1}}<\lambda_n$$

and

$$\tilde\lambda_1<\tilde\mu_{j_1}<\tilde\lambda_2<\tilde\mu_{j_2}<\cdots<\tilde\mu_{j_{n-1}}<\tilde\lambda_n.$$

By Algorithm 2.2 we obtain a matrix $T_{1,n}$ from the data (3.1) and a matrix $\tilde T_{1,n}$ from the data (3.2); we now derive a bound for $\|T_{1,n}-\tilde T_{1,n}\|_F$.

Lemma 3.1  The derivative of the function $f(t)$ defined as

$$f(t)=\frac{\prod_{i=1}^{n}(a_i+b_it)}{\prod_{i=1,i\neq j}^{n-1}(c_i+d_it)}$$

is

$$f'(t)=f(t)\Big(\sum_{i=1}^{n}\frac{b_i}{a_i+b_it}-\sum_{i=1,i\neq j}^{n-1}\frac{d_i}{c_i+d_it}\Big).$$

Proof  Start by noticing that

$$f'(t)=\frac{\sum_{i=1}^{n}b_i\prod_{k=1,k\neq i}^{n}(a_k+b_kt)}{\prod_{k=1,k\neq j}^{n-1}(c_k+d_kt)}-\frac{\Big(\prod_{k=1}^{n}(a_k+b_kt)\Big)\sum_{i=1,i\neq j}^{n-1}d_i\prod_{k=1,k\neq i,k\neq j}^{n-1}(c_k+d_kt)}{\Big(\prod_{k=1,k\neq j}^{n-1}(c_k+d_kt)\Big)^2}=f(t)\Big(\sum_{i=1}^{n}\frac{b_i}{a_i+b_it}-\sum_{i=1,i\neq j}^{n-1}\frac{d_i}{c_i+d_it}\Big).$$
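A quick numerical sanity check of Lemma 3.1 (a sketch, assuming NumPy; the coefficient values are arbitrary test data, not from the paper):

```python
import numpy as np

a, b = np.array([1.0, 2.0, 3.0]), np.array([0.5, -0.2, 0.1])   # numerator factors a_i + b_i*t
c, d = np.array([2.0, 3.0]), np.array([0.3, -0.4])             # denominator factors c_i + d_i*t

def f(t):
    return np.prod(a + b * t) / np.prod(c + d * t)

def f_prime(t):
    # Lemma 3.1: f'(t) = f(t) * (sum_i b_i/(a_i+b_i*t) - sum_i d_i/(c_i+d_i*t))
    return f(t) * (np.sum(b / (a + b * t)) - np.sum(d / (c + d * t)))

t, h = 0.3, 1e-6
central_diff = (f(t + h) - f(t - h)) / (2 * h)
print(abs(central_diff - f_prime(t)))   # small, at finite-difference accuracy
```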

Define a function $f_j(t)$ as

$$f_j(t)=-\frac{\prod_{i=1}^{n}\big[\mu_j-\lambda_i+t(\tilde\mu_j-\mu_j+\lambda_i-\tilde\lambda_i)\big]}{\prod_{i=1,i\neq j}^{n-1}\big[\mu_j-\mu_i+t(\tilde\mu_j-\mu_j+\mu_i-\tilde\mu_i)\big]},\qquad t\in[0,1], \qquad (3.3)$$

so that $f_j(0)=x_j$ and $f_j(1)=\tilde x_j$. By Lemma 3.1,

$$f_j'(t)=f_j(t)\Big(\sum_{i=1}^{n}\frac{\tilde\mu_j-\mu_j+\lambda_i-\tilde\lambda_i}{\mu_j-\lambda_i+t(\tilde\mu_j-\mu_j+\lambda_i-\tilde\lambda_i)}-\sum_{i=1,i\neq j}^{n-1}\frac{\tilde\mu_j-\mu_j+\mu_i-\tilde\mu_i}{\mu_j-\mu_i+t(\tilde\mu_j-\mu_j+\mu_i-\tilde\mu_i)}\Big). \qquad (3.4)$$

Using the mean value formula of differential calculus, we have

$$f_j(1)-f_j(0)=f_j'(\tau),\qquad \tau\in(0,1). \qquad (3.5)$$

Write

$$F_j=\max_{t\in(0,1)}|f_j(t)|, \qquad (3.6)$$

$$A_j=\max_{1\le i\le n}\frac{1}{\min\{|\mu_j-\lambda_i|,\,|\tilde\mu_j-\tilde\lambda_i|\}}, \qquad (3.7)$$

$$B_j=\max_{1\le i\le n-1,\,i\neq j}\frac{1}{\min\{|\mu_j-\mu_i|,\,|\tilde\mu_j-\tilde\mu_i|\}}, \qquad (3.8)$$

$$\theta_j=\max\{A_j,B_j\}. \qquad (3.9)$$

Thus from (3.4) we get, for every $t\in(0,1)$,

$$|f_j'(t)|\le F_j\Big(\sum_{i=1}^{n}\frac{|\tilde\mu_j-\mu_j+\lambda_i-\tilde\lambda_i|}{|\mu_j-\lambda_i+t(\tilde\mu_j-\mu_j+\lambda_i-\tilde\lambda_i)|}+\sum_{i=1,i\neq j}^{n-1}\frac{|\tilde\mu_j-\mu_j+\mu_i-\tilde\mu_i|}{|\mu_j-\mu_i+t(\tilde\mu_j-\mu_j+\mu_i-\tilde\mu_i)|}\Big)$$

$$\le F_j\Big(\sum_{i=1}^{n}\frac{|\tilde\mu_j-\mu_j|+|\lambda_i-\tilde\lambda_i|}{\min\{|\mu_j-\lambda_i|,|\tilde\mu_j-\tilde\lambda_i|\}}+\sum_{i=1,i\neq j}^{n-1}\frac{|\tilde\mu_j-\mu_j|+|\mu_i-\tilde\mu_i|}{\min\{|\mu_j-\mu_i|,|\tilde\mu_j-\tilde\mu_i|\}}\Big)$$

$$\le F_j\Big(A_j\sum_{i=1}^{n}\big(|\tilde\mu_j-\mu_j|+|\lambda_i-\tilde\lambda_i|\big)+B_j\sum_{i=1,i\neq j}^{n-1}\big(|\tilde\mu_j-\mu_j|+|\mu_i-\tilde\mu_i|\big)\Big)$$

$$\le F_j\theta_j\big[2(n-1)|\tilde\mu_j-\mu_j|+\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big]\le(2n-1)F_j\theta_j\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big). \qquad (3.10)$$

Theorem 3.2  Let $x_j$, $\beta_{k-1}$, $\beta_k$ and $\tilde x_j$, $\tilde\beta_{k-1}$, $\tilde\beta_k$ be the quantities defined by (2.4), (2.5) and (2.6) from the data (3.1) and (3.2), respectively. Then

$$|\sqrt{x_j}-\sqrt{\tilde x_j}|\le\varrho_j\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big), \qquad (3.11)$$

$$|\beta_{k-1}-\tilde\beta_{k-1}|\le\rho_1\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big), \qquad (3.12)$$

$$|\beta_k-\tilde\beta_k|\le\rho_2\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big), \qquad (3.13)$$

where $\varrho_j$, $\rho_1$ and $\rho_2$ are defined as

$$\varrho_j=\frac{(2n-1)F_j\theta_j}{\sqrt{x_j}+\sqrt{\tilde x_j}}, \qquad (3.14)$$

$$\rho_1=\frac{(2n-1)\sum_{i=1}^{k-1}F_i\theta_i}{\beta_{k-1}+\tilde\beta_{k-1}}, \qquad (3.15)$$

$$\rho_2=\frac{(2n-1)\sum_{i=k}^{n-1}F_i\theta_i}{\beta_k+\tilde\beta_k}. \qquad (3.16)$$

Proof  Start by observing that

$$|\sqrt{x_j}-\sqrt{\tilde x_j}|=\frac{|x_j-\tilde x_j|}{\sqrt{x_j}+\sqrt{\tilde x_j}}=\frac{|f_j(1)-f_j(0)|}{\sqrt{x_j}+\sqrt{\tilde x_j}}\le\frac{(2n-1)F_j\theta_j}{\sqrt{x_j}+\sqrt{\tilde x_j}}\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big),$$

so it is easy to prove that

$$|\beta_{k-1}-\tilde\beta_{k-1}|=\frac{\big|\sum_{i=1}^{k-1}x_i-\sum_{i=1}^{k-1}\tilde x_i\big|}{\beta_{k-1}+\tilde\beta_{k-1}}\le\frac{\sum_{i=1}^{k-1}|x_i-\tilde x_i|}{\beta_{k-1}+\tilde\beta_{k-1}}\le\frac{(2n-1)\sum_{i=1}^{k-1}F_i\theta_i}{\beta_{k-1}+\tilde\beta_{k-1}}\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big),$$

$$|\beta_k-\tilde\beta_k|=\frac{\big|\sum_{i=k}^{n-1}x_i-\sum_{i=k}^{n-1}\tilde x_i\big|}{\beta_k+\tilde\beta_k}\le\frac{\sum_{i=k}^{n-1}|x_i-\tilde x_i|}{\beta_k+\tilde\beta_k}\le\frac{(2n-1)\sum_{i=k}^{n-1}F_i\theta_i}{\beta_k+\tilde\beta_k}\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big).$$

Theorem 3.3  Let $q_1=(S^{(1)}_{k-1,1},S^{(1)}_{k-1,2},\ldots,S^{(1)}_{k-1,k-1})^T$ and $\tilde q_1=(\tilde S^{(1)}_{k-1,1},\tilde S^{(1)}_{k-1,2},\ldots,\tilde S^{(1)}_{k-1,k-1})^T$, where $S^{(1)}_{k-1,j}$ and $\tilde S^{(1)}_{k-1,j}$, $j=1,2,\ldots,k-1$, are defined by (2.7). Then

$$\|q_1-\tilde q_1\|_2\le\eta_1\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big), \qquad (3.17)$$

where $\eta_1$ is defined as

$$\eta_1=\frac{\Big(\sum_{i=1}^{k-1}\varrho_i^2\Big)^{1/2}+\rho_1}{\beta_{k-1}} \qquad (3.18)$$

and $\varrho_i$, $\rho_1$ are defined in (3.14), (3.15).

Proof

$$\|q_1-\tilde q_1\|_2=\Big(\sum_{i=1}^{k-1}\big|S^{(1)}_{k-1,i}-\tilde S^{(1)}_{k-1,i}\big|^2\Big)^{1/2}=\Big(\sum_{i=1}^{k-1}\Big|\frac{\sqrt{x_i}}{\beta_{k-1}}-\frac{\sqrt{\tilde x_i}}{\tilde\beta_{k-1}}\Big|^2\Big)^{1/2}$$

$$\le\Big(\sum_{i=1}^{k-1}\Big|\frac{\sqrt{x_i}}{\beta_{k-1}}-\frac{\sqrt{\tilde x_i}}{\beta_{k-1}}\Big|^2\Big)^{1/2}+\Big(\sum_{i=1}^{k-1}\Big|\frac{\sqrt{\tilde x_i}}{\beta_{k-1}}-\frac{\sqrt{\tilde x_i}}{\tilde\beta_{k-1}}\Big|^2\Big)^{1/2}$$

$$=\frac{1}{\beta_{k-1}}\Big(\sum_{i=1}^{k-1}\big|\sqrt{x_i}-\sqrt{\tilde x_i}\big|^2\Big)^{1/2}+\frac{|\tilde\beta_{k-1}-\beta_{k-1}|}{\beta_{k-1}\tilde\beta_{k-1}}\Big(\sum_{i=1}^{k-1}\tilde x_i\Big)^{1/2}$$

$$\le\frac{\Big(\sum_{i=1}^{k-1}\varrho_i^2\Big)^{1/2}}{\beta_{k-1}}\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big)+\frac{|\tilde\beta_{k-1}-\beta_{k-1}|}{\beta_{k-1}}\le\frac{\Big(\sum_{i=1}^{k-1}\varrho_i^2\Big)^{1/2}+\rho_1}{\beta_{k-1}}\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big)=\eta_1\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big).$$

By the same method we immediately get the theorem below.

Theorem 3.4  Let $q_2=(S^{(2)}_{1,k},S^{(2)}_{1,k+1},\ldots,S^{(2)}_{1,n-1})^T$ and $\tilde q_2=(\tilde S^{(2)}_{1,k},\tilde S^{(2)}_{1,k+1},\ldots,\tilde S^{(2)}_{1,n-1})^T$, where $S^{(2)}_{1,j}$ and $\tilde S^{(2)}_{1,j}$, $j=k,k+1,\ldots,n-1$, are defined by (2.8). Then

$$\|q_2-\tilde q_2\|_2\le\eta_2\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big), \qquad (3.19)$$

where $\eta_2$ is defined as

$$\eta_2=\frac{\Big(\sum_{i=k}^{n-1}\varrho_i^2\Big)^{1/2}+\rho_2}{\beta_k} \qquad (3.20)$$

and $\varrho_i$, $\rho_2$ are defined in (3.14), (3.16).

Lemma 3.5 [2]  Let $q_1=(S^{(1)}_{k-1,1},\ldots,S^{(1)}_{k-1,k-1})^T$ and $q_2=(S^{(2)}_{1,k},\ldots,S^{(2)}_{1,n-1})^T$ with $S^{(1)}_{k-1,i}>0$ $(i=1,2,\ldots,k-1)$, $S^{(2)}_{1,i}>0$ $(i=k,k+1,\ldots,n-1)$ and $\|q_1\|_2=1$, $\|q_2\|_2=1$, and let $\Lambda_1=\operatorname{diag}(\mu_1,\mu_2,\ldots,\mu_{k-1})$ and $\Lambda_2=\operatorname{diag}(\mu_k,\mu_{k+1},\ldots,\mu_{n-1})$. Then the matrices

$$K_1=[\Lambda_1^{k-2}q_1,\Lambda_1^{k-3}q_1,\ldots,q_1]\in R^{(k-1)\times(k-1)}, \qquad (3.21)$$

$$K_2=[q_2,\Lambda_2q_2,\ldots,\Lambda_2^{n-k-1}q_2]\in R^{(n-k)\times(n-k)} \qquad (3.22)$$

are nonsingular, and $\|K_1^{-1}\|_2\le\alpha_1$, $\|K_2^{-1}\|_2\le\alpha_2$, where

$$\alpha_1=\sqrt{k-1}\,\max_{1\le j\le k-1}\prod_{i=1,i\neq j}^{k-1}\frac{1+|\mu_i|}{|\mu_j-\mu_i|}\Big/\min_{1\le i\le k-1}S^{(1)}_{k-1,i}, \qquad (3.23)$$

$$\alpha_2=\sqrt{n-k}\,\max_{k\le j\le n-1}\prod_{i=k,i\neq j}^{n-1}\frac{1+|\mu_i|}{|\mu_j-\mu_i|}\Big/\min_{k\le i\le n-1}S^{(2)}_{1,i}. \qquad (3.24)$$

Lemma 3.6 [4]  Let $A=QR$ be the QR factorization of $A\in R^{n\times n}$ with $\operatorname{rank}(A)=n$, and let $E\in R^{n\times n}$ satisfy

$$\|A^{-1}\|_2\|E\|_2<1. \qquad (3.25)$$

Then there is a unique QR factorization

$$A+E=(Q+W)(R+F) \qquad (3.26)$$

and

$$\|W\|_F\le\frac{(1+\sqrt2)\|A^{-1}\|_2}{1-\|A^{-1}\|_2\|E\|_2}\|E\|_F. \qquad (3.27)$$

In the same way, let $A=QL$ be the QL factorization of $A\in R^{n\times n}$ with $\operatorname{rank}(A)=n$, and let $E\in R^{n\times n}$ satisfy

$$\|A^{-1}\|_2\|E\|_2<1. \qquad (3.28)$$

Then there is a unique QL factorization

$$A+E=(Q+W)(L+F) \qquad (3.29)$$

and

$$\|W\|_F\le\frac{(1+\sqrt2)\|A^{-1}\|_2}{1-\|A^{-1}\|_2\|E\|_2}\|E\|_F. \qquad (3.30)$$

Theorem 3.7  Let $T_{1,k-1}$ and $\tilde T_{1,k-1}$ be the solutions obtained from the data (3.1) and (3.2), respectively. If $\alpha_1\zeta_1<1$, then

$$\|T_{1,k-1}-\tilde T_{1,k-1}\|_F\le l_1\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big), \qquad (3.31)$$

where $\alpha_1$ is defined in (3.23) and

$$\zeta_1=\sqrt{k-1}\,\big(\delta_1^{k-2}\eta_1+(k-2)\delta_1^{k-3}\big)\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big), \qquad (3.32)$$

$$l_1=1+\frac{2(1+\sqrt2)\,\alpha_1\|S1\|_2}{1-\alpha_1\zeta_1}\sum_{i=0}^{k-2}\big(\delta_1^i\eta_1+i\delta_1^{i-1}\big), \qquad (3.33)$$

$$\delta_1=\max\{\|S1\|_\infty,\|\widetilde{S1}\|_\infty,1\}. \qquad (3.34)$$

Proof  Start by observing that $T_{1,k-1}=Q_1\Lambda_1Q_1^T$, $\tilde T_{1,k-1}=\tilde Q_1\tilde\Lambda_1\tilde Q_1^T$ and

$$Q_1^Te_{k-1}=q_1,\qquad \tilde Q_1^Te_{k-1}=\tilde q_1. \qquad (3.35)$$

Let

$$W_1=\tilde Q_1-Q_1,\qquad \Omega_1=\tilde\Lambda_1-\Lambda_1. \qquad (3.36)$$

Thus

$$T_{1,k-1}-\tilde T_{1,k-1}=Q_1\Lambda_1Q_1^T-\tilde Q_1\tilde\Lambda_1\tilde Q_1^T=-Q_1\Lambda_1W_1^T+(Q_1\Lambda_1-\tilde Q_1\tilde\Lambda_1)\tilde Q_1^T$$

$$=-Q_1\Lambda_1W_1^T+(Q_1\Lambda_1-\tilde Q_1\Lambda_1+\tilde Q_1\Lambda_1-\tilde Q_1\tilde\Lambda_1)\tilde Q_1^T=-Q_1\Lambda_1W_1^T-W_1\Lambda_1\tilde Q_1^T-\tilde Q_1\Omega_1\tilde Q_1^T. \qquad (3.37)$$

Hence we have

$$\|T_{1,k-1}-\tilde T_{1,k-1}\|_F\le\|\Omega_1\|_F+2\|\Lambda_1\|_F\|W_1\|_F. \qquad (3.38)$$

As $K_1=[\Lambda_1^{k-2}q_1,\Lambda_1^{k-3}q_1,\ldots,q_1]\in R^{(k-1)\times(k-1)}$, it is easy to get that

$$Q_1K_1=[T_{1,k-1}^{k-2}e_{k-1},\,T_{1,k-1}^{k-3}e_{k-1},\,\ldots,\,e_{k-1}]=L_1, \qquad (3.39)$$

where $L_1$ is a lower triangular matrix with positive diagonal elements, and $K_1=Q_1^TL_1$ is the QL factorization of $K_1$. For the same reason we obtain the QL factorization of $\tilde K_1$ as $\tilde K_1=\tilde Q_1^T\tilde L_1$. Let

$$E_1=\tilde K_1-K_1. \qquad (3.40)$$

Then

$$\|E_1\|_2=\|\tilde K_1-K_1\|_2=\big\|[\tilde\Lambda_1^{k-2}\tilde q_1-\Lambda_1^{k-2}q_1,\;\tilde\Lambda_1^{k-3}\tilde q_1-\Lambda_1^{k-3}q_1,\;\ldots,\;\tilde q_1-q_1]\big\|_2.$$

On the other hand we notice that

$$\|E_1\|_2\le\|E_1\|_F\le\sqrt{k-1}\,\max_{0\le i\le k-2}\|\tilde\Lambda_1^i\tilde q_1-\Lambda_1^iq_1\|_2\le\sqrt{k-1}\,\max_{0\le i\le k-2}\big(\|\tilde\Lambda_1^i\|_2\|\tilde q_1-q_1\|_2+\|\tilde\Lambda_1^i-\Lambda_1^i\|_2\big)$$

$$\le\sqrt{k-1}\,\big(\delta_1^{k-2}\|\tilde q_1-q_1\|_2+(k-2)\delta_1^{k-3}\|\tilde\Lambda_1-\Lambda_1\|_2\big)\le\sqrt{k-1}\,\big(\delta_1^{k-2}\eta_1+(k-2)\delta_1^{k-3}\big)\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big)=\zeta_1. \qquad (3.41)$$

Thus by Lemma 3.5 and Lemma 3.6 we know that if

$$\|K_1^{-1}\|_2\|E_1\|_2\le\alpha_1\zeta_1<1,$$

then there is a unique QL factorization

$$K_1+E_1=(Q_1^T+W_1^T)(L_1+F_1)$$

and

$$\|W_1\|_F\le\frac{(1+\sqrt2)\|K_1^{-1}\|_2}{1-\|K_1^{-1}\|_2\|E_1\|_2}\|E_1\|_F\le\frac{(1+\sqrt2)\alpha_1}{1-\alpha_1\zeta_1}\|E_1\|_F. \qquad (3.42)$$

By Theorem 3.3 we can also get

$$\|E_1\|_F=\|\tilde K_1-K_1\|_F=\big\|[\tilde\Lambda_1^{k-2}\tilde q_1-\Lambda_1^{k-2}q_1,\ldots,\tilde q_1-q_1]\big\|_F\le\sum_{i=0}^{k-2}\|\tilde\Lambda_1^i\tilde q_1-\Lambda_1^iq_1\|_2$$

$$\le\sum_{i=0}^{k-2}\big(\delta_1^i\|\tilde q_1-q_1\|_2+i\delta_1^{i-1}\|\tilde\Lambda_1-\Lambda_1\|_2\big)\le\sum_{i=0}^{k-2}\big(\delta_1^i\eta_1+i\delta_1^{i-1}\big)\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big). \qquad (3.43)$$

Thus, substituting (3.42) and (3.43) into (3.38), and using $\|\Omega_1\|_F\le\|\tilde\mu-\mu\|_1$ and $\|\Lambda_1\|_F=\|S1\|_2$, we get the inequality

$$\|T_{1,k-1}-\tilde T_{1,k-1}\|_F\le\|\Omega_1\|_F+2\|\Lambda_1\|_F\|W_1\|_F\le\Big[1+\frac{2(1+\sqrt2)\,\alpha_1\|S1\|_2}{1-\alpha_1\zeta_1}\sum_{i=0}^{k-2}\big(\delta_1^i\eta_1+i\delta_1^{i-1}\big)\Big]\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big)=l_1\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big). \qquad (3.44)$$

From this we have proved Theorem 3.7.

Theorem 3.8  Let $T_{k+1,n}$ and $\tilde T_{k+1,n}$ be the solutions obtained from the data (3.1) and (3.2), respectively. If $\alpha_2\zeta_2<1$, then

$$\|T_{k+1,n}-\tilde T_{k+1,n}\|_F\le l_2\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big), \qquad (3.45)$$

where $\alpha_2$ is defined in (3.24) and

$$\zeta_2=\sqrt{n-k}\,\big(\delta_2^{n-k-1}\eta_2+(n-k-1)\delta_2^{n-k-2}\big)\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big), \qquad (3.46)$$

$$l_2=1+\frac{2(1+\sqrt2)\,\alpha_2\|S2\|_2}{1-\alpha_2\zeta_2}\sum_{i=0}^{n-k-1}\big(\delta_2^i\eta_2+i\delta_2^{i-1}\big), \qquad (3.47)$$

$$\delta_2=\max\{\|S2\|_\infty,\|\widetilde{S2}\|_\infty,1\}. \qquad (3.48)$$

Proof  The proof is entirely similar to that of the theorem above. The only difference to notice is that (3.39) changes to

$$Q_2K_2=[e_1,\,T_{k+1,n}e_1,\,\ldots,\,T_{k+1,n}^{n-k-1}e_1]=R_2,$$

where $R_2$ is an upper triangular matrix and $K_2=Q_2^TR_2$ is the QR factorization of $K_2$.

Theorem 3.9  Let $T_{1,n}$ and $\tilde T_{1,n}$ be the solutions obtained from the data (3.1) and (3.2), respectively. If $\max\{\alpha_1\zeta_1,\alpha_2\zeta_2\}<1$, then

$$\|T_{1,n}-\tilde T_{1,n}\|_F\le L\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big), \qquad (3.49)$$

where

$$L=1+\rho_1+\rho_2+l_1+l_2 \qquad (3.50)$$

and $\rho_1$, $\rho_2$, $l_1$, $l_2$ are defined by (3.15), (3.16), (3.33), (3.47), respectively.

Proof  By Theorems 3.2, 3.7 and 3.8, and since $|\alpha_k-\tilde\alpha_k|\le\|\tilde\lambda-\lambda\|_1+\|\tilde\mu-\mu\|_1$ by (2.3), we obtain

$$\|T_{1,n}-\tilde T_{1,n}\|_F\le\|T_{1,k-1}-\tilde T_{1,k-1}\|_F+\|T_{k+1,n}-\tilde T_{k+1,n}\|_F+|\beta_{k-1}-\tilde\beta_{k-1}|+|\beta_k-\tilde\beta_k|+|\alpha_k-\tilde\alpha_k|$$

$$\le(1+\rho_1+\rho_2+l_1+l_2)\big(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\big).$$

Theorem 3.10  Algorithm 2.2 is Lipschitz continuous.

Proof  By the Cauchy-Schwarz inequality

$$\sum_{i=1}^{m}|x_iy_i|\le\Big(\sum_{i=1}^{m}x_i^2\Big)^{1/2}\Big(\sum_{i=1}^{m}y_i^2\Big)^{1/2},$$

applied with $y_i\equiv1$ and $m=2n-1$ so that $\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1\le\sqrt{2n-1}\,\big(\|\tilde\mu-\mu\|_2^2+\|\tilde\lambda-\lambda\|_2^2\big)^{1/2}$, we get from Theorem 3.9 immediately that

$$\|T_{1,n}-\tilde T_{1,n}\|_F\le\sqrt{2n-1}\,L\big(\|\tilde\mu-\mu\|_2^2+\|\tilde\lambda-\lambda\|_2^2\big)^{1/2}.$$

Thus we obtain our result.
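As an empirical illustration of Theorem 3.9, here is a short sketch, assuming NumPy and reusing the hypothetical `solve_k_problem` helper sketched after Algorithm 2.2 above: for small random perturbations of the spectral data, the observed ratio $\|T_{1,n}-\tilde T_{1,n}\|_F/(\|\tilde\mu-\mu\|_1+\|\tilde\lambda-\lambda\|_1)$ stays bounded.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 9, 5
T = np.diag(np.arange(1.0, 10.0)) + np.diag(np.ones(8), 1) + np.diag(np.ones(8), -1)
lam = np.linalg.eigvalsh(T)
S1 = np.linalg.eigvalsh(T[:k - 1, :k - 1])
S2 = np.linalg.eigvalsh(T[k:, k:])
T0 = solve_k_problem(lam, S1, S2, k)

ratios = []
for _ in range(100):
    # perturbations far smaller than the spectral gaps, so (2.1) still holds
    dl, d1, d2 = (1e-8 * rng.standard_normal(m) for m in (n, k - 1, n - k))
    Tp = solve_k_problem(lam + dl, S1 + d1, S2 + d2, k)
    denom = np.sum(np.abs(np.r_[d1, d2])) + np.sum(np.abs(dl))
    ratios.append(np.linalg.norm(Tp - T0) / denom)
print(max(ratios))   # stays of moderate size, consistent with Theorem 3.9
```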

4 Numerical Example

Now we solve a practical example using Algorithm 2.2 and examine its stability. Let

$$T_{1,9}=\begin{pmatrix}
1&1&0&0&0&0&0&0&0\\
1&2&1&0&0&0&0&0&0\\
0&1&3&1&0&0&0&0&0\\
0&0&1&4&1&0&0&0&0\\
0&0&0&1&5&1&0&0&0\\
0&0&0&0&1&6&1&0&0\\
0&0&0&0&0&1&7&1&0\\
0&0&0&0&0&0&1&8&1\\
0&0&0&0&0&0&0&1&9
\end{pmatrix}. \qquad (4.1)$$

Its eigenvalues are

$$\lambda=\begin{pmatrix}
0.25380581710031\\
1.78932135473495\\
2.96105907080106\\
3.99605612592861\\
5.00000000000000\\
6.00394387407139\\
7.03894092919894\\
8.21067864526506\\
9.74619418289969
\end{pmatrix}. \qquad (4.2)$$

Pick $k=5$ and delete the 5th row and 5th column from $T_{1,9}$. This leaves the two submatrices $T_{1,4}$ and $T_{6,9}$, whose eigenvalues are

$$S1=\begin{pmatrix}
0.25471875982586\\
1.82271708088711\\
3.17728291911289\\
4.74528124017414
\end{pmatrix} \qquad (4.3)$$

and

$$S2=\begin{pmatrix}
5.25471875982586\\
6.82271708088711\\
8.17728291911289\\
9.74528124017414
\end{pmatrix}, \qquad (4.4)$$

respectively. Thus we can construct a Jacobi matrix $T^{(1)}_{1,9}$ by Algorithm 2.2 from the data $\lambda$, $S1$ and $S2$. Let $\varepsilon_1=0.00000001\cdot(1,1,1,1,1,1,1,1,1)$, $\varepsilon_2=0.00000001\cdot(1,1,1,1)$ and $\varepsilon_3=0.0000001\cdot(1,1,1,1)$; then we construct $T^{(2)}_{1,9}$ from the data $\lambda+\varepsilon_1$, $S1+\varepsilon_2$, $S2+\varepsilon_2$, and $T^{(3)}_{1,9}$ from the data $\lambda+\varepsilon_1$, $S1+\varepsilon_3$, $S2+\varepsilon_3$, respectively.
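A sketch of this experiment in NumPy, reusing the hypothetical `solve_k_problem` helper from the Algorithm 2.2 sketch above:

```python
import numpy as np

n, k = 9, 5
T = np.diag(np.arange(1.0, 10.0)) + np.diag(np.ones(8), 1) + np.diag(np.ones(8), -1)

lam = np.linalg.eigvalsh(T)                  # lambda, cf. (4.2)
S1 = np.linalg.eigvalsh(T[:k - 1, :k - 1])   # eigenvalues of T_{1,4}, cf. (4.3)
S2 = np.linalg.eigvalsh(T[k:, k:])           # eigenvalues of T_{6,9}, cf. (4.4)

T1 = solve_k_problem(lam, S1, S2, k)         # T^{(1)}_{1,9}

eps1 = 1e-8 * np.ones(9)
eps2 = 1e-8 * np.ones(4)
eps3 = 1e-7 * np.ones(4)
T2 = solve_k_problem(lam + eps1, S1 + eps2, S2 + eps2, k)   # T^{(2)}_{1,9}
T3 = solve_k_problem(lam + eps1, S1 + eps3, S2 + eps3, k)   # T^{(3)}_{1,9}

for name, Trec in (("T(1)", T1), ("T(2)", T2), ("T(3)", T3)):
    print(name, np.linalg.norm(T - Trec, "fro"))
```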

The following table compares $T_{1,9}$ with $T^{(1)}_{1,9}$, $T^{(2)}_{1,9}$ and $T^{(3)}_{1,9}$; the entries are the diagonal elements $\alpha_i$ (top block) and the off-diagonal elements $\beta_i$ (bottom block) of each matrix.

  α_i   T_{1,9}   T^{(1)}_{1,9}        T^{(2)}_{1,9}        T^{(3)}_{1,9}
  1     1         1.00000000000022     1.00000000000022     1.00004369610216
  2     2         1.99999999999985     1.99999999999985     1.99997034564175
  3     3         2.99999999999993     2.99999999999993     2.99998806134363
  4     4         4.00000000000000     4.00000000000000     3.99999789691246
  5     5         5.00000000000001     5.00000001000001     4.99999929000001
  6     6         5.99999999999999     5.99999999999999     5.99999789690966
  7     7         6.99999999999988     6.99999999999988     6.99998806114693
  8     8         7.99999999999975     7.99999999999975     7.99997034236897
  9     9         9.00000000000038     9.00000000000038     9.00004369957443

  β_i   T_{1,9}   T^{(1)}_{1,9}        T^{(2)}_{1,9}        T^{(3)}_{1,9}
  1     1         1.00000000000007     1.00000000000007     1.00001332604870
  2     1         1.00000000000006     1.00000000000006     1.00001366460058
  3     1         1.00000000000002     1.00000000000002     1.00000292889717
  4     1         1.00000000000000     1.00000000000000     1.00000025345542
  5     1         1.00000000000000     1.00000000000000     0.99999974654437
  6     1         0.99999999999998     0.99999999999998     0.99999707108586
  7     1         0.99999999999988     0.99999999999988     0.99998633482702
  8     1         0.99999999999988     0.99999999999988     0.99998667086250

References

[1] Jiang E X. An inverse eigenvalue problem for Jacobi matrices. J. Comput. Math., Vol. 21, No. 3, Sept. 2003: 569-584.
[2] Xu S F. A stability analysis of the Jacobi matrix inverse eigenvalue problem. BIT, 1993, 33: 695-702.
[3] Hald O H. Inverse eigenvalue problems for Jacobi matrices. Linear Algebra Appl., 1976, 14: 63-65.
[4] Sun J G. Perturbation bounds for the Cholesky and QR factorizations. BIT, 1991, 33: 341-352.
[5] Gautschi W. Norm estimates for inverses of Vandermonde matrices. Numer. Math., 1975, 23: 337-347.
[6] Jiang E X. Symmetric Matrix Computation (in Chinese). Shanghai: Shanghai Science and Technology Press, 1984.
[7] Wilkinson J H. The Algebraic Eigenvalue Problem (in Chinese). Beijing: Science Press, 2001.

Hou Wenyuan  Dept. of Math., Shanghai University, Shanghai 200436, PRC.
Jiang Erxiong  Dept. of Math., Shanghai University, Shanghai 200436, PRC.