Optimal Parameter in Hermitian and Skew-Hermitian Splitting Method for Certain Two-by-Two Block Matrices

Chi-Kwong Li, Department of Mathematics, The College of William and Mary, Williamsburg, Virginia 23187-8795, ckli@math.wm.edu

Joint work with Zhong-Zhi Bai (Chinese Academy of Sciences) and Gene Golub (Stanford University)

The HSS Iteration

Let $A = H + S \in \mathbb{C}^{n \times n}$ be a sparse matrix, where $H = (A + A^*)/2$ and $S = (A - A^*)/2$, so that $H$ is Hermitian positive definite and $S \ne 0$ is skew-Hermitian. To solve the linear system $Ax = b$, consider the following HSS (Hermitian and Skew-Hermitian Splitting) iteration scheme proposed in [Bai, Golub, Ng, 2003].

Given an initial guess $x^{(0)} \in \mathbb{C}^n$, compute $x^{(k)}$ for $k = 0, 1, 2, \ldots$ using the following iteration scheme until $\{x^{(k)}\}$ satisfies the stopping criterion:
$$(\alpha I + H)\,x^{(k+\frac{1}{2})} = (\alpha I - S)\,x^{(k)} + b,$$
$$(\alpha I + S)\,x^{(k+1)} = (\alpha I - H)\,x^{(k+\frac{1}{2})} + b,$$
where $\alpha$ is a given positive constant.
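A minimal sketch of the scheme in Python/NumPy for small dense systems; the matrix `A`, tolerance, and iteration cap are illustrative assumptions, and a production code would reuse factorizations of $\alpha I + H$ and $\alpha I + S$ across iterations:

```python
import numpy as np

def hss_iteration(A, b, alpha, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b by the HSS iteration of Bai, Golub, and Ng (2003)."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2            # Hermitian part
    S = (A - A.conj().T) / 2            # skew-Hermitian part
    I = np.eye(n)
    x = np.zeros(n, dtype=complex) if x0 is None else x0
    for k in range(max_iter):
        # First half-step: (alpha*I + H) x_half = (alpha*I - S) x + b
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # Second half-step: (alpha*I + S) x_new = (alpha*I - H) x_half + b
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x
```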

In matrix-vector form, we have
$$x^{(k+1)} = M(\alpha)\,x^{(k)} + b(\alpha), \qquad k = 0, 1, 2, \ldots, \qquad (1)$$
where $b(\alpha) = 2\alpha(\alpha I + S)^{-1}(\alpha I + H)^{-1}b$ and
$$M(\alpha) = (\alpha I + S)^{-1}(\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S) \qquad (2)$$
is the iteration matrix of the HSS method.

Note that (1) may also result from the splitting $A = B(\alpha) - C(\alpha)$ of the coefficient matrix $A$, with
$$B(\alpha) = \tfrac{1}{2\alpha}(\alpha I + H)(\alpha I + S), \qquad C(\alpha) = \tfrac{1}{2\alpha}(\alpha I - H)(\alpha I - S).$$
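Under the same illustrative assumptions, $M(\alpha)$ and $b(\alpha)$ can be built directly from (1)-(2); this hypothetical helper is reused in the sketches below:

```python
def hss_matrices(A, b, alpha):
    """Return the iteration matrix M(alpha) and vector b(alpha) of (1)-(2)."""
    n = A.shape[0]
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    I = np.eye(n)
    inv_aS = np.linalg.inv(alpha * I + S)   # (alpha*I + S)^{-1}
    inv_aH = np.linalg.inv(alpha * I + H)   # (alpha*I + H)^{-1}
    M = inv_aS @ (alpha * I - H) @ inv_aH @ (alpha * I - S)
    return M, 2 * alpha * inv_aS @ inv_aH @ b
```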

The HSS scheme always converges, because the iteration matrix
$$M(\alpha) = (\alpha I + S)^{-1}(\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S)$$
and the matrix
$$\widetilde M(\alpha) = (\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S)(\alpha I + S)^{-1}$$
have the same eigenvalues. Furthermore, since $(\alpha I - S)(\alpha I + S)^{-1}$ is unitary, $\widetilde M(\alpha)$ has singular values
$$\frac{|\alpha - \lambda_j(H)|}{\alpha + \lambda_j(H)}, \qquad j = 1, \ldots, n.$$
So, the spectral radius $\rho(M(\alpha)) = \rho(\widetilde M(\alpha))$ is bounded above by $\|\widetilde M(\alpha)\| < 1$.

Problem. How to find the optimal $\alpha^*$ so that $\rho(M(\alpha^*)) \le \rho(M(\alpha))$ for all $\alpha > 0$?
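Before any closed-form analysis, the optimal parameter can be located by brute force; a sketch on an assumed 2-by-2 example with $H = \mathrm{diag}(4, 1)$ and $q = 2$ (this running example is reused below):

```python
def spectral_radius(M):
    """Largest eigenvalue modulus of M."""
    return np.max(np.abs(np.linalg.eigvals(M)))

A_ex = np.array([[4.0, 2.0], [-2.0, 1.0]])   # H = diag(4, 1), S = [[0, 2], [-2, 0]]
b_ex = np.ones(2)

alphas = np.linspace(0.05, 10.0, 2000)
radii = [spectral_radius(hss_matrices(A_ex, b_ex, a)[0]) for a in alphas]
alpha_best = alphas[int(np.argmin(radii))]   # grid estimate of alpha*
```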

Let $\lambda_1$ and $\lambda_2$ be the maximum and minimum eigenvalues of $H$. If $\widetilde\alpha = \sqrt{\lambda_1\lambda_2}$, then
$$\frac{\sqrt{\lambda_1} - \sqrt{\lambda_2}}{\sqrt{\lambda_1} + \sqrt{\lambda_2}} = \|\widetilde M(\widetilde\alpha)\| \le \|\widetilde M(\alpha)\|, \qquad \alpha > 0.$$
Thus,
$$\rho(M(\alpha^*)) \le \rho(M(\widetilde\alpha)) \le \frac{\sqrt{\lambda_1} - \sqrt{\lambda_2}}{\sqrt{\lambda_1} + \sqrt{\lambda_2}}.$$
In most situations,
$$\rho(M(\alpha^*)) \ll \frac{\sqrt{\lambda_1} - \sqrt{\lambda_2}}{\sqrt{\lambda_1} + \sqrt{\lambda_2}}.$$

Problem. Can we do better?

The two-by-two real case

Theorem 1. Let $A = H + S \in \mathbb{R}^{2 \times 2}$ be such that $H$ is symmetric positive definite and $S$ is skew-symmetric. Suppose $H$ has eigenvalues $\lambda_1 \ge \lambda_2 > 0$ and $\det(S) = q^2$ with $q \in \mathbb{R}$. Then the two eigenvalues of the iteration matrix $M(\alpha)$ are
$$\lambda_\pm = \frac{(\alpha^2 - \lambda_1\lambda_2)(\alpha^2 - q^2) \pm \sqrt{\Delta(\alpha)}}{(\alpha + \lambda_1)(\alpha + \lambda_2)(\alpha^2 + q^2)},$$
where
$$\Delta(\alpha) = (\alpha^2 - \lambda_1\lambda_2)^2(\alpha^2 - q^2)^2 - (\alpha^2 - \lambda_1^2)(\alpha^2 - \lambda_2^2)(\alpha^2 + q^2)^2.$$

As a result,
$$\rho(M(\alpha)) = \frac{|(\alpha^2 - \lambda_1\lambda_2)(\alpha^2 - q^2)| + \sqrt{\Delta(\alpha)}}{(\alpha + \lambda_1)(\alpha + \lambda_2)(\alpha^2 + q^2)} \quad \text{if } \Delta(\alpha) \ge 0;$$
$$\rho(M(\alpha)) = \sqrt{\frac{(\alpha - \lambda_1)(\alpha - \lambda_2)}{(\alpha + \lambda_1)(\alpha + \lambda_2)}} \quad \text{if } \Delta(\alpha) < 0.$$

Proof. Apply an orthogonal similarity, and assume that
$$H = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} \quad \text{and} \quad S = \begin{bmatrix} 0 & q \\ -q & 0 \end{bmatrix} \quad \text{with } q \in \mathbb{R}.$$
Then $(\alpha I + H)^{-1}(\alpha I - H)(\alpha I + S)^{-1}(\alpha I - S)$ equals
$$\frac{1}{\alpha^2 + q^2} \begin{bmatrix} \dfrac{(\alpha^2 - q^2)(\alpha - \lambda_1)}{\alpha + \lambda_1} & \dfrac{-2q\alpha(\alpha - \lambda_1)}{\alpha + \lambda_1} \\[3mm] \dfrac{2q\alpha(\alpha - \lambda_2)}{\alpha + \lambda_2} & \dfrac{(\alpha^2 - q^2)(\alpha - \lambda_2)}{\alpha + \lambda_2} \end{bmatrix}.$$
The formula for $\lambda_\pm$ and the assertion on $\rho(M(\alpha))$ follow.
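A quick numerical check of Theorem 1, with parameters $\lambda_1 = 4$, $\lambda_2 = 1$, $q = 2$, $\alpha = 1.5$ chosen arbitrarily: the closed-form $\lambda_\pm$ should match the eigenvalues of $M(\alpha)$ computed directly.

```python
lam1, lam2, q, alpha = 4.0, 1.0, 2.0, 1.5

H = np.diag([lam1, lam2])
S = np.array([[0.0, q], [-q, 0.0]])
I = np.eye(2)
M = np.linalg.inv(alpha*I + S) @ (alpha*I - H) @ np.linalg.inv(alpha*I + H) @ (alpha*I - S)

core = (alpha**2 - lam1*lam2) * (alpha**2 - q**2)
Delta = core**2 - (alpha**2 - lam1**2) * (alpha**2 - lam2**2) * (alpha**2 + q**2)**2
denom = (alpha + lam1) * (alpha + lam2) * (alpha**2 + q**2)
lam_pm = (core + np.array([1.0, -1.0]) * np.sqrt(complex(Delta))) / denom

print(sorted(np.linalg.eigvals(M), key=abs))   # should agree with...
print(sorted(lam_pm, key=abs))                 # ...the closed-form eigenvalues
```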

One may want to use the formula of $\rho(M(\alpha))$ in Theorem 1 to determine the optimal choice of $\alpha$. It turns out that the analysis is very complicated and not productive. The main difficulty is the expression
$$\Delta(\alpha) = (\alpha^2 - \lambda_1\lambda_2)^2(\alpha^2 - q^2)^2 - (\alpha^2 - \lambda_1^2)(\alpha^2 - \lambda_2^2)(\alpha^2 + q^2)^2 \qquad (3)$$
in the formula of $\rho(M(\alpha))$.

Here, we use a different approach that allows us to avoid the complicated expression (3). The key idea is:

If $X$ has eigenvalues $\gamma_1, \gamma_2$, then $\rho(X) = \max\{|\gamma_1|, |\gamma_2|\}$, so that
$$\rho(X) \ge |\gamma_1 + \gamma_2|/2 = |\operatorname{tr} X|/2 \quad \text{and} \quad \rho(X)^2 \ge |\gamma_1\gamma_2| = |\det(X)|.$$

For notational simplicity, write $\rho(\alpha) = \rho(M(\alpha))$, and
$$\tau(\alpha) = \left\{ \frac{\operatorname{trace}(M(\alpha))}{2} \right\}^2 = \left\{ \frac{(\alpha^2 - q^2)(\alpha^2 - \lambda_1\lambda_2)}{(\alpha^2 + q^2)(\alpha + \lambda_1)(\alpha + \lambda_2)} \right\}^2,$$
$$\delta(\alpha) = \det(M(\alpha)) = \frac{(\alpha - \lambda_1)(\alpha - \lambda_2)}{(\alpha + \lambda_1)(\alpha + \lambda_2)},$$
$$\omega(\alpha) = \max\{\tau(\alpha), \delta(\alpha)\}.$$
Then $\rho(\alpha)^2 \ge \omega(\alpha)$.

Note that
$$1 = \tau(0) = \lim_{\alpha \to +\infty} \tau(\alpha) \quad \text{and} \quad 1 = \delta(0) = \lim_{\alpha \to +\infty} \delta(\alpha).$$
Thus,
$$\lim_{\alpha \to +\infty} \omega(\alpha) = \omega(0) = 1 > \omega(\xi) \quad \text{for all } \xi > 0.$$
Since $\omega(\alpha)$ is continuous and nonnegative, there exists $\alpha^* > 0$ such that
$$\omega(\alpha^*) = \min\{\omega(\alpha) : \alpha > 0\}.$$

We show that
$$\tau(\alpha^*) = \delta(\alpha^*). \qquad (4)$$

As a result, the eigenvalues of $M(\alpha^*)$ have the same modulus, and thus
$$\rho(\alpha)^2 \ge \omega(\alpha) \ge \omega(\alpha^*) = \rho(\alpha^*)^2 \quad \text{for all } \alpha > 0.$$
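The same argument can be traced numerically: evaluate $\tau$, $\delta$, $\omega$ in closed form on a grid, minimize $\omega$, and observe that $\tau \approx \delta$ at the minimizer. A sketch with the running example $\lambda_1 = 4$, $\lambda_2 = 1$, $q = 2$:

```python
def tau(a, lam1, lam2, q):
    """tau(alpha) = (trace M(alpha) / 2)^2 in closed form."""
    return ((a**2 - q**2) * (a**2 - lam1*lam2)
            / ((a**2 + q**2) * (a + lam1) * (a + lam2)))**2

def delta_det(a, lam1, lam2):
    """delta(alpha) = det M(alpha); q cancels out of the closed form."""
    return (a - lam1) * (a - lam2) / ((a + lam1) * (a + lam2))

lam1, lam2, q = 4.0, 1.0, 2.0
a = np.linspace(0.05, 10.0, 5000)
omega = np.maximum(tau(a, lam1, lam2, q), delta_det(a, lam1, lam2))
a_star = a[int(np.argmin(omega))]
# Check (4): tau(a_star, ...) and delta_det(a_star, ...) should nearly coincide.
```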

Theorem 2. Let the assumptions of Theorem 1 be satisfied and define the functions $\tau$ and $\delta$ as above. Then the optimal $\alpha^* > 0$ satisfying
$$\rho(M(\alpha^*)) = \min\{\rho(M(\alpha)) : \alpha > 0\}$$
lies in the finite set
$$S = \{\alpha > 0 : \tau(\alpha) = \delta(\alpha)\},$$
which consists of numbers $\alpha > 0$ satisfying
$$(\alpha^2 + q^2)^2(\alpha^2 - \lambda_1^2)(\alpha^2 - \lambda_2^2) = (\alpha^2 - q^2)^2(\alpha^2 - \lambda_1\lambda_2)^2 \qquad (5)$$
or
$$(\alpha^2 + q^2)^2(\lambda_1^2 - \alpha^2)(\alpha^2 - \lambda_2^2) = (\alpha^2 - q^2)^2(\alpha^2 - \lambda_1\lambda_2)^2. \qquad (6)$$

Proof. Two pages of detailed analysis.

Remark. Use the substitution $\beta = \alpha^2$ in (5) and (6) to get degree two/three polynomial equations!
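A sketch of the remark: with $\beta = \alpha^2$, both (5) and (6) become low-degree polynomial equations whose positive real roots give the candidate set $S$. The values below reuse the running example, and `spectral_radius`, `hss_matrices`, `A_ex`, `b_ex` are the hypothetical helpers defined earlier:

```python
from numpy.polynomial import Polynomial as P

def candidate_alphas(lam1, lam2, q):
    """Positive roots of (5) and (6) after substituting beta = alpha**2."""
    b = P([0.0, 1.0])                                  # the variable beta
    rhs = (b - q**2)**2 * (b - lam1*lam2)**2
    eq5 = (b + q**2)**2 * (b - lam1**2) * (b - lam2**2) - rhs
    eq6 = (b + q**2)**2 * (lam1**2 - b) * (b - lam2**2) - rhs
    alphas = []
    for eq in (eq5, eq6):
        for beta in eq.roots():
            if abs(beta.imag) < 1e-10 and beta.real > 0:
                alphas.append(float(np.sqrt(beta.real)))
    return sorted(set(np.round(alphas, 12)))

cands = candidate_alphas(4.0, 1.0, 2.0)
alpha_star = min(cands, key=lambda a: spectral_radius(hss_matrices(A_ex, b_ex, a)[0]))
```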

Applications to two-by-two block matrices

Theorem 3. Suppose $A = H + S \in \mathbb{C}^{n \times n}$ with
$$H = \frac{1}{2}(A + A^*) = \begin{bmatrix} \lambda_1 I_r & 0 \\ 0 & \lambda_2 I_s \end{bmatrix} \quad \text{and} \quad S = \frac{1}{2}(A - A^*) = \begin{bmatrix} 0 & E \\ -E^* & 0 \end{bmatrix},$$
where $\lambda_1 > \lambda_2 > 0$, and the nonzero matrix $E \in \mathbb{C}^{r \times s}$ has nonzero singular values $q_1 \ge q_2 \ge \cdots \ge q_k$. Then the spectral radius of the iteration matrix
$$M(\alpha) = (\alpha I + S)^{-1}(\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S)$$
attains its minimum at $\alpha^*$, which is $\sqrt{\lambda_1\lambda_2}$, $\sqrt{q_1 q_k}$, or a root of one of the following equations:
$$(\alpha^2 + q_j^2)^2(\alpha^2 - \lambda_1^2)(\alpha^2 - \lambda_2^2) = (\alpha^2 - q_j^2)^2(\alpha^2 - \lambda_1\lambda_2)^2$$
or
$$(\alpha^2 + q_j^2)^2(\lambda_1^2 - \alpha^2)(\alpha^2 - \lambda_2^2) = (\alpha^2 - q_j^2)^2(\alpha^2 - \lambda_1\lambda_2)^2,$$
where $j = 1$ or $k$.
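Theorem 3 can be exercised numerically; a hedged sketch with an assumed block example ($r = s = 2$, $\lambda_1 = 4$, $\lambda_2 = 1$, and $E$ with singular values $q_1 = 3$, $q_k = 1$), reusing the hypothetical `candidate_alphas`, `spectral_radius`, and `hss_matrices` from above:

```python
lam1, lam2 = 4.0, 1.0
E = np.array([[3.0, 0.0], [0.0, 1.0]])        # singular values 3 and 1
sv = np.linalg.svd(E, compute_uv=False)
q1, qk = sv[0], sv[-1]

# Candidate set from Theorem 3: sqrt(lam1*lam2), sqrt(q1*qk), and the roots for j = 1, k.
cands = {float(np.sqrt(lam1*lam2)), float(np.sqrt(q1*qk))}
for qj in (q1, qk):
    cands.update(candidate_alphas(lam1, lam2, qj))

H = np.diag([lam1, lam1, lam2, lam2])
S = np.block([[np.zeros((2, 2)), E], [-E.T, np.zeros((2, 2))]])
A_blk, b_blk = H + S, np.ones(4)
alpha_star = min(cands, key=lambda a: spectral_radius(hss_matrices(A_blk, b_blk, a)[0]))
```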

Estimation of optimal parameters for n-by-n matrices

In general, for a nonsymmetric positive definite system of linear equations $Ax = b$, the eigenvalues of the coefficient matrix $A$ lie in
$$D = \{x + iy : \lambda_2 \le x \le \lambda_1, \ -q \le y \le q\},$$
where $\lambda_1$ and $\lambda_2$ are the largest and smallest eigenvalues of $H$, and $q$ is the largest modulus of the eigenvalues of the skew-Hermitian part $S$ of the coefficient matrix $A$.

A reduced (simpler and lower-dimensional) matrix $A_R$ whose eigenvalues possess the same contour as the domain $D$ is used to approximate the matrix $A$. For instance, a simple choice of the reduced matrix is given by
$$A_R = \begin{bmatrix} \lambda_1 & q \\ -q & \lambda_2 \end{bmatrix} \quad \text{with } q = \|S\| \text{ or } q = \rho(H^{-1}S)\sqrt{\lambda_1\lambda_2}.$$
We then use our results to estimate the optimal parameter $\alpha^*$ of the HSS iteration method as follows.

Estimation. Let $A \in \mathbb{R}^{n \times n}$ be a positive definite matrix, and let $H, S \in \mathbb{R}^{n \times n}$ be its symmetric and skew-symmetric parts, respectively. Let $\lambda_1$ and $\lambda_2$ be the largest and smallest eigenvalues of $H$. Suppose $q = \|S\|$ or $q = \rho(H^{-1}S)\sqrt{\lambda_1\lambda_2}$. Then one can use the positive roots of the equation
$$(\alpha^2 + q^2)^2(\alpha^2 - \lambda_1^2)(\alpha^2 - \lambda_2^2) = (\alpha^2 - q^2)^2(\alpha^2 - \lambda_1\lambda_2)^2$$
or
$$(\alpha^2 + q^2)^2(\lambda_1^2 - \alpha^2)(\alpha^2 - \lambda_2^2) = (\alpha^2 - q^2)^2(\alpha^2 - \lambda_1\lambda_2)^2$$
to estimate the optimal parameter $\alpha^* > 0$ satisfying
$$\rho(M(\alpha^*)) = \min\{\rho(M(\alpha)) : \alpha > 0\}.$$

Numerical examples were given to illustrate that the estimates are useful.
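An end-to-end sketch of the estimation procedure for a general real positive definite `A`, under the assumptions above and reusing the hypothetical helpers `candidate_alphas`, `spectral_radius`, and `hss_matrices`:

```python
def estimate_alpha(A, b, use_norm=True):
    """Estimate the optimal HSS parameter via the reduced-matrix equations."""
    H = (A + A.T) / 2
    S = (A - A.T) / 2
    eigH = np.linalg.eigvalsh(H)                       # ascending order
    lam2, lam1 = eigH[0], eigH[-1]
    if use_norm:
        q = np.linalg.norm(S, 2)                       # q = ||S||
    else:                                              # q = rho(H^{-1} S) sqrt(lam1*lam2)
        q = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(H, S)))) * np.sqrt(lam1*lam2)
    cands = candidate_alphas(lam1, lam2, q) or [float(np.sqrt(lam1*lam2))]
    # Among the candidates, pick the one minimizing the true spectral radius.
    return min(cands, key=lambda a: spectral_radius(hss_matrices(A, b, a)[0]))
```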

Further research

Determine the optimal parameters for other classes of matrices $A$.

Thank you for your attention!