Two-parameter preconditioned NSS method for non-Hermitian and positive definite linear systems

Communication on Applied Mathematics and Computation, Vol. 27, No. 3, Sept. 2013
DOI 10.3969/j.issn.1006-6330.2013.03.005

WANG Yang 1,2, WU Yu-jiang 2, FAN Xiao-yan 2
(1. College of Mathematics, Jilin Normal University, Siping 136100, Jilin Province, China; 2. School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, China)
(Communicated by BAI Zhong-zhi)

Abstract  By using the normal and skew-Hermitian splitting (NSS) iteration technique for large sparse non-Hermitian and positive definite linear systems, a two-parameter preconditioned NSS iteration is proposed, which is a genuine generalization of the preconditioned NSS method. Theoretical analysis shows that the new iterative method converges to the unique solution of the linear system. Moreover, the optimal choice of the two parameters involved in the new method and the corresponding minimum value of the upper bound on the iterative spectral radius are derived and computed. For the actual implementation of the method, the incomplete LU (ILU) decomposition and the incremental unknowns (IUs) are chosen as two types of preconditioners. Numerical results confirm the convergence analysis and the effectiveness of the proposed method.

Key words  normal and skew-Hermitian splitting; positive definite linear system; incremental unknown; incomplete LU (ILU) decomposition; preconditioner

2010 Mathematics Subject Classification  65F10; 65F08; 65N22
Chinese Library Classification  O241.6

Received 2012-04-26; Revised 2012-06-28
Project supported by the National Basic Research Program of China (973 Program, 2011CB706903) and the Natural Science Foundation of Jilin Province of China (201115)
Corresponding author WU Yu-jiang, research interests are mathematical theory and applications of scientific computing. E-mail: myjaw@lzu.edu.cn

0 Introduction

Consider the following large sparse non-Hermitian and positive definite system of linear equations:

$Ax = b$,  (1)

where $A \in \mathbb{C}^{n \times n}$, and $x, b \in \mathbb{C}^{n}$. Such systems come from the spatial discretization of a class of partial differential equations (see, e.g., [1-2] and the references therein).

Based on the Hermitian and skew-Hermitian splitting (HSS) $A = H + S$, where $H = \frac{1}{2}(A + A^{*})$ and $S = \frac{1}{2}(A - A^{*})$, Bai, et al. [3] established a class of HSS iteration methods for solving the non-Hermitian system of linear equations (1). They proved that the HSS iteration converges unconditionally to the exact solution of (1). In order to accelerate the HSS iteration method, Li, et al. [4] presented the asymmetric Hermitian and skew-Hermitian splitting (AHSS) iteration method, which introduces two parameters $\alpha$ and $\beta$ into the HSS method. The AHSS iteration method converges to the unique solution of (1) for any given nonnegative $\alpha$, provided that $\beta$ is restricted to an appropriate region. The generalized preconditioned HSS (GPHSS) method in [2] gave a more detailed convergence analysis of the AHSS method. Moreover, the authors of [2] used the AHSS iteration scheme to solve a preconditioned system and gave the exact computation of the parameters $\alpha$ and $\beta$.

Furthermore, to make the HSS method more attractive, based on the normal and skew-Hermitian splitting (NSS) of $A$, i.e.,

$A = N + S$,  (2)

where $N \in \mathbb{C}^{n \times n}$ is a normal matrix and $S \in \mathbb{C}^{n \times n}$ is a skew-Hermitian matrix, Bai, et al. [5] also proposed an iterative method, called the NSS method, based on this particular splitting:

$(\alpha I + N)x^{k+1/2} = (\alpha I - S)x^{k} + b$,
$(\alpha I + S)x^{k+1} = (\alpha I - N)x^{k+1/2} + b$,  (3)

where $\alpha$ is a given positive constant.
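For concreteness, both splittings are directly computable from $A$. The following minimal NumPy sketch (our illustration, with a toy matrix of our own choosing, not an example from the paper) forms the HSS parts and checks the positive definiteness of $A$ through its Hermitian part:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [-1.0, 3.0]])      # toy non-Hermitian matrix (our choice)

H = 0.5 * (A + A.conj().T)       # Hermitian part of A
S = 0.5 * (A - A.conj().T)       # skew-Hermitian part of A

assert np.allclose(A, H + S)
# A is positive definite exactly when its Hermitian part H is
assert np.all(np.linalg.eigvalsh(H) > 0)
```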

The NSS method can be viewed as a generalized version of the HSS iteration [3,6]. The theoretical analysis shows that the NSS iterative scheme converges unconditionally to the unique solution of the linear system (1). The numerical behavior of the NSS scheme is effective and robust when it is used to solve the large sparse positive definite system of linear equations (1).

However, since the matrices $N$ and $S$ have different properties, there is no reason to use the same parameter $\alpha$ in both half-steps of the iteration. In order to optimize the NSS method [2,4], we introduce two different parameters $\alpha$ and $\beta$ into the NSS method (3). This produces a new method, which we call the generalized NSS (GNSS) method.

GNSS method  Given an initial guess $x^{0}$, for $k = 0, 1, 2, \cdots$, until $\{x^{k}\}$ converges, compute

$(\alpha I + N)x^{k+1/2} = (\alpha I - S)x^{k} + b$,
$(\beta I + S)x^{k+1} = (\beta I - N)x^{k+1/2} + b$,  (4)

where $\alpha$ is a given nonnegative constant, and $\beta$ is a given positive constant. Obviously, the GNSS method is a generalized form of the NSS method, since GNSS reduces to NSS when $\alpha = \beta$.

In view of [2,7-8], the GNSS method can be considered as being designed for solving another preconditioned linear system $\hat{A}\hat{x} = \hat{b}$ with $\hat{A} = R^{-*}AR^{-1}$, $\hat{x} = Rx$, and $\hat{b} = R^{-*}b$, where $R \in \mathbb{C}^{n \times n}$ is a prescribed nonsingular matrix, and $R^{-*} = (R^{-1})^{*}$ is the conjugate transpose of $R^{-1}$. We usually take a Hermitian positive definite matrix $P = R^{*}R$ for preconditioning. Finally, a kind of generalized preconditioned NSS (GPNSS) method is defined.

GPNSS method  Given an initial guess $x^{0}$, for $k = 0, 1, 2, \cdots$, until $\{x^{k}\}$ converges, compute

$(\alpha P + N)x^{k+1/2} = (\alpha P - S)x^{k} + b$,
$(\beta P + S)x^{k+1} = (\beta P - N)x^{k+1/2} + b$,  (5)

where $\alpha$ is a given nonnegative constant, and $\beta$ is a given positive constant.

Since HSS is a special version of NSS, the new GPNSS method (5) is actually a generalization of the GPHSS method. It preserves all the properties of the HSS, AHSS, NSS, and GPHSS iteration methods. One can also refer to [2-5] for details. The GPNSS method contains the preconditioned NSS (PNSS) method as its special case when $\alpha = \beta$. Moreover, the NSS method is evidently the trivial case with $\alpha = \beta$ and without a preconditioner.

In the following, we will prove that the GPNSS method is convergent when the two parameters lie in certain domains. This essentially generalizes what exists for the NSS and PNSS methods. The convergence analysis for GPNSS is, in particular, consistent with that for GPHSS if one uses HSS. The optimal choice of the parameters $\alpha, \beta$ in GPNSS and the minimum upper bound for the spectral radius of the iteration matrix are derived and computed in two steps in accordance with two independent spectral conditions.
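The alternating scheme (5) is straightforward to prototype. The following dense-matrix sketch (our illustration under stated assumptions: $A = N + S$ with $N$ normal and $S$ skew-Hermitian, and $P$ Hermitian positive definite) performs the two half-steps with direct solves; $P = I$ recovers GNSS, and $\alpha = \beta$ recovers PNSS:

```python
import numpy as np

def gpnss(N, S, P, b, alpha, beta, x0, tol=1e-7, max_iter=1000):
    """Two-parameter GPNSS iteration (5); minimal dense sketch."""
    A = N + S
    x = x0.astype(complex)
    for k in range(max_iter):
        # first half-step: (alpha*P + N) x^{k+1/2} = (alpha*P - S) x^k + b
        x_half = np.linalg.solve(alpha * P + N, (alpha * P - S) @ x + b)
        # second half-step: (beta*P + S) x^{k+1} = (beta*P - N) x^{k+1/2} + b
        x = np.linalg.solve(beta * P + S, (beta * P - N) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1          # converged after k+1 sweeps
    return x, max_iter
```

In a practical implementation the two shifted systems would be solved by sparse factorizations computed once per parameter choice; plain direct solves simply keep the sketch close to (5).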

By using incremental unknowns (IUs) [9-12] and the ILU decomposition [13-15] as our preconditioners, respectively, we perform numerical tests with the NSS, GNSS, and GPNSS methods. The numerical results not only confirm the theoretical analysis of the new method, but also show the advantages of the IU preconditioner over the other methods.

The structure of the paper is as follows. In Section 1, a theorem for the convergence of the GPNSS method is established and proved. In Section 2, the optimal parameters and the minimum value of the upper bound on the iterative spectral radius are derived. Numerical experiments with the IU preconditioner and the ILU decomposition preconditioner are presented in Section 3, where the numerical results are compared with many other existing methods to show the efficiency of the new methods.

1 Convergence analysis

To simplify notations in the convergence proof, we rewrite the GPNSS method equivalently as a one-step iteration, i.e.,

$x^{k+1} = M(\alpha, \beta)x^{k} + N(\alpha, \beta)b$,

where

$M(\alpha, \beta) = (\beta P + S)^{-1}(\beta P - N)(\alpha P + N)^{-1}(\alpha P - S)$,
$N(\alpha, \beta) = (\alpha + \beta)(\beta P + S)^{-1}P(\alpha P + N)^{-1}$.

Here, $M(\alpha, \beta)$ stands for the iteration matrix of the GPNSS iteration. The standard theory of iterative methods indicates that, if the spectral radius of the iteration matrix $M(\alpha, \beta)$ is strictly less than 1, then the GPNSS method converges.

Since $S$ is skew-Hermitian, all eigenvalues of $P^{-1}S$ are purely imaginary. In particular, the $j$th eigenvalue of $P^{-1}S$ is of the form $\mathrm{i}e_{j}$ ($j = 1, 2, \cdots, n$) with $\mathrm{i} = \sqrt{-1}$. As for the eigenvalues of $P^{-1}N$, they are in general complex numbers. The following theorem demonstrates the convergence property of the GPNSS iteration.

Theorem 1  Let $A \in \mathbb{C}^{n \times n}$ be a positive definite matrix, $P \in \mathbb{C}^{n \times n}$ be a Hermitian positive definite matrix, and $\Theta_{1}$, $\Theta_{2}$ be the spectral sets of the matrices $P^{-1}N$ and $P^{-1}S$, respectively. For any eigenvalue $\lambda_{k} = \gamma_{k} + \mathrm{i}\eta_{k} \in \Theta_{1}$ with real part $\gamma_{k}$, imaginary part $\eta_{k}$, and $\mathrm{i} = \sqrt{-1}$, $k = 1, 2, \cdots, n$, we denote

$\gamma_{\max} = \max_{k=1,2,\cdots,n}\{\gamma_{k}\}, \quad \gamma_{\min} = \min_{k=1,2,\cdots,n}\{\gamma_{k}\}$,
$\eta_{\max} = \max_{k=1,2,\cdots,n}\{|\eta_{k}|\}, \quad \eta_{\min} = \min_{k=1,2,\cdots,n}\{|\eta_{k}|\}$.

For the eigenvalues in $\Theta_{2}$, we denote also

$e_{\max} = \max_{\mathrm{i}e_{j} \in \Theta_{2}}\{|e_{j}|\}, \quad e_{\min} = \min_{\mathrm{i}e_{j} \in \Theta_{2}}\{|e_{j}|\}$.

Then, the spectral radius $\rho(M(\alpha, \beta))$ of the GPNSS iteration is bounded by

$\sigma(\alpha, \beta) = \max_{\gamma_{k}+\mathrm{i}\eta_{k} \in \Theta_{1}} \sqrt{\dfrac{(\beta - \gamma_{k})^{2} + \eta_{k}^{2}}{(\alpha + \gamma_{k})^{2} + \eta_{k}^{2}}} \cdot \max_{\mathrm{i}e_{j} \in \Theta_{2}} \sqrt{\dfrac{\alpha^{2} + e_{j}^{2}}{\beta^{2} + e_{j}^{2}}}$.  (6)

Furthermore, the GPNSS method (5) converges to the unique solution $x_{*} \in \mathbb{C}^{n}$ of the linear system (1) if the parameters $\alpha$ and $\beta$ satisfy either of the following two conditions.

(i) If $\eta_{\max}^{2} < \gamma_{\min}\gamma_{\max}$, then $(\alpha, \beta) \in \bigcup_{i=1}^{4}\Omega_{i}$ with

$\Omega_{1} = \{(\alpha, \beta) \mid \alpha \leq \beta < \beta(\alpha)\}$,
$\Omega_{2} = \{(\alpha, \beta) \mid \beta < \min\{\alpha, \beta(\alpha)\},\ \Psi_{1}(\alpha, \beta) < 0\}$,
$\Omega_{3} = \{(\alpha, \beta) \mid \beta(\alpha) \leq \beta \leq \alpha,\ \Psi_{2}(\alpha, \beta) < 0\}$,
$\Omega_{4} = \{(\alpha, \beta) \mid \beta \geq \max\{\alpha, \beta(\alpha)\}\}$,

where the functions $\Psi_{1}(\alpha, \beta)$, $\Psi_{2}(\alpha, \beta)$, and $\beta(\alpha)$ are defined by

$\Psi_{1}(\alpha, \beta) = (\gamma_{\max}^{2} - e_{\min}^{2})(\alpha - \beta) - 2\alpha\beta\gamma_{\max} - 2e_{\min}^{2}\gamma_{\max} + (\alpha - \beta)\eta_{\max}^{2}$,
$\Psi_{2}(\alpha, \beta) = (\gamma_{\min}^{2} - e_{\min}^{2})(\alpha - \beta) - 2\alpha\beta\gamma_{\min} - 2e_{\min}^{2}\gamma_{\min} + (\alpha - \beta)\eta_{\max}^{2}$,  (7)
$\beta(\alpha) = \dfrac{2(\gamma_{\min}\gamma_{\max} - \eta_{\max}^{2}) + \alpha(\gamma_{\min} + \gamma_{\max})}{2\alpha + \gamma_{\min} + \gamma_{\max}}$.

(ii) If $\eta_{\max}^{2} \geq \gamma_{\min}\gamma_{\max}$, then $(\alpha, \beta) \in \bigcup_{i=1}^{3}\Pi_{i}$ with

$\Pi_{1} = \{(\alpha, \beta) \mid \alpha \leq \beta\}$,
$\Pi_{2} = \{(\alpha, \beta) \mid \alpha > \beta \geq \beta(\alpha),\ \Psi_{2}(\alpha, \beta) < 0\}$,
$\Pi_{3} = \{(\alpha, \beta) \mid \beta < \min\{\alpha, \beta(\alpha)\},\ \Psi_{1}(\alpha, \beta) < 0\}$,

where $\beta(\alpha)$, $\Psi_{1}(\alpha, \beta)$, and $\Psi_{2}(\alpha, \beta)$ are the same as in (7).

Proof  The spectral radius $\rho(M(\alpha, \beta))$ of the iteration matrix $M(\alpha, \beta)$ is evidently bounded by a multiplicative product of two quantities, i.e.,

$\rho(M(\alpha, \beta)) = \rho((\beta P + S)M(\alpha, \beta)(\beta P + S)^{-1})$
$= \rho((\beta P - N)(\alpha P + N)^{-1}(\alpha P - S)(\beta P + S)^{-1})$
$= \rho((\beta I - P^{-1}N)(\alpha I + P^{-1}N)^{-1}(\alpha I - P^{-1}S)(\beta I + P^{-1}S)^{-1})$
$\leq \|(\beta I - P^{-1}N)(\alpha I + P^{-1}N)^{-1}\|_{2}\,\|(\alpha I - P^{-1}S)(\beta I + P^{-1}S)^{-1}\|_{2}$
$= \max_{\lambda_{k} \in \Theta_{1}} \dfrac{|\beta - \lambda_{k}|}{|\alpha + \lambda_{k}|} \cdot \max_{\mathrm{i}e_{j} \in \Theta_{2}} \dfrac{|\alpha - \mathrm{i}e_{j}|}{|\beta + \mathrm{i}e_{j}|}$
$= \max_{\gamma_{k}+\mathrm{i}\eta_{k} \in \Theta_{1}} \sqrt{\dfrac{(\beta - \gamma_{k})^{2} + \eta_{k}^{2}}{(\alpha + \gamma_{k})^{2} + \eta_{k}^{2}}} \cdot \max_{\mathrm{i}e_{j} \in \Theta_{2}} \sqrt{\dfrac{\alpha^{2} + e_{j}^{2}}{\beta^{2} + e_{j}^{2}}}$.  (8)

Since $\alpha - \beta + 2\gamma_{k} > 0$, it is easy to verify that $((\beta - \gamma_{k})^{2} + \eta^{2})/((\alpha + \gamma_{k})^{2} + \eta^{2})$ is an increasing function with respect to the variable $\eta^{2}$. This leads to

$\max_{\gamma_{k}+\mathrm{i}\eta_{k} \in \Theta_{1}} \dfrac{(\beta - \gamma_{k})^{2} + \eta_{k}^{2}}{(\alpha + \gamma_{k})^{2} + \eta_{k}^{2}} \leq \max_{\gamma_{\min} \leq \gamma_{k} \leq \gamma_{\max}} \dfrac{(\beta - \gamma_{k})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{k})^{2} + \eta_{\max}^{2}} = \max\Big\{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}},\ \dfrac{(\beta - \gamma_{\max})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\max})^{2} + \eta_{\max}^{2}}\Big\}$.  (9)

Similarly, we can obtain

$\max_{\mathrm{i}e_{j} \in \Theta_{2}} \dfrac{\alpha^{2} + e_{j}^{2}}{\beta^{2} + e_{j}^{2}} = \begin{cases} \dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}, & \alpha \leq \beta, \\[2mm] \dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}, & \alpha > \beta. \end{cases}$  (10)

In the following process of the proof, we consider two cases.

Case 1  $\eta_{\max}^{2} < \gamma_{\min}\gamma_{\max}$. Due to the definition of $\beta(\alpha)$, it follows that the right-hand side of (9) is equal to

$\begin{cases} \dfrac{(\beta - \gamma_{\max})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\max})^{2} + \eta_{\max}^{2}}, & \beta < \beta(\alpha), \\[2mm] \dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}, & \beta \geq \beta(\alpha). \end{cases}$  (11)

Now, let us divide the region $D = \{(\alpha, \beta) \mid \alpha > 0, \beta > 0\}$ into four subregions, i.e., $D = \bigcup_{i=1}^{4}D_{i}$ (see Fig. 1 with $\alpha_{0} = \sqrt{\gamma_{\min}\gamma_{\max} - \eta_{\max}^{2}}$), where

$D_{1} = \{(\alpha, \beta) \mid \alpha \leq \beta < \beta(\alpha)\}$,
$D_{2} = \{(\alpha, \beta) \mid \beta < \min\{\beta(\alpha), \alpha\}\}$,
$D_{3} = \{(\alpha, \beta) \mid \beta(\alpha) \leq \beta < \alpha\}$,
$D_{4} = \{(\alpha, \beta) \mid \beta \geq \max\{\beta(\alpha), \alpha\}\}$.  (12)

Fig. 1  Four subregions of D

Our observation is that there are four different bounds for the spectral radius $\rho(M(\alpha, \beta))$ when $(\alpha, \beta)$ lies in the four different subregions $D_{1}$, $D_{2}$, $D_{3}$, and $D_{4}$. Actually, the following facts are deduced.

(a) For $(\alpha, \beta) \in D_{1}$, by (8), (10), and (11), we have directly the following inequality:

$\rho(M(\alpha, \beta)) < \sqrt{\dfrac{(\beta - \gamma_{\max})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\max})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}} < 1$.

(b) For $(\alpha, \beta) \in D_{2}$, the spectral radius $\rho(M(\alpha, \beta))$ satisfies

$\rho(M(\alpha, \beta)) < \sqrt{\dfrac{(\beta - \gamma_{\max})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\max})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}}$.

The right-hand side is strictly less than 1 if and only if

$\Psi_{1}(\alpha, \beta) = (\gamma_{\max}^{2} - e_{\min}^{2})(\alpha - \beta) - 2\alpha\beta\gamma_{\max} - 2e_{\min}^{2}\gamma_{\max} + (\alpha - \beta)\eta_{\max}^{2} < 0$.

(c) For $(\alpha, \beta) \in D_{3}$, it follows that

$\rho(M(\alpha, \beta)) < \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}} < 1$

if and only if

$\Psi_{2}(\alpha, \beta) = (\gamma_{\min}^{2} - e_{\min}^{2})(\alpha - \beta) - 2\alpha\beta\gamma_{\min} - 2e_{\min}^{2}\gamma_{\min} + (\alpha - \beta)\eta_{\max}^{2} < 0$.

(d) For $(\alpha, \beta) \in D_{4}$, we have the following inequality:

$\rho(M(\alpha, \beta)) < \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}} < 1$.

Therefore, we obtain in Case 1

$\rho(M(\alpha, \beta)) < 1, \quad \forall (\alpha, \beta) \in \bigcup_{i=1}^{4}\Omega_{i}$.
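The spectral quantities of Theorem 1 and the upper bound (6) are straightforward to evaluate numerically for small test matrices. The following sketch (our verification aid, assuming dense eigensolvers are affordable) computes $\gamma_{\min}$, $\gamma_{\max}$, $\eta_{\max}$, $e_{\min}$, $e_{\max}$, and $\sigma(\alpha, \beta)$:

```python
import numpy as np

def spectral_data(N, S, P):
    """gamma_min, gamma_max, eta_max, e_min, e_max of Theorem 1 (dense sketch)."""
    lam = np.linalg.eigvals(np.linalg.solve(P, N))      # Theta_1
    e = np.linalg.eigvals(np.linalg.solve(P, S)).imag   # Theta_2 = {i*e_j}
    return (lam.real.min(), lam.real.max(),
            np.abs(lam.imag).max(), np.abs(e).min(), np.abs(e).max())

def sigma(N, S, P, alpha, beta):
    """Upper bound sigma(alpha, beta) of (6)."""
    lam = np.linalg.eigvals(np.linalg.solve(P, N))
    e = np.linalg.eigvals(np.linalg.solve(P, S)).imag
    t1 = np.sqrt(((beta - lam.real) ** 2 + lam.imag ** 2)
                 / ((alpha + lam.real) ** 2 + lam.imag ** 2)).max()
    t2 = np.sqrt((alpha ** 2 + e ** 2) / (beta ** 2 + e ** 2)).max()
    return t1 * t2
```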

Case 2  $\eta_{\max}^{2} \geq \gamma_{\min}\gamma_{\max}$. Making use of the definition of $\beta(\alpha)$ and (10), we find that the right-hand side of (8) is equal to

$\begin{cases} \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}}, & \alpha \leq \beta, \\[2mm] \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}}, & \alpha > \beta \geq \beta(\alpha), \\[2mm] \sqrt{\dfrac{(\beta - \gamma_{\max})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\max})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}}, & \beta < \min\{\alpha, \beta(\alpha)\}. \end{cases}$  (13)

It is natural to divide the region $D = \{(\alpha, \beta) \mid \alpha > 0, \beta > 0\}$ into three other subregions $D = \bigcup_{i=1}^{3}\widetilde{D}_{i}$ (see Fig. 2), where

$\widetilde{D}_{1} = \{(\alpha, \beta) \mid \alpha \leq \beta\}$,
$\widetilde{D}_{2} = \{(\alpha, \beta) \mid \alpha > \beta \geq \beta(\alpha)\}$,  (14)
$\widetilde{D}_{3} = \{(\alpha, \beta) \mid \beta < \min\{\alpha, \beta(\alpha)\}\}$.

Fig. 2  Three subregions of D

(a) For $(\alpha, \beta) \in \widetilde{D}_{1}$, the spectral radius satisfies

$\rho(M(\alpha, \beta)) < \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}} < 1$.

(b) For $(\alpha, \beta) \in \widetilde{D}_{2}$, the spectral radius satisfies

$\rho(M(\alpha, \beta)) < \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}} < 1$

if and only if $\Psi_{2}(\alpha, \beta) < 0$ holds true.

(c) For $(\alpha, \beta) \in \widetilde{D}_{3}$, the spectral radius satisfies

$\rho(M(\alpha, \beta)) < \sqrt{\dfrac{(\beta - \gamma_{\max})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\max})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}} < 1$

if and only if $\Psi_{1}(\alpha, \beta) < 0$ holds true. Therefore, we obtain in Case 2

$\rho(M(\alpha, \beta)) < 1, \quad \forall (\alpha, \beta) \in \bigcup_{i=1}^{3}\Pi_{i}$.

Remark 1  When $P = I$, the identity matrix, the GPNSS iteration reduces to the GNSS iteration. The convergence property of the GPNSS method also reduces to that of the GNSS method.

Remark 2  When $\alpha = \beta$, the GPNSS iteration reduces to the PNSS iteration, which is unconditionally convergent.

Remark 3  The GPHSS method is one of the special forms of GPNSS with $\eta_{\max} = 0$. We can check that the convergence conclusion here is consistent with that for GPHSS in [2].

2 Optimality

Let us denote

$(\alpha_{*}, \beta_{*}) = \arg\min_{\alpha,\beta}\{\sigma(\alpha, \beta)\}$.

The optimal parameters $(\alpha_{*}, \beta_{*})$ of the GPNSS iteration and the minimum upper bound of $\rho(M(\alpha, \beta))$ are described in the following two theorems.

Theorem 2  In the case $\eta_{\max}^{2} < \gamma_{\min}\gamma_{\max}$, let $\Psi_{3}(\alpha, \beta) > 0$ and $\Psi_{4}(\alpha, \beta) > 0$. Then, the optimal parameters are

$(\alpha_{*}, \beta_{*}) = \begin{cases} (\alpha_{1}, \beta(\alpha_{1})), & \gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} \leq e_{\min}^{2}, \\ (\alpha_{0}, \beta(\alpha_{0})), & e_{\min}^{2} < \gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} < e_{\max}^{2}, \\ (\alpha_{2}, \beta(\alpha_{2})), & \gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} \geq e_{\max}^{2}, \end{cases}$  (15)

where

$\Psi_{3}(\alpha, \beta) = \gamma_{\min}\beta^{2} + (e_{\min}^{2} - \gamma_{\min}^{2} - \eta_{\max}^{2})\beta - \gamma_{\min}e_{\min}^{2}$,
$\Psi_{4}(\alpha, \beta) = \gamma_{\min}\beta^{2} + (e_{\max}^{2} - \gamma_{\min}^{2} - \eta_{\max}^{2})\beta - \gamma_{\min}e_{\max}^{2}$,
$\alpha_{1} = \dfrac{\eta_{\max}^{2} + e_{\min}^{2} - \gamma_{\min}\gamma_{\max} + \sqrt{\Delta_{1}}}{\gamma_{\min} + \gamma_{\max}}$,
$\alpha_{2} = \dfrac{\eta_{\max}^{2} + e_{\max}^{2} - \gamma_{\min}\gamma_{\max} + \sqrt{\Delta_{2}}}{\gamma_{\min} + \gamma_{\max}}$,  (16)
$\Delta_{1} = (e_{\min}^{2} + \gamma_{\min}^{2})(e_{\min}^{2} + \gamma_{\max}^{2}) + \eta_{\max}^{2}(\eta_{\max}^{2} + 2e_{\min}^{2} - 2\gamma_{\min}\gamma_{\max})$,
$\Delta_{2} = (e_{\max}^{2} + \gamma_{\min}^{2})(e_{\max}^{2} + \gamma_{\max}^{2}) + \eta_{\max}^{2}(\eta_{\max}^{2} + 2e_{\max}^{2} - 2\gamma_{\min}\gamma_{\max})$.

Furthermore, the minimum value of $\sigma(\alpha, \beta)$ is given by

$\sigma(\alpha_{*}, \beta_{*}) = \begin{cases} \sigma(\alpha_{1}), & \gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} \leq e_{\min}^{2}, \\ \sigma(\alpha_{0}), & e_{\min}^{2} < \gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} < e_{\max}^{2}, \\ \sigma(\alpha_{2}), & \gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} \geq e_{\max}^{2}, \end{cases}$  (17)

where the function $\sigma(\alpha)$ will be specified in the proof (see (19)).

Proof  From (11) and (10), we obtain

$\sigma(\alpha, \beta) = \begin{cases} \sqrt{\dfrac{(\beta - \gamma_{\max})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\max})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}}, & (\alpha, \beta) \in D_{1}, \\[2mm] \sqrt{\dfrac{(\beta - \gamma_{\max})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\max})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}}, & (\alpha, \beta) \in D_{2}, \\[2mm] \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}}, & (\alpha, \beta) \in D_{3}, \\[2mm] \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}}, & (\alpha, \beta) \in D_{4}. \end{cases}$  (18)

By straightforward computation, we find that the partial derivative of the function $\sigma(\alpha, \beta)$ with respect to $\beta$ satisfies $\sigma_{\beta}(\alpha, \beta) < 0$ for $(\alpha, \beta) \in D_{1} \cup D_{2}$. By using $\Psi_{3}(\alpha, \beta) > 0$ and $\Psi_{4}(\alpha, \beta) > 0$, we find that $\sigma_{\beta}(\alpha, \beta) > 0$ for $(\alpha, \beta) \in D_{3} \cup D_{4}$. Thus, the minimum values of the function $\sigma(\alpha, \beta)$ are attained exactly on the curve $\beta = \beta(\alpha)$. Now, we turn to look for the minimum points of $\sigma(\alpha)$ instead of the minimum points of $\sigma(\alpha, \beta)$, where

$\sigma(\alpha) := \sigma(\alpha, \beta(\alpha)) = \begin{cases} \sqrt{\dfrac{(\beta(\alpha) - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta(\alpha)^{2} + e_{\min}^{2}}}, & \alpha > \alpha_{0}, \\[2mm] \sqrt{\dfrac{(\beta(\alpha) - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta(\alpha)^{2} + e_{\max}^{2}}}, & \alpha \leq \alpha_{0}. \end{cases}$  (19)

Evidently, the derivative of $\sigma(\alpha)$ has the forms

$\sigma'(\alpha) = \begin{cases} c_{1}(\alpha)\eta_{1}(\alpha), & \alpha > \alpha_{0}, \\ c_{2}(\alpha)\eta_{2}(\alpha), & \alpha \leq \alpha_{0}, \end{cases}$  (20)

where $c_{1}(\alpha)$ and $c_{2}(\alpha)$ are two positive functions, and

$\eta_{1}(\alpha) = (\gamma_{\min} + \gamma_{\max})\alpha^{2} + 2(\gamma_{\min}\gamma_{\max} - e_{\min}^{2} - \eta_{\max}^{2})\alpha - e_{\min}^{2}(\gamma_{\min} + \gamma_{\max})$,
$\eta_{2}(\alpha) = (\gamma_{\min} + \gamma_{\max})\alpha^{2} + 2(\gamma_{\min}\gamma_{\max} - e_{\max}^{2} - \eta_{\max}^{2})\alpha - e_{\max}^{2}(\gamma_{\min} + \gamma_{\max})$.  (21)

Note that $\eta_{1}(\alpha)$ has both a negative root and a positive root, and so has $\eta_{2}(\alpha)$. Let us denote the two positive roots of $\eta_{1}(\alpha)$ and $\eta_{2}(\alpha)$ by $\alpha_{1}$ and $\alpha_{2}$, respectively.

Evaluating the two functions at $\alpha_{0}$, we have

$\eta_{1}(\alpha_{0}) = (\gamma_{\min} + \gamma_{\max} + 2\sqrt{\gamma_{\min}\gamma_{\max} - \eta_{\max}^{2}})(\gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} - e_{\min}^{2})$,
$\eta_{2}(\alpha_{0}) = (\gamma_{\min} + \gamma_{\max} + 2\sqrt{\gamma_{\min}\gamma_{\max} - \eta_{\max}^{2}})(\gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} - e_{\max}^{2})$,  (22)

and we observe the following facts.

(a) If $\gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} \leq e_{\min}^{2}$, then the unique minimum of $\sigma(\alpha, \beta)$ is $\sigma(\alpha_{1})$, which is $\sigma(\alpha_{1}, \beta(\alpha_{1}))$ with $\alpha_{1} > \alpha_{0}$.

(b) If $e_{\max}^{2} > \gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} > e_{\min}^{2}$, then the unique minimum of $\sigma(\alpha, \beta)$ is $\sigma(\alpha_{0})$, which is $\sigma(\alpha_{0}, \beta(\alpha_{0}))$.

(c) If $\gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} \geq e_{\max}^{2}$, then the unique minimum of $\sigma(\alpha, \beta)$ is $\sigma(\alpha_{2})$, which is $\sigma(\alpha_{2}, \beta(\alpha_{2}))$ with $\alpha_{2} < \alpha_{0}$.

Remark 4  One can check that the evaluations of the parameters here are exactly the same as in the GPHSS method when GPNSS reduces to GPHSS.

Theorem 3  In the case $\eta_{\max}^{2} \geq \gamma_{\min}\gamma_{\max}$, the optimal parameters are

$(\alpha_{*}, \beta_{*}) = \begin{cases} (\alpha_{2}, \beta_{2}), & \gamma_{\min}^{2} + \eta_{\max}^{2} \leq e_{\min}^{2}, \\ (\mu_{0}, \mu_{0}), & e_{\min}^{2} < \gamma_{\min}^{2} + \eta_{\max}^{2} < e_{\max}^{2}, \\ (\alpha_{1}, \beta_{1}), & \gamma_{\min}^{2} + \eta_{\max}^{2} \geq e_{\max}^{2}, \end{cases}$  (23)

where

$\alpha_{1} = \dfrac{e_{\max}^{2} - (\gamma_{\min}^{2} + \eta_{\max}^{2}) + \sqrt{((\eta_{\max} + e_{\max})^{2} + \gamma_{\min}^{2})((\eta_{\max} - e_{\max})^{2} + \gamma_{\min}^{2})}}{2\gamma_{\min}}$,
$\alpha_{2} = \dfrac{e_{\min}^{2} - (\gamma_{\min}^{2} + \eta_{\max}^{2}) + \sqrt{((\eta_{\max} + e_{\min})^{2} + \gamma_{\min}^{2})((\eta_{\max} - e_{\min})^{2} + \gamma_{\min}^{2})}}{2\gamma_{\min}}$,
$\beta_{1} = \dfrac{(\gamma_{\min}^{2} + \eta_{\max}^{2}) - e_{\max}^{2} + \sqrt{((\eta_{\max} + e_{\max})^{2} + \gamma_{\min}^{2})((\eta_{\max} - e_{\max})^{2} + \gamma_{\min}^{2})}}{2\gamma_{\min}}$,
$\beta_{2} = \dfrac{(\gamma_{\min}^{2} + \eta_{\max}^{2}) - e_{\min}^{2} + \sqrt{((\eta_{\max} + e_{\min})^{2} + \gamma_{\min}^{2})((\eta_{\max} - e_{\min})^{2} + \gamma_{\min}^{2})}}{2\gamma_{\min}}$.  (24)

Furthermore, the minimum value of $\sigma(\alpha, \beta)$ is given by

$\sigma(\alpha_{*}, \beta_{*}) = \begin{cases} \sigma(\alpha_{2}, \beta_{2}), & \gamma_{\min}^{2} + \eta_{\max}^{2} \leq e_{\min}^{2}, \\ \sigma(\mu_{0}, \mu_{0}), & e_{\min}^{2} < \gamma_{\min}^{2} + \eta_{\max}^{2} < e_{\max}^{2}, \\ \sigma(\alpha_{1}, \beta_{1}), & \gamma_{\min}^{2} + \eta_{\max}^{2} \geq e_{\max}^{2}, \end{cases}$  (25)

where $\mu_{0} = \sqrt{\gamma_{\min}^{2} + \eta_{\max}^{2}}$.
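Because Theorems 2 and 3 involve several case distinctions, it is useful to cross-check the predicted optimum numerically. A brute-force grid search over $(\alpha, \beta)$ that minimizes the true spectral radius $\rho(M(\alpha, \beta))$ (our verification sketch, not an algorithm from the paper) serves this purpose for small problems:

```python
import numpy as np

def rho(N, S, P, alpha, beta):
    """Spectral radius of the GPNSS iteration matrix M(alpha, beta)."""
    M = np.linalg.solve(beta * P + S, beta * P - N) \
        @ np.linalg.solve(alpha * P + N, alpha * P - S)
    return np.abs(np.linalg.eigvals(M)).max()

def best_parameters(N, S, P, grid):
    """Minimize rho over a coarse grid of (alpha, beta) pairs."""
    best = (np.inf, None, None)
    for a in grid:
        for b in grid:
            r = rho(N, S, P, a, b)
            if r < best[0]:
                best = (r, a, b)
    return best    # (rho_min, alpha_*, beta_*)
```

Comparing the grid minimizer with the parameter pair predicted by (15) or (23) gives a quick consistency check.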

Proof  From (13), we have

$\sigma(\alpha, \beta) = \begin{cases} \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}}, & (\alpha, \beta) \in \widetilde{D}_{1}, \\[2mm] \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}}, & (\alpha, \beta) \in \widetilde{D}_{2}, \\[2mm] \sqrt{\dfrac{(\beta - \gamma_{\max})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\max})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}}, & (\alpha, \beta) \in \widetilde{D}_{3}. \end{cases}$  (26)

It is evident that $\sigma_{\beta}(\alpha, \beta) < 0$ for $(\alpha, \beta) \in \widetilde{D}_{3}$. In order to find the minimum value of $\sigma(\alpha, \beta)$, we only need to find the minimum values of the following two expressions:

$\sigma(\alpha, \beta) = \begin{cases} \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}}, & (\alpha, \beta) \in \widetilde{D}_{1}, \\[2mm] \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}}, & (\alpha, \beta) \in \widetilde{D}_{2}. \end{cases}$  (27)

The two partial derivatives $\sigma_{\alpha}(\alpha, \beta)$ and $\sigma_{\beta}(\alpha, \beta)$ of the function $\sigma(\alpha, \beta)$ are needed in the following. In fact, we have

$\sigma_{\alpha}(\alpha, \beta) = \begin{cases} c_{1}(\alpha, \beta)\eta_{1}(\alpha), & (\alpha, \beta) \in \widetilde{D}_{1}, \\ c_{2}(\alpha, \beta)\eta_{2}(\alpha), & (\alpha, \beta) \in \widetilde{D}_{2}, \end{cases}$  (28)

where $c_{1}(\alpha, \beta)$ and $c_{2}(\alpha, \beta)$ are two positive functions. Moreover, both $\eta_{1}$ and $\eta_{2}$ are functions of $\alpha$:

$\eta_{1}(\alpha) = \gamma_{\min}\alpha^{2} + (\eta_{\max}^{2} + \gamma_{\min}^{2} - e_{\max}^{2})\alpha - \gamma_{\min}e_{\max}^{2}$,
$\eta_{2}(\alpha) = \gamma_{\min}\alpha^{2} + (\eta_{\max}^{2} + \gamma_{\min}^{2} - e_{\min}^{2})\alpha - \gamma_{\min}e_{\min}^{2}$.  (29)

Similarly, we also have

$\sigma_{\beta}(\alpha, \beta) = \begin{cases} c_{3}(\alpha, \beta)\eta_{3}(\beta), & (\alpha, \beta) \in \widetilde{D}_{1}, \\ c_{4}(\alpha, \beta)\eta_{4}(\beta), & (\alpha, \beta) \in \widetilde{D}_{2}, \end{cases}$  (30)

where $c_{3}(\alpha, \beta)$ and $c_{4}(\alpha, \beta)$ are also two positive functions. Moreover, both $\eta_{3}$ and $\eta_{4}$ are functions of $\beta$:

$\eta_{3}(\beta) = \gamma_{\min}\beta^{2} + (e_{\max}^{2} - (\eta_{\max}^{2} + \gamma_{\min}^{2}))\beta - \gamma_{\min}e_{\max}^{2}$,  (31)
$\eta_{4}(\beta) = \gamma_{\min}\beta^{2} + (e_{\min}^{2} - (\eta_{\max}^{2} + \gamma_{\min}^{2}))\beta - \gamma_{\min}e_{\min}^{2}$.

Note that the four functions $\eta_{1}(\alpha)$, $\eta_{2}(\alpha)$, $\eta_{3}(\beta)$, and $\eta_{4}(\beta)$ share the same property: each of them has both a negative root and a positive root. We denote the four positive roots by $\alpha_{1}$, $\alpha_{2}$, $\beta_{1}$, and $\beta_{2}$, respectively.

Moreover, we observe the following facts.

(a) If $\gamma_{\min}^{2} + \eta_{\max}^{2} \leq e_{\min}^{2}$, then $\alpha_{1} > \beta_{1}$ and $\alpha_{2} > \beta_{2}$. Therefore, the function $\sigma(\alpha, \beta)$ does not attain its minimum in $\widetilde{D}_{1}$, while in the region $\{(\alpha, \beta) \mid \alpha > \beta\}$ it attains its minimum at the point $(\alpha_{2}, \beta_{2})$. As a matter of fact, $\sigma(\alpha_{2}, \beta_{2})$ is the minimum value of the function

$\sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}}$

in the region $\{(\alpha, \beta) \mid \alpha > 0, \beta > 0\}$. It follows, then, that

$\sigma(\alpha_{2}, \beta_{2}) \leq \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}} \leq \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}}, \quad \alpha \leq \beta$.

Finally, note that the point $(\alpha_{2}, \beta_{2})$ lies in $\widetilde{D}_{2}$ because $\beta_{2} > \beta(\alpha_{2})$. Therefore, under the condition $\gamma_{\min}^{2} + \eta_{\max}^{2} \leq e_{\min}^{2}$, the minimum value of $\sigma(\alpha, \beta)$ in the region $\{(\alpha, \beta) \mid \alpha > 0, \beta > 0\}$ is exactly $\sigma(\alpha_{2}, \beta_{2})$.

(b) If $e_{\min}^{2} < \gamma_{\min}^{2} + \eta_{\max}^{2} < e_{\max}^{2}$, then $\alpha_{1} > \beta_{1}$ and $\alpha_{2} < \beta_{2}$. It is easy to see that $(\alpha_{i}, \beta_{i})$ is not in $\widetilde{D}_{i}$ for $i = 1, 2$. Hence, the function $\sigma(\alpha, \beta)$ attains its minimum on the line $\alpha = \beta$. The minimum point, which we denote by $\mu_{0}$, is just the zero of the derivative of $\sigma(\alpha, \alpha)$. We find $\mu_{0} = \sqrt{\gamma_{\min}^{2} + \eta_{\max}^{2}}$. Therefore, the minimum value of $\sigma(\alpha, \beta)$ is $\sigma(\mu_{0}, \mu_{0})$.

(c) If $\gamma_{\min}^{2} + \eta_{\max}^{2} \geq e_{\max}^{2}$, then $\alpha_{1} < \beta_{1}$ and $\alpha_{2} < \beta_{2}$. Therefore, the function $\sigma(\alpha, \beta)$ does not attain its minimum in $\widetilde{D}_{2}$, while in $\widetilde{D}_{1}$ it attains its minimum at the point $(\alpha_{1}, \beta_{1})$. As a matter of fact, $\sigma(\alpha_{1}, \beta_{1})$ is the minimum value of the function

$\sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}}$

in the region $\{(\alpha, \beta) \mid \alpha > 0, \beta > 0\}$. It follows, then, that

$\sigma(\alpha_{1}, \beta_{1}) \leq \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\max}^{2}}{\beta^{2} + e_{\max}^{2}}} \leq \sqrt{\dfrac{(\beta - \gamma_{\min})^{2} + \eta_{\max}^{2}}{(\alpha + \gamma_{\min})^{2} + \eta_{\max}^{2}}} \cdot \sqrt{\dfrac{\alpha^{2} + e_{\min}^{2}}{\beta^{2} + e_{\min}^{2}}}, \quad \alpha > \beta$.

Therefore, under the condition $\gamma_{\min}^{2} + \eta_{\max}^{2} \geq e_{\max}^{2}$, the minimum value of $\sigma(\alpha, \beta)$ in the region $\{(\alpha, \beta) \mid \alpha > 0, \beta > 0\}$ is $\sigma(\alpha_{1}, \beta_{1})$.

3 Numerical results

Example 1  Consider the following convection-diffusion equation:

$-(u_{xx} + u_{yy}) + bu_{x} = f$ in $\Omega$, $\quad u = 0$ on $\partial\Omega$,

where $b > 0$ is a constant, and $\Omega = [0, 1] \times [0, 1]$.

After using the five-point discretization with the mesh size $h$ [16],

$\dfrac{1}{h^{2}}(4u_{ij} - u_{i-1,j} - u_{i+1,j} - u_{i,j-1} - u_{i,j+1}) + \dfrac{b}{2h}(u_{i+1,j} - u_{i-1,j}) = f_{ij}$,

where $u_{ij}$ and $f_{ij}$ are the approximate values of $u$ and $f$, we get a linear system

$AU = b$.  (32)

Unlike the splitting HSS, the splitting NSS is not unique for a given matrix $A$. We use the NSS (see, e.g., [5])

$N = \frac{1}{2}(A + A^{*}) + S_{r}, \quad S = \frac{1}{2}(A - A^{*}) - S_{r}, \quad S_{r} = \mathrm{i}cI$

with $\mathrm{i} = \sqrt{-1}$ and $c$ a real number. Let $H = \frac{1}{2}(A + A^{*})$. Then, $HS_{r} = S_{r}H$. One can check that $H + S_{r}$ is a normal matrix.

3.1 Incremental unknowns preconditioner

It is well known that the IU method can effectively reduce the condition number of the coefficient matrix (see [2,9-12]). We first choose the IUs as our preconditioner. Let $\widetilde{U}$ be the vector corresponding to the IUs and $R$ be the transfer matrix, i.e., $U = R\widetilde{U}$. Then, the system (32) becomes $\widetilde{A}\widetilde{U} = \widetilde{b}$, where $\widetilde{A} = R^{\mathrm{T}}AR$ and $\widetilde{b} = R^{\mathrm{T}}b$. The preconditioner matrix $P$ is chosen to be $P = (RR^{\mathrm{T}})^{-1}$.

3.2 ILU factorization preconditioner

For a non-symmetric matrix $A$, especially when $A$ is banded, another preconditioning strategy is based on the ILU decomposition (see [13-15]). We compute the LU decomposition of $A$ but drop any fill-in in $L$ or $U$ outside of the original sparsity structure of $A$. Higher accuracy incomplete decompositions are also used: ILU($p$) allows for $p$ additional diagonals in $L$ and $U$. One can refer to [17] for more details. In our numerical experiments, we use only $p = 0$.

Remark 5  Here, for the ILU preconditioner, the matrix $P$ is only positive but not symmetric. An efficient theoretical analysis for this non-symmetric positive preconditioner is not yet available; it will be the subject of our future work. However, the algorithm and the numerical computations remain applicable to our problem.

3.3 Spectral radius and convergence

We will examine the spectral radius and the convergence region of the GPNSS method in this part. In order to be clear, we perform numerical tests in two cases.
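For Example 1, the discrete operator, the shifted NSS splitting with $S_{r} = \mathrm{i}cI$, and an ILU preconditioner can be assembled along the following lines (a SciPy sketch under our own conventions: lexicographic grid ordering and the matrix scaled by $h^{2}$; SciPy's `spilu` is an ILUT-type factorization that we use here merely as a stand-in for ILU(0)):

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spilu

def example1_matrix(m, b):
    """Five-point discretization of -(u_xx + u_yy) + b*u_x on the unit
    square with Dirichlet BCs, scaled by h^2; unknowns ordered row by row."""
    h = 1.0 / (m + 1)
    I = sp.identity(m, format="csr")
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m), format="csr")
    # centered convection term: (b*h/2)*(u_{i+1,j} - u_{i-1,j})
    C = (b * h / 2.0) * sp.diags([-1.0, 1.0], [-1, 1], shape=(m, m), format="csr")
    return (sp.kron(I, T + C) + sp.kron(T, I)).tocsc()

A = example1_matrix(32, b=1.0)
c = 1.0
Sr = 1j * c * sp.identity(A.shape[0], format="csc")
N = 0.5 * (A + A.conj().T) + Sr      # normal part H + S_r
S = 0.5 * (A - A.conj().T) - Sr      # skew-Hermitian part minus the shift
ilu = spilu(A, drop_tol=0.0, fill_factor=1.0)   # ILUT stand-in for ILU(0)
```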

Case 1  We take $b = 1$ in the model equation and $c = 1$ in the NSS splitting. One can check that, for the NSS, GNSS, and GPNSS (IU) methods, the spectral conditions are $\eta_{\max}^{2} < \gamma_{\min}\gamma_{\max}$ and $\gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} \geq e_{\max}^{2}$, while for the GPNSS (ILU) method, the conditions are $\eta_{\max}^{2} \geq \gamma_{\min}\gamma_{\max}$ and $\eta_{\max}^{2} + \gamma_{\min}^{2} \geq e_{\max}^{2}$. By Theorem 1, we obtain the convergence region $C$ of GPNSS (IU),

$C = \{(\alpha, \beta) \mid \alpha > 0, \beta > \beta_{1}(\alpha)\} = \bigcup_{i=1}^{4}\Omega_{i}$, where $\beta_{1}(\alpha) = \dfrac{45.71\alpha - 1.55}{13.47\alpha + 45.71}$,

and the optimal parameter pair is $(\alpha_{2}, \beta(\alpha_{2}))$. See the convergence region in Fig. 3. The value of $(\alpha_{*}, \beta_{*})$ can be determined according to the conclusions of Theorems 2 and 3. The comparison of the spectral radii is given in Table 1, where $n_{0}$ is the number of the coarsest grid in the discretization of the IUs. It shows that the spectral radii of the GPNSS (IU) method are much smaller than those of the NSS, GNSS, and GPNSS (ILU) methods.

Fig. 3  Convergence region of GPNSS (IU) method

Table 1  Comparison of spectral radii with b = 1

                          n_0 = 3   n_0 = 4   n_0 = 5   n_0 = 6
NSS           α           1.731     1.1589    0.765     0.680
              ρ(M(α))     0.8045    0.860     0.9077    0.9596
GNSS          α           0.6180    0.817     0.941     1.013
              β           1.1849    0.9574    0.869     0.878
              ρ(M(α,β))   0.7687    0.850     0.9058    0.9336
GPNSS (ILU)   α           0.904     1.0168    0.9306    1.488
              β           1.7       1.0963    1.1969    0.8905
              ρ(M(α,β))   0.7609    0.883     0.877     0.9077
GPNSS (IU)    α           0.5911    0.3595    0.4799    0.6063
              β           2.030     1.5571    1.66      1.0700
              ρ(M(α,β))   0.5678    0.656     0.775     0.7855

Case 2  We take $b = 10$ in the model equation and $c = 3$ in the NSS splitting. One can also check that the spectral conditions are $\eta_{\max}^{2} \geq \gamma_{\min}\gamma_{\max}$ and $e_{\min}^{2} < \gamma_{\min}^{2} + \eta_{\max}^{2} < e_{\max}^{2}$. By Theorems 1 and 3, we obtain the convergence region $C$ of GPNSS (IU) (see also Fig. 4),

$C = \{(\alpha, \beta) \mid \alpha > 0, \beta > \beta_{2}(\alpha)\} = \bigcup_{i=1}^{3}\Pi_{i}$, where $\beta_{2}(\alpha) = \dfrac{8.13\alpha - 1.10}{0.99\alpha + 8.13}$,

and the optimal parameter pair is $(\mu_{0}, \mu_{0})$. For our example here, the set $\Pi_{3}$ is empty. Table 2 also gives a comparison of the spectral radii. Note that in this case, the GNSS method reduces to the NSS method.

Fig. 4  Convergence region of GPNSS (IU) method

Table 2  Comparison of spectral radii with b = 10

                          n_0 = 3   n_0 = 4   n_0 = 5   n_0 = 6
NSS (GNSS)    α           3.0473    3.0154    3.0064    3.0031
              ρ(M(α))     0.8810    0.897     0.99      0.9498
GPNSS (ILU)   α = β       3.0697    3.035     3.0178    3.0094
              ρ(M(α,β))   0.8877    0.9101    0.971     0.9401
GPNSS (IU)    α = β       3.80      3.1370    3.0740    3.0403
              ρ(M(α,β))   0.7934    0.797     0.9141    0.876

3.4 Convergence speed

For the given differential equation, we also perform numerical experiments to compare the convergence speeds of NSS, GNSS, GPNSS (ILU), and GPNSS (IU) in the two cases. For different numbers $n_{0}$, we list the numbers of iterations and the CPU times for $b = 1$ and $b = 10$ in Tables 3 and 4, respectively. Given the tolerance $\|r^{k}\| < 10^{-7}$, we find that GPNSS (IU) is better than the other three methods in both IT (the number of iterations) and CPU (the time in seconds) as $N$ increases. It also shows that using the IU preconditioner in GPNSS is more efficient than using the ILU preconditioner when $n_{0}$ becomes large.

Table 3  Comparison of iterations and CPU time with b = 1

                      n_0 = 3   n_0 = 4   n_0 = 5   n_0 = 6
NSS           IT      41        6         98        64
              CPU     0.968     1.0938    3.4063    0.9060
GNSS          IT      37        6         94        134
              CPU     0.968     1.0313    2.9688    7.4375
GPNSS (ILU)   IT      35        50        69        95
              CPU     0.0937    0.343     1.065     2.5469
GPNSS (IU)    IT      17        3         30        40
              CPU     0.0937    0.381     0.9843    2.344

Table 4  Comparison of iterations and CPU time with b = 10

                      n_0 = 3   n_0 = 4   n_0 = 5   n_0 = 6
NSS (GNSS)    IT      61        94        139       195
              CPU     0.3130    1.1875    3.7344    8.4063
GPNSS (ILU)   IT      77        99        16        160
              CPU     0.1875    0.531     1.5000    4.188
GPNSS (IU)    IT      5         40        60        71
              CPU     0.185     0.4843    1.515     6.7500

Example 2  Consider the system of linear equations $Ax = b$ with the coefficient matrix

$A = \begin{bmatrix} B & E \\ -E^{\mathrm{T}} & 0.5I \end{bmatrix}$,

where

$B = \begin{bmatrix} I \otimes T + T \otimes I & 0 \\ 0 & I \otimes T + T \otimes I \end{bmatrix} \in \mathbb{R}^{2m^{2} \times 2m^{2}}, \quad E = \begin{bmatrix} I \otimes F \\ F \otimes I \end{bmatrix} \in \mathbb{R}^{2m^{2} \times m^{2}}$,

and $T = \mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{m \times m}$, $F = \delta h \cdot \mathrm{tridiag}(-1, 1, 0) \in \mathbb{R}^{m \times m}$ with $h = \frac{1}{m+1}$ the mesh size of the discretization (see also [18]).

Since HSS is a special and specific splitting of NSS, we choose $N = \frac{1}{2}(A + A^{*})$ and $S = \frac{1}{2}(A - A^{*})$ in our numerical test. From the actual computation, we find that $\eta_{\max}^{2} < \gamma_{\min}\gamma_{\max}$ and $e_{\min}^{2} < \gamma_{\min}\gamma_{\max} - \eta_{\max}^{2} < e_{\max}^{2}$. By Theorems 1 and 2, we can easily conclude that the GPNSS iteration method converges unconditionally and that the two optimal parameters $\alpha$ and $\beta$ are identical. Actually, since $N$ is Hermitian here ($\eta_{\max} = 0$), one can deduce that $\alpha_{*} = \beta_{*} = \sqrt{\gamma_{\min}\gamma_{\max}}$.
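The block coefficient matrix of Example 2 is easy to assemble; the following SciPy sketch is our illustration (the minus sign in the (2,1) block follows the positive definite reconstruction above):

```python
import scipy.sparse as sp

def example2_matrix(m, delta):
    """Assemble A = [[B, E], [-E^T, 0.5*I]] of Example 2."""
    h = 1.0 / (m + 1)
    I = sp.identity(m, format="csr")
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m), format="csr")
    F = delta * h * sp.diags([-1.0, 1.0], [-1, 0], shape=(m, m), format="csr")
    L = sp.kron(I, T) + sp.kron(T, I)               # I (x) T + T (x) I
    B = sp.block_diag([L, L])                       # 2m^2 x 2m^2
    E = sp.vstack([sp.kron(I, F), sp.kron(F, I)])   # 2m^2 x m^2
    return sp.bmat([[B, E], [-E.T, 0.5 * sp.identity(m * m)]]).tocsr()

A = example2_matrix(5, delta=10.0)
```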

The experimental optimal parameters $\alpha_{*}$, $\beta_{*}$ and the corresponding spectral radii $\rho(M(\alpha, \beta))$ are listed in Table 5, and the numbers of iterations and the CPU times in seconds are listed in Table 6.

Table 5  Comparison of spectral radii with δ = 10

                          m = 5    m = 10   m = 15   m = 20
NSS (GNSS)    α           1.9319   1.168    0.7806   0.5963
              ρ(M(α))     0.7099   0.6148   0.8335   0.9086
GPNSS (ILU)   α = β       1.8157   1.0646   0.8519   0.7763
              ρ(M(α,β))   0.3646   0.587    0.6034   0.6318

Table 6  Comparison of iterations and CPU time with δ = 10

                      m = 5    m = 10   m = 15   m = 20
NSS (GNSS)    IT      34       60       93       13
              CPU     0.3405   7.1903   78.6130  48.506
GPNSS (ILU)   IT      18       31       41       47
              CPU     0.110    2.94     44.0634  196.577

Remark 6  The splitting NSS is not unique for a given matrix $A$. For the sake of simplicity, our NSS here is taken to be the same as HSS in this numerical test. Whether other types of NSS splittings may lead to better convergence behavior is still a problem to be solved.

Remark 7  The coefficient matrix of the linear system in our test is just a special case of generalized saddle-point problems. When $A$ is a non-Hermitian positive semidefinite matrix, for example,

$A = \begin{bmatrix} B & E \\ -E^{*} & 0 \end{bmatrix}$,

the GPNSS iteration method may not be particularly suitable. However, this is another important topic, and new research on it may appear elsewhere in the future.

References

[1] Golub G H, Van Loan C F. Matrix Computations [M]. 3rd ed. Baltimore: The Johns Hopkins University Press, 1996.
[2] Yang A L, An J, Wu Y J. A generalized preconditioned HSS method for non-Hermitian positive definite linear systems [J]. Appl Math Comput, 2010, 216: 1715-1722.
[3] Bai Z Z, Golub G H, Ng M K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems [J]. SIAM J Matrix Anal Appl, 2003, 24(3): 603-626.
[4] Li L, Huang T Z, Liu X P. Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems [J]. Comput Math Appl, 2007, 54: 147-159.
[5] Bai Z Z, Golub G H, Ng M K. On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations [J]. Numer Linear Algebra Appl, 2007, 14: 319-335.

[6] Bai Z Z, Golub G H, Lu L Z, Yin J F. Block triangular and skew-Hermitian splitting methods for positive definite linear systems [J]. SIAM J Sci Comput, 2005, 26(3): 844-863.
[7] Bai Z Z, Golub G H, Li C K. Convergence properties of preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite matrices [J]. Math Comput, 2007, 76: 287-298.
[8] Bai Z Z, Golub G H, Pan J Y. Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems [J]. Numer Math, 2004, 98: 1-32.
[9] Chen M, Temam R. Incremental unknowns for solving partial differential equations [J]. Numer Math, 1991, 59: 255-271.
[10] Chen M, Temam R. Incremental unknowns in finite differences: condition number of the matrix [J]. SIAM J Matrix Anal Appl, 1993, 14: 432-455.
[11] Song L J, Wu Y J. A modified Crank-Nicolson scheme with incremental unknowns for convection dominated diffusion equations [J]. Appl Math Comput, 2010, 215: 1969-1979.
[12] Wu Y J, Wang Y, Zeng M L, Yang A L. Implementation of modified Marder-Weitzner method for solving nonlinear eigenvalue problems [J]. J Comput Appl Math, 2009, 226: 166-176.
[13] Greenbaum A. Iterative Methods for Solving Linear Systems [M]. Philadelphia: SIAM, 1997.
[14] Quarteroni A, Valli A. Numerical Approximation of Partial Differential Equations [M]. Berlin: Springer-Verlag, 1994.
[15] Saad Y. Iterative Methods for Sparse Linear Systems [M]. 2nd ed. Philadelphia: SIAM, 2003.
[16] Thomas J W. Numerical Partial Differential Equations: Finite Difference Methods [M]. New York: Springer-Verlag, 1995.
[17] Evans D J. Preconditioning Methods: Analysis and Applications [M]. New York: Gordon and Breach Science Publishers, 1983.
[18] Bai Z Z, Golub G H, Li C K. Optimal parameter in Hermitian and skew-Hermitian splitting methods for certain two-by-two block matrices [J]. SIAM J Sci Comput, 2006, 28: 583-603.