On generalized preconditioned Hermitian and skew-Hermitian splitting methods for saddle point problems

Chunping Pan
Department of Humanities and Social Sciences
Zhejiang Industry Polytechnic College
Shaoxing, Zhejiang 312000, P.R. China
panchunping@126.com

Abstract: In this paper, we study iterative algorithms for saddle point problems (SPP). Bai, Golub and Pan recently studied a class of preconditioned Hermitian and skew-Hermitian splitting methods (PHSS). By further accelerating it with additional parameters, and using the Hermitian/skew-Hermitian splitting iteration technique, we present a generalized preconditioned Hermitian and skew-Hermitian splitting method with four parameters (4-GPHSS). Under suitable conditions, we give convergence results. Numerical examples further confirm the correctness of the theory and the effectiveness of the method.

Key Words: Saddle point problems, iterative method, Hermitian and skew-Hermitian splitting method, preconditioned Hermitian and skew-Hermitian splitting method, SOR-like method

1 Introduction

We consider the iterative solution of large sparse saddle point problems of the form

$$\begin{pmatrix} A & B \\ B^T & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} p \\ q \end{pmatrix}, \qquad (1)$$

where $A \in \mathbb{R}^{m\times m}$ is a symmetric positive definite matrix, $B \in \mathbb{R}^{m\times n}$ is a matrix of full column rank, and $p \in \mathbb{R}^m$, $q \in \mathbb{R}^n$ are two given vectors; here $m \ge n$, and $B^T$ denotes the transpose of the matrix $B$. These assumptions guarantee the existence and uniqueness of the solution of the linear system. This system arises as the first-order optimality conditions of the following equality-constrained quadratic programming problem:

$$\min\ J(x) = \frac{1}{2}x^T A x - p^T x \qquad (2)$$

$$\text{s.t.}\quad B^T x = q. \qquad (3)$$

In this case the variable $y$ represents the vector of Lagrange multipliers. Any solution $(x_*, y_*)$ of (1) is a saddle point of the Lagrangian

$$L(x, y) = \frac{1}{2}x^T A x - p^T x + (B^T x - q)^T y, \qquad (4)$$

hence the name saddle point problem given to (1). Recall that a saddle point is a point $(x_*, y_*) \in \mathbb{R}^{m+n}$ that satisfies

$$L(x_*, y) \le L(x_*, y_*) \le L(x, y_*) \qquad (5)$$

for any $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$, or, equivalently,

$$\min_x \max_y L(x, y) = L(x_*, y_*) = \max_y \min_x L(x, y). \qquad (6)$$
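To make the link between system (1) and the quadratic program (2)-(3) concrete, the following small NumPy sketch (ours, not part of the original presentation) assembles the saddle point system for randomly generated data and checks the first-order optimality conditions; the matrices are illustrative stand-ins, not the test problem of Section 4.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3

M0 = rng.standard_normal((m, m))
A = M0 @ M0.T + m * np.eye(m)        # symmetric positive definite, m x m
B = rng.standard_normal((m, n))      # full column rank (almost surely)
p = rng.standard_normal(m)
q = rng.standard_normal(n)

# Saddle point system (1): [A B; B^T 0] [x; y] = [p; q]
K = np.block([[A, B], [B.T, np.zeros((n, n))]])
z = np.linalg.solve(K, np.concatenate([p, q]))
x, y = z[:m], z[m:]

# First-order optimality of (2)-(3): A x + B y = p and B^T x = q,
# i.e. grad_x L(x, y) = 0 and the constraint (3), with L as in (4)
assert np.allclose(A @ x + B @ y, p)
assert np.allclose(B.T @ x, q)
```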
Systems of the form (1) also arise in nonlinearly constrained optimization (sequential quadratic programming and interior point methods), in fluid dynamics (the Stokes problem), incompressible elasticity, circuit analysis, structural analysis, and so forth [1].

Since the above problem is large and sparse, iterative methods for solving equation (1) are attractive because of their modest storage requirements and their preservation of sparsity. Among the best known and oldest of these methods are the Uzawa algorithms [2]. The well-known SOR method, a simple iterative method that is popular in engineering applications, cannot be applied directly to system (1) because of the singularity of the block diagonal part of the coefficient matrix. Recently, several generalizations of the SOR method to system (1) have been proposed [3, 4, 5, 6, 7].

Benzi and Golub discussed the convergence and the preconditioning properties of the Hermitian and skew-Hermitian splitting (HSS) iteration method when it is used to solve the saddle point problem [8]. Then Bai et al. established a class of preconditioned Hermitian/skew-Hermitian splitting (PHSS) iteration methods for the saddle point problem [9], and Pan et al. further proposed its two-parameter acceleration, called the generalized preconditioned Hermitian/skew-Hermitian splitting (GPHSS) iteration method, and studied the convergence of this iterative scheme [10]. Both theory and experiments have shown that these methods are robust and efficient for solving the saddle point problem, whether they are used as solvers or as preconditioners for Krylov subspace iteration methods.

In this paper, by further accelerating the PHSS iteration method with additional parameters, we present a generalized preconditioned Hermitian and skew-Hermitian splitting method with four parameters (4-GPHSS). Under suitable conditions, we give convergence results. Numerical results show that the new method is very effective.

2 The Four-parameter PHSS iteration methods

In this section, we review the HSS and PHSS iteration methods for solving the saddle point problem, presented by Bai, Golub and Ng, and by Bai, Golub and Pan [8, 9].

Let $A \in \mathbb{C}^{n\times n}$ be a positive definite matrix, and let $H = \frac12(A + A^*)$ and $S = \frac12(A - A^*)$ be its Hermitian and skew-Hermitian parts. Given an initial guess $x^{(0)} \in \mathbb{C}^n$, for $k = 0, 1, 2, \ldots$, until $\{x^{(k)}\}$ converges, compute

$$\begin{cases} (\alpha I + H)x^{(k+\frac12)} = (\alpha I - S)x^{(k)} + b, \\ (\alpha I + S)x^{(k+1)} = (\alpha I - H)x^{(k+\frac12)} + b, \end{cases} \qquad (7)$$

where $\alpha$ is a given positive constant. In matrix-vector form, the above HSS iteration method can be equivalently rewritten as

$$x^{(k+1)} = M(\alpha)x^{(k)} + N(\alpha)b, \quad k = 0, 1, 2, \ldots, \qquad (8)$$

where

$$M(\alpha) = (\alpha I + S)^{-1}(\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S), \quad N(\alpha) = 2\alpha(\alpha I + S)^{-1}(\alpha I + H)^{-1}.$$

Here, $M(\alpha)$ is the iteration matrix of the HSS iteration. In fact, (8) may also result from the splitting $A = F(\alpha) - G(\alpha)$ of the coefficient matrix $A$, with

$$F(\alpha) = \frac{1}{2\alpha}(\alpha I + H)(\alpha I + S), \quad G(\alpha) = \frac{1}{2\alpha}(\alpha I - H)(\alpha I - S). \qquad (9)$$

The following theorem describes the convergence property of the HSS iteration.

Theorem 1. [8] Let $A \in \mathbb{C}^{n\times n}$ be a positive definite matrix, let $H = \frac12(A + A^*)$ and $S = \frac12(A - A^*)$ be its Hermitian and skew-Hermitian parts, respectively, and let $\alpha$ be a positive constant. Then the spectral radius $\rho(M(\alpha))$ of the iteration matrix $M(\alpha)$ of the HSS iteration is bounded by

$$\sigma(\alpha) = \max_{\lambda_j \in \lambda(H)} \frac{|\alpha - \lambda_j|}{\alpha + \lambda_j},$$

where $\lambda(H)$ is the spectral set of the matrix $H$. Therefore, it follows that

$$\rho(M(\alpha)) \le \sigma(\alpha) < 1, \quad \forall\, \alpha > 0,$$

i.e., the HSS iteration converges to the exact solution $x_* \in \mathbb{C}^n$ of the system of linear equations $Ax = b$. Moreover, if $\gamma_{\min}$ and $\gamma_{\max}$ are the lower and upper bounds of the eigenvalues of the matrix $H$, respectively, then

$$\alpha^* = \arg\min_\alpha\ \max_{\gamma_{\min} \le \lambda \le \gamma_{\max}} \frac{|\alpha - \lambda|}{\alpha + \lambda} = \sqrt{\gamma_{\min}\gamma_{\max}}$$

and

$$\sigma(\alpha^*) = \frac{\sqrt{\gamma_{\max}} - \sqrt{\gamma_{\min}}}{\sqrt{\gamma_{\max}} + \sqrt{\gamma_{\min}}} = \frac{\sqrt{\kappa(H)} - 1}{\sqrt{\kappa(H)} + 1},$$

where $\kappa(H)$ is the spectral condition number of $H$.
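As a quick illustration of Theorem 1, the following NumPy sketch (our own; the random data are illustrative) runs the HSS iteration (7) on a small positive definite non-symmetric system with the optimal parameter $\alpha^* = \sqrt{\gamma_{\min}\gamma_{\max}}$, and checks the bound $\rho(M(\alpha)) \le \sigma(\alpha) < 1$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

H0 = rng.standard_normal((n, n))
H = H0 @ H0.T + n * np.eye(n)     # Hermitian part, symmetric positive definite
S0 = rng.standard_normal((n, n))
S = (S0 - S0.T) / 2               # skew-Hermitian (here: skew-symmetric) part
A = H + S                         # positive definite, non-symmetric
b = rng.standard_normal(n)
I = np.eye(n)

lam = np.linalg.eigvalsh(H)
alpha = np.sqrt(lam.min() * lam.max())       # alpha* of Theorem 1

x = np.zeros(n)
for k in range(50):                          # HSS iteration (7)
    xh = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
    x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ xh + b)
print(np.linalg.norm(A @ x - b))             # small after a few dozen sweeps

M = np.linalg.solve(alpha * I + S, alpha * I - H) @ \
    np.linalg.solve(alpha * I + H, alpha * I - S)   # M(alpha) of (8)
rho = max(abs(np.linalg.eigvals(M)))
sigma = max(abs(alpha - lam) / (alpha + lam))       # sigma(alpha) of Theorem 1
assert rho <= sigma + 1e-12 and sigma < 1
```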
To establish the convergence properties of iterative methods for the saddle point problem, we begin by writing the saddle point problem (1) in the nonsymmetric form

$$\mathcal{A}z = b, \qquad (10)$$

where

$$\mathcal{A} = \begin{pmatrix} A & B \\ -B^T & 0 \end{pmatrix}, \quad z = \begin{pmatrix} x \\ y \end{pmatrix}, \quad b = \begin{pmatrix} p \\ -q \end{pmatrix}, \qquad (11)$$

and define the matrices

$$P = \begin{pmatrix} A & 0 \\ 0 & Q \end{pmatrix} \in \mathbb{R}^{(m+n)\times(m+n)}, \qquad (12)$$

$$\hat B = A^{-\frac12} B Q^{-\frac12}, \qquad (13)$$

where $Q \in \mathbb{R}^{n\times n}$ is a prescribed nonsingular symmetric matrix. Define

$$\hat{\mathcal{A}} = P^{-\frac12}\mathcal{A}P^{-\frac12} = \begin{pmatrix} I & \hat B \\ -\hat B^T & 0 \end{pmatrix}, \qquad (14)$$

$$\begin{pmatrix} \hat x \\ \hat y \end{pmatrix} = P^{\frac12}\begin{pmatrix} x \\ y \end{pmatrix}, \qquad (15)$$

$$\hat b = P^{-\frac12} b = \begin{pmatrix} \hat p \\ \hat q \end{pmatrix}. \qquad (16)$$

Then the system of linear equations (1) can be transformed into the following equivalent one:

$$\hat{\mathcal{A}}\begin{pmatrix} \hat x \\ \hat y \end{pmatrix} = \hat b. \qquad (17)$$

Evidently, the Hermitian and skew-Hermitian parts of the matrix $\hat{\mathcal{A}} \in \mathbb{R}^{(m+n)\times(m+n)}$ are, respectively,

$$\hat H = \frac12(\hat{\mathcal{A}} + \hat{\mathcal{A}}^T) = \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} \qquad (18)$$

and

$$\hat S = \frac12(\hat{\mathcal{A}} - \hat{\mathcal{A}}^T) = \begin{pmatrix} 0 & \hat B \\ -\hat B^T & 0 \end{pmatrix}. \qquad (19)$$

By straightforwardly applying the HSS iteration technique to (17), it is easy to obtain the HSS iteration scheme [8]:

$$\begin{cases} (\alpha I + \hat H)\begin{pmatrix} \hat x^{(k+\frac12)} \\ \hat y^{(k+\frac12)} \end{pmatrix} = (\alpha I - \hat S)\begin{pmatrix} \hat x^{(k)} \\ \hat y^{(k)} \end{pmatrix} + \hat b, \\[4pt] (\alpha I + \hat S)\begin{pmatrix} \hat x^{(k+1)} \\ \hat y^{(k+1)} \end{pmatrix} = (\alpha I - \hat H)\begin{pmatrix} \hat x^{(k+\frac12)} \\ \hat y^{(k+\frac12)} \end{pmatrix} + \hat b, \end{cases} \qquad (20)$$

where $\alpha$ is a given positive constant. The iteration matrix of this HSS iteration method is

$$\hat M(\alpha) = (\alpha I + \hat S)^{-1}(\alpha I - \hat H)(\alpha I + \hat H)^{-1}(\alpha I - \hat S).$$

It then follows immediately that, in the original variables, we obtain the following preconditioned Hermitian/skew-Hermitian splitting (PHSS) iteration method [9]:

$$\begin{cases} (\alpha P + H)\begin{pmatrix} x^{(k+\frac12)} \\ y^{(k+\frac12)} \end{pmatrix} = (\alpha P - S)\begin{pmatrix} x^{(k)} \\ y^{(k)} \end{pmatrix} + b, \\[4pt] (\alpha P + S)\begin{pmatrix} x^{(k+1)} \\ y^{(k+1)} \end{pmatrix} = (\alpha P - H)\begin{pmatrix} x^{(k+\frac12)} \\ y^{(k+\frac12)} \end{pmatrix} + b, \end{cases} \qquad (21)$$

where $H = \frac12(\mathcal{A} + \mathcal{A}^T)$ and $S = \frac12(\mathcal{A} - \mathcal{A}^T)$. The iteration matrix of the PHSS iteration method is

$$M(\alpha) = (\alpha P + S)^{-1}(\alpha P - H)(\alpha P + H)^{-1}(\alpha P - S). \qquad (22)$$

Let $\omega, \tau, \alpha$ and $\beta$ be four nonzero reals, let $I_m \in \mathbb{R}^{m\times m}$ and $I_n \in \mathbb{R}^{n\times n}$ be the $m$-by-$m$ and $n$-by-$n$ identity matrices, respectively, and let

$$\Omega = \begin{pmatrix} \omega I_m & 0 \\ 0 & \tau I_n \end{pmatrix}, \quad \Lambda = \begin{pmatrix} \alpha I_m & 0 \\ 0 & \beta I_n \end{pmatrix}.$$

Then we consider the following four-parameter PHSS iteration scheme:

$$\begin{cases} (\Omega P + H)\begin{pmatrix} x^{(k+\frac12)} \\ y^{(k+\frac12)} \end{pmatrix} = (\Omega P - S)\begin{pmatrix} x^{(k)} \\ y^{(k)} \end{pmatrix} + b, \\[4pt] (\Lambda P + S)\begin{pmatrix} x^{(k+1)} \\ y^{(k+1)} \end{pmatrix} = (\Lambda P - H)\begin{pmatrix} x^{(k+\frac12)} \\ y^{(k+\frac12)} \end{pmatrix} + b. \end{cases} \qquad (23)$$

More precisely, we have the following algorithmic description of this four-parameter preconditioned Hermitian and skew-Hermitian splitting method (4-GPHSS).

Algorithm 2. (The 4-GPHSS iteration method) Let $Q \in \mathbb{R}^{n\times n}$ be a nonsingular symmetric matrix. Given initial vectors $x^{(0)} \in \mathbb{R}^m$, $y^{(0)} \in \mathbb{R}^n$ and four relaxation factors $\omega, \tau, \alpha, \beta \ne 0$, for $k = 0, 1, 2, \ldots$, until the iteration sequence $\{((x^{(k)})^T, (y^{(k)})^T)^T\}$ converges, compute

$$\begin{cases} x^{(k+\frac12)} = \dfrac{\omega}{1+\omega}x^{(k)} + \dfrac{1}{\omega+1}A^{-1}(p - By^{(k)}), \\[4pt] y^{(k+\frac12)} = y^{(k)} + \dfrac{1}{\tau}Q^{-1}(B^T x^{(k)} - q), \\[4pt] y^{(k+1)} = D^{-1}\left(\beta Q y^{(k+\frac12)} + \dfrac{\alpha-1}{\alpha}B^T x^{(k+\frac12)} + \dfrac{1}{\alpha}B^T A^{-1}p - q\right), \\[4pt] x^{(k+1)} = \dfrac{\alpha-1}{\alpha}x^{(k+\frac12)} + \dfrac{1}{\alpha}A^{-1}(p - By^{(k+1)}), \end{cases} \qquad (24)$$

where $D = \frac{1}{\alpha}B^T A^{-1}B + \beta Q \in \mathbb{R}^{n\times n}$.

Obviously, when $\omega = \alpha$ and $\tau = \beta$, Algorithm 2 reduces to the GPHSS method in [10]; when $\omega = \alpha = \tau = \beta$, it becomes the PHSS method [9].
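The following NumPy transcription of Algorithm 2 is a minimal sketch, assuming dense direct solves for the subsystems (a practical implementation would factor $A$, $Q$ and $D$ once). The routine name gphss4 and its stopping rule (the relative residual of the symmetric form (1), as used in Section 4) are our choices.

```python
import numpy as np

def gphss4(A, B, Q, p, q, om, tau, al, be, tol=1e-6, maxit=1000):
    """4-GPHSS iteration (24); om, tau, al, be play the roles of
    omega, tau, alpha, beta, and Q is symmetric positive definite."""
    m, n = B.shape
    x, y = np.zeros(m), np.zeros(n)
    Ainv_p = np.linalg.solve(A, p)                    # A^{-1} p, reused every sweep
    D = be * Q + (B.T @ np.linalg.solve(A, B)) / al   # D = (1/al) B^T A^{-1} B + be*Q
    r0 = np.linalg.norm(np.concatenate([p - A @ x - B @ y, q - B.T @ x]))
    for k in range(maxit):
        xh = (om / (om + 1)) * x + np.linalg.solve(A, p - B @ y) / (om + 1)
        yh = y + np.linalg.solve(Q, B.T @ x - q) / tau
        y = np.linalg.solve(D, be * (Q @ yh) + ((al - 1) / al) * (B.T @ xh)
                            + (B.T @ Ainv_p) / al - q)
        x = ((al - 1) / al) * xh + np.linalg.solve(A, p - B @ y) / al
        r = np.linalg.norm(np.concatenate([p - A @ x - B @ y, q - B.T @ x]))
        if r <= tol * r0:
            return x, y, k + 1
    return x, y, maxit
```

Setting om = al and tau = be recovers the GPHSS method of [10], and om = tau = al = be the PHSS method of [9], which gives a cheap consistency check of the implementation.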
By selecting different matrices $Q$ we obtain different useful variants of the 4-GPHSS iterative algorithm, for example $Q = \theta I$ ($\theta \ne 0$), $Q = B^T A^{-1} B$, or $Q = B^T B$. In addition to these special choices, any other choice may be used as long as $Q$ is kept symmetric positive definite.

After straightforward computations we obtain the iteration matrix of the 4-GPHSS iteration method:

$$M_{\omega,\tau,\alpha,\beta} = (\Lambda P + S)^{-1}(\Lambda P - H)(\Omega P + H)^{-1}(\Omega P - S) = P^{-\frac12}\hat M_{\omega,\tau,\alpha,\beta}P^{\frac12},$$

where

$$\hat M_{\omega,\tau,\alpha,\beta} = (\Lambda + \hat S)^{-1}(\Lambda - \hat H)(\Omega + \hat H)^{-1}(\Omega - \hat S).$$

Therefore, we have $\rho(M_{\omega,\tau,\alpha,\beta}) = \rho(\hat M_{\omega,\tau,\alpha,\beta})$.

Evidently, the 4-GPHSS iteration method can be equivalently rewritten as

$$\begin{pmatrix} x^{(k+1)} \\ y^{(k+1)} \end{pmatrix} = L_{\omega,\tau,\alpha,\beta}\begin{pmatrix} x^{(k)} \\ y^{(k)} \end{pmatrix} + N_{\omega,\tau,\alpha,\beta}\begin{pmatrix} p \\ -q \end{pmatrix}, \qquad (25)$$

where

$$L_{\omega,\tau,\alpha,\beta} = \begin{pmatrix} \alpha A & B \\ -B^T & \beta Q \end{pmatrix}^{-1}\begin{pmatrix} \frac{\omega(\alpha-1)}{\omega+1}A & -\frac{\alpha-1}{\omega+1}B \\ \frac{\beta}{\tau}B^T & \beta Q \end{pmatrix} \qquad (26)$$

and

$$N_{\omega,\tau,\alpha,\beta} = \begin{pmatrix} \alpha A & B \\ -B^T & \beta Q \end{pmatrix}^{-1}\begin{pmatrix} \frac{\alpha+\omega}{\omega+1}I_m & 0 \\ 0 & \frac{\beta+\tau}{\tau}I_n \end{pmatrix}. \qquad (27)$$

Here, $L_{\omega,\tau,\alpha,\beta}$ is the iteration matrix of the 4-GPHSS iteration. In fact, (25) may also result from the splitting

$$\mathcal{A} = M_{\omega,\tau,\alpha,\beta} - N_{\omega,\tau,\alpha,\beta} \qquad (28)$$

of the coefficient matrix $\mathcal{A}$, with

$$M_{\omega,\tau,\alpha,\beta} = \begin{pmatrix} \frac{\alpha(\omega+1)}{\omega+\alpha}A & \frac{\omega+1}{\omega+\alpha}B \\ -\frac{\tau}{\beta+\tau}B^T & \frac{\beta\tau}{\beta+\tau}Q \end{pmatrix}, \qquad (29)$$

$$N_{\omega,\tau,\alpha,\beta} = \begin{pmatrix} \frac{\omega(\alpha-1)}{\omega+\alpha}A & -\frac{\alpha-1}{\omega+\alpha}B \\ \frac{\beta}{\beta+\tau}B^T & \frac{\beta\tau}{\beta+\tau}Q \end{pmatrix}. \qquad (30)$$
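The splitting (28)-(30) can be sanity-checked numerically. The sketch below (our construction, with arbitrary sample data and the parameter values of Table 6) verifies that $\mathcal{A} = M - N$ and that $M^{-1}N$ coincides with the two-half-step iteration matrix defined by (23).

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3
M0 = rng.standard_normal((m, m)); A = M0 @ M0.T + m * np.eye(m)
B = rng.standard_normal((m, n)); Q = B.T @ B
om, tau, al, be = 1.2, 0.2, 2.6, 0.0923

Acal = np.block([[A, B], [-B.T, np.zeros((n, n))]])          # nonsymmetric form (10)
Msp = np.block([[al*(om+1)/(om+al) * A,  (om+1)/(om+al) * B],
                [-tau/(be+tau) * B.T,    be*tau/(be+tau) * Q]])    # (29)
Nsp = np.block([[om*(al-1)/(om+al) * A, -(al-1)/(om+al) * B],
                [be/(be+tau) * B.T,      be*tau/(be+tau) * Q]])    # (30)
assert np.allclose(Acal, Msp - Nsp)                          # splitting (28)

H = np.block([[A, np.zeros((m, n))], [np.zeros((n, m)), np.zeros((n, n))]])
S = Acal - H
P = np.block([[A, np.zeros((m, n))], [np.zeros((n, m)), Q]])
Om = np.diag(np.r_[om * np.ones(m), tau * np.ones(n)])       # Omega: omega, tau
La = np.diag(np.r_[al * np.ones(m), be * np.ones(n)])        # Lambda: alpha, beta
L = np.linalg.solve(La @ P + S,
                    (La @ P - H) @ np.linalg.solve(Om @ P + H, Om @ P - S))
assert np.allclose(L, np.linalg.solve(Msp, Nsp))             # iteration matrix (26)
```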
3 Convergence analysis

By straightforward computations, we can obtain an explicit expression for the iteration matrix $L_{\omega,\tau,\alpha,\beta}$ in (25).

Lemma 3. Consider the system of linear equations (10). Let $A \in \mathbb{R}^{m\times m}$ be a symmetric positive definite matrix, let $B \in \mathbb{R}^{m\times n}$ be of full column rank, and let $\omega, \tau, \alpha, \beta > 0$ be four given constants. Assume that $Q \in \mathbb{R}^{n\times n}$ is a symmetric positive definite matrix, and partition $L_{\omega,\tau,\alpha,\beta}$ in (25) as

$$L_{\omega,\tau,\alpha,\beta} = \begin{pmatrix} L_{11} & L_{12} \\ L_{21} & L_{22} \end{pmatrix}.$$

Then

$$\begin{aligned} L_{11} &= \frac{\omega(\alpha-1)}{\alpha(\omega+1)}I - T_{\alpha,\beta,\omega,\tau}\,A^{-1}BS_{\alpha,\beta}^{-1}B^T, \\ L_{12} &= -\frac{\beta(\alpha+\omega)}{\alpha(\omega+1)}A^{-1}BS_{\alpha,\beta}^{-1}Q, \\ L_{21} &= \left(\frac{\omega(\alpha-1)}{\alpha(\omega+1)} + \frac{\beta}{\tau}\right)S_{\alpha,\beta}^{-1}B^T, \\ L_{22} &= -\frac{\alpha-1}{\omega+1}I + \frac{\beta(\omega+\alpha)}{\omega+1}S_{\alpha,\beta}^{-1}Q, \end{aligned} \qquad (31)$$

where

$$T_{\alpha,\beta,\omega,\tau} = \frac{\omega(\alpha-1)}{\alpha^2(\omega+1)} + \frac{\beta}{\alpha\tau} \qquad (32)$$

and

$$S_{\alpha,\beta} = \beta Q + \frac{1}{\alpha}B^T A^{-1}B. \qquad (33)$$

Proof. Let

$$\hat M_{\omega,\tau,\alpha,\beta} = P^{-\frac12}M_{\omega,\tau,\alpha,\beta}P^{-\frac12} \qquad (34)$$

$$= \begin{pmatrix} \frac{\alpha(\omega+1)}{\omega+\alpha}I & \frac{\omega+1}{\alpha+\omega}\hat B \\ -\frac{\tau}{\beta+\tau}\hat B^T & \frac{\beta\tau}{\beta+\tau}I \end{pmatrix}, \qquad (35)$$

$$\hat N_{\omega,\tau,\alpha,\beta} = P^{-\frac12}N_{\omega,\tau,\alpha,\beta}P^{-\frac12} \qquad (36)$$

$$= \begin{pmatrix} \frac{\omega(\alpha-1)}{\omega+\alpha}I & -\frac{\alpha-1}{\alpha+\omega}\hat B \\ \frac{\beta}{\beta+\tau}\hat B^T & \frac{\beta\tau}{\beta+\tau}I \end{pmatrix}, \qquad (37)$$

where the matrices $P$ and $\hat B$ are defined in (12) and (13). Then

$$\hat L_{\omega,\tau,\alpha,\beta} = \begin{pmatrix} \hat L_{11} & \hat L_{12} \\ \hat L_{21} & \hat L_{22} \end{pmatrix} = \hat M_{\omega,\tau,\alpha,\beta}^{-1}\hat N_{\omega,\tau,\alpha,\beta}, \qquad (38)$$

with

$$\begin{aligned} \hat L_{11} &= \frac{\omega(\alpha-1)}{\alpha(\omega+1)}I - \left(\frac{\omega(\alpha-1)}{\alpha^2(\omega+1)} + \frac{\beta}{\alpha\tau}\right)\hat B\hat S_{\alpha,\beta}^{-1}\hat B^T, \\ \hat L_{12} &= -\frac{\beta(\alpha+\omega)}{\alpha(\omega+1)}\hat B\hat S_{\alpha,\beta}^{-1}, \\ \hat L_{21} &= \left(\frac{\omega(\alpha-1)}{\alpha(\omega+1)} + \frac{\beta}{\tau}\right)\hat S_{\alpha,\beta}^{-1}\hat B^T, \\ \hat L_{22} &= -\frac{\alpha-1}{\omega+1}I + \frac{\beta(\omega+\alpha)}{\omega+1}\hat S_{\alpha,\beta}^{-1}, \end{aligned}$$

where

$$\hat S_{\alpha,\beta} = \beta I + \frac{1}{\alpha}\hat B^T\hat B. \qquad (39)$$

Then, from (28), we have

$$L_{\omega,\tau,\alpha,\beta} = M_{\omega,\tau,\alpha,\beta}^{-1}N_{\omega,\tau,\alpha,\beta} \qquad (40)$$

$$= P^{-\frac12}\hat M_{\omega,\tau,\alpha,\beta}^{-1}\hat N_{\omega,\tau,\alpha,\beta}P^{\frac12} \qquad (41)$$

$$= P^{-\frac12}\hat L_{\omega,\tau,\alpha,\beta}P^{\frac12}, \qquad (42)$$

and the result follows immediately. $\Box$

Based on Lemma 3, we can further obtain the eigenvalues of the iteration matrix $L_{\omega,\tau,\alpha,\beta}$ of the 4-GPHSS method.

Lemma 4. Let the conditions of Lemma 3 be satisfied. If $\sigma_k$ ($k = 1, 2, \ldots, n$) are the positive singular values of the matrix $\hat B = A^{-\frac12}BQ^{-\frac12}$, then the eigenvalues of the iteration matrix $L_{\omega,\tau,\alpha,\beta}$ of the 4-GPHSS iteration method are $\frac{\omega(\alpha-1)}{\alpha(\omega+1)}$, with multiplicity $m - n$, and

$$\frac{d_k \pm \sqrt{d_k^2 - 4e_k}}{2\tau(\omega+1)(\alpha\beta + \sigma_k^2)}, \quad k = 1, 2, \ldots, n, \qquad (43)$$

where

$$d_k = \alpha\beta\tau(\omega+1) + \omega\beta\tau(\alpha-1) - \left[\beta(\omega+1) + \tau(\alpha-1)\right]\sigma_k^2,$$

$$e_k = \beta\tau(\omega+1)(\alpha-1)(\omega\tau + \sigma_k^2)(\alpha\beta + \sigma_k^2).$$

Proof. From (42) we know that $L_{\omega,\tau,\alpha,\beta}$ is similar to $\hat L_{\omega,\tau,\alpha,\beta}$ of (38). Therefore, we only need to compute the eigenvalues of the matrix $\hat L_{\omega,\tau,\alpha,\beta}$. Let $\hat B = U\Sigma_1 V^T$ be the singular value decomposition of the matrix $\hat B$, where $U \in \mathbb{R}^{m\times m}$ and $V \in \mathbb{R}^{n\times n}$ are orthogonal matrices and

$$\Sigma_1 = \begin{pmatrix} \Sigma \\ 0 \end{pmatrix}, \quad \Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n) \in \mathbb{R}^{n\times n}. \qquad (44)$$

Then, after a few computations, we have

$$\hat S_{\alpha,\beta} = V\left(\beta I + \frac{1}{\alpha}\Sigma^2\right)V^T = VDV^T,$$

and therefore

$$\hat L_{11} = U\begin{pmatrix} rI - tD^{-1}\Sigma^2 & 0 \\ 0 & rI \end{pmatrix}U^T, \quad \hat L_{12} = U\begin{pmatrix} -\frac{\beta(\alpha+\omega)}{\alpha(\omega+1)}\Sigma D^{-1} \\ 0 \end{pmatrix}V^T,$$

$$\hat L_{21} = V\left(\left(r + \frac{\beta}{\tau}\right)\Sigma D^{-1},\ 0\right)U^T, \quad \hat L_{22} = V\left(-\frac{\alpha-1}{\omega+1}I + \frac{\beta(\omega+\alpha)}{\omega+1}D^{-1}\right)V^T,$$

where $t = \frac{\omega(\alpha-1)}{\alpha^2(\omega+1)} + \frac{\beta}{\alpha\tau}$ and $r = \frac{\omega(\alpha-1)}{\alpha(\omega+1)}$. Define $\mathcal{Q} = \mathrm{diag}(U, V)$; then

$$\mathcal{Q}^T\hat L_{\omega,\tau,\alpha,\beta}\,\mathcal{Q} = \begin{pmatrix} rI - tD^{-1}\Sigma^2 & 0 & -\frac{\beta(\alpha+\omega)}{\alpha(\omega+1)}\Sigma D^{-1} \\ 0 & rI & 0 \\ \left(r + \frac{\beta}{\tau}\right)\Sigma D^{-1} & 0 & -\frac{\alpha-1}{\omega+1}I + \frac{\beta(\omega+\alpha)}{\omega+1}D^{-1} \end{pmatrix}.$$

It follows immediately that the eigenvalues of the matrix $\hat L_{\omega,\tau,\alpha,\beta}$ are $r = \frac{\omega(\alpha-1)}{\alpha(\omega+1)}$, with multiplicity $m - n$, together with those of the matrix

$$\begin{pmatrix} rI - tD^{-1}\Sigma^2 & -\frac{\beta(\alpha+\omega)}{\alpha(\omega+1)}\Sigma D^{-1} \\ \left(r + \frac{\beta}{\tau}\right)\Sigma D^{-1} & -\frac{\alpha-1}{\omega+1}I + \frac{\beta(\omega+\alpha)}{\omega+1}D^{-1} \end{pmatrix},$$

which are the same as those of the $2\times 2$ matrices $\frac{1}{(\omega+1)(\alpha\beta+\sigma_k^2)}L_k$, $k = 1, 2, \ldots, n$, where

$$L_k = \begin{pmatrix} \omega\beta(\alpha-1) - \frac{\beta(\omega+1)}{\tau}\sigma_k^2 & -\beta(\omega+\alpha)\sigma_k \\ \left(\omega(\alpha-1) + \frac{\alpha\beta(\omega+1)}{\tau}\right)\sigma_k & \alpha\beta(\omega+1) - (\alpha-1)\sigma_k^2 \end{pmatrix}.$$

The two eigenvalues of each matrix $\frac{1}{(\omega+1)(\alpha\beta+\sigma_k^2)}L_k$ are the two roots of the quadratic equation

$$\lambda^2 - \frac{d_k}{\tau(\omega+1)(\alpha\beta+\sigma_k^2)}\lambda + \frac{\beta(\alpha-1)(\omega\tau+\sigma_k^2)}{\tau(\omega+1)(\alpha\beta+\sigma_k^2)} = 0,$$

or, in other words,

$$\lambda = \frac{d_k \pm \sqrt{d_k^2 - 4e_k}}{2\tau(\omega+1)(\alpha\beta+\sigma_k^2)}. \qquad (45)$$

We conclude that the eigenvalues of the matrix $L_{\omega,\tau,\alpha,\beta}$ are $\frac{\omega(\alpha-1)}{\alpha(\omega+1)}$, with multiplicity $m - n$, and

$$\frac{d_k \pm \sqrt{d_k^2 - 4e_k}}{2\tau(\omega+1)(\alpha\beta+\sigma_k^2)}, \quad k = 1, 2, \ldots, n. \qquad (46)$$

This completes the proof. $\Box$
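Lemma 4 can be checked numerically: the script below (ours, using small random data) compares the eigenvalues of the iteration matrix of (26) with the closed-form values (43).

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 3
M0 = rng.standard_normal((m, m)); A = M0 @ M0.T + m * np.eye(m)
B = rng.standard_normal((m, n)); Q = B.T @ B
om, tau, al, be = 1.2, 0.2, 2.6, 0.0923

w, V = np.linalg.eigh(A); A_m12 = V @ np.diag(w ** -0.5) @ V.T   # A^{-1/2}
w, V = np.linalg.eigh(Q); Q_m12 = V @ np.diag(w ** -0.5) @ V.T   # Q^{-1/2}
sig = np.linalg.svd(A_m12 @ B @ Q_m12, compute_uv=False)         # sigma_k of (13)

pred = [om * (al - 1) / (al * (om + 1))] * (m - n)   # eigenvalue of multiplicity m-n
for s in sig:
    d = al*be*tau*(om+1) + om*be*tau*(al-1) - (be*(om+1) + tau*(al-1)) * s**2
    e = be*tau*(om+1)*(al-1) * (om*tau + s**2) * (al*be + s**2)
    den = 2 * tau * (om + 1) * (al*be + s**2)
    disc = np.sqrt(complex(d**2 - 4*e))
    pred += [(d + disc) / den, (d - disc) / den]     # the pair (43)

Msp = np.block([[al*(om+1)/(om+al)*A,  (om+1)/(om+al)*B],
                [-tau/(be+tau)*B.T,    be*tau/(be+tau)*Q]])
Nsp = np.block([[om*(al-1)/(om+al)*A, -(al-1)/(om+al)*B],
                [be/(be+tau)*B.T,      be*tau/(be+tau)*Q]])
eigs = np.linalg.eigvals(np.linalg.solve(Msp, Nsp))
print(np.sort_complex(eigs))
print(np.sort_complex(np.array(pred, dtype=complex)))   # agree up to roundoff
```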
Lemma 5. [11] Consider the real quadratic equation $x^2 - bx + c = 0$, where $b$ and $c$ are real numbers. Both roots of the equation are less than one in modulus if and only if $|c| < 1$ and $|b| < 1 + c$.

Theorem 6. Consider the system of linear equations (10). Let $A \in \mathbb{R}^{m\times m}$ be a symmetric positive definite matrix, let $B \in \mathbb{R}^{m\times n}$ be of full column rank, and let $\omega, \tau, \alpha, \beta > 0$ be four given constants. Assume that $Q \in \mathbb{R}^{n\times n}$ is a symmetric positive definite matrix. If $\omega\tau = \alpha\beta$, then

$$\rho(L_{\omega,\tau,\alpha,\beta}) < 1, \quad \forall\, \omega, \tau, \alpha, \beta > 0, \qquad (47)$$

i.e., the 4-GPHSS iteration converges to the exact solution of the system of linear equations (10).

Proof. According to Lemma 4, the eigenvalues of the iteration matrix $L_{\omega,\tau,\alpha,\beta}$ are $\frac{\omega(\alpha-1)}{\alpha(\omega+1)}$, with multiplicity $m - n$, together with the roots of the quadratic equations $\lambda^2 - b\lambda + c = 0$, where

$$b = \frac{2\alpha\beta\omega\tau + (\alpha-\omega)\beta\tau - [\beta(\omega+1) + \tau(\alpha-1)]\sigma_k^2}{\tau(\omega+1)(\alpha\beta+\sigma_k^2)}, \quad c = \frac{\beta(\alpha-1)(\omega\tau+\sigma_k^2)}{\tau(\omega+1)(\alpha\beta+\sigma_k^2)}.$$

Obviously, we have

$$\left|\frac{\omega(\alpha-1)}{\alpha(\omega+1)}\right| = \frac{|\alpha-1|}{\alpha}\cdot\frac{\omega}{\omega+1} < 1.$$

If $\omega\tau = \alpha\beta$, then after a few computations we have

$$|c| = \left|\frac{\beta(\alpha-1)(\omega\tau+\sigma_k^2)}{\tau(\omega+1)(\alpha\beta+\sigma_k^2)}\right| = \left|\frac{\omega(\alpha-1)}{\alpha(\omega+1)}\right| = \frac{|\alpha-1|}{\alpha}\cdot\frac{\omega}{\omega+1} < 1$$

and

$$b = \frac{2\alpha\beta\omega\tau + (\alpha-\omega)\beta\tau + (\tau - \beta - \omega\beta - \alpha\tau)\sigma_k^2}{\tau(\omega+1)(\alpha\beta+\sigma_k^2)} < \frac{2\alpha\beta\omega\tau + (\alpha-\omega)\beta\tau + (\tau - \beta + \omega\tau + \alpha\beta)\sigma_k^2}{\tau(\omega+1)(\alpha\beta+\sigma_k^2)} = 1 + \frac{\beta(\alpha-1)(\omega\tau+\sigma_k^2)}{\tau(\omega+1)(\alpha\beta+\sigma_k^2)} = 1 + c.$$

By Lemma 5, we have $|\lambda| < 1$; in other words, when $\omega\tau = \alpha\beta$,

$$\rho(L_{\omega,\tau,\alpha,\beta}) < 1, \quad \forall\, \omega, \tau, \alpha, \beta > 0. \qquad \Box$$
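A quick numerical probe of Theorem 6 (our sketch, with sample parameter triples chosen so that $\omega\tau = \alpha\beta$): the spectral radius of the iteration matrix stays below one.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 5, 3
M0 = rng.standard_normal((m, m)); A = M0 @ M0.T + m * np.eye(m)
B = rng.standard_normal((m, n)); Q = B.T @ B

def spectral_radius(om, tau, al, be):
    Msp = np.block([[al*(om+1)/(om+al)*A,  (om+1)/(om+al)*B],
                    [-tau/(be+tau)*B.T,    be*tau/(be+tau)*Q]])
    Nsp = np.block([[om*(al-1)/(om+al)*A, -(al-1)/(om+al)*B],
                    [be/(be+tau)*B.T,      be*tau/(be+tau)*Q]])
    return max(abs(np.linalg.eigvals(np.linalg.solve(Msp, Nsp))))

for om, tau, al in [(0.5, 0.4, 0.8), (1.2, 0.2, 1.5), (1.5, 0.1, 2.0)]:
    be = om * tau / al                    # enforce omega*tau = alpha*beta
    print(om, tau, al, be, spectral_radius(om, tau, al, be))   # expected < 1
```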
Theorem 7. [10] Consider the system of linear equations (10). Let $A \in \mathbb{R}^{m\times m}$ be a symmetric positive definite matrix, let $B \in \mathbb{R}^{m\times n}$ be of full column rank, and assume that $Q \in \mathbb{R}^{n\times n}$ is a symmetric positive definite matrix. If $\omega = \alpha$ and $\tau = \beta$, then

$$\rho(L_{\omega,\tau}) < 1, \quad \forall\, \omega > 0, \tau > 0, \qquad (48)$$

i.e., the GPHSS iteration converges to the exact solution of the system of linear equations (10).

Proof. From (42) we know that $L_{\omega,\tau,\alpha,\beta}$ is similar to $\hat L_{\omega,\tau,\alpha,\beta}$ of (38); therefore we only need to compute the eigenvalues of $\hat L_{\omega,\tau,\alpha,\beta}$. Based on Lemma 4, when $\omega = \alpha$ and $\tau = \beta$, the eigenvalues of the iteration matrix $L_{\omega,\tau}$ are $\frac{\omega-1}{\omega+1}$, with multiplicity $m - n$, and

$$\lambda^k_{1,2} = \frac{\omega(\omega\tau - \sigma_k^2) \pm \sqrt{(\tau\omega + \sigma_k^2)^2 - 4\omega^3\tau\sigma_k^2}}{(\omega+1)(\tau\omega+\sigma_k^2)}, \quad k = 1, 2, \ldots, n. \qquad (49)$$

We know that when $\tau\omega + \sigma_k^2 > 2\omega\sqrt{\tau\omega}\,\sigma_k$, the roots are real and

$$|\lambda^k_{1,2}| \le \frac{\omega|\omega\tau - \sigma_k^2| + \sqrt{(\tau\omega+\sigma_k^2)^2 - 4\omega^3\tau\sigma_k^2}}{(\omega+1)(\tau\omega+\sigma_k^2)} < \frac{\omega(\tau\omega+\sigma_k^2) + (\tau\omega+\sigma_k^2)}{(\omega+1)(\tau\omega+\sigma_k^2)} = \frac{\omega}{\omega+1}\left(1 + \frac{1}{\omega}\right) = 1,$$

while when $\tau\omega + \sigma_k^2 \le 2\omega\sqrt{\tau\omega}\,\sigma_k$ the two roots are complex conjugates and

$$|\lambda^k_1| = |\lambda^k_2| = \sqrt{\lambda^k_1\lambda^k_2} = \sqrt{\frac{\omega-1}{\omega+1}} < 1.$$

Obviously, we also have

$$\left|\frac{\omega-1}{\omega+1}\right| < 1,$$

and then

$$\rho(M_{\omega,\tau}) < 1, \quad \forall\, \omega > 0, \tau > 0,$$

i.e., the GPHSS iteration converges to the exact solution of the system of linear equations (10). $\Box$

Corollary 8. [9] Consider the system of linear equations (10), with $A \in \mathbb{R}^{m\times m}$ symmetric positive definite, $B \in \mathbb{R}^{m\times n}$ of full column rank, and $Q \in \mathbb{R}^{n\times n}$ symmetric positive definite. If $\omega = \alpha = \tau = \beta$, then

$$\rho(L(\alpha)) < 1, \quad \forall\, \alpha > 0, \qquad (50)$$

i.e., the PHSS iteration converges to the exact solution of the system of linear equations (10).

The optimal iteration parameters and the corresponding asymptotic convergence factor of the GPHSS iteration method are described in the following theorem.

Theorem 9. Consider the system of linear equations (10). Let $A \in \mathbb{R}^{m\times m}$ be symmetric positive definite, let $B \in \mathbb{R}^{m\times n}$ be of full column rank, assume that $Q \in \mathbb{R}^{n\times n}$ is symmetric positive definite, and let $\omega = \alpha$, $\tau = \beta$. Let $\sigma_k$ ($k = 1, 2, \ldots, n$) be the positive singular values of the matrix $\hat B = A^{-\frac12}BQ^{-\frac12}$, and set $\sigma_{\min} = \min_{1\le k\le n}\{\sigma_k\}$ and $\sigma_{\max} = \max_{1\le k\le n}\{\sigma_k\}$. Then, for the GPHSS iteration applied to (10), the optimal values of the iteration parameters $(\omega, \tau)$ are given by

$$\omega^* = \arg\min_{\omega,\tau}\rho(M_{\omega,\tau}) = \frac{\sigma_{\max} + \sigma_{\min}}{2\sqrt{\sigma_{\max}\sigma_{\min}}}, \quad \tau^* = \arg\min_{\omega,\tau}\rho(M_{\omega,\tau}) = \frac{2\sigma_{\max}\sigma_{\min}\sqrt{\sigma_{\max}\sigma_{\min}}}{\sigma_{\max} + \sigma_{\min}},$$

and, correspondingly,

$$\rho(M_{\omega^*,\tau^*}) = \frac{\sqrt{\sigma_{\max}} - \sqrt{\sigma_{\min}}}{\sqrt{\sigma_{\max}} + \sqrt{\sigma_{\min}}}.$$

Proof. According to Lemma 4, we know that

$$|\lambda| = \begin{cases} \left|\dfrac{\omega-1}{\omega+1}\right| \text{ or } \Theta(\omega,\tau,\sigma_k), & \text{for } \omega\tau + \sigma_k^2 > 2\omega\sqrt{\omega\tau}\,\sigma_k, \\[4pt] \left|\dfrac{\omega-1}{\omega+1}\right| \text{ or } \sqrt{\dfrac{\omega-1}{\omega+1}}, & \text{for } \omega\tau + \sigma_k^2 \le 2\omega\sqrt{\omega\tau}\,\sigma_k, \end{cases} \qquad (51)$$

for $k = 1, 2, \ldots, n$, where

$$\Theta(\omega,\tau,\sigma) = \frac{\omega|\omega\tau - \sigma^2|}{(\omega+1)(\omega\tau+\sigma^2)} + \frac{1}{\omega+1}\sqrt{1 - \frac{4\omega^3\tau\sigma^2}{(\omega\tau+\sigma^2)^2}}.$$

We observe that the following two facts hold true:

1. when $\omega \le 1$: $\omega\tau + \sigma_k^2 > 2\omega\sqrt{\omega\tau}\,\sigma_k$ for all $k = 1, 2, \ldots, n$;

2. when $\omega > 1$:
(a) $\omega\tau + \sigma_k^2 > 2\omega\sqrt{\omega\tau}\,\sigma_k$ if and only if $0 < \sigma_k < \alpha_-$ or $\sigma_k > \alpha_+$, $k \in \{1, 2, \ldots, n\}$;
(b) $\omega\tau + \sigma_k^2 \le 2\omega\sqrt{\omega\tau}\,\sigma_k$ if and only if $\alpha_- \le \sigma_k \le \alpha_+$, $k \in \{1, 2, \ldots, n\}$;
(c) $\dfrac{\omega-1}{\omega+1} < \sqrt{\dfrac{\omega-1}{\omega+1}}$,

where

$$\alpha_- = \omega\sqrt{\omega\tau} - \sqrt{\omega\tau(\omega^2-1)}, \quad \alpha_+ = \omega\sqrt{\omega\tau} + \sqrt{\omega\tau(\omega^2-1)}.$$

Then, based on facts 1 and 2, we easily see that

$$\rho(M_{\omega,\tau}) = \begin{cases} \max\left\{\dfrac{1-\omega}{1+\omega},\ \max\limits_{1\le k\le n}\Theta(\omega,\tau,\sigma_k)\right\}, & \text{for } \omega \le 1, \\[6pt] \max\left\{\sqrt{\dfrac{\omega-1}{\omega+1}},\ \max\limits_{\sigma_k<\alpha_-\ \text{or}\ \sigma_k>\alpha_+}\Theta(\omega,\tau,\sigma_k)\right\}, & \text{for } \omega > 1. \end{cases} \qquad (52)$$

For any fixed $\beta > 0$, we define two functions $\theta_1, \theta_2 : (0, +\infty) \to (0, +\infty)$ by

$$\theta_1(t) = \frac{\beta - t}{\beta + t}, \quad \theta_2(t) = \frac{4\beta t}{(\beta+t)^2}.$$

After straightforward computations we obtain

$$\theta_1'(t) = -\frac{2\beta}{(\beta+t)^2}, \quad \theta_2'(t) = \frac{4\beta(\beta-t)}{(\beta+t)^3}.$$

It then follows that:
i. $\max_{1\le k\le n}\Theta(\omega,\tau,\sigma_k) = \max\{\Theta(\omega,\tau,\sigma_{\min}),\ \Theta(\omega,\tau,\sigma_{\max})\}$, for $\omega \le 1$ and $\sigma_{\min}^2 \le \omega\tau \le \sigma_{\max}^2$;

ii. $\max_{\sigma_k<\alpha_-\ \text{or}\ \sigma_k>\alpha_+}\Theta(\omega,\tau,\sigma_k) = \max\{\Theta(\omega,\tau,\sigma_{\min}),\ \Theta(\omega,\tau,\sigma_{\max})\}$, for $\omega > 1$ and $\sigma_{\min} < \alpha_-$ or $\sigma_{\max} > \alpha_+$.

Therefore, when $\omega \le 1$, the optimal parameters $(\omega^*, \tau^*)$ must satisfy $\sigma_{\min}^2 \le \omega^*\tau^* \le \sigma_{\max}^2$ and either of the following three conditions:

1. $\dfrac{1-\omega}{1+\omega} = \Theta(\omega,\tau,\sigma_{\min}) \ge \Theta(\omega,\tau,\sigma_{\max})$;
2. $\dfrac{1-\omega}{1+\omega} = \Theta(\omega,\tau,\sigma_{\max}) \ge \Theta(\omega,\tau,\sigma_{\min})$;
3. $\Theta(\omega,\tau,\sigma_{\min}) = \Theta(\omega,\tau,\sigma_{\max}) \ge \dfrac{1-\omega}{1+\omega}$;

and when $\omega > 1$, the optimal parameters $(\omega^*, \tau^*)$ must satisfy $\sigma_{\min} < \alpha_-$ or $\sigma_{\max} > \alpha_+$, and either of the following three conditions:

a. $\sqrt{\dfrac{\omega-1}{\omega+1}} = \Theta(\omega,\tau,\sigma_{\min}) \ge \Theta(\omega,\tau,\sigma_{\max})$;
b. $\sqrt{\dfrac{\omega-1}{\omega+1}} = \Theta(\omega,\tau,\sigma_{\max}) \ge \Theta(\omega,\tau,\sigma_{\min})$;
c. $\Theta(\omega,\tau,\sigma_{\min}) = \Theta(\omega,\tau,\sigma_{\max}) \ge \sqrt{\dfrac{\omega-1}{\omega+1}}$;

where

$$\alpha_- = \omega^*\sqrt{\omega^*\tau^*} - \sqrt{\omega^*\tau^*\left((\omega^*)^2-1\right)}, \quad \alpha_+ = \omega^*\sqrt{\omega^*\tau^*} + \sqrt{\omega^*\tau^*\left((\omega^*)^2-1\right)}.$$

By straightforwardly solving the inequalities (1)-(3) and (a)-(c), we can obtain $\omega^*\tau^* = \sigma_{\min}\sigma_{\max}$, and we can further obtain the optimal parameters

$$\omega^* = \frac{\sigma_{\max} + \sigma_{\min}}{2\sqrt{\sigma_{\max}\sigma_{\min}}} \qquad (53)$$

and

$$\tau^* = \frac{2\sigma_{\max}\sigma_{\min}\sqrt{\sigma_{\max}\sigma_{\min}}}{\sigma_{\max} + \sigma_{\min}}. \qquad (54)$$

Then, by substituting $(\omega^*, \tau^*)$ into (52), we obtain

$$\rho(M_{\omega^*,\tau^*}) = \frac{\sqrt{\sigma_{\max}} - \sqrt{\sigma_{\min}}}{\sqrt{\sigma_{\max}} + \sqrt{\sigma_{\min}}}. \qquad (55) \qquad \Box$$

Corollary 10. [9] Consider the system of linear equations (10), under the conditions of Lemma 3. If $\omega = \alpha = \tau = \beta$ and $\sigma_k$ ($k = 1, 2, \ldots, n$) are the positive singular values of the matrix $\hat B = A^{-\frac12}BQ^{-\frac12}$, with $\sigma_{\min} = \min_{1\le k\le n}\{\sigma_k\}$ and $\sigma_{\max} = \max_{1\le k\le n}\{\sigma_k\}$, then for the PHSS iteration applied to (10) the optimal value of the iteration parameter $\alpha$ is given by

$$\alpha^* = \arg\min_\alpha \rho(M(\alpha)) = \sqrt{\sigma_{\min}\sigma_{\max}},$$

and, correspondingly,

$$\rho(M(\alpha^*)) = \frac{\sigma_{\max} - \sigma_{\min}}{\sigma_{\max} + \sigma_{\min}}. \qquad (56)$$

4 Numerical examples

In this section, we use a numerical example to further examine the effectiveness of the 4-GPHSS method and to show its advantages over the PHSS and GPHSS methods. The example is a system of purely algebraic equations discussed in [12]. The matrices $A$ and $B$ are defined as follows:

$$A = (a_{i,j}) \in \mathbb{R}^{m\times m}, \quad a_{i,j} = \begin{cases} i + 1, & i = j, \\ 1, & |i - j| = 1, \\ 0, & \text{otherwise}; \end{cases}$$

$$B = (b_{i,j}) \in \mathbb{R}^{m\times n}, \quad b_{i,j} = \begin{cases} j, & i = j + m - n, \\ 0, & \text{otherwise}. \end{cases}$$

We report the number of iterations (denoted by IT), the spectral radius (denoted by $\rho$), the time needed for convergence (denoted by CPU), and the final relative residual norm (denoted by RES), choosing $Q = B^T B$, for all of the SOR-like, 4-GPHSS, GPHSS and PHSS methods. The stopping criterion used in the computations is

$$\mathrm{RES} = \frac{\|r^{(k)}\|_2}{\|r^{(0)}\|_2} < 10^{-6}, \quad r^{(k)} = \begin{pmatrix} p \\ q \end{pmatrix} - \begin{pmatrix} A & B \\ B^T & 0 \end{pmatrix}\begin{pmatrix} x^{(k)} \\ y^{(k)} \end{pmatrix},$$

where $((x^{(k)})^T, (y^{(k)})^T)^T$ is the $k$th iterate of each method.
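Before turning to the tables, the following sketch (ours) builds the test matrices just defined, computes the extreme singular values of $\hat B = A^{-1/2}BQ^{-1/2}$ with $Q = B^TB$, and evaluates the optimal parameters of Corollary 10 (PHSS) and Theorem 9 (GPHSS); the printed values can be compared with the $\omega_{\mathrm{opt}}$, $\tau_{\mathrm{opt}}$ and $\rho$ entries of Tables 1 and 2.

```python
import numpy as np

def test_problem(m, n):
    # A: tridiagonal, a_ii = i + 1 (1-based indices), unit off-diagonals
    A = (np.diag(np.arange(2.0, m + 2))
         + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1))
    # B: b_ij = j when i = j + m - n, and 0 otherwise
    B = np.zeros((m, n))
    for j in range(1, n + 1):
        B[j + m - n - 1, j - 1] = j
    return A, B

m, n = 50, 40
A, B = test_problem(m, n)
Q = B.T @ B

w, V = np.linalg.eigh(A); A_m12 = V @ np.diag(w ** -0.5) @ V.T
w, V = np.linalg.eigh(Q); Q_m12 = V @ np.diag(w ** -0.5) @ V.T
sig = np.linalg.svd(A_m12 @ B @ Q_m12, compute_uv=False)
smin, smax = sig.min(), sig.max()

alpha_star = np.sqrt(smin * smax)                          # Corollary 10 (PHSS)
omega_star = (smax + smin) / (2 * np.sqrt(smax * smin))    # (53)
tau_star = 2 * smax * smin * np.sqrt(smax * smin) / (smax + smin)  # (54)
print(alpha_star, omega_star, tau_star)
print((smax - smin) / (smax + smin),                       # predicted rho, PHSS (56)
      (np.sqrt(smax) - np.sqrt(smin)) / (np.sqrt(smax) + np.sqrt(smin)))  # GPHSS (55)
```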
The optimal parameters for the SOR-like, PHSS and GPHSS methods were determined according to the theoretical results that give them explicitly; the parameters for the 4-GPHSS method were chosen by trial and error. All computations were performed on an Intel E2180 2.0 GHz CPU with 2.0 GB memory, under the Windows XP system, using Matlab 7.0.

From the numerical results below, we can see that the iteration counts and timings of the 4-GPHSS and GPHSS methods are smaller than those of the SOR-like and PHSS methods. From the IT and CPU rows, we see that the number of iterations and the time needed for convergence can be decreased by choosing four suitable parameters. However, we have only derived the two optimal parameters; further theoretical considerations regarding the determination of the four optimal parameters of the 4-GPHSS method, together with further numerical computations, are needed before any firm conclusions can be drawn. Work in this direction is underway.

Table 1: Iteration parameters for the PHSS method.
  m                50        200       400
  n                40        150       300
  m+n              90        350       700
  ω_opt            0.2037    0.0993    0.0705
  IT               99        210       306
  CPU (s)          0.2507    21.5841   311.6206
  ρ(M(ω_opt))      0.3653    0.3276    0.3320
  RES              9.01E-7   9.47E-7   9.90E-7

Table 2: Iteration parameters for the GPHSS method.
  m                  50        200      400
  n                  40        150      300
  m+n                90        350      700
  ω_opt              1.0742    1.0584   1.0601
  τ_opt              0.0386    0.0093   0.0047
  IT                 10        9        9
  CPU (s)            0.0492    1.1086   9.1771
  ρ(M(ω_opt,τ_opt))  0.1892    0.1685   0.1708
  RES                3.18E-7   7.52E-7  8.75E-7

Table 3: Iteration number for the SOR-like method.
  m     n     m+n   ω_opt    IT     CPU (s)
  50    40    90    1.8201   292    0.422
  200   150   350   1.9533   1032   32.75
  400   300   700   1.9759   2066   546.281

[Figure 1: spectral radius versus CPU time (s) for the 4-GPHSS and GPHSS methods with the optimal parameters.]

Table 4: Iteration parameters for the 4-GPHSS method.
  m                50        200       400
  n                40        150       300
  m+n              90        350       700
  ω                1.0742    1.0584    1.0601
  τ                0.0386    0.0093    0.0047
  α                1.08      1.064     1.064
  β                0.0384    0.00925   0.00468
  IT               9         9         9
  CPU (s)          0.0263    1.1026    9.1113
  ρ(M(ω,τ,α,β))    0.1838    0.1483    0.1695
  RES              9.63E-7   4.47E-7   5.07E-7

Table 5: Iteration parameters for the GPHSS method without the optimal parameters.
  m        50       200      400
  n        40       150      300
  m+n      90       350      700
  ω        1.2      1.2      1.2
  τ        0.2      0.2      0.05
  IT       55       103      102
  CPU (s)  0.144    13.843   110.092
  RES      8.89E-7  9.35E-7  9.82E-7

Table 6: Iteration parameters for the 4-GPHSS method without the optimal parameters.
  m        50       200      400
  n        40       150      300
  m+n      90       350      700
  ω        1.2      1.2      1.2
  τ        0.2      0.1      0.05
  α        2.6      4.4      4
  β        0.0923   0.0273   0.015
  IT       26       35       37
  CPU (s)  0.0812   4.376    40.355
  RES      9.59E-7  5.68E-7  7.81E-7
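As a reproducibility aid, the short driver below is our sketch: it assumes the gphss4 routine and the test_problem constructor from the earlier sketches are in scope, and manufactures a right-hand side from a known solution (the names x_true, y_true are ours). It runs 4-GPHSS with the first parameter column of Table 4.

```python
import numpy as np

# Assumes test_problem(...) and gphss4(...) from the sketches above are defined.
m, n = 50, 40
A, B = test_problem(m, n)
Q = B.T @ B
x_true, y_true = np.ones(m), np.ones(n)
p, q = A @ x_true + B @ y_true, B.T @ x_true   # right-hand side with known solution

x, y, it = gphss4(A, B, Q, p, q, om=1.0742, tau=0.0386, al=1.08, be=0.0384)
print(it, np.linalg.norm(x - x_true), np.linalg.norm(y - y_true))  # cf. Table 4, m = 50
```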
Acknowledgements: The research was supported by the National Research Subject of Education Information Technology (grant No. 126240641), the Research Topic of Higher Vocational Education in Zhejiang Province, P.R. China (grant No. YB1115), and the Science and Technology Project of Zhejiang Industry Polytechnic College, P.R. China (grant No. 1023401092012014).

References:

[1] M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta Numerica, vol. 14, 2005, pp. 1-137.

[2] J. H. Bramble, J. E. Pasciak and A. T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J. Numer. Anal., vol. 34, no. 3, 1997, pp. 1072-1092.

[3] G. H. Golub, X. Wu and J.-Y. Yuan, SOR-like methods for augmented systems, BIT, vol. 41, 2001, pp. 71-85.

[4] Z.-Z. Bai, B. N. Parlett and Z.-Q. Wang, On generalized successive overrelaxation methods for augmented linear systems, Numer. Math., vol. 102, 2005, pp. 1-38.

[5] J. C. Li and X. Kong, Optimum parameters of GSOR-like methods for the augmented systems, Applied Mathematics and Computation, vol. 204, no. 1, 2008, pp. 150-161.

[6] C. P. Pan and H. Y. Wang, The polynomial acceleration of GSOR iterative algorithms for large sparse saddle point problems, Chinese Journal of Engineering Mathematics, vol. 28, no. 3, 2011, pp. 307-314.

[7] C. P. Pan, H. Y. Wang and W. L. Zhao, On generalized SSOR iterative method for saddle point problems, Journal of Mathematics, vol. 31, no. 3, 2011, pp. 569-574.

[8] Z.-Z. Bai, G. H. Golub and M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM Journal on Matrix Analysis and Applications, vol. 24, 2003, pp. 603-626.

[9] Z.-Z. Bai, G. H. Golub and J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math., vol. 98, 2004, pp. 1-32.

[10] C. P. Pan and H. Y. Wang, On generalized preconditioned Hermitian and skew-Hermitian splitting methods for saddle point problems, Journal on Numerical Methods and Computer Applications, vol. 32, no. 3, 2011, pp. 174-182.

[11] D. M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.

[12] Q. Hu and J. Zou, An iterative method with variable relaxation parameters for saddle-point problems, SIAM Journal on Matrix Analysis and Applications, vol. 23, 2001, pp. 317-338.