Optimal Parameter in Hermitian and Skew-Hermitian Splitting Method for Certain Two-by-Two Block Matrices


Zhong-Zhi Bai [1][2]
Department of Mathematics, Fudan University, Shanghai, P.R. China
State Key Laboratory of Scientific/Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing, P.R. China
bzz@lsec.cc.ac.cn

Gene H. Golub [3]
Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, Stanford, CA, USA
golub@sccm.stanford.edu

Chi-Kwong Li [4]
Department of Mathematics, The College of William & Mary, P.O. Box 8795, Williamsburg, VA, USA
ckli@math.wm.edu

October 30

[1] Supported by The National Basic Research Program (No. 2005CB321702) and The National Natural Science Foundation, P.R. China, and by The 2004 Ky and Yu-Fen Fan Fund Travel Grant of the American Mathematical Society.
[2] Correspondence and permanent address.
[3] The work of this author was in part supported by the Department of Energy grant DE-FC02-01ER.
[4] The research of Li was partially supported by an NSF grant. He is an honorary professor of the Heilongjiang University and an honorary professor of the University of Hong Kong.

Abstract

The optimal parameter of the Hermitian/skew-Hermitian splitting (HSS) iteration method for a real 2-by-2 linear system is obtained. The result is used to determine the optimal parameters for linear systems associated with certain 2-by-2 block matrices, and to estimate the optimal parameters of the HSS iteration method for linear systems with n-by-n real coefficient matrices. Numerical examples are given to illustrate the results.

Keywords: Non-Hermitian matrix, Hermitian matrix, skew-Hermitian matrix, splitting iteration method, optimal iteration parameter.

AMS(MOS) Subject Classifications: 65F10, 65F50; CR: G1.3.

1 Introduction

To solve the large sparse non-Hermitian positive definite system of linear equations

  Ax = f,  A ∈ C^{n×n} positive definite, A ≠ A*, x, f ∈ C^n,  (1.1)

Bai, Golub and Ng [2] recently proposed the Hermitian/skew-Hermitian splitting (HSS) iteration method, based on the fact that the coefficient matrix A naturally possesses the Hermitian/skew-Hermitian (HS) splitting [10, 19]

  A = H + S,  where H = (1/2)(A + A*) and S = (1/2)(A − A*),

with A* being the conjugate transpose of the matrix A. They showed that this HSS iteration converges unconditionally to the exact solution of the system of linear equations (1.1), with the upper bound on the convergence speed about the same as that of the conjugate gradient method when applied to Hermitian matrices. Moreover, the upper bound of the contraction factor depends on the spectrum of the Hermitian part H, but is independent of the spectrum of the skew-Hermitian part S as well as the eigenvectors of the matrices H, S and A. Numerical experiments have shown that the HSS iteration method is very efficient and robust, both as a solver and as a preconditioner (for Krylov subspace methods such as GMRES and BiCGSTAB; see [15, 18]), for solving non-Hermitian positive definite linear systems. To further improve the efficiency of the method, it is desirable to determine, or find a good estimate for, the optimal parameter α. Unfortunately, there is no good method for doing that. In this paper, we analyze 2-by-2 real matrices in detail and obtain the optimal parameter α that minimizes the spectral radius of the iteration matrix of the corresponding HSS method. We then use the results to determine the optimal parameters for linear systems associated with certain 2-by-2 block matrices, and to estimate the optimal parameter α of the HSS method for the general n-by-n nonsymmetric positive definite system of linear equations (1.1).
Numerical examples are given to show that our estimates improve previous results and are close to the values of the optimal parameters. Unless specified otherwise, we assume throughout the paper that the non-Hermitian matrix A ∈ C^{n×n} is positive definite, i.e., A ≠ A* and its Hermitian part H = (1/2)(A + A*) is Hermitian positive definite.

2 The HSS Iteration

Let us first review the HSS iteration method presented in Bai, Golub and Ng [2].

The HSS Iteration Method. Given an initial guess x^(0) ∈ C^n, compute x^(k) for k = 0, 1, 2, ... using the following iteration scheme until {x^(k)} satisfies the stopping criterion:

  (αI + H) x^(k+1/2) = (αI − S) x^(k) + f,
  (αI + S) x^(k+1)   = (αI − H) x^(k+1/2) + f,

where α is a given positive constant. In matrix-vector form, the above HSS iteration method can be equivalently rewritten as

  x^(k+1) = M(α) x^(k) + G(α) f,  k = 0, 1, 2, ...,  (2.1)

where

  M(α) = (αI + S)^{−1} (αI − H) (αI + H)^{−1} (αI − S)  (2.2)

and

  G(α) = 2α (αI + S)^{−1} (αI + H)^{−1}.

Here, M(α) is the iteration matrix of the HSS method. In fact, (2.1) may also result from the splitting A = B(α) − C(α) of the coefficient matrix A, with

  B(α) = (1/(2α)) (αI + H)(αI + S),
  C(α) = (1/(2α)) (αI − H)(αI − S).

The following theorem, established in [2], describes the convergence property of the HSS iteration.

Theorem 2.1. Let A ∈ C^{n×n} be a positive definite matrix, let H = (1/2)(A + A*) and S = (1/2)(A − A*) be its Hermitian and skew-Hermitian parts, respectively, and let α be a positive constant. Then the spectral radius ρ(M(α)) of the iteration matrix M(α) of the HSS iteration (see (2.2)) is bounded by

  σ(α) = max_{λ_j ∈ λ(H)} |α − λ_j| / (α + λ_j),

where λ(·) represents the spectrum of the corresponding matrix. Consequently, we have

  ρ(M(α)) ≤ σ(α) < 1  for all α > 0,

i.e., the HSS iteration converges to the exact solution x ∈ C^n of the system of linear equations (1.1). Moreover, if γ_min and γ_max are the lower and the upper bounds of the eigenvalues of the matrix H, respectively, then

  ᾱ := arg min_α { max_{γ_min ≤ λ ≤ γ_max} |α − λ| / (α + λ) } = √(γ_min γ_max)

and

  σ(ᾱ) = (√γ_max − √γ_min) / (√γ_max + √γ_min) = (√κ(H) − 1) / (√κ(H) + 1),

where κ(H) is the spectral condition number of H.
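To make the two half-steps of the iteration concrete, the following Python sketch (illustrative only; the experiments reported later in this paper were run in MATLAB) applies the HSS scheme to a small 2-by-2 real system with H = diag(2, 1) and S = [[0, 1], [−1, 0]] — values chosen purely for illustration — and checks convergence to a known solution:

```python
# Minimal HSS iteration sketch on a 2-by-2 real system A = H + S,
# with illustrative values H = diag(2, 1), S = [[0, 1], [-1, 0]].

def solve2(M, b):
    """Solve the 2-by-2 linear system M x = b by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def matvec(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

def hss_step(H, S, f, x, alpha):
    # First half-step: (alpha I + H) x_half = (alpha I - S) x + f
    aIH = [[alpha + H[0][0], H[0][1]], [H[1][0], alpha + H[1][1]]]
    aImS = [[alpha - S[0][0], -S[0][1]], [-S[1][0], alpha - S[1][1]]]
    rhs = [r + fi for r, fi in zip(matvec(aImS, x), f)]
    x_half = solve2(aIH, rhs)
    # Second half-step: (alpha I + S) x_new = (alpha I - H) x_half + f
    aIS = [[alpha + S[0][0], S[0][1]], [S[1][0], alpha + S[1][1]]]
    aImH = [[alpha - H[0][0], -H[0][1]], [-H[1][0], alpha - H[1][1]]]
    rhs = [r + fi for r, fi in zip(matvec(aImH, x_half), f)]
    return solve2(aIS, rhs)

H = [[2.0, 0.0], [0.0, 1.0]]
S = [[0.0, 1.0], [-1.0, 0.0]]
A = [[2.0, 1.0], [-1.0, 1.0]]        # A = H + S
x_exact = [1.0, 1.0]
f = matvec(A, x_exact)               # right-hand side with known solution

x = [0.0, 0.0]
for _ in range(50):
    x = hss_step(H, S, f, x, alpha=1.0)
err = max(abs(x[0] - x_exact[0]), abs(x[1] - x_exact[1]))
print(err < 1e-10)  # True: the iteration converges for this alpha > 0
```

As Theorem 2.1 guarantees, any α > 0 would converge here; α = 1 happens to be particularly good for these illustrative values, as Section 3 explains.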

Of course, ᾱ is usually different from the optimal parameter

  α* = arg min_α ρ(M(α)),

and it always holds that ρ(M(α*)) ≤ σ(ᾱ). Numerical experiments in [2] have confirmed that in most situations ρ(M(α*)) is much smaller than σ(ᾱ). See [3, 12, 6, 16, 1, 13, 8, 14, 11, 7] for further applications and generalizations of the HSS iteration method.

3 The Real Two-By-Two Case

In this section, we study linear systems associated with a real 2-by-2 matrix A with positive definite symmetric part. We first determine the eigenvalues of M(α) defined in (2.2). The following theorem is stated in general terms so that it can be used more conveniently in the subsequent discussion.

Theorem 3.1. Let A = H + S ∈ R^{2×2} be such that H is symmetric positive definite and S is skew-symmetric. Suppose H has eigenvalues λ1 ≥ λ2 > 0 and det(S) = q² with q ∈ R. Then the two eigenvalues of the iteration matrix M(α) defined in (2.2) are

  λ± = [ (α² − λ1λ2)(α² − q²) ± √( (α² − λ1λ2)²(α² − q²)² − (α² − λ1²)(α² − λ2²)(α² + q²)² ) ] / [ (α + λ1)(α + λ2)(α² + q²) ].

As a result, if

  (α² − λ1λ2)²(α² − q²)² ≥ (α² − λ1²)(α² − λ2²)(α² + q²)²,

then ρ(M(α)) equals

  [ |α² − λ1λ2| |α² − q²| + √( (α² − λ1λ2)²(α² − q²)² − (α² − λ1²)(α² − λ2²)(α² + q²)² ) ] / [ (α + λ1)(α + λ2)(α² + q²) ];

if

  (α² − λ1λ2)²(α² − q²)² < (α² − λ1²)(α² − λ2²)(α² + q²)²,

then ρ(M(α)) equals

  √( (α − λ1)(α − λ2) / [ (α + λ1)(α + λ2) ] ).

Proof. Let A = H + S ∈ R^{2×2}, where H is symmetric positive definite and S is skew-symmetric. Then there is an orthogonal matrix Q ∈ R^{2×2} such that QᵗHQ is a diagonal matrix with diagonal entries λ1 and λ2, where Qᵗ denotes the transpose of the matrix Q. We may replace A by QᵗAQ without changing the assumptions and conclusions of our theorem. So, assume that

  H = [ λ1  0 ; 0  λ2 ]  and  S = [ 0  q ; −q  0 ]  with q ∈ R.

Then (αI + H)^{−1}(αI − H)(αI + S)^{−1}(αI − S) equals

  1/(α² + q²) · [ (α² − q²)(α − λ1)/(α + λ1)   −2qα(α − λ1)/(α + λ1) ;
                  2qα(α − λ2)/(α + λ2)          (α² − q²)(α − λ2)/(α + λ2) ].

The formula for λ± and the assertion on ρ(M(α)) follow.

One may want to use the formula of ρ(M(α)) in Theorem 3.1 to determine the optimal choice of α. It turns out that the analysis is very complicated and not productive. The main difficulty is the expression

  (α² − λ1λ2)²(α² − q²)² − (α² − λ1²)(α² − λ2²)(α² + q²)²  (3.1)

in the formula of ρ(M(α)). For example, one may see [5] for the analysis of a similar but simpler problem. Here, we use a different approach that allows us to avoid the complicated expression (3.1). For notational simplicity, we write ρ(α) = ρ(M(α)). Define

  φ(α) = { trace(M(α))/2 }² = { (α² − q²)(α² − λ1λ2) / [ (α² + q²)(α + λ1)(α + λ2) ] }²,

  ψ(α) = det(M(α)) = (α − λ1)(α − λ2) / [ (α + λ1)(α + λ2) ],

and

  ω(α) = max{ φ(α), |ψ(α)| }.

Evidently,

  ρ(α)² ≥ ω(α).

Moreover,

  1 = φ(0) = lim_{α→+∞} φ(α)  and  1 = ψ(0) = lim_{α→+∞} ψ(α).

Thus,

  lim_{α→+∞} ω(α) = ω(0) = 1 > ω(ξ)  for all ξ > 0.

Since ω(α) is continuous and nonnegative, there exists α* > 0 such that

  ω(α*) = min{ ω(α) | α > 0 }.

We will show that

  φ(α*) = |ψ(α*)|.  (3.2)

As a result, the eigenvalues of M(α*) have the same modulus, and thus

  ρ(α)² ≥ ω(α) ≥ ω(α*) = ρ(α*)²  for all α > 0.

By the above discussion, establishing (3.2) will lead to the following theorem.

Theorem 3.2. Let the assumptions of Theorem 3.1 be satisfied, and define the functions φ and ψ as above. Then the optimal α* > 0 satisfying ρ(M(α*)) = min{ρ(M(α)) | α > 0} lies in the finite set S = {α > 0 | φ(α) = |ψ(α)|}, which consists of the numbers α > 0 satisfying

  (α² + q²)²(α² − λ1²)(α² − λ2²) = (α² − q²)²(α² − λ1λ2)²  (3.3)

or

  (α² + q²)²(λ1² − α²)(α² − λ2²) = (α² − q²)²(α² − λ1λ2)².  (3.4)

Proof. We only need to demonstrate the validity of (3.2), i.e., φ(α*) = |ψ(α*)|. Note that φ(α) is continuously differentiable for α > 0, and |ψ(α)| is continuous for α > 0 and differentiable except at α = λ1 and λ2. Since ω(α) = max{φ(α), |ψ(α)|}, if α* satisfies ω(α*) = min{ω(α) | α > 0}, then one of the following holds:

(i) φ(α*) = |ψ(α*)|;
(ii) φ(α*) > |ψ(α*)| and φ(α*) is a local minimum of φ(α);
(iii) |ψ(α*)| > φ(α*) and |ψ(α*)| is a local minimum of |ψ(α)|.

First, we claim that (iii) cannot happen. To see this, note that

  ψ′(α) = 2(λ1 + λ2)(α² − λ1λ2) / [ (α + λ1)(α + λ2) ]²,

which is positive or negative according to α > √(λ1λ2) or α < √(λ1λ2), respectively, and ψ′(√(λ1λ2)) = 0. Because λ1 ≥ √(λ1λ2) ≥ λ2, we see that |ψ(α)| = ψ(α) is differentiable and decreasing on (0, λ2), and |ψ(α)| = ψ(α) is differentiable and increasing on (λ1, +∞). Thus, there cannot be α* in (0, λ2) ∪ (λ1, +∞) satisfying (iii). Furthermore, ψ(λj) = 0 for j = 1, 2; so, it is impossible to have α* = λj satisfying (iii). Finally, |ψ(α)| = −ψ(α) is differentiable on (λ2, λ1) with a local maximum at √(λ1λ2). Thus, there cannot be α* in (λ2, λ1) satisfying (iii).

Next, we show that (ii) cannot happen. The analysis is more involved. Instead of φ(α), we consider its square root

  F(α) = (α² − q²)(α² − λ1λ2) / [ (α² + q²)(α + λ1)(α + λ2) ] = ( 1 − 2q²/(α² + q²) ) ( 1 − λ1/(α + λ1) − λ2/(α + λ2) ).

Then

  F′(α) = 4q²α(α² − λ1λ2) / [ (α² + q²)²(α + λ1)(α + λ2) ] + [ (α² − q²)/(α² + q²) ] · [ λ1/(α + λ1)² + λ2/(α + λ2)² ]
        = (λ1 + λ2) P(α) / [ (α² + q²)²(α + λ1)²(α + λ2)² ],

where

  P(α) = α⁶ + [4(λ1λ2 + q²)/(λ1 + λ2)] α⁵ + (λ1λ2 + 4q²) α⁴ − q²(q² + 4λ1λ2) α² − [4q²λ1λ2(q² + λ1λ2)/(λ1 + λ2)] α − λ1λ2 q⁴.

Suppose q ≤ √(λ1λ2). Then F′(α) < 0 for α ∈ (0, q) and F′(α) > 0 for α ∈ (√(λ1λ2), +∞). So, F(α) is decreasing on (0, q) and increasing on [√(λ1λ2), +∞). Hence, φ(α) = F(α)² cannot have a local minimum in these two intervals. Thus, there cannot be an α* in these intervals satisfying (ii). Because φ(q) = φ(√(λ1λ2)) = 0, we cannot have α* = q or √(λ1λ2) satisfying (ii). Next, we claim that F(α) has only one critical point in (q, √(λ1λ2)), which is a local minimum. This point will be the unique critical point of φ(α) = F(α)² on (q, √(λ1λ2)), and it is a local maximum of φ. Thus, there cannot be an α* in this interval satisfying (ii).

To prove our claim, note that α > 0 is a critical point of F(α) if and only if α is a zero of P(α). So, it suffices to show that P(α) has only one positive zero. Now,

  P″(α) = 30α⁴ + [80(λ1λ2 + q²)/(λ1 + λ2)] α³ + 12(λ1λ2 + 4q²) α² − 2q²(q² + 4λ1λ2).

Since P″(0) < 0 and lim_{α→+∞} P″(α) = +∞, we see that P″(α) has a positive zero. However, P″(α) cannot have two or more positive zeros (counting multiplicity). Otherwise,

  P‴(α) = 120α³ + [240(λ1λ2 + q²)/(λ1 + λ2)] α² + 24(λ1λ2 + 4q²) α

would have a positive zero, which is impossible. Next, we argue that

  P′(α) = 6α⁵ + [20(λ1λ2 + q²)/(λ1 + λ2)] α⁴ + 4(λ1λ2 + 4q²) α³ − 2q²(q² + 4λ1λ2) α − 4q²λ1λ2(q² + λ1λ2)/(λ1 + λ2)

has exactly one positive zero. Since P′(0) < 0 and lim_{α→+∞} P′(α) = +∞, we see that P′(α) has a positive zero. Since P″(0) < 0 and P″(α) has only one positive zero α1, we see that P′(α) is decreasing on (0, α1) and increasing on (α1, +∞). So, P′(α) can have only one positive zero α2 > α1.
We can apply the same argument to P(α). Since P(0) < 0 and lim_{α→+∞} P(α) = +∞, we see that P(α) has a positive zero. Since P(0) < 0 and P′(α) has only one positive zero α2, we see that P(α) is decreasing on (0, α2) and increasing on (α2, +∞). So, P(α) can have only one positive zero α3 > α2, and our claim is proved. One can use a similar argument to obtain the desired conclusion if q > √(λ1λ2); we omit the discussion.
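The identity F′(α) = (λ1 + λ2)P(α)/[(α² + q²)²(α + λ1)²(α + λ2)²] can be spot-checked numerically with central differences; the sketch below uses the illustrative values λ1 = 3, λ2 = 1, q = 2 (chosen here for checking purposes only, not taken from the paper):

```python
# Finite-difference spot-check of F'(alpha) = (l1 + l2) P(alpha) / D(alpha),
# with D(alpha) = (alpha^2 + q^2)^2 (alpha + l1)^2 (alpha + l2)^2.
l1, l2, q = 3.0, 1.0, 2.0

def F(a):
    return ((a**2 - q**2) * (a**2 - l1 * l2)
            / ((a**2 + q**2) * (a + l1) * (a + l2)))

def P(a):
    return (a**6 + 4 * (l1 * l2 + q**2) / (l1 + l2) * a**5
            + (l1 * l2 + 4 * q**2) * a**4
            - q**2 * (q**2 + 4 * l1 * l2) * a**2
            - 4 * q**2 * l1 * l2 * (q**2 + l1 * l2) / (l1 + l2) * a
            - l1 * l2 * q**4)

ok = True
for i in range(1, 40):
    a, eps = 0.1 * i, 1e-6
    fd = (F(a + eps) - F(a - eps)) / (2 * eps)   # central difference
    exact = (l1 + l2) * P(a) / ((a**2 + q**2)**2 * (a + l1)**2 * (a + l2)**2)
    ok = ok and abs(fd - exact) < 1e-4 * max(1.0, abs(exact))
print(ok)  # True
```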

Remark 3.3. Using the absolute value function, we can combine (3.3) and (3.4) into the single equation

  (α² + q²)² |(α² − λ1²)(α² − λ2²)| = [ (α² − q²)(α² − λ1λ2) ]².

Nonetheless, the polynomial equations are easier to solve and use. In fact, if λ1 = λ2 = λ, then (3.3) and (3.4) have only one positive solution, namely α = λ. Suppose λ1 > λ2. If we use the substitution β = α², then (3.3) reduces to the quadratic equation

  [ (λ1 − λ2)² − 4q² ] β² + 2q²(λ1 + λ2)² β + q² [ q²(λ1 − λ2)² − 4λ1²λ2² ] = 0,

which has the solutions

  β = q(2λ1λ2 − qλ1 + qλ2) / (λ1 − λ2 + 2q)  and  β = −q(2λ1λ2 + qλ1 − qλ2) / (λ1 − λ2 − 2q)

if λ1 − λ2 ≠ 2q. Otherwise, the equation is linear and has the solution

  β = −2(q⁴ − λ1²λ2²)/(λ1 + λ2)² = −2(q² + λ1λ2)(q² − λ1λ2)/(λ1 + λ2)² = −(λ1 + λ2)²(q² − λ1λ2)/[2(λ1 + λ2)²] = −(1/2)(q² − λ1λ2).

Of course, these solutions are useful only if they are positive and lie outside the interval [λ2², λ1²]. Similarly, by the substitution β = α², (3.4) reduces to the quartic equation

  2β⁴ − (λ1 + λ2)²β³ + 2[ λ1²λ2² − q²(λ1 − λ2)² + q⁴ ]β² − q⁴(λ1 + λ2)²β + 2q⁴λ1²λ2² = 0,  (3.5)

which has exactly two solutions µ1 and µ2 with µ1 ∈ [λ2², λ1λ2) and µ2 ∈ ((λ1² + λ2²)/2, λ1²]. Furthermore, if q² ∈ [λ2², λ1²], then µ1 ≤ q² and µ2 ≥ q². In particular, if q = λi for i ∈ {1, 2}, then β = λi² is a solution. The verification of the above statements will be given in the last section.

As an illustration of Theorems 3.1 and 3.2 as well as Remark 3.3, we consider a simple example for which λ1 = 2, λ2 = 1 and q = 1. By straightforward computations, we know that the only positive roots of the equation (3.3) are α1* = 1 and α2* = √5, and those of the equation (3.4) are α3* = 1 and α4* ≈ 1.914, where (α4*)² is the unique positive root of the cubic 2β³ − 7β² + β − 8 = 0. We remark that for these values equation (3.4) is equivalent to (β − 1)(2β³ − 7β² + β − 8) = 0.
Based on Theorem 3.1, from direct calculations we have

  ρ(M(α1*)) = ρ(M(α3*)) = 0,  ρ(M(α2*)) = (7 − 3√5)/2 ≈ 0.146,  ρ(M(α4*)) ≈ 0.201.

Therefore, for the HSS iteration method, the optimal parameter is α* = 1 and the corresponding optimal convergence factor is ρ(M(α*)) = 0. On the other hand, from Theorem 2.1 we can easily obtain ᾱ = √2 ≈ 1.414 and σ(ᾱ) = (√2 − 1)/(√2 + 1) ≈ 0.172. Obviously, it holds that ρ(M(α*)) < σ(ᾱ).
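The numbers in this example can be reproduced with a few lines of Python (a sketch only; the eigenvalues of M(α) are evaluated via the trace and determinant given by Theorem 3.1):

```python
# Reproduce the worked example lambda1 = 2, lambda2 = 1, q = 1:
# alpha* = 1 gives spectral radius 0, alpha = sqrt(5) gives (7 - 3*sqrt(5))/2.
import cmath, math

l1, l2, q = 2.0, 1.0, 1.0

def rho(alpha):
    # trace/2 and determinant of M(alpha), per Theorem 3.1
    half_tr = ((alpha**2 - q**2) * (alpha**2 - l1 * l2)
               / ((alpha**2 + q**2) * (alpha + l1) * (alpha + l2)))
    det = (alpha - l1) * (alpha - l2) / ((alpha + l1) * (alpha + l2))
    disc = cmath.sqrt(half_tr**2 - det)
    return max(abs(half_tr + disc), abs(half_tr - disc))

# (3.3) reduces here to -3(beta - 1)(beta - 5) = 0, so alpha = 1 or sqrt(5).
print(abs(rho(1.0)) < 1e-9)                                            # True
print(abs(rho(math.sqrt(5.0)) - (7 - 3 * math.sqrt(5)) / 2) < 1e-6)    # True
```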

Remark 3.4. Note that in our proof of Theorem 3.2, we obtain much information about the function ω(α) = max{φ(α), |ψ(α)|} for α > 0. In particular, we show that all the local minima of ω(α) satisfy φ(α) = |ψ(α)|, and that they are roots of (3.3) and (3.4). Moreover, the local minima of ω(α) are also local minima of ρ(α), and the global minima of ρ(α) and ω(α) occur at the same α. If one could prove independently that the local minima of ρ(α) always occur when the eigenvalues of M(α) have the same magnitude, then one could conclude that the functions ω(α) and ρ(α) have the same local and global minima. Once again, this is difficult to do because of the expression (3.1) in ρ(α) = ρ(M(α)).

Using the notation defined in Theorem 3.1, we immediately get the following conclusion from Theorem 3.2.

Corollary 3.5. Let the assumptions of Theorem 3.1 be satisfied. Then the optimal α* > 0 satisfying ρ(M(α*)) = min{ρ(M(α)) | α > 0} is a positive root of the equation

  (α² + q²)²(α² − λ1²)(α² − λ2²) = (α² − q²)²(α² − λ1λ2)²

or

  (α² + q²)²(λ1² − α²)(α² − λ2²) = (α² − q²)²(α² − λ1λ2)².

Remark 3.6. The first equation in Corollary 3.5 can be reduced to a quadratic equation and the second one to a quartic equation with respect to β = α², analogously to those in Remark 3.3.

Remark 3.7. Note that the results in this section remain valid if λ1 > 0 = λ2 and q ≠ 0.

Remark 3.8. We should point out again that Theorem 3.2 has been established only for real matrices. For complex matrices, how to determine the optimal iteration parameter of the HSS iteration method is still an open problem.

4 Two-By-Two Block Matrices

In this section, we determine the optimal parameter α for a 2-by-2 block matrix of the form

  A = [ λ1 I_r  E ; −E*  λ2 I_s ],  (4.1)

where λ1 > λ2 > 0. Note that the case λ1 = λ2 is trivial.¹
Systems of linear equations with the 2-by-2 block matrix (4.1) arise in many applications; for details we

¹ If λ1 = λ2 = λ, then the iteration matrix of the HSS iteration method is given by M(α) = [(α − λ)/(α + λ)] Q(α), where Q(α) := (αI + S)^{−1}(αI − S), with S = [ 0  E ; −E*  0 ], is a Cayley transform of S and is, thus, unitary; see (2.2). Obviously, it holds that ρ(M(α)) = |α − λ|/(α + λ). Therefore, for the HSS iteration method applied to this special linear system, the optimal parameter is α* = λ and the corresponding optimal convergence factor is ρ(M(α*)) = 0.

refer the readers to [17, Chapter 6], [9, Chapters 4, 5 and 10], [20, 10, 4, 5] and the references therein. According to Young [20] and Varga [17], the matrix A in (4.1) is called a 2-cyclic matrix and is connected with property A.

Theorem 4.1. Suppose that the matrix A ∈ C^{n×n} defined in (4.1) satisfies λ1 > λ2 > 0, and the nonzero matrix E ∈ C^{r×s} has nonzero singular values q1 ≥ q2 ≥ ··· ≥ qk. Let

  H = (1/2)(A + A*) = [ λ1 I_r  0 ; 0  λ2 I_s ]  and  S = (1/2)(A − A*) = [ 0  E ; −E*  0 ]

be the Hermitian and the skew-Hermitian parts of the matrix A, respectively. Then, for the correspondingly induced HSS iteration method, the spectral radius ρ(M(α)) of its iteration matrix M(α) (see (2.2)) attains its minimum at α*, which is a root of one of the following equations:

  (α − √(λ1λ2))(α − √(q1 qk)) = 0,

or

  (α² + qj²)²(α² − λ1²)(α² − λ2²) = (α² − qj²)²(α² − λ1λ2)²

or

  (α² + qj²)²(λ1² − α²)(α² − λ2²) = (α² − qj²)²(α² − λ1λ2)²,

where j = 1, k.

Proof. Suppose A satisfies the hypotheses of the theorem. Then A is unitarily similar to A1 ⊕ ··· ⊕ Ak ⊕ λ1 I_u ⊕ λ2 I_v, where ⊕ denotes the matrix direct sum,

  Aj = [ λ1  qj ; −qj  λ2 ],

u = r − k and v = s − k. For α > 0, define ρ(α) = ρ(M(α)) and ρj(α) = ρ(Mj(α)), where M(α) is unitarily similar to

  M1(α) ⊕ ··· ⊕ Mk(α) ⊕ [(α − λ1)/(α + λ1)] I_u ⊕ [(α − λ2)/(α + λ2)] I_v,

with Mj(α) being the HSS iteration matrix associated with Aj for j = 1, 2, ..., k. Furthermore, define

  f_j(α) = |α − λj| / (α + λj)  for j = 1, 2.

Consider four cases.

Case 1. If r > k and s > k, then f1(α) and f2(α) are eigenvalues of M(α), and max{f1(α), f2(α)} is the largest singular value of M(α). Thus,

  ρ(α) = max{ f1(α), f2(α) }

and

  ρ(α*) = ρ(√(λ1λ2)) = (√λ1 − √λ2)/(√λ1 + √λ2).

Case 2. Suppose r > k and s = k. Let ρ0(α) = f1(α). Then ρ(α) = max{ρj(α) | 0 ≤ j ≤ k}. For α ∈ (0, √(λ1λ2)], ρ(α) = ρ0(α) = f1(α)

is the largest singular value of M(α). If α ≥ (λ1³/λ2)^{1/2}, then

  f1(α) = (α − λ1)/(α + λ1) = 1 − 2λ1/(α + λ1) ≥ 1 − 2λ1/( (λ1³/λ2)^{1/2} + λ1 ) = ρ(√(λ1λ2)).

Thus, ρ(α) ≥ ρ(√(λ1λ2)) if α ∈ (0, (λ1λ2)^{1/2}] ∪ [(λ1³/λ2)^{1/2}, +∞). So, we focus on the interval

  J = ( (λ1λ2)^{1/2}, (λ1³/λ2)^{1/2} ).

For α ∈ J, let

  φj(α) = { trace(Mj(α))/2 }² = { (α² − qj²)(α² − λ1λ2) / [ (α² + qj²)(α + λ1)(α + λ2) ] }²

and

  ψ(α) = det(Mj(α)) = (α − λ1)(α − λ2) / [ (α + λ1)(α + λ2) ],  j = 1, 2, ..., k.

Clearly, ψ(α) is independent of the index j. By Theorem 3.1, if φj(α) ≥ ψ(α), then

  ρj(α) = √φj(α) + √(φj(α) − ψ(α));

otherwise,

  ρj(α) = √( |(α − λ1)(α − λ2)| / [ (α + λ1)(α + λ2) ] ).

We claim that

  max{ ρj(α) : 1 ≤ j ≤ k } = max{ ρ1(α), ρk(α) }  for all α ∈ J.

Suppose 1 < j < k. If φj(α) < ψ(α), then

  ρj(α) = √( |(α − λ1)(α − λ2)| / [ (α + λ1)(α + λ2) ] )

and min{φ1(α), φk(α)} ≤ φj(α) < ψ(α). It follows that ρ1(α) = ρj(α) or ρk(α) = ρj(α). If φj(α) ≥ ψ(α) and α² ≥ qj², then 0 ≤ φj(α) ≤ φk(α), and hence

  ρj(α) = √φj(α) + √(φj(α) − ψ(α)) ≤ √φk(α) + √(φk(α) − ψ(α)) = ρk(α);

if φj(α) ≥ ψ(α) and α² < qj², then 0 < φj(α) ≤ φ1(α), and hence

  ρj(α) = √φj(α) + √(φj(α) − ψ(α)) ≤ √φ1(α) + √(φ1(α) − ψ(α)) = ρ1(α).

Thus, our claim is proved. For each α ∈ J, we have |α − λ1|/(α + λ1) ≤ |α − λ2|/(α + λ2), and hence

  ρ0(α) = |α − λ1|/(α + λ1) ≤ √( |(α − λ1)(α − λ2)| / [ (α + λ1)(α + λ2) ] ) ≤ max{ ρ1(α), ρk(α) }.

Combining the above, we see that

  ρ(α) = max{ ρ1(α), ρk(α) }  for all α ∈ J.

Let

  Ω(α) = max{ φ1(α), φk(α), |ψ(α)| }.

Then

  Ω(α) ≤ max{ ρ1(α)², ρk(α)² }.

If α* ∈ J is such that Ω(α*) ≤ Ω(α) for all α ∈ J, then one of the following holds:

(1) φ1(α*) = φk(α*) = |ψ(α*)|;
(2.a) φ1(α*) = φk(α*) > |ψ(α*)|, and α* is a local minimum of the function max{φ1(α), φk(α)};
(2.b) φ1(α*) = |ψ(α*)| > φk(α*), and α* is a local minimum of the function max{φ1(α), |ψ(α)|};
(2.c) φk(α*) = |ψ(α*)| > φ1(α*), and α* is a local minimum of the function max{φk(α), |ψ(α)|};
(3.a) max{φ1(α*), φk(α*)} < |ψ(α*)|, and α* is a local minimum of the function |ψ(α)|;
(3.b) max{φ1(α*), |ψ(α*)|} < φk(α*), and α* is a local minimum of the function φk(α);
(3.c) max{φk(α*), |ψ(α*)|} < φ1(α*), and α* is a local minimum of the function φ1(α).

By the proof of Theorem 3.2, we see that the function |ψ(α)| has a differentiable local maximum at √(λ1λ2) and two non-differentiable local minima at λ2 and λ1, where ψ(λ1) = ψ(λ2) = 0. Thus, condition (3.a) cannot hold. Similarly, for j = 1, k, the proof of Theorem 3.2 shows that the function φj(α) has one local maximum and two local minima at qj and √(λ1λ2), where φj(qj) = φj(√(λ1λ2)) = 0. Thus, none of the conditions (3.b) and (3.c) holds.

Now suppose that (1) or (2.a) holds. Then φ1(α*) = φk(α*) implies that α* = √(q1 qk). In both cases, we have

  ρ1(α*) = √φ1(α*) + √(φ1(α*) − ψ(α*)) = √φk(α*) + √(φk(α*) − ψ(α*)) = ρk(α*)

and Ω(α*) = max{ρ1(α*)², ρk(α*)²}. Suppose that (2.b) holds. If φk(α*) ≥ ψ(α*), then

  ρk(α*) = √φk(α*) + √(φk(α*) − ψ(α*)) < √φ1(α*) + √(φ1(α*) − ψ(α*)) = ρ1(α*);

otherwise,

  ρk(α*) = √( |(α* − λ1)(α* − λ2)| / [ (α* + λ1)(α* + λ2) ] ) ≤ ρ1(α*).

Thus, Ω(α*) = ρ1(α*)² = max{ρ1(α*)², ρk(α*)²}. Moreover, since α* is a local minimum of the function max{φ1(α), |ψ(α)|}, by Remark 3.4 we see that α* is a root of the equation

  (α² + q1²)²(α² − λ1²)(α² − λ2²) = (α² − q1²)²(α² − λ1λ2)²

or

  (α² + q1²)²(λ1² − α²)(α² − λ2²) = (α² − q1²)²(α² − λ1λ2)².

Suppose that (2.c) holds. We can use an argument similar to the case of (2.b) to conclude that Ω(α*) = ρk(α*)² = max{ρ1(α*)², ρk(α*)²} and that α* is a root of the equation

  (α² + qk²)²(α² − λ1²)(α² − λ2²) = (α² − qk²)²(α² − λ1λ2)²

or

  (α² + qk²)²(λ1² − α²)(α² − λ2²) = (α² − qk²)²(α² − λ1λ2)².

Note that in all the cases (1), (2.a), (2.b) and (2.c), we have Ω(α*) = max{ρ1(α*)², ρk(α*)²}. Consequently, if α* ∈ J yields the smallest Ω(α), then

  ρ(α)² = max{ρ1(α)², ρk(α)²} ≥ Ω(α) ≥ Ω(α*) = max{ρ1(α*)², ρk(α*)²} = ρ(α*)².

So, we only need to consider those α satisfying the specified equations in the theorem to determine the optimal parameter α*.

Case 3. Suppose r = k and s > k. Let ρ0(α) = f2(α). Then ρ(α) = max{ρj(α) | 0 ≤ j ≤ k}. Similarly to the proof of Case 2, we can show that ρ(α) ≥ ρ(√(λ1λ2)) if α is positive and lies outside the interval

  J = ( (λ2³/λ1)^{1/2}, (λ1λ2)^{1/2} ).

So, we can focus on the interval J. For α ∈ J, we have ρ(α) = max{ρ1(α), ρk(α)}. Note that in this case, for each α ∈ J, we have |α − λ2|/(α + λ2) ≤ |α − λ1|/(α + λ1),

and hence

  ρ0(α) = |α − λ2|/(α + λ2) ≤ √( |(α − λ1)(α − λ2)| / [ (α + λ1)(α + λ2) ] ) ≤ max{ ρ1(α), ρk(α) }.

Finally, if α* ∈ J is such that ρ(α*) < ρ(√(λ1λ2)), we can show that α* must satisfy one of the three specified equations, using an argument similar to the one in Case 2.

Case 4. Suppose r = s = k. Then ρ(α) = max{ρj(α) | 1 ≤ j ≤ k}. Note that if α is positive and lies outside the interval

  J = ( (λ2³/λ1)^{1/2}, (λ1³/λ2)^{1/2} ),

then each of the singular values of M(α), which is equal to |α − λ1|/(α + λ1) or |α − λ2|/(α + λ2), has magnitude larger than ρ(√(λ1λ2)). Thus

  ρ(α) ≥ |det(M(α))|^{1/n} ≥ ρ(√(λ1λ2)).

So, we can focus on those α ∈ J. For each α ∈ J, we can show that ρ(α) = max{ρ1(α), ρk(α)}. Moreover, if α* ∈ J is such that ρ(α*) < ρ(√(λ1λ2)), we can show that α* must satisfy one of the three specified equations.

Remark 4.2. The second group of equations in Theorem 4.1 can be reduced to a group of quadratic equations and the last one to a group of quartic equations with respect to β = α², analogously to those in Remark 3.3.

As an illustration of Theorem 4.1 and Remark 4.2, we consider a simple example for which λ1 = 2, λ2 = 1, q1 = 1 and qk = 2. Obviously, α0* = √2 is the root of the first equation in Theorem 4.1, since √(λ1λ2) = √(q1 qk) = √2. In addition, by straightforward computations, we know that the only positive roots of the group of equations with respect to q1 = 1 in Theorem 4.1 are

  α1* = 1,  α2* = √5  and  α3* ≈ 1.914,

and those with respect to qk = 2 are

  α4* = 2,  α5* = 2/√5 ≈ 0.894  and  α6* ≈ 1.043.
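For instance, the candidate α4* = 2 can be confirmed by expanding the factorization of the reduced quartic (3.5) for q = 2; the following Python sketch checks the polynomial coefficients directly:

```python
# Check that, for lambda1 = 2, lambda2 = 1 and q = 2, the reduced quartic (3.5)
#   2 b^4 - (l1+l2)^2 b^3 + 2[l1^2 l2^2 - q^2 (l1-l2)^2 + q^4] b^2
#     - q^4 (l1+l2)^2 b + 2 q^4 l1^2 l2^2
# factors as (b - 4)(2 b^3 - b^2 + 28 b - 32), so beta = 4 (alpha = 2) is a root.
l1, l2, q = 2, 1, 2
coeffs = [2,
          -(l1 + l2)**2,
          2 * (l1**2 * l2**2 - q**2 * (l1 - l2)**2 + q**4),
          -q**4 * (l1 + l2)**2,
          2 * q**4 * l1**2 * l2**2]

# Multiply out (2 b^3 - b^2 + 28 b - 32)(b - 4).
p, r = [2, -1, 28, -32], [1, -4]
prod = [0] * 5
for i, pi in enumerate(p):
    for j, rj in enumerate(r):
        prod[i + j] += pi * rj
print(coeffs == prod)  # True
```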

We remark that the second equation in the group with respect to q1 = 1 is equivalent to

  (β − 1)(2β³ − 7β² + β − 8) = 0,

and the second equation in the group with respect to qk = 2 is equivalent to

  (β − 4)(2β³ − β² + 28β − 32) = 0.

Based on Theorem 3.1, from direct calculations we have

  ρ(M(α0*)) ≈ 0.172,  ρ(M(α1*)) = 0,  ρ(M(α2*)) = (7 − 3√5)/2 ≈ 0.146,  ρ(M(α3*)) ≈ 0.201,
  ρ(M(α4*)) = 0,  ρ(M(α5*)) ≈ 0.146,  ρ(M(α6*)) ≈ 0.197.

Therefore, for the HSS iteration method, the optimal parameter is α* = 1 or α* = 2 and the corresponding optimal convergence factor is ρ(M(α*)) = 0. On the other hand, from Theorem 2.1 we can easily obtain ᾱ = √2 ≈ 1.414 and σ(ᾱ) ≈ 0.172. Obviously, it holds that ρ(M(α*)) < σ(ᾱ).

Remark 4.3. Our proof techniques can be used to handle the case λ2 = 0, which also occurs in applications; see [4] and its references. In this case, we may normalize λ1 = 1, and we always assume that s = k to ensure that A is invertible. In such a case, we can use the analysis of Case 2 and Case 4 in our proof of Theorem 4.1. Note that in this case, we have

  ψ(α) = (α − 1)/(α + 1)  and  φj(α) = { (α² − qj²) α / [ (α² + qj²)(α + 1) ] }²  for j = 1, k.

If q1 = qk, then A is unitarily similar to A1 ⊕ ··· ⊕ Ak ⊕ I_u with

  Aj = [ 1  q1 ; −q1  0 ],  j = 1, 2, ..., k.

So, the analysis reduces to the 2-by-2 case, and Theorem 3.2 applies. Suppose q1 > qk > 0. Then the optimal value α* can be easily determined by checking whether ρ1(α) and ρk(α) intersect at a point α* such that

  φ1(α*) − ψ(α*) = φk(α*) − ψ(α*) ≥ 0.

This happens if and only if q1 qk ≤ (1/2)(q1 + qk). In this case, α* = √(q1 qk) is the optimal parameter, and

  ρ(M(α*)) = [ (q1 − qk)√(q1 qk) + √( (q1 + qk)² − 4 q1² qk² ) ] / [ (√(q1 qk) + 1)(q1 + qk) ].

Otherwise, α* = q1/√(2q1 − 1) is the optimal parameter, with φ1(α*) = ψ(α*) > φk(α*), and

  ρ(M(α*)) = (q1 − 1) / (q1 + √(2q1 − 1)).

Note that the second case rarely appears in applications, and its discussion was not included in [4]. Also, note that there were some typos in the formula of ρ(M(α*)) for the first case in [4].

5 Estimation of Optimal Parameters for n-by-n Matrices and Numerical Examples

In general, for a nonsymmetric positive definite system of linear equations (1.1), the eigenvalues of the coefficient matrix A are evidently contained in the complex domain

  D_A := [λ_min, λ_max] + ı[−q, q],

where ı is the imaginary unit, λ_min and λ_max are, respectively, the smallest and the largest eigenvalues of the Hermitian part H, and q is the largest modulus of the eigenvalues of the skew-Hermitian part S of the coefficient matrix A. If a reduced (simpler and lower-dimensional) matrix A_R whose eigenvalues possess the same contour as the domain D_A is used to approximate the matrix A, then we may expect that the main mathematical and numerical properties of the HSS iteration method for the original linear system with the coefficient matrix A can be roughly preserved by the HSS iteration method applied to the linear system with the reduced coefficient matrix A_R. A simple choice of the reduced matrix is given by

  A_R = [ λ_max  q ; −q  λ_min ]

with q = ‖S‖ or q = ρ(H^{−1}S) √(λ_min λ_max). We can then use Theorem 3.2 and Corollary 3.5 to estimate the optimal parameter α* of the HSS iteration method as follows.

Estimation 5.1. Let A ∈ R^{n×n} be a positive definite matrix, and let H, S ∈ R^{n×n} be its symmetric and skew-symmetric parts, respectively. Let λ_min = min{λ | λ ∈ λ(H)} and λ_max = max{λ | λ ∈ λ(H)}, and let q = ‖S‖ or q = ρ(H^{−1}S) √(λ_min λ_max).
Then one can use the positive roots of the equation

  (α² + q²)²(α² − λ_max²)(α² − λ_min²) = (α² − q²)²(α² − λ_min λ_max)²

or

  (α² + q²)²(λ_max² − α²)(α² − λ_min²) = (α² − q²)²(α² − λ_min λ_max)²

to estimate the optimal parameter α* > 0 satisfying ρ(M(α*)) = min{ρ(M(α)) | α > 0}. Here, M(α) is the iteration matrix of the HSS iteration method; see (2.2).
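A possible implementation of Estimation 5.1 is sketched below in Python. It locates the positive roots of the two equations by a simple grid-plus-bisection search (a pure-Python stand-in for a proper polynomial solver, used here only to keep the sketch self-contained) and returns the candidate minimizing the spectral radius of the 2-by-2 model matrix A_R:

```python
# Sketch of Estimation 5.1: collect positive roots alpha of the two
# equations, then pick the one minimizing rho(M(alpha)) for the 2-by-2 model.
import cmath

def rho2x2(alpha, l1, l2, q):
    # Spectral radius of M(alpha) via trace and determinant (Theorem 3.1).
    half_tr = ((alpha**2 - q**2) * (alpha**2 - l1 * l2)
               / ((alpha**2 + q**2) * (alpha + l1) * (alpha + l2)))
    det = (alpha - l1) * (alpha - l2) / ((alpha + l1) * (alpha + l2))
    disc = cmath.sqrt(half_tr**2 - det)
    return max(abs(half_tr + disc), abs(half_tr - disc))

def estimate_alpha(l_min, l_max, q, grid=4000):
    def h1(a):  # first equation, written as LHS - RHS
        return ((a**2 + q**2)**2 * (a**2 - l_max**2) * (a**2 - l_min**2)
                - (a**2 - q**2)**2 * (a**2 - l_min * l_max)**2)
    def h2(a):  # second equation, written as LHS - RHS
        return ((a**2 + q**2)**2 * (l_max**2 - a**2) * (a**2 - l_min**2)
                - (a**2 - q**2)**2 * (a**2 - l_min * l_max)**2)
    top = 2.0 * max(l_max, q)
    cands = []
    for h in (h1, h2):
        xs = [top * i / grid for i in range(1, grid + 1)]
        for x0, x1 in zip(xs, xs[1:]):
            if h(x0) == 0.0 or h(x0) * h(x1) < 0.0:
                lo, hi = x0, x1
                for _ in range(80):          # bisection refinement
                    mid = 0.5 * (lo + hi)
                    if h(lo) * h(mid) <= 0.0:
                        hi = mid
                    else:
                        lo = mid
                cands.append(0.5 * (lo + hi))
    return min(cands, key=lambda a: rho2x2(a, l_max, l_min, q))

# With the illustrative values of the Section 3 example (lambda_min = 1,
# lambda_max = 2, q = 1) the estimate recovers the optimal alpha* = 1.
print(abs(estimate_alpha(1.0, 2.0, 1.0) - 1.0) < 1e-6)  # True
```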

We first illustrate our estimates with the following example of a general nonsymmetric positive definite system of linear equations; see [2]. Consider the two-dimensional convection-diffusion equation

  −(u_xx + u_yy) + δ(u_x + u_y) = g(x, y)

on the unit square [0, 1] × [0, 1], with constant coefficient δ, subject to Dirichlet-type boundary conditions. When the five-point centered finite difference discretization is applied to it, we get the system of linear equations (1.1) with the coefficient matrix

  A = T ⊗ I + I ⊗ T,  (5.1)

where the equidistant step size h = 1/(m+1) is used in the discretization in both directions and the natural lexicographic ordering is employed for the unknowns. In addition, ⊗ denotes the Kronecker product, and T is the tridiagonal matrix

  T = tridiag(−1 − R_e, 2, −1 + R_e),

where R_e = δh/2 is the mesh Reynolds number. We remark that here the first-order derivatives are also approximated by the centered difference scheme. From Lemma A.1 in [2, Appendix], we know that

  γ_min ≡ λ_min = min_{1≤j,k≤m} λ_{j,k}(H) = 4(1 − cos(πh)),
  γ_max ≡ λ_max = max_{1≤j,k≤m} λ_{j,k}(H) = 4(1 + cos(πh)),

and

  min_{1≤j,k≤m} |λ_{j,k}(S)| = 0,  max_{1≤j,k≤m} |λ_{j,k}(S)| = 4 R_e cos(πh).

Therefore, the parameter ᾱ that minimizes the upper bound σ(α) of the convergence factor ρ(M(α)) of the HSS iteration method is given by

  ᾱ = √(γ_min γ_max) = √(λ_min λ_max) = 4 sin(πh);

see Theorem 2.1.

In Table 1 we list the experimental optimal parameter α* (denoted by α_exp), the estimated optimal parameter α* (denoted by α_est) determined by Estimation 5.1, the upper-bound minimizer ᾱ, and the corresponding spectral radii ρ(M(α)) of the HSS iteration matrix M(α) for α = α_exp, α_est and ᾱ. From this table we see that ᾱ always leads to a larger spectral radius ρ(M(α)) than α_est does, and that α_est yields quite a good approximation to α_exp.
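For example, the eigenvalue bounds and the upper-bound minimizer ᾱ for the grid size m = 32 used in the tables can be evaluated directly:

```python
# Bounds for the convection-diffusion model problem with m = 32 (h = 1/33),
# following Lemma A.1 of [2] as quoted above.
import math

m = 32
h = 1.0 / (m + 1)
gamma_min = 4 * (1 - math.cos(math.pi * h))
gamma_max = 4 * (1 + math.cos(math.pi * h))
alpha_bar = math.sqrt(gamma_min * gamma_max)
print(abs(alpha_bar - 4 * math.sin(math.pi * h)) < 1e-12)  # True
```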
In Table 2 we list the number of iterations (denoted by IT) and the elapsed CPU time in seconds (denoted by CPU) of the HSS iteration method when it is applied to the nonsymmetric positive definite system of linear equations (1.1) with the coefficient matrix

Table 1: α versus ρ(M(α)) when m = 32. Columns: δ; α_exp; ρ(M(α_exp)); α_est; ρ(M(α_est)); ᾱ; ρ(M(ᾱ)). (The numerical entries are not reproduced here.)

Table 2: IT and CPU when m = 32 and ε = 10⁻⁶. Columns: δ; α_exp; IT; CPU; α_est; IT; CPU. (The numerical entries are not reproduced here.)

(5.1). From this table we see that the numerical results produced with α_est coincide with those using α_exp, and the agreement is quite close. Here and in the next example, we choose the right-hand-side vector f such that the exact solution of the system of linear equations is (1, 1, ..., 1)ᵀ ∈ Rⁿ. In addition, all runs are started from the initial vector x^(0) = 0 and terminated once the current iterate satisfies

  ‖f − A x^(k)‖₂ / ‖f − A x^(0)‖₂ ≤ ε.

The experiments are run in MATLAB (version 6.1). The machine used is a Pentium-III 500 personal computer with 256M memory.

Then, we use the following example of a 2-by-2 block system of linear equations to further confirm the above observations. Consider the system of linear equations (1.1) with the coefficient matrix

  A = [ B  E ; −Eᵀ  0.5 I ],  (5.2)

where

  B = [ I ⊗ T_H + T_H ⊗ I   0 ; 0   I ⊗ T_H + T_H ⊗ I ] ∈ R^{2m²×2m²},  (5.3)

  E = [ I ⊗ F ; F ⊗ I ] ∈ R^{2m²×m²},  (5.4)

T_H = tridiag(−1, 2, −1) ∈ R^{m×m},  F = δh · tridiag(−1, 1, 0) ∈ R^{m×m},  (5.5)

with h = 1/(m+1) the discretization mesh-size; see [4]. For this example, we have r = 2m^2 and s = m^2. Hence, the total number of variables is r + s = 3m^2.

In Table 3 we list the experimental optimal parameter α_exp, the estimated optimal parameter α_est determined by Estimation 5.1, and the corresponding spectral radii ρ(M(α)) of the HSS iteration matrix M(α) for α = α_exp and α_est. Here, considering Theorem 4.1 and the two-by-two block structure of the matrix A in (5.2), we may apply the equations in Estimation 5.1 with q taken to be either the largest or the smallest singular value of the matrix E (denoted, respectively, by q_1 and q_k), and also consider the two points α = √(λ_min λ_max) and α = √(q_1 q_k), to obtain a more accurate estimate of the optimal iteration parameter for the HSS iteration method. From this table we can see that α_est yields quite a good approximation to α_exp.

Table 3: α versus ρ(M(α)) (columns: δ, m; α_exp, ρ(M(α_exp)); α_est, ρ(M(α_est))).

In Table 4 we list the number of iterations and the elapsed CPU time of the HSS iteration method when it is applied to the 2-by-2 block system of linear equations (1.1) with the coefficient matrix (5.2)-(5.5). From this table we can see that the numerical results produced with α_est coincide with those using α_exp, and the match is quite close.

Table 4: IT and CPU when ε = 10^{-6} (columns: δ, m; α_exp with IT and CPU; α_est with IT and CPU).
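To make the construction (5.2)-(5.5) concrete, the following sketch (assuming NumPy; the helper name `blocks` is ours) assembles B and E and forms the two candidate points √(λ_min λ_max) and √(q_1 q_k) mentioned above; taking λ_min and λ_max as the extreme eigenvalues of B is our reading of the text, not a statement from the paper:

```python
import numpy as np

# Hedged sketch: assemble B of (5.3) and E of (5.4)-(5.5), then form the
# two candidate points sqrt(lambda_min*lambda_max) and sqrt(q_1*q_k).
# Taking lambda_min/lambda_max from B is an assumption on our part.

def blocks(m, delta):
    h = 1.0 / (m + 1)
    I = np.eye(m)
    TH = 2.0 * I - np.eye(m, k=-1) - np.eye(m, k=1)   # tridiag(-1, 2, -1)
    F = delta * h * (I - np.eye(m, k=-1))             # delta*h*tridiag(-1, 1, 0)
    D = np.kron(I, TH) + np.kron(TH, I)               # diagonal block of B
    B = np.block([[D, np.zeros_like(D)],
                  [np.zeros_like(D), D]])             # (5.3): 2m^2 x 2m^2
    E = np.vstack([np.kron(I, F), np.kron(F, I)])     # (5.4): 2m^2 x m^2
    return B, E

m, delta = 4, 10.0
B, E = blocks(m, delta)
assert B.shape == (2 * m**2, 2 * m**2) and E.shape == (2 * m**2, m**2)

lam = np.linalg.eigvalsh(B)                 # B is symmetric positive definite
q = np.linalg.svd(E, compute_uv=False)
q1, qk = q.max(), q.min()                   # largest / smallest singular value
cand_lam = np.sqrt(lam.min() * lam.max())
cand_q = np.sqrt(q1 * qk)
```

Since F is lower triangular with nonzero diagonal δh, the block E has full column rank, so q_k > 0 and both candidate points are well defined.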

6 Proof of Remark 3.3

Here we give the proof of Remark 3.3. The assertion concerning λ_1 = λ_2 and the assertion concerning equation (3.3) can be readily verified. We consider the equation (3.4) and the reduced equation (3.5). Note that for α > 0, the right-hand side of (3.4) is nonnegative, while the left-hand side is nonnegative only on the interval [λ_2, λ_1]. So, all the positive solutions β = α^2 of the equation (3.5) lie in [λ_2^2, λ_1^2]. Define

f(β) = (λ_1^2 − β)(β − λ_2^2),  g_1(β) = (β − λ_1 λ_2)^2,  g_2(β) = (β − q^2)^2 / (β + q^2)^2,

and

g(β) = g_1(β) g_2(β),  h(β) = f(β) − g(β),  for β > 0.

Then β is a positive solution of (3.5) if and only if β is a positive solution of h(β) = 0. Obviously, the graph of f(β) is a concave parabola which intersects the β-axis at β = λ_2^2 and β = λ_1^2, while the graph of g_1(β) is a convex parabola touching the β-axis at β = λ_1 λ_2. Note that f(β) = g_1(β) has two solutions, lying in the intervals [λ_2^2, λ_1 λ_2) and ((λ_1^2 + λ_2^2)/2, λ_1^2], respectively. The graph of g(β) can be obtained from that of g_1(β) by reducing the vertical height of each point by the factor g_2(β) ∈ (0, 1). Thus, the solution of f(β) = g_1(β) nearer to λ_2^2 will move further left to a solution of f(β) = g(β), and the solution of f(β) = g_1(β) nearer to λ_1^2 will move further right to a solution of f(β) = g(β).

We claim that h(β) cannot have more than 2 positive solutions in [λ_2^2, λ_1^2]; equivalently, (3.5) cannot have more than 2 positive solutions in [λ_2^2, λ_1^2].

First, we prove the claim when q^2 < λ_2^2 or q^2 > λ_1^2. Let p(β) be the polynomial on the left side of (3.5) divided by 2. Then the product of the roots of (3.5) is the constant term of p(β) and equals the positive number q^4 λ_1^2 λ_2^2. So, if (3.5) had more than 2 positive roots, then it would have 4 positive roots, say, β_1 ≤ β_2 ≤ β_3 ≤ β_4. Suppose p'(β) has zeros β_1' ≤ β_2' ≤ β_3', p''(β) has zeros β_1'' ≤ β_2'', and p'''(β) has the zero β_1'''. By Rolle's theorem we have

β_1 ≤ β_1' ≤ β_2 ≤ β_2' ≤ β_3 ≤ β_3' ≤ β_4 and β_1' ≤ β_1'' ≤ β_1''' ≤ β_2'' ≤ β_3'.

In particular, β_1' ≤ β_1''' ≤ β_3'.
Now, examining the constant terms of the polynomials p'(β)/4 and p'''(β)/24, we see that

β_1' β_2' β_3' = q^4 (λ_1 + λ_2)^2 / 8  (6.1)

and

β_1''' = (λ_1 + λ_2)^2 / 8.

If q^2 < λ_2^2, then q^2 < β_1 ≤ β_1' ≤ β_2', and hence

β_1' β_2' β_3' > (q^2)(q^2) β_1''' = q^4 (λ_1 + λ_2)^2 / 8,

which contradicts (6.1). Similarly, if q^2 > λ_1^2, then q^2 > β_4 ≥ β_3' ≥ β_2', and hence

β_1' β_2' β_3' < β_1''' (q^2)(q^2) = q^4 (λ_1 + λ_2)^2 / 8,

which again contradicts (6.1). Thus, our claim is proved in these cases.
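The Rolle-type interlacing of the zeros of p, p', p'' and p''' invoked above can be checked numerically on random quartics; the following is a hedged sanity check (assuming NumPy), not part of the proof:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Hedged numerical check of the interlacing used above: for a monic
# quartic p with positive roots b1<=b2<=b3<=b4, the zeros of p', p''
# and p''' satisfy
#   b1 <= b1' <= b2 <= b2' <= b3 <= b3' <= b4
# and
#   b1' <= b1'' <= b1''' <= b2'' <= b3'.
rng = np.random.default_rng(0)
tol = 1e-8
for _ in range(100):
    b = np.sort(rng.uniform(0.1, 10.0, size=4))       # roots of p
    p = P.polyfromroots(b)                            # ascending coefficients
    d1 = np.sort(P.polyroots(P.polyder(p)).real)      # zeros of p'
    d2 = np.sort(P.polyroots(P.polyder(p, 2)).real)   # zeros of p''
    d3 = P.polyroots(P.polyder(p, 3)).real[0]         # zero of p'''
    chain = [b[0], d1[0], b[1], d1[1], b[2], d1[2], b[3]]
    assert all(x <= y + tol for x, y in zip(chain, chain[1:]))
    chain2 = [d1[0], d2[0], d3, d2[1], d1[2]]
    assert all(x <= y + tol for x, y in zip(chain2, chain2[1:]))
```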

Next, we assume that q^2 ∈ [λ_2^2, λ_1^2]. Then

g(β) = (β − q^2)^2 (β − λ_1 λ_2)^2 / (β + q^2)^2

has two zeros in [λ_2^2, λ_1^2], namely, β = q^2 and β = λ_1 λ_2.

If λ_2^2 ≤ q^2 ≤ λ_1 λ_2, then f is increasing on [λ_2^2, (λ_1^2 + λ_2^2)/2] and q^2 ≤ λ_1 λ_2 ≤ (λ_1^2 + λ_2^2)/2. So, f is increasing on [λ_2^2, q^2]. Since g(β) is decreasing on [λ_2^2, q^2], the function h(β) has different signs at λ_2^2 and q^2, and it follows that h(β) = 0 has a root in [λ_2^2, q^2]. On (q^2, (λ_1^2 + λ_2^2)/2], we have f(β) > g_1(β) ≥ g_1(β) g_2(β). Thus, f(β) = g_1(β) g_2(β) has no root in this interval. Finally, on the interval [(λ_1^2 + λ_2^2)/2, λ_1^2], f is decreasing and g(β) is increasing, and the function h(β) assumes nonzero values with different signs at the end points. Thus, f(β) = g_1(β) g_2(β) has a root in ((λ_1^2 + λ_2^2)/2, λ_1^2).

Suppose λ_1 λ_2 < q^2 ≤ λ_1^2. On [λ_2^2, λ_1 λ_2], f is increasing and g(β) is decreasing. Moreover, the function h(β) assumes nonzero values with different signs at λ_2^2 and λ_1 λ_2, so it follows that h(β) = 0 has a root in (λ_2^2, λ_1 λ_2). Suppose q^2 < (λ_1^2 + λ_2^2)/2. On (λ_1 λ_2, (λ_1^2 + λ_2^2)/2], we have f(β) > g_1(β) ≥ g_1(β) g_2(β). Thus, f(β) = g_1(β) g_2(β) has no root in this interval. On the interval [(λ_1^2 + λ_2^2)/2, λ_1^2], f is decreasing and g(β) is increasing, and the function h(β) assumes nonzero values with different signs at the end points. Thus, f(β) = g(β) has a root in ((λ_1^2 + λ_2^2)/2, λ_1^2).

Finally, if q^2 ∈ [(λ_1^2 + λ_2^2)/2, λ_1^2], then for β ∈ (λ_1 λ_2, q^2) we have

(β − λ_1 λ_2)^2 / (β + q^2)^2 ∈ (0, 1)

and hence

f(β) > (β − q^2)^2 > g(β).

Thus, f(β) = g(β) has no solution in [(λ_1^2 + λ_2^2)/2, q^2). Now, on the interval [q^2, λ_1^2], f is decreasing and g(β) is increasing, and the function h(β) assumes values with different signs at the end points. Thus, f(β) = g(β) has a root in [q^2, λ_1^2].
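The claim that h(β) = 0 has at most two roots in [λ_2^2, λ_1^2] can also be checked numerically on sample data; the following is a hedged sanity check (assuming NumPy; a sign change on a fine grid can only undercount, never overcount, the true sign changes of h):

```python
import numpy as np

# Hedged numerical check of the claim of this section: for sample values
# of lambda_1 > lambda_2 > 0 and q, the function
#   h(beta) = f(beta) - g(beta),
#   f(beta) = (lambda_1^2 - beta)(beta - lambda_2^2),
#   g(beta) = (beta - lambda_1*lambda_2)^2 (beta - q^2)^2 / (beta + q^2)^2,
# changes sign at most twice on [lambda_2^2, lambda_1^2].

def sign_changes(l1, l2, q, n=20000):
    f = lambda b: (l1**2 - b) * (b - l2**2)
    g = lambda b: (b - l1 * l2)**2 * (b - q**2)**2 / (b + q**2)**2
    b = np.linspace(l2**2, l1**2, n)
    s = np.sign(f(b) - g(b))
    s = s[s != 0]                               # drop exact zeros on the grid
    return int(np.count_nonzero(s[:-1] * s[1:] < 0))

# one sample in each of the three cases treated above:
# q^2 < lambda_2^2, q^2 in [lambda_2^2, lambda_1^2], q^2 > lambda_1^2
for (l1, l2, q) in [(3.0, 1.0, 0.5), (3.0, 1.0, 2.0), (3.0, 1.0, 5.0)]:
    assert sign_changes(l1, l2, q) <= 2
```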
Acknowledgement: Part of this work was done during the first author's visits to the Department of Mathematics of The College of William & Mary during May and June 2004, and to the Shanghai Key Laboratory of Contemporary Applied Mathematics of Fudan University. The authors are very much indebted to Jun-Feng Yin for his kind help in running the numerical experiments.

References

[1] Z.-Z. Bai, G.H. Golub, L.-Z. Lu and J.-F. Yin, Block triangular and skew-Hermitian splitting methods for positive definite linear systems, SIAM J. Sci. Comput., 26:3 (2005).
[2] Z.-Z. Bai, G.H. Golub and M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003).
[3] Z.-Z. Bai, G.H. Golub and M.K. Ng, On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations, Tech. Report SCCM-02-06, Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University. Available online at
[4] Z.-Z. Bai, G.H. Golub and J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math., 98 (2004).
[5] M. Benzi, M.J. Gander and G.H. Golub, Optimization of the Hermitian and skew-Hermitian splitting iteration for saddle-point problems, BIT, 43 (2003).
[6] M. Benzi and G.H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl., 26 (2004).
[7] M. Benzi and M.K. Ng, Preconditioned iterative methods for weighted Toeplitz least squares problems, SIAM J. Matrix Anal. Appl., in press.
[8] D. Bertaccini, G.H. Golub, S. Serra-Capizzano and C.T. Possio, Preconditioned HSS methods for the solution of non-Hermitian positive definite linear systems and applications to the discrete convection-diffusion equation, Numer. Math., 99 (2005).
[9] G.H. Golub and C.F. Van Loan, Matrix Computations, 3rd Edition, The Johns Hopkins University Press, Baltimore, 1996.
[10] G.H. Golub and A.J. Wathen, An iteration for indefinite systems and its application to the Navier-Stokes equations, SIAM J. Sci. Comput., 19 (1998).
[11] M.-K. Ho and M.K. Ng, Splitting iterations for circulant-plus-diagonal systems, Numer. Linear Algebra Appl., 12:8 (2005).
[12] M.K. Ng, Circulant and skew-circulant splitting methods for Toeplitz systems, J. Comput. Appl. Math., 159 (2003).
[13] J.-Y. Pan, Z.-Z. Bai and M.K. Ng, Two-step waveform relaxation methods for implicit linear initial value problems, Numer. Linear Algebra Appl., 12 (2005).
[14] J.-Y. Pan, M.K. Ng and Z.-Z. Bai, New preconditioners for saddle point problems, Appl. Math. Comput., in press.

[15] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, Boston, 1996.
[16] V. Simoncini and M. Benzi, Spectral properties of the Hermitian and skew-Hermitian splitting preconditioner for saddle point problems, SIAM J. Matrix Anal. Appl., 26 (2004).
[17] R.S. Varga, Matrix Iterative Analysis, 2nd Revised and Expanded Edition, Springer-Verlag, Berlin, 2000.
[18] H.A. van der Vorst, Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, Cambridge, 2003.
[19] C.-L. Wang and Z.-Z. Bai, Sufficient conditions for the convergent splittings of non-Hermitian positive definite matrices, Linear Algebra Appl., 330 (2001).
[20] D.M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York and London, 1971.


Lecture 15 - Root System Axiomatics Lecture 15 - Root System Axiomatics Nov 1, 01 In this lecture we examine root systems from an axiomatic point of view. 1 Reflections If v R n, then it determines a hyperplane, denoted P v, through the

Διαβάστε περισσότερα

A Note on Intuitionistic Fuzzy. Equivalence Relation

A Note on Intuitionistic Fuzzy. Equivalence Relation International Mathematical Forum, 5, 2010, no. 67, 3301-3307 A Note on Intuitionistic Fuzzy Equivalence Relation D. K. Basnet Dept. of Mathematics, Assam University Silchar-788011, Assam, India dkbasnet@rediffmail.com

Διαβάστε περισσότερα

ω ω ω ω ω ω+2 ω ω+2 + ω ω ω ω+2 + ω ω+1 ω ω+2 2 ω ω ω ω ω ω ω ω+1 ω ω2 ω ω2 + ω ω ω2 + ω ω ω ω2 + ω ω+1 ω ω2 + ω ω+1 + ω ω ω ω2 + ω

ω ω ω ω ω ω+2 ω ω+2 + ω ω ω ω+2 + ω ω+1 ω ω+2 2 ω ω ω ω ω ω ω ω+1 ω ω2 ω ω2 + ω ω ω2 + ω ω ω ω2 + ω ω+1 ω ω2 + ω ω+1 + ω ω ω ω2 + ω 0 1 2 3 4 5 6 ω ω + 1 ω + 2 ω + 3 ω + 4 ω2 ω2 + 1 ω2 + 2 ω2 + 3 ω3 ω3 + 1 ω3 + 2 ω4 ω4 + 1 ω5 ω 2 ω 2 + 1 ω 2 + 2 ω 2 + ω ω 2 + ω + 1 ω 2 + ω2 ω 2 2 ω 2 2 + 1 ω 2 2 + ω ω 2 3 ω 3 ω 3 + 1 ω 3 + ω ω 3 +

Διαβάστε περισσότερα

SOME PROPERTIES OF FUZZY REAL NUMBERS

SOME PROPERTIES OF FUZZY REAL NUMBERS Sahand Communications in Mathematical Analysis (SCMA) Vol. 3 No. 1 (2016), 21-27 http://scma.maragheh.ac.ir SOME PROPERTIES OF FUZZY REAL NUMBERS BAYAZ DARABY 1 AND JAVAD JAFARI 2 Abstract. In the mathematical

Διαβάστε περισσότερα

Lecture 21: Properties and robustness of LSE

Lecture 21: Properties and robustness of LSE Lecture 21: Properties and robustness of LSE BLUE: Robustness of LSE against normality We now study properties of l τ β and σ 2 under assumption A2, i.e., without the normality assumption on ε. From Theorem

Διαβάστε περισσότερα

2. Let H 1 and H 2 be Hilbert spaces and let T : H 1 H 2 be a bounded linear operator. Prove that [T (H 1 )] = N (T ). (6p)

2. Let H 1 and H 2 be Hilbert spaces and let T : H 1 H 2 be a bounded linear operator. Prove that [T (H 1 )] = N (T ). (6p) Uppsala Universitet Matematiska Institutionen Andreas Strömbergsson Prov i matematik Funktionalanalys Kurs: F3B, F4Sy, NVP 2005-03-08 Skrivtid: 9 14 Tillåtna hjälpmedel: Manuella skrivdon, Kreyszigs bok

Διαβάστε περισσότερα

Lecture 2. Soundness and completeness of propositional logic

Lecture 2. Soundness and completeness of propositional logic Lecture 2 Soundness and completeness of propositional logic February 9, 2004 1 Overview Review of natural deduction. Soundness and completeness. Semantics of propositional formulas. Soundness proof. Completeness

Διαβάστε περισσότερα

The ε-pseudospectrum of a Matrix

The ε-pseudospectrum of a Matrix The ε-pseudospectrum of a Matrix Feb 16, 2015 () The ε-pseudospectrum of a Matrix Feb 16, 2015 1 / 18 1 Preliminaries 2 Definitions 3 Basic Properties 4 Computation of Pseudospectrum of 2 2 5 Problems

Διαβάστε περισσότερα

9.09. # 1. Area inside the oval limaçon r = cos θ. To graph, start with θ = 0 so r = 6. Compute dr

9.09. # 1. Area inside the oval limaçon r = cos θ. To graph, start with θ = 0 so r = 6. Compute dr 9.9 #. Area inside the oval limaçon r = + cos. To graph, start with = so r =. Compute d = sin. Interesting points are where d vanishes, or at =,,, etc. For these values of we compute r:,,, and the values

Διαβάστε περισσότερα

On a four-dimensional hyperbolic manifold with finite volume

On a four-dimensional hyperbolic manifold with finite volume BULETINUL ACADEMIEI DE ŞTIINŢE A REPUBLICII MOLDOVA. MATEMATICA Numbers 2(72) 3(73), 2013, Pages 80 89 ISSN 1024 7696 On a four-dimensional hyperbolic manifold with finite volume I.S.Gutsul Abstract. In

Διαβάστε περισσότερα

Srednicki Chapter 55

Srednicki Chapter 55 Srednicki Chapter 55 QFT Problems & Solutions A. George August 3, 03 Srednicki 55.. Use equations 55.3-55.0 and A i, A j ] = Π i, Π j ] = 0 (at equal times) to verify equations 55.-55.3. This is our third

Διαβάστε περισσότερα

Finite difference method for 2-D heat equation

Finite difference method for 2-D heat equation Finite difference method for 2-D heat equation Praveen. C praveen@math.tifrbng.res.in Tata Institute of Fundamental Research Center for Applicable Mathematics Bangalore 560065 http://math.tifrbng.res.in/~praveen

Διαβάστε περισσότερα

The challenges of non-stable predicates

The challenges of non-stable predicates The challenges of non-stable predicates Consider a non-stable predicate Φ encoding, say, a safety property. We want to determine whether Φ holds for our program. The challenges of non-stable predicates

Διαβάστε περισσότερα

ORDINAL ARITHMETIC JULIAN J. SCHLÖDER

ORDINAL ARITHMETIC JULIAN J. SCHLÖDER ORDINAL ARITHMETIC JULIAN J. SCHLÖDER Abstract. We define ordinal arithmetic and show laws of Left- Monotonicity, Associativity, Distributivity, some minor related properties and the Cantor Normal Form.

Διαβάστε περισσότερα

Finite Field Problems: Solutions

Finite Field Problems: Solutions Finite Field Problems: Solutions 1. Let f = x 2 +1 Z 11 [x] and let F = Z 11 [x]/(f), a field. Let Solution: F =11 2 = 121, so F = 121 1 = 120. The possible orders are the divisors of 120. Solution: The

Διαβάστε περισσότερα

The Simply Typed Lambda Calculus

The Simply Typed Lambda Calculus Type Inference Instead of writing type annotations, can we use an algorithm to infer what the type annotations should be? That depends on the type system. For simple type systems the answer is yes, and

Διαβάστε περισσότερα

DESIGN OF MACHINERY SOLUTION MANUAL h in h 4 0.

DESIGN OF MACHINERY SOLUTION MANUAL h in h 4 0. DESIGN OF MACHINERY SOLUTION MANUAL -7-1! PROBLEM -7 Statement: Design a double-dwell cam to move a follower from to 25 6, dwell for 12, fall 25 and dwell for the remader The total cycle must take 4 sec

Διαβάστε περισσότερα

Practice Exam 2. Conceptual Questions. 1. State a Basic identity and then verify it. (a) Identity: Solution: One identity is csc(θ) = 1

Practice Exam 2. Conceptual Questions. 1. State a Basic identity and then verify it. (a) Identity: Solution: One identity is csc(θ) = 1 Conceptual Questions. State a Basic identity and then verify it. a) Identity: Solution: One identity is cscθ) = sinθ) Practice Exam b) Verification: Solution: Given the point of intersection x, y) of the

Διαβάστε περισσότερα

Section 9.2 Polar Equations and Graphs

Section 9.2 Polar Equations and Graphs 180 Section 9. Polar Equations and Graphs In this section, we will be graphing polar equations on a polar grid. In the first few examples, we will write the polar equation in rectangular form to help identify

Διαβάστε περισσότερα

General 2 2 PT -Symmetric Matrices and Jordan Blocks 1

General 2 2 PT -Symmetric Matrices and Jordan Blocks 1 General 2 2 PT -Symmetric Matrices and Jordan Blocks 1 Qing-hai Wang National University of Singapore Quantum Physics with Non-Hermitian Operators Max-Planck-Institut für Physik komplexer Systeme Dresden,

Διαβάστε περισσότερα

Problem Set 9 Solutions. θ + 1. θ 2 + cotθ ( ) sinθ e iφ is an eigenfunction of the ˆ L 2 operator. / θ 2. φ 2. sin 2 θ φ 2. ( ) = e iφ. = e iφ cosθ.

Problem Set 9 Solutions. θ + 1. θ 2 + cotθ ( ) sinθ e iφ is an eigenfunction of the ˆ L 2 operator. / θ 2. φ 2. sin 2 θ φ 2. ( ) = e iφ. = e iφ cosθ. Chemistry 362 Dr Jean M Standard Problem Set 9 Solutions The ˆ L 2 operator is defined as Verify that the angular wavefunction Y θ,φ) Also verify that the eigenvalue is given by 2! 2 & L ˆ 2! 2 2 θ 2 +

Διαβάστε περισσότερα

PROPERTIES OF CERTAIN INTEGRAL OPERATORS. a n z n (1.1)

PROPERTIES OF CERTAIN INTEGRAL OPERATORS. a n z n (1.1) GEORGIAN MATHEMATICAL JOURNAL: Vol. 2, No. 5, 995, 535-545 PROPERTIES OF CERTAIN INTEGRAL OPERATORS SHIGEYOSHI OWA Abstract. Two integral operators P α and Q α for analytic functions in the open unit disk

Διαβάστε περισσότερα

PARTIAL NOTES for 6.1 Trigonometric Identities

PARTIAL NOTES for 6.1 Trigonometric Identities PARTIAL NOTES for 6.1 Trigonometric Identities tanθ = sinθ cosθ cotθ = cosθ sinθ BASIC IDENTITIES cscθ = 1 sinθ secθ = 1 cosθ cotθ = 1 tanθ PYTHAGOREAN IDENTITIES sin θ + cos θ =1 tan θ +1= sec θ 1 + cot

Διαβάστε περισσότερα

SOLVING CUBICS AND QUARTICS BY RADICALS

SOLVING CUBICS AND QUARTICS BY RADICALS SOLVING CUBICS AND QUARTICS BY RADICALS The purpose of this handout is to record the classical formulas expressing the roots of degree three and degree four polynomials in terms of radicals. We begin with

Διαβάστε περισσότερα

On generalized preconditioned Hermitian and skew-hermitian splitting methods for saddle point problems

On generalized preconditioned Hermitian and skew-hermitian splitting methods for saddle point problems On generalized preconditioned Hermitian skew-hermitian splitting methods for saddle point problems Department of Humanities Social Sciences Zhejiang Industry Polytechnic College Shaoxing, Zhejiang 312000

Διαβάστε περισσότερα

Coefficient Inequalities for a New Subclass of K-uniformly Convex Functions

Coefficient Inequalities for a New Subclass of K-uniformly Convex Functions International Journal of Computational Science and Mathematics. ISSN 0974-89 Volume, Number (00), pp. 67--75 International Research Publication House http://www.irphouse.com Coefficient Inequalities for

Διαβάστε περισσότερα

MA 342N Assignment 1 Due 24 February 2016

MA 342N Assignment 1 Due 24 February 2016 M 342N ssignment Due 24 February 206 Id: 342N-s206-.m4,v. 206/02/5 2:25:36 john Exp john. Suppose that q, in addition to satisfying the assumptions from lecture, is an even function. Prove that η(λ = 0,

Διαβάστε περισσότερα

Appendix S1 1. ( z) α βc. dβ β δ β

Appendix S1 1. ( z) α βc. dβ β δ β Appendix S1 1 Proof of Lemma 1. Taking first and second partial derivatives of the expected profit function, as expressed in Eq. (7), with respect to l: Π Π ( z, λ, l) l θ + s ( s + h ) g ( t) dt λ Ω(

Διαβάστε περισσότερα

Estimation for ARMA Processes with Stable Noise. Matt Calder & Richard A. Davis Colorado State University

Estimation for ARMA Processes with Stable Noise. Matt Calder & Richard A. Davis Colorado State University Estimation for ARMA Processes with Stable Noise Matt Calder & Richard A. Davis Colorado State University rdavis@stat.colostate.edu 1 ARMA processes with stable noise Review of M-estimation Examples of

Διαβάστε περισσότερα