Generalized Quasilinearization versus Newton's Method for Convex-Concave Functions


www.ccsenet.org/jmr Journal of Mathematics Research Vol. 2, No. 3; August 2010

Generalized Quasilinearization versus Newton's Method for Convex-Concave Functions

Cesar Martínez-Garza (Corresponding author)
Division of Science, The Pennsylvania State University - Berks Campus
Tulpehocken Rd., Reading, PA 19610, USA
Tel: 1-610-396-6438 E-mail: cxm58@psu.edu

Abstract

In this paper we use the Method of Generalized Quasilinearization to obtain monotone Newton-like comparative schemes to solve the equation F(x) = 0, where F(x) ∈ C[Ω, R], Ω ⊆ R. Here F(x) admits the decomposition F(x) = f(x) + g(x), where f(x) and g(x) are convex and concave functions in C[Ω, R], respectively. The monotone sequences of iterates are shown to converge quadratically. Four cases are explored in this manuscript.

Keywords: Newton's Method, Generalized Quasilinearization, Monotone Iterative Technique, Convex Functions

1. Introduction

Lakshmikantham and Vatsala (2005) showed that the method of Generalized Quasilinearization can be successfully employed to solve the equation

0 = f(x), (1)

where f ∈ C[R, R], by comparing it to the initial value problem (IVP)

x' = f(x), x(0) = x_0, (2)

on [0, T]. By exploiting the properties of the method of Generalized Quasilinearization (Lakshmikantham and Vatsala, 1998), they constructed a framework comparable to Newton's method for solving for an isolated zero x_0 of (1). The simplicity and rapid convergence of Newton's method make it a first choice for solving (1), but its limitations have given rise to a broad body of work that continues to this day (Bellman and Kalaba, 1965; Kelley, 1995; Ortega and Rheinboldt, 1970; Potra, 1987). Generating Newton-like schemes for the solution of (1) from the method of Generalized Quasilinearization is a logical choice, as Quasilinearization follows a methodology similar to that of Newton's method, albeit for a seemingly different purpose (Bellman and Kalaba, 1965).
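The Newton-type iterations discussed above are short to implement. The following Python sketch (illustrative only, not part of the original paper) implements the two-sided Newton-Fourier iteration recalled below in (6)-(7); the test function f(x) = x^2 - 2 and the bracket [1, 2] are assumptions chosen so that f(α_0) < 0 < f(β_0), f_x > 0, and f_xx > 0 hold.

```python
def newton_fourier(f, fx, a, b, tol=1e-12, max_iter=50):
    """Generate the bracketing sequences of (6)-(7).

    Both updates reuse the single derivative evaluation fx(b),
    which is why the method costs no more than Newton's method.
    """
    for _ in range(max_iter):
        d = fx(b)                            # shared slope f_x(beta_n)
        b, a = b - f(b) / d, a - f(a) / d    # (6) and (7)
        if b - a < tol:
            break
    return a, b

# f(x) = x^2 - 2 is increasing and convex on [1, 2]; its simple zero is sqrt(2)
lo, hi = newton_fourier(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0, 2.0)
```

Here the lower sequence increases and the upper sequence decreases toward sqrt(2), so the pair supplies a two-sided enclosure of the root at every step.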
The main iteration formula for Newton's Method is

x_{n+1} = x_n − f(x_n)/f_x(x_n), n ≥ 0, (3)

which requires that f_x(x) be continuous and nonzero in a neighborhood of a simple root of (1). This in turn requires knowledge of the monotone character of f(x). The similarities between Newton's method and standard Quasilinearization become more apparent when (3) is expressed in the form

0 = f(x_n) + f_x(x_n)(x_{n+1} − x_n), (4)

and compared with the iterative scheme afforded by Quasilinearization to approximate (2):

x'_{n+1} = f(x_n) + f_x(x_n)(x_{n+1} − x_n), x_{n+1}(0) = x_0. (5)

The similarities between the method of Generalized Quasilinearization and Newton's method come to light when we look at the Newton-Fourier method, as shown next. Let f be defined as in (1), and let D = [α_0, β_0] be a neighborhood of the simple zero of (1). If f(α_0) < 0 < f(β_0), f_x(x) > 0, and f_xx(x) > 0, meaning that f is monotone increasing and convex on D, then there exist monotone sequences {α_n} and {β_n} that converge quadratically to the simple zero of (1) in D. These sequences are generated by the iterative scheme

0 = f(β_n) + f_x(β_n)(β_{n+1} − β_n), (6)
0 = f(α_n) + f_x(β_n)(α_{n+1} − α_n), (7)

(Lakshmikantham and Vatsala, 1998). Although two sequences are required, the Newton-Fourier method is not more expensive than the traditional Newton's method, since the derivative computations are the same for both iterates (Potra, 1987).

Published by Canadian Center of Science and Education

Comparatively, consider the IVP (2) with the following conditions:

f_x(x) ≤ 0, f_xx(x) ≥ 0, α'_0 ≤ f(β_0), β'_0 ≥ f(α_0),

where α_0(t) ≤ β_0(t) for t ∈ [0, T], and α_0(t), β_0(t) are coupled lower and upper solutions of (2); see Ladde, Lakshmikantham, and Vatsala (1985) and Lakshmikantham & Vatsala (1998, 2005). The corresponding iterative scheme using Generalized Quasilinearization is

α'_{n+1} = f(β_n) + f_x(β_n)(α_{n+1} − α_n), α_{n+1}(0) = x_0, (8)
β'_{n+1} = f(α_n) + f_x(β_n)(β_{n+1} − β_n), β_{n+1}(0) = x_0, (9)

where α_0 ≤ x ≤ β_0 on [0, T]. These schemes give rise to sequences of iterates that converge uniformly and quadratically to x(t), the unique solution of (2) on [0, T]. For complete details refer to Lakshmikantham and Vatsala (1998). If one considers the case where f is monotone decreasing, the Newton-Fourier method is no longer applicable, while the solution of a comparable IVP with f monotone decreasing can still be approximated with Generalized Quasilinearization. Below we present four cases in which iterative schemes are developed to approximate (1) when f can be decomposed into a convex and a concave function. The corresponding Generalized Quasilinearization theorems and proofs can be found in Lakshmikantham and Vatsala (1998, Section 1.3).

2. Statement of the problem

Consider the scalar equation

0 = F(x) = f(x) + g(x), (10)

where f, g ∈ C[Ω, R], Ω ⊆ R, and [α_0, β_0] = D ⊆ Ω. Assume that (10) has a simple root x = r in D. Let f and g be convex and concave functions, respectively, such that f_x, g_x, f_xx, g_xx exist, are continuous, and satisfy

f_xx ≥ 0, g_xx ≤ 0, x ∈ Ω. (11)

The purpose of this paper is to produce iterative schemes that generate two monotone sequences {α_n} and {β_n} converging to r from the left and from the right. We will show that this convergence is quadratic in the four following cases.

3. Main Results

Theorem 1.
Assume that (10) and (11) hold true, and that

0 < f(α_0) + g(α_0), (12)
0 > f(β_0) + g(β_0), (13)
f_x(β_0) + g_x(α_0) < 0, (14)

for α_0 ≤ x ≤ β_0. Then there exist monotone sequences {α_n} and {β_n} such that

α_0 < α_1 < ... < α_n < r < β_n < ... < β_1 < β_0, (15)

which converge quadratically to r. The sequences {α_n} and {β_n} are generated by the following iterative scheme:

0 = f(α_n) + f_x(α_n)(α_{n+1} − α_n) + g(α_n) + g_x(β_n)(α_{n+1} − α_n), (16)
0 = f(β_n) + f_x(α_n)(β_{n+1} − β_n) + g(β_n) + g_x(β_n)(β_{n+1} − β_n). (17)

Proof. Let p = α_0 − α_1; by (12) and (16) with n = 0,

0 < [f(α_0) + g(α_0)] − [f(α_0) + f_x(α_0)(α_1 − α_0) + g(α_0) + g_x(β_0)(α_1 − α_0)]
= −[f_x(α_0)(α_1 − α_0) + g_x(β_0)(α_1 − α_0)],

so that

0 < [f_x(α_0) + g_x(β_0)]p.

By condition (11), f_x(α_0) < f_x(β_0) and g_x(β_0) < g_x(α_0), so

0 < [f_x(α_0) + g_x(β_0)]p < [f_x(β_0) + g_x(α_0)]p.

Condition (14) then allows us to conclude that p < 0, and thus α_0 < α_1. Next we show that β_1 < β_0; let p = β_1 − β_0. From (13) and (17) with n = 0, we have

0 < [f(β_0) + f_x(α_0)(β_1 − β_0) + g(β_0) + g_x(β_0)(β_1 − β_0)] − [f(β_0) + g(β_0)]
= [f_x(α_0)(β_1 − β_0) + g_x(β_0)(β_1 − β_0)],

so that

0 < [f_x(α_0) + g_x(β_0)]p.

ISSN 1916-9795 E-ISSN 1916-9809

Again, by (11) and (14) we conclude that p < 0, so β_1 < β_0. To show that α_1 < β_1, we combine (16) and (17), both with n = 0:

0 = [f(α_0) + f_x(α_0)(α_1 − α_0) + g(α_0) + g_x(β_0)(α_1 − α_0)] − [f(β_0) + f_x(α_0)(β_1 − β_0) + g(β_0) + g_x(β_0)(β_1 − β_0)]
= [f(α_0) − f(β_0)] + [g(α_0) − g(β_0)] + [f_x(α_0) + g_x(β_0)][α_1 − α_0 + β_0 − β_1].

Using the Mean Value Theorem (MVT) with σ_1, σ_2 ∈ (α_0, β_0), we next obtain

0 = [f_x(σ_1) + g_x(σ_2)][α_0 − β_0] + [f_x(α_0) + g_x(β_0)][α_1 − α_0 + β_0 − β_1].

Now, (11) states that f_x(x) and g_x(x) are increasing and decreasing, respectively, so f_x(α_0) < f_x(σ_1) < f_x(β_0) and g_x(β_0) < g_x(σ_2) < g_x(α_0). Letting p = α_1 − β_1, it follows that

0 < [f_x(α_0) + g_x(β_0)][α_0 − β_0] + [f_x(α_0) + g_x(β_0)][α_1 − α_0 + β_0 − β_1]
= [f_x(α_0) + g_x(β_0)]p < [f_x(β_0) + g_x(α_0)]p.

Then p < 0 by (14), and α_1 < β_1. The next step is to show that α_1 < r < β_1. Let p = α_1 − r. The fact that f(r) + g(r) = 0, together with (16) (with n = 0), yields

0 = [f(α_0) + f_x(α_0)(α_1 − α_0) + g(α_0) + g_x(β_0)(α_1 − α_0)] − [f(r) + g(r)]
= [f(α_0) − f(r)] + [g(α_0) − g(r)] + [f_x(α_0) + g_x(β_0)][α_1 − α_0].

We again use the MVT, with γ_1, γ_2 ∈ (α_0, r), and (11) to obtain

0 = [f_x(γ_1) + g_x(γ_2)][α_0 − r] + [f_x(α_0) + g_x(β_0)][α_1 − α_0]
< [f_x(β_0) + g_x(α_0)][α_0 − r] + [f_x(α_0) + g_x(β_0)][α_1 − α_0]
< [f_x(β_0) + g_x(α_0)][α_0 − r] + [f_x(β_0) + g_x(α_0)][α_1 − α_0]
= [f_x(β_0) + g_x(α_0)][α_0 − r + α_1 − α_0]
= [f_x(β_0) + g_x(α_0)]p,

where we have again appealed to the increasing and decreasing natures of f_x(x) and g_x(x), respectively. Condition (14) leads to p < 0, so α_1 < r. By letting p = r − β_1 one can show in the same way that r < β_1. Now that both the base case and the n = 1 case have been proven, by induction we see that (15) holds true, with lim_{n→∞} α_n = r = lim_{n→∞} β_n.
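Before turning to the convergence rate, scheme (16)-(17) can be sketched numerically. The decomposition below, F(x) = x^2 − e^x with f(x) = x^2 convex and g(x) = −e^x concave on [−1, 0], is an illustrative assumption (not taken from the paper) under which conditions (11)-(14) hold, since F(−1) > 0 > F(0) and f_x(0) + g_x(−1) = −e^{−1} < 0.

```python
import math

def gq_theorem1(f, g, fx, gx, a, b, n_steps=8):
    """Iterate scheme (16)-(17): both updates share the slope f_x(a_n) + g_x(b_n)."""
    for _ in range(n_steps):
        d = fx(a) + gx(b)   # common denominator, negative under (14)
        a, b = a - (f(a) + g(a)) / d, b - (f(b) + g(b)) / d   # (16), (17)
    return a, b

# F(x) = x^2 - e^x on [-1, 0]: f convex, g concave, F(-1) > 0 > F(0)
f, g = (lambda x: x * x), (lambda x: -math.exp(x))
fx, gx = (lambda x: 2.0 * x), (lambda x: -math.exp(x))
a, b = gq_theorem1(f, g, fx, gx, -1.0, 0.0)
```

Both iterates home in on the root r ≈ −0.70347 of x^2 = e^x, the lower sequence approaching from the left and the upper one from the right, as in (15).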
To show the quadratic convergence of the scheme, let p_{n+1} = r − α_{n+1} > 0 and q_{n+1} = β_{n+1} − r > 0; likewise write p_n = r − α_n > 0 and q_n = β_n − r > 0. Combining (10) and (16), we have

0 = [f(r) + g(r)] − [f(α_n) + f_x(α_n)(α_{n+1} − α_n) + g(α_n) + g_x(β_n)(α_{n+1} − α_n)]
= [f(r) − f(α_n)] + [g(r) − g(α_n)] + [f_x(α_n) + g_x(β_n)][α_n − α_{n+1}].

The Mean Value Theorem with η_1, η_2 ∈ (α_n, r) produces the next result; we also observe that, by (11), f_x(η_1) < f_x(r) and g_x(η_2) < g_x(α_n):

0 = [f_x(η_1) + g_x(η_2)][r − α_n] + [f_x(α_n) + g_x(β_n)][α_n − r + r − α_{n+1}]
< [f_x(r) + g_x(α_n)]p_n + [f_x(α_n) + g_x(β_n)][p_{n+1} − p_n]
= [f_x(r) − f_x(α_n)]p_n − [g_x(β_n) − g_x(α_n)]p_n + [f_x(α_n) + g_x(β_n)]p_{n+1}.

Applying the MVT once more, with ζ_1 ∈ (α_n, r) and ζ_2 ∈ (α_n, β_n), yields

0 < f_xx(ζ_1)p_n^2 − g_xx(ζ_2)[β_n − α_n]p_n + [f_x(α_n) + g_x(β_n)]p_{n+1}
= f_xx(ζ_1)p_n^2 − g_xx(ζ_2)[q_n + p_n]p_n + [f_x(α_n) + g_x(β_n)]p_{n+1}.

The convex-concave behavior of f and g in C[Ω, R] given by (11), together with condition (14), yields

0 < |f_xx(ζ_1)|p_n^2 + |g_xx(ζ_2)|[q_n + p_n]p_n − |f_x(β_0) + g_x(α_0)|p_{n+1}
< M_1 p_n^2 + M_2(q_n + p_n)p_n − M_3 p_{n+1}, (18)

where

M_1 > |f_xx(x)|, M_2 > |g_xx(x)|, M_3 < |f_x(β_0) + g_x(α_0)|, for x ∈ D. (19)

Now, (18) yields

p_{n+1} < (M_1/M_3)p_n^2 + (M_2/M_3)(q_n p_n + p_n^2)
< (M_1/M_3)p_n^2 + (M_2/M_3)((3/2)p_n^2 + (1/2)q_n^2),

since (p_n − q_n)^2 = p_n^2 − 2p_n q_n + q_n^2 ≥ 0 implies (3/2)p_n^2 + (1/2)q_n^2 ≥ p_n^2 + p_n q_n. We arrive at the desired result:

p_{n+1} < (1/M_3)(M_1 + (3/2)M_2)p_n^2 + (M_2/(2M_3))q_n^2. (20)

Similarly, for q_{n+1} = β_{n+1} − r > 0 we obtain the estimate

q_{n+1} < (1/M_3)((3/2)M_1 + M_2)q_n^2 + (M_1/(2M_3))p_n^2, (21)

where M_1, M_2, M_3 are defined in (19). The combination of (20) with (21) yields

p_{n+1} + q_{n+1} < [(1/M_3)(M_1 + (3/2)M_2) + M_1/(2M_3)]p_n^2 + [(1/M_3)((3/2)M_1 + M_2) + M_2/(2M_3)]q_n^2
= (3(M_1 + M_2)/(2M_3))p_n^2 + (3(M_1 + M_2)/(2M_3))q_n^2.

Finally,

p_{n+1} + q_{n+1} = |r − α_{n+1}| + |β_{n+1} − r| < K(p_n^2 + q_n^2), (22)

where K = 3(M_1 + M_2)/(2M_3). This completes the proof of the theorem.

If F(x) is a decreasing function with an isolated zero x = r ∈ D, one can modify Theorem 1 as follows.

Theorem 2. Assume that (10) and (11) hold true, and that

0 > f(α_0) + g(α_0), (23)
0 < f(β_0) + g(β_0), (24)
f_x(α_0) + g_x(β_0) > 0, (25)

for α_0 ≤ x ≤ β_0. Then there exist monotone sequences {α_n} and {β_n} which satisfy (15), and which converge quadratically to r. The sequences {α_n} and {β_n} are generated by the following iterative scheme:

0 = f(α_n) + f_x(β_n)(α_{n+1} − α_n) + g(α_n) + g_x(α_n)(α_{n+1} − α_n), (26)
0 = f(β_n) + f_x(β_n)(β_{n+1} − β_n) + g(β_n) + g_x(α_n)(β_{n+1} − β_n). (27)

The mechanics of the proof of Theorem 2 are the same as those of Theorem 1, so we present only its highlights. Letting p = α_0 − α_1 and combining (23) with (26) (with n = 0) yields the same result as letting p = β_1 − β_0 and combining (24) with (27) (with n = 0):

0 > [f_x(β_0) + g_x(α_0)]p > [f_x(α_0) + g_x(β_0)]p,

which by (11) and (25) indicates that p < 0 in both cases; thus α_0 < α_1 and β_1 < β_0.
Likewise, when showing that α_1 < r < β_1 we obtain

0 > [f_x(β_0) + g_x(α_0)]p > [f_x(α_0) + g_x(β_0)]p,

where we first let p = α_1 − r and then p = r − β_1. The general n-th case becomes

0 > [f_x(β_n) + g_x(α_n)]p > [f_x(α_0) + g_x(β_0)]p,

which establishes that α_n < r < β_n, from which (15) follows. Quadratic convergence follows in the same fashion as in Theorem 1.
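The scheme (26)-(27) of Theorem 2 differs from that of Theorem 1 only in where the slopes are evaluated. The sketch below is again an illustrative assumption rather than an example from the paper: F(x) = e^x − e^{−x}, with f(x) = e^x convex and g(x) = −e^{−x} concave, is increasing on [−1, 1] with root r = 0 and satisfies (23)-(25) there.

```python
import math

def gq_theorem2(f, g, fx, gx, a, b, n_steps=8):
    """Iterate scheme (26)-(27): both updates share the slope f_x(b_n) + g_x(a_n)."""
    for _ in range(n_steps):
        d = fx(b) + gx(a)   # common denominator, positive under (25)
        a, b = a - (f(a) + g(a)) / d, b - (f(b) + g(b)) / d   # (26), (27)
    return a, b

# F(x) = e^x - e^{-x} is increasing; f convex, g concave; root r = 0
f, g = (lambda x: math.exp(x)), (lambda x: -math.exp(-x))
fx, gx = (lambda x: math.exp(x)), (lambda x: math.exp(-x))
a, b = gq_theorem2(f, g, fx, gx, -1.0, 1.0)
```

Under (11), f_x(β_n) and g_x(α_n) are the largest slope values available on [α_n, β_n], so the shared denominator stays bounded away from zero on D, keeping both updates conservative.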

Theorem 3. Assume that (10) and (11) hold true, and that

0 < f(β_0) + g(α_0), (28)
0 > f(α_0) + g(β_0), (29)
f_x(β_0) < 0, (30)
f_x(β_0) + g_x(α_0) < 0, (31)

for α_0 ≤ x ≤ β_0. Then there exist monotone sequences {α_n} and {β_n} which satisfy (15), and which converge quadratically to r, the isolated zero of (10). The sequences {α_n} and {β_n} are generated by the following iterative scheme:

0 = f(β_n) + f_x(β_n)(α_{n+1} − α_n) + g(α_n) + g_x(β_n)(α_{n+1} − α_n), (32)
0 = f(α_n) + f_x(β_n)(β_{n+1} − β_n) + g(β_n) + g_x(β_n)(β_{n+1} − β_n). (33)

The proof follows the same strategy as the previous proofs, but with some variation in the intermediate results, which we address below.

Proof. Begin by subtracting (32), with n = 0, from (28):

0 < f(β_0) + g(α_0) − [f(β_0) + f_x(β_0)(α_1 − α_0) + g(α_0) + g_x(β_0)(α_1 − α_0)]
= f_x(β_0)(α_0 − α_1) + g_x(β_0)(α_0 − α_1)
< [f_x(β_0) + g_x(α_0)](α_0 − α_1).

By (31) we conclude that α_0 < α_1. Next, subtract (29) from (33) with n = 0:
0 < f(α_0) + f_x(β_0)(β_1 − β_0) + g(β_0) + g_x(β_0)(β_1 − β_0) − [f(α_0) + g(β_0)]
= f_x(β_0)(β_1 − β_0) + g_x(β_0)(β_1 − β_0)
< [f_x(β_0) + g_x(α_0)](β_1 − β_0).

Again, by (31) we conclude that β_1 < β_0. Now, subtract (33) with n = 0 from (32) with n = 0:

0 = f(β_0) + f_x(β_0)(α_1 − α_0) + g(α_0) + g_x(β_0)(α_1 − α_0) − [f(α_0) + f_x(β_0)(β_1 − β_0) + g(β_0) + g_x(β_0)(β_1 − β_0)]
= [f(β_0) − f(α_0)] − [g(β_0) − g(α_0)] + f_x(β_0)(α_1 − α_0 − β_1 + β_0) + g_x(β_0)(α_1 − α_0 − β_1 + β_0)
= f_x(σ)(β_0 − α_0) − g_x(η)(β_0 − α_0) + f_x(β_0)(α_1 − α_0 − β_1 + β_0) + g_x(β_0)(α_1 − α_0 − β_1 + β_0)
< f_x(β_0)(β_0 − α_0) − g_x(β_0)(β_0 − α_0) + f_x(β_0)(α_1 − α_0 − β_1 + β_0) + g_x(β_0)(α_1 − α_0 − β_1 + β_0)
= 2f_x(β_0)(β_0 − α_0) + f_x(β_0)(α_1 − β_1) + g_x(β_0)(α_1 − β_1)
< 2f_x(β_0)(β_0 − α_0) + f_x(β_0)(α_1 − β_1) + g_x(α_0)(α_1 − β_1),

where σ, η ∈ (α_0, β_0) are given by the MVT. By (30), f_x(β_0) < 0, and since β_0 − α_0 > 0 we obtain

0 < f_x(β_0)(α_1 − β_1) + g_x(α_0)(α_1 − β_1) = [f_x(β_0) + g_x(α_0)](α_1 − β_1),

which establishes that α_1 < β_1 by virtue of (31). We now establish that α_1 < r < β_1. Subtract (10), with x = r, from (32) with n = 0:

0 = f(β_0) + f_x(β_0)(α_1 − α_0) + g(α_0) + g_x(β_0)(α_1 − α_0) − [f(r) + g(r)]
= [f(β_0) − f(r)] − [g(r) − g(α_0)] + f_x(β_0)(α_1 − α_0) + g_x(β_0)(α_1 − α_0)
= f_x(σ)(β_0 − r) − g_x(η)(r − α_0) + f_x(β_0)(α_1 − α_0) + g_x(β_0)(α_1 − α_0)
< f_x(β_0)(β_0 − r) + g_x(α_0)(α_0 − r) + f_x(β_0)(α_1 − α_0) + g_x(β_0)(α_1 − α_0)
< f_x(β_0)(β_0 − r + α_1 − α_0) + g_x(α_0)(α_0 − r + α_1 − α_0)
= f_x(β_0)(β_0 − α_0) + [f_x(β_0) + g_x(α_0)](α_1 − r)
< [f_x(β_0) + g_x(α_0)](α_1 − r),

where r < σ < β_0 and α_0 < η < r by the MVT. Condition (11) implies that f_x(β_0) > f_x(σ) and g_x(β_0) < g_x(η) < g_x(α_0); we have again appealed to (30) and finally to (31), so we conclude that α_1 − r < 0, that is, α_1 < r. Similarly, subtract (33) with n = 0 from (10) with x = r:

0 = f(r) + g(r) − [f(α_0) + f_x(β_0)(β_1 − β_0) + g(β_0) + g_x(β_0)(β_1 − β_0)]
= [f(r) − f(α_0)] − [g(β_0) − g(r)] − f_x(β_0)(β_1 − β_0) − g_x(β_0)(β_1 − β_0)
= f_x(σ)(r − α_0) − g_x(η)(β_0 − r) + f_x(β_0)(β_0 − β_1) + g_x(β_0)(β_0 − β_1)
< f_x(β_0)(r − α_0) + g_x(α_0)(r − β_0) + f_x(β_0)(β_0 − β_1) + g_x(α_0)(β_0 − β_1)
= f_x(β_0)(β_0 − α_0) + [f_x(β_0) + g_x(α_0)](r − β_1)
< [f_x(β_0) + g_x(α_0)](r − β_1),

with α_0 < σ < r and r < η < β_0, so that r − β_1 < 0, i.e., r < β_1. We conclude that α_0 < α_1 < r < β_1 < β_0; hence, by induction, we obtain (15). Quadratic convergence now follows. Let p_{n+1} = r − α_{n+1} > 0 and q_{n+1} = β_{n+1} − r > 0. Subtracting (32) from (10), with x = r, we obtain the following inequalities.
0 = f(r) + g(r) − [f(β_n) + f_x(β_n)(α_{n+1} − α_n) + g(α_n) + g_x(β_n)(α_{n+1} − α_n)]
= −[f(β_n) − f(r)] + [g(r) − g(α_n)] + f_x(β_n)(p_{n+1} − p_n) + g_x(β_n)(p_{n+1} − p_n)
= −f_x(σ)(β_n − r) + g_x(η)(r − α_n) + f_x(β_n)(p_{n+1} − p_n) + g_x(β_n)(p_{n+1} − p_n),

where, by the MVT, r < σ < β_n and α_n < η < r, and we used α_{n+1} − α_n = (α_{n+1} − r) + (r − α_n) = p_n − p_{n+1}. Decomposing β_n − r = (β_n − α_n) + (α_n − r) and multiplying through by −1, this reads

0 = f_x(σ)(β_n − α_n) − f_x(σ)p_n − g_x(η)p_n + [f_x(β_n) + g_x(β_n)](p_n − p_{n+1})
< f_x(β_n)(β_n − α_n) − f_x(r)p_n − g_x(r)p_n + [f_x(β_n) + g_x(β_n)](p_n − p_{n+1}),

since, by (11), f_x(r) < f_x(σ) < f_x(β_n) and g_x(η) > g_x(r). As f_x(β_n) < 0 by (30), the term f_x(β_n)(β_n − α_n) may be dropped, leaving

0 < [f_x(β_n) − f_x(r)]p_n + [g_x(β_n) − g_x(r)]p_n − [f_x(β_n) + g_x(β_n)]p_{n+1}
< [f_x(β_n) − f_x(r)]p_n + [g_x(β_n) − g_x(r)]p_n − [f_x(β_0) + g_x(α_0)]p_{n+1}
= f_xx(ζ)(β_n − r)p_n + g_xx(ξ)(β_n − r)p_n − [f_x(β_0) + g_x(α_0)]p_{n+1}
= [f_xx(ζ) + g_xx(ξ)]p_n q_n − [f_x(β_0) + g_x(α_0)]p_{n+1}.

Since p_n^2 − 2p_n q_n + q_n^2 ≥ 0, it follows that (1/2)(p_n^2 + q_n^2) ≥ p_n q_n. Placing bounds on the derivatives as before, with M_1, M_2 as in (19) and N < |f_x(β_0) + g_x(α_0)|, we obtain

|f_x(β_0) + g_x(α_0)|p_{n+1} < [|f_xx(ζ)| + |g_xx(ξ)|]((p_n^2 + q_n^2)/2),

so that

N p_{n+1} < ((M_1 + M_2)/2)(p_n^2 + q_n^2), and hence p_{n+1} < ((M_1 + M_2)/(2N))(p_n^2 + q_n^2).

Now, subtract (10), with x = r, from (33):

0 = f(r) + g(r) − [f(α_n) + f_x(β_n)(β_{n+1} − β_n) + g(β_n) + g_x(β_n)(β_{n+1} − β_n)]
= [f(r) − f(α_n)] − [g(β_n) − g(r)] − [f_x(β_n) + g_x(β_n)](β_{n+1} − r + r − β_n)
= f_x(σ)(r − β_n + β_n − α_n) − g_x(η)q_n − [f_x(β_n) + g_x(β_n)](q_{n+1} − q_n)
= −f_x(σ)q_n + f_x(σ)(β_n − α_n) − g_x(η)q_n − [f_x(β_n) + g_x(β_n)](q_{n+1} − q_n),

where we again used the MVT, with α_n < σ < r and r < η < β_n. By (11) we have f_x(α_n) < f_x(σ) < f_x(β_n) and g_x(η) > g_x(β_n); using condition (30) we produce the following inequalities:

0 < −f_x(α_n)q_n + f_x(β_n)(β_n − α_n) − g_x(β_n)q_n − [f_x(β_n) + g_x(β_n)](q_{n+1} − q_n)
= −f_x(α_n)q_n + f_x(β_n)(β_n − α_n) + f_x(β_n)q_n − [f_x(β_n) + g_x(β_n)]q_{n+1}
< [f_x(β_n) − f_x(α_n)]q_n − [f_x(β_n) + g_x(β_n)]q_{n+1}
= f_xx(ζ)(β_n − α_n)q_n − [f_x(β_n) + g_x(β_n)]q_{n+1}
= f_xx(ζ)(β_n − r + r − α_n)q_n − [f_x(β_n) + g_x(β_n)]q_{n+1}
= f_xx(ζ)(q_n + p_n)q_n − [f_x(β_n) + g_x(β_n)]q_{n+1}
= f_xx(ζ)(q_n^2 + p_n q_n) − [f_x(β_n) + g_x(β_n)]q_{n+1}
≤ f_xx(ζ)((3/2)q_n^2 + (1/2)p_n^2) − [f_x(β_n) + g_x(β_n)]q_{n+1}.

Here we have used the fact that p_n^2 − 2p_n q_n + q_n^2 ≥ 0 implies (3/2)q_n^2 + (1/2)p_n^2 ≥ q_n^2 + p_n q_n. Placing bounds on the derivatives as before,

0 < M_1((3/2)q_n^2 + (1/2)p_n^2) − N q_{n+1}, so that q_{n+1} < (M_1/(2N))p_n^2 + (3M_1/(2N))q_n^2.

Combining the two estimates,

p_{n+1} + q_{n+1} < ((M_1 + M_2)/(2N))(p_n^2 + q_n^2) + (M_1/(2N))p_n^2 + (3M_1/(2N))q_n^2
= ((2M_1 + M_2)/(2N))p_n^2 + ((4M_1 + M_2)/(2N))q_n^2.

Finally,

p_{n+1} + q_{n+1} = |r − α_{n+1}| + |β_{n+1} − r| < K(p_n^2 + q_n^2),

where K = max{(2M_1 + M_2)/(2N), (4M_1 + M_2)/(2N)}.

Theorem 4. Assume that (10) and (11) hold true, and that

0 > f(β_0) + g(α_0),
0 < f(α_0) + g(β_0),
f_x(α_0) < 0,
f_x(α_0) + g_x(β_0) > 0,

for α_0 ≤ x ≤ β_0.

Then there exist monotone sequences {α_n} and {β_n} which satisfy (15), and which converge quadratically to r, the isolated zero of (10). The sequences {α_n} and {β_n} are generated by the following iterative scheme:

0 = f(α_n) + f_x(α_n)(α_{n+1} − α_n) + g(β_n) + g_x(α_n)(α_{n+1} − α_n), (34)
0 = f(β_n) + f_x(α_n)(β_{n+1} − β_n) + g(α_n) + g_x(α_n)(β_{n+1} − β_n). (35)

The proof of this theorem is not presented, as it follows the patterns established in the proofs of the previous theorems. The results above are found to be in agreement, when f is either convex or concave accordingly, with those presented in Lakshmikantham and Vatsala (1998).

Acknowledgments

The author wishes to express his utmost gratitude to Dr. V. Lakshmikantham at the Florida Institute of Technology for the original idea and his initial guidance, and to Dr. Ali Alikhani at Penn State Berks for his revisions and suggestions to this manuscript. Further recognition is rendered to the reviewer of the document, whose comments were essential in bringing this article to its final form.

References

Bellman, R.E. & Kalaba, R.E. (1965). Quasilinearization and Nonlinear Boundary Value Problems. New York: Elsevier.

Kelley, C.T. (1995). Iterative Methods for Linear and Nonlinear Equations. Philadelphia: SIAM.

Ladde, G.S., Lakshmikantham, V. & Vatsala, A.S. (1985). Monotone Iterative Technique for Nonlinear Differential Equations. Boston: Pitman.

Lakshmikantham, V. & Vatsala, A.S. (1998). Generalized Quasilinearization for Nonlinear Problems. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Lakshmikantham, V. & Vatsala, A.S. (2005). Generalized quasilinearization versus Newton's method. Applied Mathematics and Computation, 164, 523-530.

Ortega, J.M. & Rheinboldt, W.C. (1970). Iterative Solution of Nonlinear Equations in Several Variables. New York: Academic Press.

Potra, F.A. (1987).
On a monotone Newton-like method. Computing, 39, 33-46.