www.ccsenet.org/jmr Journal of Mathematics Research Vol. 2, No. 3; August 2010

Generalized Quasilinearization versus Newton's Method for Convex-Concave Functions

Cesar Martínez-Garza (Corresponding author)
Division of Science, The Pennsylvania State University - Berks Campus
Tulpehocken Rd., Reading, PA 19610, USA
Tel: 1-610-396-6438   E-mail: cxm58@psu.edu

Abstract

In this paper we use the method of Generalized Quasilinearization to obtain monotone Newton-like iterative schemes to solve the equation F(x) = 0, where F ∈ C[Ω, R], Ω ⊆ R. Here F(x) admits the decomposition F(x) = f(x) + g(x), where f(x) and g(x) are convex and concave functions in C[Ω, R], respectively. The monotone sequences of iterates are shown to converge quadratically. Four cases are explored in this manuscript.

Keywords: Newton's method, Generalized Quasilinearization, Monotone iterative technique, Convex functions

1. Introduction

Lakshmikantham and Vatsala (2005) showed that the method of Generalized Quasilinearization can be successfully employed to solve the equation

0 = f(x),   (1)

where f ∈ C[R, R], by comparing it to the initial value problem (IVP)

x′ = f(x),   x(0) = x_0,   (2)

on [0, T]. By exploiting the properties of the method of Generalized Quasilinearization (Lakshmikantham and Vatsala, 1998), they constructed a framework comparable to Newton's method for finding an isolated zero x_0 of (1). The simplicity and rapid convergence of Newton's method make it a natural first choice for solving (1), but its limitations have given rise to a broad body of work that continues to this day (Bellman and Kalaba, 1965; Kelley, 1995; Ortega and Rheinboldt, 1970; Potra, 1987). Generating Newton-like schemes for the solution of (1) from the method of Generalized Quasilinearization is a logical choice, since Quasilinearization follows a methodology similar to Newton's method, albeit for a seemingly different purpose (Bellman and Kalaba, 1965).
The main iteration formula for Newton's method is

x_{n+1} = x_n - f(x_n)/f_x(x_n),   n ≥ 0,   (3)

which requires that f_x(x) be continuous and nonzero in a neighborhood of a simple root of (1). This in turn requires knowledge of the monotone character of f(x). The similarities between Newton's method and standard Quasilinearization are enhanced when (3) is expressed in the form

0 = f(x_n) + f_x(x_n)(x_{n+1} - x_n),   (4)

and compared with the iterative scheme afforded by Quasilinearization to approximate (2):

x′_{n+1} = f(x_n) + f_x(x_n)(x_{n+1} - x_n),   x_{n+1}(0) = x_0.   (5)

The similarities between the method of Generalized Quasilinearization and Newton's method come to light when we look at the Newton-Fourier method, as shown next: let f be defined as in (1), and let D = [α_0, β_0] be a neighborhood of the simple zero of (1). If f(α_0) < 0 < f(β_0), f_x(x) > 0, and f_xx(x) > 0, meaning that f is monotone increasing and convex on D, then there exist monotone sequences {α_n} and {β_n} that converge quadratically to the simple zero of (1) in D. These sequences are generated by the iterative scheme

0 = f(β_n) + f_x(β_n)(β_{n+1} - β_n),   (6)
0 = f(α_n) + f_x(β_n)(α_{n+1} - α_n),   (7)

(Lakshmikantham and Vatsala, 1998). Although two sequences are required, the Newton-Fourier method is not more expensive than the traditional Newton's method, since the derivative computations are the same for both iterates (Potra, 1987). Comparatively, consider the IVP (2) with the following conditions:

f_x(x) ≤ 0,   f_xx(x) ≥ 0,   α′_0 ≤ f(β_0),   β′_0 ≥ f(α_0),

Published by Canadian Center of Science and Education
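As a concrete illustration of the Newton-Fourier scheme (6)-(7), the following Python sketch (our addition, not part of the original paper) brackets the simple zero of the hypothetical example f(x) = x² - 2 on D = [1, 2], where f(α_0) < 0 < f(β_0), f_x > 0, and f_xx > 0:

```python
# Newton-Fourier iteration (6)-(7): a minimal sketch, assuming f is
# increasing and convex on D = [alpha0, beta0] with f(alpha0) < 0 < f(beta0).
# Example function (our hypothetical choice): f(x) = x^2 - 2, root sqrt(2).

def newton_fourier(f, fx, alpha, beta, steps=6):
    """Return the monotone bracketing sequences {alpha_n}, {beta_n}."""
    alphas, betas = [alpha], [beta]
    for _ in range(steps):
        d = fx(beta)                  # one derivative serves both iterates
        beta = beta - f(beta) / d     # (6): 0 = f(b_n) + f_x(b_n)(b_{n+1} - b_n)
        alpha = alpha - f(alpha) / d  # (7): 0 = f(a_n) + f_x(b_n)(a_{n+1} - a_n)
        alphas.append(alpha)
        betas.append(beta)
    return alphas, betas

if __name__ == "__main__":
    f = lambda x: x * x - 2.0
    fx = lambda x: 2.0 * x
    alphas, betas = newton_fourier(f, fx, 1.0, 2.0)
    for a, b in zip(alphas, betas):
        print(f"{a:.10f}  {b:.10f}")  # alpha_n increases, beta_n decreases
```

Note that both updates reuse the single derivative evaluation f_x(β_n), which is why the method costs no more per step than the classical Newton iteration.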
where α_0(t) ≤ β_0(t), t ∈ [0, T], and α_0(t), β_0(t) are the lower and upper solutions of (2); see Ladde, Lakshmikantham, and Vatsala (1985) and Lakshmikantham and Vatsala (1998, 2005). The corresponding iterative scheme using Generalized Quasilinearization is

α′_{n+1} = f(β_n) + f_x(β_n)(α_{n+1} - α_n),   α_{n+1}(0) = x_0,   (8)
β′_{n+1} = f(α_n) + f_x(β_n)(β_{n+1} - β_n),   β_{n+1}(0) = x_0,   (9)

where α_0 ≤ x ≤ β_0 on [0, T]. These schemes give rise to sequences of iterates that converge uniformly and quadratically to x(t), the unique solution of (2) on [0, T]. For complete details refer to Lakshmikantham and Vatsala (1998). If one considers the case where f is monotone decreasing, the Newton-Fourier method is no longer applicable, while the solution of a comparable IVP with f monotone decreasing can be approximated with Generalized Quasilinearization. Below we present four cases in which iterative schemes are developed to approximate the solution of (1) when f can be decomposed into a convex and a concave function. The corresponding Generalized Quasilinearization theorems and proofs can be found in Lakshmikantham and Vatsala (1998, Section 1.3).

2. Statement of the problem

Consider the scalar equation

0 = F(x) = f(x) + g(x),   (10)

where f, g ∈ C[Ω, R], Ω ⊆ R, and [α_0, β_0] = D ⊆ Ω. Assume that (10) has a simple root x = r in D. Let f and g be convex and concave functions, respectively, such that f_x, g_x, f_xx, g_xx exist, are continuous, and satisfy

f_xx(x) > 0,   g_xx(x) < 0,   x ∈ Ω.   (11)

The purpose of this paper is to produce iterative schemes that will generate two monotone sequences {α_n} and {β_n} that converge to r from the left and from the right, respectively. We will show that this convergence is quadratic in the four following cases.

3. Main Results

Theorem 1.
Assume that (10) and (11) hold true, and that

0 < f(α_0) + g(α_0),   (12)
0 > f(β_0) + g(β_0),   (13)
f_x(β_0) + g_x(α_0) < 0,   (14)

where α_0 ≤ x ≤ β_0. Then there exist monotone sequences {α_n} and {β_n} such that

α_0 < α_1 < ··· < α_n < r < β_n < ··· < β_1 < β_0,   (15)

which converge quadratically to r. The sequences {α_n} and {β_n} are generated by the following iterative scheme:

0 = f(α_n) + f_x(α_n)(α_{n+1} - α_n) + g(α_n) + g_x(β_n)(α_{n+1} - α_n),   (16)
0 = f(β_n) + f_x(α_n)(β_{n+1} - β_n) + g(β_n) + g_x(β_n)(β_{n+1} - β_n).   (17)

Proof. Let p = α_0 - α_1; by (12) and (16) with n = 0,

0 < [f(α_0) + g(α_0)] - [f(α_0) + f_x(α_0)(α_1 - α_0) + g(α_0) + g_x(β_0)(α_1 - α_0)]
  = -[f_x(α_0)(α_1 - α_0) + g_x(β_0)(α_1 - α_0)],

that is, 0 < [f_x(α_0) + g_x(β_0)]p. By condition (11), f_x(α_0) < f_x(β_0) and g_x(β_0) < g_x(α_0), so if p > 0 then

0 < [f_x(α_0) + g_x(β_0)]p < [f_x(β_0) + g_x(α_0)]p.

Condition (14) rules this out, and p = 0 is impossible; thus p < 0, and α_0 < α_1. Next we show that β_1 < β_0; let p = β_1 - β_0. From (13) and (17) with n = 0, we have

0 < [f(β_0) + f_x(α_0)(β_1 - β_0) + g(β_0) + g_x(β_0)(β_1 - β_0)] - [f(β_0) + g(β_0)]
  = [f_x(α_0) + g_x(β_0)]p.
Again, by (11) and (14) we conclude that p < 0, making β_1 < β_0. To show that α_1 < β_1, we combine (16) and (17), both with n = 0:

0 = [f(α_0) + f_x(α_0)(α_1 - α_0) + g(α_0) + g_x(β_0)(α_1 - α_0)] - [f(β_0) + f_x(α_0)(β_1 - β_0) + g(β_0) + g_x(β_0)(β_1 - β_0)]
  = [f(α_0) - f(β_0)] + [g(α_0) - g(β_0)] + [f_x(α_0) + g_x(β_0)][α_1 - α_0 + β_0 - β_1].

Using the Mean Value Theorem (MVT) with σ_1, σ_2 ∈ (α_0, β_0) we next obtain

0 = [f_x(σ_1) + g_x(σ_2)][α_0 - β_0] + [f_x(α_0) + g_x(β_0)][α_1 - α_0 + β_0 - β_1].

Now, (11) states that f_x(x) and g_x(x) are strictly increasing and strictly decreasing, respectively, so f_x(α_0) < f_x(σ_1) < f_x(β_0) and g_x(β_0) < g_x(σ_2) < g_x(α_0). Since α_0 - β_0 < 0,

0 < [f_x(α_0) + g_x(β_0)][α_0 - β_0] + [f_x(α_0) + g_x(β_0)][α_1 - α_0 + β_0 - β_1]
  = [f_x(α_0) + g_x(β_0)]p < [f_x(β_0) + g_x(α_0)]p,

where p = α_1 - β_1 and the last inequality holds if p > 0. Then p < 0 by (14), and α_1 < β_1.

The next step is to show that α_1 < r < β_1. Let p = α_1 - r. The fact that f(r) + g(r) = 0, together with (16) with n = 0, yields

0 = [f(α_0) + f_x(α_0)(α_1 - α_0) + g(α_0) + g_x(β_0)(α_1 - α_0)] - [f(r) + g(r)]
  = [f(α_0) - f(r)] + [g(α_0) - g(r)] + [f_x(α_0) + g_x(β_0)][α_1 - α_0].

We again use the MVT, with γ_1, γ_2 ∈ (α_0, r), and (11) to obtain

0 = [f_x(γ_1) + g_x(γ_2)][α_0 - r] + [f_x(α_0) + g_x(β_0)][α_1 - α_0]
  < [f_x(α_0) + g_x(β_0)][α_0 - r] + [f_x(α_0) + g_x(β_0)][α_1 - α_0]
  = [f_x(α_0) + g_x(β_0)][α_0 - r + α_1 - α_0]
  = [f_x(α_0) + g_x(β_0)]p < [f_x(β_0) + g_x(α_0)]p,

where we have again appealed to the strictly increasing and strictly decreasing natures of f_x(x) and g_x(x), respectively (the first inequality uses f_x(α_0) < f_x(γ_1), g_x(β_0) < g_x(γ_2), and α_0 - r < 0; the last holds if p > 0). Condition (14) leads to p < 0, so α_1 < r. By letting p = r - β_1 one can similarly show that r < β_1. Now that the base case has been proven, by induction we see that (15) holds true, with lim_{n→∞} α_n = r = lim_{n→∞} β_n.
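Before turning to the convergence rate, the scheme (16)-(17) can be checked numerically. The following Python sketch is our illustration (not part of the paper), using the hypothetical decomposition f(x) = x² (convex) and g(x) = -e^x (concave) on D = [-1, 0], for which F(α_0) > 0 > F(β_0) and f_x(β_0) + g_x(α_0) < 0, so that (12)-(14) hold; the root is r ≈ -0.70347:

```python
import math

# Scheme (16)-(17) of Theorem 1: a minimal sketch under the assumed
# decomposition f(x) = x^2 (f_xx = 2 > 0) and g(x) = -e^x (g_xx < 0)
# on D = [-1, 0].  Solving (16) and (17) for the next iterates gives
#   a_{n+1} = a_n - F(a_n) / [f_x(a_n) + g_x(b_n)]
#   b_{n+1} = b_n - F(b_n) / [f_x(a_n) + g_x(b_n)]

f, fx = lambda x: x * x, lambda x: 2.0 * x
g, gx = lambda x: -math.exp(x), lambda x: -math.exp(x)

def theorem1_scheme(alpha, beta, steps=8):
    for _ in range(steps):
        d = fx(alpha) + gx(beta)   # shared slope f_x(a_n) + g_x(b_n) < 0
        alpha, beta = (alpha - (f(alpha) + g(alpha)) / d,
                       beta - (f(beta) + g(beta)) / d)
        yield alpha, beta

for a, b in theorem1_scheme(-1.0, 0.0):
    print(f"{a:.10f}  {b:.10f}")  # a_n increases to r, b_n decreases to r
```

Both iterates divide by the same linearized slope, mirroring the single-derivative economy of the Newton-Fourier method.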
To show the quadratic convergence of the scheme, we let p_{n+1} = r - α_{n+1} > 0 and q_{n+1} = β_{n+1} - r > 0, and consider first the estimate for p_{n+1}. Combining (10) and (16) we have

0 = [f(r) + g(r)] - [f(α_n) + f_x(α_n)(α_{n+1} - α_n) + g(α_n) + g_x(β_n)(α_{n+1} - α_n)]
  = [f(r) - f(α_n)] + [g(r) - g(α_n)] + [f_x(α_n) + g_x(β_n)][α_n - α_{n+1}].

The Mean Value Theorem with η_1, η_2 ∈ (α_n, r) produces the next result, where we let p_n = r - α_n. We also observe that by (11), f_x(η_1) < f_x(r) and g_x(η_2) < g_x(α_n):

0 = [f_x(η_1) + g_x(η_2)][r - α_n] + [f_x(α_n) + g_x(β_n)][α_n - r + r - α_{n+1}]
  < [f_x(r) + g_x(α_n)]p_n + [f_x(α_n) + g_x(β_n)][p_{n+1} - p_n]
  = [f_x(r) - f_x(α_n)]p_n - [g_x(β_n) - g_x(α_n)]p_n + [f_x(α_n) + g_x(β_n)]p_{n+1}.

Application of the MVT once more yields the results below, with ζ_1 ∈ (α_n, r) and ζ_2 ∈ (α_n, β_n); we also let q_n = β_n - r:

0 < f_xx(ζ_1)p_n² - g_xx(ζ_2)[β_n - r + r - α_n]p_n + [f_x(α_n) + g_x(β_n)]p_{n+1}
  = f_xx(ζ_1)p_n² - g_xx(ζ_2)[q_n + p_n]p_n + [f_x(α_n) + g_x(β_n)]p_{n+1}.

The convex-concave behavior of f and g given by (11), along with condition (14), yields

0 < |f_xx(ζ_1)|p_n² + |g_xx(ζ_2)|[q_n + p_n]p_n - |f_x(β_0) + g_x(α_0)|p_{n+1}
  < M_1 p_n² + M_2 (q_n + p_n)p_n - M_3 p_{n+1},   (18)
where

M_1 > |f_xx(x)|,   M_2 > |g_xx(x)|,   M_3 < |f_x(β_0) + g_x(α_0)|,   for x ∈ D.   (19)

Now, (18) yields

p_{n+1} < (M_1/M_3)p_n² + (M_2/M_3)(q_n p_n + p_n²).

Absorbing M_3 into the constants (we continue to write M_1, M_2 for M_1/M_3, M_2/M_3), and since (p_n - q_n)² = p_n² - 2p_n q_n + q_n² ≥ 0 implies p_n² + p_n q_n ≤ (3/2)p_n² + (1/2)q_n², we arrive at the desired result:

p_{n+1} < (M_1 + (3/2)M_2)p_n² + (M_2/2)q_n².   (20)

Similarly, for q_{n+1} = β_{n+1} - r > 0 we obtain the estimate

q_{n+1} < ((3/2)M_1 + M_2)q_n² + (M_1/2)p_n²,   (21)

where M_1, M_2 are defined in (19). The combination of the results (20) and (21) yields

p_{n+1} + q_{n+1} < [(M_1 + (3/2)M_2)p_n² + (M_2/2)q_n²] + [((3/2)M_1 + M_2)q_n² + (M_1/2)p_n²]
                 = ((3/2)M_1 + (3/2)M_2)p_n² + ((3/2)M_1 + (3/2)M_2)q_n².

Finally,

p_{n+1} + q_{n+1} = (r - α_{n+1}) + (β_{n+1} - r) < K(p_n² + q_n²),   (22)

where K = (3/2)(M_1 + M_2). This completes the proof of the theorem.

If F(x) is an increasing function with an isolated zero x = r ∈ D, one can modify Theorem 1 accordingly as follows.

Theorem 2. Assume that (10) and (11) hold true, and that

0 > f(α_0) + g(α_0),   (23)
0 < f(β_0) + g(β_0),   (24)
f_x(α_0) + g_x(β_0) > 0,   (25)

where α_0 ≤ x ≤ β_0. Then there exist monotone sequences {α_n} and {β_n} which satisfy (15), and which converge quadratically to r. The sequences {α_n} and {β_n} are generated by the following iterative scheme:

0 = f(α_n) + f_x(β_n)(α_{n+1} - α_n) + g(α_n) + g_x(α_n)(α_{n+1} - α_n),   (26)
0 = f(β_n) + f_x(β_n)(β_{n+1} - β_n) + g(β_n) + g_x(α_n)(β_{n+1} - β_n).   (27)

The mechanics of the proof of Theorem 2 are the same as those of Theorem 1; here we present only its highlights. Letting p = α_0 - α_1 and combining (23) with (26) with n = 0 yields the same result as letting p = β_1 - β_0 and combining (24) with (27) with n = 0:

0 > [f_x(β_0) + g_x(α_0)]p > [f_x(α_0) + g_x(β_0)]p,

which by (11) and (25) indicates that p < 0 in both cases; thus α_0 < α_1 and β_1 < β_0.
Likewise, when showing that α_1 < r < β_1 we obtain

0 > [f_x(β_0) + g_x(α_0)]p > [f_x(α_0) + g_x(β_0)]p,

where we first let p = α_1 - r and then p = r - β_1. The n-th case becomes

0 > [f_x(β_n) + g_x(α_n)]p > [f_x(α_0) + g_x(β_0)]p,

which establishes that α_n < r < β_n, from which (15) follows. Quadratic convergence follows in the same fashion as in Theorem 1.
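The scheme of Theorem 2 admits the same kind of numerical sketch. Here is a hedged Python illustration (ours, not the paper's) with the hypothetical choice f(x) = e^x (convex) and g(x) = -x² (concave) on D = [-1, 0], so that F = f + g is increasing and the hypotheses of Theorem 2 hold; solving each linearized equation for the next iterate gives α_{n+1} = α_n - F(α_n)/[f_x(β_n) + g_x(α_n)] and β_{n+1} = β_n - F(β_n)/[f_x(β_n) + g_x(α_n)]:

```python
import math

# Theorem 2's scheme: a minimal sketch for the assumed decomposition
# f(x) = e^x (convex), g(x) = -x^2 (concave) on D = [-1, 0], where
# F(a0) < 0 < F(b0) and f_x(a0) + g_x(b0) > 0.  Root r ~ -0.70347.

f, fx = lambda x: math.exp(x), lambda x: math.exp(x)
g, gx = lambda x: -x * x, lambda x: -2.0 * x

alpha, beta = -1.0, 0.0
for n in range(8):
    d = fx(beta) + gx(alpha)       # shared slope f_x(b_n) + g_x(a_n) > 0
    alpha, beta = (alpha - (f(alpha) + g(alpha)) / d,
                   beta - (f(beta) + g(beta)) / d)
    print(f"{alpha:.10f}  {beta:.10f}")  # a_n -> r <- b_n monotonically
```

Compared with Theorem 1, only the points at which f_x and g_x are evaluated change; the bracketing structure of the iteration is identical.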
Theorem 3. Assume that (10) and (11) hold true, and that

0 < f(β_0) + g(α_0),   (28)
0 > f(α_0) + g(β_0),   (29)
f_x(β_0) < 0,   (30)
f_x(β_0) + g_x(α_0) < 0,   (31)

where α_0 ≤ x ≤ β_0. Then there exist monotone sequences {α_n} and {β_n} which satisfy (15), and which converge quadratically to r, the isolated zero of (10). The sequences {α_n} and {β_n} are generated by the following iterative scheme:

0 = f(β_n) + f_x(β_n)(α_{n+1} - α_n) + g(α_n) + g_x(β_n)(α_{n+1} - α_n),   (32)
0 = f(α_n) + f_x(β_n)(β_{n+1} - β_n) + g(β_n) + g_x(β_n)(β_{n+1} - β_n).   (33)

The proof follows the same strategy as the previous proofs, but with some variation in the intermediate results, which we address below.

Proof. Begin by subtracting (32) with n = 0 from (28):

0 < f(β_0) + g(α_0) - [f(β_0) + f_x(β_0)(α_1 - α_0) + g(α_0) + g_x(β_0)(α_1 - α_0)]
  = f_x(β_0)(α_0 - α_1) + g_x(β_0)(α_0 - α_1)
  < [f_x(β_0) + g_x(α_0)](α_0 - α_1).

By (31) we conclude that α_0 < α_1. Next, combine (29) and (33) with n = 0:
0 < f(α_0) + f_x(β_0)(β_1 - β_0) + g(β_0) + g_x(β_0)(β_1 - β_0) - [f(α_0) + g(β_0)]
  = f_x(β_0)(β_1 - β_0) + g_x(β_0)(β_1 - β_0)
  < [f_x(β_0) + g_x(α_0)](β_1 - β_0).

Again, by (31) we conclude that β_1 < β_0. Now, subtract (33) with n = 0 from (32) with n = 0:

0 = f(β_0) + f_x(β_0)(α_1 - α_0) + g(α_0) + g_x(β_0)(α_1 - α_0) - [f(α_0) + f_x(β_0)(β_1 - β_0) + g(β_0) + g_x(β_0)(β_1 - β_0)]
  = [f(β_0) - f(α_0)] - [g(β_0) - g(α_0)] + f_x(β_0)(α_1 - α_0 - β_1 + β_0) + g_x(β_0)(α_1 - α_0 - β_1 + β_0)
  = f_x(σ)(β_0 - α_0) - g_x(η)(β_0 - α_0) + f_x(β_0)(α_1 - α_0 - β_1 + β_0) + g_x(β_0)(α_1 - α_0 - β_1 + β_0)
  < f_x(β_0)(β_0 - α_0) - g_x(β_0)(β_0 - α_0) + f_x(β_0)(α_1 - α_0 - β_1 + β_0) + g_x(β_0)(α_1 - α_0 - β_1 + β_0)
  = 2f_x(β_0)(β_0 - α_0) + f_x(β_0)(α_1 - β_1) + g_x(β_0)(α_1 - β_1)
  < 2f_x(β_0)(β_0 - α_0) + f_x(β_0)(α_1 - β_1) + g_x(α_0)(α_1 - β_1),

where σ, η ∈ (α_0, β_0) by the MVT. By (30), f_x(β_0) < 0, and since β_0 - α_0 > 0 we obtain

0 < f_x(β_0)(α_1 - β_1) + g_x(α_0)(α_1 - β_1) = [f_x(β_0) + g_x(α_0)](α_1 - β_1),

which establishes that α_1 < β_1 by virtue of (31). We now establish that α_1 < r < β_1. Subtract (10) with x = r from (32) with n = 0:
0 = f(β_0) + f_x(β_0)(α_1 - α_0) + g(α_0) + g_x(β_0)(α_1 - α_0) - [f(r) + g(r)]
  = [f(β_0) - f(r)] - [g(r) - g(α_0)] + f_x(β_0)(α_1 - α_0) + g_x(β_0)(α_1 - α_0)
  = f_x(σ)(β_0 - r) - g_x(η)(r - α_0) + f_x(β_0)(α_1 - α_0) + g_x(β_0)(α_1 - α_0)
  < f_x(β_0)(β_0 - r) + g_x(r)(α_0 - r) + f_x(β_0)(α_1 - α_0) + g_x(r)(α_1 - α_0)
  = f_x(β_0)(β_0 - α_0) + [f_x(β_0) + g_x(r)](α_1 - r)
  < [f_x(β_0) + g_x(α_0)](α_1 - r).

Here r < σ < β_0 and α_0 < η < r by the MVT, and condition (11) implies that f_x(σ) < f_x(β_0) and g_x(r) < g_x(η) < g_x(α_0). We have again appealed to (30), to discard f_x(β_0)(β_0 - α_0) < 0, and finally to (31), so we conclude that α_1 - r < 0, i.e., α_1 < r. Similarly, subtract (33) with n = 0 from (10) with x = r:

0 = f(r) + g(r) - [f(α_0) + f_x(β_0)(β_1 - β_0) + g(β_0) + g_x(β_0)(β_1 - β_0)]
  = [f(r) - f(α_0)] - [g(β_0) - g(r)] + f_x(β_0)(β_0 - β_1) + g_x(β_0)(β_0 - β_1)
  = f_x(σ)(r - α_0) - g_x(η)(β_0 - r) + f_x(β_0)(β_0 - β_1) + g_x(β_0)(β_0 - β_1)
  < f_x(β_0)(r - α_0) + g_x(β_0)(r - β_0) + f_x(β_0)(β_0 - β_1) + g_x(β_0)(β_0 - β_1)
  = f_x(β_0)(β_0 - α_0) + [f_x(β_0) + g_x(β_0)](r - β_1)
  < [f_x(β_0) + g_x(α_0)](r - β_1),

where now α_0 < σ < r and r < η < β_0, and we used f_x(σ) < f_x(β_0), g_x(β_0) < g_x(η), and then (30) together with g_x(β_0) < g_x(α_0). By (31) we conclude that r < β_1.

So we now conclude that α_0 < α_1 < r < β_1 < β_0; hence, by induction we obtain (15). Quadratic convergence now follows. Let p_{n+1} = r - α_{n+1} > 0 and q_{n+1} = β_{n+1} - r > 0. Subtracting (32) from (10) with x = r we obtain the following inequalities.
0 = f(r) + g(r) - [f(β_n) + f_x(β_n)(α_{n+1} - α_n) + g(α_n) + g_x(β_n)(α_{n+1} - α_n)]
  = -[f(β_n) - f(r)] + [g(r) - g(α_n)] - [f_x(β_n) + g_x(β_n)](α_{n+1} - r + r - α_n)
  = -f_x(σ)(β_n - r) + g_x(η)(r - α_n) + [f_x(β_n) + g_x(β_n)](p_{n+1} - p_n)
  = -f_x(σ)(β_n - α_n + α_n - r) + g_x(η)p_n + [f_x(β_n) + g_x(β_n)](p_{n+1} - p_n)
  = -f_x(σ)(β_n - α_n) + f_x(σ)p_n + g_x(η)p_n + [f_x(β_n) + g_x(β_n)](p_{n+1} - p_n),

or, multiplying through by -1,

0 = f_x(σ)(β_n - α_n) - f_x(σ)p_n - g_x(η)p_n + [f_x(β_n) + g_x(β_n)](p_n - p_{n+1})
  < f_x(β_n)(β_n - α_n) - f_x(r)p_n - g_x(r)p_n + [f_x(β_n) + g_x(β_n)](p_n - p_{n+1}).

Again f_x(β_n) < 0 and, by the MVT, r < σ < β_n and α_n < η < r. By (11), f_x(β_n) > f_x(σ) > f_x(r) and -g_x(r) > -g_x(η). Discarding f_x(β_n)(β_n - α_n) < 0, we obtain

0 < [f_x(β_n) - f_x(r)]p_n + [g_x(β_n) - g_x(r)]p_n - [f_x(β_n) + g_x(β_n)]p_{n+1}
  < [f_x(β_n) - f_x(r)]p_n + [g_x(β_n) - g_x(r)]p_n - [f_x(β_0) + g_x(α_0)]p_{n+1}
  = f_xx(ζ)(β_n - r)p_n + g_xx(ξ)(β_n - r)p_n - [f_x(β_0) + g_x(α_0)]p_{n+1}
  = [f_xx(ζ) + g_xx(ξ)]p_n q_n - [f_x(β_0) + g_x(α_0)]p_{n+1}.

Since p_n² - 2p_n q_n + q_n² ≥ 0, it follows that p_n q_n ≤ (1/2)(p_n² + q_n²). So, placing bounds on the derivatives as before (with M_1, M_2 as in (19), and N a positive constant with N ≤ |f_x(β_0) + g_x(α_0)|), we obtain

N p_{n+1} < ((M_1 + M_2)/2)(p_n² + q_n²),

and therefore

p_{n+1} < ((M_1 + M_2)/(2N))(p_n² + q_n²).
Now, subtract (10) with x = r from (33):

0 = f(r) + g(r) - [f(α_n) + f_x(β_n)(β_{n+1} - β_n) + g(β_n) + g_x(β_n)(β_{n+1} - β_n)]
  = [f(r) - f(α_n)] - [g(β_n) - g(r)] - [f_x(β_n) + g_x(β_n)](β_{n+1} - r + r - β_n)
  = f_x(σ)(r - β_n + β_n - α_n) - g_x(η)q_n - [f_x(β_n) + g_x(β_n)](q_{n+1} - q_n)
  = -f_x(σ)q_n + f_x(σ)(β_n - α_n) - g_x(η)q_n - [f_x(β_n) + g_x(β_n)](q_{n+1} - q_n),

where we again used the MVT, with α_n < σ < r and r < η < β_n. By (11) we have f_x(α_n) < f_x(σ) < f_x(β_n) and g_x(β_n) < g_x(η); using condition (30) we produce the following inequalities:

0 < -f_x(α_n)q_n + f_x(β_n)(β_n - α_n) - g_x(β_n)q_n - [f_x(β_n) + g_x(β_n)](q_{n+1} - q_n)
  < -f_x(α_n)q_n - g_x(β_n)q_n + g_x(β_n)q_n + f_x(β_n)q_n - [f_x(β_n) + g_x(β_n)]q_{n+1}
  = [f_x(β_n) - f_x(α_n)]q_n - [f_x(β_n) + g_x(β_n)]q_{n+1}
  = f_xx(ζ)(β_n - α_n)q_n - [f_x(β_n) + g_x(β_n)]q_{n+1}
  = f_xx(ζ)(β_n - r + r - α_n)q_n - [f_x(β_n) + g_x(β_n)]q_{n+1}
  = f_xx(ζ)(q_n + p_n)q_n - [f_x(β_n) + g_x(β_n)]q_{n+1}
  = f_xx(ζ)(q_n² + p_n q_n) - [f_x(β_n) + g_x(β_n)]q_{n+1}
  ≤ f_xx(ζ)((3/2)q_n² + (1/2)p_n²) - [f_x(β_n) + g_x(β_n)]q_{n+1}.

Here we have used the fact that p_n² - 2p_n q_n + q_n² ≥ 0 implies q_n² + p_n q_n ≤ (3/2)q_n² + (1/2)p_n². Now, placing bounds on the derivatives as before,

0 < M_1((3/2)q_n² + (1/2)p_n²) - N q_{n+1},

hence

q_{n+1} < (M_1/(2N))p_n² + (3M_1/(2N))q_n².

Combining both results,

p_{n+1} + q_{n+1} < ((M_1 + M_2)/(2N))(p_n² + q_n²) + (M_1/(2N))p_n² + (3M_1/(2N))q_n²
                 = ((2M_1 + M_2)/(2N))p_n² + ((4M_1 + M_2)/(2N))q_n².

Finally,

p_{n+1} + q_{n+1} = (r - α_{n+1}) + (β_{n+1} - r) < K(p_n² + q_n²),

where K = max{(2M_1 + M_2)/(2N), (4M_1 + M_2)/(2N)}.

Theorem 4. Assume that (10) and (11) hold true, and that

0 > f(β_0) + g(α_0),
0 < f(α_0) + g(β_0),
f_x(α_0) < 0,
f_x(α_0) + g_x(β_0) > 0,

where α_0 ≤ x ≤ β_0.
Then there exist monotone sequences {α_n} and {β_n} which satisfy (15), and which converge quadratically to r, the isolated zero of (10). The sequences {α_n} and {β_n} are generated by the following iterative scheme:

0 = f(α_n) + f_x(α_n)(α_{n+1} - α_n) + g(β_n) + g_x(α_n)(α_{n+1} - α_n),   (34)
0 = f(β_n) + f_x(α_n)(β_{n+1} - β_n) + g(α_n) + g_x(α_n)(β_{n+1} - β_n).   (35)

The proof of this theorem is not presented, as it follows the patterns established in the proofs of the previous theorems. The results above are found to be in agreement, when f is either convex or concave accordingly, with those presented in Lakshmikantham and Vatsala (1998).

Acknowledgments

The author wishes to express his utmost gratitude to Dr. V. Lakshmikantham at the Florida Institute of Technology for the original idea and his initial guidance, and to Dr. Ali Alikhani at Penn State Berks for his revisions and suggestions on this manuscript. Further recognition is rendered to the reviewer of the document, whose comments were essential in bringing this article to its final form.

References

Bellman, R.E. & Kalaba, R.E. (1965). Quasilinearization and Nonlinear Boundary Value Problems. New York: Elsevier.

Kelley, C.T. (1995). Iterative Methods for Linear and Nonlinear Equations. Philadelphia: SIAM.

Ladde, G.S., Lakshmikantham, V. & Vatsala, A.S. (1985). Monotone Iterative Techniques for Nonlinear Differential Equations. Boston: Pitman.

Lakshmikantham, V. & Vatsala, A.S. (1998). Generalized Quasilinearization for Nonlinear Problems. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Lakshmikantham, V. & Vatsala, A.S. (2005). Generalized quasilinearization versus Newton's method. Applied Mathematics and Computation, 164, 523-530.

Ortega, J.M. & Rheinboldt, W.C. (1970). Iterative Solution of Nonlinear Equations in Several Variables. New York: Academic Press.

Potra, F.A. (1987).
On a monotone Newton-like method. Computing, 39, 33-46.

ISSN 1916-9795   E-ISSN 1916-9809