1. Introduction
Finding the multiple zeros of nonlinear equations is an important and challenging task in numerical analysis and the applied sciences [1,2]. In this study, we consider iterative methods to find a multiple root α (having a known multiplicity m) of a nonlinear equation of the following form:

Φ(x) = 0,

where Φ is an analytic function in a region D surrounding the required zero α.
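As standard background (not specific to this paper's derivation), the defining property of a zero α of multiplicity m, which underlies all of the Taylor expansions used below, can be written as:

```latex
% A zero \alpha of multiplicity m satisfies
\begin{gather*}
\Phi(\alpha) = \Phi'(\alpha) = \cdots = \Phi^{(m-1)}(\alpha) = 0,
\qquad \Phi^{(m)}(\alpha) \neq 0, \\
\text{so that near } \alpha: \quad
\Phi(x) = \frac{\Phi^{(m)}(\alpha)}{m!}\,(x-\alpha)^{m}\bigl(1 + O(x-\alpha)\bigr).
\end{gather*}
```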
Several higher-order techniques have been developed and analyzed in the literature (see [3,4,5,6,7,8,9,10,11,12,13]). Most of them are based on the modified Newton's method [14], which is given by:

x_{k+1} = x_k − m Φ(x_k)/Φ′(x_k),  k = 0, 1, 2, …   (1)

It has second-order convergence and is one of the best-known one-point iterative methods for multiple zeros. However, it requires the evaluation of the first-order derivative at each step, and finding the derivative is not always an easy task. In addition, derivative-free methods are important in cases where the derivative of the function is very small, does not exist, or is difficult to evaluate.
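The modified Newton iteration (1) can be sketched as follows. The test function and starting point below are illustrative choices of ours, not taken from the paper; the scheme itself is the standard Schröder method for a root of known multiplicity m.

```python
import math

# Modified Newton (Schröder) iteration for a root of known multiplicity m:
#     x_{k+1} = x_k - m * f(x_k) / f'(x_k)
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = m * f(x) / df(x)
        x -= step
        if abs(step) < tol:   # stop when the correction is negligible
            break
    return x

# Illustrative example: f(x) = (x - 1)^3 * e^x has a zero of multiplicity 3 at x = 1.
f  = lambda x: (x - 1.0) ** 3 * math.exp(x)
df = lambda x: (x - 1.0) ** 2 * math.exp(x) * (3.0 + (x - 1.0))

root = modified_newton(f, df, x0=1.5, m=3)
print(root)  # converges to 1.0
```

Without the multiplicity factor m, plain Newton's method would converge only linearly to such a root; the factor restores the quadratic convergence claimed above.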
Traub [15] proposed the following derivative-free method, obtained by using the approximation

Φ′(x_k) ≈ Φ[w_k, x_k] = (Φ(w_k) − Φ(x_k))/(w_k − x_k), with w_k = x_k + βΦ(x_k), β ∈ ℝ∖{0},

for the derivative Φ′(x_k) in the Newton method (1). Here, Φ[w_k, x_k] is a first-order divided difference. Then, the recursive scheme (1) takes the form of the Traub–Steffensen method, which is defined as below:

x_{k+1} = x_k − m Φ(x_k)/Φ[w_k, x_k].   (2)
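The Traub–Steffensen iteration (2) can be sketched as below. The test function, starting point, and the value β = 0.01 are illustrative choices of ours; note that no derivative of f is evaluated anywhere.

```python
# Traub–Steffensen iteration (2) for a multiple root of known multiplicity m:
#     w_k     = x_k + beta * f(x_k)
#     x_{k+1} = x_k - m * f(x_k) / f[w_k, x_k]
# where f[w, x] = (f(w) - f(x)) / (w - x) is a first-order divided difference.
def traub_steffensen(f, x0, m, beta=0.01, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        w = x + beta * fx
        if w == x:            # divided difference undefined; f(x) is negligible
            break
        dd = (f(w) - fx) / (w - x)
        step = m * fx / dd
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: (x * x - 1.0) ** 2      # double roots at x = +1 and x = -1 (m = 2)
root = traub_steffensen(f, x0=1.4, m=2)
print(root)  # approaches 1.0
```

A small β keeps w_k close to x_k, so the divided difference closely tracks the derivative and the second-order behavior of (1) is retained.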
Recently, some higher-order derivative-free methods have been presented in the literature (see [16,17,18,19]). Kumar et al. [16] suggested a second-order one-point derivative-free scheme. In addition, Behl et al. [17] and Kumar et al. [18,19] advanced fourth-order convergent derivative-free methods for multiple zeros. According to the Kung–Traub hypothesis [20], a multipoint method without memory that uses n functional evaluations per iteration can attain convergence order at most 2^{n−1}. The methods of [17,18,19] require three functional evaluations per iteration and reach order four; therefore, they have optimal convergence order.
The purpose of this study is to design new efficient derivative-free techniques that are capable of achieving high-order convergence with a minimum number of evaluations of the involved function. Following these ideas, we derive two-step derivative-free techniques that have fourth-order convergence. The new methods consume only three evaluations of the involved function per iteration, so they are optimal schemes in the sense of the Kung–Traub hypothesis [20]. The algorithm is based on the Traub–Steffensen method (2) and is further modified in the second stage by using a Traub–Steffensen-like iteration. Numerical results also demonstrate the superiority of our methods over the existing ones.
2. Development of Scheme
For a zero of multiplicity m, we propose the following two-step iterative approach (3), in which the first step is the Traub–Steffensen iteration (2) and the second step is a Traub–Steffensen-like iteration equipped with a weight function H.
The convergence results are established for different values of the multiplicity. Firstly, we consider the lowest multiplicity case and establish the fourth-order convergence in the following Theorem 1.
Theorem 1. Consider that α is a multiple zero of Φ having multiplicity m. We also assume that Φ is an analytic function in a region D that contains a vicinity of the required zero α. Then, the algorithm (3) has fourth-order convergence, provided the weight function H satisfies suitable conditions on its value and low-order derivatives at the origin.

Proof. The error at the k-th stage is given by e_k = x_k − α. Adopting the Taylor series expansion of the function Φ about α, with the assumptions Φ(α) = Φ′(α) = ⋯ = Φ^{(m−1)}(α) = 0 and Φ^{(m)}(α) ≠ 0, we have:

Φ(x_k) = (Φ^{(m)}(α)/m!) e_k^m (1 + C_1 e_k + C_2 e_k^2 + C_3 e_k^3 + ⋯),   (4)

where C_j = (m!/(m + j)!) Φ^{(m+j)}(α)/Φ^{(m)}(α) for j ∈ ℕ.
Similarly, we have the following Taylor series expansion of Φ(w_k) about α, written in powers of the error e_{w_k} = w_k − α = e_k + βΦ(x_k):

Φ(w_k) = (Φ^{(m)}(α)/m!) e_{w_k}^m (1 + C_1 e_{w_k} + C_2 e_{w_k}^2 + ⋯).   (5)

By inserting expressions (4) and (5) in the first step of (3), we obtain:
Expanding Φ(y_k) in a Taylor series about α, it follows that:

Using (4), (5) and (7), we have:
and

Expressions (8) and (9) show that both arguments of the weight function are of order e_k. Then, we expand the weight function H in a Taylor series in the neighborhood of the origin in the following way:

Inserting (4)–(10) in the last step of (3), we obtain:
where

Here, the expressions of the higher-order coefficients are not reproduced explicitly due to their considerable length. We set the coefficients of e_k, e_k^2 and e_k^3 equal to zero simultaneously and solve the resulting equations. Then, we obtain:

where

By using expression (12) in (11), we arrive at the following error equation:

Hence, Theorem 1 is proved. □
Theorem 2. We adopt the statement of Theorem 1 in the same sense. Then, the algorithm (3) has at least fourth-order convergence for the next multiplicity case, provided the weight function H satisfies the corresponding conditions.

Proof. Keeping in mind that the derivatives of Φ up to order m − 1 vanish at α and that Φ^{(m)}(α) ≠ 0, we expand Φ(x_k) about α with the help of the Taylor series:

where the coefficients are defined analogously to those in (4).
Similarly, the Taylor series expansion of Φ(w_k) about α, with coefficients defined as before, provides the following expression:

Using (13) and (14) in the first step of (3), we get:
Expanding Φ(y_k) in a Taylor series about α gives:

From the expressions (13), (14) and (16), we have:
By using (10) and (13)–(18) in the last step of (3), we obtain:

where the coefficients depend on m, β, the Taylor coefficients of Φ and the derivatives of H. We set the appropriate coefficients equal to zero and solve the resulting equations. Then, we obtain:

Adopting the expression (20) in (19), we have the following error equation:

Hence, Theorem 2 is proved. □
3. Generalization of the Method
For the remaining values of the multiplicity, we establish the following Theorem 3 for the method (3).
Theorem 3. Using the statement of Theorem 1, the algorithm (3) for the general case has at least fourth-order convergence, provided the weight function H satisfies the corresponding conditions; moreover, the error equation of (3) is then of fourth order.

Proof. Keeping in mind that the derivatives of Φ up to order m − 1 vanish at α and that Φ^{(m)}(α) ≠ 0, we have the following Taylor series expansion of Φ(x_k) about α:

where the coefficients are defined as in (4).
Similarly, expanding Φ(w_k) about α, with coefficients defined as before, leads us to:

Using the expressions (21) and (22) in the first step of Equation (3), we obtain:
Expansion of Φ(y_k) around α yields:

Using (21), (22) and (24) in the expressions of the two arguments of the weight function, we have:

and
Inserting expressions (10) and (21)–(26) in the second step of (3), we have:

where the coefficients depend on m, β, the Taylor coefficients of Φ and the derivatives of H. If we set the coefficients of e_k, e_k^2 and e_k^3 equal to zero and solve the resulting equations, we get:

Adopting the expression (28) in (27), we have the following error equation:

Hence, the theorem is proved. □
Remark 1. The algorithm (3) reaches fourth-order convergence provided the conditions of Theorem 3 are satisfied. Only three evaluations of the function, namely Φ(x_k), Φ(w_k) and Φ(y_k), are used per iteration in order to achieve this convergence rate. Therefore, the Kung–Traub hypothesis [20] confirms the optimal convergence of our algorithm (3).

Remark 2. It is worth noting that the parameter b, which is employed in the weight function, appears only in the error equations of certain multiplicity cases. In the remaining cases, we have noticed that it occurs only in terms of higher order. In general, such terms are expensive to compute. Furthermore, these terms are not required to demonstrate the desired fourth-order convergence.
Some Special Cases
We have explored several forms of the weight function H which satisfy the conditions of Theorems 1–3. However, some of the important simple forms are given below:
The corresponding technique to each of the above forms can be expressed as follows:
4. Numerical Results
We choose the combinations of the weight functions (30)–(32) with scheme (3) for one value of the parameter β, and call the resulting methods (M1a), (M2a) and (M3a), respectively. In addition, we again consider the combinations of (30)–(32) in the scheme (3) with another value of β, denoted by (M1b), (M2b) and (M3b), respectively. The examples not only illustrate the feasibility and effectiveness of our methods, but also confirm the theoretical aspects. In order to verify the computational order of convergence (COC), we use the following formula (see [21]):

COC ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|).   (33)
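The COC computation above can be sketched as follows. The formula is the well-known approximation of Cordero and Torregrosa [21]; the iterating method (modified Newton) and the test function are illustrative choices of ours, used only to generate a sequence of iterates.

```python
import math

# Computational order of convergence (COC) from the last four iterates:
#   COC ≈ ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)
def coc(xs):
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# Generate iterates with the modified Newton method (1) applied to
# f(x) = (x - 2)^2 * exp(x), which has a double root at x = 2.
f  = lambda x: (x - 2.0) ** 2 * math.exp(x)
df = lambda x: math.exp(x) * (x - 2.0) * x
xs = [3.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - 2.0 * f(x) / df(x))

print(round(coc(xs), 2))  # close to the theoretical order 2
```

In the tables below, the same computation (carried out in multiple-precision arithmetic) yields COC values close to 4 for all fourth-order methods.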
The performance of the new algorithms is compared with the following six known methods:
- (i)
Li–Liao–Cheng method (LLC) [
5]:
- (ii)
Li–Cheng–Neta method (LCN) [
6]:
where
- (iii)
Sharma–Sharma method (SSM) [
7]:
- (iv)
Zhou–Chen–Song method (ZCS) [
8]:
- (v)
Soleymani–Babajee–Lotfi method (SBM) [
10]:
where
- (vi)
Kansal–Kanwar–Bhatia method (KKB) [
13]:
where
The calculations are performed in Mathematica [22] using multiple-precision arithmetic. We consider five nonlinear problems for the comparisons, which are depicted in Table 1. The numerical results in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 include:
The multiplicity of the corresponding function.
The number of iterations k based on the stopping criterion.
The first three estimated errors in the iterations.
The computational order of convergence (COC) using (33).
The CPU time required for the execution of the program, computed by the Mathematica command “TimeUsed[ ]”.
The configuration of the computer used for the calculation work is given below:
Make: HP
Installed memory (RAM): 4 GB
Windows edition: Windows 7 Professional
System type: 32-bit operating system.
The multiplicity of the above considered functions is calculated by the following formula:

We applied this formula in our method M1 and obtained the multiplicity. The obtained results are depicted in Table 2. Similarly, we can apply M2 and M3.
Remark 3. The numerical results in Table 3, Table 4, Table 5, Table 6 and Table 7 show that the proposed techniques exhibit consistent convergence behavior. Our methods consume the same or a smaller number of iterations for the considered problems in comparison with the other mentioned methods. Table 3, Table 4, Table 5, Table 6 and Table 7 also demonstrate that the estimated errors of the presented algorithms are smaller than those of the other methods. Furthermore, our methods execute in a shorter span of time than the existing ones.

5. Conclusions
This study proposed some optimal derivative-free numerical techniques for multiple zeros of nonlinear equations. The fourth-order convergence was established under standard hypotheses. The applicability of the new techniques has been illustrated on five nonlinear equations derived from real-life situations. The performance of our methods was compared with that of other existing methods of identical order. The numerical results show that the new derivative-free algorithms are superior to the existing ones.
Author Contributions
All authors have contributed equally to the development of the paper. All authors have read and agreed to the published version of the manuscript.
Funding
This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. D-130-713-1443.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-130-713-1443). The authors, therefore, acknowledge with thanks DSR technical and financial support.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
- Sahlan, M.N.; Afshari, H. Three new approaches for solving a class of strongly nonlinear two-point boundary value problems. Bound. Value Probl. 2021, 2021, 60. [Google Scholar] [CrossRef]
- Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269. [Google Scholar] [CrossRef]
- Dong, C. A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 1987, 21, 363–367. [Google Scholar] [CrossRef]
- Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292. [Google Scholar]
- Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135. [Google Scholar] [CrossRef] [Green Version]
- Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881. [Google Scholar] [CrossRef]
- Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Math. Appl. 2011, 235, 4199–4206. [Google Scholar] [CrossRef] [Green Version]
- Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef] [Green Version]
- Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353. [Google Scholar] [CrossRef]
- Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400. [Google Scholar] [CrossRef] [Green Version]
- Geum, Y.H.; Kim, Y.I.; Neta, B. Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points. J. Comp. Appl. Math. 2018, 333, 131–156. [Google Scholar] [CrossRef]
- Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367. [Google Scholar]
- Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365. [Google Scholar] [CrossRef] [Green Version]
- Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982. [Google Scholar]
- Kumar, D.; Sharma, J.R.; Argyros, I.K. Optimal one-point iterative function free from derivatives for multiple roots. Mathematics 2020, 8, 709. [Google Scholar] [CrossRef]
- Behl, R.; Bhalla, S.; Magreñán, Á.A.; Moysi, A. An Optimal Derivative Free Family of Chebyshev-Halley’s Method for Multiple Zeros. Mathematics 2021, 9, 546. [Google Scholar] [CrossRef]
- Kumar, S.; Kumar, D.; Sharma, J.R.; Jäntschi, L. A Family of Derivative Free Optimal Fourth Order Methods for Computing Multiple Roots. Symmetry 2020, 12, 1969. [Google Scholar] [CrossRef]
- Kumar, S.; Kumar, D.; Sharma, J.R.; Argyros, I.K. An efficient class of fourth-order derivative-free method for multiple-roots. Int. J. Nonlinear Sci. Numer. Simul. 2021, 2021, 000010151520200161. [Google Scholar] [CrossRef]
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003. [Google Scholar]
- Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
- Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992. [Google Scholar]
Table 1.
The considered five nonlinear problems for numerical experiments.
Problems | Root | Initial Guess
---|---|---
Van der Waals problem [23] | 1.75 | 2.6
Academic problem | 0 | 0.1
Planck law radiation problem [23] | 4.9651142317… | 5.6
Manning problem for isentropic supersonic flow [24] | 1.8411294068… | 1.5
Complex root problem | i | 1.1 i
Table 2.
Multiplicity of considered problems in
Table 1.
Problems | Multiplicity
---|---
Van der Waals problem | 2
Academic problem | 3
Planck law radiation problem | 3
Manning problem for isentropic supersonic flow | 4
Complex root problem | 5
Table 3.
Results of the methods for problem .
Methods | k | | | | COC | CPU |
---|---|---|---|---|---|---
LLC | 6 | | | | 4.000 | 0.0748 |
LCN | 6 | | | | 4.000 | 0.0932 |
SSM | 6 | | | | 4.000 | 0.0779 |
ZCS | 6 | | | | 4.000 | 0.0767 |
SBM | 6 | | | | 4.000 | 0.0781 |
KKB | 6 | | | | 4.000 | 0.0765 |
M1a | 5 | | | | 4.000 | 0.0621 |
M1b | 6 | | | | 4.000 | 0.0652 |
M2a | 4 | | | | 4.000 | 0.0597 |
M2b | 5 | | | | 4.000 | 0.0634 |
M3a | 5 | | | | 4.000 | 0.0643 |
M3b | 6 | | | | 4.000 | 0.0667 |
Table 4.
Results of the methods for problem .
Methods | k | | | | COC | CPU |
---|---|---|---|---|---|---
LLC | 3 | | | 0 | 4.000 | 0.7334 |
LCN | 3 | | | 0 | 4.000 | 0.7481 |
SSM | 3 | | | 0 | 4.000 | 0.8890 |
ZCS | 3 | | | 0 | 4.000 | 0.8743 |
SBM | 3 | | | 0 | 4.000 | 1.0292 |
KKB | 3 | | | 0 | 4.000 | 0.7492 |
M1a | 3 | | | 0 | 4.000 | 0.3272 |
M1b | 3 | | | 0 | 4.000 | 0.3431 |
M2a | 3 | | | 0 | 4.000 | 0.3743 |
M2b | 3 | | | 0 | 4.000 | 0.3287 |
M3a | 3 | | | 0 | 4.000 | 0.3590 |
M3b | 3 | | | 0 | 4.000 | 0.3435 |
Table 5.
Results of the methods for problem .
Methods | k | | | | COC | CPU |
---|---|---|---|---|---|---
LLC | 4 | | | | 4.000 | 0.8582 |
LCN | 4 | | | | 4.000 | 0.9523 |
SSM | 4 | | | | 4.000 | 1.1247 |
ZCS | 4 | | | | 4.000 | 1.1401 |
SBM | 4 | | | | 4.000 | 1.2872 |
KKB | 4 | | | | 4.000 | 0.9525 |
M1a | 3 | | | 0 | 4.000 | 0.2527 |
M1b | 3 | | | 0 | 4.000 | 0.2234 |
M2a | 3 | | | 0 | 4.000 | 0.2812 |
M2b | 3 | | | 0 | 4.000 | 0.2793 |
M3a | 3 | | | 0 | 4.000 | 0.2199 |
M3b | 3 | | | 0 | 4.000 | 0.2345 |
Table 6.
Results of the methods for problem .
Methods | k | | | | COC | CPU |
---|---|---|---|---|---|---
LLC | 4 | | | | 4.000 | 1.6843 |
LCN | 4 | | | | 4.000 | 1.7481 |
SSM | 4 | | | | 4.000 | 1.7789 |
ZCS | 4 | | | | 4.000 | 1.8104 |
SBM | 4 | | | | 4.000 | 1.9812 |
KKB | 4 | | | | 4.000 | 1.8882 |
M1a | 4 | | | | 4.000 | 1.4823 |
M1b | 4 | | | | 4.000 | 1.1866 |
M2a | 4 | | | | 4.000 | 1.1854 |
M2b | 4 | | | | 4.000 | 1.2019 |
M3a | 4 | | | | 4.000 | 1.2124 |
M3b | 4 | | | | 4.000 | 1.2173 |
Table 7.
Results of the methods for problem .
Methods | k | | | | COC | CPU |
---|---|---|---|---|---|---
LLC | 4 | | | | 4.000 | 1.3732 |
LCN | 4 | | | | 4.000 | 2.0124 |
SSM | 4 | | | | 4.000 | 1.9973 |
ZCS | 4 | | | | 4.000 | 1.9969 |
SBM | 4 | | | | 4.000 | 2.4342 |
KKB | 4 | | | | 4.000 | 1.9509 |
M1a | 4 | | | | 4.000 | 0.4219 |
M1b | 4 | | | | 4.000 | 0.4058 |
M2a | 4 | | | | 4.000 | 0.4215 |
M2b | 4 | | | | 4.000 | 0.4207 |
M3a | 4 | | | | 4.000 | 0.4058 |
M3b | 4 | | | | 4.000 | 0.4219 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).