 Research
 Open Access
A new Newton-like method for solving nonlinear equations
 B. Saheya^{1, 2},
 Guoqing Chen^{1},
 Yunkang Sui^{3} and
 Caiying Wu^{1}
 Received: 27 April 2016
 Accepted: 25 July 2016
 Published: 5 August 2016
Abstract
This paper presents an iterative scheme for solving nonlinear equations. We establish a new rational approximation model with linear numerator and denominator which generalizes the local linear model. We then apply the new approximation to nonlinear equations and propose an improved Newton’s method. The new method revises the Jacobian matrix by a rank-one matrix at each iteration and retains the quadratic convergence property. Numerical results and comparisons show that the proposed method is efficient.
Keywords
 Rational approximate function
 Improved Newton’s method
 Local convergence
Mathematics Subject Classification
 90C25
 90C30
Background
In 1669, Newton first used the Newton iteration (2) to solve a cubic equation. In 1690, Raphson first employed the formula (3) to solve a general cubic equation. Fourier (1890), Cauchy (1829), and Fine (1916) then established convergence theorems for Newton’s method in different settings. In 1948, Kantorovich (1948) established the convergence theorem now referred to as the Newton–Kantorovich theorem. This theorem is the main tool for proving the convergence of various Newton-type methods.
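The classical iteration discussed above is easy to state concretely. The following minimal sketch applies \(x_{k+1} = x_k - f(x_k)/f'(x_k)\) to \(x^3 - 2x - 5 = 0\), the cubic traditionally credited to Newton’s 1669 work (the tolerance and starting point are illustrative choices, not taken from this paper):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# Newton's historical example: x^3 - 2x - 5 = 0, root near 2.0945515
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
```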
In this paper, we are interested in a Newton-type method with high computational efficiency for solving the system of nonlinear equations (1). Motivated by the approach in Sui et al. (2014), we provide a new rational model \(R:{\mathbb {R}}^n \rightarrow {\mathbb {R}}^m\). Although our approximation function resembles the real-valued RALND function studied in Sui et al. (2014), it is not the same function. Based on this model, we linearize the nonlinear function F(x) and obtain a linear equation that differs from the first-order Taylor polynomial. We then propose an improved Newton’s algorithm to solve the nonlinear equations (1). In the new algorithm, the Jacobian matrix is updated by a rank-one matrix at each iteration in order to capture more curvature information of the nonlinear functions. The method achieves high computational efficiency because it requires no additional evaluations of the function, the Jacobian, or the inverse Jacobian. Following the standard analysis of Newton’s method, we prove that the algorithm is well-defined and that its convergence rate is quadratic under suitable conditions. Preliminary numerical results and comparisons are reported, showing the effectiveness of the algorithm.
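The idea of a model with linear numerator and linear denominator can be illustrated in the scalar case: fit \(r(x) = (f_k + p\,(x-x_k))/(1 + q\,(x-x_k))\) to \(f(x_k)\), \(f'(x_k)\), and the previous value \(f(x_{k-1})\), then take the root of the numerator as the next iterate. This sketch only conveys the general rational-approximation idea; it is not the paper’s model R or its iteration formula (21):

```python
def rational_newton(f, df, x0, tol=1e-12, max_iter=50):
    """Scalar sketch of a linear/linear rational-model iteration.

    The model r(x) = (f_k + p1*(x - x_k)) / (1 + q*(x - x_k)) matches
    f and f' at x_k and f at the previous point; the next iterate is
    the root of the numerator, x_k - f_k / p1.
    """
    # bootstrap the "previous point" with one plain Newton step
    x_prev, x = x0, x0 - f(x0) / df(x0)
    for _ in range(max_iter):
        fk, fprev, dfk = f(x), f(x_prev), df(x)
        if abs(fk) < tol:
            return x
        d = x_prev - x
        denom = fk - fprev
        if d == 0 or denom == 0:
            p1 = dfk                      # degenerate geometry: Newton step
        else:
            q = ((fprev - fk) / d - dfk) / denom
            p1 = dfk + fk * q             # slope of the model's numerator
            if p1 == 0:
                p1 = dfk
        x_prev, x = x, x - fk / p1
    return x

r = rational_newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
```

Near a simple root the denominator coefficient q stays bounded, so p1 tends to \(f'(x_k)\) and the step reduces to Newton’s, which is why such schemes keep at least quadratic local convergence.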
This paper is organized as follows. We give a new rational approximation and an improved Newton’s method in the next section. In section “Convergence analysis”, the convergence analysis is discussed, and numerical experiment results are reported in section “Numerical experiments”. The last section is a brief conclusion.
Rational approximation and improved Newton’s method
There are two differences between Algorithm 1 and Newton’s method. First, INM uses the rank-one technique to revise the Jacobian in every iteration. Second, INM utilizes the function values of the previous iteration point.
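Algorithm 1 and its update formula (21) are not reproduced here, so the following is only a structural sketch of the two features just described: a Newton step whose exact Jacobian is revised each iteration by a rank-one term built from the previous iterate’s function value. The Broyden-style secant correction used below is an assumption for illustration; the paper’s formula differs in detail:

```python
import numpy as np

def newton_rank_one(F, J, x0, tol=1e-10, max_iter=100):
    """Newton iteration with a rank-one revision of the Jacobian.

    The correction reuses the previous residual F_prev and step s,
    in the spirit of the method described in the text (the exact
    rank-one term here is a Broyden-style secant update, assumed).
    """
    x = np.asarray(x0, dtype=float)
    x_prev, F_prev = None, None
    for k in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x, k
        Jx = J(x)
        if F_prev is not None:
            s = x - x_prev
            ss = s @ s
            if ss > 1e-20:                # skip correction on tiny steps
                y = Fx - F_prev
                Jx = Jx + np.outer(y - Jx @ s, s) / ss
        x_prev, F_prev = x, Fx
        x = x - np.linalg.solve(Jx, Fx)
    return x, max_iter

# Illustrative test system: F(x) = (10(x2 - x1^2), 1 - x1), root (1, 1)
F = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
Jac = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])
x_star, iters = newton_rank_one(F, Jac, [-1.2, 1.0])
```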
Convergence analysis
In this section, we prove the local quadratic convergence of Algorithm 1 for systems of nonlinear equations. The techniques of the proof are similar to those for Newton’s method. In the rest of this paper, we make the following assumptions:
Assumption 1
 (i)
\(J(x^*)\) is nonsingular and there exists a constant \(\mu > 0\) such that \(\Vert J(x^*)^{-1}\Vert \le \mu\).
 (ii) The function F is continuously differentiable in the open convex set \(D\subset {\mathbb {R}}^n\), and there exists a constant \(\gamma >0\) such that for all \(x,y\in D\)$$\begin{aligned} \Vert J(x) - J(y)\Vert \le \gamma \Vert x-y\Vert . \end{aligned}$$
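Assumption 1(ii) can be probed numerically: the largest observed ratio \(\Vert J(x)-J(y)\Vert /\Vert x-y\Vert\) over sampled pairs is a lower bound on the Lipschitz constant \(\gamma\). The sample function \(F(x)=(x_1^2,\; x_1 x_2)\) below is an arbitrary illustration, not one of the paper’s test problems:

```python
import numpy as np

def lipschitz_lower_bound(J, points):
    """Largest ||J(x) - J(y)|| / ||x - y|| over sample pairs: a lower
    bound on the Lipschitz constant gamma in Assumption 1(ii)."""
    g = 0.0
    for i, x in enumerate(points):
        for y in points[i + 1:]:
            dxy = np.linalg.norm(x - y)
            if dxy > 0:
                g = max(g, np.linalg.norm(J(x) - J(y), 2) / dxy)
    return g

# Jacobian of the illustrative map F(x) = (x1^2, x1*x2)
J = lambda x: np.array([[2.0 * x[0], 0.0], [x[1], x[0]]])
rng = np.random.default_rng(0)
gamma_lb = lipschitz_lower_bound(J, list(rng.uniform(-1, 1, size=(20, 2))))
```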
For proving the convergence theorem we need the following Lemmas.
Lemma 1
Proof
Please refer to Lemma 4.1.12 in Dennis and Schnabel (1993). \(\square\)
Lemma 2
Proof
Please refer to Lemma 4.1.16 in Dennis and Schnabel (1993). \(\square\)
Theorem 1
Proof
Numerical experiments
This section is devoted to the numerical results. First, we show a numerical comparison between Algorithm 1, Newton’s method, and a third-order Newton’s method for finding a root of a real function. This provides numerical evidence that Algorithm 1 performs better than Newton’s method. Second, we demonstrate the performance of Algorithm 1 for solving systems of nonlinear equations. Algorithm 1 has been applied to some popular test problems and compared with Newton’s method and a third-order method. All codes were written in Mathematica 10.0 and run on a PC with an Intel i7 3.6 GHz CPU, 4 GB of memory, and the 64-bit Windows 7 operating system.
Finding roots of real function
Test equations and range of initial point
Equation  Range of initial point

\(f_1(x) = \exp (x)\sin (x)+\ln (1+x^2)=0\)  \(x_0\in [0.1,1]\)
\(f_2(x) = \exp (x)\sin (x)+\cos (x)\ln (1+x)=0\)  \(x_0\in [-1,1]\)
\(f_3(x) = \exp (\sin (x))-x/5-1 =0\)  \(x_0\in [0.5,1]\)
\(f_4(x) = (x+1)\exp (\sin (x))-x^2\exp (\cos (x))=0\)  \(x_0\in [-1.5,1]\)
\(f_5(x) = \sin (x)+\cos (x)+\tan (x)-1=0\)  \(x_0\in [-1,1]\)
\(f_6(x) = \exp (x)-\cos (x)=0\)  \(x_0\in [-1,0.5]\)
\(f_7(x) = \ln (1+x^2)+\exp (x^2-3x)\sin (x)=0\)  \(x_0\in [0.2,1]\)
\(f_8(x) = x^3+\ln (1+x)=0\)  \(x_0\in [0.5,1]\)
\(f_9(x) = \sin (x)-x/3=0\)  \(x_0\in [0.5,1]\)
\(f_{10}(x) = (x-10)^6-10^6=0\)  \(x_0\in [-1,1]\)
INM:  denotes the iteration formula (21)
NM:  denotes Newton’s method
3NM:  denotes the third-order Newton’s method (Darvishi and Barati 2007a)
It:  denotes the average number of iterations
Re:  denotes the average value of \(f(x_k)\) when the iteration stops
Fa:  denotes the number of failures in solving equations
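The averaging protocol behind “It” can be reproduced along these lines; the uniform sampling of initial points and the stopping rule are assumptions, since the paper does not spell them out. Shown for the tabulated equation \(f_1(x) = \exp(x)\sin(x)+\ln(1+x^2)\) with plain Newton iteration:

```python
import math
import random

def newton_iterations(f, df, x0, tol=1e-8, max_iter=100):
    """Run Newton's method and return the number of iterations used."""
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return k
        x -= fx / df(x)
    return max_iter

f1 = lambda x: math.exp(x) * math.sin(x) + math.log(1 + x * x)
df1 = lambda x: math.exp(x) * (math.sin(x) + math.cos(x)) + 2 * x / (1 + x * x)

# average iteration count over random starts in the listed range [0.1, 1]
random.seed(0)
starts = [random.uniform(0.1, 1.0) for _ in range(100)]
avg_it = sum(newton_iterations(f1, df1, x0) for x0 in starts) / len(starts)
```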
Numerical experiment results of INM, NM and 3NM
Equation  INM (It / Re / Fa)  NM (It / Re / Fa)  3NM (It / Re / Fa)
\(f_1\)  3.3  9.1830E−8  0  4.5  7.6091E−8  0  3.2  1.1458E−7  0 
\(f_2\)  3.0  2.5350E−9  0  4.0  9.4571E−10  0  3.0  1.3982E−13  0 
\(f_3\)  2.8  1.5251E−8  0  3.1  1.7424E−7  0  2.3  1.9551E−7  0 
\(f_4\)  3.1  4.0420E−8  0  3.5  2.1129E−9  0  2.7  4.9111E−8  0 
\(f_5\)  3.0  1.1038E−8  0  3.6  5.9695E−9  0  2.6  4.9111E−8  0 
\(f_6\)  3.7  2.4752E−8  0  4.3  1.1354E−7  0  2.9  9.2102E−8  0 
\(f_7\)  3.3  1.400E−8  0  3.9  1.2565E−7  0  2.6  4.6376E−8  0 
\(f_8\)  3.2  1.3959E−7  0  3.7  1.2625E−9  0  2.6  2.2066E−9  0 
\(f_9\)  3.7  2.7669E−9  0  5.8  5.1190E−8  0  2.6  4.3207E−8  0 
\(f_{10}\)  3.2  5.4249E−9  0  4.3  1.2257E−7  0  2.9  1.3057E−7  0 
Solving system of nonlinear equations
INM:  denotes Algorithm 1
NM:  denotes Newton’s method
3NM:  denotes the third-order Newton method (Darvishi and Barati 2007a)
Dim:  denotes the size of the problem
It:  denotes the number of iterations
Ti:  denotes CPU time in seconds
–:  denotes that the number of iterations exceeded 100
It is observed from Table 4 that, in terms of the number of iterations and computation time, Algorithm 1 is more efficient than Newton’s method on most of the test problems, and its efficiency is close to that of the third-order convergent method 3NM (Darvishi and Barati 2007a).
Test problems
Function  Name  Function  Name 

F0  Rosenbrock  F1  Powell badly scaled 
F2  Freudenstein and Roth  F3  Powell singular 
F4  Trigonometric  F5  Trigonometric exponential 
F6  Trigexp  F7  Broyden tridiagonal 
F8  Extended Powell singular  F9  Discrete boundary value 
F10  Discrete integral equation  F11  Broyden banded 
Numerical experiment results of INM, NM and 3NM
Pr  Dim  INM (It / Ti)  NM (It / Ti)  3NM (It / Ti)  Pr  Dim  INM (It / Ti)  NM (It / Ti)  3NM (It / Ti)
F0  2  2  0.1E−8  2  0.1E−8  1  0.1E−8  F1  2  7  0.001  11  0.001  8  0.1E−8 
F2  2  27  0.005  42  0.005  –  –  F3  2  9  0.001  11  0.001  8  0.001 
F4  10  6  0.008  7  0.007  –  –  F5  10  8  0.007  10  0.007  19  0.021 
F4  50  5  0.349  9  0.641  –  –  F5  50  8  0.121  10  0.187  19  2.589 
F4  100  5  2.193  9  3.811  –  –  F5  100  8  0.858  10  0.998  19  3.661 
F4  500  6  0.008  7  0.007  –  –  F5  500  8  61.71  10  75.82  19  141.9 
F6  10  5  0.015  5  0.001  4  0.015  F7  10  6  0.1E−8  7  0.1E−8  –  – 
F6  50  5  0.140  5  0.125  4  0.125  F7  50  8  0.078  10  0.094  –  – 
F6  100  5  0.702  5  0.764  4  0.733  F7  100  9  0.533  11  0.633  –  – 
F6  500  5  37.39  5  35.820  4  32.00  F7  500  11  59.18  14  75.53  –  – 
F8  8  11  0.003  13  0.003  9  0.004  F9  10  2  0.002  2  0.002  2  0.002 
F8  60  11  0.212  13  0.239  10  0.209  F9  50  2  0.020  2  0.020  1  0.015 
F8  100  11  0.854  13  0.953  10  0.837  F9  100  2  0.140  2  0.136  1  0.082 
F8  500  12  71.61  14  83.23  10  62.05  F9  500  1  6.515  1  6.038  1  5.966 
F10  10  2  0.004  2  0.004  2  0.005  F11  10  5  0.005  5  0.004  4  0.005 
F10  50  2  0.291  2  0.278  2  0.005  F11  50  5  0.114  5  0.123  4  0.143 
F10  100  2  2.183  3  3.216  2  3.087  F11  100  5  0.567  5  0.623  4  0.630 
F10  500  2  284.2  2  355.3  2  352.8  F11  500  5  33.84  5  32.76  4  29.70 
Conclusion
In this paper, we present an improved Newton’s method for systems of nonlinear equations that reuses information from the previous iteration. In the new method, the function value of the previous iteration point is used to correct the Newton direction. The proposed method retains the quadratic convergence property. The numerical results obtained on a set of standard test problems indicate that the rank-one revision scheme described, in which the Jacobian matrix is updated by a rank-one matrix, can yield considerable savings in iteration count and computing time. Moreover, two kinds of numerical comparisons are presented in this paper. The first compares the new Newton formula with Newton’s method and a third-order Newton method for finding roots of scalar functions; it shows that the proposed algorithm is efficient for one-dimensional real functions. The second comparison concerns multivariate vector equations; it shows that the numerical performance of the proposed algorithm is even better in the multidimensional case than in the one-dimensional case. This is an interesting observation that may be helpful in other contexts.
Declarations
Authors' contributions
BS and GC conceived and designed the study; YS organized the manuscript; BS and CW performed the numerical experiments. All authors read and approved the final manuscript.
Acknowledgements
This work was partly supported by the Natural Science Foundation of Inner Mongolia (Award Numbers: 2014MS0119, 2014MS0102) and the China National Funds for Distinguished Young Scientists (Award Number: 11401326).
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
 Andrei N (2008) An unconstrained optimization test functions collection. Adv Model Optim 10:147–161
 Ariyawansa KA (1990) Deriving collinear scaling algorithms as extensions of quasi-Newton methods and the local convergence of DFP- and BFGS-related collinear scaling algorithms. Math Program 49:23–48
 Ariyawansa KA, Lau DTM (1992) Local and Q-superlinear convergence of a class of collinear scaling algorithms that extends quasi-Newton methods with Broyden’s bounded class of updates. Optimization 23(4):323–339
 Cauchy AL (1829) Sur la détermination approximative des racines d’une équation algébrique ou transcendante. In: Leçons sur le calcul différentiel, de Bure frères, Paris, pp 575–600
 Darvishi MT, Barati A (2007a) A third-order Newton-type method to solve systems of nonlinear equations. Appl Math Comput 187:630–635
 Darvishi MT, Barati A (2007b) A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl Math Comput 188:257–261
 Davidon WC (1980) Conic approximations and collinear scalings for optimizers. SIAM J Numer Anal 17:268–281
 Dembo RS, Eisenstat SC, Steihaug T (1982) Inexact Newton methods. SIAM J Numer Anal 19(2):400–408
 Deng NY, Li ZF (1995) Some global convergence properties of a conic-variable metric algorithm for minimization with inexact line searches. Optim Methods Softw 5(1):105–122
 Dennis JE, Schnabel RB (1993) Numerical methods for unconstrained optimization and nonlinear equations. SIAM, Philadelphia
 Dolan ED, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math Program 91:201–213
 Fine HB (1916) On Newton’s method of approximation. Proc Natl Acad Sci USA 2(9):546–552
 Fourier JBJ (1890) Question d’analyse algébrique. In: Oeuvres complètes (2), Gauthier-Villars, Paris, pp 243–253
 Frontini M, Sormani E (2004) Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl Math Comput 149:771–782
 Gourgeon H, Nocedal J (1985) A conic algorithm for optimization. SIAM J Sci Stat Comput 6(2):253–267
 Grau-Sánchez M, Grau A, Noguera M (2011) On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J Comput Appl Math 236:1259–1266
 Homeier HHH (2004) A modified Newton method with cubic convergence: the multivariable case. J Comput Appl Math 169:161–169
 Kantorovich LV (1948) On Newton’s method for functional equations. Dokl Akad Nauk SSSR 59:1237–1240
 Kelley CT (2003) Solving nonlinear equations with Newton’s method. SIAM, Philadelphia
 Moré JJ, Garbow BS, Hillstrom KE (1981) Testing unconstrained optimization software. ACM Trans Math Softw 7:17–41
 Noor MA, Waseem M (2009) Some iterative methods for solving a system of nonlinear equations. Comput Math Appl 57:101–106
 Ortega JM, Rheinboldt WC (1970) Iterative solution of nonlinear equations in several variables. Academic Press, New York
 Petković MS, Neta B, Petković LD, Džunić J (2013a) Multipoint methods for solving nonlinear equations. Elsevier, Amsterdam
 Petković MS, Neta B, Petković LD, Džunić J (2013b) Multipoint methods for solving nonlinear equations: a survey. Appl Math Comput 226:635–660
 Qi L, Sun J (1993) A nonsmooth version of Newton’s method. Math Program 58(1–3):353–367
 Sharma JR, Guha RK, Sharma R (2013) An efficient fourth-order weighted-Newton method for systems of nonlinear equations. Numer Algorithms 62:307–323
 Sheng S (1995) Interpolation by conic model for unconstrained optimization. Computing 54:83–98
 Sorensen DC (1980) The Q-superlinear convergence of a collinear scaling algorithm for unconstrained optimization. SIAM J Numer Anal 17(1):84–114
 Sui Y, Saheya B, Chen G (2014) An improvement for the rational approximation RALND at accumulated two-point information. Math Numer Sinica 36(1):51–64
 Thukral R (2016) New modification of Newton method with third-order convergence for solving nonlinear equations of type \(f(0)=0\). Am J Comput Appl Math 69(1):14–18
 Traub JF (1964) Iterative methods for the solution of equations. Prentice-Hall, Englewood Cliffs