A novel computational approach to approximate fuzzy interpolation polynomials
Ahmad Jafarian^{1},
 Raheleh Jafari^{2},
 Maysaa Mohamed Al Qurashi^{3} and
 Dumitru Baleanu^{4, 5}
Received: 20 June 2016
Accepted: 15 August 2016
Published: 27 August 2016
Abstract
This paper builds a fuzzy neural network structure capable of producing a fuzzy interpolation polynomial of the form \(y_{p}=a_{n}x_{p}^n+ \cdots +a_{1}x_{p}+a_{0}\), where each \(a_{j}\) is a crisp number (for \(j=0,\ldots ,n\)), that interpolates the fuzzy data \((x_{j},y_{j})\) (for \(j=0,\ldots ,n\)). A gradient descent algorithm is constructed to train the neural network so that the unknown coefficients of the fuzzy polynomial are estimated by the network. The numerical experiments show that the proposed interpolation methodology is reliable and efficient.
Keywords
Fuzzy neural networks; Fuzzy interpolation polynomial; Cost function; Learning algorithm
Background
Artificial neural networks (ANNs) are mathematical or computational models inspired by biological neural networks. They possess universal approximation capability and perform best when the system being modeled tolerates error well. Recently, ANNs have grown rapidly and been utilized in various fields (Abbasbandy and Otadi 2006; Chen and Zhang 2009; Guo and Qin 2009; Jafarian and Jafari 2012; Jafarian et al. 2015a, b; Jafarian and Measoomynia 2011, 2012; Song et al. 2013; Wai and Lin 2013). One vital role of ANNs is finding fuzzy interpolation polynomials (FIPs), as proposed in this research.
Interpolation theory is one of the basic tools of applied and numerical mathematics. Interpolation has been used extensively because it is one of the noteworthy techniques of function approximation (Boffi and Gastaldi 2006; Mastylo 2010; Rajan and Chaudhuri 2001). Using Newton's divided difference scheme, a new technique was established in Schroeder et al. (1991) for polynomial interpolation. The problem of multivariate interpolation has attracted the attention of researchers worldwide (Neidinger 2009; Olver 2006), and various multivariate interpolation methods exist: Olver (2006) used a multivariate Vandermonde matrix and its LU factorization, while Neidinger (2009) utilized Newton-form interpolation. Sparse grid interpolation is a further technique; in recent years it has been widely used to provide an approximation to a smooth function (Xiu and Hesthaven 2005). Using Lagrange interpolating polynomials, this approach builds a polynomial interpolant from values of the function at points in a union of product grids of small dimension (Barthelmann et al. 2000). Recent developments on interpolation networks are surveyed in Llanas and Sainz (2006), Sontag (1992). A proof that a single-hidden-layer FNN with \(m+1\) neurons can learn \(m+1\) isolated data points \((x_{i},f_{i})\) (for \(i=0,\ldots ,m\)) with zero error was established in Ito (2001). A detailed introduction and survey of the major results can be found in Szabados and Vertesi (1990), Tikhomirov (1990).
This paper begins with a brief review of fuzzy numbers and fuzzy interpolation, then presents the FNN method for finding the crisp solution of the FIP. Two numerical examples are given in the "Numerical examples" section to establish the validity and performance of the proposed approach. Finally, the "Concluding remarks" section presents the conclusions.
Method description
Fuzzy interpolation polynomial
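Throughout, each fuzzy quantity is handled through its \(\alpha\)-cut intervals \([x]^{\alpha}=[[x]^{\alpha}_{l},[x]^{\alpha}_{u}]\). As a minimal sketch (not taken from the paper, which does not fix a particular membership shape in this excerpt), the \(\alpha\)-cut of a triangular fuzzy number \((a,b,c)\) can be computed as:

```python
# Sketch: alpha-cut of a triangular fuzzy number (a, b, c), i.e. the
# interval [x]_alpha = [x_l, x_u] that the network operates on.
# The triangular shape is an illustrative assumption, not the paper's choice.
def alpha_cut(a, b, c, alpha):
    """Return the alpha-cut [lower, upper] of the triangular fuzzy number (a, b, c)."""
    lower = a + alpha * (b - a)   # left side rises from a (alpha=0) to b (alpha=1)
    upper = c - alpha * (c - b)   # right side falls from c (alpha=0) to b (alpha=1)
    return lower, upper
```

For instance, the fuzzy number \((1,2,3)\) at \(\alpha=0.5\) gives the interval \([1.5, 2.5]\), and at \(\alpha=1\) it collapses to the crisp core \(\{2\}\).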

Input unit$$[o]^{\alpha }=\left[ [x_{p}]^{\alpha }_{l},[x_{p}]^{\alpha }_{u}\right] ,\quad p=0, \ldots ,n.$$(5)

Hidden units$$[O_{j}]^\alpha =f\left( \left[ [net_{j}]_l^\alpha ,[net_{j}]_u^\alpha \right] \right) = \left( \left( [o]^{\alpha }_{l}\right) ^{j},\left( [o]^{\alpha }_{u}\right) ^{j}\right) ,\quad j=1,\ldots ,n.$$(6)

Output unit$$\begin{aligned}{}[y_{p}]^{\alpha }&= {} F\left( [Net]_l^{\alpha }+a_{0},[Net]_u^{\alpha }+a_{0}\right) \nonumber \\&= {} \left( [Net]_l^{\alpha }+a_{0},[Net]_u^{\alpha }+a_{0}\right) ,\quad p=0,\ldots ,n, \end{aligned}$$(7)where$$[Net]_l^{\alpha }=\sum _{j\in M}[O_{j}]_{l}^{\alpha } \cdot a_{j}+\sum _{j\in C}[O_{j}]_{u}^{\alpha }\cdot a_{j},$$and$$[Net]_u^{\alpha }=\sum _{j\in M}[O_{j}]_{u}^{\alpha } \cdot a_{j}+\sum _{j\in C}[O_{j}]_{l}^{\alpha }\cdot a_{j},$$with \(M=\{j: a_{j}\ge 0\}\), \(C=\{j: a_{j}< 0\}\) and \(M\cup C=\{1,\ldots , n\}\).
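The forward pass in Eqs. (5)–(7) can be sketched as follows. The code assumes the input \(\alpha\)-cut \([x_l, x_u]\) is non-negative, so that the powers in Eq. (6) preserve the interval orientation; the sign-based split into the index sets \(M\) and \(C\) follows the definition of \([Net]^{\alpha}_{l}\) and \([Net]^{\alpha}_{u}\).

```python
# Sketch of the network forward pass (Eqs. 5-7), assuming x_l, x_u >= 0.
def forward(coeffs, x_l, x_u):
    """coeffs = [a_0, a_1, ..., a_n]; returns the output interval [y_l, y_u]."""
    a0, rest = coeffs[0], coeffs[1:]
    net_l = net_u = 0.0
    for j, aj in enumerate(rest, start=1):
        o_l, o_u = x_l ** j, x_u ** j      # hidden unit O_j = ((x_l)^j, (x_u)^j), Eq. (6)
        if aj >= 0:                        # j in M: a_j keeps the interval orientation
            net_l += aj * o_l
            net_u += aj * o_u
        else:                              # j in C: a_j swaps the endpoints
            net_l += aj * o_u
            net_u += aj * o_l
    return net_l + a0, net_u + a0          # output unit, Eq. (7)
```

For example, with \(a_0=1, a_1=2, a_2=-1\) and input interval \([1,2]\), the output interval is \([-1, 4]\).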
Cost function
Obviously, \(e\longrightarrow 0\) means \([y_{p}]^{\alpha }\longrightarrow [d_{p}]^{\alpha }\).
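The exact cost expression is not reproduced in this excerpt, but one standard choice consistent with the property that \(e\longrightarrow 0\) forces \([y_{p}]^{\alpha}\longrightarrow [d_{p}]^{\alpha}\) is a squared error over the endpoints of the \(\alpha\)-cuts, sketched here under that assumption:

```python
# Hedged sketch of a squared-error cost over alpha-level intervals; the
# paper's exact formula is not shown in this excerpt, so this is one
# standard choice with the same property: e = 0 iff pred == target.
def cost(pred, target):
    """pred, target: lists of (lower, upper) endpoints, one pair per alpha-level."""
    e = 0.0
    for (y_l, y_u), (d_l, d_u) in zip(pred, target):
        e += 0.5 * ((d_l - y_l) ** 2 + (d_u - y_u) ** 2)
    return e
```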
Fuzzy neural network learning approach
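The coefficient update can be sketched as a gradient-descent step with learning rate \(\eta\) and momentum term \(\gamma\), the two parameters that appear in the numerical examples. The paper derives the gradients analytically; the sketch below uses forward-difference gradients instead, purely for illustration, and the function name and signature are assumptions.

```python
# Hedged sketch of one gradient-descent step with momentum (rates eta, gamma).
# Gradients are approximated by forward differences here; the paper's
# learning algorithm computes them analytically from the network equations.
def train_step(coeffs, velocity, loss_fn, eta, gamma, h=1e-6):
    """One update of coeffs = [a_0, ..., a_n]; returns (new_coeffs, new_velocity)."""
    base = loss_fn(coeffs)
    grads = []
    for j in range(len(coeffs)):
        bumped = list(coeffs)
        bumped[j] += h                     # perturb a_j to estimate de/da_j
        grads.append((loss_fn(bumped) - base) / h)
    new_v = [gamma * v - eta * g for v, g in zip(velocity, grads)]
    new_c = [c + v for c, v in zip(coeffs, new_v)]
    return new_c, new_v
```

Iterating this step from an initial guess drives the cost toward zero, mirroring the error columns in the tables of the "Numerical examples" section.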
Upper bound approximation
Theorem 1
Proof
The proof follows from the results below. \(\square\)
Theorem 2
Proof
Theorem 4 implies the validity of (19). \(\square\)
Lemma 3
If \(\vartheta \subset E_{0}\) is a compact set, then \(\vartheta\) is uniformly support-bounded; i.e., there exists a compact set \(V\subset {\mathfrak{R}}\) such that \(\hbox {Supp}(u)\subset V\) for all \(u\in \vartheta\).
Theorem 4
Then \(\forall u\in \vartheta , \ d(p(u),q(u))\le k.\)
Proof
See Liu (2000). \(\square\)
Numerical examples
The following examples illustrate the methodology proposed in this paper.
Example 5
Neural network approximation for the coefficients
t  \(x_0(t)\)  \(x_1(t)\)  \(x_2(t)\)  \(x_3(t)\)  Error for FNN 

1  4.9018  6.9215  5.9307  7.9121  58,756.65 
2  4.5321  6.6450  5.5480  7.6010  6479.790 
3  4.0231  6.2056  5.1250  7.2212  1741.483 
4  3.6850  5.8401  4.7851  6.7945  577.7597 
5  3.2032  5.4001  4.3365  6.3330  210.8822 
\(\vdots\)  \(\vdots\)  \(\vdots\)  \(\vdots\)  \(\vdots\)  \(\vdots\) 
15  2.0008  4.0007  3.0008  5.0006  0.366883 
16  2.0007  4.0005  3.0006  5.0005  0.151818 
17  2.0005  4.0004  3.0005  5.0003  0.062895 
18  2.0004  4.0003  3.0004  5.0002  0.026075 
19  2.0003  4.0002  3.0003  5.0001  0.010815 
Example 6
This problem is solved by utilizing the neural network technique suggested in this paper, assuming \(x_0=0.5, x_1=2.5, x_2=1.5, \eta =3\times 10^{2}\) and \(\gamma =3\times 10^{2}\).
Neural network approximation for the coefficients
t  \(x_0(t)\)  \(x_1(t)\)  \(x_2(t)\)  Error for FNN 

1  −0.5915  −2.5895  −1.5784  2330.5296 
2  −0.9910  −2.9033  −1.9664  1896.6752 
3  −1.3356  −3.3346  −2.3696  999.56201 
4  −1.8050  −3.8798  −2.7561  401.56201 
5  −2.2257  −4.1035  −3.1100  95.188500 
\(\vdots\)  \(\vdots\)  \(\vdots\)  \(\vdots\)  \(\vdots\) 
13  −2.9996  −4.9995  −3.9996  0.86688366 
14  −2.9998  −4.9996  −3.9998  0.54635274 
15  −2.9999  −4.9998  −3.9999  0.23614301 
16  −3.0000  −4.9999  −4.0000  0.06896850 
17  −3.0000  −5.0000  −4.0000  0.02003805 
Concluding remarks
This research introduces a new methodology for finding a FIP which interpolates the fuzzy data \((x_j , y_j )\,\, (\hbox {for}\,j = 0, \ldots , n)\). To achieve this goal, a FNN equivalent to the FIP was built, and a fast learning algorithm was defined for approximating the crisp unknown coefficients of the given polynomial. The proposed method was based on an approximating FNN, and MATLAB was used for the simulations. The method was validated with two examples. The simulation results clearly illustrate the efficiency and computational advantages of the proposed technique; in particular, the approximation error is small.
Declarations
Authors' contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.
Acknowledgements
The research is supported by a grant from the “Research Center of the Center for Female Scientific and Medical Colleges”, Deanship of Scientific Research, King Saud University. The authors are also thankful to visiting professor program at King Saud University for support.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
Abbasbandy S, Otadi M (2006) Numerical solution of fuzzy polynomials by fuzzy neural network. Appl Math Comput 181:1084–1089
Barthelmann V, Novak E, Ritter K (2000) High dimensional polynomial interpolation on sparse grids. Adv Comput Math 12:273–288
Boffi D, Gastaldi L (2006) Interpolation estimates for edge finite elements and application to band gap computation. Appl Numer Math 56:1283–1292
Chen LH, Zhang XY (2009) Application of artificial neural networks to classify water quality of the Yellow River. Fuzzy Inf Eng 9:15–23
Cybenko G (1989) Approximation by superpositions of a sigmoidal function. Math Control Signals Syst 2:303–314
Guo B, Qin L (2009) Tactile sensor signal processing with artificial neural networks. Fuzzy Inf Eng 54:54–62
Ishibuchi H, Kwon K, Tanaka H (1995) A learning of fuzzy neural networks with triangular fuzzy weights. Fuzzy Sets Syst 71:277–293
Ito Y (2001) Independence of unscaled basis functions and finite mappings by neural networks. Math Sci 26:117–126
Jafarian A, Measoomynia S (2011) Solving fuzzy polynomials using neural nets with a new learning algorithm. Appl Math Sci 5:2295–2301
Jafarian A, Jafari R (2012) Approximate solutions of dual fuzzy polynomials by feedback neural networks. J Soft Comput Appl. doi:10.5899/2012/jsca00005
Jafarian A, Measoomynia S (2012) Utilizing feedback neural network approach for solving linear Fredholm integral equations system. Appl Math Model. doi:10.1016/j.apm
Jafarian A, Jafari R, Khalili A, Baleanu D (2015a) Solving fully fuzzy polynomials using feedback neural networks. Int J Comput Math 92:742–755
Jafarian A, Measoomy S, Abbasbandy S (2015b) Artificial neural networks based modeling for solving Volterra integral equations system. Appl Soft Comput 27:391–398
Liu P (2000) Analyses of regular fuzzy neural networks for approximation capabilities. Fuzzy Sets Syst 114:329–338
Llanas B, Sainz FJ (2006) Constructive approximate interpolation by neural networks. J Comput Appl Math 188:283–308
Mastylo M (2010) Interpolation estimates for entropy numbers with applications to nonconvex bodies. J Approx Theory 162:10–23
Neidinger RD (2009) Multivariable interpolating polynomials in Newton forms. In: Joint mathematics meetings, Washington, DC, pp 5–8
Olver PJ (2006) On multivariate interpolation. Stud Appl Math 116:201–240
Rajan D, Chaudhuri S (2001) Generalized interpolation and its application in super-resolution imaging. Image Vis Comput 19:957–969
Schroeder H, Murthy VK, Krishnamurthy EV (1991) Systolic algorithm for polynomial interpolation and related problems. Parallel Comput 17:493–503
Song Q, Zhao Z, Yang J (2013) Passivity and passification for stochastic Takagi–Sugeno fuzzy systems with mixed time-varying delays. Neurocomputing 122:330–337
Sontag ED (1992) Feedforward nets for interpolation and classification. J Comput Syst Sci 45:20–48
Szabados J, Vertesi P (1990) Interpolation of functions. World Scientific, Singapore
Tikhomirov VM (1990) Approximation theory, analysis II. In: Gamkrelidze RV (ed) Encyclopaedia of mathematical sciences, vol 14. Springer, Berlin
Wai RJ, Lin YW (2013) Adaptive moving-target tracking control of a vision-based mobile robot via a dynamic Petri recurrent fuzzy neural network. IEEE Trans Fuzzy Syst 21:688–701
Xiu D, Hesthaven JS (2005) High-order collocation methods for differential equations with random inputs. SIAM J Sci Comput 27:18–39
Zadeh LA (2005) Toward a generalized theory of uncertainty (GTU) – an outline. Inf Sci 172:1–40