 Research
 Open Access
A novel computational approach to approximate fuzzy interpolation polynomials
SpringerPlus volume 5, Article number: 1428 (2016)
Abstract
This paper builds a fuzzy neural network structure that is well suited to obtaining a fuzzy interpolation polynomial of the form \(y_{p}=a_{n}x_{p}^n+ \cdots +a_{1}x_{p}+a_{0}\), where each \(a_{j}\) is a crisp number (for \(j=0,\ldots ,n\)), which interpolates the fuzzy data \((x_{j},y_{j})\) (for \(j=0,\ldots ,n\)). A gradient descent algorithm is then constructed to train the neural network so that the unknown coefficients of the fuzzy polynomial are estimated by the network. Numerical experiments show that the proposed interpolation method is reliable and efficient.
Background
Artificial neural networks (ANNs) are mathematical or computational models inspired by biological neural networks. They possess universal approximation capability and perform best when the modeled system has a high tolerance to error. Recently, ANNs have grown rapidly and been applied in various fields (Abbasbandy and Otadi 2006; Chen and Zhang 2009; Guo and Qin 2009; Jafarian and Jafari 2012; Jafarian et al. 2015a, b; Jafarian and Measoomynia 2011, 2012; Song et al. 2013; Wai and Lin 2013). One important application of ANNs, proposed in this research, is finding fuzzy interpolation polynomials (FIPs).
Interpolation theory is one of the basic tools in applied and numerical mathematics. Interpolation has been used extensively, because it is one of the noteworthy techniques of function approximation (Boffi and Gastaldi 2006; Mastylo 2010; Rajan and Chaudhuri 2001). Using Newton's divided difference scheme, a new technique for polynomial interpolation was established in Schroeder et al. (1991). The problem of multivariate interpolation has attracted researchers worldwide (Neidinger 2009; Olver 2006), and various multivariate interpolation methods exist: Olver (2006) used a multivariate Vandermonde matrix and its LU factorization, while Neidinger (2009) utilized Newton-form interpolation. Sparse grid interpolation is a further technique; in recent years this procedure has been widely employed to provide an accurate approximation to a smooth function (Xiu and Hesthaven 2005). Using Lagrange interpolating polynomials, this approach builds a polynomial interpolant from values of the function at points in a union of product grids of small dimension (Barthelmann et al. 2000). Recent trends in interpolation networks are surveyed in Llanas and Sainz (2006), Sontag (1992). A proof that a single hidden layer FNN with \(m+1\) neurons can learn \(m+1\) distinct data points \((x_{i},f_{i})\) (for \(i=0,\ldots ,m\)) with zero error was established in Ito (2001). Detailed introductions and surveys of the major results can be found in Szabados and Vertesi (1990), Tikhomirov (1990).
This paper aims to deliver a fuzzy modeling technique, using FNNs, for finding a FIP of the form
where \(a_{j}\in {\mathbb {R}}\) (for \(j=0,\ldots ,n\)), which interpolates the fuzzy data \((x_{j},y_{j})\) (for \(j=0,\ldots ,n\)). The proposed network is a three-layer structure in which the extension principle of Zadeh (2005) describes the input-output relation of each unit. In this model, the unknown coefficients of the fuzzy polynomial are approximated by minimizing a cost function, and a learning technique based on the gradient descent procedure is formulated to adjust the connection weights to any achievable degree of precision.
This paper starts with a brief overview of fuzzy numbers and fuzzy interpolation; we then present the FNN method for finding the crisp solution of the FIP. Two numerical examples are given in the "Numerical examples" section to establish the validity and performance of the approach. Finally, the "Concluding remarks" section presents the conclusions.
Method description
Interpolation theory has a wide range of applications in mathematical analysis. In numerical analysis, interpolation is the operation of finding, from a few given terms of a series (such as numbers or observations), other intermediate terms in conformity with the law of the series. Interpolation techniques generally start from an elementary model of an interpolating function, which can be stated as:
with basis functions \(\phi _{j}(x): {\mathbb {R}}\rightarrow {\mathbb {R}}\) that satisfy the interpolation criteria:
Let \({\hat{x}}_{1}, \ldots ,\hat{x}_{n}\) be n fuzzy points in \(E^{n}\), where a fuzzy number \({\hat{y}}_{j}\in E\) is associated with each \({\hat{x}}_{j}\) for \(j=1,\ldots ,n.\) The sought function can be portrayed as follows:
where the \({\hat{\phi}}_{j}: E^{n}\rightarrow E\) for \(j=1,\ldots ,n\) are fuzzy functions that satisfy the interpolation conditions:
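As a crisp baseline for the interpolation conditions above, one can take the monomial basis \(\phi_j(x)=x^j\) and solve the resulting Vandermonde system (an illustrative sketch only; the paper's fuzzy setting replaces the crisp \(y_j\) with fuzzy numbers and estimates the coefficients with a neural network instead):

```python
import numpy as np

def interpolate_monomial(xs, ys):
    """Coefficients a_0..a_n such that sum_j a_j * x**j passes through (xs, ys)."""
    V = np.vander(xs, increasing=True)  # V[i, j] = xs[i]**j
    return np.linalg.solve(V, ys)

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 2.0, 5.0])          # values of 1 + x**2 at xs
a = interpolate_monomial(xs, ys)        # recovers the coefficients [1, 0, 1]
```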
Fuzzy interpolation polynomial
We are interested in finding a FIP of the form
where \(a_{j}\in {\mathbb {R}}\) (for \(j=0,\ldots ,n\)), which interpolates the fuzzy data \((x_{j},y_{j})\) (for \(j=0,\ldots ,n\)). Consider the three-layer FNN architecture displayed in Fig. 1. The input-output relation of each unit of the proposed neural network can be described as follows, when the \(\alpha\)-level sets of the fuzzy input \(x_{p}\) are nonnegative, i.e., \(0\le [x_{p}]_{l}^{\alpha }\le [x_{p}]_{u}^{\alpha }\):

Input unit
$$[o]^{\alpha }=\left[ [x_{p}]^{\alpha }_{l},[x_{p}]^{\alpha }_{u}\right] ,\quad p=0, \ldots ,n.$$(5) 
Hidden units
$$[O_{j}]^\alpha =f\left( \left[ [net_{j}]_l^\alpha ,[net_{j}]_u^\alpha \right] \right) = \left( \left( [o]^{\alpha }_{l}\right) ^{j},\left( [o]^{\alpha }_{u}\right) ^{j}\right) ,\quad j=1,\ldots ,n.$$(6) 
Output unit
$$\begin{aligned}{}[y_{p}]^{\alpha }&= {} F\left( [Net]_l^{\alpha }+a_{0},[Net]_u^{\alpha }+a_{0}\right) \nonumber \\&= {} \left( [Net]_l^{\alpha }+a_{0},[Net]_u^{\alpha }+a_{0}\right) ,\quad p=0,\ldots ,n, \end{aligned}$$(7)

We have
$$[Net]_l^{\alpha }=\sum _{j\in M}[O_{j}]_{l}^{\alpha } \cdot a_{j}+\sum _{j\in C}[O_{j}]_{u}^{\alpha }\cdot a_{j},$$

and
$$[Net]_u^{\alpha }=\sum _{j\in M}[O_{j}]_{u}^{\alpha } \cdot a_{j}+\sum _{j\in C}[O_{j}]_{l}^{\alpha }\cdot a_{j},$$

where \(M=\{j: a_{j}\ge 0\}\), \(C=\{j: a_{j}< 0\}\) and \(M\cup C=\{1,\ldots , n\}\).
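For a nonnegative input cut, Eqs. (5)-(7) can be sketched in code: each hidden unit raises the cut endpoints to the j-th power, and the sign of each \(a_j\) decides which endpoint feeds which bound of the output (a minimal sketch under our own naming, not the paper's implementation):

```python
def forward(a, x_l, x_u):
    """Alpha-cut output [y_l, y_u] of y = a[n]*x**n + ... + a[1]*x + a[0]
    for a nonnegative input cut 0 <= x_l <= x_u; a = [a_0, ..., a_n]."""
    net_l = net_u = 0.0
    for j in range(1, len(a)):
        o_l, o_u = x_l ** j, x_u ** j    # hidden unit output (Eq. 6)
        if a[j] >= 0:                    # j in M: order of endpoints preserved
            net_l += o_l * a[j]
            net_u += o_u * a[j]
        else:                            # j in C: endpoints swap
            net_l += o_u * a[j]
            net_u += o_l * a[j]
    return net_l + a[0], net_u + a[0]    # output unit (Eq. 7)

y_l, y_u = forward([1.0, 2.0, -1.0], 1.0, 2.0)  # 1 + 2x - x**2 on the cut [1, 2]
```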
Cost function
The input signals \(x_p\ (\hbox {for}\,p = 0, \ldots , n)\) are represented to the network and then \(y_n(x_p)\) which is an representing the network output upon the presentation of \(a_j\ (\hbox {for}\,j = 0, \ldots , n)\), is calculated. Defining of cost function over the model parameters makes it a good forecaster. The mean squared error is termed to be as one of the vastly popular usable cost function. Now, let the \(\alpha\)level sets of the fuzzy target output \(d_{p}\) are exhibited as:
A cost function to be minimized is defined for each \(\alpha\)-level set as follows:
where
The total error of the proposed neural network is given by:
Obviously, \(e\longrightarrow 0\) means \([y_{p}]^{\alpha }\longrightarrow [d_{p}]^{\alpha }\).
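The error measure above can be sketched as a squared-error sum over the endpoints of each \(\alpha\)-cut (our assumed form of the mean-squared criterion; the exact weighting in the paper's equations may differ):

```python
def cost(y_cuts, d_cuts):
    """Total error over alpha-cuts: 0.5 * sum of squared endpoint deviations.
    y_cuts, d_cuts are lists of (lower, upper) pairs for network output and
    fuzzy target; driving e to 0 forces [y_p]^alpha onto [d_p]^alpha."""
    e = 0.0
    for (y_l, y_u), (d_l, d_u) in zip(y_cuts, d_cuts):
        e += 0.5 * ((y_l - d_l) ** 2 + (y_u - d_u) ** 2)
    return e
```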
Fuzzy neural network learning approach
Suppose the connection weights \(a_{j}\) (for \(j=0,\ldots ,n\)) are initialized with random crisp numbers. The adjustment rule is given as (Ishibuchi et al. 1995):
where t denotes the iteration number, \(\eta\) the learning rate and \(\gamma\) the constant momentum term. We calculate \(\frac{\partial e_{p}(\alpha )}{\partial a_{j}}\) as follows:
Hence the complexity lies in the calculation of the derivatives \(\frac{\partial e_{p}^{l}(\alpha )}{\partial a_{j}}\) and \(\frac{\partial e_{p}^{u}(\alpha )}{\partial a_{j}}\). So we have:
and
where
also we have
and
where
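Under the squared-error cost sketched earlier, the endpoint derivatives and the momentum update can be written as follows (an illustrative sketch for a nonnegative input cut; the function names are ours, and the sign split on \(a_j\) mirrors Eq. 7):

```python
def grad_aj(j, a_j, x_l, x_u, y_l, y_u, d_l, d_u):
    """Partial derivative of e_p(alpha) = 0.5*((y_l-d_l)**2 + (y_u-d_u)**2)
    with respect to a_j; the sign of a_j decides which endpoint of the
    input cut feeds which bound of the output."""
    if a_j >= 0:
        return (y_l - d_l) * x_l ** j + (y_u - d_u) * x_u ** j
    return (y_l - d_l) * x_u ** j + (y_u - d_u) * x_l ** j

def update_weight(a_t, a_prev, grad, eta=0.01, gamma=0.01):
    """Gradient descent step with momentum (Ishibuchi et al. 1995):
    a_j(t+1) = a_j(t) - eta * de/da_j + gamma * (a_j(t) - a_j(t-1))."""
    return a_t - eta * grad + gamma * (a_t - a_prev)
```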
Upper bound approximation
Theorem 1
Suppose \(p: {\mathfrak{R}}\rightarrow {\mathfrak{R}}\) is a continuous function. Then for each compact set \(\vartheta \subset E_{0}\) (the set of all bounded fuzzy sets) and \(\psi >0,\) there exist \(m\in N\) and \(a_{0},a_{i}\in {\mathfrak{R}}, i=1,2,\ldots ,m,\) such that
where \(\psi\) is a finite number.
Proof
The proof follows from the results below. \(\square\)
If \(p: {\mathfrak{R}}\rightarrow {\mathfrak{R}}\), then by applying the extension principle, p can be extended to a fuzzy function, denoted by \(p: E_{0}\rightarrow E\), as follows:
p is called the extended function. Here \(cc({\mathfrak{R}})\) denotes the set of bounded closed intervals of \({\mathfrak{R}}\). Clearly,
Moreover
Henceforth, we let
Theorem 2
Suppose \(p: {\mathfrak{R}}\rightarrow {\mathfrak{R}}\) is a continuous function. Then for each compact set \(\vartheta \subset E_{0}\), \(\varrho >0\) and arbitrary \(\varepsilon >0,\) there exist \(m\in N\) and \(a_{0},a_{i}\in {\mathfrak{R}},\) \(i=1,2,\ldots ,m,\) such that
where \(\varrho\) is a finite number. The lower and upper bounds of the \(\alpha\)-level set of the fuzzy function converge to \(\varrho,\) while the center goes to \(\varepsilon\).
Proof
Since \(\vartheta \subset E_{0}\) is a compact set, by Lemma 3 we may let \(V\subset {\mathfrak{R}}\) be the compact set associated with \(\vartheta\). For every \(\varepsilon >0\), by the main result of Cybenko (1989) there exist \(m\in N\) and \(a_{0},a_{i}\in {\mathfrak{R}}, i=1,2,\ldots ,m\), such that
holds. Let \(q(\hat{x})=\sum \nolimits _{i=1}^{m}p_{i}(\hat{x})a_{i}+a_{0}, \hat{x}\in {\mathfrak{R}}\), then
Theorem 4 implies the validity of (19). \(\square\)
Lemma 3
If \(\vartheta \subset E_{0}\) is a compact set, then \(\vartheta\) is uniformly support-bounded, i.e. there exists a compact set \(V\subset {\mathfrak{R}}\) such that \(\hbox{Supp}(u)\subset V\) for all \(u\in \vartheta\).
Theorem 4
Suppose \(\vartheta \subset E_{0}\) is compact, V is the corresponding compact set of \(\vartheta,\) and \(p,q: {\mathfrak{R}}\rightarrow \ {\mathfrak{R}}\) are continuous functions satisfying the relation below.
Then \(\forall u\in \vartheta , \ d(p(u),q(u))\le k.\)
Proof
See Liu (2000). \(\square\)
Numerical examples
The following examples illustrate the methodology proposed in this paper.
Example 5
Figure 2 represents the connection between three tanks and a pipeline whose height is denoted by a constant H. Water must be pumped to transfer it from one tank to the other two tanks. The system satisfies the relation below
where \(F_{1}=\sqrt{2x}, F_{2}=x\sqrt{x}, F_{3}=x^{3}\) are the flow quantities, x is the elapsed time, H is the height of the pipe, and \(A_{0}, A_{1}, A_{2}\) and \(A_{3}\) are the pump characteristic coefficients:
Four real uncertain data points are given below:
The iteration over the data is repeated 19 times.
We use \(x_0=5, x_1=7, x_2=6, x_3=8, \eta =1\times 10^{-2}\) and \(\gamma =1\times 10^{-2}\) for the FNN. The approximation results are reported in Table 1. The accuracy of the solutions \(x_0(t), x_1(t), x_2(t)\) and \(x_3(t)\) is shown in Fig. 3, where t denotes the iteration number. It is evident that as the iterations increase, the cost function diminishes to zero. The convergence behavior of the approximate solutions is portrayed in Figs. 4, 5, 6 and 7. To attain the exact solutions, the number of iterations in the figures must be increased.
Example 6
Consider the following interpolation points:
The exact solution for the given problem can be stated as:
This problem is solved using the neural network technique suggested in this paper, with \(x_0=0.5, x_1=2.5, x_2=1.5, \eta =3\times 10^{-2}\) and \(\gamma =3\times 10^{-2}\).
The approximation results are reported in Table 2. The accuracy of the solutions \(x_0(t), x_1(t)\) and \(x_2(t)\) is shown in Fig. 8, where t denotes the number of iterations.
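As a sanity check on the procedure, the learning rule can be exercised in a short training loop on crisp data (a degenerate fuzzy case in which every \(\alpha\)-cut collapses to a point); the target polynomial and the values of \(\eta\) and \(\gamma\) here are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.array([0.0, 1.0, 2.0])
ds = 1.0 + 2.0 * xs + 3.0 * xs ** 2    # crisp targets from 1 + 2x + 3x^2
V = np.vander(xs, 3, increasing=True)  # V[p, j] = xs[p]**j

a = rng.normal(size=3)                 # coefficients a_0, a_1, a_2
prev = a.copy()
eta, gamma = 0.02, 0.01                # learning rate and momentum term
for _ in range(20000):
    grad = V.T @ (V @ a - ds)          # gradient of 0.5 * ||V a - d||^2
    a, prev = a - eta * grad + gamma * (a - prev), a
# a converges to the interpolating coefficients [1, 2, 3]
```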
Concluding remarks
This research introduced a new methodology for finding a FIP which interpolates the fuzzy data \((x_j , y_j )\,\, (\hbox {for}\,j = 0, \ldots , n)\). To this end, an FNN equivalent to the FIP was built, and a fast learning algorithm was defined for approximating the crisp unknown coefficients of the given polynomial. The proposed method was based on an approximating FNN, and MATLAB was used for the simulations. The method was validated with two examples. The simulation results clearly illustrate the efficiency and computational advantages of the proposed technique; in particular, the approximation error is small.
References
Abbasbandy S, Otadi M (2006) Numerical solution of fuzzy polynomials by fuzzy neural network. Appl Math Comput 181:1084–1089
Barthelmann V, Novak E, Ritter K (2000) High dimensional polynomial interpolation on sparse grids. Adv Comput Math 12:273–288
Boffi D, Gastaldi L (2006) Interpolation estimates for edge finite elements and application to band gap computation. Appl Numer Math 56:1283–1292
Chen Lh, Zhang Xy (2009) Application of artificial neural networks to classify water quality of the Yellow River. Fuzzy Inf Eng 9:15–23
Cybenko G (1989) Approximation by superpositions of a sigmoidal function. Math Control Signals Syst 2:303–314
Guo B, Qin L (2009) Tactile sensor signal processing with artificial neural networks. Fuzzy Inf Eng 54:54–62
Ishibuchi H, Kwon K, Tanaka H (1995) A learning of fuzzy neural networks with triangular fuzzy weights. Fuzzy Sets Syst 71:277–293
Ito Y (2001) Independence of unscaled basis functions and finite mappings by neural networks. Math Sci 26:117–126
Jafarian A, Measoomynia S (2011) Solving fuzzy polynomials using neural nets with a new learning algorithm. Appl Math Sci 5:2295–2301
Jafarian A, Jafari R (2012) Approximate solutions of dual fuzzy polynomials by feedback neural networks. J Soft Comput Appl. doi:10.5899/2012/jsca00005
Jafarian A, Measoomynia S (2012) Utilizing feedback neural network approach for solving linear Fredholm integral equations system. Appl Math Model. doi:10.1016/j.apm
Jafarian A, Jafari R, Khalili A, Baleanud D (2015a) Solving fully fuzzy polynomials using feedback neural networks. Int J Comput Math 92:742–755
Jafarian A, Measoomy S, Abbasbandy S (2015b) Artificial neural networks based modeling for solving Volterra integral equations system. Appl Soft Comput 27:391–398
Liu P (2000) Analyses of regular fuzzy neural networks for approximation capabilities. Fuzzy Sets Syst 114:329–338
Llanas B, Sainz FJ (2006) Constructive approximate interpolation by neural networks. J Comput Appl Math 188:283–308
Mastylo M (2010) Interpolation estimates for entropy numbers with applications to nonconvex bodies. J Approx Theory 162:10–23
Neidinger RD (2009) Multivariable interpolating polynomials in Newton forms. In: Joint mathematics meetings, Washington, DC, pp 5–8
Olver PJ (2006) On multivariate interpolation. Stud Appl Math 116:201–240
Rajan D, Chaudhuri S (2001) Generalized interpolation and its application in superresolution imaging. Image Vis Comput 19:957–969
Schroeder H, Murthy VK, Krishnamurthy EV (1991) Systolic algorithm for polynomial interpolation and related problems. Parallel Comput 17:493–503
Song Q, Zhao Z, Yang J (2013) Passivity and passification for stochastic Takagi-Sugeno fuzzy systems with mixed time-varying delays. Neurocomputing 122:330–337
Sontag ED (1992) Feedforward nets for interpolation and classification. J Comput Syst Sci 45:20–48
Szabados J, Vertesi P (1990) Interpolation of functions. World Scientific, Singapore
Tikhomirov VM (1990) Approximation theory, analysis II. In: Gamkrelidze RV (ed) Encyclopaedia of mathematical sciences, vol 14. Springer, Berlin
Wai RJ, Lin YW (2013) Adaptive moving-target tracking control of a vision-based mobile robot via a dynamic petri recurrent fuzzy neural network. IEEE Trans Fuzzy Syst 21:688–701
Xiu D, Hesthaven JS (2005) Highorder collocation methods for differential equations with random inputs. SIAM J Sci Comput 27:18–39
Zadeh LA (2005) Toward a generalized theory of uncertainty (GTU) an outline. Inf Sci 172:1–40
Authors' contributions
All authors contributed equally to this work. All authors read and approved the final manuscript.
Acknowledgements
The research is supported by a grant from the “Research Center of the Center for Female Scientific and Medical Colleges”, Deanship of Scientific Research, King Saud University. The authors are also thankful to visiting professor program at King Saud University for support.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Jafarian, A., Jafari, R., Mohamed Al Qurashi, M. et al. A novel computational approach to approximate fuzzy interpolation polynomials. SpringerPlus 5, 1428 (2016). https://doi.org/10.1186/s40064-016-3077-5
Keywords
 Fuzzy neural networks
 Fuzzy interpolation polynomial
 Cost function
 Learning algorithm