A novel computational approach to approximate fuzzy interpolation polynomials

This paper builds a fuzzy neural network structure that is sufficient to obtain a fuzzy interpolation polynomial of the form $y_{p}=a_{n}x_{p}^{n}+\cdots+a_{1}x_{p}+a_{0}$, where each $a_{j}$ is a crisp number (for $j=0,\ldots,n$), which interpolates the fuzzy data $(x_{j},y_{j})$ (for $j=0,\ldots,n$). A gradient descent algorithm is then constructed to train the neural network so that the unknown coefficients of the fuzzy polynomial are estimated by the network. Numerical experiments show that the proposed interpolation method is reliable and efficient.

Barthelmann et al. (2000) introduce a polynomial interpolant based on values of the function at points in a union of product grids of small dimension. Recent work on interpolation networks is reported in Llanas and Sainz (2006) and Sontag (1992). A constructive proof that single-hidden-layer FNNs with $m+1$ neurons can learn $m+1$ distinct data points $(x_{i},f_{i})$ (for $i=0,\ldots,m$) with zero error is established in Ito (2001). A detailed introduction and survey of the main results can be found in Szabados and Vertesi (1990) and Tikhomirov (1990).
This paper aims to deliver a fuzzy modeling technique that uses FNNs to find a FIP of the form $y_{p}=a_{n}x_{p}^{n}+\cdots+a_{1}x_{p}+a_{0}$, where $a_{j}\in\mathbb{R}$ (for $j=0,\ldots,n$), which interpolates the fuzzy data $(x_{j},y_{j})$ (for $j=0,\ldots,n$). The proposed network is a three-layer architecture in which the extension principle of Zadeh (2005) describes the input-output relation of each unit. In this model, the unknown coefficients of the fuzzy polynomial are approximated by minimizing a cost function, and a learning rule based on gradient descent is formulated to adjust the connection weights to any achievable degree of precision.
This paper starts with a brief review of fuzzy numbers and fuzzy interpolation; we then present the FNN method for finding the crisp solution of the FIP. Two numerical examples establishing the validity and performance of the proposed approach are given in the "Numerical examples" section. Finally, the "Concluding remarks" section presents the conclusions.

Method description
Interpolation theory has a wide range of applications in mathematical analysis. In numerical analysis, interpolation is the operation of finding, from a few given terms of a series (of numbers or observations), other intermediate terms in conformity with the law of the series. Interpolation techniques generally follow the elementary model of an interpolating function built from basis functions $\phi_{j}(x):\mathbb{R}\rightarrow\mathbb{R}$ that satisfy the interpolation criterion $\phi_{j}(x_{k})=\delta_{jk}$. Suppose that $\hat{x}_{1},\ldots,\hat{x}_{n}$ are $n$ fuzzy points in $E^{n}$ and a fuzzy number $\hat{y}_{j}\in E$ is associated with each $\hat{x}_{j}$ for $j=1,\ldots,n$. The sought polynomial is
$$y_{p}=a_{n}x_{p}^{n}+\cdots+a_{1}x_{p}+a_{0}, \quad (1)$$
and the fuzzy interpolant can be written as
$$\hat{s}:E^{n}\rightarrow E:\quad \hat{s}(\hat{x})=\sum_{j=1}^{n}\hat{y}_{j}\cdot\hat{\phi}_{j}(\hat{x}), \quad (3)$$
where the $\hat{\phi}_{j}:E^{n}\rightarrow E$ for $j=1,\ldots,n$ are fuzzy functions that satisfy the interpolation condition
$$\hat{\phi}_{j}(\hat{x}_{k})=\begin{cases}1, & k=j,\\ 0, & k\neq j.\end{cases}$$
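The crisp model above can be illustrated with Lagrange basis polynomials, which satisfy the interpolation criterion $\phi_{j}(x_{k})=\delta_{jk}$ exactly. A minimal Python sketch (the function names are illustrative, not from the paper):

```python
def lagrange_basis(xs, j, x):
    """Evaluate the j-th Lagrange basis polynomial at x.

    Satisfies the interpolation criterion: phi_j(x_k) = 1 if k == j, else 0.
    """
    val = 1.0
    for k, xk in enumerate(xs):
        if k != j:
            val *= (x - xk) / (xs[j] - xk)
    return val

def interpolate(xs, ys, x):
    """Crisp interpolant s(x) = sum_j y_j * phi_j(x) through the nodes (xs, ys)."""
    return sum(y * lagrange_basis(xs, j, x) for j, y in enumerate(ys))
```

For example, the quadratic through $(0,1)$, $(1,3)$, $(2,7)$ is $x^{2}+x+1$, which this interpolant reproduces exactly.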

Fuzzy interpolation polynomial
We are interested in finding a FIP of the form $y_{p}=a_{n}x_{p}^{n}+\cdots+a_{1}x_{p}+a_{0}$, where $a_{j}\in\mathbb{R}$ (for $j=0,\ldots,n$), that interpolates the fuzzy data $(x_{j},y_{j})$ (for $j=0,\ldots,n$). Consider the three-layer FNN architecture displayed in Fig. 1. When the α-level sets of the fuzzy input $x_{p}$ are nonnegative, the input-output relation of each unit of the proposed network is as follows: the input unit passes $x_{p}$ through unchanged, and the $j$-th hidden unit computes the power $x_{p}^{j}$, which is weighted by the crisp coefficient $a_{j}$ at the output.
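For nonnegative α-level sets of the input, each α-cut can be propagated through the polynomial term by term, with negative crisp coefficients swapping the interval endpoints. A minimal sketch, assuming an input α-cut $[x^{l},x^{u}]$ with $0\le x^{l}\le x^{u}$ (the helper name is hypothetical):

```python
def poly_alpha_cut(coeffs, xl, xu):
    """Propagate a nonnegative alpha-cut [xl, xu] (0 <= xl <= xu) through
    y = a_n x^n + ... + a_1 x + a_0 with crisp coefficients, term by term.

    coeffs: [a_0, a_1, ..., a_n].  Returns the alpha-cut (yl, yu) of the output.
    """
    yl = yu = 0.0
    for j, a in enumerate(coeffs):
        lo, hi = xl ** j, xu ** j   # x >= 0, so x^j is monotone increasing
        if a >= 0:
            yl += a * lo
            yu += a * hi
        else:                       # a negative coefficient swaps the endpoints
            yl += a * hi
            yu += a * lo
    return yl, yu
```

For instance, $y=2x+1$ maps the cut $[1,2]$ to $[3,5]$, while $y=-x$ maps it to $[-2,-1]$.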

Cost function
The input signals $x_{p}$ (for $p=0,\ldots,n$) are presented to the network, and the network output $y_{n}(x_{p})$ under the current weights $a_{j}$ (for $j=0,\ldots,n$) is computed. Defining a cost function over the model parameters makes the network a good forecaster; the mean squared error is one of the most widely used cost functions. Let the α-level sets of the fuzzy target output $d_{p}$ be written as $[d_{p}^{l}(\alpha),\,d_{p}^{u}(\alpha)]$. A cost function to be minimized is stated for each α-level set as $e_{p}(\alpha)=e_{p}^{l}(\alpha)+e_{p}^{u}(\alpha)$, where $e_{p}^{l}(\alpha)$ and $e_{p}^{u}(\alpha)$ penalize the errors at the lower and upper endpoints, respectively. The total error of the network is obtained by summing these level costs over all training pairs and α-levels, weighted by α.
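A sketch of the per-level squared-error cost and the α-weighted total error, under the assumption that the level cost splits into lower- and upper-endpoint terms as described (names and the dict-based data layout are illustrative):

```python
def level_cost(yl, yu, dl, du):
    """Squared-error cost at one alpha level: e_p = e_p^l + e_p^u, where the
    two terms penalise the lower and upper endpoints of the alpha-cut."""
    return 0.5 * (dl - yl) ** 2 + 0.5 * (du - yu) ** 2

def total_cost(outputs, targets, alphas):
    """Total network error e = sum_alpha alpha * sum_p e_p(alpha).

    outputs, targets: dicts mapping alpha -> list of (lower, upper) pairs,
    one pair per training pattern p.
    """
    e = 0.0
    for alpha in alphas:
        for (yl, yu), (dl, du) in zip(outputs[alpha], targets[alpha]):
            e += alpha * level_cost(yl, yu, dl, du)
    return e
```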

Fuzzy neural network learning approach
Suppose the connection weights $a_{j}$ (for $j=0,\ldots,n$) are initialized with random crisp numbers. The update rule is given by (Ishibuchi et al. 1995):
$$a_{j}(t+1)=a_{j}(t)-\eta\,\frac{\partial e_{p}(\alpha)}{\partial a_{j}}+\gamma\,\Delta a_{j}(t),$$
where $t$ denotes the iteration number, $\eta$ is the learning rate, and $\gamma$ is the constant momentum term. Since $e_{p}(\alpha)=e_{p}^{l}(\alpha)+e_{p}^{u}(\alpha)$, computing $\frac{\partial e_{p}(\alpha)}{\partial a_{j}}$ reduces to computing the derivatives $\frac{\partial e_{p}^{l}(\alpha)}{\partial a_{j}}$ and $\frac{\partial e_{p}^{u}(\alpha)}{\partial a_{j}}$. The total error of the network is
$$e=\sum_{\alpha}\alpha\sum_{p=0}^{n}e_{p}(\alpha). \quad (11)$$
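The weight adjustment with learning rate $\eta$ and momentum $\gamma$ can be sketched as a generic gradient-descent-with-momentum step; this is a sketch of the standard form, not necessarily the exact rule of Ishibuchi et al. (1995):

```python
def update_weights(a, grads, velocity, eta=0.01, gamma=0.01):
    """One gradient-descent step with a momentum term:
        delta_j(t) = -eta * de/da_j + gamma * delta_j(t-1)
        a_j(t+1)   = a_j(t) + delta_j(t)
    Returns the updated weights and the new velocity (momentum buffer).
    """
    new_a, new_v = [], []
    for aj, gj, vj in zip(a, grads, velocity):
        dj = -eta * gj + gamma * vj
        new_a.append(aj + dj)
        new_v.append(dj)
    return new_a, new_v
```

The momentum buffer carries the previous step $\Delta a_{j}(t-1)$, which smooths the descent trajectory across iterations.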

Upper bound approximation
Theorem 1 Suppose $p:\mathbb{R}\rightarrow\mathbb{R}$ is a continuous function. Then for each compact set $\vartheta\subset E_{0}$ (the set of all bounded fuzzy sets) and $\psi>0$, there exist $m\in\mathbb{N}$ and $a_{0},a_{i}\in\mathbb{R}$, $i=1,2,\ldots,m$, for which the stated approximation bound holds, where $\psi$ is a finite number.
Proof The proof of the theorem follows from the results below.
If $p:\mathbb{R}\rightarrow\mathbb{R}$, then by the extension principle $p$ can be extended to a fuzzy function $\hat{p}:E_{0}\rightarrow E$; $\hat{p}$ is called the extended function. Also, $cc(\mathbb{R})$ denotes the set of bounded closed intervals of $\mathbb{R}$.
Theorem 2 Suppose $p:\mathbb{R}\rightarrow\mathbb{R}$ is a continuous function. Then for each compact set $\vartheta\subset E_{0}$, $\varrho>0$ and arbitrary $\varepsilon>0$, there exist $m\in\mathbb{N}$ and $a_{0},a_{i}\in\mathbb{R}$, $i=1,2,\ldots,m$, for which the stated bound holds, where $\varrho$ is a finite number. The lower and upper bounds of the α-level set of the fuzzy function diminish to $\varrho$, while the center goes to $\varepsilon$.
Proof Since $\vartheta\subset E_{0}$ is compact, by Lemma 3 we may let $V\subset\mathbb{R}$ be the compact set associated with $\vartheta$. For every $\varepsilon>0$, by the main result in Cybenko (1989) there exist $m\in\mathbb{N}$ and $a_{0},a_{i}\in\mathbb{R}$, $i=1,2,\ldots,m$, such that the required approximation holds. Let $q(x)=\sum_{i=1}^{m}p_{i}(x)a_{i}+a_{0}$, $x\in\mathbb{R}$; then Theorem 4 implies the validity of (19).
Theorem 4 Suppose $\vartheta\subset E_{0}$ is compact, $V$ is the corresponding compact set of $\vartheta$, and $p,q:\mathbb{R}\rightarrow\mathbb{R}$ are continuous functions satisfying $\sup_{x\in V}|p(x)-q(x)|\le k$. Then for all $u\in\vartheta$, $d(\hat{p}(u)-\hat{q}(u))\le k$.

Numerical examples
The following examples illustrate the methodology proposed in this paper.
Example 5 The connection between three tanks and a pipeline, characterized by a constant $H$, is shown in Fig. 2. Water must be pumped in order to transfer it from one tank to the other two tanks. The system satisfies the relation below, where $x$ is the elapsed time and the flow quantities are as stated. The height of the pipe is denoted by $H$, and $A_{0}$, $A_{1}$, $A_{2}$ and $A_{3}$ are the pump characteristic coefficients. Four uncertain (fuzzy) data points are given below; the iteration over the data is continued 19 times.
We use $x_{0}=5$, $x_{1}=7$, $x_{2}=6$, $x_{3}=8$, $\eta=1\times10^{-2}$ and $\gamma=1\times10^{-2}$ for the FNN. The approximation results are reported in Table 1. The accuracy of the solutions $x_{0}(t)$, $x_{1}(t)$, $x_{2}(t)$ and $x_{3}(t)$ is shown in Fig. 3, where $t$ is the iteration number. As the number of iterations increases, the cost function diminishes to zero. The convergence behavior of the approximated solutions is portrayed in Figs. 4, 5, 6 and 7; to attain the exact solutions, the number of iterations in the figures would have to be increased further.

The exact solution for the given problem is $y=-4x^{2}-5x-3$. The problem is solved using the neural network technique suggested in this paper, taking $x_{0}=-0.5$, $x_{1}=-2.5$, $x_{2}=-1.5$, $\eta=3\times10^{-2}$ and $\gamma=3\times10^{-2}$.
The approximation results are reported in Table 2. The accuracy of the solutions $x_{0}(t)$, $x_{1}(t)$ and $x_{2}(t)$ is shown in Fig. 8, where $t$ is the number of iterations.
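As a crisp sanity check of this example, one can run plain gradient descent on sample points drawn from the exact solution $y=-4x^{2}-5x-3$ and verify that the coefficients converge; the sample points and learning rate below are illustrative, not from the paper:

```python
# Hypothetical crisp sample points taken from the exact solution y = -4x^2 - 5x - 3.
xs = [-1.0, 0.0, 1.0]
ys = [-4 * x * x - 5 * x - 3 for x in xs]

# Initial coefficient guesses as in the example: a0 = -0.5, a1 = -2.5, a2 = -1.5.
a = [-0.5, -2.5, -1.5]
eta = 0.1  # illustrative learning rate, larger than the paper's 3e-2 for a quick demo

for _ in range(5000):
    grads = [0.0, 0.0, 0.0]
    for x, d in zip(xs, ys):
        y = a[0] + a[1] * x + a[2] * x * x   # network output for input x
        err = y - d
        grads[0] += err                       # de/da0
        grads[1] += err * x                   # de/da1
        grads[2] += err * x * x               # de/da2
    a = [aj - eta * g for aj, g in zip(a, grads)]

print([round(aj, 4) for aj in a])  # should approach [-3, -5, -4]
```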

Concluding remarks
This research introduces a new methodology for finding a FIP that interpolates the fuzzy data $(x_{j},y_{j})$ (for $j=0,\ldots,n$). To achieve this goal, a FNN equivalent to the FIP was built, and a fast learning algorithm was defined for approximating the unknown crisp coefficients of the given polynomial. The proposed method is based on an approximating FNN, and MATLAB was used for the simulations. The method was validated on two examples, and the simulation results clearly illustrate the efficiency and computational advantages of the proposed technique; in particular, the approximation error is small.
Fig. 8: The error between the approximate solution and the exact solution.