Recurrence relations for orthogonal polynomials for PDEs in polar and cylindrical geometries
SpringerPlus volume 5, Article number: 1567 (2016)
Abstract
This paper introduces two families of orthogonal polynomials on the interval (−1,1), with weight function \(\omega (x)\equiv 1\). The first family satisfies the boundary condition \(p(1)=0\), and the second one satisfies the boundary conditions \(p(-1)=p(1)=0\). These boundary conditions arise naturally from PDEs defined on a disk with Dirichlet boundary conditions and the requirement of regularity in Cartesian coordinates. The families of orthogonal polynomials are obtained by orthogonalizing short linear combinations of Legendre polynomials that satisfy the same boundary conditions. Then, the three-term recurrence relations are derived. Finally, it is shown that from these recurrence relations, one can efficiently compute the corresponding recurrences for generalized Jacobi polynomials that satisfy the same boundary conditions.
Background
When PDEs in polar or cylindrical geometries are mapped to rectangular domains using polar coordinates, it makes sense to use spectral methods (Shen 1997). Numerous algorithms based on spectral-collocation and spectral-tau methods already exist. See, for example, Canuto et al. (1987), Eisen et al. (1991), Fornberg (1995), Gottlieb and Orszag (1977), Huang and Sloan (1993).
After applying separation of variables in polar coordinates, the resulting PDEs that depend on the radial coordinate r and time t can be solved numerically using a Legendre-Galerkin formulation similar to that used for the steady-state problem (Shen 1997). It is natural to use bases of polynomials that satisfy the boundary conditions for each PDE, and these can easily be obtained by taking short linear combinations of Legendre polynomials.
Unlike Legendre polynomials, the bases used in Shen (1997) are not orthogonal with respect to the weight function \(\omega (x)\equiv 1\). In Shen (2003) orthogonal bases were introduced that also satisfy these same boundary conditions. They are generalized Jacobi polynomials (GJPs) with indices \(\alpha ,\beta \le -1\), orthogonal with respect to the weight function \(\omega ^{\alpha ,\beta }(x)\equiv (1-x)^\alpha (1+x)^\beta \). GJPs corresponding to specific indices \((\alpha ,\beta )\) were introduced in Shen (2003) for the purpose of solving differential equations of odd higher order. Generalization to other (non-integer) indices was carried out in Guo et al. (2009) to obtain families of orthogonal polynomials for Chebyshev spectral methods or problems with singular coefficients. However, although these GJPs can be described in terms of short linear combinations of Legendre polynomials, at least for certain index pairs of interest (Guo et al. 2009; Shen 2003), the three-term recurrence relations characteristic of families of orthogonal polynomials have not been developed in these cases.
In this paper, we use the bases from Shen (1997) to develop families of polynomials that are orthogonal with respect to \(\omega (x)\equiv 1\) and satisfy the requisite boundary conditions, to facilitate transformation between physical and frequency space without using functions such as the Legendre polynomials that lie outside of the solution space. These families can also be efficiently modified to work with alternative weight functions, thus leading to the development of new numerical methods. In particular, it is demonstrated that these new families can be used to obtain three-term recurrence relations for the GJPs that satisfy the same boundary conditions.
The outline of the paper is as follows. In section “Variational formulation”, we provide context for these families of polynomials by adapting the variational formulation employed in Shen (1997) to the time-dependent PDE (1)–(3). In section “The case m = 0” we develop orthogonal polynomials with unit weight function satisfying the boundary condition \(p(1)=0\). In section “The case m ≠ 0” we do the same for the boundary conditions \(p(-1)=p(1)=0\). In section “Recurrence relations for generalized Jacobi polynomials” we describe how these families of orthogonal polynomials can be efficiently modified to obtain three-term recurrence relations for GJPs as described in Guo et al. (2009), Shen (2003). Concluding remarks and directions for future work are given in section “Conclusions”.
Variational formulation
In this section, we describe one possible context in which the sequences of orthogonal polynomials discussed in this paper can be applied.
Conversion to polar coordinates
We consider the reaction-diffusion equation on a unit disk
where \(\alpha \) is a constant.
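For reference (a standard identity, not one of the paper's numbered equations), the Laplace operator in the diffusion term takes the following form in polar coordinates, which is the source of the singular coefficients at the pole \(r=0\) discussed below:
$$\begin{aligned} \Delta u=\frac{\partial ^{2}u}{\partial r^{2}}+\frac{1}{r}\frac{\partial u}{\partial r}+\frac{1}{r^{2}}\frac{\partial ^{2}u}{\partial \theta ^{2}}. \end{aligned}$$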
Following the approach used in Shen (1997) for a steady-state problem, we can convert the IBVP in (1)–(3) to polar coordinates by applying the polar transformation \(x=r\cos \theta ,\) \(y=r\sin \theta \) and letting \(u\left( r,\theta \right) =U\left( r\cos \theta ,r\sin \theta \right) ,\) \(f\left( r,\theta \right) =F\left( r\cos \theta ,r\sin \theta \right) .\) The resulting problem in polar coordinates is as follows:
The solution is represented using the Fourier series
The Fourier coefficients \(u_{1,m}(r,t)\), \(u_{2,m}(r,t)\) must satisfy the boundary conditions \(u_{1,m}(1,t) = u_{2,m}(1,t) = 0\) for \(m=0,1,2,\ldots \). Due to the singularity at the pole \(r=0,\) we must impose additional pole conditions on (5) to have regularity in Cartesian coordinates. For \(u(r,\theta ,t)\) to be infinitely differentiable in the Cartesian plane, the additional pole conditions are as follows (Shen 1997):
By substituting the series (5) into (4) and applying the pole conditions in (6), we obtain the following ODEs, for each nonnegative integer m:
where, with a slight abuse of notation, u and f now denote generic Fourier coefficients of the solution and the source term, respectively.
Weighted formulation
We will extend (7) to the interval \(\left( -1,1\right) \) using a coordinate transformation as in Shen (1997). Using the coordinate transformation \(r=\frac{s+1}{2}\) in (7) and setting \(v(s)=u\left( \frac{s+1}{2}\right) \), we obtain
where \(g\left( s\right) =f\left( \frac{s+1}{2}\right) .\) To obtain a weighted variational formulation of (8), we must find \(v\in X\left( m\right) \) such that
where \(X\left( m\right) =H_{0,\omega }^{1}\left( I\right) \) if \(m\ne 0,\) \(X(0)=\left\{ v\in H_{\omega }^{1}\left( I\right) : v\left( 1\right) =0\right\} \) and \(\omega \) is a weight function.
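For reference, under the coordinate transformation \(r=\frac{s+1}{2}\) used above, the interval \(0<r<1\) is mapped to \(I=(-1,1)\), and derivatives pick up constant factors by the chain rule (a standard computation, not one of the paper's numbered equations):
$$\begin{aligned} \frac{d}{dr}=2\,\frac{d}{ds},\qquad \frac{d^{2}}{dr^{2}}=4\,\frac{d^{2}}{ds^{2}}. \end{aligned}$$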
Legendre-Galerkin method
To approximate (9) using the Legendre-Galerkin method, we take \(\omega \equiv 1\) and seek \(v_{N}\in X_{N}\left( m\right) \) such that \(\forall w\in X_{N}\left( m\right) \),
where \(I_{N}\) is the interpolation operator based on the Legendre–Gauss–Lobatto points. That is, \(\left( I_{N}g\right) \left( t_{i}\right) =g\left( t_{i}\right) ,\) \(i=0,1,\ldots ,N,\) where \(\left\{ t_{i}\right\} \) are the roots of \(\left( 1-t^{2}\right) L_{N}^{\prime }\left( t\right) \) and \(L_N\) is the Legendre polynomial of degree N.
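These points can be computed numerically as follows. This is a minimal Python sketch (not part of the original paper); the function name lgl_points is ours, and only NumPy's Legendre module is assumed.
```python
import numpy as np
from numpy.polynomial import legendre as leg

def lgl_points(N):
    """Return the N+1 Legendre-Gauss-Lobatto points, i.e. the roots of
    (1 - t**2) * L_N'(t), used by the interpolation operator I_N."""
    cN = np.zeros(N + 1)
    cN[N] = 1.0                              # Legendre-series coefficients of L_N
    interior = leg.legroots(leg.legder(cN))  # roots of L_N'
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))
```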
The case \(m=0\)
In the case where \(m=0\), (10) reduces to
As before, we let \(L_{k}\left( t\right) \) be the \(k\mathrm {th}\)-degree Legendre polynomial, and define \(X_{N}(0)\) to be the space of all polynomials of degree less than or equal to N that vanish at 1. This space can be described as follows (Shen 1997):
where \(\phi _{i}\left( t\right) \) is the ith basis function. By applying the Gram-Schmidt process (Burden and Faires 2005) to these basis functions \(\phi _{i}\left( t\right) \), we can obtain a new set of orthogonal polynomials that will be denoted by \(\tilde{\phi }_{i}\), \(i=0,1,2,\ldots \), where the degree of \(\phi _{i}\) and \(\tilde{\phi }_{i}\) is \(i+1\). The new basis functions, \(\tilde{\phi }_{i}\), can be found by computing
Fortunately, for \(0\le k\le i-2\),
due to the orthogonality of the Legendre polynomials, thus greatly simplifying the computation of \(\tilde{\phi }_i\).
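Spelled out (as a sketch; the paper's numbered equations are not reproduced here), the Gram-Schmidt step reads
$$\begin{aligned} \tilde{\phi }_{0}=\phi _{0},\qquad \tilde{\phi }_{i}=\phi _{i}-\sum _{k=0}^{i-1}\frac{\left\langle \phi _{i},\tilde{\phi }_{k}\right\rangle }{\left\langle \tilde{\phi }_{k},\tilde{\phi }_{k}\right\rangle }\,\tilde{\phi }_{k},\quad i=1,2,\ldots , \end{aligned}$$
and the observation above reduces the sum to the single term \(k=i-1\).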
To start the sequence \(\{ \tilde{\phi }_i \}\), we let
so then
and
The first several polynomials \(\tilde{\phi }_0, \tilde{\phi }_1,\ldots , \tilde{\phi }_4\) are shown in Fig. 1.
Now, comparing \(\phi _{1}\) with \(\tilde{\phi }_{1}\) and \(\phi _{2}\) with \(\tilde{\phi }_{2}\), we can find a general formula for the \(\tilde{\phi }_{i}\) in terms of \(\phi _i\). By subtracting \(\phi _{i}\) from \(\tilde{\phi }_{i}\), we obtain
and
This suggests a simple recurrence relation for \(\tilde{\phi }_i\) in terms of \(\phi _i\). Before we prove that this relation holds in general, we need the following result.
Lemma 1
Let \(N_{k}=\left\langle \tilde{\phi }_{k},\tilde{\phi }_{k}\right\rangle \). Then
\(\forall k\ge 0.\)
Proof
We proceed by induction. For the base case, we have
For the induction step, we assume that there is a \(k > 0,\) such that \(N_{k-1}=\frac{2\left( k+1\right) ^{2}}{k^{2}\left( 2k+1\right) }\). We must show that the formula found in Eq. (15) is true for k. Given \(\tilde{\phi }_{k}=\phi _{k}+\left( \frac{k}{k+1}\right) ^{2}\tilde{\phi }_{k-1},\) and using
we have
\(\square \)
We can now establish the pattern seen in (13), (14).
Theorem 1
If \(\tilde{\phi }_0(x) = 1-x\) and \(\tilde{\phi }_i\) is obtained by orthogonalizing \(\phi _i = L_{i} - L_{i+1}\) against \(\tilde{\phi }_0, \tilde{\phi }_1, \ldots , \tilde{\phi }_{i-1}\), then
for \(i=1,2,\ldots ,\) where \(c_{i}=\left( \frac{i}{i+1}\right) ^{2}\).
Proof
Again we proceed by induction. For the base case, we will show that the theorem holds when \(i=1\):
Note that Eq. (18) is equivalent to Eq. (13). For the induction step, we assume that there is a \(j \ge 0\), such that
We show that (17) holds when \(i=j+1.\) We have
Therefore, using Lemma 1 and (16), we obtain
\(\square \)
We now prove a converse of Theorem 1.
Theorem 2
If \(\tilde{\phi }_0(x) = 1-x\) and \(\tilde{\phi }_i\) is defined as in (17) for \(i=1,2,\ldots ,\) then \(\left\langle \tilde{\phi }_{k},\tilde{\phi }_{j}\right\rangle =0\) when \(j<k.\)
Proof
Case 1: \(j<k-1\)
Case 2: \(j=k-1\)
\(\square \)
All orthogonal polynomials satisfy a general three-term recurrence relation that has the form
where \(\alpha _{j}\), \(\beta _{j}\) and \(\gamma _j\) are constants. By enforcing orthogonality, we obtain the formulas
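In the notation used in the proofs below (the displayed equations (20)–(23) themselves are not reproduced here), the first two coefficients are the orthogonal projections
$$\begin{aligned} \alpha _{j}=\frac{\left\langle \tilde{\phi }_{j},x\tilde{\phi }_{j}\right\rangle }{\left\langle \tilde{\phi }_{j},\tilde{\phi }_{j}\right\rangle },\qquad \beta _{j}=\frac{\left\langle \tilde{\phi }_{j+1},x\tilde{\phi }_{j}\right\rangle }{\left\langle \tilde{\phi }_{j+1},\tilde{\phi }_{j+1}\right\rangle }, \end{aligned}$$
with \(\gamma _j\) given by the analogous projection involving \(\tilde{\phi }_{j-1}\); these are the forms quoted in the proofs of Theorems 3 and 4.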
First, we will find the value of \(\alpha _{j}\).
Theorem 3
Let \(\alpha _j\) be defined as in (21). Then \(\alpha _{j}=-\frac{1}{\left( j+1\right) \left( j+2\right) },\; \forall j\ge 0.\)
Proof
Base case: When \(j=0\), we use (21) to obtain
For the induction hypothesis, we assume there is a \(j > 0\) such that \(\alpha _{j-1}=-\frac{1}{j\left( j+1\right) }\). From \(\alpha _{j}=\frac{\left\langle \tilde{\phi }_{j},x\tilde{\phi }_{j}\right\rangle }{\left\langle \tilde{\phi }_{j},\tilde{\phi }_{j}\right\rangle }\) and \(\tilde{\phi }_{j}=\phi _{j}+c_{j}\tilde{\phi }_{j-1}\), where \(c_{j}=\left( \frac{j}{j+1}\right) ^{2}\), we obtain
Now, from the recurrence relation for Legendre polynomials, we obtain
and
To calculate the middle term in Eq. (24) we will multiply \(2c_{j}\) by the result from Eq. (26):
We rearrange the formula for \(\alpha _{j-1}\) to obtain the following:
Therefore,
Now we can use the results from Eqs. (25)–(28) to determine the numerator of \(\alpha _{j}.\)
Hence,
\(\square \)
Now, we will find the value of \(\beta _{k}\).
Theorem 4
Let \(\beta _j\) be defined as in (22). Then \(\beta _{j}=\frac{j+2}{2j+3},\; \forall j\ge 0.\)
Proof
For the base case, we consider \(j=0\):
For the induction step, we assume there is a \(j\ge 0\) such that \(\beta _{j-1}=\frac{j+1}{2j+1}\). From \(\beta _{j}=\frac{\left\langle \tilde{\phi }_{j+1},x\tilde{\phi }_{j}\right\rangle }{\left\langle \tilde{\phi }_{j+1},\tilde{\phi }_{j+1}\right\rangle }\) and \(\tilde{\phi }_{j}=\phi _{j}+c_{j}\tilde{\phi }_{j-1}\) where \(c_{j}=\left( \frac{j}{j+1}\right) ^{2},\) we obtain
Using the recurrence relation for Legendre polynomials, we obtain
and
We then have
The last term in (29) is obtained as follows:
We then have
We rearrange the formula for \(\beta _{j-1}\) to obtain the following:
Therefore,
Now we can use the results from Eqs. (30)–(33) to determine the numerator of \(\beta _{j}.\)
Hence,
\(\square \)
Using the same approach as in the preceding proof, we obtain
In summary, the polynomials \(\{ \tilde{\phi }_i \}\) satisfy the recurrence relation
We can rewrite Eq. (19) as \(\tilde{\phi }_{j}-c_{j}\tilde{\phi }_{j-1}=\phi _{j}\). In matrix form, we have
where \( \varPhi =\left[ \begin{array}{cccc} \phi _{0}(\mathbf {x})&\phi _{1}(\mathbf {x})&\cdots&\phi _{n}(\mathbf {x}) \end{array}\right] \) and \(\tilde{\varPhi }=\left[ \begin{array}{cccc}\tilde{\phi }_{0}(\mathbf {x})&\tilde{\phi }_{1}(\mathbf {x})&\cdots&\tilde{\phi }_{n}(\mathbf {x})\end{array}\right] \), with \(\mathbf{x}\) being a vector of at least \(n+2\) Legendre–Gauss–Lobatto points. This ensures that the columns of \(\tilde{\varPhi }\) are orthogonal.
Then, given \(f \in X_{n+1}(0)\), we can obtain the coefficients \(\tilde{f}_i\) in
by simply computing \(\tilde{f}_i = \langle \tilde{\phi }_i, f \rangle / N_i\), where \(N_i\) is as defined in (15). Then the coefficients \(f_i\) in
can be obtained by solving the system \(C\mathbf{f} = \tilde{\mathbf{f}}\) using back substitution, where C is as defined in (36). These coefficients can be used in conjunction with the discretization used in Shen (1997), which makes use of the basis \(\{ \phi _i \}\).
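The transform just described can be sketched in a few lines of Python. This is not the authors' implementation: it assumes \(\phi _i = L_i - L_{i+1}\) (consistent with Theorem 1 and \(\tilde{\phi }_0 = 1-x\)), assumes that C in (36) is unit upper bidiagonal with \(-c_j\) in position \((j-1,j)\) (consistent with \(\phi _j = \tilde{\phi }_j - c_j\tilde{\phi }_{j-1}\)), and, for simplicity, evaluates the inner products with Gauss-Legendre quadrature rather than by forming the matrices \(\varPhi \) and \(\tilde{\varPhi }\).
```python
import numpy as np
from numpy.polynomial import legendre as leg

def phi_tilde_basis(n):
    """Legendre-coefficient vectors of phi~_0, ..., phi~_n, built from the
    recurrence phi~_i = phi_i + c_i phi~_{i-1}, c_i = (i/(i+1))**2 (Theorem 1),
    with phi_i = L_i - L_{i+1} (an assumption; see the text above)."""
    basis, prev = [], None
    for i in range(n + 1):
        p = np.zeros(n + 2)
        p[i] += 1.0            # L_i
        p[i + 1] -= 1.0        # -L_{i+1}
        if i > 0:
            p += (i / (i + 1)) ** 2 * prev
        basis.append(p)
        prev = p
    return basis

def phi_coefficients(f, n):
    """Expand a callable f (a polynomial of degree <= n+1 vanishing at 1)
    in the phi~ basis, then recover its coefficients in the phi basis."""
    x, w = leg.leggauss(n + 2)           # exact up to degree 2n+3
    basis = phi_tilde_basis(n)
    N = [2.0 * (i + 2) ** 2 / ((i + 1) ** 2 * (2 * i + 3))   # N_i from Lemma 1
         for i in range(n + 1)]
    f_tilde = np.array([w @ (leg.legval(x, b) * f(x)) / N[i]  # <phi~_i, f>/N_i
                        for i, b in enumerate(basis)])
    coeffs = f_tilde.copy()              # back substitution for C f = f~
    for j in range(n, 0, -1):
        coeffs[j - 1] += (j / (j + 1)) ** 2 * coeffs[j]
    return coeffs
```
As a quick check, applying the routine to \(f=\tilde{\phi }_2\) returns \(f_2=1\), \(f_1=c_2\) and \(f_0=c_1c_2\), matching \(\tilde{\phi }_2=\phi _2+c_2\phi _1+c_1c_2\phi _0\).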
The case \(m\ne 0\)
In the case where \(m\ne 0\), we work with the space
As discussed in Shen (1997), this space can easily be described in terms of Legendre polynomials:
Applying the Gram-Schmidt process to the basis functions \(\{ \phi _i \}\), we obtain a new set of orthogonal polynomials that will be denoted as \(\{ \hat{\phi }_{i} \}\). These basis functions are obtained in the same way as in Eq. (11). First, we let
and
Then, we have
and
The graphs of the first several members of the sequence \(\{ \hat{\phi }_i \}\) are shown in Fig. 2.
Again, we compare \(\phi _{2}\) with \(\hat{\phi }_{2}\) and \(\phi _{3}\) with \(\hat{\phi }_{3}\) to find a general formula for \(\hat{\phi }_{i}\) in terms of \(\phi _i\). We obtain the following formulas:
and
These results suggest a simple recurrence relation for \(\hat{\phi }_i\) in terms of \(\phi _i\) and \(\hat{\phi }_{i-2}\), in which the coefficient of \(\hat{\phi }_{i-2}\) is a ratio of triangular numbers \(d_i = i(i-1)/[(i+1)(i+2)]\). We therefore define
with initial conditions
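Written out (as a sketch; the paper's numbered definitions (38), (39) are not reproduced here), the recurrence and initial conditions just described are
$$\begin{aligned} \hat{\phi }_{i}=\phi _{i}+d_{i}\,\hat{\phi }_{i-2},\quad i\ge 2,\qquad \hat{\phi }_{0}=\phi _{0},\quad \hat{\phi }_{1}=\phi _{1}, \end{aligned}$$
which is consistent with Theorem 5 below.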
To prove that these polynomials are actually orthogonal, we first need this result.
Lemma 2
Let \(\hat{\phi }_j(x)\) be defined as in (38), (39), and let \(N_{j}=\left\langle \hat{\phi }_{j},\hat{\phi }_{j}\right\rangle \), \(\forall j\ge 2.\) Then
\(\forall j\ge 2\) and
Proof
For the base case, we compute \(N_0\) and \(N_1\) directly. We have
and
For the induction step, we assume there is a \(j>2\) such that \(N_{j-2}=\frac{2\left( j+1\right) \left( j+2\right) }{\left( j-1\right) j\left( 2j+1\right) }.\) Now, we must show that formula (40) holds for j. We have
\(\square \)
Theorem 5
Let \(\hat{\phi }_i\) be obtained by orthogonalizing \(\phi _i\) against \(\hat{\phi }_0, \hat{\phi }_1, \ldots \) Then \(\hat{\phi }_0 = \phi _0\), \(\hat{\phi }_1 = \phi _1\), and
where \(d_{j}=\frac{j\left( j-1\right) }{\left( j+1\right) \left( j+2\right) }\).
Proof
For the base case, we first show that \(\hat{\phi }_1 = \phi _1\) and \(\hat{\phi }_0 = \phi _0\) are already orthogonal. We have
Next, we show directly that the theorem holds when \(j=2\):
For the induction step, we assume that \(\hat{\phi }_{0},\ldots ,\hat{\phi }_{j-1}\) are all orthogonal, where \(j\ge 2\), and that
where \(d_{j}=\frac{j(j-1)}{(j+1)(j+2)}\). Then
Using Lemma 2, we obtain
\(\square \)
We now confirm that the polynomials defined using the recurrence (41) are orthogonal.
Theorem 6
Let \(\hat{\phi }_{k}\) be defined as follows:
Then \(\left\langle \hat{\phi }_{k},\hat{\phi }_{j}\right\rangle =0\) for \(j\ne k\).
Proof
We will show that for each \(k\ge 0,\) \(\left\langle \hat{\phi }_{k},\hat{\phi }_{j}\right\rangle =0\) for \(0\le j<k.\) The case \(k=1\) was handled in the proof of Theorem 5. Proceeding by induction, we assume \(\hat{\phi }_{0},\ldots ,\hat{\phi }_{k-1}\) are all orthogonal, and show that \(\left\langle \hat{\phi }_{k},\hat{\phi }_{j}\right\rangle =0\) for \(j=0,1,\ldots ,k-1.\)
Case 1: \(j<k-2\)
Case 2: \(j=k-2\)
Case 3: \(j=k-1\). If \(k \ge 3\), then we have
If \(k=2\), then the steps are the same, except that the term with \(\hat{\phi }_{k-3}\) is not present. \(\square \)
Like all families of orthogonal polynomials, the \(\{ \hat{\phi }_k \}\) satisfy the recurrence relation
By analogy with (21), (22) and (23), we have
Because \(\hat{\phi }_j\) contains only terms of odd degree if j is odd and of even degree if j is even, just like the Legendre polynomials, it is easily shown that \(\alpha _j = 0\) for \(j=0,1,2,\ldots \) We will now find the values of \(\beta _{j}\) and \(\gamma _j\).
Theorem 7
Let \(\beta _j\) be defined as in (45). Then \(\beta _{j}=\frac{j+3}{2j+5},\quad \forall j\ge 0.\)
Proof
We show the base case \(j=0\) directly:
For the induction step, we assume there is a \(j\ge 0\) such that \(\beta _{j-1}=\frac{j+2}{2j+3}\).
Then, using (45), we have \(\beta _{j}=\frac{\left\langle \hat{\phi }_{j+1},x\hat{\phi }_{j}\right\rangle }{\left\langle \hat{\phi }_{j+1},\hat{\phi }_{j+1}\right\rangle }\) and \(\hat{\phi }_{j}=\phi _{j}+d_{j}\hat{\phi }_{j-2}\) where \(d_{j}=\frac{j\left( j-1\right) }{\left( j+1\right) \left( j+2\right) }.\) For the numerator, we have
We now compute each part of this numerator as follows:
Then
For the third term in (47), we have
and therefore
We rearrange the formula for \(\beta _{j-2}\) to obtain the following:
Therefore,
Now we can use the results from Eqs. (48)–(51) to determine the numerator of \(\beta _{j}.\)
Thus,
\(\square \)
From (46), (52), and Lemma 2, we obtain
In summary, we have
Equation (41) can be rewritten as \(\phi _{j}=\hat{\phi }_{j}-d_{j}\hat{\phi }_{j-2}\). Now, we have the system
where \( \Phi =\left[ \begin{array}{cccc} \phi _{0}(\mathbf {x})&\phi _{1}(\mathbf {x})&\cdots&\phi _{n}(\mathbf {x}) \end{array}\right] \) and \(\hat{\Phi }=\left[ \begin{array}{cccc}\hat{\phi }_{0}(\mathbf {x})&\hat{\phi }_{1}(\mathbf {x})&\cdots&\hat{\phi }_{n}(\mathbf {x})\end{array}\right] \), with \(\mathbf{x}\) being a vector of at least \(n+3\) Legendre-Gauss-Lobatto points. This ensures that the columns of \(\hat{\Phi }\) are orthogonal.
Then, given \(f \in X_{n+2}(m)\), we can obtain the coefficients \(\hat{f}_i\) in
by simply computing \(\hat{f}_i = \langle \hat{\phi }_i, f \rangle / N_i\), where \(N_i\) is as defined in (40). Then the coefficients \(f_i\) in
can be obtained by solving the system \(D\mathbf{f} = \hat{\mathbf{f}}\) using back substitution, where D is as defined in (55). These coefficients can be used in conjunction with the discretization used in Shen (1997), which makes use of the basis \(\{ \phi _i \}\).
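Only the back substitution step changes relative to the Python sketch given for the case \(m=0\). The following assumes that D in (55) is unit upper triangular with \(-d_j\) in position \((j-2,j)\), consistent with \(\phi _j = \hat{\phi }_j - d_j\hat{\phi }_{j-2}\); it is a sketch, not the authors' implementation.
```python
import numpy as np

def hat_to_phi(f_hat):
    """Recover phi-basis coefficients from phi^-basis coefficients by back
    substitution in D f = f^, where D is unit upper triangular with
    D[j-2, j] = -d_j (an assumed form of Eq. (55))."""
    coeffs = np.array(f_hat, dtype=float)
    n = len(coeffs) - 1
    for j in range(n, 1, -1):
        d_j = j * (j - 1) / ((j + 1) * (j + 2))
        coeffs[j - 2] += d_j * coeffs[j]
    return coeffs
```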
Recurrence relations for generalized Jacobi polynomials
The families of orthogonal polynomials developed in the preceding two sections are orthogonal with respect to the weight function \(\omega (x)\equiv 1\). In Guo et al. (2009), Shen (2003), families of generalized Jacobi polynomials/functions (GJP/Fs) are defined in such a way as to satisfy specified boundary conditions, such as the ones employed in this paper. These functions are orthogonal with respect to a weight function that is determined by the boundary conditions. However, it can be seen from (10) that an alternative weight function may be preferable when solving certain PDEs. In this section, we discuss the modification of sequences of orthonormal polynomials and their three-term recurrence relations as a consequence of changes in the underlying weight function.
Let \(J_n\) be the \(n\times n\) Jacobi matrix consisting of the recursion coefficients corresponding to a sequence of polynomials \(p_j(t)\), \(j=0,1,\ldots , n-1\) that is orthonormal with respect to the inner product
where \(d\lambda (t) = \omega (t)\,dt\), and let \(\tilde{J}_n\) be the \(n\times n\) Jacobi matrix for a sequence of polynomials \(\tilde{p}_j(t)\), \(j=0,1,\ldots , n-1\) that is orthonormal with respect to the inner product
where the measure \(d\tilde{\lambda }(t) = \tilde{\omega }(t)\,dt\) is a modification of \(d\lambda (t)\) by some factor. The following procedures can be used to generate \(\tilde{J}_n\) from \(J_n\):
-
Multiplying by a linear factor: In the case \(d\tilde{\lambda }(t) = (t-c)\,d\lambda (t)\), we have
$$\begin{aligned} \tilde{J}_n = L^T L + c I + \left( \frac{\delta _{n-1}}{\ell _{nn}} \right) ^2 \mathbf{e}_n \mathbf{e}_n^T, \end{aligned}$$ (56)
where \(J_n - c I = LL^T\) is the Cholesky factorization (Gautschi 2002; Golub and Kautsky 1983).
-
Dividing by a linear factor: In the case \(d\tilde{\lambda }(t) = (t-c)^{-1}\, d\lambda (t)\), where c is near or on the boundary of the interval of integration, the inverse Cholesky (IC) procedure (Elhay and Kautsky 1994) can be used to obtain \(\tilde{J}_n\). We have
$$\begin{aligned} \tilde{J}_n = L^{-1} J_n L - cI + \frac{\delta _{n-1}}{\ell _{nn}} \mathbf{e}_n \mathbf{c}^T, \end{aligned}$$where \(I = (J_n - cI)LL^T + \mathbf{e}_n \mathbf{d}^T\) and \(\mathbf{c}\) and \(\mathbf{d}\) are vectors that need not be computed if one is content with only computing \(\tilde{J}_{n-1}\).
In either case, the original and modified polynomials are related by L:
where \(\mathbf{p}(t) = \left[ \begin{array}{ccc} p_0(t)&\ldots&p_{n-1}(t) \end{array} \right] ^T\) and \(\tilde{\mathbf{p}}(t) = \left[ \begin{array}{ccc} \tilde{p}_0(t)&\ldots&\tilde{p}_{n-1}(t) \end{array} \right] ^T\).
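As an illustration of the first procedure above, the following Python sketch performs the multiplication by a linear factor. It assumes that c lies below the support of \(d\lambda \), so that \(J_n - cI\) is positive definite, and it returns only the leading \((n-1)\times (n-1)\) block, for which the rank-one correction in (56) is not needed.
```python
import numpy as np

def multiply_by_linear_factor(J, c):
    """Given the symmetric tridiagonal Jacobi matrix J (dense n-by-n array)
    for a measure dlam, return the Jacobi matrix of order n-1 for the
    modified measure (t - c) dlam, via J - cI = L L^T and J~ = L^T L + cI."""
    n = J.shape[0]
    L = np.linalg.cholesky(J - c * np.eye(n))   # lower triangular factor
    J_mod = L.T @ L + c * np.eye(n)
    return J_mod[:n - 1, :n - 1]                # last entry would need (56)
```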
While three-term recurrence relations for the Jacobi polynomials are well-known, we are not aware of similar recurrence relations for GJPs. We now present efficient algorithms for modifying either of the families of polynomials \(\{ \tilde{\phi }_j \}\), \(\{ \hat{\phi }_j \}\) to obtain such recurrences.
Boundary condition \(p(1)=0\)
We first demonstrate how the polynomials \(\{ \tilde{\phi }_j \}\) from section “The case m = 0” can be modified to obtain the three-term recurrence for the GJPs
which are orthogonal on \((-1,1)\) with respect to the weight function \((1-x)^{-1}\) (Guo et al. 2009; Shen 2003). Like the \(\{ \tilde{\phi }_j \}\), these polynomials satisfy the boundary condition \(\varphi _j(1) = 0\).
Let
be the matrix of recursion coefficients for the \(\{ \tilde{\phi }_j \}_{j=0}^{n-1}\), where \(\alpha _j\), \(\beta _j\) and \(\gamma _j\) are as defined in (21), (22), (23), respectively. First, we apply a diagonal similarity transformation to symmetrize \(J_n\), which yields
where \(\delta _j = \sqrt{\gamma _j\beta _j}\) for \(j=0,1,\ldots , n-2\).
Let \(\hat{J}_n\) be the Jacobi matrix for the polynomials \(\varphi _j(x)\). Since its measure is a modification of that of \(J_n\) and \(\tilde{J}_n\) by dividing by a linear factor, the IC algorithm can certainly be used to compute \(\hat{J}_{n-1}\) directly from \(\tilde{J}_n\), but this requires \(O(n^3)\) arithmetic operations, which exceeds the cost of computing the entries of \(\hat{J}_n\) directly as inner products using the Rodrigues formula (57).
Alternatively, we can invert the procedure described above for handling modification by multiplying by a linear factor. First, we let \(T_n = I - \tilde{J}_n\), in view of the modification \(d\tilde{\lambda }(t) = (1-t)^{-1} d\lambda (t)\). Then, we solve the (n, n) entry of the matrix equation
for \(\ell _{nn}^2\), where L is lower triangular. As this equation is quadratic in \(\ell _{nn}^2\), we choose the larger root. The entry \(\hat{\delta }_{n-1}\) of \(\hat{J}_n\) can be computed using (57).
Next, we compute the factorization
which amounts to performing a Cholesky factorization “in reverse”, as reversing the order of the rows and columns of this matrix equation leads to a Cholesky factorization. Finally, we obtain
This matrix actually differs from the correct \(\hat{J}_n\) in the (n, n) entry. Therefore, deleting the last row and column yields the correct \(\hat{J}_{n-1}\). The entire procedure can be carried out in only O(n) arithmetic operations, due to the fact that L is actually lower bidiagonal.
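The “Cholesky in reverse” step can be sketched as follows (a Python sketch with T stored densely for clarity, not the authors' implementation); the value of \(\ell _{nn}\) is assumed to have been obtained from the quadratic equation described above, which is not reproduced here.
```python
import numpy as np

def reverse_cholesky_tridiag(T, ell_nn):
    """Factor the symmetric tridiagonal T (n-by-n) as L L^T with L lower
    bidiagonal, working upward from the bottom-right corner given
    L[n-1, n-1] = ell_nn (0-based indexing).  Returns the diagonal d and
    subdiagonal e of L in O(n) operations; the (0, 0) entry of T is not
    referenced."""
    n = T.shape[0]
    d = np.zeros(n)        # d[j] = L[j, j]
    e = np.zeros(n)        # e[j] = L[j, j-1]  (e[0] unused)
    d[n - 1] = ell_nn
    for j in range(n - 1, 0, -1):
        e[j] = np.sqrt(T[j, j] - d[j] ** 2)
        d[j - 1] = T[j, j - 1] / e[j]
    return d, e
```
The matrix \(I - LL^T\), with its last row and column discarded, then yields the desired Jacobi matrix, as described above.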
Boundary conditions \(p(-1)=p(1)=0\)
We now show how to efficiently obtain recursion coefficients for the GJPs
which are orthogonal on \((-1,1)\) with respect to the weight function \((1-x)^{-1}(1+x)^{-1}\). Like the \(\{ \hat{\phi }_j \}\) from section “The case m ≠ 0”, these polynomials satisfy the boundary conditions \(\varphi _j(-1) = \varphi _j(1) = 0\).
Let \(J_n\), \(\tilde{J}_n\) be defined as in (58), (59), except that \(\alpha _j\), \(\beta _j\) and \(\gamma _j\) are as defined in (44), (45), (46), respectively, and let \(\hat{J}_n\) be the Jacobi matrix for the polynomials \(\hat{\varphi }_j(x)\). Since its measure is a modification of that of \(J_n\) and \(\tilde{J}_n\) by dividing by two distinct linear factors, the IC algorithm can be applied twice to compute \(\hat{J}_{n-2}\) directly from \(\tilde{J}_n\), but as before, we seek a more efficient approach.
The main idea is to apply the process from section “Boundary condition \(p(1)=0\) ” twice. In this case, however, it is more complicated because we do not have all of the information we need. As an intermediate step, let \(\bar{J}_n\) be the Jacobi matrix for polynomials \(\bar{\varphi }_j(x)\) that are orthonormal with respect to the weight function \(\bar{\omega }(x) = (1-x)^{-1}\). The goal is to first obtain \(\bar{J}_{n-1}\) from \(\tilde{J}_n\), and then obtain \(\hat{J}_{n-2}\) from \(\bar{J}_{n-1}\).
As before, we let \(T_n = I - \tilde{J}_n\). We then need to solve the (n, n) entry of the matrix equation
for \(\ell _{nn}^2\), where \(\bar{\delta }_{n-1} = \langle x\bar{\varphi }_{n-2}, \bar{\varphi }_{n-1} \rangle _{\bar{\omega }}\). However, unlike in section “Boundary condition \(p(1)=0\) ”, the value of \(\bar{\delta }_{n-1}\) is unknown. For now, we leave it as a variable and describe the remainder of the procedure.
Proceeding as before, we compute the factorization
and then obtain \(\bar{J}_n = I - LL^T\). As this differs from the true \(\bar{J}_n\) in the (n, n) entry, we delete the last row and column to obtain \(\bar{J}_{n-1}\).
To accomplish the modification of the weight function by dividing by \((1+x)\), we can proceed in a similar manner. We set \(\bar{T}_{n-1} = I + \bar{J}_{n-1}\), and then solve the \((n-1,n-1)\) entry of the matrix equation
for \(\ell _{n-1,n-1}^2\), where \(\hat{\delta }_{n-2}\) can be computed using (60).
After computing the factorization
we finally obtain
and delete the last row and column to obtain \(\hat{J}_{n-2}\).
To overcome the obstacle that \(\bar{\delta }_{n-1}\) is unknown, we note that the correct value of the \((n-2,n-2)\) entry of \(\hat{J}_{n-2}\) is known: while it could be obtained using (60), in this case it follows from the even/odd symmetry of the polynomials that its value must be zero. We therefore solve the nonlinear equation
where \(F(\delta )\) is the \((n-2,n-2)\) entry of \(\hat{J}_{n-2}\) obtained from \(\tilde{J}_n\) using the above procedure, with \(\bar{\delta }_{n-1} = \delta \).
This equation can be solved using various root-finding methods, such as the secant method. By applying the quadratic formula in solving (61), it can be determined that the solution must lie in (0, 1/2]. Choosing initial guesses close to the upper bound of 1/2 yields rapid convergence. To improve efficiency, it should be noted that it is not necessary to compute any of the matrices in this algorithm in their entirety to obtain the \((n-2,n-2)\) entry of \(\hat{J}_{n-2}\); only a few entries from the lower right corner of each matrix are needed. As such, it is possible to solve for \(\bar{\delta }_{n-1}\) in O(1) arithmetic operations, and to compute \(\hat{J}_{n-2}\) in O(n) operations overall.
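A secant iteration for (61) might look as follows; here F denotes the map from a trial value of \(\bar{\delta }_{n-1}\) to the \((n-2,n-2)\) entry of \(\hat{J}_{n-2}\) produced by the procedure above (not reproduced here), and the starting guesses are taken near the upper bound 1/2 as suggested.
```python
def solve_delta(F, x0=0.49, x1=0.5, tol=1e-14, max_iter=50):
    """Secant iteration for F(delta) = 0, with the root known to lie in (0, 1/2]."""
    f0, f1 = F(x0), F(x1)
    for _ in range(max_iter):
        if f1 == f0:          # flat secant; stop
            break
        x0, x1, f0 = x1, x1 - f1 * (x1 - x0) / (f1 - f0), f1
        f1 = F(x1)
        if abs(x1 - x0) < tol:
            break
    return x1
```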
Conclusions
We have obtained recurrence relations for generating orthogonal polynomials on the interval \((-1,1)\) that satisfy the boundary conditions (1) \(p(1)=0\) and (2) \(p(-1)=p(1)=0\). These families of orthogonal polynomials can be used to easily implement transformation matrices between physical and frequency space for function spaces of interest for solving PDEs in polar and cylindrical geometries.
While these polynomials are orthogonal with respect to the weight function \(\omega (s)\equiv 1\), it has been shown that they can easily be modified to be orthogonal with respect to other weight functions. When modified in this way to obtain GJPs, recursion coefficients can be obtained with far greater efficiency than by computing the required inner products directly.
Future work includes the development of numerical methods that make use of these families of orthogonal polynomials, or modifications thereof. This includes the adaptation of Krylov subspace spectral methods (Palchak et al. 2015) to polar and cylindrical geometries (Richardson and Lambers 2017).
References
Burden RL, Faires JD (2005) Numerical analysis. Thomson Brooks/Cole, pp 500–501
Canuto C, Hussaini MY, Quarteroni A, Zang TA (1987) Spectral methods in fluid dynamics. Springer, Berlin
Eisen H, Heinrichs W, Witsch K (1991) Spectral collocation methods and polar coordinate singularities. J Comput Phys 96:241–257
Elhay S, Kautsky J (1994) Jacobi matrices for measures modified by a rational factor. Numer Algorithms 6(2):205–227
Fornberg B (1995) A pseudospectral approach for polar and spherical geometries. SIAM J Sci Comput 16:1071–1081
Gautschi W (2002) The interplay between classical analysis and (numerical) linear algebra - a tribute to Gene H. Golub. Electron Trans Numer Anal 13:119–147
Golub G, Kautsky J (1983) Calculation of Gauss quadratures with multiple free and fixed knots. Numer Math 41:147–163
Gottlieb D, Orszag SA (1977) Numerical analysis of spectral methods: theory and applications. SIAM-CBMS, Philadelphia, PA
Guo B-Y, Shen J, Wang L-L (2009) Generalized Jacobi polynomials/functions and their applications. Appl Numer Math 59:1011–1028
Huang W, Sloan DM (1993) Pole condition for singular problems: the pseudospectral approximation. J Comput Phys 107:254–261
Palchak EM, Cibotarica A, Lambers JV (2015) Solution of time-dependent PDE through rapid estimation of block Gaussian quadrature nodes. Linear Algebra Appl 468:233–259
Richardson M, Lambers JV (2017) Krylov subspace spectral methods for PDEs in polar and cylindrical geometries. In preparation
Shen J (1997) Efficient spectral-Galerkin methods III: polar and cylindrical geometries. SIAM J Sci Comput 18:1583–1604
Shen J (2003) A new dual-Petrov-Galerkin method for third and higher odd-order differential equations: application to the KdV equation. SIAM J Numer Anal 41(5):1595–1619
Authors' contributions
The first author carried out all mathematical manipulations and drafted the manuscript. The second author determined the mathematical tasks to be performed, provided guidance in their completion, and made revisions to verbiage and either statements or proofs of theoretical results as needed. Both authors have given final approval of this version of the manuscript to be published, and agree to be accountable for all aspects of this work. Both authors read and approved the final manuscript.
Acknowledgements
The authors would like to thank their department chair, Bernd Schroeder, for his support, and also the anonymous referees for their feedback that led to substantial improvement of this paper.
Competing interests
Both authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Richardson, M., Lambers, J.V. Recurrence relations for orthogonal polynomials for PDEs in polar and cylindrical geometries. SpringerPlus 5, 1567 (2016). https://doi.org/10.1186/s40064-016-3217-y
Keywords
- Spectral-Galerkin
- Polar coordinates
- Legendre polynomials