## SpringerPlus

Open Access

# Implementing a seventh-order linear multistep method in a predictor-corrector mode or block mode: which is more efficient for the general second order initial value problem

SpringerPlus 2014, 3:447

DOI: 10.1186/2193-1801-3-447

Accepted: 12 August 2014

Published: 20 August 2014

## Abstract

A Seventh-Order Linear Multistep Method (SOLMM) is developed and implemented in both predictor-corrector mode and block mode. The two approaches are compared by measuring their total number of function evaluations and CPU times. The stability property of the method is examined. This SOLMM is also compared with existing methods in the literature using standard numerical examples.

### Mathematics Subject Classification

65L05; 65L06

### Keywords

General second order; Initial value problems; Block form

## Introduction

Linear multistep methods (LMMs) of the form
$\sum_{j=0}^{k}\alpha_j y_{n+j}=h^2\sum_{j=0}^{k}\beta_j f_{n+j},\qquad k\ge 2,$
(1)
have been extensively applied to solve the special second order initial value problem (IVP)
$y^{\prime\prime}=f(t,y),\quad y(t_0)=y_0,\quad y^{\prime}(t_0)=y_0^{\prime},\qquad t\in[t_0,t_N],$
(2)
on the discrete set of points $t_n=t_0+nh$, $n=0,\dots,N$, $h=\frac{t_N-t_0}{N}$ (see Lambert and Watson (1976), Ramos and Vigo-Aguiar (2005), and Ixaru and Berghe (2004)). Despite the successful application of (1) to problems of the form (2), fewer methods of the form (1) have been proposed for solving the general second order IVP
$y^{\prime\prime}=f(t,y,y^{\prime}),\quad y(t_0)=y_0,\quad y^{\prime}(t_0)=y_0^{\prime}.$
(3)

Some of the methods available for directly solving (3) are due to Awoyemi (2001) and Vigo-Aguiar and Ramos (2006). These methods are generally implemented in a step-by-step fashion in a predictor-corrector mode.

In this paper, we construct the continuous form of (1), which has the ability to generate several methods that are combined and implemented in block form to solve (3) directly (see Jator and Li (2009) and Jator (2007, 2010, 2012)).

The paper is organized as follows. In Section ‘SOLMM’, we derive a continuous approximation which is used to obtain the discrete methods that are combined to form the block method. The analysis and computational aspects of the SOLMM are given in Section ‘Implementation of the SOLMM’. Numerical examples are given in Section ‘Numerical examples’ to show the accuracy and efficiency of the method. Finally, conclusions are drawn in Section ‘Conclusion’.

## SOLMM

### Continuous form

On the interval $[t_n, t_{n+6}]$, the exact solution to (3) is approximated by the continuous form of the SOLMM
$u(t)=\sum_{j=0}^{1}\alpha_j(t)y_{n+j}+h^2\sum_{j=0}^{6}\beta_j(t)f_{n+j},$
(4)
whose first derivative is given by
$u^{\prime}(t)=\frac{d}{dt}\left\{\sum_{j=0}^{1}\alpha_j(t)y_{n+j}+h^2\sum_{j=0}^{6}\beta_j(t)f_{n+j}\right\},$
(5)

where $\alpha_0(t)$, $\alpha_1(t)$, and $\beta_j(t)$, $j=0,1,\dots,6$, are continuous coefficients that are uniquely determined. We assume that $y_{n+j}$ is the numerical approximation to the analytical solution $y(t_{n+j})$, $y_{n+j}^{\prime}$ is an approximation to $y^{\prime}(t_{n+j})$, and $f_{n+j}=f(t_{n+j},y_{n+j},y_{n+j}^{\prime})$, $j=0,1,\dots,6$, is supplied by the differential equation. The coefficients of the method (4) are specified by the following theorem.

#### Theorem 1

In order to obtain the coefficients of the continuous method (4), a nine-by-nine system is solved with the aid of Mathematica by demanding that the following conditions are satisfied:
$u(t_{n+j})=y_{n+j},\qquad j=0,1,$
$u^{\prime\prime}(t_{n+j})=f_{n+j},\qquad j=0,1,\dots,6.$
After some algebraic manipulations, the equivalent form (6) produces the coefficients of (4) whose first derivative is given by (7),
$u(t)=\sum_{j=0}^{8}\frac{\det(W_j)}{\det(W)}P_j(t),$
(6)
$u^{\prime}(t)=\frac{d}{dt}\left(\sum_{j=0}^{8}\frac{\det(W_j)}{\det(W)}P_j(t)\right),$
(7)
where we define the matrix W as
$W=\begin{pmatrix}P_0(t_n)&\cdots&P_8(t_n)\\ P_0(t_{n+1})&\cdots&P_8(t_{n+1})\\ P_0^{\prime\prime}(t_n)&\cdots&P_8^{\prime\prime}(t_n)\\ P_0^{\prime\prime}(t_{n+1})&\cdots&P_8^{\prime\prime}(t_{n+1})\\ \vdots& &\vdots\\ P_0^{\prime\prime}(t_{n+6})&\cdots&P_8^{\prime\prime}(t_{n+6})\end{pmatrix},$

and $W_j$ is obtained by replacing the $j$th column of $W$ by $V$; $P_j(t)=t^j$, $j=0,\dots,8$, are basis functions, and $V$ is the vector $V=(y_n,y_{n+1},f_n,f_{n+1},\dots,f_{n+6})^T$, where $T$ denotes the transpose.

#### Proof

See Jator (2012).
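As an illustration of Theorem 1, the nine-by-nine collocation system can be set up and solved directly. The sketch below (Python with exact rational arithmetic rather than the authors' Mathematica derivation; $t_n=0$ and $h=1$ are assumed for readability) verifies that the collocating polynomial reproduces any degree-8 polynomial and recovers the $f_n$-weight $4315/60480$ of the first member of (8):

```python
from fractions import Fraction as Fr

def solve_exact(A, b):
    """Gauss-Jordan elimination over exact rationals."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)   # nonzero pivot
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                fac = M[r][c] / M[c][c]
                M[r] = [a - fac * x for a, x in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def u_at(t, y0, y1, f):
    """u(t) of Theorem 1 with t_n = 0, h = 1: u(0) = y0, u(1) = y1,
    u''(j) = f[j] for j = 0..6, monomial basis P_j(t) = t^j."""
    W  = [[Fr(s) ** j for j in range(9)] for s in (0, 1)]        # value rows
    W += [[j * (j - 1) * Fr(s) ** (j - 2) if j >= 2 else Fr(0)
           for j in range(9)] for s in range(7)]                 # u'' rows
    c = solve_exact(W, [Fr(v) for v in (y0, y1, *f)])
    return sum(cj * Fr(t) ** j for j, cj in enumerate(c))

# u reproduces any polynomial of degree <= 8 exactly, e.g. y(t) = t^8:
print(u_at(2, 0, 1, [56 * s ** 6 for s in range(7)]))   # 256 = 2^8
# Unit data f_n = 1 (all else 0) recovers the f_n-weight of the first
# formula in (8): y_{n+2} = 2y_{n+1} - y_n + (4315/60480) h^2 f_n + ...
print(u_at(2, 0, 0, [1, 0, 0, 0, 0, 0, 0]))             # 4315/60480 = 863/12096
```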

### Discrete by-products

The following methods, which are used to construct the block form, are obtained by evaluating (4) at $t\in\{t_{n+2},t_{n+3},t_{n+4},t_{n+5},t_{n+6}\}$ and (5) at $t\in\{t_n,t_{n+1},t_{n+2},t_{n+3},t_{n+4},t_{n+5},t_{n+6}\}$, respectively.
$\left\{\begin{array}{l}y_{n+2}-2y_{n+1}+y_n=\frac{h^2}{60480}\left(4315f_n+53994f_{n+1}-2307f_{n+2}+7948f_{n+3}-4827f_{n+4}+1578f_{n+5}-221f_{n+6}\right)\\ y_{n+3}-3y_{n+1}+2y_n=\frac{h^2}{20160}\left(2803f_n+37950f_{n+1}+14913f_{n+2}+7108f_{n+3}-3147f_{n+4}+990f_{n+5}-137f_{n+6}\right)\\ y_{n+4}-4y_{n+1}+3y_n=\frac{h^2}{10080}\left(2089f_n+28878f_{n+1}+16383f_{n+2}+13828f_{n+3}-1257f_{n+4}+654f_{n+5}-95f_{n+6}\right)\\ y_{n+5}-5y_{n+1}+4y_n=\frac{h^2}{6048}\left(1669f_n+23250f_{n+1}+15207f_{n+2}+15004f_{n+3}+4371f_{n+4}+1074f_{n+5}-95f_{n+6}\right)\\ y_{n+6}-6y_{n+1}+5y_n=\frac{h^2}{4032}\left(1375f_n+19554f_{n+1}+13401f_{n+2}+15004f_{n+3}+6177f_{n+4}+4770f_{n+5}+199f_{n+6}\right)\end{array}\right.$
(8)
The derivatives are given by
$\left\{\begin{array}{l}hy_n^{\prime}=-y_n+y_{n+1}+\frac{h^2}{120960}\left(-28549f_n-57750f_{n+1}+51453f_{n+2}-42484f_{n+3}+23109f_{n+4}-7254f_{n+5}+995f_{n+6}\right)\\ hy_{n+1}^{\prime}=-y_n+y_{n+1}+\frac{h^2}{120960}\left(9625f_n+72474f_{n+1}-41469f_{n+2}+32524f_{n+3}-17313f_{n+4}+5370f_{n+5}-731f_{n+6}\right)\\ hy_{n+2}^{\prime}=-y_n+y_{n+1}+\frac{h^2}{40320}\left(2633f_n+40910f_{n+1}+17503f_{n+2}+4f_{n+3}-905f_{n+4}+398f_{n+5}-63f_{n+6}\right)\\ hy_{n+3}^{\prime}=-y_n+y_{n+1}+\frac{h^2}{120960}\left(8441f_n+117210f_{n+1}+114147f_{n+2}+75020f_{n+3}-16257f_{n+4}+4410f_{n+5}-571f_{n+6}\right)\\ hy_{n+4}^{\prime}=-y_n+y_{n+1}+\frac{h^2}{120960}\left(8059f_n+120426f_{n+1}+100605f_{n+2}+150028f_{n+3}+45381f_{n+4}-1110f_{n+5}-29f_{n+6}\right)\\ hy_{n+5}^{\prime}=-y_n+y_{n+1}+\frac{h^2}{40320}\left(2867f_n+38750f_{n+1}+38401f_{n+2}+39172f_{n+3}+46453f_{n+4}+16382f_{n+5}-585f_{n+6}\right)\\ hy_{n+6}^{\prime}=-y_n+y_{n+1}+\frac{h^2}{120960}\left(6875f_n+128874f_{n+1}+74781f_{n+2}+192524f_{n+3}+46437f_{n+4}+179370f_{n+5}+36419f_{n+6}\right)\end{array}\right.$
(9)

### Block form

The methods (8) and (9) are combined and expressed in the form
$A_1\mathbf{Y}_{\mu+1}=A_0\mathbf{Y}_{\mu}+h^2B_0\mathbf{F}_{\mu}+h^2B_1\mathbf{F}_{\mu+1},\qquad \mu=0,1,\dots,$
(10)
where
$\mathbf{Y}_{\mu+1}=\left(y_{n+1},\dots,y_{n+6},hy_{n+1}^{\prime},\dots,hy_{n+6}^{\prime}\right)^T,$
$\mathbf{F}_{\mu+1}=\left(f_{n+1},\dots,f_{n+6},hf_{n+1}^{\prime},\dots,hf_{n+6}^{\prime}\right)^T,$
$\mathbf{Y}_{\mu}=\left(y_{n-5},y_{n-4},\dots,y_n,hy_{n-5}^{\prime},hy_{n-4}^{\prime},\dots,hy_n^{\prime}\right)^T,$
$\mathbf{F}_{\mu}=\left(f_{n-5},f_{n-4},\dots,f_n,hf_{n-5}^{\prime},hf_{n-4}^{\prime},\dots,hf_n^{\prime}\right)^T,$

and $A_0$, $A_1$, $B_0$, and $B_1$ are $12\times 12$ matrices whose entries, denoted by $\alpha_{i,j}$ and $\beta_{i,j}$, $i=1,\dots,12$, are given by the coefficients of (8) and (9).
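The block advance defined by (10) is implicit, since $\mathbf{F}_{\mu+1}$ depends on $\mathbf{Y}_{\mu+1}$. A minimal sketch of one block step, assuming the nonlinearity is mild enough for fixed-point iteration to converge; the matrix names mirror (10), but the tiny 1×1 "block" in the demo is purely illustrative, not the actual 12×12 SOLMM matrices:

```python
import numpy as np

def block_step(A1, A0, B0, B1, Y, F, f_block, h, tol=1e-12, itmax=100):
    """Solve A1 Y_new = A0 Y + h^2 B0 F + h^2 B1 f_block(Y_new), cf. (10),
    by fixed-point iteration on the implicit term."""
    rhs = A0 @ Y + h**2 * (B0 @ F)
    Y_new = np.linalg.solve(A1, rhs)      # start by ignoring the implicit part
    for _ in range(itmax):
        Y_next = np.linalg.solve(A1, rhs + h**2 * (B1 @ f_block(Y_new)))
        if np.max(np.abs(Y_next - Y_new)) < tol:
            return Y_next
        Y_new = Y_next
    return Y_new

# Illustrative scalar "block" with f(y) = -y; the fixed point has the
# closed form (1 - 0.5 h^2) / (1 + 0.5 h^2) for this linear problem.
I = np.array([[1.0]])
Y = block_step(I, I, 0.5 * I, 0.5 * I, np.array([1.0]), np.array([-1.0]),
               lambda Y: -Y, 0.1)
print(Y[0])    # (1 - 0.005) / (1 + 0.005)
```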

### Order and local truncation error

Define the local truncation error of (10) as
$\mathcal{L}\left[z(t);h\right]=\mathbf{Z}_{\mu+1}-A_1^{-1}\left[A_0\mathbf{Z}_{\mu}+h^2B_0\overline{\mathbf{F}}_{\mu}+h^2B_1\overline{\mathbf{F}}_{\mu+1}\right],$
(11)
where
$\mathbf{Z}_{\mu+1}=\left(y(t_{n+1}),\dots,y(t_{n+6}),hy^{\prime}(t_{n+1}),\dots,hy^{\prime}(t_{n+6})\right)^T,$
$\overline{\mathbf{F}}_{\mu+1}=\left(f\left(t_{n+1},y(t_{n+1}),y^{\prime}(t_{n+1})\right),\dots,f\left(t_{n+6},y(t_{n+6}),y^{\prime}(t_{n+6})\right),\right.$
$\left.\qquad hf^{\prime}\left(t_{n+1},y(t_{n+1}),y^{\prime}(t_{n+1})\right),\dots,hf^{\prime}\left(t_{n+6},y(t_{n+6}),y^{\prime}(t_{n+6})\right)\right)^T,$
$\mathbf{Z}_{\mu}=\left(y(t_{n-5}),y(t_{n-4}),\dots,y(t_n),hy^{\prime}(t_{n-5}),hy^{\prime}(t_{n-4}),\dots,hy^{\prime}(t_n)\right)^T,$
$\overline{\mathbf{F}}_{\mu}=\left(f\left(t_{n-5},y(t_{n-5}),y^{\prime}(t_{n-5})\right),\dots,f\left(t_n,y(t_n),y^{\prime}(t_n)\right),\right.$
$\left.\qquad hf^{\prime}\left(t_{n-5},y(t_{n-5}),y^{\prime}(t_{n-5})\right),\dots,hf^{\prime}\left(t_n,y(t_n),y^{\prime}(t_n)\right)\right)^T,$

and $\mathcal{L}[z(t);h]=\left(\mathcal{L}_1[z(t);h],\dots,\mathcal{L}_6[z(t);h],\mathcal{L}_7[hz(t);h],\dots,\mathcal{L}_{12}[hz(t);h]\right)^T$ is a linear difference operator.

Assuming that $z(t)$ is sufficiently differentiable, we can expand the terms in (11) in a Taylor series about the point $t$ to obtain the following expression for the local truncation error:
$\mathcal{L}\left[z(t);h\right]=C_0z(t)+C_1hz^{\prime}(t)+\cdots+C_qh^qz^{(q)}(t)+\cdots,$
(12)
where the constant coefficients $C_q=(C_{1,q},C_{2,q},\dots,C_{12,q})^T$, $q=0,1,\dots$, are given as follows (the $\beta_j$ sum is taken from $j=0$, with the convention $0^0=1$):
$C_0=\sum_{j=0}^{6}\alpha_j,$
$C_1=\sum_{j=1}^{6}j\alpha_j,$
$\vdots$
$C_q=\frac{1}{q!}\left[\sum_{j=1}^{6}j^q\alpha_j-q(q-1)\sum_{j=0}^{6}j^{q-2}\beta_j\right].$
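As a sanity check of these order conditions, the coefficients of the first member of (8) can be substituted directly. In the sketch below (exact rational arithmetic; the $\beta$ sum is taken from $j=0$ with the convention $0^0=1$), the coefficients $C_0,\dots,C_8$ all vanish and the first nonvanishing coefficient is $C_9=19/6048$:

```python
from fractions import Fraction as Fr
from math import factorial

# First member of (8): y_{n+2} - 2 y_{n+1} + y_n = h^2 sum_j beta_j f_{n+j}
alpha = {0: Fr(1), 1: Fr(-2), 2: Fr(1)}
beta  = {j: Fr(c, 60480) for j, c in
         enumerate([4315, 53994, -2307, 7948, -4827, 1578, -221])}

def C(q):
    """Taylor coefficient C_q of the difference operator (12)."""
    s = sum(j ** q * a for j, a in alpha.items())    # Python: 0**0 == 1
    if q >= 2:
        s -= q * (q - 1) * sum(j ** (q - 2) * b for j, b in beta.items())
    return s / factorial(q)

print([C(q) for q in range(10)])
# C_0 = ... = C_8 = 0 and C_9 = 19/6048 != 0: this member has order 7.
```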

#### Definition 1

Let $p_j,p_j^{\prime}$, $j=1,\dots,6$, be positive integers. Then the block method (10) has algebraic order $p=\min\{p_1,\dots,p_6,p_1^{\prime},\dots,p_6^{\prime}\}$, $p>1$, provided there exists a corresponding constant $C_{p+2}$ such that the local truncation error $E_{\mu}$ satisfies
$\parallel {E}_{\mu }\parallel ={C}_{p+2}{h}^{p+2}+O\left({h}^{p+3}\right)$

where $\parallel\cdot\parallel$ is the maximum norm.

#### Definition 2

The block method (10) is said to be consistent if it has order at least one.

The block method (10) has order $p=6$ and error constant given by ${C}_{p+2}=\parallel {\left(\frac{19}{6048},\frac{349}{60480},\frac{127}{15120},\frac{349}{30240},\frac{349}{30240},-\frac{6031}{907200},\frac{8563}{1814400},\frac{6163}{1814400},\frac{6163}{1814400},\frac{1649}{907200},\frac{8563}{1814400},-\frac{6031}{907200}\right)}^{T}\parallel$.

### Linear stability of the SOLMM

The linear stability of the SOLMM is discussed by applying the method to the test equation $y^{\prime\prime}=\lambda y$, where $\lambda$ is expected to run through the (negative) eigenvalues of the Jacobian matrix $\frac{\partial f}{\partial y}$ (see Sommeijer (1993)). Letting $q=\lambda h^2$, it is easily shown that applying (10) to the test equation yields
$\mathbf{Y}_{\mu+1}=M(q)\mathbf{Y}_{\mu},\qquad M(q)={\left(A_1-qB_1\right)}^{-1}\left(A_0+qB_0\right),$
(13)

where the matrix M(q) is the amplification matrix which determines the stability of the method.

#### Definition 3

The interval $[-q_0,0]$ is the stability interval if $\rho(q)\le 1$ for all $q$ in this interval, where $\rho(q)$ is the spectral radius of $M(q)$ and $q_0$ is the stability boundary (see Sommeijer (1993)).

#### Remark 1

We found that $\rho(q)\le 1$ for $q\in[-4.552,0]$; hence, for the SOLMM, $q_0=4.552$.
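Reproducing the 12×12 amplification matrix here would require all of (8)-(9), so the sketch below illustrates the boundary scan behind Remark 1 on the simple Störmer/Verlet method $y_{n+2}-2y_{n+1}+y_n=h^2f_{n+1}$ instead (not the SOLMM), whose known stability boundary is $q_0=4$:

```python
import numpy as np

def M(q):
    """Amplification matrix of Stormer/Verlet for y'' = lambda*y, q = lambda*h^2:
    (y_{n+2}, y_{n+1})^T = M(q) (y_{n+1}, y_n)^T."""
    return np.array([[2.0 + q, -1.0], [1.0, 0.0]])

def rho(q):
    """Spectral radius of M(q)."""
    return max(abs(np.linalg.eigvals(M(q))))

# Scan q <= 0 and record where rho(q) <= 1 (small tolerance for roundoff
# at the double eigenvalue that occurs on the boundary).
qs = np.linspace(-6.0, 0.0, 601)
stable = [q for q in qs if rho(q) <= 1 + 1e-6]
print(min(stable))    # ~ -4, i.e. q0 = 4 for Verlet
```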

## Implementation of the SOLMM

The SOLMM was implemented in both block mode and predictor-corrector mode using a code written in the Perl programming language, executed on a laptop computer with an AMD Quad-Core A10-4600M processor, 8 GB of RAM and Windows 8.1. The total program running times were acceptable, as shown in Tables 1, 2 and 3. The time and space complexity of the algorithms for both modes of the SOLMM used for the examples in this paper are polynomial. Details of the block mode implementation are given in Jator (2012); the predictor-corrector implementation is discussed next.

### Predictor-corrector mode algorithm

The initial block was used to start the predictor-corrector algorithm, after which the predictor (14) and the corrector (15) were used in a step-by-step fashion to provide the numerical solution from the second block to the end of the interval.
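As a runnable illustration of the step-by-step PECE pattern described above (not the SOLMM pair (14)-(15) itself), the sketch below pairs an explicit Störmer predictor with one Numerov correction for the special problem $y^{\prime\prime}=f(t,y)$:

```python
import math

def pece(f, t0, y0, y1, h, nsteps):
    """Predict-Evaluate-Correct-Evaluate loop for y'' = f(t, y), given two
    starting values; a low-order analogue of the SOLMM's PECE mode."""
    ys = [y0, y1]
    for n in range(1, nsteps):
        tn = t0 + n * h
        fm, fn = f(tn - h, ys[-2]), f(tn, ys[-1])
        yp = 2 * ys[-1] - ys[-2] + h**2 * fn                 # P: Stormer predictor
        fp = f(tn + h, yp)                                   # E: evaluate
        yc = 2 * ys[-1] - ys[-2] + h**2 / 12 * (fp + 10 * fn + fm)  # C: Numerov
        ys.append(yc)                                        # final E happens next step
    return ys

# y'' = -y, y(0) = 1, y'(0) = 0  =>  y = cos t; exact second starting value.
h = 0.01
ys = pece(lambda t, y: -y, 0.0, 1.0, math.cos(h), h, 100)
print(abs(ys[-1] - math.cos(1.0)))   # small global error at t = 1
```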

Predictors. The following predictors are derived via Theorem 1 by deleting the last row and column of the matrix $W$.
(14)
Correctors. The last members of (8) and (9) are used as correctors.
(15)

## Numerical examples

### Example 1

We consider the IVP given by
$y^{\prime\prime}-4y^{\prime}+8y=t^3,\quad y(0)=2,\quad y^{\prime}(0)=4,\qquad t\in[0,1],$
$\text{Exact}:\ y(t)=e^{2t}\left(2\cos(2t)-\frac{3}{64}\sin(2t)\right)+\frac{3}{32}t+\frac{3}{16}t^2+\frac{1}{8}t^3.$
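A quick numerical sanity check (central finite differences; step $d$ chosen to balance truncation and roundoff) that the stated exact solution satisfies the ODE and the initial conditions:

```python
import math

def y(t):
    """Stated exact solution of Example 1."""
    return (math.exp(2 * t) * (2 * math.cos(2 * t) - 3 / 64 * math.sin(2 * t))
            + 3 / 32 * t + 3 / 16 * t**2 + 1 / 8 * t**3)

def residual(t, d=1e-4):
    """|y'' - 4y' + 8y - t^3| approximated by central differences."""
    y1 = (y(t + d) - y(t - d)) / (2 * d)
    y2 = (y(t + d) - 2 * y(t) + y(t - d)) / d**2
    return y2 - 4 * y1 + 8 * y(t) - t**3

print(y(0.0))                                            # 2.0
print(max(abs(residual(t / 10)) for t in range(1, 10)))  # ~ 0
```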

### Example 2

We consider Bessel's IVP, solved on $[1,8]$ (see Vigo-Aguiar and Ramos (2006)).
$t^2y^{\prime\prime}+ty^{\prime}+\left(t^2-0.25\right)y=0,\qquad y(1)=\sqrt{\frac{2}{\pi}}\sin 1\simeq 0.6713967071418031,$
$y^{\prime}(1)=\left(2\cos 1-\sin 1\right)/\sqrt{2\pi}\simeq 0.0954005144474746,$
$\text{Exact}:\ y(t)=J_{1/2}(t)=\sqrt{\frac{2}{\pi t}}\sin t.$

The theoretical solution at t=8 is $y\left(8\right)=\sqrt{\frac{2}{8\pi }}sin\left(8\right)\simeq 0.279092789108058969$.

### Example 3

We consider the nonlinear Fehlberg problem which was also solved in Sommeijer (1993).
$y_1^{\prime\prime}=-4t^2y_1-\frac{2}{\sqrt{y_1^2+y_2^2}}y_2,\qquad y_2^{\prime\prime}=\frac{2}{\sqrt{y_1^2+y_2^2}}y_1-4t^2y_2,$
$y_1\left(\sqrt{\tfrac{\pi}{2}}\right)=0,\quad y_1^{\prime}\left(\sqrt{\tfrac{\pi}{2}}\right)=-2\sqrt{\tfrac{\pi}{2}},\quad y_2\left(\sqrt{\tfrac{\pi}{2}}\right)=1,\quad y_2^{\prime}\left(\sqrt{\tfrac{\pi}{2}}\right)=0,$
$\text{Exact}:\ y_1(t)=\cos\left(t^2\right),\qquad y_2(t)=\sin\left(t^2\right).$
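On the stated solution $y_1^2+y_2^2=1$, so the square-root term reduces to 1; a finite-difference residual check of both components confirms this:

```python
import math

def resid(t, d=1e-4):
    """Max residual of the Fehlberg system along y1 = cos(t^2), y2 = sin(t^2),
    where the norm term sqrt(y1^2 + y2^2) equals 1 on the solution."""
    y1 = lambda s: math.cos(s * s)
    y2 = lambda s: math.sin(s * s)
    d2 = lambda g: (g(t + d) - 2 * g(t) + g(t - d)) / d**2   # central y''
    r1 = d2(y1) - (-4 * t**2 * y1(t) - 2 * y2(t))
    r2 = d2(y2) - (2 * y1(t) - 4 * t**2 * y2(t))
    return max(abs(r1), abs(r2))

t0 = math.sqrt(math.pi / 2)
print(max(resid(t0 + k / 10) for k in range(11)))   # ~ 0 on [t0, t0 + 1]
```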

### Comparison of block mode and predictor-corrector mode

The SOLMM is implemented in both predictor-corrector and block modes. The two approaches are compared by measuring their total number of function evaluations (NFEs) and CPU times in seconds. The block mode implementation is shown to be superior to the predictor-corrector mode implementation in terms of accuracy and the number of function evaluations. However, the predictor-corrector mode implementation uses less time than the block implementation. Details of the numerical examples are displayed in Tables 1, 2 and 3.

### Comparison of block method with other methods

The errors in the solution were obtained at $t=8$ using the SOLMM of order 7 and compared with the errors in Vigo-Aguiar and Ramos (2006), which are based on the variable-step Falkner method of order eight (VAR(8)) implemented in predictor-corrector mode. The results given in Table 4 show that the SOLMM is more accurate than the method in Vigo-Aguiar and Ramos (2006).
The maximum norm of the global error for the $y$-component is given in the form $10^{-CD}$, where $CD$ denotes the number of correct decimal digits at the endpoint (see Sommeijer (1993)). This problem was also solved in Sommeijer (1993) using the eighth-order, eight-stage RKN method (H8) constructed by Hairer (1977). We have chosen to compare this method of order 8 with our method of order 7 because the orders of the two methods are close. The results obtained using H8 are reproduced in Table 5 and compared with the results given by our method. It is seen from Table 5 that our method performs better than those in Sommeijer (1993) in terms of accuracy (smaller errors) and efficiency (fewer NFEs).
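The $CD$ measure converts a maximum global error into correct decimal digits via $CD=-\log_{10}(\text{error})$; a one-line helper makes the convention concrete:

```python
import math

def cd(max_err):
    """Correct decimal digits: an error of 10^(-CD) gives CD = -log10(error)."""
    return -math.log10(max_err)

print(cd(1e-6))               # 6.0
print(round(cd(3.2e-8), 2))   # 7.49
```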

## Conclusion

A SOLMM is proposed and implemented in both predictor-corrector and block modes. It is shown that the block mode algorithm is superior to the predictor-corrector mode algorithm in terms of accuracy and the number of function evaluations. However, the predictor-corrector mode implementation uses less time than the block implementation. Details of the comparison on the numerical examples are displayed in Tables 1, 2, 3, 4 and 5. Our future research will focus on developing a variable-step version of the SOLMM in both modes.

## Authors’ Affiliations

(1)
Department of Mathematics and Statistics, Austin Peay State University
(2)
Department of Computer Science and Information Technology, Austin Peay State University

## References

1. Awoyemi DO: A new sixth-order algorithm for general second order ordinary differential equation. Int J Comput Math 2001, 77: 117-124. 10.1080/00207160108805054
2. Hairer E: Méthodes de Nyström pour l’équation différentielle y′′ = f(x, y). Numer Math 1977, 25: 283-300.
3. Ixaru L, Berghe GV: Exponential fitting. Kluwer, Dordrecht, Netherlands; 2004.
4. Jator SN: A continuous two-step method of order 8 with a block extension for y′′ = f(x, y, y′). Appl Math Comput 2012, 219: 781-791. 10.1016/j.amc.2012.06.027
5. Jator SN: Solving second order initial value problems by a hybrid multistep method without predictors. Appl Math Comput 2010, 217: 4036-4046. 10.1016/j.amc.2010.10.010
6. Jator SN, Li J: A self-starting linear multistep method for a direct solution of the general second order initial value problem. Intern J Comput Math 2009, 86: 827-836. 10.1080/00207160701708250
7. Jator SN: A sixth order linear multistep method for the direct solution of y′′ = f(x, y, y′). Intern J Pure Appl Math 2007, 40: 457-472.
8. Lambert JD, Watson A: Symmetric multistep methods for periodic initial value problems. J Inst Math Appl 1976, 18: 189-202. 10.1093/imamat/18.2.189
9. Ramos H, Vigo-Aguiar J: Variable stepsize Störmer-Cowell methods. Math Comput Model 2005, 42: 837-846. 10.1016/j.mcm.2005.09.011
10. Sommeijer BP: Explicit, high-order Runge-Kutta-Nyström methods for parallel computers. Appl Numer Math 1993, 13: 221-240. 10.1016/0168-9274(93)90145-H
11. Vigo-Aguiar J, Ramos H: Variable stepsize implementation of multistep methods for y′′ = f(x, y, y′). J Comput Appl Math 2006, 192: 114-131. 10.1016/j.cam.2005.04.043