
This notebook contains an excerpt from Python Programming and Numerical Methods - A Guide for Engineers and Scientists; the content is also available at Berkeley Python Numerical Methods.

The copyright of the book belongs to Elsevier. We also have this interactive book online for a better learning experience. The code is released under the MIT license. If you find this content useful, please consider supporting the work on Elsevier or Amazon!


Predictor-Corrector and Runge Kutta Methods

Predictor-Corrector Methods

Given any time and state value, the function \(F(t, S(t))\) returns the change of state \(\frac{dS(t)}{dt}\). Predictor-corrector methods of solving initial value problems improve the approximation accuracy over non-predictor-corrector methods by querying the \(F\) function several times at different locations (predictions), and then using a weighted average of the results (corrections) to update the state. Essentially, these methods use two formulas: a predictor and a corrector. The predictor is an explicit formula that first estimates the solution at \(t_{j+1}\); we can use the Euler method or some other method for this step. After we obtain the estimate of \(S(t_{j+1})\), we apply the corrector to improve the accuracy. Using the predicted \(S(t_{j+1})\) on the right-hand side of an otherwise implicit formula, the corrector computes a new, more accurate solution.

The midpoint method has a predictor step:

\[ S\left(t_{j} + \frac{h}{2}\right) = S(t_j) + \frac{h}{2}F(t_j, S(t_j)), \]

which is the prediction of the solution value halfway between \(t_j\) and \(t_{j+1}\).

It then computes the corrector step:

\[ S(t_{j+1}) = S(t_j) + hF\left(t_j + \frac{h}{2}, S\left(t_{j} + \frac{h}{2}\right)\right) \]

which computes the solution \(S(t_{j+1})\) from \(S(t_j)\), but using the derivative evaluated at \(S\left(t_{j} + \frac{h}{2}\right)\).
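As a quick illustration, the midpoint predictor-corrector can be coded in a few lines. The sketch below is our own, not from the book; the function name `midpoint_method` and the test problem \(dS/dt = -S\) are chosen purely for demonstration.

```python
import numpy as np

def midpoint_method(F, t, s0):
    """Integrate dS/dt = F(t, S) on the grid t using the explicit
    midpoint (predictor-corrector) method, starting from S(t[0]) = s0."""
    S = np.zeros(len(t))
    S[0] = s0
    for j in range(len(t) - 1):
        h = t[j + 1] - t[j]
        # predictor: estimate the state at the midpoint t_j + h/2
        S_mid = S[j] + (h / 2) * F(t[j], S[j])
        # corrector: take the full step using the slope at the midpoint
        S[j + 1] = S[j] + h * F(t[j] + h / 2, S_mid)
    return S

# hypothetical test problem: dS/dt = -S, S(0) = 1, exact solution exp(-t)
t = np.linspace(0, 2, 21)
S = midpoint_method(lambda t, s: -s, t, 1.0)
print(np.max(np.abs(S - np.exp(-t))))  # small error, decreasing as O(h^2)
```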

Runge Kutta Methods

Runge Kutta (RK) methods are one of the most widely used methods for solving ODEs. Recall that the Euler method uses the first two terms of the Taylor series to approximate the numerical integration, which is linear: \(S(t_{j+1}) = S(t_j + h) = S(t_j) + h \cdot S'(t_j)\).

We can greatly improve the accuracy of numerical integration if we keep more terms of the series in

\[S(t_{j+1}) = S(t_j + h) = S(t_j) + S'(t_j)h + \frac{1}{2!}S''(t_j)h^2 + \cdots + \frac{1}{n!}S^{(n)}(t_j)h^n \label{eq:taylor} \tag{1}\]

In order to get this more accurate solution, we would need to derive the expressions for \(S''(t_j), S'''(t_j), \cdots, S^{(n)}(t_j)\). This extra work can be avoided using the RK methods, which are based on truncated Taylor series but do not require computation of these higher derivatives.

Second-order Runge Kutta method

Let us first derive the second-order RK method. Let \(\frac{dS(t)}{dt} = F(t,S(t))\); then we can assume an integration formula of the form

\[ S(t + h) = S(t) + c_1F(t, S(t))h + c_2F[t+ph, S(t)+qhF(t, S(t))]h \label{eq:integration} \tag{2}\]

We can attempt to find these parameters \(c_1, c_2, p, q\) by matching the above equation to the second-order Taylor series, which gives us

\[ S(t + h) = S(t) + S'(t)h + \frac{1}{2!}S''(t)h^2 = S(t) + F(t, S(t))h + \frac{1}{2!}F'(t, S(t))h^2 \label{eq:matching} \tag{3}\]

Noting that

\[F'(t, S(t)) = \frac{\partial F}{\partial t} + \frac{\partial F}{\partial S}\frac{\partial S}{\partial t} = \frac{\partial F}{\partial t} + \frac{\partial F}{\partial S}F,\]

Therefore, equation \(\eqref{eq:matching}\) can be written as:

\[ S(t + h) = S + Fh + \frac{1}{2!}\Big(\frac{\partial F}{\partial t} + \frac{\partial F}{\partial S}F\Big)h^2 \label{eq:rewrite-matching} \tag{4}\]

In equation \(\eqref{eq:integration}\), we can rewrite the last term by applying Taylor series in several variables, which gives us:

\[F[t+ph, S+qhF] = F + \frac{\partial F}{\partial t}ph + qh\frac{\partial F}{\partial S}F\]

thus equation \(\eqref{eq:integration}\) becomes:

\[ S(t + h) = S + (c_1+c_2)Fh + c_2\Big[ \frac{\partial F}{\partial t}p + q\frac{\partial F}{\partial S}F \Big]h^2 \label{eq:rewrite-integration} \tag{5}\]

Comparing equation \(\eqref{eq:rewrite-matching}\) and \(\eqref{eq:rewrite-integration}\), we can easily obtain:

\[c_1 + c_2 = 1, \space c_2p=\frac{1}{2}, \space c_2q=\frac{1}{2} \label{eq:result} \tag{6}\]

Because \(\eqref{eq:result}\) has four unknowns and only three equations, we can assign a value to one of the parameters and solve for the remaining three. One popular choice is:

\[c_1 =\frac{1}{2}, \space c_2 =\frac{1}{2}, \space p =1, \space q=1\]

We can also define:

\[\begin{split} \begin{eqnarray*} k_1 & = & F(t_j,S(t_j))\\ k_2 & = & F\left(t_j+ph, S(t_j)+qhk_1\right) \end{eqnarray*} \end{split}\]

where we will have:

\[ S(t_{j+1}) = S(t_j) + \frac{1}{2}(k_1+k_2)h \]
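With the popular choice \(c_1 = c_2 = \frac{1}{2},\ p = q = 1\), this update translates directly into code. The following is a minimal sketch under those assumptions; the function name `rk2` and the test problem are our own, not from the book.

```python
import numpy as np

def rk2(F, t, s0):
    """Second-order Runge Kutta on the grid t, using c1 = c2 = 1/2, p = q = 1."""
    S = np.zeros(len(t))
    S[0] = s0
    for j in range(len(t) - 1):
        h = t[j + 1] - t[j]
        k1 = F(t[j], S[j])               # slope at the start of the step
        k2 = F(t[j] + h, S[j] + h * k1)  # slope at the predicted endpoint
        S[j + 1] = S[j] + (h / 2) * (k1 + k2)
    return S

# hypothetical test problem: dS/dt = -S, S(0) = 1, exact solution exp(-t)
t = np.linspace(0, 2, 21)
S = rk2(lambda t, s: -s, t, 1.0)
print(np.max(np.abs(S - np.exp(-t))))  # error decreases as O(h^2)
```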

Fourth-order Runge Kutta method

A classical method for integrating ODEs with a high order of accuracy is the Fourth Order Runge Kutta (RK4) method. It is obtained from the Taylor series using a similar approach to the one we just discussed for the second-order method. This method uses four estimates of the slope, \(k_1, k_2, k_3\), and \(k_4\). A weighted average of these is used to produce the approximation of the solution. The formula is as follows.

\[\begin{split} \begin{eqnarray*} k_1 & = & F(t_j,S(t_j))\\ k_2 & = & F\left(t_j+\frac{h}{2},S(t_j)+\frac{1}{2}k_1h\right)\\ k_3 & = & F\left(t_j+\frac{h}{2},S(t_j)+\frac{1}{2}k_2h\right)\\ k_4 & = & F(t_j+h,S(t_j)+k_3h) \end{eqnarray*} \end{split}\]

Therefore, we will have:

\[ S(t_{j+1}) = S(t_j) + \frac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right). \]

As indicated by its name, the RK4 method is fourth-order accurate, or \(O(h^4)\).
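As a rough sketch of how the RK4 update can be implemented (the function name `rk4`, the test problem, and the convergence check are our own illustration, not from the book), halving the step size should reduce the error by roughly a factor of \(2^4 = 16\):

```python
import numpy as np

def rk4(F, t, s0):
    """Classical fourth-order Runge Kutta (RK4) method on the grid t."""
    S = np.zeros(len(t))
    S[0] = s0
    for j in range(len(t) - 1):
        h = t[j + 1] - t[j]
        k1 = F(t[j], S[j])
        k2 = F(t[j] + h / 2, S[j] + h / 2 * k1)
        k3 = F(t[j] + h / 2, S[j] + h / 2 * k2)
        k4 = F(t[j] + h, S[j] + h * k3)
        S[j + 1] = S[j] + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return S

# hypothetical check on dS/dt = -S, S(0) = 1, exact solution exp(-t):
# each halving of h should shrink the maximum error by about 16x
for n in (11, 21, 41):
    t = np.linspace(0, 2, n)
    err = np.max(np.abs(rk4(lambda t, s: -s, t, 1.0) - np.exp(-t)))
    print(n - 1, err)
```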
