12) Stability of multi-step schemes#

Last time:

  • Validation and Verification

  • Non constant coefficients

  • Manufactured solutions for variable coefficients

  • Discretizing in conservative form

  • Sparsity

Today:

  1. Stability of multi-step schemes: Introduction

  2. Multi-step schemes

  3. Stability for the Leapfrog scheme

  4. Initialization of the Leapfrog scheme

  5. Parasitic modes of the Leapfrog scheme

1. Stability of multi-step schemes: Introduction#

In your Assignment 3, you have seen that Crank-Nicolson is a second-order scheme in both space and time and unconditionally stable.

  • We have also had a discussion on boundary conditions.

  • For Finite Difference schemes we must both respect physical boundary conditions as well as (sometimes) introduce additional numerical boundary conditions (see Assignment 2).

  • The implementation of these boundary conditions affects both the order of accuracy and the stability of the scheme.

  • For one-step schemes we have seen that the CFL condition gives us a necessary condition for stability

    \[ | a r | \leq 1, \]

    where \(r = \Delta t/\Delta x\).

Theorem: Von Neumann Stability

A one-step finite difference scheme (with constant coefficients) is stable in a stability region \(\Lambda\) if and only if there is a constant \(K\) (independent of \(\theta\), \(\Delta t\), and \(\Delta x\)) such that

\[ |g(\theta,\Delta t,\Delta x)| \le 1 + K \Delta t, \]

with \((\Delta t,\Delta x)\in\Lambda\). If \(g(\theta,\Delta t,\Delta x)\) is independent of \(\Delta t\) and \(\Delta x\), the stability condition can be replaced with the restricted stability condition

\[ |g(\theta)| \le 1. \]
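The restricted condition is easy to check numerically. As an illustration (not part of the original notes), here is a sketch that samples \(|g(\theta)|\) for the upwind scheme, whose amplification factor is \(g(\theta) = 1 - R\left(1 - e^{-i\theta}\right)\) with \(R = a\Delta t/\Delta x\):

```python
import numpy as np

def upwind_g(theta, R):
    """Amplification factor of the upwind scheme for u_t + a u_x = 0 (a > 0)."""
    return 1.0 - R * (1.0 - np.exp(-1j * theta))

theta = np.linspace(-np.pi, np.pi, 2001)

# Restricted stability condition |g(theta)| <= 1 holds for R <= 1 ...
print(np.max(np.abs(upwind_g(theta, 0.8))))   # stays at or below 1
# ... and fails for R > 1.
print(np.max(np.abs(upwind_g(theta, 1.2))))   # exceeds 1
```

The same sampling check works for any constant-coefficient one-step scheme once its amplification factor is written down.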

Now, we want to extend this analysis to multi-step schemes. We start from the Leapfrog scheme; the same analysis can then be applied to other multi-step schemes.

2. Multi-step schemes#

Note

Terminology

A note on the terminology of multi-step schemes. Some references, e.g., Wikipedia, define a multi-step scheme or method to be one that uses information from the previous \(s\) steps to calculate the next value (i.e., not including the next value in the definition).

Some numerical PDEs textbooks, like the reference textbooks for this course, e.g., Thomas and Strikwerda, include the new time step in the number of levels of the scheme. Hence, for the Leapfrog scheme considered below, they would refer to it as a two-step or three-time-level method (as it uses information at the time levels \(n\) and \(n-1\) to calculate the new value at \(n+1\).)

3. Stability for the Leapfrog scheme#

\[ \begin{array}{rll} \displaystyle \frac{u_{k}^{n+1} - u_{k}^{n-1}}{2\Delta t} + a \frac{ u_{k+1}^{n} - u_{k-1}^{n} }{ 2\Delta x} = 0 && \hbox{ Leapfrog (C.T.C.S.)} \end{array} \]

We set \(u_{k}^{n} \leadsto g^{n}e^{i kh\xi}\) (with \(h \equiv \Delta x\)) (from application of the Fourier inversion formula), and eliminate common factors (here, the lowest time level is \(g^{n-1}e^{i kh\xi}\), so we divide by it):

\[\begin{split} \begin{align*} \frac{u_{k}^{n+1} - u_{k}^{n-1}}{2\Delta t} + a \frac{ u_{k+1}^{n} - u_{k-1}^{n} }{ 2\Delta x} &= 0 \\ \frac{g^{n+1}e^{i kh\xi} - g^{n-1}e^{i kh\xi}}{2 \Delta t} + a \frac{g^{n}e^{i( k+1)h\xi} - g^{n}e^{i(k-1)h\xi}}{2 \Delta x} &= 0 \end{align*} \end{split}\]

Eliminate the common factor \( g^{n-1}e^{i kh\xi}\), and use \(\theta = h \xi\):

\[ \begin{align*} \frac{g^2 -1}{2 \Delta t} +a \frac{g \left(e^{i\theta} - e^{-i \theta} \right)}{2 \Delta x} &= 0 \end{align*} \]

Define \(R = a \Delta t / \Delta x\), then:

\[ G(g;\theta) = g^2 + 2i R\sin(\theta)\, g - 1 = 0 \]
  • We started with a 2-step scheme (or 3-time-level method) and we reached a polynomial in \(g\), \(G(g;\theta)\), called the amplification/growth polynomial. In this case it is quadratic, hence it has two roots or solutions:

\[ g_{\pm}(\theta) = -i R\sin(\theta) \pm \sqrt{ 1 - R^2\sin^2(\theta)} \]
  • When \(g_{+}\not=g_{-}\), the solution is given by

\[ \hat{u}^{n}(\xi) = A_{+}(\xi)g_{+}(h\xi)^{n} + A_{-}(\xi) g_{-}(h\xi)^{n}, \]

and \(A_{\pm}(\xi)\) are determined by initial conditions.

Sometimes it is useful to rewrite this in the following form

\[ \hat{u}^n(\xi) = A(\xi) g_{+}(h\xi)^n + B(\xi) \left[ \frac{ g_{-}(h\xi)^{n} - g_{+}(h\xi)^{n} }{ g_{-}(h\xi) - g_{+}(h\xi) } \right], \]

where \(A(\xi)\) and \(B(\xi)\) are determined by initial conditions.

  • When \({g_{+}=g_{-}} = g\), the solution is given by

\[ \hat{u}^{n}(\xi) = A(\xi)g(h\xi)^{n} + n\cdot B(\xi) g(h\xi)^{n-1}, \]

where \(A(\xi)\), and \(B(\xi)\) are related to \(\hat{u}^{0}(\xi)\), and \(\hat{u}^{1}(\xi)\) by

\[\begin{split} \begin{array}{rcl} A(\xi) &=& \hat{u}^{0}(\xi) \nonumber \\ B(\xi) &=& \hat{u}^{1}(\xi) - \hat{u}^{0}(\xi)g(h\xi). \end{array} \end{split}\]
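As a quick sanity check (not from the notes), one can verify numerically that the repeated-root solution \(A g^n + nB g^{n-1}\) satisfies the leapfrog recurrence \(\hat{u}^{n+1} = -2iR\sin(\theta)\,\hat{u}^n + \hat{u}^{n-1}\) implied by the amplification polynomial. The coefficients \(A\), \(B\) below are arbitrary values assumed for the check:

```python
import numpy as np

# A double root occurs when R*sin(theta) = 1, giving g = -1j.
s = 1.0                           # s = R*sin(theta) at the double root
g = -1j * s                       # double root of g^2 + 2i*s*g - 1 = 0
A, B = 1.0 + 2.0j, 0.5 - 1.0j     # arbitrary coefficients (assumed)

u = lambda n: A * g**n + n * B * g**(n - 1)

# The amplification polynomial implies u^{n+1} = -2i*s*u^n + u^{n-1}.
residual = max(abs(u(n + 1) - (-2j * s * u(n) + u(n - 1))) for n in range(1, 30))
print(residual)   # ~0: the repeated-root ansatz satisfies the recurrence
```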

Now, with this setup we get:

  • when \(|R|\le1\), we have that

\[ |g_{\pm}|^{2} = 1 - R^2\sin^2(\theta) + R^2\sin^2(\theta) = 1, \]
  • when \(|R|>1\), we get

    \[ |g_{-}(\pi/2)| = |R| + \sqrt{ R^2 - 1} \ge |R| > 1. \]

  • Hence, \(|R|\le 1\) is a necessary condition for stability.

But… We’re not done yet. We must also look at the case \(g_{+} = g_{-}\).

Within the stability region \(|R|\le 1\), this equality holds only when \(|R|=1\) and \(\theta=\pm\pi/2\).

For these two values we get \(g=\pm i\), and the solutions

\[ \hat{u}^n\left(\pm\pi/2h\right) = A\left(\pm\pi/2h\right) (\mp i)^n + n\cdot B\left(\pm\pi/2h\right) (\mp i)^{n-1}. \]

Since this term grows linearly in \(n\), the leapfrog scheme is unstable for \(|R|=1\).

  • Hence, the leapfrog scheme is stable \(\Leftrightarrow\) \(|R|<1\).
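This stability boundary can be confirmed numerically by computing the roots of the amplification polynomial \(g^2 + 2iR\sin(\theta)\,g - 1\) over a grid of \(\theta\) values. A sketch (illustrative only; the grid resolution is an arbitrary choice):

```python
import numpy as np

def leapfrog_root_magnitudes(R, n_theta=721):
    """Max |g| over both roots of g^2 + 2i*R*sin(theta)*g - 1 = 0, over theta."""
    worst = 0.0
    for th in np.linspace(-np.pi, np.pi, n_theta):
        # Coefficients in descending powers of g.
        roots = np.roots([1.0, 2j * R * np.sin(th), -1.0])
        worst = max(worst, np.max(np.abs(roots)))
    return worst

print(leapfrog_root_magnitudes(0.9))   # ~1: both roots stay on the unit circle
print(leapfrog_root_magnitudes(1.5))   # > 1: the scheme is unstable
```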

4. Initialization of the Leapfrog scheme#

The Leapfrog scheme (and other two-step schemes) require that in addition to the initial values \(u_{k}^0\), the first time level \(u_k^1\) must also be initialized.

  • Any consistent one-step scheme, even an unstable one, can be used to initialize \(u_k^1\). Since the unstable scheme is applied only once, its error growth cannot compound and remains negligible.

  • Furthermore, if the grid parameter \(R\) is constant, then the initialization scheme can be one order less accurate than the two-step scheme, without degrading the overall accuracy of the scheme.

  • Thus, we have found a potential use for the unstable forward-time centred-space (F.T.C.S.) scheme for the one-way wave equation: as an initializer for the leapfrog scheme.
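As a sketch of this initialization strategy (the grid, initial profile, and parameter choices below are assumed for illustration), here is leapfrog on the periodic one-way wave equation with one F.T.C.S. starting step:

```python
import numpy as np

def leapfrog_advect(u0, a, dx, dt, n_steps):
    """Leapfrog (C.T.C.S.) for u_t + a u_x = 0 on a periodic grid,
    initialized with one forward-time centred-space (F.T.C.S.) step."""
    R = a * dt / dx
    u_prev = u0.copy()
    # One F.T.C.S. step supplies the second time level u^1.
    u = u_prev - 0.5 * R * (np.roll(u_prev, -1) - np.roll(u_prev, 1))
    for _ in range(n_steps - 1):
        u_next = u_prev - R * (np.roll(u, -1) - np.roll(u, 1))
        u, u_prev = u_next, u
    return u

# Assumed test problem: a smooth periodic profile advected once around the domain.
N, a = 200, 1.0
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = 1.0 / N
dt = 0.5 * dx / a                   # R = 0.5 < 1, inside the stability region
u0 = np.sin(2 * np.pi * x)
n_steps = round(1.0 / (a * dt))     # one full period of the domain
u = leapfrog_advect(u0, a, dx, dt, n_steps)
print(np.max(np.abs(u - u0)))       # small: the solution returns near its start
```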

5. Parasitic modes of the Leapfrog scheme#

From the expressions

\[\begin{split} \begin{array}{rcl} \hat{u}^{n}(\xi) &=& A_{+}(\xi)g_{+}(h\xi)^{n} + A_{-}(\xi) g_{-}(h\xi)^{n}, \\[1ex] g_{\pm}(\theta) &=& -i R\sin(\theta) \pm \sqrt{ 1 - R^2\sin^2(\theta)}, \end{array} \end{split}\]

we see that the solution of the leapfrog scheme consists of two parts, associated with \(g_{+}(\theta)\), and \(g_{-}(\theta)\).

  • We note that \(g_{+}(0)=1\), and \(g_{-}(0)=-1\).

Let’s see how these two parts contribute to the solution.

If we use the forward-time centred-space (F.T.C.S.) scheme to initialize the leapfrog scheme

\[ \hat{u}^1(\xi) = (1-i R\sin(\theta))\hat{u}^0(\xi). \]

Taking the first step with the F.T.C.S. scheme, and Taylor expanding the square roots in the expressions for \(g_{\pm}(\theta)\):

\[\begin{split} \begin{array}{rcl} g_{+}(\theta) &=& 1-i R\sin(\theta) - \frac{1}{2}R^2 \sin^{2}(\theta) + O({\theta^4}),\\ g_{-}(\theta) &=& -1-i R\sin(\theta) + \frac{1}{2}R^2 \sin^{2}(\theta) + O({\theta^4}). \end{array} \end{split}\]

Using the definition of \(B(\xi)\) above, we have

\[ B(\xi) = \left[ \frac{1}{2}R^2\sin^2(\theta)+ O({\theta^4})\right] \hat{u}^{0}(\xi). \]
  • For \(|\theta|\) small, \(|B(\xi)|=O({\theta^2})\) is small, so the scheme behaves like a one-step scheme with amplification factor \(g_{+}(\theta)\).

  • When \(|\theta|\) is not small, \(B(\xi)\) is not necessarily small, and the effect of the second amplification factor \(g_{-}(\theta)\) is felt.

  • The portion of the solution associated with \(g_{-}(\theta)\) is called the parasitic mode. Since \(g_{-}(0)=-1\), the parasitic mode induces rapid oscillations in time.

  • The parasitic mode travels in the wrong direction.

    • When \(a\) is positive, the parasitic mode travels to the left, and when \(a\) is negative it travels to the right.

  • In general, when the amplification/growth polynomial has roots with multiplicity \(>1\) things get more complicated.
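To see how much of the parasitic mode the F.T.C.S. initialization excites, one can solve the \(2\times 2\) system \(A_{+}+A_{-}=\hat{u}^0\), \(A_{+}g_{+}+A_{-}g_{-}=\hat{u}^1\) for the parasitic weight \(A_{-}\). A sketch (the function name and parameter values are illustrative):

```python
import numpy as np

def parasitic_amplitude(theta, R):
    """|A_-| per unit of u^0: weight of the parasitic mode g_- when leapfrog
    is started with one F.T.C.S. step, u^1 = (1 - i*R*sin(theta)) * u^0."""
    s = R * np.sin(theta)
    g_plus = -1j * s + np.sqrt(1.0 - s**2)
    g_minus = -1j * s - np.sqrt(1.0 - s**2)
    u0, u1 = 1.0, 1.0 - 1j * s
    # Solve A_+ + A_- = u0 and A_+ g_+ + A_- g_- = u1 for A_-.
    return abs((u1 - g_plus * u0) / (g_minus - g_plus))

R = 0.5
for theta in [0.01, 0.1, 1.0, np.pi / 2]:
    print(theta, parasitic_amplitude(theta, R))
```

Consistent with \(B(\xi) = O(\theta^2)\) above, the printed weights grow roughly quadratically in \(\theta\) for small \(\theta\): the parasitic mode is negligible for well-resolved (low-frequency) modes and appreciable only near \(|\theta| = \pi/2\).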