15) Applied Finite Elements#
Last time:
Linear Finite Elements
Galerkin method for 1D Poisson equation
Abstractions
Today:
Advection-diffusion (time independent)
Artificial diffusion and Streamline Upwinding for stabilization
1D time-independent advection-diffusion Example
Variational Multiscale Method
Time-dependent problems
Stabilized methods for advection-diffusion
General workflow for time dependent problems
Collocated quadrature
SUPG solver in Julia
Finite element interfaces: deal.ii
1. Advection-diffusion (time independent)#
Recall the (time independent) advection-diffusion problem
with a source term \(s\).
This can be rewritten in the weak form
or, bringing everything to the left-hand side:
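For reference (reconstructed here to match the notation of the time-dependent formulation in Section 7): the strong form, with advection velocity \(\mathbf w\), diffusivity \(\kappa\), and source \(s\), is

\[ \nabla\cdot(\mathbf w\, u - \kappa \nabla u) = s , \]

and the weak form with everything brought to the left-hand side reads

\[ \int_\Omega \Big[ \nabla v \cdot \big(\kappa \nabla u - \mathbf w\, u \big) - v s \Big] = 0, \quad \forall v . \]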
We can think of the advection-diffusion (or convection-diffusion) equation as a prototype of the Navier-Stokes equations, which describe the motion of viscous fluids and come in two categories: incompressible Navier-Stokes and compressible Navier-Stokes.
Observations#
The finite element (FE) numerical computation of incompressible Navier–Stokes (NS) equations suffers from two main sources of numerical instabilities arising from the associated Galerkin problem.
Equal-order finite elements for pressure and velocity do not satisfy the inf-sup condition, which leads to instabilities in the discrete pressure (also called spurious pressure modes).
The advection term in the Navier–Stokes equations can produce oscillations in the velocity field (also called spurious velocities). These oscillations become more pronounced for advection-dominated flows (i.e., high Reynolds number \(Re\)).
For “low” \(Re\), viscous forces dominate and we are in the viscous regime (also called laminar flow).
For “high” \(Re\), inertial forces prevail, yielding a slightly viscous fluid moving at high velocity (also called turbulent flow).
2. Artificial diffusion and Streamline Upwinding for stabilization#
To control instabilities arising from the inf-sup condition and from advection-dominated problems, pressure-stabilizing Petrov–Galerkin (PSPG) stabilization along with Streamline-Upwind Petrov-Galerkin (SUPG) stabilization can be added to the NS Galerkin formulation.
The Streamline-Upwind Petrov-Galerkin (SUPG) method was designed in the 1980s for convection-dominated flows governed by the incompressible Navier–Stokes equations by Brooks and Hughes (legends in Finite Element Analysis).
This is a modification at the equation level: we manipulate the original equation to obtain a formulation that is more stable when discretized.
Let’s add the following stabilizing term to the equation
where \(\tau\) is a stabilization parameter (one more knob to turn - I know…).
Hence, we reach
Observation:
The residual is large where diffusion is needed.
Let’s examine what this does to advection#
From vector calculus, we have the identity
where, if we are in 2D for example, \(\mathbf w = (w_1, w_2)\) and
What does \(\mathbf w \otimes \mathbf w\) do? For any vector \(\mathbf z\), it extracts the components of \(\mathbf z\) in the direction of \(\mathbf w\) and then returns a vector aligned with \(\mathbf w\). So it is effectively a directional projection operator onto \(\mathbf w\).
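A quick numerical check of this projection property (a sketch with illustrative vectors, not taken from the lecture):

```julia
using LinearAlgebra

w = [3.0, 4.0]   # "wind" direction (illustrative)
z = [1.0, 2.0]   # arbitrary vector to project (illustrative)

# Up to the scaling |w|^2, the outer product w ⊗ w is the orthogonal
# projector onto the direction of w
P = (w * w') / dot(w, w)

Pz = P * z
@show Pz                                     # the component of z along w
@show norm(Pz - dot(z, w) / dot(w, w) * w)   # agrees with the componentwise formula
@show norm(P * (z - Pz))                     # the perpendicular remainder is annihilated
```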
Differential geometry interpretation:
If we decompose the gradient as the sum of its parallel and perpendicular components
we have that \( \left( \nabla u \right)_{\parallel} \) is along \(\mathbf w\) and \(\left( \nabla u \right)_{\perp}\) is perpendicular to it.
Then, by definition of gradient, we have
where \(D_{\mathbf w} u\) is the directional derivative in the direction of \(\mathbf w\). Advection transports information along streamlines, and instabilities (oscillations) also occur along those directions.
Hence, the stabilization term we add extracts the components only along the streamlines of the flow. The perpendicular direction is unaltered, so the sharp features across them are preserved.
This is “pencil” shaped diffusion, only along the streamlines (a family of curves whose tangent vectors constitute the velocity vector field of the flow).
With general diffusion, the term \(\nabla\cdot(\kappa\nabla u) \) (and any variations of it, such as the hyperdiffusion or artificial diffusion term \(\kappa \nabla^4 u\)) introduces isotropic, non-selective dissipation, symmetric in all directions.
If \(\tau^e\) is chosen appropriately, this adds just enough diffusion to bring the cell Peclet number down to about 1 where needed.
Recall that we have introduced the cell Peclet number in our Fourier and von Neumann stability analyses Lecture. I report it here for convenience
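In the notation used in the code below (advection speed \(\mathbf w\), element size \(h\), diffusivity \(\kappa\)):

\[ \mathrm{Pe}_h = \frac{|\mathbf w|\, h}{2 \kappa} . \]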
Optimal stabilization#
A nodally exact solution for 1D advection is obtained with the following stabilization parameter \(\tau^e\):
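The classic choice, matching `tau = h / (2 * abs(wind)) * (coth(Pe) - 1 / Pe)` in the code below, is

\[ \tau^e = \frac{h}{2\, |\mathbf w|} \left( \coth \mathrm{Pe}_h - \frac{1}{\mathrm{Pe}_h} \right) . \]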
In general, several stabilization parameters can be found in the literature, but typically, \(\tau^e\) (which depends on mesh size \(h\), advection speed, \(\mathbf w\), and Peclet number) is chosen to reflect the local balance of transient effects, advection, and diffusion.
3. 1D time-independent advection-diffusion Example#
For the steady 1D advection-diffusion problem with source function \(s = 1\), we have
the standard Galerkin weak form is
For streamline-upwind stabilization, we use a modified test function
hence we want to add the term based on the strong form residual
Expanding:
Distributing the stabilization:
We now use a common simplified SUPG interpretation in which the extra stabilization is represented as an added streamline-diffusion term.
In other words, we simplify the stabilization by keeping only the dominant term:
Therefore the stabilized weak form becomes
Comparing with the FEM assembly form
we identify
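In this 1D example with \(s = 1\) and scalar wind \(\mathbf w\), and matching the `fq` function in the code below:

\[ f_0 = \mathbf w\, u_x - 1, \qquad f_1 = \kappa\, u_x + \tau\, \mathbf w^2\, u_x . \]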
Let’s see this example in code.
using SparseArrays
using FastGaussQuadrature
import NLsolve: nlsolve
using LinearAlgebra
using Plots
using LaTeXStrings
default(linewidth=4, legendfontsize=12)
# utility functions from our previous lecture
function my_spy(A)
cmax = norm(vec(A), Inf)
s = max(1, ceil(120 / size(A, 1)))
spy(A, marker=(:square, s), c=:diverging_rainbow_bgymr_45_85_c67_n256, clims=(-cmax, cmax))
end
# build global coordinates for all DOFs from element boundary points
function xnodal(x, P)
xn = Float64[]
xref, _ = gausslobatto(P)
for i in 1:length(x)-1
xL, xR = x[i:i+1]
append!(xn, (xL+xR)/2 .+ (xR-xL)/2 * xref[1+(i>1):end])
end
xn
end
# build the reference-element basis functions evaluated at quadrature points
function febasis(P, Q)
x, _ = gausslobatto(P)
q, w = gausslegendre(Q)
Pk, _ = vander_legendre_deriv(x)
Bp, Dp = vander_legendre_deriv(q, P)
B = Bp / Pk
D = Dp / Pk
x, q, w, B, D
end
# 1D finite-element mesh topology and local-to-global connectivity operator used later in assembly
function fe1_mesh(P, nelem)
x = LinRange(-1, 1, nelem+1)
rows = Int[]
cols = Int[]
for i in 1:nelem
append!(rows, (i-1)*P+1:i*P)
append!(cols, (i-1)*(P-1)+1:i*(P-1)+1)
end
x, sparse(cols, rows, ones(nelem*P))'
end
function vander_legendre_deriv(x, k=nothing)
if isnothing(k)
k = length(x) # Square by default
end
m = length(x)
Q = ones(m, k)
dQ = zeros(m, k)
Q[:, 2] = x
dQ[:, 2] .= 1
for n in 1:k-2
Q[:, n+2] = ((2*n + 1) * x .* Q[:, n+1] - n * Q[:, n]) / (n + 1)
dQ[:, n+2] = (2*n + 1) * Q[:,n+1] + dQ[:,n]
end
Q, dQ
end
function fe_residual(u_in, fe, fq; bci=[1], bcv=[1.])
u = copy(u_in) # make a copy of the initial global vector of unknown coefficients
v = zero(u)
u[bci] = bcv # put the boundary condition values in the proper boundary condition indices of the solution vector
for e in 1:fe.nelem # for all elements
q, w, E, dXdx = fe_element(fe, e)
B, D = fe.B, fe.D # B is the "basis evaluator" operator (evaluates basis functions at quadrature points); D stores their derivatives
ue = E * u # restrict/extract the solution on each element e
uq = B * ue # uq, u at quadrature points
Duq = dXdx .* (D * ue) # Duq, ∇u at quadrature points (converting the reference coord element derivative to the physical space derivative)
f0, f1 = fq(q, uq, Duq) # fq is the user-defined QFunction, which represents the weak form of the physics/model we want to solve
ve = B' * (w .* f0) + D' * (dXdx .* w .* f1) # local residual vector on the element
v += E' * ve # assemble global vector using the transpose of the element restriction operator
end
v[bci] = u_in[bci] - u[bci] # make sure global vector satisfies the prescribed BCs
#println("residual")
v
end
# Extract out what we need for element e
function fe_element(fe, e)
xL, xR = fe.x[e:e+1]
q = (xL+xR)/2 .+ (xR-xL)/2*fe.q
w = (xR - xL)/2 * fe.w
E = fe.Et[:, (e-1)*fe.P+1:e*fe.P]'
dXdx = ones(fe.Q) * 2 / (xR - xL)
q, w, E, dXdx
end
struct FESpace
P::Int
Q::Int
nelem::Int
x::Vector
xn::Vector
Et::SparseMatrixCSC{Float64, Int64}
q::Vector
w::Vector
B::Matrix
D::Matrix
function FESpace(P, Q, nelem)
x, E = fe1_mesh(P, nelem)
xn = xnodal(x, P)
_, q, w, B, D = febasis(P, Q)
new(P, Q, nelem, x, xn, E', q, w, B, D)
end
end
# Stabilized advection-diffusion
wind = 1000; k = 1
fq(q, u, Du) = wind .* Du .- one.(u), k * Du + tau * wind^2 .* Du # two components: the first is f_0 (multiplying v), the second is f_1 (multiplying ∇v)
n = 30; h = 2 / n; Pe = abs(wind) * h / (2 * k)
tau = h / (2 * abs(wind)) * (coth(Pe) - 1 / Pe)
fe = FESpace(2, 2, n)
u0 = zero(fe.xn)
N = length(fe.xn)
sol = nlsolve(u -> fe_residual(u, fe, fq; bci=[1, N], bcv=[0, 0]), u0, method=:newton)
plot(fe.xn, sol.zero, marker=:auto, legend=:none, xlabel = "x", ylabel = "u", title = "Stabilized advection-diffusion")
Compare with what we saw in our Upwind methods section.
# another example without stabilization
wind = 500
fq(q, u, Du) = wind .* Du .- one.(u), 1 * Du # two components: the first is f_0 (multiplying v), the second is f_1 (multiplying ∇v)
fe = FESpace(5, 5, 10) # higher order polynomial basis
u0 = zero(fe.xn)
N = length(fe.xn)
sol = nlsolve(u -> fe_residual(u, fe, fq; bci=[1, N], bcv=[0, 0]), u0, method=:newton)
plot(fe.xn, sol.zero, marker=:auto, legend=:none, xlabel = "x", ylabel = "u", title = "Without stabilization")
It is a lot more oscillatory!
4. Variational Multiscale Method#
The Variational Multiscale Method (VMS) is a technique used for deriving models and numerical methods for multiscale phenomena.
Stabilized methods are receiving increasing attention in computational fluid dynamics because they are designed to remedy drawbacks of the standard Galerkin method: advection-dominated flow problems, and problems in which an arbitrary combination of interpolation functions may yield unstable discretized formulations.
VMS was introduced by Hughes (one of the fathers of the SUPG method) in 1995.
Broadly speaking, VMS is a technique used to get mathematical models and numerical methods which are able to catch multiscale phenomena.
It is usually adopted for problems with huge scale ranges, which are separated into a number of scale groups (e.g., turbulent flows with energy cascades).
5. Time-dependent problems#
Time-dependent advection-diffusion#
Suppose we have the strong and weak forms
where \(F\) is the spatial (only) differential operator and
is a semilinear form (linear in \(v\), but possibly nonlinear in \(u\)).
For advection-diffusion,
To obtain the weak form, we multiply by a test function \(v\) and integrate over the domain \(\Omega\):
Integrating the diffusion term by parts gives
Hence the weak form is
After applying the boundary conditions (\(v=0\) on \(\partial \Omega\)), this is written as
Generally, we can write equations in residual form
then the weak form is:
next, we want to consider the linearization of our spatial terms (of course, this is particularly useful for nonlinear problems, but we outline the main key ideas here as well).
The derivative of the weak form spatial terms \(A\) with respect to \(u\), in a given direction \(w\), is obtained by applying \(L\) to \(w\), then testing it with \(v\).
where
So, in this case for advection-diffusion,
Variational Multiscale key ideas#
Decompose the solution into an overlapping sum decomposition of a resolved and oscillatory part, \(u = \bar u + \tilde u\) and similarly for \(v = \bar v + \tilde v\), where \(\bar u \) represents the coarse (resolved) scales and \(\tilde u\) the fine (subgrid or unresolved) scales
We approximate \(A\) via first order Taylor series
where we prefer the forms in which \(L\) or its adjoint, \(L^*\), is applied to the smoother of the two functions.
So the fine-scale equation becomes approximately
Since \( A(\tilde v, \bar u )\) is the weak action of the coarse operator,
This becomes
Because this holds for all fine-scale test functions \(\tilde v\), the fine scales satisfy, in weak form,
where the right-hand side is the residual of the coarse-scale solution.
Let’s define the coarse residual
Then the fine-scale equation can be rewritten as
which can be summarized as
the unresolved scales are driven by the residual of the resolved scales.
Subgrid ansatz#
Ideally, we would want to solve the latter equation for the fine-scale solution \(\tilde u\). But because \(\tilde u\) lives in the unresolved space, it would be too computationally expensive to directly solve for it (effectively making it resolved).
Ideally, \(\tilde u\) can be obtained by solving
Assuming quasistatic fluctuations \(\tilde u_t = 0\), the ansatz
gives
and thus the stabilized form is entirely written in terms of resolved scales.
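A common algebraic form of this ansatz in the VMS/SUPG literature (a sketch; the exact sign and scaling conventions may differ) is

\[ \tilde u \approx -\tau\, R(\bar u) , \]

i.e., the fine scales are approximated as the stabilization parameter times the coarse residual; substituting this into the coarse-scale equation produces the stabilization terms.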
6. Stabilized methods for advection-diffusion#
Variational Multiscale (1998 paper)#
Stabilization is not symmetric
Involves diffusion operator applied to test function, but basis functions are only \(C^0\) (discontinuous derivatives).
Properly applied as a space-time method, not method of lines (so doesn’t work with off-the-shelf ODE integrators).
Explains preferred \(L\) vs \(L^*\) for hyperbolic systems with nonsymmetric flux Jacobian.
Galerkin Least-Squares 1991 paper#
SPD stabilization (convenient for theory)
A space-time method when done properly.
Less accurate than VMS in practice.
SUPG 1982 paper#
Streamline upwind Petrov-Galerkin can be misleading (it is a variational multiscale method whose algebraic form happens to look like Petrov-Galerkin)
Easy to implement with method of lines (so it can work with off-the-shelf ODE integrators)
SU variant even simpler; only first order.
7. General workflow for time dependent problems#
Start with a time dependent problem in strong form
\[ u_t + \nabla\cdot (\mathbf w u - \kappa \nabla u) = s .\]
Multiply by a test function \(v\) and integrate
\[ \int_\Omega \Big[ v u_t + \nabla v\cdot \big(\kappa \nabla u - \mathbf w u \big) - vs \Big] = 0, \quad \forall v.\]
Discretize and assemble
\[ M u_t + A u - s = 0 .\]
Convert to explicit ODE form
\[ u_t = M^{-1} (-A u + s). \]
Mass matrix \(M\) has the same sparsity pattern as the matrix describing the “physics”, \(A\) – direct solve costs the same.
Question: does the mass matrix mean finite element methods cannot be used with explicit time integrators?
\(M\) is usually much better conditioned than \(A\); solve in less than 10 Conjugate Gradient (CG) iterations with Jacobi preconditioning.
Idea for better computational complexity: replace \(M\) with a diagonal approximation. Different choices:
\(\operatorname{diag}(M)\) is inconsistent
row sums = “lumping”, which gives rise to the lumped mass matrix
use collocated Lobatto quadrature, which naturally gives a diagonal mass matrix, with no need for further manipulation
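As a tiny illustration of lumping, here is a sketch on a hypothetical consistent mass matrix for linear elements on a uniform 1D mesh (the classic \(h/6\) tridiagonal pattern; this matrix is illustrative, not produced by the lecture's code):

```julia
using LinearAlgebra

# Hypothetical consistent mass matrix: 3 linear elements on [0, 1], h = 1/3
h = 1 / 3
M = h / 6 * [2.0 1 0 0; 1 4 1 0; 0 1 4 1; 0 0 1 2]

# "Lumping": place each row sum on the diagonal
M_lumped = Diagonal(vec(sum(M, dims=2)))

@show diag(M_lumped)           # h/2 at the endpoints, h in the interior
@show sum(M) ≈ sum(M_lumped)   # lumping preserves total mass: true
```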
7.1 Collocated quadrature: when the quadrature points and basis interpolation nodes coincide#
Generally, the interpolation nodes, \(x_i\), differ from the quadrature nodes, \(x_q\). In the case of collocation or collocated quadrature, we pick them to be identical:
Recall that the test functions satisfy the Kronecker delta property:
So the basis evaluation matrix (denoted by \(B\) in our Abstractions section) becomes
That means:
This has the important consequence that the Mass matrix is diagonal:
In fact, recall that the mass matrix \(M\) is defined as
and, in terms of the basis applicator matrix \(B\), we have that
If \(B = I\), then
where the \(w_j\) are the quadrature weights.
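Putting the chain together (consistent with the definitions above, on the reference element):

\[ M_{ij} = \int \phi_i\, \phi_j \;\approx\; \sum_q w_q\, B_{qi}\, B_{qj} \;=\; w_i\, \delta_{ij} \quad \text{when } B = I , \]

so \(M = \operatorname{diag}(w_1, \dots, w_n)\), up to the physical-element scaling of the weights.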
A diagonal mass matrix can be achieved even without collocation (it is then called a lumped mass matrix, usually obtained by placing each row sum on the diagonal). But with collocated quadrature, it happens naturally.
dfq_mass(q, u, du, Du, Ddu) = du, 0*Ddu
# Jacobian assembly
function fe_jacobian(u_in, fe, dfq; bci=[1], bcv=[1.])
u = copy(u_in) # store a copy of the initial solution vector
u[bci] = bcv # put the boundary condition values in the proper boundary condition indices of the solution vector
rows, cols, vals = Int[], Int[], Float64[]
for e in 1:fe.nelem # for all elements
q, w, E, dXdx = fe_element(fe, e)
B, D, P = fe.B, fe.D, fe.P
ue = E * u # restrict/extract the solution on each element e
uq = B * ue; Duq = dXdx .* (D * ue) # uq, u at quadrature points; Duq, ∇u at quadrature points
K = zeros(P, P) # initialize the local element Jacobian matrix, one P×P block per element
for j in 1:fe.P # for each column of the local Jacobian matrix, we perturb the local solution in the direction of the j-th local basis function
du = B[:,j] # j-th local basis function evaluated at quadrature points
Ddu = dXdx .* D[:,j] # its (physical) derivative at quadrature points
df0, df1 = dfq(q, uq, du, Duq, Ddu) # returns the linearized integrand pieces df_0 and df_1 for the perturbation direction (Gateaux or directional derivative of the residual integrand)
K[:,j] = B' * (w .* df0) + D' * (dXdx .* w .* df1) # action of the weak-form Jacobian against all test functions
end
inds = rowvals(E')
append!(rows, kron(ones(P), inds))
append!(cols, kron(inds, ones(P)))
append!(vals, vec(K))
end
A = sparse(rows, cols, vals)
A[bci, :] .= 0; A[:, bci] .= 0
A[bci,bci] = diagm(ones(length(bci)))
A
end
# modified febasis function and FESpace struct to account for different types of quadratures
function febasis(P, Q, quadrature=gausslegendre)
x, _ = gausslobatto(P)
q, w = quadrature(Q)
Pk, _ = vander_legendre_deriv(x)
Bp, Dp = vander_legendre_deriv(q, P)
B = Bp / Pk
D = Dp / Pk
x, q, w, B, D
end
struct FESpace
P::Int
Q::Int
nelem::Int
x::Vector
xn::Vector
Et::SparseMatrixCSC{Float64, Int64}
q::Vector
w::Vector
B::Matrix
D::Matrix
function FESpace(P, Q, nelem, quadrature=gausslegendre)
x, E = fe1_mesh(P, nelem)
xn = xnodal(x, P)
_, q, w, B, D = febasis(P, Q, quadrature)
new(P, Q, nelem, x, xn, E', q, w, B, D)
end
end
fe = FESpace(6, 6, 4, gausslobatto) # gausslobatto quadrature nodes, i.e., collocated quadrature, since we use Lobatto interpolation nodes in the febasis and xnodal functions
u0 = zero(fe.xn)
J1 = fe_jacobian(u0, fe, dfq_mass, bci=[], bcv=[])
fe = FESpace(6, 6, 4, gausslegendre) # gausslegendre quadrature nodes, i.e., without collocated quadrature
J2 = fe_jacobian(u0, fe, dfq_mass, bci=[], bcv=[])
@show norm(J1 - diagm(diag(J1))) # check that it is already diagonal
@show norm(J2 - diagm(diag(J2))) # not diagonal
@show norm(sum(J2, dims=2) - diag(J1)) # we can make it a diagonal "lumped" mass matrix, summing the row entries
my_spy(J2)
norm(J1 - diagm(diag(J1))) = 8.961917579818835e-17
norm(J2 - diagm(diag(J2))) = 0.07988743228160049
norm(sum(J2, dims = 2) - diag(J1)) = 2.6207532627416455e-16
SUPG for time-dependent problems#
For our advection-diffusion problem
There is a \(u_t\) term in the stabilization, tested by a gradient of the test function. This means we can’t create a simple explicit form
\[u_t = M^{-1} F(u)\]
amenable to off-the-shelf Method of Lines time integrators. Some ad-hoc methods treat this term explicitly, which effectively lags it. This can work, but it limits the choice of time integrator and affects the order of accuracy in time.
One can use fully implicit methods with this formulation, usually written as \(G(\dot u, u, t) = 0\). Generalized-\(\alpha\) methods (a second-order scheme that can move continuously between the midpoint rule and BDF2, which is L-stable) are popular.
There is a strong form
\[\nabla\cdot(\kappa\nabla u)\]
appearing in stabilization. For linear elements, this is typically zero on each element.
Ignore the term (popular), or reconstruct it using neighbor cells (implementation complexity) and/or projection (extra work/communication).
High order elements
If \(\kappa\) is constant, \(\kappa \nabla\cdot\nabla u\) can be implemented using second derivatives of the basis functions.
Or via projection
Should \(\tau^e\) be constant or variable over the element?
8. SUPG solver in Julia#
General framework#
In the following example, we are going to use our general residual/Jacobian framework seen in Abstractions. Although this framework is technically not needed in this example, because we use explicit time stepping and all our terms are linear in \(u\), we want to show how this unified framework is used.
The approach we use here is general, and works for all types of problems, including:
implicit time stepping
nonlinear PDEs
stabilized methods (SUPG, VMS)
coupled systems
For this residual/Jacobian framework, we start from the strong form:
The weak residual is:
After integration by parts, we write:
where \( f_0, f_1 \) are the weak-form components encoded in fq in the code.
We use the Finite Element expansion, by letting:
where now we notice that the coefficients \(\{u_j(t)\}\) (the unknown nodal values to be determined) depend on time, and we will find an evolution equation (ODE) for them instead of a simple linear system as in the previous examples (see, e.g., the Poisson problem).
From the time derivative term:
We define the mass matrix:
From the spatial terms:
where \( K \) includes now advection, diffusion, and stabilization contributions.
We then obtain the semi-discrete formulation in matrix form
This is now a linear ODE and can be solved exactly. Define the matrix \(A\) as:
Then
has the exact solution in terms of the matrix exponential:
and the initial condition \(u(t=0; x)\).
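As a sanity check of this matrix-exponential formula, here is a minimal sketch on a hypothetical \(2\times 2\) system (a rotation generator, not the assembled FE matrices):

```julia
using LinearAlgebra

A = [0.0 1.0; -1.0 0.0]   # hypothetical skew-symmetric generator
u0 = [1.0, 0.0]
t = pi / 2

u = exp(A * t) * u0   # exact solution of u_t = A u with u(0) = u0

@show u
@show u ≈ [cos(t), -sin(t)]   # closed-form solution for this particular A: true
```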
In the case in which we want to use an implicit time integrator or we have nonlinear terms in our PDE, we first need to linearize our problem to obtain the linear operator \(A\) needed for the ODE (since matrix exponential needs a linear matrix \(A\)).
Here is where our generalized residual/Jacobian framework comes into play.
With the abstraction we use, the matrices \(M\) and \(K\) are not assembled directly (as bilinear forms), but are obtained by differentiating the weak residual: \(J = \frac{\partial R}{\partial u}\)
dfq_supg_mass in the code encodes the linearized mass matrix \(M\)
dfq_supg in the code encodes the linearized spatial terms matrix \(K\)
Example: 1D time-dependent advection–diffusion#
In this example, we consider the 1D time-dependent homogeneous (no source) advection–diffusion equation
We use a modified test function:
and derive a simplified SUPG stabilized weak form (as we have seen above, for the spatial terms, we retain the stabilization only for the dominant advection part, not the diffusion part):
where, again, the SUPG perturbation only affects the advection-direction part (not the diffusion term).
Now, let’s focus on the first integral in which the terms multiply the time derivative \(u_t\):
In our FEM Assembly lecture, we have learned how to assemble the residual and Jacobian.
For the Jacobian, we linearize with respect to a perturbation \(\delta u\). Since \(u_t\) is linear, its variation is just \(\delta u\). So the linearized weak form becomes:
Now, comparing with the assembly framework we have seen in code, we need the general form
where we identify
for the linearized stabilized mass matrix (dfq_supg_mass in the code).
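Explicitly, matching `dfq_supg_mass(q, u, du, Du, Ddu) = du, tau * wind * du` in the code below:

\[ \mathrm{d}f_0 = \delta u, \qquad \mathrm{d}f_1 = \tau\, \mathbf w\, \delta u . \]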
Now for the spatial operator terms,
we expand
We then proceed by linearizing with respect to \(\delta u\). Since the spatial operator is linear, we have that \(u_x\) varies as \(\delta u_x\). Thus
where we now identify
for the linearized stabilized spatial terms (dfq_supg in the code).
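Explicitly, matching `dfq_supg(q, u, du, Du, Ddu) = wind * Ddu, k * Ddu + tau * wind * (wind * Ddu)` in the code below:

\[ \mathrm{d}f_0 = \mathbf w\, \delta u_x, \qquad \mathrm{d}f_1 = \kappa\, \delta u_x + \tau\, \mathbf w \,(\mathbf w\, \delta u_x) . \]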
wind = 1; k = 0 # no diffusion
n = 100; h = 2 / n; Pe = abs(wind) * h / (2 * k) # with k = 0, Pe = Inf
tau = h / (2 * abs(wind)) * (coth(Pe) - 1 / Pe) # coth(Inf) = 1 and 1/Inf = 0, so tau = h / (2 * abs(wind))
dfq_supg_mass(q, u, du, Du, Ddu) = du, tau * wind * du # two components, the first is df_0, second is df_1
dfq_supg(q, u, du, Du, Ddu) = wind * Ddu, k * Ddu + tau * wind * (wind * Ddu) # two components, the first is df_0, second is df_1
function supg_setup()
fe = FESpace(2, 2, n); per(x) = (((1 + x) % 2) + 2) % 2 - 1
exact(x, t) = exp(-((per(x-wind*t))/.15)^2) # periodic gaussian hump
u0 = exact.(fe.xn, 0)
M = fe_jacobian(u0, fe, dfq_supg_mass, bci=[], bcv=[])
J = fe_jacobian(u0, fe, dfq_supg, bci=[], bcv=[])
P = spdiagm(n+1, n, ones(n)); P[end, 1] = 1 # Periodicity
A = -(P' * M * P) \ Matrix(P' * J * P) # -(P' * M * P)^{-1} P' * J * P
fe, P' * u0, A, P, exact
end
fe, u0, A, P, exact = supg_setup();
tfinal = 40
u = exp(A * tfinal) * u0 # linear ODE, exact sol: u(t) = exp(At) * u_0
plot(fe.xn, [exact.(fe.xn, tfinal) P*u], label=["exact" "SUPG"], legend=:topleft)
In the code above, we have constructed the semi-discrete form
using instead the stabilized SUPG semi-discrete system in matrix form
where I have now switched from \(K\) to the letter \(J\), to match the J in the code.
We can then isolate the time derivative:
and identify the linearized matrix \(A = - M_{\textrm{SUPG}}^{-1} J_{\textrm{SUPG}}\) needed for the ODE solution.
Furthermore, in this example we have introduced periodicity in the system by letting:
and
Then,
with \(u_{n+1}\equiv u_1\).
Spectrum of the operator#
lam = eigvals(A)
scatter(real.(lam), imag.(lam), legend=:none)
Activity#
What do we learn from the spectrum?
We haven’t really talked about stability of ODEs in this class - you can review this COMP 526 Lecture
What would the unstabilized spectrum look like?
A challenging problem#
function testfunc(x)
max(1 - 4*abs.(x+2/3),
abs.(x) .< .2,
(2*abs.(x-2/3) .< .5) * cospi(2*(x-2/3)).^2
)
end
fe, _, A, P = supg_setup();
u0 = P' * testfunc.(fe.xn)
tfinal = 0.1
u = exp(A * tfinal) * u0
plot(fe.xn, P*u, label="SUPG", legend=:topleft)
plot!(fe.xn, P*u0, label= "initial")
Let’s see how this compares to a non-stabilized version:
n = 100; h = 2 / n; Pe = abs(wind) * h / (2 * k)
tau = h / (2 * abs(wind)) * (coth(Pe) - 1 / Pe) * 0 # kill stabilization
fe, u0, A, P = supg_setup();
u0 = P' * testfunc.(fe.xn)
tfinal = 0.1
u = exp(A * tfinal) * u0
plot(fe.xn, P*u, label="Galerkin", legend=:topleft)
plot!(fe.xn, P*u0, label= "initial")
9. Finite element interfaces: deal.ii#
Deal.II step-7 tutorial
for e in elems:
fe_values.reinit()
for q in q_points:
for i in test_functions:
for j in trial_functions:
K_e[i,j] += ...
f_e[i] += ...
for f in e.faces:
if f.at_boundary():
fe_face_values.reinit()
for q in q_points:
...