or

    \int_{\Omega_{i,j}} \delta(\phi)\left(-S(\phi_0)(|\nabla\phi| - 1) + \lambda\,\delta(\phi)\,|\nabla\phi|\right)dx = 0,    (7.10)

using equation (7.8) and the fact that H'(φ) = δ(φ) (see equation (1.18)). A separate λ_{i,j} is defined in each cell using equation (7.10) to obtain

    \lambda_{i,j} = -\frac{\int_{\Omega_{i,j}} \delta(\phi)\left(-S(\phi_0)(|\nabla\phi| - 1)\right)dx}{\int_{\Omega_{i,j}} \delta^2(\phi)\,|\nabla\phi|\,dx},    (7.11)

or

    \lambda_{i,j} = -\frac{\int_{\Omega_{i,j}} \delta(\phi)\,\frac{\phi^{n+1} - \phi^n}{\Delta t}\,dx}{\int_{\Omega_{i,j}} \delta^2(\phi)\,|\nabla\phi|\,dx},    (7.12)

where equation (7.4) is used to compute φ^{n+1} from φ^n. In summary, first equation (7.4) is used to update φ^n in time using, for example, an RK method. Then equation (7.12) is used to compute a λ_{i,j} for each grid cell. Sussman and Fatemi [158] used a nine-point quadrature formula to evaluate the integrals in two spatial dimensions. Finally, the initial guess for φ^{n+1} obtained from equation (7.4) is replaced with a corrected φ^{n+1} + Δt λ δ(φ)|∇φ|. It is shown in [158] that this specific discretization exactly cancels out a first-order error term in the previous formulation.
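As a concrete illustration of this correction step, the following Python sketch applies equations (7.11) and (7.12) in a simplified form. It is not the discretization of [158]: the per-cell nine-point quadrature is replaced by a pointwise evaluation, and the smoothed delta function, the smearing width eps, and the name sussman_fatemi_correction are assumptions made for the example, with phi_old playing the role of φ^n and phi_new the provisional φ^{n+1} from equation (7.4).

    import numpy as np

    def sussman_fatemi_correction(phi_new, phi_old, dt, dx, eps=None):
        # Sketch of the constraint correction of equations (7.11)-(7.12): lambda
        # is chosen so that the smeared-out interface quantity is preserved during
        # reinitialization. Pointwise evaluation stands in for the per-cell
        # quadrature of [158]; names are illustrative, not from the text.
        if eps is None:
            eps = 1.5 * dx                       # smearing width for delta(phi)

        def delta(p):                            # smoothed delta function
            d = np.zeros_like(p)
            m = np.abs(p) < eps
            d[m] = 0.5 / eps * (1.0 + np.cos(np.pi * p[m] / eps))
            return d

        gx, gy = np.gradient(phi_new, dx)
        grad_mag = np.sqrt(gx**2 + gy**2) + 1e-12
        dphi_dt = (phi_new - phi_old) / dt       # numerator integrand of (7.12)

        num = delta(phi_new) * dphi_dt           # pointwise stand-ins for the
        den = delta(phi_new)**2 * grad_mag       # cell integrals in (7.12)
        lam = -num / (den + 1e-12)

        # corrected update: phi^{n+1} <- phi^{n+1} + dt * lambda * delta(phi) * |grad phi|
        return phi_new + dt * lam * delta(phi_new) * grad_mag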
7.5 The Fast Marching Method

In the crossing-times approach described earlier, each grid point is updated as the zero isocontour crosses over it. Here we discuss a discrete algorithm that mimics this approach by marching out from the interface, calculating the signed distance function at each grid point.

Suppose that all the grid points adjacent to the interface are initialized with the exact values of signed distance. We will discuss methods for initializing this band of cells later. Starting from this initial band, we wish to march outward, updating each grid point with the appropriate value of signed distance. Here we describe the algorithm for marching in the normal direction to construct the positive distance function, noting that the method for marching in the direction opposite the normal to construct the negative distance function is applied in the same manner. In fact, if the values in the initial band are multiplied by −1, the positive distance function construction can be used to find positive distance values in Ω⁻ that can then be multiplied by −1 to obtain appropriate negative distance values in this region.

In order to march out from the initial band, constructing the distance function as we go, we need to decide which grid point to update first. This should be the one that the zero isocontour would cross first in the crossing-time method, i.e., the grid point with the smallest crossing time or smallest value of distance. So, for each grid point adjacent to the band, we calculate a tentative value for the distance function.
This is done using only the values of φ that have already been accepted into the band; i.e., tentative values do not use other tentative values in this calculation. Then we choose the point with the smallest tentative value to add to the band of accepted grid points. Since the signed distance function is created with characteristics that flow out of the interface in the normal direction, this chosen point does not depend on any of the other tentative grid points that will have larger values of distance. Thus, the tentative value of distance assigned to this grid point is an acceptable approximation of the signed distance function.

Now that the band of accepted values has been increased by one, we repeat the process. Most of the grid points in the tentative band already have good tentative approximations to the distance function. Only those adjacent to the newly added point need modification. Adjacent tentative grid points need their tentative values updated using the new information gained by adding the chosen point to the band. Any other adjacent grid point that did not yet have a tentative value needs to have a tentative value assigned to it using the values in the band of accepted points. Then we choose the smallest tentative value, add it to the band, and repeat the algorithm. Eventually, every grid point in Ω⁺ gets added to the band, completing the process. As noted above, the grid points in Ω⁻ are updated with a similar process.
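The marching loop just described can be sketched as follows. This is a minimal illustration, not code from the text: the helper functions neighbors and tentative_distance are assumed to exist (the latter would solve the update equation (7.19) discussed below using accepted values only), and Python's heapq with lazy deletion of stale entries stands in for the binary tree discussed next.

    import heapq

    def fast_march(band, neighbors, tentative_distance):
        # 'band' maps accepted grid points to their distance values;
        # tentative_distance(p, band) computes a trial value for p using
        # accepted values only. Names are placeholders for illustration.
        heap = []                                   # priority queue of tentative values
        for p in band:
            for q in neighbors(p):
                if q not in band:
                    heapq.heappush(heap, (tentative_distance(q, band), q))

        while heap:
            d, p = heapq.heappop(heap)              # smallest tentative value
            if p in band:
                continue                            # stale entry; already accepted
            band[p] = d                             # accept it
            for q in neighbors(p):                  # recompute affected neighbors
                if q not in band:
                    heapq.heappush(heap, (tentative_distance(q, band), q))
        return band

Pushing a fresh entry for each recomputed neighbor and skipping stale entries on removal plays the role of moving points up or down the tree described below.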
The slowest part of this algorithm is the search through all the tentative grid points to find the one with the smallest value. This search can be accelerated using a binary tree to store all the tentative values. The tree is organized so that each point has a tentative distance value smaller than the two points located below it in the tree. This means that the smallest tentative point is always conveniently located at the top of the tree. New points are added to the bottom of the tree, where we note that the method works better if the tree is kept balanced. If the newly added point has a smaller value of distance than the point directly above it, we exchange the location of these two points in the tree. Recursively, this process is repeated, and the newly added point moves up the tree until it either sits below a point with a smaller tentative distance value or it reaches the top of the tree. We add points to the bottom of the tree as opposed to the top, since newly added points tend to be farther from the interface with larger distance values than those already in the tree. This means that fewer comparisons are generally needed for a newly added point to find an appropriate location in the tree.
The algorithm proceeds as follows. Remove the point from the top of the tree and add it to the band. The vacated space in the tree is filled with the smaller of the two points that lie below it. Recursively, the holes opened up by points moving upward are filled with the smaller of the two points that lie below, until the bottom of the tree is reached. Next, any tentative values adjacent to the added point are updated by changing their tentative values. These then need to be moved up or down the tree in order to preserve the ordering based on tentative distance values. In general, tentative values should only decrease, implying that the updated point may have to be moved up the tree. However, numerical error could occasionally cause a tentative distance value to increase (if only by round-off error), in which case the point may need to be moved down lower in the tree. Tentative distance values are calculated at each new adjacent grid point that was not already in the tree, and these points are added to the tree. The algorithm is O(N log N), where N is the total number of grid points.
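For completeness, here is a small array-based binary heap mirroring the operations described above. It is an illustrative sketch, not code from the text, and it uses the standard variant that moves the last entry into the vacated top slot and sifts it down rather than percolating the hole to the bottom; moving an updated point up or down the tree corresponds to the same sift operations.

    class MinHeap:
        # Minimal binary heap of (value, point) pairs, for illustration only.
        def __init__(self):
            self.a = []

        def push(self, item):                 # add at the bottom, then move up
            self.a.append(item)
            i = len(self.a) - 1
            while i > 0 and self.a[i] < self.a[(i - 1) // 2]:
                self.a[i], self.a[(i - 1) // 2] = self.a[(i - 1) // 2], self.a[i]
                i = (i - 1) // 2

        def pop(self):                        # remove the top; refill the hole
            top = self.a[0]
            last = self.a.pop()
            if self.a:
                self.a[0] = last
                i = 0
                while True:                   # swap with the smaller child below
                    l, r = 2 * i + 1, 2 * i + 2
                    s = i
                    if l < len(self.a) and self.a[l] < self.a[s]:
                        s = l
                    if r < len(self.a) and self.a[r] < self.a[s]:
                        s = r
                    if s == i:
                        break
                    self.a[i], self.a[s] = self.a[s], self.a[i]
                    i = s
            return top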
This algorithm was invented by Tsitsiklis in a pair of papers, [166] and [167]. The most novel part of the algorithm is the extension of Dijkstra's method for computing the taxicab metric to an algorithm for computing Euclidean distance. In these papers, φ_{i,j,k} is chosen to be as small as possible by obtaining the correct solution in the sense of first arrival time. First, each quadrant is independently considered to find the characteristic direction θ = (θ₁, θ₂, θ₃), where each θ_s ≥ 0 and Σ_s θ_s = 1, that gives the smallest value for φ_{i,j,k}. Then the values from all the quadrants are compared, and the smallest of these is chosen as the tentative guess for φ_{i,j,k}. That is, the characteristic direction is found by first finding the best candidate in each quadrant and then comparing these (maximum of eight) candidates to find the best global candidate.

In [166] and [167], the minimum value of φ_{i,j,k} in a particular quadrant is found by minimizing

    \phi_{i,j,k} = \tau(\theta) + \theta_1 \phi_1 + \theta_2 \phi_2 + \theta_3 \phi_3    (7.13)

over all directions θ, where

    \tau(\theta) = \sqrt{(\theta_1 \Delta x_1)^2 + (\theta_2 \Delta x_2)^2 + (\theta_3 \Delta x_3)^2}    (7.14)

is the distance traveled and Σ_s θ_s φ_s is the starting point in the particular quadrant. There are eight possible quadrants, with starting points determined by φ₁ = φ_{i±1,j,k}, φ₂ = φ_{i,j±1,k}, and φ₃ = φ_{i,j,k±1}. If any of the arms of the stencil is not in the band of updated points, this arm is simply ignored. In the minimization formalism, this is accomplished by setting points outside the updated band to ∞ and using the convention that 0 · ∞ = 0. This sets the corresponding θ_s φ_s term in equation (7.13) to zero for any φ_s = ∞ not in the band of updated points simply by setting θ_s = 0, i.e., by ignoring that direction of the stencil.

A Lagrange multiplier λ is added to equation (7.13) to obtain

    \phi_{i,j,k} = \tau(\theta) + \theta_1 \phi_1 + \theta_2 \phi_2 + \theta_3 \phi_3 + \lambda\left(1 - \sum_s \theta_s\right),    (7.15)

where 1 − Σ_s θ_s = 0. For each θ_s, we take a partial derivative and set it to zero, obtaining

    \frac{\theta_s (\Delta x_s)^2}{\tau(\theta)} + \phi_s - \lambda = 0    (7.16)

in standard fashion. Solving equation (7.16) for each φ_s and plugging the results into equation (7.13) yields (after some cancellation) φ_{i,j,k} = λ; i.e., λ is our minimum value. To find λ, we rewrite equation (7.16) as

    \left(\frac{\lambda - \phi_s}{\Delta x_s}\right)^2 = \frac{\theta_s^2 (\Delta x_s)^2}{\tau(\theta)^2}    (7.17)

and sum over all spatial dimensions to obtain

    \left(\frac{\lambda - \phi_1}{\Delta x_1}\right)^2 + \left(\frac{\lambda - \phi_2}{\Delta x_2}\right)^2 + \left(\frac{\lambda - \phi_3}{\Delta x_3}\right)^2 = 1,    (7.18)

using equation (7.14) to reduce the right-hand side of equation (7.18) to 1. In summary, [166] and [167] compute the minimum value of φ_{i,j,k} in each quadrant by solving the quadratic equation

    \left(\frac{\phi_{i,j,k} - \phi_1}{\Delta x}\right)^2 + \left(\frac{\phi_{i,j,k} - \phi_2}{\Delta y}\right)^2 + \left(\frac{\phi_{i,j,k} - \phi_3}{\Delta z}\right)^2 = 1.    (7.19)

Then the final value of φ_{i,j,k} is taken as the minimum over all the quadrants. Equation (7.19) is a first-order accurate approximation of |∇φ|² = 1, i.e., the square of |∇φ| = 1.

The final minimization over all the quadrants is straightforward. For example, with φ₂ and φ₃ fixed, the smaller value of φ_{i,j,k} is obtained as φ₁ = min(φ_{i−1,j,k}, φ_{i+1,j,k}), ruling out four quadrants. The same considerations apply to φ₂ = min(φ_{i,j−1,k}, φ_{i,j+1,k}) and φ₃ = min(φ_{i,j,k−1}, φ_{i,j,k+1}). Equation (7.19) is straightforward to solve using these definitions of φ₁, φ₂, and φ₃. This is equivalent to using either the forward difference or the backward difference to approximate each derivative of φ. If these definitions give φ_s = ∞, then neither the forward nor the backward difference is defined, since both the neighboring points in that spatial dimension are not in the accepted band. In this instance, we set θ_s = 0, which according to equation (7.17) is equivalent to dropping the troublesome term out of equation (7.19) by setting it to zero.

Each of φ₁, φ₂, and φ₃ can potentially be equal to ∞ if there are no neighboring accepted band points in the corresponding spatial dimension. If one of these quantities is equal to ∞, the corresponding term vanishes from equation (7.19) as we set the appropriate θ_s = 0. Since there is always at least one adjacent point in the accepted band, at most two of the three terms can vanish, giving

    \left(\frac{\phi_{i,j,k} - \phi_s}{\Delta x_s}\right)^2 = 1,    (7.20)

which can be solved to obtain φ_{i,j,k} = φ_s ± Δx_s. The larger value, denoted by the "+" sign, is the one we use, since distance increases as the algorithm proceeds. When there are two nonzero terms, equation (7.19) becomes

    \left(\frac{\phi_{i,j,k} - \phi_{s_1}}{\Delta x_{s_1}}\right)^2 + \left(\frac{\phi_{i,j,k} - \phi_{s_2}}{\Delta x_{s_2}}\right)^2 = 1,    (7.21)

where s₁ and s₂ represent different spatial dimensions. This quadratic equation can have zero, one, or two real solutions. While the case of no real solution theoretically should not happen, it can be caused by poor initial data or numerical errors. Defining

    P(\phi_{i,j,k}) = \left(\frac{\phi_{i,j,k} - \phi_{s_1}}{\Delta x_{s_1}}\right)^2 + \left(\frac{\phi_{i,j,k} - \phi_{s_2}}{\Delta x_{s_2}}\right)^2    (7.22)

allows us to write P(max{φ_{s₁}, φ_{s₂}}) ≤ 1 as a necessary and sufficient condition to find an adequate solution φ_{i,j,k} of equation (7.21). If P(max{φ_{s₁}, φ_{s₂}}) > 1, then φ_{i,j,k} < max{φ_{s₁}, φ_{s₂}} if a solution φ_{i,j,k} exists. This implies that something is wrong (probably due to poor initial data or numerical error), since larger values of φ should not be contributing to smaller values. In order to obtain the best solution under the circumstances, we discard the term with the larger φ_s and proceed with equation (7.20). Otherwise, when P(max{φ_{s₁}, φ_{s₂}}) ≤ 1, equation (7.21) has two solutions, and we use the larger one, corresponding to the "+" sign in the quadratic formula. Similarly, when all three terms are present, we define

    P(\phi_{i,j,k}) = \left(\frac{\phi_{i,j,k} - \phi_1}{\Delta x}\right)^2 + \left(\frac{\phi_{i,j,k} - \phi_2}{\Delta y}\right)^2 + \left(\frac{\phi_{i,j,k} - \phi_3}{\Delta z}\right)^2    (7.23)

and take the larger solution, corresponding to the "+" sign, when P(max{φ_s}) ≤ 1. Otherwise, when P(max{φ_s}) > 1, we omit the term with the largest φ_s and proceed with equation (7.22).
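The update described above, namely solve equation (7.19), check P(max{φ_s}) ≤ 1, and drop the largest term if necessary, can be sketched as follows; the function name and argument layout are illustrative only.

    import math

    def quadratic_update(phi_nbrs, dxs):
        # Solve equation (7.19) for phi_ijk given the accepted neighbor values
        # phi_nbrs = [phi1, phi2, phi3] (math.inf where no accepted neighbor
        # exists) and grid spacings dxs = [dx, dy, dz]. Illustrative sketch.
        terms = sorted((p, dx) for p, dx in zip(phi_nbrs, dxs) if p != math.inf)
        while terms:
            # quadratic in phi: sum over terms of ((phi - p)/dx)^2 = 1
            a = sum(1.0 / dx**2 for p, dx in terms)
            b = sum(-2.0 * p / dx**2 for p, dx in terms)
            c = sum((p / dx)**2 for p, dx in terms) - 1.0
            disc = b**2 - 4.0 * a * c
            p_max = terms[-1][0]
            if disc >= 0.0:
                phi = (-b + math.sqrt(disc)) / (2.0 * a)   # larger root ("+" sign)
                if phi >= p_max:                           # same as P(max{phi_s}) <= 1
                    return phi
            terms.pop()        # drop the term with the largest phi_s and retry
        raise ValueError("no accepted neighbors")

With a single remaining term this reduces to φ_{i,j,k} = φ_s + Δx_s, exactly the "+" root of equation (7.20).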
Consider the initialization of the grid points in the band about the interface. The easiest approach is to consider each of the coordinate directions independently. If φ changes sign in a coordinate direction, linear interpolation can be used to locate the interface crossing and determine a candidate value of φ. Then φ is initialized using the candidate with the smallest magnitude. Both this initialization routine and the marching algorithm itself are first-order accurate. For this reason, the reinitialization equation is often a better choice, since it is highly accurate in comparison. On the other hand, reinitialization is significantly more expensive and does not work well when φ is not initially close to signed distance. Thus, in many situations this optimal O(N log N) algorithm is preferable.
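A minimal sketch of this initialization, written in one spatial dimension for clarity, follows; the routine name and the handling of points away from the interface (left at infinity) are assumptions of the example.

    import math
    import numpy as np

    def initialize_band(phi, dx):
        # Wherever phi changes sign between neighbors, linear interpolation gives
        # the distance to the crossing; each band point keeps the candidate of
        # smallest magnitude, and the sign of phi is reattached at the end.
        d = np.full_like(phi, math.inf)
        for i in range(len(phi) - 1):
            if phi[i] * phi[i + 1] < 0.0:                 # sign change in this cell
                theta = phi[i] / (phi[i] - phi[i + 1])    # fractional crossing location
                d[i] = min(d[i], theta * dx)              # distance from node i
                d[i + 1] = min(d[i + 1], (1.0 - theta) * dx)
        return np.copysign(d, phi)

In several dimensions the same sweep is applied independently in each coordinate direction, keeping the candidate of smallest magnitude at each point.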
Although the method described above was originally proposed by Tsitsiklis in [166] and [167], it was later rediscovered by the level set community; see, for example, Sethian [148] and Helmsen, Puckett, Colella, and Dorr [85], where it is popularly referred to as the fast marching method (FMM). In [149], Sethian pointed out that higher-order accuracy could be achieved by replacing the first-order accurate forward and backward differences used by [166] and [167] in equation (7.19) with second-order accurate forward and backward differences whenever there are enough points in the updated band to evaluate these higher-order accurate differences.
The second-order accurate versions of equations (1.3) and (1.4) are

    \frac{\partial\phi}{\partial x} \approx \frac{\phi_{i+1} - \phi_i}{\Delta x} - \frac{\Delta x}{2}\,\frac{\phi_{i+2} - 2\phi_{i+1} + \phi_i}{(\Delta x)^2} = -\frac{\phi_{i+2} - 4\phi_{i+1} + 3\phi_i}{2\Delta x}    (7.24)

and

    \frac{\partial\phi}{\partial x} \approx \frac{\phi_i - \phi_{i-1}}{\Delta x} + \frac{\Delta x}{2}\,\frac{\phi_i - 2\phi_{i-1} + \phi_{i-2}}{(\Delta x)^2} = \frac{3\phi_i - 4\phi_{i-1} + \phi_{i-2}}{2\Delta x},    (7.25)

respectively. This lowers the local truncation error whenever more accepted band points are available. As pointed out in [149], higher-order accurate (higher than second-order) forward and backward differences could be used as well. One difficulty with obtaining higher-order accurate solutions is that the initial band adjacent to the interface is constructed with first-order accurate linear interpolation. In [49], Chopp proposed using higher-order accurate interpolants to initialize the points adjacent to the interface. This leads to a set of equations that are not trivial to solve, and [49] used a variant of Newton iteration to find an approximate solution. When the iteration method (occasionally) fails, [49] uses the lower-order accurate linear interpolation to initialize the problematic points. Overall, the iteration scheme converges often enough to significantly improve upon the results obtained using the lower-order accurate linear interpolation everywhere.
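The one-sided differences of equations (7.24) and (7.25) are easy to tabulate and test; the helper below is illustrative, and the quick accuracy check assumes a smooth test function.

    import numpy as np

    def one_sided_second_order(phi, i, dx, forward=True):
        # Second-order accurate one-sided differences of equations (7.24)/(7.25);
        # a small illustrative helper, not code from the text.
        if forward:
            return -(phi[i + 2] - 4.0 * phi[i + 1] + 3.0 * phi[i]) / (2.0 * dx)
        else:
            return (3.0 * phi[i] - 4.0 * phi[i - 1] + phi[i - 2]) / (2.0 * dx)

    # quick check of the truncation error on a smooth function
    x = np.linspace(0.0, 1.0, 101)
    phi = np.sin(x)
    err = abs(one_sided_second_order(phi, 50, x[1] - x[0]) - np.cos(x[50]))
    # err is O(dx^2), roughly 3e-5 here, versus roughly 2e-3 for the first-order
    # forward difference (phi[51] - phi[50]) / dx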
8
Extrapolation in the Normal Direction

8.1 One-Way Extrapolation

In the last chapter we constructed signed distance functions by following characteristics that flow outward from the interface. Similar techniques can be used to propagate information in the direction of these characteristics. For example,

    S_t + N \cdot \nabla S = 0    (8.1)

is a Hamilton-Jacobi equation (in S) that extrapolates S normal to the interface, i.e., so that S is constant on rays normal to the interface. Since H(∇S) = N · ∇S, we can solve this equation with the techniques presented in Chapter 5 using H₁ = n₁, H₂ = n₂, and H₃ = n₃.

While central differencing can be used to compute the normal, it is usually advantageous to use upwind differencing here, since this equation is propagating information along these characteristics. At a given point x_{i,j,k} where the level set function is φ_{i,j,k}, we determine φ_x by considering both φ_{i−1,j,k} and φ_{i+1,j,k}. If either of these values is smaller than φ_{i,j,k}, we use the minimum of these two values to compute a one-sided difference. On the other hand, if both of these values are larger than φ_{i,j,k}, we set φ_x = 0, noting that no S information should be flowing into this point along the x direction. After computing φ_y and φ_z in the same fashion, equation (1.2) can be used to define the normal direction.
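A sketch of this upwind normal at an interior grid point follows; the function name and the small regularization added before normalizing are assumptions of the example.

    import numpy as np

    def upwind_normal(phi, i, j, k, dx):
        # In each direction use the smaller neighbor of phi for a one-sided
        # difference (information flows from smaller to larger phi), or zero if
        # both neighbors are larger; then normalize as in equation (1.2).
        grad = np.zeros(3)
        for axis, (minus, plus) in enumerate([((i - 1, j, k), (i + 1, j, k)),
                                              ((i, j - 1, k), (i, j + 1, k)),
                                              ((i, j, k - 1), (i, j, k + 1))]):
            lo, hi = phi[minus], phi[plus]
            if min(lo, hi) < phi[i, j, k]:
                if lo < hi:      # smaller neighbor is behind: backward difference
                    grad[axis] = (phi[i, j, k] - lo) / dx
                else:            # smaller neighbor is ahead: forward difference
                    grad[axis] = (hi - phi[i, j, k]) / dx
            # else both neighbors are larger: leave this component zero
        mag = np.linalg.norm(grad)
        return grad / (mag + 1e-12)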
Suppose that S is initially defined only in Ω⁻ and we wish to extend its values across the interface from Ω⁻ into Ω⁺.
Solving equation (8.1) in Ω⁺ using the values of S in Ω⁻ as boundary conditions extrapolates S across the interface constant in the normal direction. This was done by Fedkiw, Aslam, Merriman, and Osher in [63] to solve multiphase flow problems with a ghost fluid method. We can extrapolate S in the opposite direction from Ω⁺ to Ω⁻ by solving

    S_t - N \cdot \nabla S = 0    (8.2)

in Ω⁻ using the values of S in Ω⁺ as boundary conditions. Of course, the upwind normal should be computed using the larger neighboring values of φ instead of the smaller neighboring values.
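Equation (8.1) can be marched to steady state in pseudo-time with simple upwind differences. The sketch below assumes a two-dimensional grid, a precomputed array of upwind unit normals (for instance from a routine like upwind_normal above, restricted to two dimensions), and first-order forward Euler pseudo-time stepping; none of the names come from the text.

    import numpy as np

    def extrapolate_outward(S, phi, normal, dx, n_steps=100, cfl=0.5):
        # Iterate S_t + N . grad(S) = 0 in Omega+ (phi > 0) while holding the
        # Omega- values fixed as boundary conditions. 'normal' has shape (nx, ny, 2).
        dt = cfl * dx
        S = S.copy()
        for _ in range(n_steps):
            Snew = S.copy()
            for i in range(1, S.shape[0] - 1):
                for j in range(1, S.shape[1] - 1):
                    if phi[i, j] <= 0.0:
                        continue                  # Omega-: boundary data, keep fixed
                    n1, n2 = normal[i, j]
                    # upwind differences: take data from the side the normal points from
                    Sx = (S[i, j] - S[i - 1, j]) / dx if n1 > 0 else (S[i + 1, j] - S[i, j]) / dx
                    Sy = (S[i, j] - S[i, j - 1]) / dx if n2 > 0 else (S[i, j + 1] - S[i, j]) / dx
                    Snew[i, j] = S[i, j] - dt * (n1 * Sx + n2 * Sy)
            S = Snew
        return S

The same sweep with the normal negated, and with the roles of Ω⁺ and Ω⁻ exchanged, gives the reverse extrapolation of equation (8.2).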
8.2 Two-Way Extrapolation

Just as we combined equations (7.2) and (7.3) to obtain equation (7.4), equations (8.1) and (8.2) can be combined to obtain

    S_t + S(\phi)\, N \cdot \nabla S = 0    (8.3)

to extrapolate S away from the interface. Here S(φ) denotes the sign function of φ (as in equation (7.4)), not the quantity being extrapolated, and the upwind version of N is computed using smaller values of φ in Ω⁺ and larger values of φ in Ω⁻. Equation (8.3) can be applied to any value S that needs to be smoothed normal to the interface. For example, if this equation is solved independently for each component of the velocity field, i.e., S = u, S = v, and S = w, we obtain a velocity field that is constant normal to the interface. Velocity fields of this type have a tendency to preserve signed distance functions, as discussed by Zhao, Chan, Merriman, and Osher in [175]. This velocity extrapolation is also useful when the velocity is known only near the interface. For example, in [43], Chen, Merriman, Osher, and Smereka computed the velocity field for grid cells adjacent to the interface using local information from both sides of the interface. Then the velocity values in this band were held fixed, and equation (8.3) was used to extend each component of the velocity field outward from this initial band.
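The two-way version amounts to folding the sign of φ into the characteristic direction of the sweep above, while holding fixed the band where S (for example, a velocity component) is already known; the sketch below is a variation of extrapolate_outward under the same assumptions, with all names illustrative.

    import numpy as np

    def extrapolate_two_way(S, phi, normal, dx, frozen, n_steps=100, cfl=0.5):
        # Iterate equation (8.3): characteristic velocity sign(phi) * N, with the
        # points flagged in 'frozen' (the band of known values) held fixed.
        dt = cfl * dx
        S = S.copy()
        for _ in range(n_steps):
            Snew = S.copy()
            for i in range(1, S.shape[0] - 1):
                for j in range(1, S.shape[1] - 1):
                    if frozen[i, j]:
                        continue
                    v1, v2 = np.sign(phi[i, j]) * normal[i, j]
                    Sx = (S[i, j] - S[i - 1, j]) / dx if v1 > 0 else (S[i + 1, j] - S[i, j]) / dx
                    Sy = (S[i, j] - S[i, j - 1]) / dx if v2 > 0 else (S[i, j + 1] - S[i, j]) / dx
                    Snew[i, j] = S[i, j] - dt * (v1 * Sx + v2 * Sy)
            S = Snew
        return S

    # applied componentwise, e.g.: for comp in (u, v, w): extrapolate_two_way(comp, ...)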



8.3 Fast Marching Method

As in the construction of signed distance functions, the fast marching method can be used to extrapolate S in an efficient manner. For example, consider a fast marching method alternative to equation (8.1). The normal is computed using the smaller neighboring values of φ (as above). The binary heap structure can be precomputed using all the points in Ω⁺, as opposed to just using the points adjacent to the initialized band, since φ is already defined throughout Ω⁺. Once the points are ordered, we choose the point with the smallest value of φ and compute an appropriate value for S. Then we find the next smallest value of φ, compute S, and continue in this fashion. The tentative values play no role, since we already know φ and thus the characteristic directions, i.e., assuming that φ is a signed distance function.

At each grid point, S is determined using the values of S at the neighboring points in a fashion dictated by the neighboring values of φ. Since S should be constant normal to the interface, we set N · ∇S = 0. Or equivalently, since N and ∇φ point in the same direction, we set ∇φ · ∇S = 0, where the derivatives in ∇φ are computed in the fashion outlined above for computing normals. Then

    \phi_x S_x + \phi_y S_y + \phi_z S_z = 0    (8.4)

is discretized using φ to determine the upwind direction. That is, we use S_x = D⁻S_{i,j,k} when φ_x > 0 and S_x = D⁺S_{i,j,k} when φ_x < 0. When φ_x = 0, the neighboring values of φ are larger than φ_{i,j,k}, and no S information can be obtained from either of the neighboring nodes. In this case, we drop the S_x term from equation (8.4). The S_y and S_z terms are treated in a similar fashion. If φ is a signed distance function, this method works well. For more details on the fast marching alternative to equation (8.1), see Adalsteinsson and Sethian [1].
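Solving equation (8.4) with these upwind differences gives S_{i,j,k} explicitly as a weighted average of its upwind neighbors, with weights |φ_x|/Δx, |φ_y|/Δy, and |φ_z|/Δz. A sketch of that pointwise solve, with illustrative names, is:

    def extrapolate_value(S_upwind, phi_derivs, dxs):
        # S_upwind[s] is the neighbor value used in direction s (None when that
        # term is dropped), phi_derivs[s] the corresponding one-sided derivative
        # of phi, and dxs[s] the grid spacing. Not code from the text.
        num, den = 0.0, 0.0
        for S_nb, dphi, dx in zip(S_upwind, phi_derivs, dxs):
            if S_nb is None or dphi == 0.0:
                continue                      # term dropped from equation (8.4)
            w = abs(dphi) / dx
            num += w * S_nb
            den += w
        return num / den if den > 0.0 else 0.0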
9
Particle Level Set Method

Figure 9.1. Initial square interface location and converging velocity field.

9.1 Eulerian Versus Lagrangian Representations

The great success of level set methods can in part be attributed to the role of curvature in regularizing the level set function such that the proper vanishing viscosity solution is obtained. It is much more difficult to obtain vanishing viscosity solutions with Lagrangian methods that faithfully follow the characteristics. For these methods one usually has to delete (or add) characteristic information "by hand" when a shock (or rarefaction) is detected. This ability of level set methods to identify and delete merging characteristics is clearly seen in a purely geometrically driven flow where a curve is advected normal to itself at constant speed, as shown in Figures 9.1 and 9.2. In the corners of the square, the flow field has merging characteristics that are appropriately deleted by the level set method. We demonstrate the difficulties associated with a Lagrangian calculation of this interface motion by initially seeding some marker particles interior to the interface, as shown in Figure 9.3, and passively advecting them with x_t = V(x, t), where the velocity field V(x, t) is determined from the level set solution. Figure 9.4 illustrates that a number of particles incorrectly escape from inside the level set solution curve in the corners of the square where the characteristic information (represented by the particles themselves) needs to be deleted so that the correct vanishing viscosity solution can be obtained.

Figure 9.2. Square interface location at a later time correctly computed by the level set method.
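The passive particle advection used for this comparison, x_t = V(x, t), can be sketched as follows; the bilinear interpolation of a grid velocity field and the second-order Runge-Kutta step are assumptions of the example rather than the scheme used for the figures, and all names are illustrative.

    import numpy as np

    def advect_particles(particles, velocity, dx, dt):
        # particles: array of shape (N, 2); velocity: grid field of shape (nx, ny, 2)
        # with grid origin at (0, 0) and uniform spacing dx.
        def interp(v, x):
            i = np.clip((x[:, 0] / dx).astype(int), 0, v.shape[0] - 2)
            j = np.clip((x[:, 1] / dx).astype(int), 0, v.shape[1] - 2)
            fx = x[:, 0] / dx - i
            fy = x[:, 1] / dx - j
            return ((1 - fx) * (1 - fy))[:, None] * v[i, j] + \
                   (fx * (1 - fy))[:, None] * v[i + 1, j] + \
                   ((1 - fx) * fy)[:, None] * v[i, j + 1] + \
                   (fx * fy)[:, None] * v[i + 1, j + 1]

        mid = particles + 0.5 * dt * interp(velocity, particles)   # RK2 midpoint
        return particles + dt * interp(velocity, mid)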
When using level set methods to model fluid flows, one is usually concerned with preserving mass (or volume for incompressible flow).
Unfortunately, level set methods have a tendency to lose mass in underresolved regions of the flow. Attempts to improve mass (or volume) conservation in level set methods have led to a variety of reinitialization techniques, as discussed in Chapter 7. On the other hand, despite a lack of explicit enforcement of mass (or volume) conservation, Lagrangian schemes are quite successful in conserving mass, since they preserve material characteristics for all time; i.e., they are not regularized out of existence to obtain vanishing viscosity solutions. The difficulty stems from the fact that the level set method cannot accurately tell whether characteristics merge, separate, or