or

    ∫_{Ω_{i,j}} δ(φ) ( −S(φ_0)(|∇φ| − 1) + λ δ(φ) |∇φ| ) dx = 0,        (7.10)

using equation (7.8) and the fact that H′(φ) = δ(φ) (see equation (1.18)). A separate λ_{i,j} is defined in each cell using equation (7.10) to obtain

    λ_{i,j} = − ( ∫_{Ω_{i,j}} δ(φ) (−S(φ_0)(|∇φ| − 1)) dx ) / ( ∫_{Ω_{i,j}} δ^2(φ) |∇φ| dx ),        (7.11)

or

    λ_{i,j} = − ( ∫_{Ω_{i,j}} δ(φ) ((φ^{n+1} − φ^n)/Δt) dx ) / ( ∫_{Ω_{i,j}} δ^2(φ) |∇φ| dx ),        (7.12)

where equation (7.4) is used to compute φ^{n+1} from φ^n. In summary, first equation (7.4) is used to update φ^n in time using, for example, an RK method. Then equation (7.12) is used to compute a λ_{i,j} for each grid cell. Sussman and Fatemi [ ] used a nine-point quadrature formula to evaluate the integrals in two spatial dimensions. Finally, the initial guess for φ^{n+1} obtained from equation (7.4) is replaced with a corrected φ^{n+1} + Δt λ δ(φ)|∇φ|. It is shown in [ ] that this specific discretization exactly cancels out a first-order error term in the previous formulation.

7.5 The Fast Marching Method

In the crossing-time interpretation, each grid point is updated as the zero isocontour crosses over it. Here we discuss a discrete algorithm that mimics this approach by marching out from the interface, calculating the signed distance function at each grid point. Suppose that all the grid points adjacent to the interface are initialized with the exact values of signed distance. We will discuss methods for initializing this band of cells later. Starting from this initial band, we wish to march outward, updating each grid point with the appropriate value of signed distance. Here we describe the algorithm for marching in the normal direction to construct the positive distance function, noting that the method for marching in the direction opposite the normal to construct the negative distance function is applied in the same manner. In fact, if the values in the initial band are multiplied by −1, the positive distance function construction can be used to find positive distance values in Ω^- that can then be multiplied by −1 to obtain appropriate negative distance values in this region.

In order to march out from the initial band, constructing the distance function as we go, we need to decide which grid point to update first. This should be the one that the zero isocontour would cross first in the crossing-time method, i.e., the grid point with the smallest crossing time or smallest value of distance. So, for each grid point adjacent to the band, we calculate a tentative value for the distance function. This is done using only the values
of φ that have already been accepted into the band; i.e., tentative values do not use other tentative values in this calculation. Then we choose the point with the smallest tentative value to add to the band of accepted grid points. Since the signed distance function is created with characteristics that flow out of the interface in the normal direction, this chosen point does not depend on any of the other tentative grid points that will have larger values of distance. Thus, the tentative value of distance assigned to this grid point is an acceptable approximation of the signed distance function.

Now that the band of accepted values has been increased by one, we repeat the process. Most of the grid points in the tentative band already have good tentative approximations to the distance function. Only those adjacent to the newly added point need modification. Adjacent tentative grid points need their tentative values updated using the new information gained by adding the chosen point to the band. Any other adjacent grid point that did not yet have a tentative value needs to have a tentative value assigned to it using the values in the band of accepted points. Then we choose the smallest tentative value, add it to the band, and repeat the algorithm. Eventually, every grid point in Ω^+ gets added to the band, completing the process. As noted above, the grid points in Ω^- are updated with a similar process.

The slowest part of this algorithm is the search through all the tentative grid points to find the one with the smallest value. This search can be accelerated using a binary tree to store all the tentative values. The tree is organized so that each point has a tentative distance value smaller than the two points located below it in the tree. This means that the smallest tentative point is always conveniently located at the top of the tree. New points are added to the bottom of the tree, where we note that the method works better if the tree is kept balanced. If the newly added point has a smaller value of distance than the point directly above it, we exchange the location of these two points in the tree. Recursively, this process is repeated, and the newly added point moves up the tree until it either sits below a point with a smaller tentative distance value or it reaches the top of the tree. We add points to the bottom of the tree as opposed to the top, since newly added points tend to be farther from the interface with larger distance values than those already in the tree. This means that fewer comparisons are generally needed for a newly added point to find an appropriate location in the tree.

The algorithm proceeds as follows. Remove the point from the top of the tree and add it to the band. The vacated space in the tree is filled with the smaller of the two points that lie below it. Recursively, the holes opened up by points moving upward are filled with the smaller of the two points that lie below until the bottom of the tree is reached. Next, any tentative values adjacent to the added point are updated by changing their tentative values. These then need to be moved up or down the tree in order to preserve the ordering based on tentative distance values. In general, tentative values should only decrease, implying that the updated point may have to be moved up the tree. However, numerical error could occasionally cause a tentative distance value to increase (if only by round-off error), in which case the point may need to be moved down lower in the tree. Tentative distance values are calculated at each new adjacent grid point that was not already in the tree, and these points are added to the tree. The algorithm is O(N log N), where N is the total number of grid points.

This algorithm was invented by Tsitsiklis in a pair of papers, [ ] and [ ]. The most novel part of the algorithm is the extension of Dijkstra's method for computing the taxicab metric to an algorithm for computing Euclidean distance. In these papers, φ_{i,j,k} is chosen to be as small as possible by obtaining the correct solution in the sense of first arrival time. First, each quadrant is independently considered to find the characteristic direction θ = (θ_1, θ_2, θ_3), where each θ_s ≥ 0 and Σ_s θ_s = 1, that gives the smallest value for φ_{i,j,k}. Then the values from all the quadrants are compared, and the smallest of these is chosen as the tentative guess for φ_{i,j,k}. That is, the characteristic direction is found by first finding the best candidate in each quadrant and then comparing these (maximum of eight) candidates to find the best global candidate.

In [ ] and [ ], the minimum value of φ_{i,j,k} in a particular quadrant is found by minimizing

    φ_{i,j,k} = τ(θ) + θ_1 φ_1 + θ_2 φ_2 + θ_3 φ_3        (7.13)

over all directions θ, where

    τ(θ) = √( (θ_1 Δx_1)^2 + (θ_2 Δx_2)^2 + (θ_3 Δx_3)^2 )        (7.14)

is the distance traveled and Σ_s θ_s φ_s is the starting point in the particular quadrant, determined by φ_1 = φ_{i±1,j,k}, φ_2 = φ_{i,j±1,k}, and φ_3 = φ_{i,j,k±1}. If any of the arms of the stencil is not in the band of updated points, this arm is simply ignored. In the minimization formalism, this is accomplished by setting points outside the updated band to ∞ and using the convention that 0 · ∞ = 0. This sets the corresponding θ_s φ_s term in equation (7.13) to zero for any φ_s = ∞ not in the band of updated points simply by setting θ_s = 0, i.e., by ignoring that direction of the stencil.

A Lagrange multiplier λ is added to equation (7.13) to obtain

    φ_{i,j,k} = τ(θ) + θ_1 φ_1 + θ_2 φ_2 + θ_3 φ_3 + λ (1 − Σ_s θ_s),        (7.15)

where 1 − Σ_s θ_s = 0. For each θ_s, we take a partial derivative and set it to zero, obtaining

    θ_s (Δx_s)^2 / τ(θ) + φ_s − λ = 0        (7.16)

in standard fashion.
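The tentative-band bookkeeping above maps naturally onto a priority queue. Below is a minimal Python sketch of the marching loop, assuming a generic grid given by caller-supplied `neighbors` and `tentative_value` routines (both hypothetical names); Python's `heapq` plays the role of the binary tree, with stale entries discarded lazily instead of being moved within the tree.

```python
import heapq

def march(initial_band, tentative_value, neighbors):
    """initial_band: dict point -> exact distance near the interface.
    tentative_value(p, accepted): distance guess for p from accepted values only.
    neighbors(p): grid points adjacent to p."""
    accepted = dict(initial_band)   # points whose distance value is final
    heap = []                       # (tentative distance, point)
    for p in accepted:
        for q in neighbors(p):
            if q not in accepted:
                heapq.heappush(heap, (tentative_value(q, accepted), q))
    while heap:
        d, p = heapq.heappop(heap)  # smallest tentative value first
        if p in accepted:
            continue                # stale entry; a smaller value already won
        accepted[p] = d             # freeze: later points are farther out
        for q in neighbors(p):      # recompute only the affected guesses
            if q not in accepted:
                heapq.heappush(heap, (tentative_value(q, accepted), q))
    return accepted
```

Pushing a fresh entry whenever a tentative value improves, and skipping entries already accepted, reproduces the "move up the tree" behavior without an explicit decrease-key operation.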
Solving equation (7.16) for each φ_s and plugging the results into equation (7.13) yields (after some cancellation) φ_{i,j,k} = λ; i.e., λ is our minimum value. To find λ, we rewrite equation (7.16) as

    ( (λ − φ_s)/Δx_s )^2 = θ_s^2 (Δx_s)^2 / τ(θ)^2        (7.17)

and sum over all spatial dimensions to obtain

    ( (λ − φ_1)/Δx_1 )^2 + ( (λ − φ_2)/Δx_2 )^2 + ( (λ − φ_3)/Δx_3 )^2 = 1,        (7.18)

using equation (7.14) to reduce the right-hand side of equation (7.18) to 1. In summary, [ ] and [ ] compute the minimum value of φ_{i,j,k} in each quadrant by solving the quadratic equation

    ( (φ_{i,j,k} − φ_1)/Δx )^2 + ( (φ_{i,j,k} − φ_2)/Δy )^2 + ( (φ_{i,j,k} − φ_3)/Δz )^2 = 1.        (7.19)

Then the final value of φ_{i,j,k} is taken as the minimum over all the quadrants. Equation (7.19) is a first-order accurate approximation of |∇φ|^2 = 1, i.e., the square of |∇φ| = 1.

The final minimization over all the quadrants is straightforward. For example, with φ_2 and φ_3 fixed, the smaller value of φ_{i,j,k} is obtained as φ_1 = min(φ_{i−1,j,k}, φ_{i+1,j,k}), ruling out four quadrants. The same considerations apply to φ_2 = min(φ_{i,j−1,k}, φ_{i,j+1,k}) and φ_3 = min(φ_{i,j,k−1}, φ_{i,j,k+1}). Equation (7.19) is straightforward to solve using these definitions of φ_1, φ_2, and φ_3. This is equivalent to using either the forward difference or the backward difference to approximate each derivative of φ. If these definitions give φ_s = ∞, then neither the forward nor the backward difference is defined, since both the neighboring points in that spatial dimension are not in the accepted band. In this instance, we set θ_s = 0, which according to equation (7.17) is equivalent to dropping the troublesome term out of equation (7.19) by setting it to zero.

Each of φ_1, φ_2, and φ_3 can potentially be equal to ∞ if there are no neighboring accepted band points in the corresponding spatial dimension. If one of these quantities is equal to ∞, the corresponding term vanishes from equation (7.19) as we set the appropriate θ_s = 0. Since there is always at least one adjacent point in the accepted band, at most two of the three terms can vanish, giving

    ( (φ_{i,j,k} − φ_s)/Δx_s )^2 = 1,        (7.20)

which can be solved to obtain φ_{i,j,k} = φ_s ± Δx_s. The larger value, denoted by the "+" sign, is the one we use, since distance increases as the algorithm proceeds.

When there are two nonzero terms, equation (7.19) becomes

    ( (φ_{i,j,k} − φ_{s1})/Δx_{s1} )^2 + ( (φ_{i,j,k} − φ_{s2})/Δx_{s2} )^2 = 1,        (7.21)

where s1 and s2 represent different spatial dimensions. This quadratic equation can have zero, one, or two solutions. While this theoretically should not happen, it can be caused by poor initial data or numerical errors. Defining

    P(φ_{i,j,k}) = ( (φ_{i,j,k} − φ_{s1})/Δx_{s1} )^2 + ( (φ_{i,j,k} − φ_{s2})/Δx_{s2} )^2        (7.22)

allows us to write P(max{φ_{s1}, φ_{s2}}) ≤ 1 as a necessary and sufficient condition to find an adequate solution φ_{i,j,k} of equation (7.21). If P(max{φ_{s1}, φ_{s2}}) > 1, then φ_{i,j,k} < max{φ_{s1}, φ_{s2}} if a solution φ_{i,j,k} exists. This implies that something is wrong (probably due to poor initial data or numerical error), since larger values of φ should not be contributing to smaller values. In order to obtain the best solution under the circumstances, we discard the term with the larger φ_s and proceed with equation (7.20). Otherwise, when P(max{φ_{s1}, φ_{s2}}) ≤ 1, equation (7.21) has two solutions, and we use the larger solution, corresponding to the "+" sign in the quadratic formula. Similarly, when all three terms are present, we define

    P(φ_{i,j,k}) = ( (φ_{i,j,k} − φ_1)/Δx )^2 + ( (φ_{i,j,k} − φ_2)/Δy )^2 + ( (φ_{i,j,k} − φ_3)/Δz )^2        (7.23)

and take the larger solution, corresponding to the "+" sign, when P(max{φ_s}) ≤ 1. Otherwise, when P(max{φ_s}) > 1, we omit the term with the largest φ_s and proceed with equation (7.22).

Consider the initialization of the grid points in the band about the interface. The easiest approach is to consider each of the coordinate directions separately, where linear interpolation can be used to locate the interface crossing and determine a candidate value of φ. Then φ is initialized using the candidate with the smallest magnitude. Both this initialization routine and the marching algorithm itself are first-order accurate. For this reason, the reinitialization equation is often a better choice, since it is highly accurate in comparison. On the other hand, reinitialization is significantly more expensive and does not work well when φ is not initially close to signed distance. Thus, in many situations this optimal O(N log N) algorithm is preferable.

Although the method described above was originally proposed by Tsitsiklis in [ ] and [ ], it was later rediscovered by the level set community; see, for example, Sethian [ ] and Helmsen, Puckett, Colella, and Dorr [ ], where it is popularly referred to as the fast marching method (FMM). In [ ], Sethian pointed out that higher-order accuracy could be achieved by replacing the first-order accurate forward and backward differences used by [ ] and [ ] in equation (7.19) with second-order accurate forward and backward differences whenever there are enough points in the updated band to evaluate these higher-order accurate differences.
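As a concrete illustration, the one-, two-, and three-term updates of equations (7.19)-(7.23) can be folded into a single routine. This is a hedged sketch rather than the implementation from the papers cited above: neighbor values equal to ∞ stand for arms outside the accepted band, and the P-test decides when the largest contributing neighbor must be dropped.

```python
import math

INF = float("inf")

def quadrant_update(phis, dxs):
    # Arms outside the accepted band carry phi_s = INF (theta_s = 0 there).
    terms = sorted((p, dx) for p, dx in zip(phis, dxs) if p < INF)
    while len(terms) > 1:
        pmax = terms[-1][0]
        # P(max phi_s) > 1: the largest neighbor cannot contribute; drop it.
        if sum(((pmax - p) / dx) ** 2 for p, dx in terms) > 1.0:
            terms.pop()
            continue
        # Solve sum_s ((phi - phi_s)/dx_s)^2 = 1 as a quadratic in phi.
        a = sum(1.0 / dx ** 2 for _, dx in terms)
        b = sum(-2.0 * p / dx ** 2 for p, dx in terms)
        c = sum((p / dx) ** 2 for p, dx in terms) - 1.0
        return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # "+" root
    p, dx = terms[0]        # one remaining term: equation (7.20)
    return p + dx           # larger root, phi = phi_s + dx_s
```

Because the P-test is checked before the quadratic is solved, the discriminant is nonnegative whenever the square root is taken, so the larger root always exists when it is used.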
The second-order accurate versions of equations (1.3) and (1.4) are

    ∂φ/∂x ≈ (φ_{i+1} − φ_i)/Δx − (Δx/2) (φ_{i+2} − 2φ_{i+1} + φ_i)/(Δx)^2 = (−φ_{i+2} + 4φ_{i+1} − 3φ_i)/(2Δx)        (7.24)

and

    ∂φ/∂x ≈ (φ_i − φ_{i−1})/Δx + (Δx/2) (φ_i − 2φ_{i−1} + φ_{i−2})/(Δx)^2 = (3φ_i − 4φ_{i−1} + φ_{i−2})/(2Δx),        (7.25)

respectively. This lowers the local truncation error whenever more accepted band points are available. As pointed out in [ ], higher-order accurate (higher than second-order) forward and backward differences could be used as well. One difficulty with obtaining higher-order accurate solutions is that the initial band adjacent to the interface is constructed with first-order accurate linear interpolation. In [ ], Chopp proposed using higher-order accurate interpolants to initialize the points adjacent to the interface. This leads to a set of equations that are not trivial to solve, and [ ] used a variant of Newton iteration to find an approximate solution. When the iteration method (occasionally) fails, [ ] uses the lower-order accurate linear interpolation to initialize the problematic points. Overall, the iteration scheme converges often enough to significantly improve upon the results obtained using the lower-order accurate linear interpolation everywhere.

8
Extrapolation in the Normal Direction

8.1 One-Way Extrapolation

In the last chapter we constructed signed distance functions by following
characteristics that flow outward from the interface. Similar techniques can be used to propagate information in the direction of these characteristics. For example,

    S_t + N · ∇S = 0        (8.1)

is a Hamilton-Jacobi equation (in S) that extrapolates S normal to the interface, i.e., so that S is constant on rays normal to the interface. Since H(∇S) = N · ∇S, we can solve this equation with the techniques presented in Chapter 5 using H_1 = n_1, H_2 = n_2, and H_3 = n_3.

While central differencing can be used to compute the normal, it is usually advantageous to use upwind differencing here, since this equation is propagating information along these characteristics. At a given point x_{i,j,k} where the level set function is φ_{i,j,k}, we determine φ_x by considering both φ_{i−1,j,k} and φ_{i+1,j,k}. If either of these values is smaller than φ_{i,j,k}, we use the minimum of these two values to compute a one-sided difference. On the other hand, if both of these values are larger than φ_{i,j,k}, we set φ_x = 0, noting that no S information should be flowing into this point along the x direction. After computing φ_y and φ_z in the same fashion, equation (1.2) can be used to define the normal direction.
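The neighbor selection just described is easy to state in code. The following is a small sketch (a hypothetical helper, one dimension only) of choosing the one-sided difference for φ_x; φ_y and φ_z follow the same pattern.

```python
def upwind_phi_x(phi, i, dx):
    # phi: list of level set values along a grid line; i: an interior index.
    left, right = phi[i - 1], phi[i + 1]
    if left >= phi[i] and right >= phi[i]:
        return 0.0                      # both neighbors larger: no S inflow
    if left < right:
        return (phi[i] - left) / dx     # one-sided difference, smaller side
    return (right - phi[i]) / dx
```

The zero case corresponds to a local minimum of φ along the x direction, where no characteristic carries S information into the point.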
Suppose that S is initially defined only in Ω^- and we wish to extend its values across the interface from Ω^- into Ω^+. Solving equation (8.1) in Ω^+
using the values of S in Ω^- as boundary conditions extrapolates S across the interface constant in the normal direction. This was done by Fedkiw, Aslam, Merriman, and Osher in [ ] to solve multiphase flow problems with a ghost fluid method. We can extrapolate S in the opposite direction from Ω^+ to Ω^- by solving

    S_t − N · ∇S = 0        (8.2)

in Ω^- using the values of S in Ω^+ as boundary conditions. Of course, the upwind normal should be computed using the larger neighboring values of φ instead of the smaller neighboring values.

8.2 Two-Way Extrapolation

Just as we combined equations (7.2) and (7.3) to obtain equation (7.4), equations (8.1) and (8.2) can be combined to obtain

    S_t + S(φ) N · ∇S = 0        (8.3)

to extrapolate S away from the interface. Here the upwind version of N is computed using smaller values of φ in Ω^+ and larger values of φ in Ω^-. Equation (8.3) can be applied to any value S that needs to be smoothed normal to the interface. For example, if this equation is solved independently for each component of the velocity field, i.e., S = u, S = v, and S = w, we obtain a velocity field that is constant normal to the interface. Velocity fields of this type have a tendency to preserve signed distance functions, as discussed by Zhao, Chan, Merriman, and Osher in [ ]. This velocity extrapolation is also useful when the velocity is known only near the interface. For example, in [ ], Chen, Merriman, Osher, and Smereka computed the velocity field for grid cells adjacent to the interface using local information from both sides of the interface. Then the velocity values in this band were held fixed, and equation (8.3) was used to extend each component of the velocity field outward from this initial band.

8.3 Fast Marching Method

As in the construction of signed distance functions, the fast marching method can be used to extrapolate S in an efficient manner. For example, consider a fast marching method alternative to equation (8.1). The normal is computed using the smaller neighboring values of φ (as above). The binary heap structure can be precomputed using all the points in Ω^+, as opposed to just using the points adjacent to the initialized band, since φ is already defined throughout Ω^+. Once the points are ordered, we choose the point with the smallest value of φ and compute an appropriate value for S. Then we find the next smallest value of φ, compute S, and continue in this fashion. The tentative values play no role, since we already know φ and thus the characteristic directions, i.e., assuming that φ is a signed distance function.

At each grid point, S is determined using the values of S at the neighboring points in a fashion dictated by the neighboring values of φ. Since S should be constant normal to the interface, we set N · ∇S = 0. Or equivalently, since N and ∇φ point in the same direction, we set ∇φ · ∇S = 0, where the derivatives in φ are computed in the fashion outlined above for computing normals. Then

    φ_x S_x + φ_y S_y + φ_z S_z = 0        (8.4)

is discretized using φ to determine the upwind direction. That is, we use S_x = D^- S_{i,j,k} when φ_x > 0 and S_x = D^+ S_{i,j,k} when φ_x < 0. When φ_x = 0, the neighboring values of φ are larger than φ_{i,j,k}, and no S information can be obtained from either of the neighboring nodes. In this case, we drop the S_x term from equation (8.4). The S_y and S_z terms are treated in a similar fashion. If φ is a signed distance function, this method works well. For more details on the fast marching alternative to equation (8.1), see Adalsteinsson and Sethian [ ].
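In one spatial dimension the procedure above reduces to copying S from the upwind (smaller-φ) neighbor in order of increasing φ. A hedged sketch, assuming φ already holds valid signed distance values and `known` (a hypothetical flag array) marks the initialized band:

```python
def extrapolate(phi, S, known):
    # Visit the unknown points in order of increasing phi, as in the heap
    # ordering described above (a sort suffices for this 1D sketch).
    order = sorted((phi[i], i) for i in range(len(phi)) if not known[i])
    for _, i in order:
        # Upwind neighbor: the adjacent point with smaller phi. Enforcing
        # phi_x * S_x = 0 then makes S constant along the outgoing ray.
        j = min((j for j in (i - 1, i + 1)
                 if 0 <= j < len(phi) and phi[j] < phi[i]),
                key=lambda j: phi[j])
        S[i] = S[j]
    return S
```

Because points are processed in increasing order of φ, the upwind neighbor is always either in the initialized band or already finalized, mirroring the "tentative values play no role" observation above.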
9
Particle Level Set Method

Figure 9.1. Initial square interface location and converging velocity field.
9.1 Eulerian Versus Lagrangian Representations

The great success of level set methods can in part be attributed to the role of curvature in regularizing the level set function such that the proper vanishing viscosity solution is obtained. It is much more difficult to obtain vanishing viscosity solutions with Lagrangian methods that faithfully follow the characteristics. For these methods one usually has to delete (or add) characteristic information "by hand" when a shock (or rarefaction) is detected. This ability of level set methods to identify and delete merging characteristics is clearly seen in a purely geometrically driven flow where a curve is advected normal to itself at constant speed, as shown in Figures 9.1 and 9.2. In the corners of the square, the flow field has merging characteristics that are appropriately deleted by the level set method. We demonstrate the difficulties associated with a Lagrangian calculation of this interface motion by initially seeding some marker particles interior to the interface, as shown in Figure 9.3, and passively advecting them with x_t = V(x, t), where the velocity field V(x, t) is determined from the level set solution. Figure 9.4 illustrates that a number of particles incorrectly escape from inside the level set solution curve in the corners of the square where the characteristic information (represented by the particles themselves) needs to be deleted so that the correct vanishing viscosity solution can be obtained.

Figure 9.2. Square interface location at a later time correctly computed by the level set method.

When using level set methods to model fluid flows, one is usually concerned with preserving mass (or volume for incompressible flow). Unfortunately, level set methods have a tendency to lose mass in underresolved regions of the flow. Attempts to improve mass (or volume) conservation in level set methods have led to a variety of reinitialization techniques, as discussed in Chapter 7. On the other hand, despite a lack of explicit enforcement of mass (or volume) conservation, Lagrangian schemes are quite successful in conserving mass, since they preserve material characteristics for all time; i.e., they are not regularized out of existence to obtain vanishing viscosity solutions. The difficulty stems from the fact that the level set method cannot accurately tell whether characteristics merge, separate, or
