φ is constant for all time. Figure 6.1 shows the evolution of a star-shaped interface as it moves normal to itself in the outward direction.

When φ is a signed distance function, equation (6.1) reduces to φ_t = −a, and the values of φ either increase or decrease, depending on the sign of a. Forward Euler time discretization of this equation gives φ^{n+1} = φ^n − aΔt. When a > 0, the φ = 0 isocontour becomes the φ = −aΔt isocontour after one time step. Similarly, the φ = aΔt isocontour becomes the φ = 0 isocontour. That is, the φ = 0 isocontour moves aΔt units forward in the normal direction to the old position of the φ = aΔt isocontour. The interface is moving in the normal direction with speed a. Taking the gradient of this forward Euler time stepping gives ∇φ^{n+1} = ∇φ^n − ∇(aΔt). Since aΔt is spatially constant, ∇(aΔt) = 0, implying that ∇φ^{n+1} = ∇φ^n.

Consider a point located φ_o units from the interface. In t units of time the interface will approach at spatial units closer, changing the value of φ at this point to φ_o − at, which is exactly the forward Euler time update of this point. The exact interface crossing time can be identified for all points by solving φ_o − at = 0 to get t = φ_o/a. (Similar arguments hold when a < 0, except that the interface moves in the opposite direction.)

Here we see the power of signed distance functions. When φ_o is a signed distance function, we can write down the exact solution of equation (6.1) as φ(t) = φ_o − at. On the other hand, when φ_o is not a signed distance function, equation (6.1) needs to be solved numerically by treating it as a Hamilton-Jacobi equation, as discussed in the last chapter.
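The forward Euler update above is easy to check numerically. The following sketch (the speed, time step, and initial value are illustrative choices, not from the text) verifies that the update reproduces the exact solution φ(t) = φ_o − at and the crossing time t = φ_o/a:

```python
# Forward Euler for phi_t = -a at a single grid point where phi is signed
# distance; a, dt, and phi0 are illustrative values, not from the text.
a, dt, phi0 = 2.0, 0.1, 1.0

phi = phi0
for n in range(5):
    phi -= a * dt                 # phi^{n+1} = phi^n - a*dt

exact = phi0 - a * (5 * dt)       # exact solution phi(t) = phi0 - a*t
assert abs(phi - exact) < 1e-12

crossing_time = phi0 / a          # interface reaches this point at t = phi0/a
```

With these values the point is crossed at t = 0.5, exactly when the numerically updated φ reaches zero.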

6.2 Numerical Discretization

For instructive purposes, suppose we plug V = aN into equation (4.2) and try a simple upwind differencing approach. That is, we will attempt to discretize

  φ_t + ( aφ_x/|∇φ| , aφ_y/|∇φ| , aφ_z/|∇φ| ) · ∇φ = 0   (6.2)

with simple upwinding. Consider the first spatial term aφ_x|∇φ|^{-1} φ_x, where aφ_x|∇φ|^{-1} is the "velocity" in the x-direction. Since upwinding is based only on the sign of the velocity, we can ignore the positive |∇φ| denominator, assuming temporarily that it is nonzero. Then the sign of aφ_x can be used to decide whether φ_x^- or φ_x^+ should be used to approximate φ_x. When φ_x^- and φ_x^+ have the same sign, it does not matter which of these is plugged into aφ_x, since only the sign of this term determines whether we use φ_x^- or φ_x^+. For example, suppose a > 0. Then when φ_x^- > 0 and φ_x^+ > 0, aφ_x > 0 and φ_x^- should be used in equation (6.2) everywhere φ_x appears, including the velocity term. On the other hand, when φ_x^- < 0 and φ_x^+ < 0, aφ_x < 0 and φ_x^+ should be used to approximate φ_x.

This simple upwinding approach works well as long as φ_x^- and φ_x^+ have the same sign, but consider what happens when they have different signs. For example, when φ_x^- < 0 and φ_x^+ > 0, aφ_x^- < 0 (still assuming a > 0), indicating that φ_x^+ should be used, while aφ_x^+ > 0, indicating that φ_x^- should be used. This situation corresponds to a "V"-shaped region where each side of the "V" should move outward. The difficulty in approximating φ_x arises because we are in the vicinity of a sonic point, where φ_x = 0. The interval defined by φ_x^- and φ_x^+ includes this sonic point, since φ_x^- and φ_x^+ differ in sign. We have to take care to ensure that the expansion takes place properly. A similar problem occurs when φ_x^- > 0 and φ_x^+ < 0. Here aφ_x^- > 0, indicating that φ_x^- should be used, while aφ_x^+ < 0, indicating that φ_x^+ should be used. This upside-down "V" is shaped like a carrot (or hat) and represents the coalescing of information similar to a shock wave. Once again caution is needed to ensure that the correct solution is obtained.

Simple upwinding breaks down when φ_x^- and φ_x^+ differ in sign. Let us examine how the Roe-Fix method works in this case. In order to do this, we need to consider the Hamilton-Jacobi form of the equation, i.e., equation (6.1). Here H_1 = aφ_x|∇φ|^{-1}, implying that the simple velocity V = aN we used in equation (6.2) was the correct expression to look at for upwinding. The sign of H_1 is independent of the y and z directions, depending only on aφ_x. If both φ_x^- and φ_x^+ have the same sign, we choose one or the other depending on the sign of H_1 as in the usual upwinding. However, unlike simple upwinding, Roe-Fix gives a consistent method for treating the case where φ_x^- and φ_x^+ differ in sign. In that instance we are in the vicinity of a sonic point and switch to the LLF method, adding some numerical dissipation to the scheme in order to obtain the correct vanishing viscosity solution. The RF scheme treats the ambiguity associated with upwinding near sonic points by using central differencing plus some artificial viscosity.

Recall that numerical dissipation can smear out fine solution details on coarse grids. In order to avoid as much numerical smearing as possible, we have proposed five different versions (LF, SLF, LLF, SLLF, and LLLF) of the central differencing plus artificial viscosity approach to solving Hamilton-Jacobi problems. While the RF method is a better alternative, it too resorts to artificial dissipation in the vicinity of sonic points where ambiguities occur. In order to avoid the addition of artificial dissipation, one must resort to the Godunov scheme.

Let us examine the Godunov method in detail. Again, assume a > 0. If φ_x^- and φ_x^+ are both positive, then ext_x minimizes H when φ_x^- < φ_x^+ and maximizes H when φ_x^- > φ_x^+. In either case, we choose φ_x^-, consistent with upwinding. Similarly, when φ_x^- and φ_x^+ are both negative, ext_x minimizes H when φ_x^- < φ_x^+ and maximizes H when φ_x^- > φ_x^+. Again, φ_x^+ is chosen in both instances, consistent with upwinding. Now consider the "V"-shaped case where φ_x^- < 0 and φ_x^+ > 0, indicating an expansion. Here φ_x^- < φ_x^+, so Godunov's method minimizes H, achieving this minimum by setting φ_x = 0. This implies that a region of expansion should have a locally flat φ with φ_x = 0. Instead of adding numerical dissipation to hide the problem, Godunov's method chooses the most meaningful solution. Next, consider the case where φ_x^- > 0 and φ_x^+ < 0, indicating coalescing characteristics. Here φ_x^- > φ_x^+, so Godunov's method maximizes H, achieving this maximum by setting φ_x equal to the larger in magnitude of φ_x^- and φ_x^+. In this shock case, information is coming from both directions, and the grid point feels the effects of the information that gets there first. The velocities are characterized by H_1 = aφ_x|∇φ|^{-1}, and the side with the fastest speed arrives first. This is determined by taking the larger in magnitude of φ_x^- and φ_x^+. Again, Godunov's method chooses the most meaningful solution, avoiding artificial dissipation.

Godunov's method for equation (6.1) can be summarized as follows for both positive and negative a. If aφ_x^- and aφ_x^+ are both positive, use φ_x = φ_x^-. If aφ_x^- and aφ_x^+ are both negative, use φ_x = φ_x^+. If aφ_x^- ≤ 0 and aφ_x^+ ≥ 0, treat the expansion by setting φ_x = 0. If aφ_x^- ≥ 0 and aφ_x^+ ≤ 0, treat the shock by setting φ_x to either φ_x^- or φ_x^+, depending on which gives the largest magnitude for aφ_x. Note that when φ_x^- = φ_x^+ = 0 both of the last two cases are activated, and both consistently give φ_x = 0. We also have the following elegant formula due to Rouy and Tourin []:

  φ_x² ≈ max( max(φ_x^-, 0)², min(φ_x^+, 0)² )   (6.3)

when a > 0, and

  φ_x² ≈ max( min(φ_x^-, 0)², max(φ_x^+, 0)² )   (6.4)

when a < 0.
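Equations (6.3) and (6.4) translate directly into a short routine. The sketch below (the function name is ours) returns the Godunov-upwinded value of φ_x² given the one-sided differences φ_x^- and φ_x^+:

```python
def godunov_phi_x_sq(phi_m, phi_p, a):
    """Godunov-upwinded phi_x^2 from the one-sided differences
    phi_m = phi_x^- and phi_p = phi_x^+, following (6.3) and (6.4)."""
    if a > 0:
        return max(max(phi_m, 0.0) ** 2, min(phi_p, 0.0) ** 2)  # eq. (6.3)
    return max(min(phi_m, 0.0) ** 2, max(phi_p, 0.0) ** 2)      # eq. (6.4)
```

All four cases of the summary above can be checked: for a > 0, both differences positive selects φ_x^-, both negative selects φ_x^+, the expansion case returns zero, and the shock case returns the larger magnitude.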

6.3 Adding a Curvature-Dependent Term

Most flames burn with a speed in the normal direction plus extra heating and cooling effects due to the curvature of the front. This velocity field can be modeled by setting V_n = a − bκ in the level set equation (4.4) to obtain

  φ_t + a|∇φ| = bκ|∇φ|,   (6.5)

which has both hyperbolic and parabolic terms. The hyperbolic a|∇φ| term can be discretized as outlined above using Hamilton-Jacobi techniques, while the parabolic bκ|∇φ| term can be independently discretized using central differencing as described in Chapter 4. Once both terms have been discretized, either forward Euler or RK time discretization can be used to advance the front forward in time. The combined CFL condition for equations that contain both hyperbolic Hamilton-Jacobi terms and parabolic terms is given by

  Δt ( |H_1|/Δx + |H_2|/Δy + |H_3|/Δz + 2b/(Δx)² + 2b/(Δy)² + 2b/(Δz)² ) < 1,   (6.6)

as one might have guessed from equation (4.12).
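In practice, condition (6.6) is used to choose the time step. A minimal sketch of such a helper (the function and its safety factor are our own illustrative choices, not from the text):

```python
def cfl_time_step(h1, h2, h3, b, dx, dy, dz, safety=0.9):
    """Largest time step satisfying condition (6.6), scaled by a safety
    factor; h1, h2, h3 bound |H_1|, |H_2|, |H_3| and b is the curvature
    coefficient. (Hypothetical helper, not from the text.)"""
    rate = (abs(h1) / dx + abs(h2) / dy + abs(h3) / dz
            + 2.0 * b / dx ** 2 + 2.0 * b / dy ** 2 + 2.0 * b / dz ** 2)
    return safety / rate
```

Note that the parabolic 2b/(Δx)² terms dominate as the grid is refined, so the curvature term, not the hyperbolic term, typically dictates the time step on fine grids.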
Figure 6.2. A star-shaped interface being advected by a rigid body rotation as it moves outward normal to itself.

6.4 Adding an External Velocity Field

Equation (6.5) models a flame front burning through a material at rest, but does not account for the velocity of the unburnt material. A more general equation is

  φ_t + V · ∇φ + a|∇φ| = bκ|∇φ|,   (6.7)

since it includes the velocity V of the unburnt material. This equation combines an external velocity field with motion normal to the interface and motion by mean curvature. It is the most general form of the G-equation for burning flame fronts; see Markstein []. As in equation (6.5), the parabolic term on the right-hand side can be independently discretized with central differencing. The hyperbolic Hamilton-Jacobi part of this equation consists of two terms, V · ∇φ and a|∇φ|. Figure 6.2 shows the evolution of a star-shaped interface under the influence of both an externally given rigid-body rotation (a V · ∇φ term) and a self-generated motion outward normal to the interface (an a|∇φ| term).

In order to discretize the Hamilton-Jacobi part of equation (6.7), we first identify the partial derivatives of H, i.e., H_1 = u + aφ_x|∇φ|^{-1} and H_2 = v + aφ_y|∇φ|^{-1}. The first term in H_1 represents motion of the interface as it is passively advected in the external velocity field, while the second term represents the self-generated velocity of the interface as it moves normal to itself. If u and aφ_x have the same sign for both φ_x^- and φ_x^+, then the background velocity and the self-generated velocity in the normal direction are both moving the front in the same direction, and the upwind direction is easy to determine. For example, if a, φ_x^-, and φ_x^+ are all positive, the second term in H_1 indicates that the self-generated normal velocity is moving the front to the right. Additionally, when u > 0 the external velocity is also moving the front to the right. In this case, both the RF scheme and the Godunov scheme set φ_x = φ_x^-.

It is more difficult to determine what is happening when u and aφ_x|∇φ|^{-1} disagree in sign. In this case the background velocity is moving the front in one direction while the self-generated normal velocity is moving the front in the opposite direction. In order to upwind, we must determine which of these two terms dominates. It helps if φ is a signed distance function, since we obtain the simplified H_1 = u + aφ_x. If H_1 is positive for both φ_x^- and φ_x^+, then both RF and Godunov set φ_x = φ_x^-. If H_1 is negative for both φ_x^- and φ_x^+, then both RF and Godunov set φ_x = φ_x^+. If H_1 is negative for φ_x^- and positive for φ_x^+, we have an expansion. If φ_x^- < φ_x^+, Godunov's method chooses the minimum value for H. This relative extremum occurs when H_1 = 0, implying that we set φ_x = −u/a. If φ_x^- > φ_x^+, Godunov's method chooses the maximum value for H, which is again obtained by setting φ_x = −u/a. If H_1 is positive for φ_x^- and negative for φ_x^+, we have a shock. Godunov's method dictates setting φ_x equal to the value of φ_x^- or φ_x^+ that gives the value of H_1 with the largest magnitude.
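When φ is approximately a signed distance function, the case analysis above reduces to a few sign comparisons on H_1 = u + aφ_x. A sketch of this logic (the helper function and its name are ours, not from the text):

```python
def godunov_upwind_phi_x(u, a, phi_m, phi_p):
    """Godunov choice of phi_x when phi is approximately signed distance,
    so that H1 = u + a*phi_x; phi_m and phi_p are the one-sided
    differences phi_x^- and phi_x^+. (Hypothetical helper.)"""
    h1_m = u + a * phi_m                 # H1 evaluated at phi_x^-
    h1_p = u + a * phi_p                 # H1 evaluated at phi_x^+
    if h1_m >= 0 and h1_p >= 0:          # both characteristics move right
        return phi_m
    if h1_m <= 0 and h1_p <= 0:          # both characteristics move left
        return phi_p
    if h1_m < 0 < h1_p:                  # expansion: extremum of H at H1 = 0
        return -u / a
    return phi_m if abs(h1_m) >= abs(h1_p) else phi_p  # shock: larger |H1|
```

The expansion branch returns −u/a regardless of whether Godunov's method is minimizing or maximizing H, matching the observation above that both cases lead to the same choice.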
When φ is not a signed distance function, the above simplifications cannot be made. In general, H_1 = u + aφ_x|∇φ|^{-1}, and we need to consider not only I^x, but also the values of φ_y and φ_z in I^y and I^z, respectively. This can become rather complicated quite quickly. In fact, even the RF method can become quite complicated in this case, since it is hard to tell when sonic points are nearby and when they are not. In situations like this, the LLF scheme is ideal, since one merely uses both the values of φ_x^- and φ_x^+ along with some artificial dissipation, setting α as dictated by equation (5.12). At first glance, equation (5.12) might seem complicated to evaluate; e.g., one has to determine the maximum value of |H_1|. However, since α is just a dissipation coefficient, we can safely overestimate α and pay the price of slightly more artificial dissipation. In contrast, it is hard to predict how certain approximations will affect the Godunov scheme. One way to approximate α is to separate H_1 into parts, i.e., using |H_1| ≤ |u| + |aφ_x||∇φ|^{-1} to treat the first and second terms independently. Also, when φ is approximately a signed distance function, we can look at |H_1| = |u + aφ_x|. This function is easy to maximize, since the maximum occurs at either φ_x^- or φ_x^+, and the y and z spatial directions play no role.

7 Constructing Signed Distance Functions

7.1 Introduction

As we have seen, a number of simplifications can be made when φ is a signed distance function. For this reason, we dedicate this chapter to numerical techniques for constructing approximate signed distance functions. These techniques can be applied to the initial data in order to initialize φ to a signed distance function.

As the interface evolves, φ will generally drift away from its initialized value as signed distance. Thus, the techniques presented in this chapter need to be applied periodically in order to keep φ approximately equal to signed distance. For a particular application, one has to decide how sensitive the relevant techniques are to φ's approximation of a signed distance function. If they are very sensitive, φ needs to be reinitialized to signed distance both accurately and often. If they are not sensitive, one can reinitialize with a lower-order accurate method on an occasional basis. However, even if a particular numerical approach doesn't seem to depend on how accurately φ approximates a signed distance function, one needs to remember that φ can develop noisy features and steep gradients that are not amenable to finite-difference approximations. For this reason, it is always advisable to reinitialize occasionally so that φ stays smooth enough to approximate its spatial derivatives with some degree of accuracy.

7.2 Reinitialization

In their seminal level set paper, Osher and Sethian [] initialized their numerical calculations with φ = 1 ± d², where d is the distance function and the "±" sign is negative in Ω^- and positive in Ω^+. Later, it became clear that the signed distance function φ = ±d was a better choice for initializing φ. Mulder, Osher, and Sethian [] demonstrated that initializing φ to a signed distance function results in more accurate numerical solutions than initializing φ to a Heaviside function. While it is obvious that better results can be obtained with smooth functions than nonsmooth functions, there are those who insist on using (slightly smeared out) Heaviside functions, or color functions, to track interfaces.

In [], Chopp considered an application where certain regions of the flow had level sets piling up on each other, increasing the local gradient, while other regions of the flow had level sets separating from each other, flattening out φ. In order to reduce the numerical errors caused by both steepening and flattening effects, [] introduced the notion that one should reinitialize the level set function periodically throughout the calculation. Since only the φ = 0 isocontour has any meaning, one can stop the calculation at any point in time and reset the other isocontours so that φ is again initialized to a signed distance function. The most straightforward way of implementing this is to use a contour plotting algorithm to locate and discretize the φ = 0 isocontour and then explicitly measure distances from it. Unfortunately, this straightforward reinitialization routine can be slow, especially if it needs to be done at every time step. In order to obtain reasonable run times, [] restricted the calculations of the interface motion and the reinitialization to a small band of points near the φ = 0 isocontour, producing the first version of the local level set method. We refer those interested in local level set methods to the more recent works of Adalsteinsson and Sethian [] and Peng, Merriman, Osher, Zhao, and Kang [].

The concept of frequent reinitialization is a powerful numerical tool. In a standard numerical method, one starts with initial data and proceeds forward in time, assuming that the numerical solution stays well behaved until the final solution is computed. With reinitialization, we have a less stringent assumption, since only our φ = 0 isocontour needs to stay well behaved. Any problems that creep up elsewhere are wiped out when the level set is reinitialized. For example, Merriman, Bence, and Osher [] proposed numerical techniques that destroy the nice properties of the level set function and show that poor numerical solutions are obtained using these degraded level set functions. Then they show that periodic reinitialization to a signed distance function repairs the damage, producing high-quality numerical results. Numerical techniques need to be effective only for the φ = 0 isocontour, since the rest of the implicit surface can be repaired by reinitializing φ to a signed distance function. This greatly increases flexibility in algorithm design, since difficulties away from the φ = 0 isocontour can be ignored.

7.3 Crossing Times

One of the difficulties associated with the direct computation of signed distance functions is locating and discretizing the interface. This can be avoided in the following fashion. Consider a point x ∈ Ω^+. If x does not lie on the interface, we wish to know how far from the interface it is so that we can set φ(x) = +d. If we move the interface in the normal direction using equation (6.1) with a = 1, the interface eventually crosses over x, changing the local value of φ from positive to negative. If we keep a time history of the local values of φ at x, we can find the exact time when φ was equal to zero using interpolation in time. This is the time it takes the zero level set to reach the point x, and we call that time, t_o, the crossing time. Since equation (6.1) moves the level set normal to itself with speed a = 1, the time it takes for the zero level set to reach a point x is equal to the distance the interface is from x. That is, the crossing time t_o is equal to the distance d. For points x ∈ Ω^-, the crossing time is similarly determined using a = −1 in equation (6.1).

In a series of papers, [], [], and [], Kimmel and Bruckstein introduced the notion of using crossing times in image-processing applications. For example, [] used equation (6.1) with a = 1 to create shape offsets, which are distance functions with distance measured from the boundary of an image. The idea of using crossing times to solve some general Hamilton-Jacobi equations with Dirichlet boundary conditions was later generalized and rigorized by Osher [].

7.4 The Reinitialization Equation

In [], Rouy and Tourin proposed a numerical method for solving |∇φ| = f(x) for a spatially varying function f derived from the intensity of an image. In the trivial case of f(x) = 1, the solution φ is a signed distance function. They added f(x) to the right-hand side of equation (6.1) as a source term to obtain

  φ_t + |∇φ| = f(x),   (7.1)

which is evolved in time until a steady state is reached. At steady state, the values of φ cease to change, implying that φ_t = 0. Then equation (7.1) reduces to |∇φ| = f(x), as desired. Since only the steady-state solution is desired, [] used an accelerated iteration method instead of directly evolving equation (7.1) forward in time.
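A minimal one-dimensional illustration of this steady-state approach, assuming f(x) = 1 and a Dirichlet condition pinning φ = 0 at the left boundary (the grid size, time step, and iteration count are our illustrative choices, not from the text):

```python
import numpy as np

# Relax phi_t + |phi_x| = 1 (equation (7.1) with f = 1) to steady state on
# [0, 1], holding phi = 0 at the left boundary. Information flows from small
# phi to large phi, so the backward difference is the upwind choice here.
n, dx = 101, 0.01
phi = np.zeros(n)                                # initial guess
for it in range(2000):                           # pseudo-time iteration
    phi_x = (phi[1:] - phi[:-1]) / dx            # upwind (backward) difference
    phi[1:] += 0.5 * dx * (1.0 - np.abs(phi_x))  # forward Euler, dt = 0.5*dx
    # phi[0] is never updated, so the boundary value stays fixed at zero

# at steady state |phi_x| = 1, i.e., phi(x) is the distance to the boundary
assert np.allclose(phi, np.linspace(0.0, 1.0, n), atol=1e-6)
```

Once φ_t has relaxed to zero, the remaining balance is |φ_x| = 1, so φ is exactly the distance to the boundary point where the Dirichlet condition was imposed.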

Equation (7.1) propagates information in the normal direction, so information flows from smaller values of φ to larger values of φ. This equation is of little use in reinitializing the level set function, since the interface location will be influenced by the negative values of φ. That is, the φ = 0 isocontour is not guaranteed to stay fixed, but will instead move around as it is influenced by the information flowing in from the negative values of φ. One way to avoid this is to compute the signed distance function for all the grid points adjacent to the interface by hand. Then

  φ_t + |∇φ| = 1   (7.2)

can be solved in Ω^+ to update φ based on those grid points adjacent to the interface. That is, the grid points adjacent to the interface are not updated, but instead used as boundary conditions. Since there is only a single band of initialized grid cells on each side of the interface, one cannot apply higher-order accurate methods such as HJ WENO. However, if a two-grid-cell-thick band is initialized in Ω^- (in addition to the one-grid-cell-thick band in Ω^+), the total size of the band consists of three grid cells and the HJ WENO scheme can then be used. Alternatively, one could initialize a three-grid-cell-thick band of boundary conditions in Ω^- and use these to update every point in Ω^+ including those adjacent to the interface. Similarly, a three-grid-cell-thick band of boundary conditions can be initialized in Ω^+ and used to update the values of φ in Ω^- by solving

  φ_t − |∇φ| = −1   (7.3)

to steady state. Equations (7.2) and (7.3) reach steady state rather quickly, since they propagate information at speed 1 in the direction normal to the interface. For example, if Δt = 0.5Δx, it takes only about 10 time steps to move information from the interface to 5 grid cells away from the interface.

In [], Sussman, Smereka, and Osher put all this together into a reinitialization equation

  φ_t + S(φ_o)(|∇φ| − 1) = 0,   (7.4)

where S(φ_o) is a sign function taken as 1 in Ω^+, −1 in Ω^-, and 0 on the interface, where we want φ to stay identically equal to zero. Using this equation, there is no need to initialize any points near the interface for use as boundary conditions. The points near the interface in Ω^+ use the points in Ω^- as boundary conditions, while the points in Ω^- conversely look at those in Ω^+. This circular loop of dependencies eventually balances out, and a steady-state signed distance function is obtained. As long as φ is relatively smooth and the initial data are somewhat balanced across the interface, this method works rather well. Unfortunately, if φ is not smooth or φ is much steeper on one side of the interface than the other, circular dependencies on initial data can cause the interface to move incorrectly from its initial starting position. For this reason, [] defined S(φ_o) using the initial values of φ (denoted by φ_o) so that the domain of dependence does not change if the interface incorrectly crosses over a grid point. This was addressed directly by Fedkiw, Aslam, Merriman, and Osher in the appendix of [], where incorrect interface crossings were identified as sign changes in the nodal values of φ. These incorrect interface crossings were rectified by putting the interface back on the correct side of a grid point x, setting φ(x) = ±ε, where ±ε is a small number with the appropriate sign.

In discretizing equation (7.4), the S(φ_o)|∇φ| term is treated as motion in the normal direction as described in Chapter 6. Here S(φ_o) is constant for all time and can be thought of as a spatially varying "a" term. Numerical tests indicate that better results are obtained when S(φ_o) is numerically smeared out, so [] used

  S(φ_o) = φ_o / √(φ_o² + (Δx)²)   (7.5)

as a numerical approximation. Later, Peng, Merriman, Osher, Zhao, and Kang [] suggested that

  S(φ) = φ / √(φ² + |∇φ|²(Δx)²)   (7.6)

was a better choice, especially when the initial φ_o was a poor estimate of signed distance, i.e., when |∇φ_o| was far from 1. In equation (7.6), it is important to update S(φ) continually as the calculation progresses so that the |∇φ| term has the intended effect. In contrast, equation (7.5) is evaluated only once using the initial data. Numerical smearing of the sign function decreases its magnitude, slowing the propagation speed of information near the interface. This probably aids the balancing out of the circular dependence on the initial data as well, since it produces characteristics that do not look as far across the interface for their information. We recommend using Godunov's method for discretizing the hyperbolic S(φ_o)|∇φ| term. After finding a numerical approximation to S(φ_o)|∇φ|, we combine it with the remaining S(φ_o) source term at each grid point and update the resulting quantity in time with a Runge-Kutta method.

Ideally, the interface remains stationary during the reinitialization procedure, but numerical errors will tend to move it to some degree. In [], Sussman and Fatemi suggested an improvement to the standard reinitialization procedure. Since their application of interest was two-phase incompressible flow, they focused on preserving the amount of material in each cell, i.e., preserving the area (volume) in two (three) spatial dimensions. If the interface does not move during reinitialization, the area is preserved. On the other hand, one can preserve the area while allowing the interface to move, implying that their proposed constraint is weaker than it should be. In [] this constraint was applied locally, requiring that the area be preserved individually in each cell. Instead of using the exact area, the authors used equation (1.15) with f(x) = 1 to approximate the area in

each cell as

  A_{i,j} = ∫_{Ω_{i,j}} H(φ) dx,   (7.7)

where Ω_{i,j} is an individual grid cell and H is the smeared-out Heaviside function in equation (1.22). In both [] and the related [] by Sussman, Fatemi, Smereka, and Osher this local constraint was shown to significantly improve the results obtained with the HJ ENO scheme. However, this local constraint method has not yet been shown to improve upon the results obtained with the significantly more accurate HJ WENO scheme. The concern is that the HJ WENO scheme might be so accurate that the approximations made by [] could lower the accuracy of the method.

This local constraint is implemented in [] by the addition of a correction term to the right-hand side of equation (7.4),

  φ_t + S(φ_o)(|∇φ| − 1) = λδ(φ)|∇φ|,   (7.8)

where the multidimensional delta function δ̂ = δ(φ)|∇φ| from equation (1.19) is used, since the modifications are needed only near the interface where A_{i,j} is not trivially equal to either zero or the volume of Ω_{i,j}. The constraint that A_{i,j} in each cell not change, i.e., (A_{i,j})_t = 0, is equivalent to

  ∫_{Ω_{i,j}} H′(φ) φ_t dx = 0.   (7.9)

This procedure is very similar to that used by Rudin, Osher, and Fatemi [] as a continuous in time gradient projection method. In [] a set of constraints needs to be preserved under evolution, while in [] the evolution is not inherited from gradient descent on a functional to be optimized.

Reinitialization is still an active area of research. Recently, Russo and Smereka [] introduced yet another method for computing the signed distance function. This method was designed to keep the stencil from incorrectly looking across the interface at values that should not influence it, essentially removing the balancing act between the interdependent initial data across the interface. Their idea replaces equation (7.4) with a combination of equations (7.2) and (7.3) along with interpolation to find the interface location. In [] marked improvement was shown in using low-order HJ ENO schemes, but the authors did not address whether they can obtain improved results over the recommended HJ WENO discretization of equation (7.4). Moreover, implementing a high-order accurate version of the scheme in [] requires a number of ghost cells, as discussed above.

7.5 The Fast Marching Method

In the crossing-time approach to constructing signed distance functions, the zero isocontour moves in the normal direction, crossing over grid points at times equal to their distance from the interface. In this fashion, each grid