errors might be rather large. Obviously, this means that we would like our implicit function to be as smooth as possible. In the next chapter we discuss using a signed distance function to represent the surface. This turns out to be a good choice, since steep and flat gradients as well as rapidly changing features are avoided as much as possible.

Figure 1.3. A few isocontours of our two-dimensional example φ(x) = x² + y² − 1 (e.g., φ = −.8, φ = 0, and φ = 1.15) along with some representative normals.

Implicit functions make both simple Boolean operations and more advanced constructive solid geometry (CSG) operations easy to apply. This is important, for example, in computer-aided design (CAD). If φ₁ and φ₂ are two different implicit functions, then φ(x) = min(φ₁(x), φ₂(x)) is the implicit function representing the union of the interior regions of φ₁ and φ₂. Similarly, φ(x) = max(φ₁(x), φ₂(x)) is the implicit function representing the intersection of the interior regions of φ₁ and φ₂. The complement of φ₁(x) can be defined by φ(x) = −φ₁(x). Also, φ(x) = max(φ₁(x), −φ₂(x)) represents the region obtained by subtracting the interior of φ₂ from the interior of φ₁.
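These min and max rules are easy to exercise in code. In the sketch below the two circle functions phi1 and phi2 are hypothetical examples chosen purely for illustration; the composition rules themselves are the ones just stated.

```python
# Two example implicit functions: unit circles centered at (0,0) and (1,0).
# These particular shapes are an arbitrary choice for illustration.
phi1 = lambda x, y: x**2 + y**2 - 1.0
phi2 = lambda x, y: (x - 1.0)**2 + y**2 - 1.0

def union(f, g):
    # interior of the union: min of the two implicit functions
    return lambda x, y: min(f(x, y), g(x, y))

def intersection(f, g):
    # interior of the intersection: max of the two implicit functions
    return lambda x, y: max(f(x, y), g(x, y))

def complement(f):
    # complement: negate the implicit function
    return lambda x, y: -f(x, y)

def subtract(f, g):
    # interior of f with the interior of g removed: max(f, -g)
    return lambda x, y: max(f(x, y), -g(x, y))
```

A point lies inside the composite shape exactly when the composed function is negative there, matching the convention that the interior region is where φ < 0.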
The gradient of the implicit function is defined as

∇φ = (∂φ/∂x, ∂φ/∂y, ∂φ/∂z).    (1.1)

The gradient ∇φ is perpendicular to the isocontours of φ and points in the direction of increasing φ. Therefore, if x₀ is a point on the zero isocontour of φ, i.e., a point on the interface, then ∇φ evaluated at x₀ is a vector that points in the same direction as the local unit (outward) normal N to the interface. Thus, the unit (outward) normal is

N = ∇φ / |∇φ|    (1.2)

for points on the interface.

Since the implicit representation of the interface embeds the interface in a domain of one higher dimension, it will be useful to have as much information as possible representable on the higher-dimensional domain. For example, instead of defining the unit normal N by equation (1.2) for points on the interface only, we use equation (1.2) to define a function N everywhere on the domain. This embeds the normal in a function N defined on the entire domain that agrees with the normal for points on the interface. Figure 1.3 shows a few isocontours of our two-dimensional example φ(x) = x² + y² − 1 along with some representative normals.

Consider the one-dimensional example φ(x) = x² − 1, where N is defined by equation (1.2) as N = x/|x|. Here, N points to the right for all x > 0, including x = 1, where the interface normal is N = 1, and N points to the left for all x < 0, including x = −1, where the interface normal is N = −1. The normal is undefined at x = 0, since the denominator of equation (1.2) vanishes. This can be problematic in general, but can be avoided with a number of techniques. For example, at x = 0 we could simply define N as either N = 1 or N = −1. Our two- and three-dimensional examples (above) show similar degenerate behavior at x = 0, where all partial derivatives vanish. Again, a simple technique for evaluating (1.2) at these points is just to pick an arbitrary direction for the normal. Note that the standard trick of adding a small ε > 0 to the denominator of equation (1.2) can be a bad idea in general, since it produces a normal with |N| ≠ 1. In fact, when the denominator in equation (1.2) is zero, so is the numerator, making N = 0 when a small ε > 0 is used in the denominator. (While setting N = 0 is not always disastrous, caution is advised.)

On our Cartesian grid, the derivatives in equation (1.2) need to be approximated, for example using finite difference techniques. We can use a first-order accurate forward difference

∂φ/∂x ≈ (φ_{i+1} − φ_i)/Δx,    (1.3)

abbreviated as D⁺φ, or a first-order accurate backward difference

∂φ/∂x ≈ (φ_i − φ_{i−1})/Δx,    (1.4)

abbreviated as D⁻φ.
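To make the degenerate case concrete, here is a small sketch of equations (1.2)–(1.4) for the one-dimensional example φ(x) = x² − 1. The grid spacing Δx = 0.1 is an arbitrary choice for this sketch, and returning N = 1 when the difference vanishes is one of the arbitrary tie-breaks described above.

```python
dx = 0.1  # grid spacing; an arbitrary choice for this sketch

def phi(x):
    return x * x - 1.0  # the one-dimensional example

def d_plus(x):   # forward difference, equation (1.3)
    return (phi(x + dx) - phi(x)) / dx

def d_minus(x):  # backward difference, equation (1.4)
    return (phi(x) - phi(x - dx)) / dx

def normal(x, diff=d_plus):
    d = diff(x)
    if d == 0.0:
        return 1.0  # denominator of (1.2) vanishes: pick a direction arbitrarily
    return d / abs(d)  # N = phi_x / |phi_x| in one spatial dimension
```

At x = 0 the forward difference gives N = 1 while the backward difference gives N = −1, exactly the disagreement between one-sided formulas discussed in the text.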
A second-order accurate central difference is

∂φ/∂x ≈ (φ_{i+1} − φ_{i−1})/(2Δx),    (1.5)

abbreviated as D⁰φ. (The j and k indices have been suppressed in the above formulas.) The formulas for the derivatives in the y and z directions are obtained through symmetry. These simple formulas are by no means exhaustive, and we will discuss more ways of approximating derivatives later in the text.

When all numerically calculated finite differences are identically zero, the denominator of equation (1.2) vanishes. As in the analytic case, we can simply randomly choose a normal. Here, however, randomly choosing a normal is somewhat justified, since it is equivalent to randomly perturbing the values of φ on our Cartesian mesh by values near round-off error. These small changes in the values of φ are dominated by the local approximation errors in the finite difference formula for the derivatives. Consider a discretized version of our one-dimensional example φ(x) = x² − 1, and suppose that grid points exist at x_{i−1} = −Δx, x_i = 0, and x_{i+1} = Δx with exact values of φ defined as φ_{i−1} = Δx² − 1, φ_i = −1, and φ_{i+1} = Δx² − 1, respectively. The forward difference formula gives N_i = 1, the backward difference formula gives N_i = −1, and the central difference formula cannot be used, since D⁰φ = 0 at x_i = 0. However, simply perturbing φ_{i+1} to Δx² − 1 + ε for any small ε > 0 (even round-off error) gives D⁰φ ≠ 0 and N_i = 1. Similarly, perturbing φ_{i−1} to Δx² − 1 + ε gives N_i = −1. Thus, for any approach that is stable under small perturbations of the data, it is acceptable to randomly choose N when the denominator of equation (1.2) vanishes. Similarly, in our two- and three-dimensional examples, N = x/|x| everywhere except at x = 0, where equation (1.2) is not defined and we are free to choose it arbitrarily. The arbitrary normal at the origin in the one-dimensional case lines up with the normals to either the right, where N = 1, or to the left, where N = −1. Similarly, in two and three spatial dimensions, an arbitrarily chosen normal at x = 0 lines up with other nearby normals. This is always the case, since the normals near the origin point outward in every possible direction.

If φ is a smooth well-behaved function, then an approximation to the value of the normal at the interface can be obtained from the values of N computed at the nodes of our Cartesian mesh. That is, given a point x₀ on the interface, one can estimate the unit outward normal at x₀ by interpolating the values of N from the Cartesian mesh to the point x₀. If one is using forward, backward, or central differences, then linear (bilinear or trilinear) interpolation is usually good enough. However, higher-order accurate formulas can be used if desired. This interpolation procedure requires that φ be well behaved, implying that we should be careful in how we choose φ. For example, it would be unwise to choose an implicit function φ with unnecessary oscillations or steep (or flat) gradients. Again, a good choice for φ turns out to be the signed distance function discussed in the next chapter.

The mean curvature of the interface is defined as the divergence of the normal N = (n₁, n₂, n₃),

κ = ∇·N = ∂n₁/∂x + ∂n₂/∂y + ∂n₃/∂z,    (1.6)

so that κ > 0 for convex regions, κ < 0 for concave regions, and κ = 0 for a plane; see Figure 1.4. While one could simply use finite differences to compute the derivatives of the components of the normal in equation (1.6), it is usually more convenient, compact, and accurate to calculate the curvature directly from the values of φ. Substituting equation (1.2) into equation (1.6) gives

κ = ∇·(∇φ/|∇φ|),    (1.7)

so that we can write the curvature as

κ = (φ_x² φ_yy − 2 φ_x φ_y φ_xy + φ_y² φ_xx + φ_x² φ_zz − 2 φ_x φ_z φ_xz + φ_z² φ_xx + φ_y² φ_zz − 2 φ_y φ_z φ_yz + φ_z² φ_yy) / |∇φ|³    (1.8)

in terms of the first and second derivatives of φ. A second-order accurate finite difference formula for φ_xx, the second partial derivative of φ in the x direction, is given by

∂²φ/∂x² ≈ (φ_{i+1} − 2φ_i + φ_{i−1})/Δx²,    (1.9)

abbreviated as D⁺_x D⁻_x φ, or equivalently, D⁻_x D⁺_x φ. Here D⁺ and D⁻ are defined as in equations (1.3) and (1.4), respectively, and the x subscript indicates that the finite difference is evaluated in the x direction. A second-order accurate finite difference formula for φ_xy is given by D⁰_x D⁰_y φ, or equivalently, D⁰_y D⁰_x φ. The other second derivatives in equation (1.8) are defined in a manner similar to either φ_xx or φ_xy.

In our one-dimensional example, φ(x) = x² − 1, κ = 0 everywhere except at the origin, where equation (1.7) is undefined. Thus, the origin is a removable singularity, and we can define κ = 0 everywhere. Interfaces in one spatial dimension are models of planes in three dimensions (assuming that the unmodeled directions have uniform data). Therefore, using κ = 0 everywhere is a consistent model. In our two- and three-dimensional examples above, κ = 1/|x| and κ = 2/|x| (respectively) everywhere except at the origin. Here the singularities are not removable, and κ → ∞ as we approach the origin. Moreover, κ = 1 everywhere on the one-dimensional interface in two spatial dimensions, and κ = 2 everywhere on the two-dimensional interface in three spatial dimensions. The difference occurs because a two-dimensional circle is a cylinder in three spatial dimensions (assuming that the unmodeled direction has uniform data).
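As a concrete check, the following sketch evaluates the two-dimensional restriction of equation (1.8), κ = (φ_x² φ_yy − 2 φ_x φ_y φ_xy + φ_y² φ_xx)/|∇φ|³, with central differences for the example φ(x) = x² + y² − 1, where the exact answer is κ = 1/|x|. The grid spacing Δx = 0.1 is an arbitrary choice, and the clamp to the resolvable range ±1/Δx mirrors the curvature limiting used on Cartesian grids.

```python
dx = 0.1  # grid spacing; an arbitrary choice for this sketch

def phi(x, y):
    return x * x + y * y - 1.0  # the two-dimensional example

def curvature(x, y):
    # central differences for the first and second derivatives of phi
    px = (phi(x + dx, y) - phi(x - dx, y)) / (2 * dx)
    py = (phi(x, y + dx) - phi(x, y - dx)) / (2 * dx)
    pxx = (phi(x + dx, y) - 2 * phi(x, y) + phi(x - dx, y)) / dx**2
    pyy = (phi(x, y + dx) - 2 * phi(x, y) + phi(x, y - dx)) / dx**2
    pxy = (phi(x + dx, y + dx) - phi(x + dx, y - dx)
           - phi(x - dx, y + dx) + phi(x - dx, y - dx)) / (4 * dx**2)
    grad = (px * px + py * py) ** 0.5
    if grad == 0.0:
        return 0.0  # denominator vanishes; kappa is undefined here (see text)
    # two-dimensional restriction of equation (1.8)
    kappa = (px * px * pyy - 2 * px * py * pxy + py * py * pxx) / grad**3
    # limit kappa to the range resolvable on the grid, [-1/dx, 1/dx]
    return max(-1.0 / dx, min(1.0 / dx, kappa))
```

For this quadratic φ the central differences are exact, so curvature(2.0, 0.0) recovers κ = 1/|x| = 0.5, while values computed near the origin are clamped to 1/Δx = 10.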
It seems nonsensical to be troubled by κ → ∞ as we approach the origin, since this is only a consequence of the embedding. In fact, since the smallest unit of measure on the Cartesian grid is the cell size Δx, it makes little sense to hope to resolve objects smaller than this. That is, it makes little sense to model circles (or spheres) with a radius smaller than Δx. Therefore, we limit the curvature so that −1/Δx ≤ κ ≤ 1/Δx. If a value of κ is calculated outside this range, we merely replace that value with either −1/Δx or 1/Δx, depending on which is closer.

Figure 1.4. Convex regions have κ > 0, and concave regions have κ < 0.

As a final note on curvature, one has to use caution when φ is noisy. The normal N will generally have even more noise, since it is based on the derivatives of φ. Similarly, the curvature κ will be even noisier than the normal, since it is computed with the second derivatives of φ.

1.5 Calculus Toolbox

The characteristic function χ⁻ of the interior region Ω⁻ is defined as

χ⁻(x) = 1 if φ(x) ≤ 0,  0 if φ(x) > 0,    (1.10)

where we arbitrarily include the boundary with the interior region. The characteristic function χ⁺ of the exterior region Ω⁺ is defined similarly as

χ⁺(x) = 0 if φ(x) ≤ 0,  1 if φ(x) > 0,    (1.11)

again including the boundary with the interior region. It is often useful to have only interior and exterior regions so that special treatment is not needed for the boundary. This is easily accomplished by including the measure-zero boundary set with either the interior or exterior region (as above). Throughout the text we usually include the boundary with the interior region Ω⁻ where φ(x) ≤ 0 (unless otherwise specified).

The functions χ± are functions of a multidimensional variable x. It is often more convenient to work with functions of the one-dimensional variable φ. Thus we define the one-dimensional Heaviside function

H(φ) = 0 if φ ≤ 0,  1 if φ > 0,    (1.12)

where φ depends on x, although it is not important to specify this dependence when working with H. This allows us to work with H in one spatial dimension. Note that χ⁺(x) = H(φ(x)) and χ⁻(x) = 1 − H(φ(x)) for all x, so all we have done is to introduce an extra function of one variable H to be used as a tool when dealing with characteristic functions.

The volume integral (area or length integral in ℝ² or ℝ¹, respectively) of a function f over the interior region Ω⁻ is defined as

∫_Ω f(x) χ⁻(x) dx,    (1.13)

where the region of integration is all of Ω, since χ⁻ prunes out the exterior region Ω⁺ automatically. The one-dimensional Heaviside function can be used to rewrite this volume integral as

∫_Ω f(x) (1 − H(φ(x))) dx,    (1.14)

representing the integral of f over the interior region Ω⁻. Similarly,

∫_Ω f(x) H(φ(x)) dx    (1.15)

is the integral of f over the exterior region Ω⁺.

By definition, the directional derivative of the Heaviside function H in the normal direction N is the Dirac delta function

δ̂(x) = ∇H(φ(x)) · N,    (1.16)

which is a function of the multidimensional variable x. Note that this distribution is nonzero only on the interface ∂Ω where φ = 0. We can rewrite equation (1.16) as

δ̂(x) = H′(φ(x)) ∇φ(x) · (∇φ(x)/|∇φ(x)|) = H′(φ(x)) |∇φ(x)|,    (1.17)

using the chain rule to take the gradient of H, the definition of the normal from equation (1.2), and the fact that ∇φ(x) · ∇φ(x) = |∇φ(x)|². In one spatial dimension, the delta function is defined as the derivative of the one-dimensional Heaviside function.
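The characteristic-function integrals can be sampled directly. Here is a sketch of equation (1.14) with the sharp Heaviside of equation (1.12), computing the area of the interior of the unit circle (f = 1) by the midpoint rule; the domain [−2, 2]² and the 400 × 400 grid are arbitrary choices made for this sketch.

```python
def H(p):
    # the sharp one-dimensional Heaviside function, equation (1.12)
    return 0.0 if p <= 0.0 else 1.0

def phi(x, y):
    return x * x + y * y - 1.0  # interior region = unit disk

# midpoint-rule sampling of the area integral (1.14) with f = 1
# over the (arbitrarily chosen) domain [-2, 2] x [-2, 2]
n = 400
h = 4.0 / n
area = 0.0
for i in range(n):
    for j in range(n):
        x = -2.0 + (i + 0.5) * h
        y = -2.0 + (j + 0.5) * h
        area += (1.0 - H(phi(x, y))) * h * h

print(area)  # close to pi
```

This simply counts cells whose centers lie inside the circle and weights each by its area h², which is exactly the pruning behavior of χ⁻ described above.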
Explicitly,

δ(φ) = H′(φ),    (1.18)

where H(φ) is defined in equation (1.12) above. The delta function δ(φ) is identically zero everywhere except at φ = 0. This allows us to rewrite equations (1.16) and (1.17) as

δ̂(x) = δ(φ(x)) |∇φ(x)|,    (1.19)

using the one-dimensional delta function δ(φ).

The surface integral (line or point integral in ℝ² or ℝ¹, respectively) of a function f over the boundary ∂Ω is defined as

∫_Ω f(x) δ̂(x) dx,    (1.20)

where the region of integration is all of Ω, since δ̂ prunes out everything except ∂Ω automatically. The one-dimensional delta function can be used to rewrite this surface integral as

∫_Ω f(x) δ(φ(x)) |∇φ(x)| dx.    (1.21)

Typically, volume integrals are computed by dividing up the interior region Ω⁻, and surface integrals are computed by dividing up the boundary ∂Ω. This requires treating a complex two-dimensional surface in three spatial dimensions. By embedding the volume and surface integrals in higher dimensions, equations (1.14), (1.15), and (1.21) avoid the need for identifying inside, outside, or boundary regions. Instead, the integrals are taken over the entire region Ω. Note that dx is a volume element in three spatial dimensions, an area element in two spatial dimensions, and a length element in one spatial dimension. On our Cartesian grid, the volume of a three-dimensional cell is ΔxΔyΔz, the area of a two-dimensional cell is ΔxΔy, and the length of a one-dimensional cell is Δx.

Consider the surface integral in equation (1.21), where the one-dimensional delta function δ(φ) needs to be evaluated. Since δ(φ) = 0 almost everywhere, i.e., except on the lower-dimensional interface, which has measure zero, it seems unlikely that any standard numerical approximation based on sampling will give a good approximation to this integral. Thus, we use a first-order accurate smeared-out approximation of δ(φ). First, we define the smeared-out Heaviside function

H(φ) = 0 if φ < −ε,
H(φ) = 1/2 + φ/(2ε) + sin(πφ/ε)/(2π) if −ε ≤ φ ≤ ε,    (1.22)
H(φ) = 1 if ε < φ,

where ε is a tunable parameter that determines the size of the bandwidth of numerical smearing. A typically good value is ε = 1.5Δx (making the interface width equal to three grid cells when φ is normalized to a signed distance function with |∇φ| = 1; see Chapter 2). Then the delta function is defined according to equation (1.18) as the derivative of the Heaviside function,

δ(φ) = 0 if φ < −ε,
δ(φ) = 1/(2ε) + cos(πφ/ε)/(2ε) if −ε ≤ φ ≤ ε,    (1.23)
δ(φ) = 0 if ε < φ,

where ε is determined as above. This delta function allows us to evaluate the surface integral in equation (1.21) using a standard sampling technique such as the midpoint rule. Similarly, the smeared-out Heaviside function in equation (1.22) allows us to evaluate the integrals in equations (1.14) and (1.15).

The reader is cautioned that the smeared-out Heaviside and delta functions approach to the calculus of implicit functions leads to first-order accurate methods. For example, when calculating the volume of the region Ω⁻ using

∫_Ω (1 − H(φ(x))) dV    (1.24)

with the smeared-out Heaviside function in equation (1.22) (and f(x) = 1), the errors in the calculation are O(Δx) regardless of the accuracy of the integration method used. If one needs more accurate results, a three-dimensional contouring algorithm such as the marching cubes algorithm can be used to identify the region Ω⁻ more accurately; see Lorenson and Cline or the more recent Kobbelt et al. Since higher-order accurate methods can be complex, we prefer the smeared-out Heaviside and delta function methods whenever appropriate.
2  Signed Distance Functions

2.1 Introduction

In the last chapter we defined implicit functions with φ(x) ≤ 0 in the interior region Ω⁻, φ(x) > 0 in the exterior region Ω⁺, and φ(x) = 0 on the boundary ∂Ω. Little was said about φ otherwise, except that smoothness is a desirable property, especially in sampling the function or using numerical approximations. In this chapter we discuss signed distance functions, which are a subset of the implicit functions defined in the last chapter. We define signed distance functions to be positive on the exterior, negative on the interior, and zero on the boundary. An extra condition of |∇φ(x)| = 1 is imposed on a signed distance function.

2.2 Distance Functions

A distance function d(x) is defined as

d(x) = min(|x − x_I|) for all x_I ∈ ∂Ω,    (2.1)

implying that d(x) = 0 on the boundary, where x ∈ ∂Ω. Geometrically, d may be constructed as follows. If x ∈ ∂Ω, then d(x) = 0. Otherwise, for a given point x, find the point on the boundary set ∂Ω closest to x, and label this point x_C. Then d(x) = |x − x_C|.

For a given point x, suppose that x_C is the point on the interface closest to x. Then for every point y on the line segment connecting x and x_C, x_C is the point on the interface closest to y as well. To see this, consider Figure 2.1, where x, x_C, and an example of a y are shown. Since x_C is the closest interface point to x, no other interface points can be inside the large circle drawn about x passing through x_C. Points closer to y than x_C must reside inside the small circle drawn about y passing through x_C. Since the small circle lies inside the larger circle, no interface points can be inside the smaller circle, and thus x_C is the interface point closest to y. The line segment from x to x_C is the shortest path from x to the interface. Any local deviation from this line segment increases the distance from the interface. In other words, the path from x to x_C is the path of steepest descent for the function d. Evaluating −∇d at any point on the line segment from x to x_C gives a vector that points from x to x_C. Furthermore, since d is Euclidean distance,

|∇d| = 1,    (2.2)

which is intuitive in the sense that moving twice as close to the interface gives a value of d that is half as big.

Figure 2.1. x_C is the closest interface point to x and y.

The above argument leading to equation (2.2) is true for any x as long as there is a unique closest point x_C. That is, equation (2.2) is true except at points that are equidistant from (at least) two distinct points on the interface. Unfortunately, these equidistant points can exist, making equation (2.2) only generally true. It is also important to point out that equation (2.2) is generally only approximately satisfied in estimating the gradient numerically. One of the triumphs of the level set method involves the ease with which these degenerate points are treated numerically.

2.3 Signed Distance Functions

A signed distance function is an implicit function φ with |φ(x)| = d(x) for all x. Thus, φ(x) = d(x) = 0 for all x ∈ ∂Ω, φ(x) = −d(x) for all x ∈ Ω⁻, and φ(x) = d(x) for all x ∈ Ω⁺.
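Equation (2.1) can be exercised by brute force over a sampled interface. In the sketch below the interface is the unit circle, sampled at 1000 points; both the shape and the sample count are arbitrary choices for this sketch, and the sign is attached following the convention just stated.

```python
import math

# Sample the interface - here the unit circle - at 1000 discrete points x_I.
interface = [(math.cos(2 * math.pi * k / 1000), math.sin(2 * math.pi * k / 1000))
             for k in range(1000)]

def d(x, y):
    # equation (2.1): the minimum distance over all interface points x_I
    return min(math.hypot(x - xi, y - yi) for xi, yi in interface)

def phi(x, y):
    # signed distance: negative inside the circle, positive outside
    return -d(x, y) if math.hypot(x, y) < 1.0 else d(x, y)
```

A brute-force minimum over sampled points is O(number of samples) per query; it is shown here only to make the definition concrete, not as an efficient construction.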
Signed distance functions share all the properties of implicit functions discussed in the last chapter. In addition, there are a number of new properties that only signed distance functions possess. For example,

|∇φ| = 1,    (2.3)

as in equation (2.2). Once again, equation (2.3) is true only in a general sense. It is not true for points that are equidistant from at least two points on the interface. Distance functions have a kink at the interface, where d = 0 is a local minimum, causing problems in approximating derivatives on or near the interface. On the other hand, signed distance functions are monotonic across the interface and can be differentiated there with significantly higher confidence.

Given a point x, and using the fact that φ(x) is the signed distance to the closest point on the interface, we can write

x_C = x − φ(x)N    (2.4)

to calculate the closest point on the interface, where N is the local unit normal at x. Again, this is true only in a general sense, since equidistant points x have more than one closest point x_C. Also, on our Cartesian grid, equation (2.4) will be only an approximation of the closest point on the interface x_C. Nevertheless, we will find formulas of this sort very useful. Equations that are true in a general sense can be used in numerical approximations as long as they fail in a graceful way that does not cause an overall deterioration of the numerical method. This is a general and powerful guideline for any numerical approach. So while the user should be cautiously knowledgeable of the possible failure of equations that are only generally true, one need not worry too much if the equation fails in a graceful manner.

2.4 Examples

Figure 2.2. The signed distance function φ(x) = |x| − 1 defining the regions Ω⁻ and Ω⁺ as well as the boundary ∂Ω.

The signed distance function φ(x) = |x| − 1 gives the same boundary ∂Ω, interior region Ω⁻, and exterior region Ω⁺ that the implicit function φ(x) = x² − 1 did. However, the signed distance function φ(x) = |x| − 1 has |∇φ| = 1 for all x ≠ 0. At x = 0 there is a kink in our function, and the derivative is not defined. While this may seem problematic, for example for determining the normal, our Cartesian grid contains only sample points and therefore cannot resolve this kink.
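A small sketch of equation (2.4) for a case where the answer is known exactly: taking φ to be the signed distance to the unit circle (an arbitrary example), N = ∇φ is the radial unit vector, so x_C = x − φ(x)N should land on the circle. The tie-break at the equidistant point x = 0 is an arbitrary choice, mirroring the discussion of degenerate points above.

```python
import math

def phi(x, y):
    # signed distance to the unit circle
    return math.hypot(x, y) - 1.0

def closest_point(x, y):
    # equation (2.4): x_C = x - phi(x) N, with N = grad phi / |grad phi|
    r = math.hypot(x, y)
    if r == 0.0:
        # the origin is equidistant from every interface point;
        # pick one closest point arbitrarily
        return (1.0, 0.0)
    nx, ny = x / r, y / r  # for this phi, grad phi is the radial unit vector
    p = phi(x, y)
    return (x - p * nx, y - p * ny)
```

For instance, closest_point(3.0, 4.0) returns (0.6, 0.8) up to rounding, which indeed lies on the unit circle.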