
10.4. Geometric Optics in a Phase-Space-Based Level Set Framework

dimensional surface in $x, y, z, \theta, \phi$ space, where $\theta, \phi$ give the angle of the normal, and this results in codimension-3 motion in 5-space, plus time. The complexity goes up by a power of $n$ over the two-dimensional case, as does the memory requirement, where $n$ is the one-dimensional number of points. Again, this involves a local level set method, this time using three level set functions.

Standard ray tracing is the ultimate Lagrangian method. Since merging and topological changes are not an issue (we actually want fronts to cross through each other without intersecting), the usual level set method has difficulties, especially with self-intersecting fronts; see, e.g., Figure 6 in [124]. However, there are many advantages of an Eulerian fixed-grid approach. Ray tracing gives inadequate spatial resolution of the wave front. This is due to the fact that points close together may diverge at later times, leaving holes in the wave front, so interpolation steps must be added. The method in [124] overcomes this resolution problem and also the usual Eulerian problem of how to get the solution when waves become multivalued and singularities such as swallowtails or caustics develop. Eulerian approaches such as that in [59] suffer from the second problem, and the ingenious dynamic surface extension of Steinhoff et al. [156], with some improvements in Ruuth et al. [144], needs both interpolation and a method to keep track of singularities due to multiple crossing rays.

Figure 10.4. The arrow in the center indicates the motion of the real curve, while the arrows to the right and left indicate the motion of the ghost curves.

Part III

Image Processing and Computer Vision

The fields of image processing and computer vision are incredibly vast, so we do not make any attempt either to survey them or to impart any deep insight. In the following chapters we merely illustrate a few applications of level set methods in these areas.

The use of partial differential equations in image processing and computer vision, in particular the use of the level set method and dynamic implicit surfaces, has increased dramatically in recent years. Historically, the field of computer vision was probably the earliest to be affected significantly by the level set method. There are many good general references, e.g., [145, 29, 77, 120]. In this section we present three examples that are prototypes for a far wider class of applications.

The first chapter discusses a basic (perhaps the basic) issue in image processing, namely restoration of degraded images. The second concerns image segmentation with snakes and active contours. The third concerns reconstruction of surfaces from unorganized data points.

Traditionally, these closely related fields, image processing and computer vision, have developed independently. However, level set and related PDE-based methods have served to provide both a new common language and a new set of tools that have led to increased interaction.

11 Image Restoration

11.1 Introduction to PDE-Based Image Restoration

A basic idea in this field is to view a gray-scale image as a function $u_0(x, y)$ defined on a square $\Omega : \{(x, y) \mid 0 \le x, y \le 1\}$, with $u_0$ taking on discrete values between 0 and 255, which we take as a continuum for the sake of this discussion.

A standard operation on images is to convolve $u_0$ with a Gaussian of variance $\sigma > 0$,

$$J(x, y, \sigma) = \frac{1}{4\pi\sigma}\, e^{-(x^2+y^2)/(4\sigma)}, \qquad (11.1)$$

to obtain

$$u(x, y, \sigma) = \iint J(x - x', y - y', \sigma)\, u_0(x', y')\, dx'\, dy' = J * u_0. \qquad (11.2)$$

This has the same effect as solving the initial value problem for the heat equation

$$u_t = u_{xx} + u_{yy}, \qquad u(x, y, 0) = u_0(x, y) \qquad (11.3)$$

for $t > 0$ (ignoring boundary conditions) to obtain $u(x, y, \sigma)$ at $t = \sigma > 0$, i.e., the expression obtained in equation (11.2).

Thus, one fundamental generalization is merely to replace the heat equation with an appropriate flow equation of the form

$$u_t = F(u, Du, D^2u, x, t) \qquad (11.4)$$

for $t > 0$ with initial data $u(x, y, 0) = u_0(x, y)$ defining $u(x, y, t)$ as the processed image. This is called defining a "scale space" with $t > 0$ as the scale. As $t$ increases, the image is generally (but not always) coarsened in some sense. Here

$$Du = (u_x, u_y) \qquad (11.5)$$

and

$$D^2u = \begin{pmatrix} u_{xx} & u_{xy} \\ u_{yx} & u_{yy} \end{pmatrix} \qquad (11.6)$$

are the gradient and Hessian, respectively. The equation is typically of second order, as is the heat equation, although the assumption of parabolicity, especially strict parabolicity, which implies smoothing in all directions, is often weakened. Second order is usually chosen for several reasons: (1) the numerical time-step restriction is typically $\Delta t = c_1(\Delta x)^2 + c_2(\Delta y)^2$ for explicit schemes, which is reasonable; (2) the method may have a useful and appropriate maximum principle.

These generally nonlinear methods have become popular for the following reasons. Classical algorithms for image deblurring and denoising have been based mainly on least squares, Fourier series, and other $L^2$-norm approximations. Consequently, the results are likely to be contaminated by Gibbs's phenomenon (ringing) and/or smearing near edges. Their computational advantage comes from the fact that these classical algorithms are linear; thus fast solvers are widely available. However, the effect of the restoration is not local in space. Other bases of orthogonal functions have been introduced in order to get rid of these problems, e.g., compactly supported wavelets, but Gibbs's phenomenon and smearing are still present for these linear procedures.

Rudin [140] made the important observation that images are characterized by singularities such as edges and boundaries. Thus nonlinearity, especially ideas related to the numerical analysis of shocks in solutions of systems of hyperbolic conservation laws, should play a key role in image analysis. Later, Perona and Malik [131] described a family of nonlinear second-order equations of the type given in equation (11.4) which have an antidiffusive (hence deblurring) as well as a diffusive (hence denoising) capability. This was modified and made rigorous by Catte et al. [41] and elsewhere.

Perona and Malik proposed the following. Consider the equation

$$u_t = \nabla \cdot G(\nabla u) = \frac{\partial}{\partial x} G^1(u_x, u_y) + \frac{\partial}{\partial y} G^2(u_x, u_y) \qquad (11.7)$$

with initial data $u(x, y, 0) = u_0(x, y)$ in $\Omega$ with appropriate boundary conditions. This equation is parabolic (hence smoothing) when the matrix of partial derivatives

$$\begin{pmatrix} G^1_{u_x} & G^1_{u_y} \\ G^2_{u_x} & G^2_{u_y} \end{pmatrix} \qquad (11.8)$$

is positive definite (or weakly parabolic when it is positive semidefinite). When this matrix has a negative eigenvalue, equation (11.7) resembles the backwards heat equation. One might expect such initial value problems to result in unstable blowup, especially for nonsmooth initial data. However, if we multiply equation (11.7) by $u^{2p-1}/(2p)$ for $p$ a positive integer and integrate by parts, we arrive at

$$\frac{1}{2p}\frac{d}{dt}\int_\Omega |u|^{2p}\, d\Omega = -(2p-1)\int_\Omega \left(\nabla u \cdot G(\nabla u)\right) |u|^{2p-2}\, d\Omega. \qquad (11.9)$$

So if

$$u_x G^1(u_x, u_y) + u_y G^2(u_x, u_y) \ge 0, \qquad (11.10)$$

the solutions stay bounded in all $L^{2p}$, $p > 1$, spaces, including $p = \infty$, which means that the maximum of $u$ is nonincreasing. Then all one needs to do is allow $G$ to be backwards parabolic but satisfy equation (11.10) above, and a restoring effect is obtained.

This initial value problem is not well posed; i.e., two nearby initial data will generally result in very different solutions. There are many such examples in the literature. In one spatial dimension parabolicity means that the derivative of $G^1(u_x)$ is positive, while equation (11.10) just means that $u_x G^1(u_x) \ge 0$. Obviously, functions that have the same sign as their argument but are sometimes decreasing in their argument will give bounded restoring results in those ranges of $u_x$.

11.2 Total Variation-Based Image Restoration

The total variation of a function of one variable $u(x)$ can be defined as

$$TV(u) = \sup_{h > 0} \int \left| \frac{u(x+h) - u(x)}{h} \right| dx, \qquad (11.11)$$

which we will pretend is the same as $\int |u_x|\, dx$ (analogous statements are made in two or more spatial dimensions). Thus $TV(u)$ is finite for any bounded increasing or decreasing function, including functions with jump discontinuities. On the other hand, this is not true for $\int |u_x|^p\, dx$ for any $p > 1$. Note that $p < 1$ results in a nonconvex functional; i.e., the triangle inequality is false. For this reason (among others) TV functions are the appropriate class for shock solutions to conservation laws and for the problems arising in processing images with edges.

We also remark that it is very easy to show (formally at least) that $u_x G(u_x) \ge 0$ implies that the evolution

$$u_t = \frac{\partial}{\partial x} G(u_x) \qquad (11.12)$$

leads to the estimate

$$\frac{d}{dt} \int |u_x|\, dx \le 0, \qquad (11.13)$$

i.e., the evolution is total variation diminishing, or TVD; see, e.g., Harten [80]. To see this, multiply equation (11.12) by $\operatorname{sign} u_x$ and proceed formally. In fact it is easy to see that the finite difference approximation to equation (11.12),

$$u^{n+1}_i = u^n_i + \frac{\Delta t}{\Delta x}\left[ G\!\left(\frac{u^n_{i+1} - u^n_i}{\Delta x}\right) - G\!\left(\frac{u^n_i - u^n_{i-1}}{\Delta x}\right) \right], \qquad (11.14)$$

with a time-step restriction of

$$\frac{\Delta t}{(\Delta x)^2}\, \frac{G(u_x)}{u_x} < \frac{1}{2}, \qquad (11.15)$$

satisfies

$$\sum_i \left| u^{n+1}_{i+1} - u^{n+1}_i \right| \le \sum_i \left| u^n_{i+1} - u^n_i \right|; \qquad (11.16)$$

i.e., it is also TVD.

All this makes one realize that TV, as predicted by Rudin [140], is in some sense the right class in which to process images, at least away from highly oscillatory textures. The work of Rudin [140] led to the TV-preserving shock filters of Rudin and Osher [125] and to the successful total variation based restoration algorithms of Rudin et al. [142, 141] and Marquina and Osher [111].

In brief, suppose we are presented with a noisy blurred image

$$u_0 = J * u + n, \qquad (11.17)$$

where $J$ is a given convolution kernel (see equation (11.2)) and $n$ represents noise. Suppose also that we have some estimate of the mean and variance of the noise. We then wish to obtain the "best" restored image. Our definition of "best" must include an accurate treatment of edges, i.e., jumps.

A straightforward approach is just to invert this directly. If the Fourier transform of $J$, $\hat{J} = \mathcal{F}J$, is nonvanishing, then we could try

$$u = \mathcal{F}^{-1}\left(\hat{J}^{-1}\hat{u}_0\right). \qquad (11.18)$$

This gives poor results, even in the absence of noise, because high frequencies associated with edges will be amplified, leading to major spurious oscillations. Of course, the presence of significant noise makes this procedure disastrous.

Another approach might be to regularize this procedure. We could try adding a penalty term to a minimization procedure; i.e., choose $u$ to solve the following constrained minimization problem:

$$\min_u \left\{ \mu \int |\nabla u|^2\, d\Omega + \left\| J * u - u_0 \right\|^2 \right\}, \qquad (11.19)$$

where $\lambda = \mu^{-1} > 0$ is often a Lagrange multiplier. The minimizer is

$$u = \mathcal{F}^{-1}\left( \frac{\hat{J}(\xi_1, \xi_2)\, \hat{u}_0(\xi_1, \xi_2)}{\mu(\xi_1^2 + \xi_2^2) + |\hat{J}(\xi_1, \xi_2)|^2} \right), \qquad (11.20)$$

where $\mu$ can be chosen so that the variance of the noise is given; i.e., we can choose $\mu$ so that

$$\int |J * u - u_0|^2\, d\Omega = \sigma^2 = \int |\hat{J}\hat{u} - \hat{u}_0|^2\, d\Omega, \qquad (11.21)$$

which is a simple algebraic equation for $\lambda$. Alternatively, $\mu$ could be a coefficient in a penalty term and could be obtained by trial and error. Obviously, this is a very simple and fast procedure, but its effect is either to smear edges or to allow oscillations. This is because the space of functions we are considering does not allow clean jumps.

Instead, the very simple idea introduced by Rudin et al. [142] is merely to replace the power of 2 by a 1 in the exponent of $|\nabla u|$ in equation (11.19). Thus, TV restoration is

$$\min_u \left\{ \int |\nabla u|\, d\Omega + \hat{\lambda} \left\| J * u - u_0 \right\|^2 \right\}, \qquad (11.22)$$

where $\hat{\lambda} > 0$ is a Lagrange multiplier and we drop the $L^2$ subscript in the second term. This leads us to the nonlinear Euler-Lagrange equation (assuming $J(x) = J(-x)$ for simplicity only)

$$\nabla \cdot \left( \frac{\nabla u}{|\nabla u|} \right) - \lambda J * (J * u - u_0) = 0, \qquad (11.23)$$

where $\lambda = 2\hat{\lambda}$. Of course, Fourier analysis is useless here, so the standard method of solving this is to use gradient descent, i.e., to solve

$$u_t = \nabla \cdot \left( \frac{\nabla u}{|\nabla u|} \right) - \lambda J * (J * u - u_0) \qquad (11.24)$$

for $t > 0$ to steady state with $u(x, y, 0)$ given. Again, $\lambda$ may be chosen so as to satisfy equation (11.21), although the procedure to enforce this is a bit more intricate. First we note that if the mean of the noise is zero, i.e., $\int n\, d\Omega = 0$ and $\int J\, d\Omega = 1$, it is easy to see that the constraint

$$\int (J * u(x, y, t) - u_0)\, d\Omega = 0 \qquad (11.25)$$

is satisfied for all $t > 0$ if it is satisfied at $t = 0$. This is true regardless of the choice of $\lambda$.

In order to satisfy the second constraint, equation (11.21), we use a version of the projection gradient method of Rosen [138] introduced in [142]. We note that a nonvariational version of this idea was used by Sussman and Fatemi [158] to help preserve area for the level set reinitialization step; see Chapter 7. If we wish to solve $\min_u \int f(u)\, d\Omega$ such that $\int g(u)\, d\Omega = 0$, we start with a function $V_0$ such that $\int g(V_0)\, d\Omega = 0$. Then gradient descent leads us to the evolution equation

$$u_t = -f_u - \lambda g_u \qquad (11.26)$$

for $t > 0$ with $u(0) = V_0$. We wish to maintain the constraint under the flow, which means

$$\frac{d}{dt} \int g(u)\, d\Omega = 0 = \int g_u u_t\, d\Omega = -\int f_u g_u\, d\Omega - \lambda \int g_u^2\, d\Omega; \qquad (11.27)$$

thus we merely choose $\lambda(t)$ such that

$$\lambda(t) = -\frac{\int f_u g_u\, d\Omega}{\int g_u^2\, d\Omega}. \qquad (11.28)$$

Thus we have

$$u_t = -f_u + g_u\, \frac{\int f_u g_u\, d\Omega}{\int g_u^2\, d\Omega} \qquad (11.29)$$

and

$$\frac{d}{dt} \int f(u)\, d\Omega = -\int f_u^2\, d\Omega + \frac{\left( \int f_u g_u\, d\Omega \right)^2}{\int g_u^2\, d\Omega} \le 0, \qquad (11.30)$$

by Schwartz's inequality, so the function to be minimized diminishes (strictly, if $g_u$ and $f_u$ are independent), and convergence occurs when $u$ is such that $f_u + \lambda g_u = 0$ for some $\lambda$. These ideas generalize to much more complicated situations, e.g., many independent constraints. In our present setting this leads us to choosing $\lambda$ in equation (11.24) as

$$\lambda = \frac{\int \nabla \cdot \left( \frac{\nabla u}{|\nabla u|} \right) \left( J * (J * u - u_0) \right) d\Omega}{\int \left( J * (J * u - u_0) \right)^2 d\Omega}. \qquad (11.31)$$

Thus we have our TV denoising/deblurring algorithm given by equations (11.24) and (11.31). Again, we repeat that in practice $\mu$ is often picked by the user to be a fixed constant. Another difficulty with using equation (11.31) comes in the initialization. Recall that we need $u(x, y, 0)$ to satisfy equation (11.25) as well as

$$\int \left( J * u(x, y, 0) - u_0 \right)^2 d\Omega = \sigma^2. \qquad (11.32)$$
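The projection gradient flow of equations (11.26)-(11.28) can be exercised on a small discrete toy problem. The objective and constraint below are hypothetical stand-ins chosen only for illustration (a least-squares fit subject to a variance-like constraint), not the image functionals of this chapter; with forward Euler stepping the constraint is maintained only to first order in the step size, so a small drift is expected.

```python
import numpy as np

# Toy sketch of the projected gradient flow (11.26)-(11.29): minimize
# f(u) = (u - a)^2 summed over a grid, subject to sum(g(u)) = 0 with
# g(u) = u^2 - 1, so f_u = 2(u - a), g_u = 2u, and lambda(t) follows (11.28).
# All choices (a, n, dt, step count) are arbitrary illustrative values.

rng = np.random.default_rng(1)
n = 100
a = rng.standard_normal(n)                     # target data (hypothetical)
u = np.where(rng.random(n) < 0.5, 1.0, -1.0)   # V0: satisfies sum(u^2 - 1) = 0

def f(u): return np.sum((u - a) ** 2)
def fu(u): return 2.0 * (u - a)
def gu(u): return 2.0 * u

dt = 1e-3
f_start = f(u)
for _ in range(2000):
    lam = -np.sum(fu(u) * gu(u)) / np.sum(gu(u) ** 2)   # equation (11.28)
    u = u + dt * (-fu(u) - lam * gu(u))                 # equation (11.26)

drift = abs(np.sum(u ** 2 - 1.0))   # constraint preserved up to O(dt) error
print(f_start, f(u), drift)
```

The choice of lambda zeroes the discrete inner product of the constraint gradient with the update direction at every step, which is exactly the projection idea; the objective then decreases as in equation (11.30).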

11.3 Numerical Implementation of TV Restoration

We now turn to the issue of fast numerical implementation of this method, as well as its connection with the dynamic evolution of surfaces. The evolution equation (11.24) has an interesting geometric interpretation in terms of level set evolution. We can view this as a procedure that first moves every level set of the function $u$ with velocity equal to its mean curvature divided by the norm of the gradient of $u$ and then projects back onto the constraint set, for which the variance of the noise is fixed. The first step has the effect of removing high-curvature specks of noise, even in the presence of steep gradients, and leaving alone piecewise smooth clean functions at their jump discontinuities.

From a finite difference scheme point of view, the effect at edges is easy to describe. Suppose we wish to approximate equation (11.24) for $\lambda = 0$. The equation can be written as

$$u_t = \frac{\partial}{\partial x}\left( \frac{u_x}{\sqrt{u_x^2 + u_y^2 + \delta}} \right) + \frac{\partial}{\partial y}\left( \frac{u_y}{\sqrt{u_x^2 + u_y^2 + \delta}} \right), \qquad (11.33)$$

where $\delta > 0$ is very small, chosen to avoid division by zero at places where $|\nabla u| = 0$. There are serious numerical issues here involving time-step restrictions. Intuitively, an explicit scheme should be restricted by

$$\frac{\Delta t}{(\Delta x)^2 \sqrt{u_x^2 + u_y^2 + \delta}} \le c \qquad (11.34)$$

for a constant $c$, which is terribly restrictive near flat (zero) gradients. A typical scheme might be

$$u^{n+1}_{i,j} = u^n_{i,j} + \frac{\Delta t}{(\Delta x)^2} \Big[ C_{i+\frac12,j}\left(u^n_{i+1,j} - u^n_{i,j}\right) - C_{i-\frac12,j}\left(u^n_{i,j} - u^n_{i-1,j}\right) + D_{i,j+\frac12}\left(u^n_{i,j+1} - u^n_{i,j}\right) - D_{i,j-\frac12}\left(u^n_{i,j} - u^n_{i,j-1}\right) \Big], \qquad (11.35)$$

where, in terms of the undivided differences $\Delta^x u_{i,j} = u_{i+1,j} - u_{i,j}$ and $\Delta^y u_{i,j} = u_{i,j+1} - u_{i,j}$,

$$C_{i+\frac12,j} = \left[ \left(\Delta^x u_{i,j}\right)^2 + \frac{\left(\Delta^y u_{i+\frac12,j}\right)^2 + \left(\Delta^y u_{i+\frac12,j-1}\right)^2}{2} + \delta(\Delta x)^2 \right]^{-1/2} > 0 \qquad (11.36)$$

and

$$D_{i,j+\frac12} = \left[ \frac{\left(\Delta^x u_{i,j+\frac12}\right)^2 + \left(\Delta^x u_{i-1,j+\frac12}\right)^2}{2} + \left(\Delta^y u_{i,j}\right)^2 + \delta(\Delta x)^2 \right]^{-1/2} > 0 \qquad (11.37)$$

for

$$u_{i+\frac12,j} = \frac12\left(u_{i,j} + u_{i+1,j}\right), \qquad u_{i,j+\frac12} = \frac12\left(u_{i,j} + u_{i,j+1}\right). \qquad (11.38)$$

This is a second-order accurate spatial discretization.

At steady state $u^{n+1}_{i,j} \equiv u^n_{i,j} \equiv u_{i,j}$, and we can solve for $u_{i,j}$, obtaining the nonlinear relationship

$$u_{i,j} = \frac{C_{i+\frac12,j}\, u_{i+1,j} + C_{i-\frac12,j}\, u_{i-1,j} + D_{i,j+\frac12}\, u_{i,j+1} + D_{i,j-\frac12}\, u_{i,j-1}}{C_{i+\frac12,j} + C_{i-\frac12,j} + D_{i,j+\frac12} + D_{i,j-\frac12}}; \qquad (11.39)$$

i.e., $u_{i,j}$ is a weighted convex combination of its four neighbors. This smoothing is anisotropic. If, for example, there is a large jump from $u_{i,j}$ to $u_{i+1,j}$, then $C_{i+\frac12,j}$ is close to zero, and the weighting is done in an almost WENO fashion. This helps to explain the virtues of TV denoising: the edges are hardly smeared. Contrast this with the linear heat equation, which at steady state yields

$$u_{i,j} = \frac14 \left[ u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} \right]. \qquad (11.40)$$

Severe smearing of edges is the result. For further discussion and generalizations, see Chan et al. [31].

The explicit time-step restriction in equation (11.34) leads us to believe that convergence to steady state might be slow, especially in the presence of significant blur and noise. Many attempts have been made to speed this up; see, e.g., Chan et al. [30]. Another interesting observation is that equation (11.24) scales in a strange way. If $u$ is replaced by $h(u)$, with $h' > 0$, $h(0) = 0$, and $h(255) = 255$, then equation (11.24), even for $\lambda = 0$, is not invariant. This means that the evolution process is not morphological; see Alvarez et al. [5]; i.e., it does not depend only on the level sets of the intensity, but on their values. One possible fix for these two problems is the following simple idea introduced by Marquina and Osher [111].

We merely multiply the right-hand side of equation (11.24) by $|\nabla u|$ and drive this equation to steady state. The effect of this is beneficial in various aspects. Although the steady states of both models are analytically the same, since $|\nabla u|$ vanishes only in flat regions, there are strong numerical, analytical, and philosophical advantages of this newer model.

(1) The time-step restriction is now $\Delta t / (\Delta x)^2 \le c$ for some $c > 0$, so simple explicit-in-time methods can be used.

(2) We can use ENO or WENO versions of Roe's entropy-condition-violating scheme for the convection term (there is no viscosity or entropy condition for images) and central differencing for the regularized anisotropic diffusion (curvature) term. This seems to give

better numerical answers; the numerical steady states do not have the staircasing effect sometimes seen in TV reconstruction; see, e.g., Chambolle and Lions [42].

(3) In the pure denoising case, i.e., $J * u \equiv u$, there is a simple maximum principle (analytical as well as numerical).

(4) The procedure is almost morphological; i.e., if we replace $u$ by $h(v)$ and $u_0$ by $h(v_0)$ with $h'(u) > 0$, then the evolution

$$u_t = |\nabla u|\, \nabla \cdot \left( \frac{\nabla u}{|\nabla u|} \right) - \lambda\, |\nabla u|\, J * (J * u - u_0) \qquad (11.41)$$

transforms to the corresponding evolution for $v$.
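A minimal sketch of this modified evolution in the pure denoising case ($\lambda = 0$, $J * u \equiv u$) follows. It uses a $\delta$-regularized curvature term, plain central differences, and periodic boundaries; these are illustrative assumptions, not the ENO/WENO discretization recommended above. Multiplying by $|\nabla u|$ turns the right-hand side into $(u_{xx}u_y^2 - 2u_xu_yu_{xy} + u_{yy}u_x^2)/(u_x^2 + u_y^2 + \delta)$, whose effective diffusion coefficients are bounded by one, so an explicit step $\Delta t = c(\Delta x)^2$ is stable with $c$ independent of $|\nabla u|$.

```python
import numpy as np

# Sketch of the |grad u|-premultiplied TV flow, pure denoising, lambda = 0.
# Central differences, periodic boundaries; grid size, delta, dt, and the
# step count are arbitrary illustrative choices, not values from the text.

rng = np.random.default_rng(4)
N = 64
dx = 1.0 / N
delta = 1e-6
u = np.where(np.arange(N)[None, :] < N // 2, 1.0, 0.0) * np.ones((N, N))
u += 0.05 * rng.standard_normal((N, N))          # noisy vertical edge
rough0 = np.abs(np.diff(u, axis=0)).mean()       # grid-scale roughness measure

def sh(a, axis, k):                              # periodic shift helper
    return np.roll(a, k, axis=axis)

dt = 0.1 * dx**2                                 # dt = c (dx)^2, c independent of |grad u|
for _ in range(100):
    ux = (sh(u, 1, -1) - sh(u, 1, 1)) / (2 * dx)
    uy = (sh(u, 0, -1) - sh(u, 0, 1)) / (2 * dx)
    uxx = (sh(u, 1, -1) - 2 * u + sh(u, 1, 1)) / dx**2
    uyy = (sh(u, 0, -1) - 2 * u + sh(u, 0, 1)) / dx**2
    uxy = (sh(sh(u, 1, -1), 0, -1) - sh(sh(u, 1, -1), 0, 1)
           - sh(sh(u, 1, 1), 0, -1) + sh(sh(u, 1, 1), 0, 1)) / (4 * dx**2)
    # |grad u| * div(grad u / |grad u|), delta-regularized:
    rhs = (uxx * uy**2 - 2 * ux * uy * uxy + uyy * ux**2) / (ux**2 + uy**2 + delta)
    u = u + dt * rhs

rough1 = np.abs(np.diff(u, axis=0)).mean()
jump = u[:, :N // 2].mean() - u[:, N // 2:].mean()
print(rough0, rough1, jump)
```

A straight edge has zero curvature, so it is left essentially untouched, while grid-scale noise is diffused along the level lines of the intensity.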
