
tained a complete solution under certain conditions. Their solution is complicated, expressed
in terms of the weighted external path length of a binary tree. It is sufficiently explicit, though,
to give a complete picture of the continuity, convexity, and oscillation properties of f_n. In some
cases their solution simplifies dramatically.
Another class of quasi-linear recurrences involves the greatest integer function. Following
[104], consider recurrences of the form

a(0) = 1,   a(n) = \sum_{i=1}^{s} r_i a(\lfloor n/m_i \rfloor),   n ≥ 1,   (9.76)

where r_i > 0 for all i, and the m_i are integers, m_i ≥ 2 for all i. Let τ > 0 be the (unique)
solution to

\sum_{i=1}^{s} r_i m_i^{-τ} = 1.   (9.77)

If there is an integer d and integers u_i such that m_i = d^{u_i} for 1 ≤ i ≤ s, then the limit of
a(n) n^{-τ} as n → ∞ does not exist, but the limit of a(d^k) d^{-kτ} as k → ∞ does exist. If
there is no such d, then the limit of a(n) n^{-τ} as n → ∞ does exist, and can readily be
computed. For example, when

a(n) = a(\lfloor n/2 \rfloor) + a(\lfloor n/3 \rfloor) + a(\lfloor n/6 \rfloor)   for n ≥ 1,

this limit is 12(\log 432)^{-1}. Convergence to the limit is extremely slow, as is shown in [104]. The
method of proof used in [104] is based on renewal theory. Several other methods for dealing
with recurrences of the type (9.76) are mentioned in [104] and the references listed in that
paper. There are connections to other recurrences that are linear in two variables, such as

b(m, n) = b(m, n-1) + b(m-1, n) + b(m-1, n-1),   m, n ≥ 1.   (9.78)
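For the example just given, τ = 1, since 1/2 + 1/3 + 1/6 = 1, and since 2, 3, 6 are not all powers of a single integer d, a(n)/n converges to 12(\log 432)^{-1} ≈ 1.9775. The following sketch (our own illustration, not code from [104]) iterates the recurrence with memoization; consistent with the slow-convergence remark above, the ratio drifts toward the limit only gradually.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    # recurrence (9.76) with (r_i, m_i) = (1, 2), (1, 3), (1, 6)
    if n == 0:
        return 1
    return a(n // 2) + a(n // 3) + a(n // 6)

# tau = 1 solves (1/2)^tau + (1/3)^tau + (1/6)^tau = 1
limit = 12 / math.log(432)              # ≈ 1.9775
for n in (10**3, 10**4, 10**5):
    print(n, a(n) / n, limit)           # ratio approaches the limit slowly
```

Memoization is essential here: the distinct arguments are only the values ⌊n/(2^a 3^b)⌋, of which there are O((log n)²), so even n = 10⁵ is instantaneous.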



Consider an infinite sequence of integers 2 ≤ a_1 < a_2 < ... such that

\sum_{j=1}^{∞} a_j^{-1} \log a_j < ∞,

and define c(0) = 0,

c(n) = \sum_{j=1}^{∞} c(\lfloor n/a_j \rfloor) + 1,   n ≥ 1.   (9.79)

If ρ is the (unique) positive solution to

\sum_{j=1}^{∞} a_j^{-ρ} = 1,

then Erdős [103] showed that

c(n) ∼ c n^ρ   as n → ∞   (9.80)

for a positive constant c. Although the recurrence (9.79) is similar to that of Eq. (9.76), the
results are different (no oscillations can occur for a recurrence given by Eq. (9.79)) and the
methods are dissimilar.
Karp [221] considers recurrences of the type T(x) = a(x) + T(h(x)), where x is a nonnegative
real variable, a(x) ≥ 0, and h(x) is a random variable, 0 ≤ h(x) ≤ x, with m(x) being the
expectation of h(x). Such recurrences arise frequently in the analysis of algorithms, and Karp
proves several theorems that bound the probability that T(x) is large. For example, he obtains
the following result.

Theorem 9.2. Suppose that a(x) is a nondecreasing continuous function that is strictly
increasing on {x : a(x) > 0}, and m(x) is a continuous function. Then for all x ∈ ℝ^+ and
k ∈ ℤ^+,

Prob(T(x) ≥ u(x) + k a(x)) ≤ (m(x)/x)^k,

where u(x) is the unique least nonnegative solution to the equation u(x) = a(x) + u(m(x)).
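As a concrete, simplified illustration of the flavor of this bound (our own example, and one that glosses over the exact hypotheses of Theorem 9.2 rather than verifying them): take a(x) = 1 for x ≥ 1 and h(x) uniform on [0, x], so that m(x) = x/2, and u(x) = 1 + u(x/2) gives u(x) = ⌈log₂ x⌉ for x > 1. A simulation is then consistent with the tail bound (m(x)/x)^k = 2^{-k}.

```python
import math
import random

def T(x, rng):
    # one sample of T(x) = a(x) + T(h(x)) with a(x) = 1 for x >= 1
    # and h(x) uniform on [0, x]
    steps = 0
    while x >= 1:
        steps += 1
        x = rng.uniform(0.0, x)
    return steps

rng = random.Random(12345)
x = 1000.0
u = math.ceil(math.log2(x))       # solves u(x) = 1 + u(x/2), u(x) = 0 for x < 1
trials = 20000
for k in (1, 2, 3):
    tail = sum(T(x, rng) >= u + k for _ in range(trials)) / trials
    print(k, tail, "<=", 0.5**k)  # empirical tail stays below the bound
```

This models, for instance, the number of rounds of a randomized halving process; the point of Karp's theorem is that the tail decays geometrically beyond the deterministic solution u(x).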

Another result, proved in [176], is the following estimate.

Theorem 9.3. Suppose that r, a_1, ..., a_N ∈ ℝ^+ and that b ≥ 0. For n > N, define

a_n = 1 + \max_{1 ≤ k ≤ n-1} \frac{b + a_{n-1} + a_{n-2} + \cdots + a_{n-k}}{k + r}.   (9.81)

Then

a_n ∼ (n/r)^{1/2}   as n → ∞.   (9.82)

Theorem 9.3 is proved by an involved induction on the behavior of the a_n.
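Recurrence (9.81) is straightforward to iterate with prefix sums, and doing so for one arbitrary choice of starting data (r = 1, b = 0, N = 1, a_1 = 1; our choice, purely for illustration) shows a_n/(n/r)^{1/2} drifting toward 1, consistent with (9.82):

```python
import math

def iterate(n_max, r=1.0, b=0.0, start=(1.0,)):
    # a_n = 1 + max_{1 <= k <= n-1} (b + a_{n-1} + ... + a_{n-k}) / (k + r),
    # computed with prefix sums; a[0] is a dummy so that a[n] = a_n
    a = [0.0] + list(start)
    prefix = [0.0]
    for v in a[1:]:
        prefix.append(prefix[-1] + v)        # prefix[j] = a_1 + ... + a_j
    for n in range(len(a), n_max + 1):
        best = max((b + prefix[n - 1] - prefix[n - 1 - k]) / (k + r)
                   for k in range(1, n))
        a.append(1.0 + best)
        prefix.append(prefix[-1] + a[-1])
    return a

a = iterate(2000)
print(a[2000] / math.sqrt(2000))             # slowly approaching 1
```

Each step costs O(n), so the whole run is O(n²); that is fine for checking the asymptotics numerically, though of course it proves nothing.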

10. Analytic generating functions

Combinatorialists use recurrence, generating functions, and such transformations as the
Vandermonde convolution; others, to my horror, use contour integrals, differential equations,
and other resources of mathematical analysis.
J. Riordan [336]

The use of analytic methods in combinatorics did horrify Riordan. They are widespread,
though, because of their utility, which even Riordan could not deny. About half of this chapter
is devoted to such methods, as they are extremely flexible and give very precise estimates.

10.1. Introduction and general estimates

This section serves as an introduction to most of the remaining sections of the paper,
which are concerned largely with the use of methods of complex variables in asymptotics.
Many of the results to be presented later can be used with little or no knowledge of analytic
functions. However, even some slight knowledge of complex analysis is helpful in getting an
understanding of the scope and limitations of the methods to be discussed. There are many
textbooks on analytic functions, such as [205, 364]. This chapter assumes that the reader
has some knowledge of this field, but not a deep one. It reviews the concepts that are most
relevant in asymptotic enumeration, and how they affect the estimates that can be obtained. It
is not a general introduction to the subject of complex analysis, and the choices of topics, their
ordering, and the decision of when to include proofs were all made with the goal of illustrating
how to use complex analysis in asymptotics.
There are several definitions of analytic functions, all equivalent. For our purposes, it will
be most convenient to call a function f(z) of one complex variable analytic in a connected open
set S ⊆ ℂ if in a small neighborhood of every point w ∈ S, f(z) has an expansion as a power
series

f(z) = \sum_{n=0}^{∞} a_n (z - w)^n,   a_n = a_n(w),   (10.1)
that converges. Practically all the functions encountered in asymptotic enumeration that are
analytic are analytic in a disk about the origin. A necessary and sufficient condition for f(z),
defined by a power series (6.1), to be analytic in a neighborhood of the origin is that |a_n| ≤ C^n
for some constant C > 0. Therefore there is an effective dichotomy, with common generating
functions either not converging near 0 and being only formal power series, or else converging

and being analytic.
A function f(z) is called meromorphic in S if it is analytic in S except at a (countable,
isolated) subset S' ⊆ S, and in a small neighborhood of every w ∈ S', f(z) has an expansion
of the form

f(z) = \sum_{n=-N(w)}^{∞} a_n (z - w)^n,   a_n = a_n(w).   (10.2)

Thus meromorphic functions can have poles, but nothing more. Alternatively, a function is
meromorphic in S if and only if it is the quotient of two functions analytic in S. In particular,
z^{-5} is meromorphic throughout the complex plane, but sin(1/z) is not. In general, functions
given by nice expressions are analytic away from obvious pathological points, since addition,
multiplication, division, and composition of analytic functions usually yield analytic or
meromorphic functions in the proper domains. Thus sin(1/z) is analytic throughout ℂ \ {0},
and so is z^{-5}, while exp(1/(1-z)) is analytic throughout ℂ \ {1}, but is not meromorphic
because of the essential singularity at z = 1. Not all functions that might seem smooth are
analytic, though, as neither f(z) = \bar{z} (\bar{z} denoting the complex conjugate of z) nor
f(z) = |z| is analytic anywhere. The smoothness condition imposed by (10.1) is very stringent.
Analytic continuation is an important concept. A function f(z) may be defined and analytic
in S, but there may be another function g(z) that is analytic in S' ⊃ S and such that g(z) =
f(z) for z ∈ S. In that case we say that g(z) provides an analytic continuation of f(z) to S',
and it is a theorem that this extension is unique. A simple example is provided by

\sum_{n=0}^{∞} z^n = \frac{1}{1-z}.   (10.3)

The power series on the left side converges only for |z| < 1, and defines an analytic function
there. On the other hand, (1-z)^{-1} is analytic throughout ℂ \ {1}, and so provides an analytic
continuation for the power series. This is a common phenomenon in asymptotic enumeration.
Typically a generating function will converge in a disk |z| < r, will have a singularity at r, but
will be continuable to a region of the form

{z : |z| < r + δ, |Arg(z - r)| > π/2 - ε}   (10.4)

for δ, ε > 0. When this happens, it can be exploited to provide better or easier estimates of the
coefficients, as is shown in Section 11.1. That section explains the reasons why continuation
to a region of the form (10.4) is so useful.


If f(z) is analytic in S, z is on the boundary of S, but f(z) cannot be analytically continued
to a neighborhood of z, we say that z is a singularity of f(z). Isolated singularities that are
not poles are called essential, so that z = 1 is an essential singularity of exp(1/(1-z)), but
not of 1/(1-z). (Note that z = 1 is an essential singularity of f(z) = (1-z)^{1/2} even though
f(1) = 0.) Throughout the rest of this chapter we will often refer to large singularities and
small singularities. These are not precise concepts, and are meant only to indicate how fast
the function f(z) grows as z → z_0, where z_0 is a singularity. If z_0 = 1, we say that (1-z)^{1/2},
log(1-z), (1-z)^{-10} have small singularities, since |f(z)| either decreases or grows at most like
a negative power of |1-z| as z → 1. On the other hand, exp(1/(1-z)) or exp((1-z)^{-1/5}) will
be said to have large singularities. Note that for z = 1 + iy, y ∈ ℝ, exp(1/(1-z)) is bounded,
so the choice of path along which the singularity is approached is important. In determining
the size of a singularity z_0, we will usually be concerned with real z_0 and generating functions
f(z) with nonnegative coefficients, and then usually will need to look only at z real, z → z_0.
When the function f(z) is entire (that is, analytic throughout ℂ), we will say that ∞ is a
singularity of f(z) (unless f(z) is a constant), and will use the large vs. small singularity
classification depending on how fast f(z) grows as |z| → ∞. The distinction between small
and large singularities is important in asymptotics because different methods are used in the
two cases.
A simple closed contour Γ in the complex plane is given by a continuous mapping γ :
[0, 1] → ℂ with the properties that γ(0) = γ(1), and that γ(s) ≠ γ(t) whenever 0 ≤ s < t ≤ 1
and either s ≠ 0 or t ≠ 1. Intuitively, Γ is a closed path in the complex plane that does not
intersect itself. For most applications that will be made in this chapter, simple closed contours
Γ will consist of line segments and sections of circles. For such contours it is easy to prove that
the complex plane is divided by the contour into two connected components, the inside and
the outside of the curve. This result is true for all simple closed curves by the Jordan curve
theorem, but this result is surprisingly hard to prove.
In asymptotic enumeration, the basic result about analytic functions is the Cauchy integral
formula for their coefficients.

Theorem 10.1. If f(z) is analytic in an open set S containing 0, and

f(z) = \sum_{n=0}^{∞} a_n z^n   (10.5)

in a neighborhood of 0, then for any n ≥ 0,

a_n = [z^n] f(z) = (2πi)^{-1} \int_Γ f(z) z^{-n-1} dz,   (10.6)

where Γ is any simple closed contour in S that contains the origin inside it and is positively
oriented (i.e., traversed in counterclockwise direction).

An obvious question is why should one use the integral formula (10.6) to determine the
coefficient a_n of f(z). After all, the series (10.5) shows that

n! a_n = \frac{d^n}{dz^n} f(z) \Big|_{z=0}.   (10.7)

Unfortunately the differentiation involved in (10.7) is hard to control. Derivatives involve
taking limits, and so even small changes in a function can produce huge changes in derivatives,
especially high order ones. The special properties of analytic functions are not reflected in the
formula (10.7), and for nonanalytic functions there is little that can be done. On the other
hand, Cauchy's integral formula (10.6) does use special properties of analytic functions, which
allow the determination of the coefficients of f(z) from the values of f(z) along any closed
path. This determination involves integration, so that even coarse information about the size
of f(z) can be used with it. The analytic methods that will be outlined exploit the freedom of
choice of the contour of integration to relate the behavior of the coefficients to the behavior of
the function near just one or sometimes a few points.
If the power series (10.5) converges for |z| < R, and for the contour Γ we choose a circle
z = r exp(iθ), 0 ≤ θ ≤ 2π, 0 < r < R, then the validity of (10.6) is easily checked by direct
computation, since the power series converges absolutely and uniformly so one can interchange
integration and summation. The strength of Cauchy's formula is in the freedom to choose the
contour Γ in different ways. This freedom yields most of the powerful results to be discussed
in the following sections, and later in this section we will outline how this is achieved. First
we discuss some simple applications of Theorem 10.1 obtained by choosing Γ to be a circle
centered at the origin.
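On the circle the right side of (10.6) reduces to (2π)^{-1} ∫₀^{2π} f(r e^{iθ}) r^{-n} e^{-inθ} dθ, and the trapezoid rule on this periodic integrand converges extremely fast. A small numerical sketch (function name and parameters are our own):

```python
import cmath

def coeff(f, n, r=0.5, M=1024):
    # [z^n] f(z) = (2*pi*i)^(-1) * integral of f(z) z^(-n-1) dz over |z| = r,
    # discretized at M equally spaced points on the circle; the aliasing
    # error is a_{n+M} r^M + a_{n+2M} r^{2M} + ..., negligible here
    total = 0j
    for k in range(M):
        z = r * cmath.exp(2j * cmath.pi * k / M)
        total += f(z) / z**n
    return total / M

print(coeff(lambda z: 1 / (1 - z), 7).real)   # ≈ 1.0, since every a_n = 1
print(coeff(cmath.exp, 5).real)               # ≈ 1/120, since [z^5] e^z = 1/5!
```

This is exactly the "choose Γ to be a circle" specialization of Theorem 10.1, and it already hints at why the freedom to pick r matters: r is chosen so that f is analytic on and inside the circle.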

Theorem 10.2. If f(z) is analytic in |z| < R, then for any r with 0 < r < R and any n ∈ ℤ,
n ≥ 0,

|[z^n] f(z)| ≤ r^{-n} \max_{|z|=r} |f(z)|.   (10.8)




The choice of Γ in Theorem 10.1 to be the circle of radius r gives Theorem 10.2. If f(z),
defined by (10.5), has a_n ≥ 0 for all n, then

|f(z)| ≤ \sum_{n=0}^{∞} a_n |z|^n = f(|z|),

and therefore we obtain Lemma 8.1 as an easy corollary to Theorem 10.2. The advantage of
Theorem 10.2 over Lemma 8.1 is that there is no requirement that a_n ≥ 0. The bound of
Theorem 10.2 is usually weaker than the correct value by a small multiplicative factor such as
n^{1/2}.
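To see that n^{1/2} factor concretely, take f(z) = e^z, so [z^n] f(z) = 1/n!. The bound (10.8) gives 1/n! ≤ r^{-n} e^r for every r > 0; the best choice is r = n, yielding (e/n)^n, which by Stirling's formula exceeds 1/n! by a factor of about (2πn)^{1/2}:

```python
import math

n = 20
true_coeff = 1 / math.factorial(n)     # [z^n] e^z = 1/n!
# bound (10.8): max_{|z|=r} |e^z| = e^r, so 1/n! <= e^r / r^n; r = n is optimal
bound = math.exp(n) / n**n
ratio = bound / true_coeff
print(ratio, math.sqrt(2 * math.pi * n))   # both ≈ 11.2 for n = 20
```

Optimizing the radius r in (10.8) in this way is the germ of the saddle point method discussed later.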
If f(z) is analytic in |z| < R, then for any δ > 0, f(z) is bounded in |z| < R - δ, and
so Theorem 10.2 shows that a_n = [z^n] f(z) satisfies |a_n| = O((R - δ)^{-n}). On the other hand,
if |a_n| = O(S^{-n}), then the power series (10.5) converges for |z| < S and defines an analytic
function in that disk. Thus we obtain the easy result that if f(z) is analytic in a disk |z| < R
but in no larger disk, then

\limsup_{n→∞} |a_n|^{1/n} = R^{-1}.   (10.9)




Example 10.1. Oscillating sequence. Consider the sequence, discussed already in Example
9.4, given by

a_n = \sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^k}{k!},   n = 0, 1, ....   (10.10)

The maximal term in the sum (10.10) is of order roughly exp(c n^{1/2}), so a_n cannot be much
larger. However, the sum (10.10) does not show that a_n cannot be extremely small. Could
we have |a_n| ≤ exp(-n) for all n, say? That this is impossible is obvious from (9.39), though,
by the argument above. The generating function f(z), given by Eq. (9.39), is analytic in
|z| < 1, but has an essential singularity at z = 1, so we immediately see that for any ε > 0,
|a_n| < (1 + ε)^n for all sufficiently large n, and that |a_n| > (1 - ε)^n for infinitely many n.
(More powerful methods for dealing with analytic generating functions, such as the saddle
point method to be discussed in Section 12, can be used to obtain the asymptotic relation for
a_n given in Example 9.4.)
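The first terms of (10.10) are easy to tabulate exactly; rational arithmetic avoids the heavy cancellation between the large terms of the sum that floating point would suffer (a minimal sketch):

```python
from fractions import Fraction
from math import comb, factorial

def a(n):
    # a_n = sum_{k=0}^{n} binom(n, k) * (-1)^k / k!   (eq. (10.10))
    return sum(Fraction((-1)**k * comb(n, k), factorial(k))
               for k in range(n + 1))

for n in range(8):
    print(n, a(n))   # 1, 0, -1/2, -2/3, -5/8, -7/15, ...
```

The values change sign only in long runs, and their slow decay is consistent with |a_n| exceeding (1 - ε)^n infinitely often, as the singularity argument above guarantees.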
 




There is substantial literature dealing with the growth rate of coefficients of analytic
functions. The book of Evgrafov [110] is a good reference for these results. However, the
estimates presented there are not too useful for us, since they apply to wide classes of often
pathological functions. In combinatorial enumeration we usually encounter much tamer
generating functions for which the crude bounds of [110] are obvious or easy to derive. Instead,
we need to use the tractable nature of the functions we encounter to obtain much more delicate
estimates.
The basic result, derived earlier, is that the power series coefficients a_n of a generating
function f(z), defined by (10.5), grow in absolute value roughly like R^{-n}, if f(z) is analytic
in |z| < R. A basic result about analytic functions says that if the Taylor series (10.5) of f(z)
converges for |z| < R but for every ε > 0 there is a z with |z| = R + ε such that the series
(10.5) diverges at z, then f(z) has a singularity z with |z| = R. Thus the exponential growth
rate of the a_n is determined by the distance from the origin of the nearest singularity of f(z),
with close singularities giving large coefficients. Sometimes it is not obvious what R is. When
the coefficients of f(z) are positive, as is common in combinatorial enumeration and analysis
of algorithms, there is a useful theorem of Pringsheim [364]:

Theorem 10.3. Suppose that f(z) is defined by Eq. (10.5) with a_n ≥ 0 for all n ≥ n_0, and
that the series (10.5) for f(z) converges for |z| < R but not for any |z| > R. Then z = R is a
singularity of f(z).

As we remarked above, the exponential growth rate of the a_n is determined by the distance
from the origin of the nearest singularity. Theorem 10.3 says that if the coefficients a_n are
nonnegative, it suffices to look along the positive real axis to determine the radius of convergence
R, which is also the desired distance to the singularity. There can be other singularities at the
same distance from the origin (for example, f(z) = (1 - z^2)^{-1} has singularities at z = ±1),
but Theorem 10.3 guarantees that none are closer to 0 than the positive real one.
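For instance, f(z) = (1 - z - z²)^{-1} has nonnegative coefficients (the Fibonacci numbers), so Theorem 10.3 locates the dominant singularity at the positive root 1/φ ≈ 0.618 of 1 - z - z² = 0, and (10.9) then predicts a_n^{1/n} → φ. A quick numerical check of this standard example:

```python
# coefficients of 1/(1 - z - z^2): a_0 = a_1 = 1, a_n = a_{n-1} + a_{n-2}
phi = (1 + 5**0.5) / 2          # 1/phi is the dominant (positive) singularity
a = [1, 1]
for _ in range(60):
    a.append(a[-1] + a[-2])
n = len(a) - 1
print(a[n] ** (1.0 / n), phi)   # n-th root of a_n approaches phi ≈ 1.618
```

The convergence of the n-th root is only algebraically fast, which is why later sections develop transfer methods that extract the subexponential factors as well.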
Since the singularities of smallest absolute value of a generating function exert the dominant
influence on the asymptotics of the corresponding sequence, they are called the dominant
singularities. In the most common case there is just one dominant singularity, and it is almost
always real. However, we will sometimes speak of a large set of singularities (such as the k
first-order poles in Theorem 9.1, which are at different distances from the origin) as dominant
ones. This allows some dominant singularities to be more influential than others.
Many techniques, including the elementary methods of Section 8, obtain bounds for
summatory functions of coefficients even when they cannot estimate the individual coefficients.
These methods succeed largely because they create a dominant singularity. If f(z) = \sum f_n z^n
converges for |z| < 1, diverges for |z| > 1, and has f_n ≥ 0, then the singularity at z = 1 is at
least as large as any other. However, there could be other singularities on |z| = 1 that are just
as large. (This holds for the functions f_2(z) and f_3(z) defined by (8.2) and (8.4).) When we
consider the generating function of \sum_{k ≤ n} f_k, though, we find that

h(z) = \sum_{n=0}^{∞} \left( \sum_{k=0}^{n} f_k \right) z^n = (1 - z)^{-1} f(z),   (10.11)

so that h(z) has a singularity at z = 1 that is much larger than any other one. That often

<< . .

 12
( 24)



. . >>

Copyright Design by: Sunlight webdesign