in terms of the weighted external path length of a binary tree. It is sufficiently explicit, though, to give a complete picture of the continuity, convexity, and oscillation properties of f_n. In some cases their solution simplifies dramatically.

Another class of quasi-linear recurrences involves the greatest integer function. Following [104], consider recurrences of the form

$$a(0) = 1, \qquad a(n) = \sum_{i=1}^{s} r_i\, a(\lfloor n/m_i \rfloor), \qquad n \ge 1, \tag{9.76}$$

where r_i > 0 for all i, and the m_i are integers, m_i ≥ 2 for all i. Let τ > 0 be the (unique) solution to

$$\sum_{i=1}^{s} r_i\, m_i^{-\tau} = 1. \tag{9.77}$$
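Equation (9.77) rarely has a closed form, but since the left side is strictly decreasing in τ, the root can be bracketed and found by bisection. A minimal sketch (hypothetical code, assuming Σ r_i > 1 so that a positive root exists):

```python
def solve_tau(r, m, tol=1e-12):
    """Solve sum(r_i * m_i**(-tau)) = 1 for tau by bisection.

    Assumes sum(r_i) > 1, so the strictly decreasing left side
    crosses 1 at a unique tau > 0.
    """
    f = lambda t: sum(ri * mi ** (-t) for ri, mi in zip(r, m)) - 1
    lo, hi = 0.0, 1.0
    while f(hi) > 0:          # grow the bracket until f(hi) <= 0
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# For r = (1, 1, 1), m = (2, 3, 6): 1/2 + 1/3 + 1/6 = 1, so tau = 1.
```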

If there is an integer d and integers u_i such that m_i = d^{u_i} for 1 ≤ i ≤ s, then lim a(n) n^{−τ} as n → ∞ does not exist, but the limit of a(d^k) d^{−kτ} as k → ∞ does exist. If there is no such d, then the limit of a(n) n^{−τ} as n → ∞ does exist, and can readily be computed. For example, when

$$a(n) = a(\lfloor n/2 \rfloor) + a(\lfloor n/3 \rfloor) + a(\lfloor n/6 \rfloor) \quad \text{for } n \ge 1,$$

this limit is 12(log 432)^{−1}. Convergence to the limit is extremely slow, as is shown in [104]. The

method of proof used in [104] is based on renewal theory. Several other methods for dealing

with recurrences of the type (9.76) are mentioned in [104] and the references listed in that

paper. There are connections to other recurrences that are linear in two variables, such as

$$b(m, n) = b(m, n-1) + b(m-1, n) + b(m-1, n-1), \qquad m, n \ge 1. \tag{9.78}$$
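The floor-function example above is easy to explore numerically. A memoized sketch (hypothetical code; here τ = 1 since 1/2 + 1/3 + 1/6 = 1, so a(n)/n should drift, very slowly, toward 12/log 432 ≈ 1.978):

```python
from functools import lru_cache
from math import log

@lru_cache(maxsize=None)
def a(n):
    """a(0) = 1, a(n) = a(n//2) + a(n//3) + a(n//6) for n >= 1."""
    if n == 0:
        return 1
    return a(n // 2) + a(n // 3) + a(n // 6)

limit = 12 / log(432)   # ~1.97759

# Convergence is extremely slow, so even at n = 10**6 the ratio
# a(n)/n is still only loosely close to the limit.
ratio = a(10**6) / 10**6
```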


Consider an infinite sequence of integers 2 ≤ a_1 < a_2 < ... such that

$$\sum_{j=1}^{\infty} a_j^{-1} \log a_j < \infty,$$

and define c(0) = 0,

$$c(n) = \sum_{j=1}^{\infty} c(\lfloor n/a_j \rfloor) + 1, \qquad n \ge 1. \tag{9.79}$$

If ρ is the (unique) positive solution to

$$\sum_{j=1}^{\infty} a_j^{-\rho} = 1,$$

then Erdős [103] showed that

$$c(n) \sim c\, n^{\rho} \quad \text{as } n \to \infty \tag{9.80}$$

for a positive constant c. Although the recurrence (9.79) is similar to that of Eq. (9.76), the results are different (no oscillations can occur for a recurrence given by Eq. (9.79)) and the methods are dissimilar.

Karp [221] considers recurrences of the type T(x) = a(x) + T(h(x)), where x is a nonnegative real variable, a(x) ≥ 0, and h(x) is a random variable, 0 ≤ h(x) ≤ x, with m(x) being the expectation of h(x). Such recurrences arise frequently in the analysis of algorithms, and Karp proves several theorems that bound the probability that T(x) is large. For example, he obtains the following result.

Theorem 9.2. Suppose that a(x) is a nondecreasing continuous function that is strictly increasing on {x : a(x) > 0}, and m(x) is a continuous function. Then for all x ∈ ℝ⁺ and k ∈ ℤ⁺,

$$\mathrm{Prob}\,(T(x) \ge u(x) + k\, a(x)) \le (m(x)/x)^k,$$

where u(x) is the unique least nonnegative solution to the equation u(x) = a(x) + u(m(x)).
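To make the bound concrete, consider a toy instance (my choice, not from the text): a(x) = 1 for x ≥ 1 and h(x) uniform on [0, x], so m(x) = x/2 and u(x) solves u(x) = 1 + u(x/2), i.e. u(x) counts the halvings needed to fall below 1. The theorem then bounds the tail of T(x) by 2^{−k}. A Monte Carlo sketch:

```python
import random

def T(x, rng):
    """Iterations of x -> Uniform(0, x) until x < 1 (a(x) = 1 per step)."""
    steps = 0
    while x >= 1:
        x = rng.random() * x
        steps += 1
    return steps

def u(x):
    """Least nonnegative solution of u(x) = 1 + u(x/2), with u(x) = 0 for x < 1."""
    steps = 0
    while x >= 1:
        x /= 2
        steps += 1
    return steps

rng = random.Random(0)
x0, k, trials = 1000.0, 3, 20000
tail = sum(T(x0, rng) >= u(x0) + k for _ in range(trials)) / trials
# Karp's bound: tail probability <= (m(x)/x)^k = 2**-3 = 0.125
```

The empirical tail comes out well below the guaranteed 0.125, as expected for a worst-case bound.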

Another result, proved in [176], is the following estimate.

Theorem 9.3. Suppose that r, a_1, ..., a_N ∈ ℝ⁺ and that b ≥ 0. For n > N, define

$$a_n = 1 + \max_{1 \le k \le n-1} \frac{b + a_{n-1} + a_{n-2} + \cdots + a_{n-k}}{k + r}. \tag{9.81}$$

Then

$$a_n \sim (n/r)^{1/2} \quad \text{as } n \to \infty. \tag{9.82}$$

Theorem 9.3 is proved by an involved induction on the behavior of the a_n.
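The recurrence (9.81) is cheap to iterate directly with running prefix sums, which makes the (n/r)^{1/2} growth easy to observe. A sketch (hypothetical code; the parameters N = 1, a_1 = 1, b = 0, r = 1 are arbitrary choices for illustration):

```python
def iterate(n_max, r=1.0, b=0.0, a_init=(1.0,)):
    """Iterate a_n = 1 + max_{1<=k<=n-1} (b + a_{n-1} + ... + a_{n-k})/(k + r)."""
    a = [None] + list(a_init)          # 1-based: a[1..N] are the initial values
    prefix = [0.0]
    for x in a[1:]:
        prefix.append(prefix[-1] + x)  # prefix[j] = a_1 + ... + a_j
    for n in range(len(a_init) + 1, n_max + 1):
        best = max((b + prefix[n - 1] - prefix[n - 1 - k]) / (k + r)
                   for k in range(1, n))
        a.append(1 + best)
        prefix.append(prefix[-1] + a[-1])
    return a

a = iterate(2000)
# Theorem 9.3 predicts a_n ~ (n/r)**0.5, so a[2000] / 2000**0.5 should be near 1.
```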


10. Analytic generating functions

Combinatorialists use recurrence, generating functions, and such transformations as the Vandermonde convolution; others, to my horror, use contour integrals, differential equations, and other resources of mathematical analysis.

J. Riordan [336]

The use of analytic methods in combinatorics did horrify Riordan. They are widespread, though, because of their utility, which even Riordan could not deny. About half of this chapter is devoted to such methods, as they are extremely flexible and give very precise estimates.

10.1. Introduction and general estimates

This section serves as an introduction to most of the remaining sections of the paper,

which are concerned largely with the use of methods of complex variables in asymptotics.

Many of the results to be presented later can be used with little or no knowledge of analytic

functions. However, even some slight knowledge of complex analysis is helpful in getting an

understanding of the scope and limitations of the methods to be discussed. There are many

textbooks on analytic functions, such as [205, 364]. This chapter assumes that the reader

has some knowledge of this field, but not a deep one. It reviews the concepts that are most relevant in asymptotic enumeration, and how they affect the estimates that can be obtained. It

is not a general introduction to the subject of complex analysis, and the choices of topics, their

ordering, and the decision of when to include proofs were all made with the goal of illustrating

how to use complex analysis in asymptotics.

There are several definitions of analytic functions, all equivalent. For our purposes, it will be most convenient to call a function f(z) of one complex variable analytic in a connected open set S ⊆ ℂ if in a small neighborhood of every point w ∈ S, f(z) has an expansion as a power series

$$f(z) = \sum_{n=0}^{\infty} a_n (z - w)^n, \qquad a_n = a_n(w), \tag{10.1}$$

that converges. Practically all the functions encountered in asymptotic enumeration that are analytic are analytic in a disk about the origin. A necessary and sufficient condition for f(z), defined by a power series (6.1), to be analytic in a neighborhood of the origin is that |a_n| ≤ C^n for some constant C > 0. Therefore there is an effective dichotomy, with common generating functions either not converging near 0 and being only formal power series, or else converging and being analytic.

A function f(z) is called meromorphic in S if it is analytic in S except at a (countable isolated) subset S′ ⊂ S, and in a small neighborhood of every w ∈ S′, f(z) has an expansion of the form

$$f(z) = \sum_{n=-N(w)}^{\infty} a_n (z - w)^n, \qquad a_n = a_n(w). \tag{10.2}$$

Thus meromorphic functions can have poles, but nothing more. Alternatively, a function is meromorphic in S if and only if it is the quotient of two functions analytic in S. In particular, z^{−5} is meromorphic throughout the complex plane, but sin(1/z) is not. In general, functions given by nice expressions are analytic away from obvious pathological points, since addition, multiplication, division, and composition of analytic functions usually yield analytic or meromorphic functions in the proper domains. Thus sin(1/z) is analytic throughout ℂ \ {0}, and so is z^{−5}, while exp(1/(1 − z)) is analytic throughout ℂ \ {1}, but is not meromorphic because of the essential singularity at z = 1. Not all functions that might seem smooth are analytic, though, as neither $f(z) = \bar{z}$ ($\bar{z}$ denoting the complex conjugate of z) nor f(z) = |z| is analytic anywhere. The smoothness condition imposed by (10.1) is very stringent.

Analytic continuation is an important concept. A function f(z) may be defined and analytic in S, but there may be another function g(z) that is analytic in S′ ⊃ S and such that g(z) = f(z) for z ∈ S. In that case we say that g(z) provides an analytic continuation of f(z) to S′, and it is a theorem that this extension is unique. A simple example is provided by

$$\sum_{n=0}^{\infty} z^n = \frac{1}{1 - z}. \tag{10.3}$$

The power series on the left side converges only for |z| < 1, and defines an analytic function there. On the other hand, (1 − z)^{−1} is analytic throughout ℂ \ {1}, and so provides an analytic continuation for the power series. This is a common phenomenon in asymptotic enumeration.

Typically a generating function will converge in a disk |z| < r, will have a singularity at r, but will be continuable to a region of the form

$$\{z : |z| < r + \delta,\ |\mathrm{Arg}(z - r)| > \pi/2 - \epsilon\} \tag{10.4}$$

for δ, ε > 0. When this happens, it can be exploited to provide better or easier estimates of the coefficients, as is shown in Section 11.1. That section explains the reasons why continuation to a region of the form (10.4) is so useful.


If f(z) is analytic in S, z is on the boundary of S, but f(z) cannot be analytically continued to a neighborhood of z, we say that z is a singularity of f(z). Isolated singularities that are not poles are called essential, so that z = 1 is an essential singularity of exp(1/(1 − z)), but not of 1/(1 − z). (Note that z = 1 is an essential singularity of f(z) = (1 − z)^{1/2} even though f(1) = 0.) Throughout the rest of this chapter we will often refer to large singularities and small singularities. These are not precise concepts, and are meant only to indicate how fast the function f(z) grows as z → z_0, where z_0 is a singularity. If z_0 = 1, we say that (1 − z)^{1/2}, log(1 − z), (1 − z)^{−10} have small singularities, since |f(z)| either decreases or grows at most like a negative power of |1 − z| as z → 1. On the other hand, exp(1/(1 − z)) or exp((1 − z)^{−1/5}) will be said to have large singularities. Note that for z = 1 + iy, y ∈ ℝ, exp(1/(1 − z)) is bounded, so the choice of path along which the singularity is approached is important. In determining the size of a singularity z_0, we will usually be concerned with real z_0 and generating functions f(z) with nonnegative coefficients, and then usually will need to look only at z real, $z \to z_0^-$.

When the function f(z) is entire (that is, analytic throughout ℂ), we will say that ∞ is a singularity of f(z) (unless f(z) is a constant), and will use the large vs. small singularity classification depending on how fast f(z) grows as |z| → ∞. The distinction between small and large singularities is important in asymptotics because different methods are used in the two cases.

A simple closed contour Γ in the complex plane is given by a continuous mapping γ : [0, 1] → ℂ with the properties that γ(0) = γ(1), and that γ(s) ≠ γ(t) whenever 0 ≤ s < t ≤ 1 and either s ≠ 0 or t ≠ 1. Intuitively, Γ is a closed path in the complex plane that does not intersect itself. For most applications that will be made in this chapter, simple closed contours Γ will consist of line segments and sections of circles. For such contours it is easy to prove that the complex plane is divided by the contour into two connected components, the inside and the outside of the curve. This result is true for all simple closed curves by the Jordan curve theorem, but this result is surprisingly hard to prove.

In asymptotic enumeration, the basic result about analytic functions is the Cauchy integral formula for their coefficients.

Theorem 10.1. If f(z) is analytic in an open set S containing 0, and

$$f(z) = \sum_{n=0}^{\infty} a_n z^n \tag{10.5}$$

in a neighborhood of 0, then for any n ≥ 0,

$$a_n = [z^n] f(z) = (2\pi i)^{-1} \oint_{\Gamma} f(z)\, z^{-n-1}\, dz, \tag{10.6}$$

where Γ is any simple closed contour in S that contains the origin inside it and is positively oriented (i.e., traversed in counterclockwise direction).

An obvious question is why should one use the integral formula (10.6) to determine the coefficient a_n of f(z). After all, the series (10.5) shows that

$$n!\, a_n = \left. \frac{d^n}{dz^n} f(z) \right|_{z=0}. \tag{10.7}$$

Unfortunately the differentiation involved in (10.7) is hard to control. Derivatives involve taking limits, and so even small changes in a function can produce huge changes in derivatives, especially high order ones. The special properties of analytic functions are not reflected in the formula (10.7), and for nonanalytic functions there is little that can be done. On the other hand, Cauchy's integral formula (10.6) does use special properties of analytic functions, which allow the determination of the coefficients of f(z) from the values of f(z) along any closed path. This determination involves integration, so that even coarse information about the size of f(z) can be used with it. The analytic methods that will be outlined exploit the freedom of choice of the contour of integration to relate the behavior of the coefficients to the behavior of the function near just one or sometimes a few points.

If the power series (10.5) converges for |z| < R, and for the contour Γ we choose a circle z = r exp(iθ), 0 ≤ θ ≤ 2π, 0 < r < R, then the validity of (10.6) is easily checked by direct computation, since the power series converges absolutely and uniformly so one can interchange integration and summation. The strength of Cauchy's formula is in the freedom to choose the contour Γ in different ways. This freedom yields most of the powerful results to be discussed in the following sections, and later in this section we will outline how this is achieved. First we discuss some simple applications of Theorem 10.1 obtained by choosing Γ to be a circle centered at the origin.
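Discretizing the circle |z| = r turns (10.6) into a finite Riemann sum, which already recovers coefficients to high accuracy. A sketch (hypothetical code), using f(z) = e^z, whose n-th coefficient is 1/n!:

```python
import cmath
import math

def cauchy_coeff(f, n, r=1.0, points=256):
    """Approximate [z^n] f(z) by a Riemann sum for (10.6) on the circle |z| = r."""
    total = 0.0 + 0.0j
    for m in range(points):
        z = r * cmath.exp(2j * math.pi * m / points)
        # Parametrizing z = r e^{i theta} gives dz = i z dtheta, so the
        # integrand f(z) z^{-n-1} dz / (2 pi i) becomes f(z) z^{-n} / points.
        total += f(z) * z ** (-n)
    return total / points

a5 = cauchy_coeff(cmath.exp, 5)   # should be close to 1/5! = 1/120
```

The aliasing error of this sum involves only coefficients a_{n+points}, a_{n+2·points}, ..., which are negligible here, so a few hundred points give near machine precision.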

Theorem 10.2. If f(z) is analytic in |z| < R, then for any r with 0 < r < R and any n ∈ ℤ, n ≥ 0,

$$|[z^n] f(z)| \le r^{-n} \max_{|z| = r} |f(z)|. \tag{10.8}$$


The choice of Γ in Theorem 10.1 to be the circle of radius r gives Theorem 10.2. If f(z), defined by (10.5), has a_n ≥ 0 for all n, then

$$|f(z)| \le \sum_{n=0}^{\infty} a_n |z|^n = f(|z|),$$

and therefore we obtain Lemma 8.1 as an easy corollary to Theorem 10.2. The advantage of Theorem 10.2 over Lemma 8.1 is that there is no requirement that a_n ≥ 0. The bound of Theorem 10.2 is usually weaker than the correct value by a small multiplicative factor such as n^{1/2}.
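For f(z) = e^z, for instance, the bound (10.8) is r^{−n} e^r, minimized at r = n, while the true coefficient is 1/n!; by Stirling's formula the bound overshoots by roughly the factor (2πn)^{1/2}. A quick check (hypothetical code):

```python
import math

n = 10
r = float(n)                      # radius minimizing r**(-n) * e**r
bound = r ** (-n) * math.exp(r)   # (10.8), since max of |e^z| on |z| = r is e^r
true = 1 / math.factorial(n)
ratio = bound / true
# Stirling: ratio is approximately sqrt(2*pi*n), about 7.93 for n = 10
```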

If f(z) is analytic in |z| < R, then for any δ > 0, f(z) is bounded in |z| < R − δ, and so Theorem 10.2 shows that a_n = [z^n] f(z) satisfies |a_n| = O((R − δ)^{−n}). On the other hand, if |a_n| = O(S^{−n}), then the power series (10.5) converges for |z| < S and defines an analytic function in that disk. Thus we obtain the easy result that if f(z) is analytic in a disk |z| < R but in no larger disk, then

$$\limsup_{n \to \infty} |a_n|^{1/n} = R^{-1}. \tag{10.9}$$
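As an illustration (my example, not from the text), take f(z) = 1/(1 − z − z²), the generating function of the Fibonacci numbers: its singularity nearest the origin is at z = (√5 − 1)/2 = 1/φ, so (10.9) predicts |a_n|^{1/n} → φ ≈ 1.618. A sketch:

```python
import math

def fib_coeffs(n_max):
    """Coefficients of 1/(1 - z - z^2): a_0 = a_1 = 1, a_n = a_{n-1} + a_{n-2}."""
    a = [1, 1]
    while len(a) <= n_max:
        a.append(a[-1] + a[-2])
    return a

a = fib_coeffs(500)
phi = (1 + math.sqrt(5)) / 2
root = math.exp(math.log(a[500]) / 500)   # |a_500|^{1/500}
# root should agree with phi to about three decimal places
```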

Example 10.1. Oscillating sequence. Consider the sequence, discussed already in Example 9.4, given by

$$a_n = \sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^k}{k!}, \qquad n = 0, 1, \ldots. \tag{10.10}$$

The maximal term in the sum (10.10) is of order roughly exp(c n^{1/2}), so a_n cannot be much larger. However, the sum (10.10) does not show that a_n cannot be extremely small. Could we have |a_n| ≤ exp(−n) for all n, say? That this is impossible is obvious from (9.39), though, by the argument above. The generating function f(z), given by Eq. (9.39), is analytic in |z| < 1, but has an essential singularity at z = 1, so we immediately see that for any ε > 0, |a_n| < (1 + ε)^n for all sufficiently large n, and that |a_n| > (1 − ε)^n for infinitely many n. (More powerful methods for dealing with analytic generating functions, such as the saddle point method to be discussed in Section 12, can be used to obtain the asymptotic relation for a_n given in Example 9.4.)

□
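The two-sided exponential bounds are easy to observe numerically. A sketch (hypothetical code) that evaluates (10.10) in exact rational arithmetic and checks that |a_k|^{1/k}, maximized over a window of k, sits near 1: the individual a_k here are small in absolute value, yet not exponentially small.

```python
from fractions import Fraction
from math import comb, factorial

def a(n):
    """Exact value of a_n = sum_{k=0}^n C(n,k) (-1)^k / k!  from (10.10)."""
    return sum(Fraction((-1) ** k * comb(n, k), factorial(k))
               for k in range(n + 1))

# Over a window of n the values a_n stay well below 1 in absolute value,
# but |a_n|^{1/n} remains close to 1, consistent with the discussion above.
window = [abs(float(a(n))) ** (1 / n) for n in range(40, 61)]
```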

There is substantial literature dealing with the growth rate of coefficients of analytic functions. The book of Evgrafov [110] is a good reference for these results. However, the estimates presented there are not too useful for us, since they apply to wide classes of often pathological functions. In combinatorial enumeration we usually encounter much tamer generating functions for which the crude bounds of [110] are obvious or easy to derive. Instead, we need to use the tractable nature of the functions we encounter to obtain much more delicate estimates.

The basic result, derived earlier, is that the power series coefficients a_n of a generating function f(z), defined by (10.5), grow in absolute value roughly like R^{−n}, if f(z) is analytic in |z| < R. A basic result about analytic functions says that if the Taylor series (10.5) of f(z) converges for |z| < R but for every ε > 0 there is a z with |z| = R + ε such that the series (10.5) diverges at z, then f(z) has a singularity z with |z| = R. Thus the exponential growth rate of the a_n is determined by the distance from the origin of the nearest singularity of f(z), with close singularities giving large coefficients. Sometimes it is not obvious what R is. When the coefficients of f(z) are positive, as is common in combinatorial enumeration and analysis of algorithms, there is a useful theorem of Pringsheim [364]:

Theorem 10.3. Suppose that f(z) is defined by Eq. (10.5) with a_n ≥ 0 for all n ≥ n_0, and that the series (10.5) for f(z) converges for |z| < R but not for any |z| > R. Then z = R is a singularity of f(z).

As we remarked above, the exponential growth rate of the a_n is determined by the distance from the origin of the nearest singularity. Theorem 10.3 says that if the coefficients a_n are nonnegative, it suffices to look along the positive real axis to determine the radius of convergence R, which is also the desired distance to the singularity. There can be other singularities at the same distance from the origin (for example, f(z) = (1 − z²)^{−1} has singularities at z = ±1), but Theorem 10.3 guarantees that none are closer to 0 than the positive real one.

Since the singularities of smallest absolute value of a generating function exert the dominant influence on the asymptotics of the corresponding sequence, they are called the dominant singularities. In the most common case there is just one dominant singularity, and it is almost always real. However, we will sometimes speak of a large set of singularities (such as the k first order poles in Theorem 9.1, which are at different distances from the origin) as dominant ones. This allows some dominant singularities to be more influential than others.

Many techniques, including the elementary methods of Section 8, obtain bounds for summatory functions of coefficients even when they cannot estimate the individual coefficients. These methods succeed largely because they create a dominant singularity. If $f(z) = \sum f_n z^n$ converges for |z| < 1, diverges for |z| > 1, and has f_n ≥ 0, then the singularity at z = 1 is at


least as large as any other. However, there could be other singularities on |z| = 1 that are just as large. (This holds for the functions f_2(z) and f_3(z) defined by (8.2) and (8.4).) When we consider the generating function of $\sum_{k \le n} f_k$, though, we find that

$$h(z) = \sum_{n=0}^{\infty} \left( \sum_{k=0}^{n} f_k \right) z^n = (1 - z)^{-1} f(z), \tag{10.11}$$

so that h(z) has a singularity at z = 1 that is much larger than any other one. That often
