
$$\sum_{k=0}^{n} (-1)^k \binom{n}{k} (n-k)! = n! \sum_{k=0}^{n} \frac{(-1)^k}{k!},$$

which is Eq. (1.1).

The formula (1.1) for derangements is easy to use because the terms decrease rapidly.

Moreover, this formula is exceptionally simple, largely because N_≥(Q) depends only on |Q|. In

general, the inclusion-exclusion principle produces complicated sums that are hard to estimate.

A frequently helpful tool is provided by the Bonferroni inequalities [81, 351]. One form of these

inequalities is that for any integer m ≥ 0,

$$N_{=}(R) \le \sum_{\substack{R \subseteq Q \subseteq P \\ |Q \setminus R| \le 2m}} (-1)^{|Q \setminus R|}\, N_{\ge}(Q) \tag{5.56}$$

and

$$N_{=}(R) \ge \sum_{\substack{R \subseteq Q \subseteq P \\ |Q \setminus R| \le 2m+1}} (-1)^{|Q \setminus R|}\, N_{\ge}(Q). \tag{5.57}$$

Thus in general

$$\left| N_{=}(R) - \sum_{\substack{R \subseteq Q \subseteq P \\ |Q \setminus R| \le k}} (-1)^{|Q \setminus R|}\, N_{\ge}(Q) \right| \le \sum_{\substack{R \subseteq Q \subseteq P \\ |Q \setminus R| = k+1}} N_{\ge}(Q). \tag{5.58}$$

These inequalities are frequently applied as n = |X| increases. Typically one chooses a k
that increases much more slowly than n, so that the individual terms N_≥(Q) in (5.58) can
be estimated asymptotically, and so that the interactions of the different properties counted by N_≥(Q)
are not too complicated to estimate. Bender [33] presents some useful general principles to be
used in such estimates (especially the asymptotically Poisson distribution that tends to occur
when the method is successful). We present an adaptation of an example from [33].
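The alternating bracketing asserted by the Bonferroni inequalities can be checked numerically. The following Python sketch (not part of the original argument) uses the derangement numbers of Eq. (1.1), taking R = ∅ so that |Q\R| = |Q| and N_≥(Q) = (n − |Q|)!:

```python
from math import comb, factorial

# Derangement numbers via the recurrence D_n = (n-1)(D_{n-1} + D_{n-2}).
def derangement(n):
    d0, d1 = 1, 0
    for k in range(2, n + 1):
        d0, d1 = d1, (k - 1) * (d1 + d0)
    return d1 if n >= 1 else d0

n = 10
exact = derangement(n)   # D_10 = 1334961

# Truncations of the inclusion-exclusion sum; with R = {} there are
# C(n, k) sets Q with |Q| = k, each with N_>=(Q) = (n - k)!.
for K in range(n + 1):
    partial = sum((-1) ** k * comb(n, k) * factorial(n - k)
                  for k in range(K + 1))
    if K % 2 == 0:
        assert partial >= exact   # even-order truncations overestimate
    else:
        assert partial <= exact   # odd-order truncations underestimate
```

The gap between consecutive truncations is exactly the first omitted block of terms, which is the content of the error bound (5.58).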


Example 5.7. Balls and cells. Given n labeled cells and m labeled balls, let a_h(m, n) be
the number of ways to place the balls into the cells so that exactly h of the cells are empty. We
consider h fixed. Let X be the set of ways of placing the balls into the cells (n^m in total), and
P = {P_1, . . . , P_n}, where P_i is the property that the i-th cell is empty. If R = {P_1, . . . , P_h},
then a_h(m, n) = \binom{n}{h} N_{=}(R). Now

$$N_{\ge}(Q) = (n - |Q|)^m, \tag{5.59}$$

so

$$\sum_{\substack{R \subseteq Q \subseteq P \\ |Q \setminus R| = t}} N_{\ge}(Q) = \binom{n-h}{t} (n - h - t)^m = n^m e^{-mh/n} (n e^{-m/n})^t (t!)^{-1} \left(1 + O((t^2+1) m n^{-2} + (t^2+1) n^{-1})\right), \tag{5.60}$$

provided t² ≤ n and m t² n^{−2} ≤ 1, say. In the range 0 ≤ t ≤ log n, n log n ≤ m ≤ n²(log n)^{−3},
we find that the right-hand side of (5.60) is

$$n^m e^{-mh/n} (n e^{-m/n})^t (t!)^{-1} \left(1 + O(m n^{-2} (\log n)^2)\right).$$

We now apply (5.58) with k = ⌊log n⌋, and obtain

$$a_h(m, n) = \binom{n}{h} N_{=}(R) \sim \binom{n}{h} n^m \exp(-mh/n - n e^{-m/n}) \sim n^m (h!)^{-1} (n e^{-m/n})^h \exp(-n e^{-m/n}) \tag{5.61}$$

as m, n → ∞, provided n log n ≤ m ≤ n²(log n)^{−3}. Since a_h(m, n) n^{−m} is the probability that
there are exactly h empty cells, the relation (5.61) (which we have established only for fixed h)
shows that this probability is asymptotically distributed like a Poisson random variable with
parameter n exp(−m/n).

Many additional results on random distributions of balls into cells, and references to the
extensive literature on this subject, can be found in [241].
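The exact count a_h(m, n) is available from inclusion-exclusion, so the Poisson limit of (5.61) can be illustrated directly. In the sketch below the parameters n = 50, m = 200 are illustrative choices (too small to lie in the asymptotic range of the theorem, but the Poisson shape is already visible):

```python
from math import comb, exp, factorial

# Exact a_h(m, n): choose the h empty cells, then place the balls in the
# remaining n - h cells leaving none of them empty (inclusion-exclusion).
def a(h, m, n):
    return comb(n, h) * sum((-1) ** t * comb(n - h, t) * (n - h - t) ** m
                            for t in range(n - h + 1))

n, m = 50, 200
assert sum(a(h, m, n) for h in range(n + 1)) == n ** m  # every placement counted once

lam = n * exp(-m / n)   # the Poisson parameter n exp(-m/n) of (5.61)
for h in range(3):
    prob = a(h, m, n) / n ** m
    assert abs(prob - lam ** h * exp(-lam) / factorial(h)) < 0.05
```

The first assertion is the exact partition of all n^m placements by their number of empty cells; the second shows the empty-cell count is already close to Poisson for modest parameters.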

There are generalizations of the Bonferroni inequalities, as well as other methods for estimating N_=(R) by linear combinations
of the N_≥(Q). Recent approaches and references (phrased in probabilistic terms) can be found
in [152]. For bivariate Bonferroni inequalities (where one asks for the probability that at least
one of two sets of events occurs) see [153, 249].

The Chen-Stein method [75] is a powerful technique that is often used in place of the
principle of inclusion-exclusion, especially in the probabilistic literature. Recent references are
[17, 27].


5.3. Euler-Maclaurin and Poisson summation formulas

Section 5.0 showed that sums can be successfully approximated by integrals if the summands
are all small compared to the total sum and vary smoothly as functions of the summation
index. The approximation (5.29), though crude, is useful in a wide variety of cases. Sometimes,
though, more accurate approximations are needed. An obvious way to obtain them is to improve the bound
(5.29). If g(x) is really smooth, we can expect that the difference

$$a_n - \int_n^{n+1} g(u)\,du$$

will vary in a regular way with n. This is indeed the case, and it is exploited by the Euler-

Maclaurin summation formula. It can be found in many books, such as [63, 175, 297, 298].

There are many formulations, but they do not differ much.

Euler-Maclaurin summation formula. Suppose that g(x) has 2m continuous derivatives

in [a, b], a, b ∈ Z. Then

$$\sum_{k=a}^{b} g(k) = \int_a^b g(x)\,dx + \sum_{r=1}^{m} \frac{B_{2r}}{(2r)!} \left( g^{(2r-1)}(b) - g^{(2r-1)}(a) \right) + \frac{1}{2}\{g(a) + g(b)\} + R_m, \tag{5.62}$$

where

$$R_m = -\int_a^b \frac{B_{2m}(x - \lfloor x \rfloor)}{(2m)!}\, g^{(2m)}(x)\,dx, \tag{5.63}$$

and so

$$|R_m| \le \int_a^b \frac{|B_{2m}(x - \lfloor x \rfloor)|}{(2m)!}\, |g^{(2m)}(x)|\,dx. \tag{5.64}$$

In the above formulas, the B_n(x) denote the Bernoulli polynomials, defined by

$$\frac{z e^{xz}}{e^z - 1} = \sum_{n=0}^{\infty} B_n(x)\, \frac{z^n}{n!}. \tag{5.65}$$

The B_n are the Bernoulli numbers, defined by

$$\frac{z}{e^z - 1} = \sum_{n=0}^{\infty} B_n\, \frac{z^n}{n!}, \tag{5.66}$$

so that B_n = B_n(0), and

$$B_0 = 1, \quad B_1 = -1/2, \quad B_2 = 1/6, \quad B_3 = B_5 = B_7 = \cdots = 0, \tag{5.67}$$
$$B_4 = -1/30, \quad B_6 = 1/42, \quad B_8 = -1/30, \ldots.$$


It is known that

$$|B_{2m}(x - \lfloor x \rfloor)| \le |B_{2m}|, \tag{5.68}$$

so we can simplify (5.64) to

$$|R_m| \le |B_{2m}|\, ((2m)!)^{-1} \int_a^b |g^{(2m)}(x)|\,dx. \tag{5.69}$$
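Since R_m in (5.63) involves g^{(2m)}, the formula (5.62) is exact whenever g is a polynomial of degree below 2m. The following sketch (an illustration, not from the text) verifies this for g(x) = x³ with m = 2, where the answer is the classical (n(n+1)/2)²:

```python
from fractions import Fraction

# Euler-Maclaurin (5.62) for g(x) = x^3 on [0, n] with m = 2: g^{(4)} = 0,
# so by (5.63) the remainder R_2 vanishes and the formula is exact.
B2, B4 = Fraction(1, 6), Fraction(-1, 30)   # Bernoulli numbers from (5.67)

def em_sum_of_cubes(n):
    integral = Fraction(n ** 4, 4)        # \int_0^n x^3 dx
    corr2 = B2 / 2 * (3 * n ** 2 - 0)     # (B_2/2!)(g'(n) - g'(0))
    corr4 = B4 / 24 * (6 - 6)             # (B_4/4!)(g'''(n) - g'''(0)) = 0
    endpoints = Fraction(n ** 3 + 0, 2)   # (1/2){g(a) + g(b)}
    return integral + corr2 + corr4 + endpoints

for n in range(1, 30):
    assert em_sum_of_cubes(n) == sum(k ** 3 for k in range(n + 1))
```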

There are many applications of the Euler-Maclaurin formula. One of the most frequently

cited ones is to estimate factorials.

Example 5.8. Stirling's formula. We transform the product in the definition of n! into a sum
by taking logarithms, and find that for g(x) = log x and m = 1 we have

$$\log n! = \sum_{k=1}^{n} \log k = \int_1^n (\log x)\,dx + \frac{1}{2} \log n + \frac{B_2}{2}\left(\frac{1}{n} - 1\right) + R_1, \tag{5.70}$$

where

$$R_1 = \int_1^n \frac{B_2(x - \lfloor x \rfloor)}{2x^2}\,dx = C + O(n^{-1}) \tag{5.71}$$

for

$$C = \int_1^\infty \frac{B_2(x - \lfloor x \rfloor)}{2x^2}\,dx. \tag{5.72}$$

Therefore

$$\log n! = n \log n - n + \frac{1}{2} \log n + C + \frac{11}{12} + O(n^{-1}), \tag{5.73}$$

which gives

$$n! \sim e^{C + 11/12}\, n^{1/2}\, n^n e^{-n} \quad \text{as } n \to \infty. \tag{5.74}$$

To obtain Stirling's formula (4.1), we need to show that e^{C + 11/12} = (2π)^{1/2}. This can be done in
several ways (cf. [63]). In Examples 12.1, 12.4, and 12.5 we will see other methods of deriving
(4.1).
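The expansion (5.73), with the constant evaluated as log √(2π), is easy to confirm numerically; the next correction in the expansion is 1/(12n), so the error should be positive and O(1/n). A small sketch:

```python
from math import lgamma, log, pi

# (5.73) with C + 11/12 replaced by its value log sqrt(2*pi).
def stirling_log_factorial(n):
    return n * log(n) - n + 0.5 * log(n) + 0.5 * log(2 * pi)

for n in (10, 100, 1000):
    err = lgamma(n + 1) - stirling_log_factorial(n)   # lgamma(n+1) = log n!
    assert 0 < err < 1 / (10 * n)                     # err is about 1/(12n)
```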

There is no requirement that the function g(x) in the Euler-Maclaurin formula be positive.

That was not even needed for the crude approximation of a sum by an integral given in

Section 5.0. The function g(x) can even take complex values. (After all, Eq. (5.62) is an

identity!) However, in most applications this formula is used to derive an asymptotic estimate

with a small error term. For that, some high order derivatives have to be small, which means

that g(x) cannot change sign too rapidly. In particular, the Euler-Maclaurin formula usually

is not very useful when the g(k) alternate in sign. In those cases one can sometimes use


the differencing trick (cf. Example 5.5) and apply the Euler-Maclaurin formula to h(k) =
g(2k) + g(2k + 1). There is also Boole's summation formula for alternating sums that can be
applied. (See Chapter 2, §3 and Chapter 6, §6 of [298], for example.) Generalizations to other
periodic patterns in the coefficients have been derived by Berndt and Schoenfeld [47].

The bounds for the error term R_m in the Euler-Maclaurin formula that were stated above
can often be improved by using special properties of the function g(x). For example, when
g(x) is analytic in x, there are contour integrals for R_m that sometimes give good estimates
(cf. [315]).

The Poisson summation formula states that

$$\sum_{n=-\infty}^{\infty} f(n + a) = \sum_{m=-\infty}^{\infty} \exp(2\pi i m a) \int_{-\infty}^{\infty} f(y) \exp(-2\pi i m y)\,dy \tag{5.75}$$

for “nice” functions f(x). The functions for which (5.75) holds include all continuous f(x) for
which ∫|f(x)|dx < ∞, which are of bounded variation, and for which Σ_n f(n + a) converges
for all a. For weaker conditions that ensure validity of (5.75), we refer to [63, 365]. The

Poisson summation formula often converts a slowly convergent sum into a rapidly convergent
one. Generally it is not as widely applicable as the Euler-Maclaurin formula, as it requires
extreme regularity for the Fourier coefficients to decrease rapidly. On the other hand, it can
be applied in some situations that are not covered by the Euler-Maclaurin formula, including
some where the coefficients vary in sign.

Example 5.9. Sum of exp(−αk²). We consider again the function h(α) of Example 5.3. We
let f(x) = exp(−αx²), a = 0. Eq. (5.75) then gives

$$h(\alpha) = \sum_{n=-\infty}^{\infty} \exp(-\alpha n^2) = (\pi/\alpha)^{1/2} \sum_{m=-\infty}^{\infty} \exp(-\pi^2 m^2 / \alpha). \tag{5.76}$$

This is an identity, and the sum on the right-hand side above converges rapidly for small
α. Many applications require the evaluation of the sum on the left as α tends to 0.
Eq. (5.76) offers a method of converting a slowly convergent sum into a tractable one, whose
asymptotic behavior is explicit.
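Because (5.76) is an identity, it can be verified to machine precision. In the sketch below, note how many terms each side needs: for small α the left side requires on the order of α^{−1/2} terms while the right side is essentially a single term.

```python
from math import exp, pi, sqrt

# Both sides of the theta identity (5.76).
def lhs(alpha, terms=2000):
    return 1 + 2 * sum(exp(-alpha * n * n) for n in range(1, terms))

def rhs(alpha, terms=10):
    return sqrt(pi / alpha) * (1 + 2 * sum(exp(-pi * pi * m * m / alpha)
                                           for m in range(1, terms)))

for alpha in (0.5, 0.1, 0.01):
    assert abs(lhs(alpha) - rhs(alpha)) < 1e-9
```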

5.4. Bootstrapping and other basic methods

Bootstrapping is a useful technique that uses asymptotic information to obtain improved
estimates. Usually we start with some rough bounds, and by combining them with the relations
defining the function or sequence that we are studying, we obtain better bounds.


Example 5.10. Approximation of Bell numbers. Example 5.4 obtained the asymptotics of
the Bell numbers B_n, but only in terms of w, the solution to Eq. (5.41). We now show how
to obtain asymptotic expansions for w. As n increases, so does w. Therefore log(w + 1) also
increases, and so w < n for large n. Thus

$$n = w \log(w + 1) < w \log(n + 1),$$

and so

$$n (\log(n + 1))^{-1} < w < n. \tag{5.77}$$

Therefore

$$\log(w + 1) = \log n + O(\log \log n), \tag{5.78}$$

and so

$$w = \frac{n}{\log(w + 1)} = \frac{n}{\log n} + O\left( \frac{n \log \log n}{(\log n)^2} \right). \tag{5.79}$$

To go further, note that by (5.79),

$$\log(w + 1) = \log\left( \frac{n}{\log n} \left( 1 + O\left( \frac{\log \log n}{\log n} \right) \right) \right) = \log n - \log \log n + O((\log \log n)(\log n)^{-1}), \tag{5.80}$$

and so by applying this estimate in Eq. (5.41), we obtain

$$w = \frac{n}{\log n} + \frac{n \log \log n}{(\log n)^2} + \frac{n (\log \log n)^2}{(\log n)^3} + O\left( \frac{n \log \log n}{(\log n)^3} \right). \tag{5.81}$$

This procedure can be iterated indefinitely to obtain expansions for w with error terms
O(n(log n)^{−α}) for as large a value of α as desired.
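The successive bootstrapped approximations can be compared against a direct numerical solution of w log(w + 1) = n. A small sketch (the choice n = 10⁸ is illustrative):

```python
from math import log

# Solve w log(w+1) = n by bisection; the left side is increasing in w.
n = 10 ** 8
lo, hi = 1.0, float(n)
for _ in range(100):
    mid = (lo + hi) / 2
    if mid * log(mid + 1) < n:
        lo = mid
    else:
        hi = mid
w = (lo + hi) / 2

L, LL = log(n), log(log(n))
w1 = n / L                                           # leading term, (5.79)
w3 = n / L + n * LL / L ** 2 + n * LL ** 2 / L ** 3  # three terms, (5.81)
assert abs(w3 - w) < abs(w1 - w)    # more terms give a smaller error
assert abs(w3 - w) / w < 0.01       # error is O(n log log n / (log n)^3)
```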

In the above example, w can also be estimated by other methods, such as the Lagrange-Bürmann
inversion formula (cf. Example 6.7). However, the bootstrapping method is much
more widely applicable and easy to apply. It will be used several times later in this chapter.

5.5. Estimation of integrals

In some of the examples in the preceding sections integrals were used to approximate

sums. The integrals themselves were always easy to evaluate. That is true in most asymptotic

enumeration problems, but there do occur situations where the integrals are more complicated.

Often the hard integrals are of the form

$$f(x) = \int_\alpha^\beta g(t) \exp(x h(t))\,dt, \tag{5.82}$$


and it is necessary to estimate the behavior of f(x) as x → ∞, with the functions g(t), h(t) and
the limits of integration α and β held fixed. There is a substantial theory of such integrals, and
good references are [54, 63, 100, 315]. The basic technique is usually referred to as Laplace's
method, and consists of approximating the integrand by simpler functions near its maxima.

This approach is similar to the one that is discussed at length in Section 5.1 for estimating
sums. The contributions of the approximations are then evaluated, and it is shown that the
remaining ranges of integration, away from the maxima, contribute a negligible amount. By
breaking up the interval of integration we can write the integral (5.82) as a sum of several
integrals of the same type, with the property that there is a unique maximum of the integrand
and that it occurs at one of the endpoints. When x > 0, the maximum of the integrand occurs
for large x at the maximum of h(t) (except in rare cases where g(t) = 0 for that t for which

h(t) is maximized). Suppose that the maximum occurs at t = α. It often happens that

$$h(t) = h(\alpha) - c(t - \alpha)^2 + O(|t - \alpha|^3) \tag{5.83}$$

for α ≤ t ≤ β and c = −h''(α)/2 > 0, and then one obtains the approximation

$$f(x) \sim g(\alpha) \exp(x h(\alpha)) \left[ -\pi/(2 x h''(\alpha)) \right]^{1/2} \quad \text{as } x \to \infty, \tag{5.84}$$

provided g(α) ≠ 0. For precise statements of even more general and rigorous results, see for
example Chapter 3, §7 of [315]. Those results cover functions h(t) that behave near t = α like
h(α) − c(t − α)^µ for any µ > 0.
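The endpoint approximation can be checked numerically. In the sketch below the integrand is the illustrative choice g(t) = cos t, h(t) = −t² on [0, 1], so the maximum is at the endpoint α = 0 with c = −h''(0)/2 = 1, and Laplace's method predicts f(x) ∼ (1/2)(π/x)^{1/2}:

```python
from math import cos, exp, pi, sqrt

# f(x) = \int_0^1 cos(t) exp(-x t^2) dt, by the midpoint rule.
def f(x, steps=200_000):
    dt = 1.0 / steps
    return dt * sum(cos((i + 0.5) * dt) * exp(-x * ((i + 0.5) * dt) ** 2)
                    for i in range(steps))

for x in (100.0, 400.0):
    predicted = 0.5 * sqrt(pi / x)          # endpoint Laplace approximation
    assert abs(f(x) - predicted) / predicted < 1 / x  # rel. error ~ 1/(4x)
```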

When the integral is highly oscillatory, as happens when h(t) = iu(t) for a real-valued
function u(t), still other techniques (such as the stationary phase method) are used. We
will not present them here, and refer to [54, 63, 100, 315] for descriptions and applications.
In Section 12.1 we will discuss the saddle point method, which is related to both Laplace's
method and the stationary phase method.

Laplace integrals

$$F(x) = \int_0^\infty f(t) \exp(-xt)\,dt \tag{5.85}$$

can often be approximated by integration by parts. We have (under suitable conditions on
f(t))

$$F(x) = x^{-1} f(0) + x^{-1} \int_0^\infty f'(t) \exp(-xt)\,dt = x^{-1} f(0) + x^{-2} f'(0) + x^{-2} \int_0^\infty f''(t) \exp(-xt)\,dt, \tag{5.86}$$


and so on. There are general results, usually associated with the name of Watson's Lemma,
for deriving such expansions. For references, see [100, 315].
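The two-term expansion produced by (5.86) is easy to test numerically. The sketch below uses the illustrative choice f(t) = 1/(1 + t), for which f(0) = 1 and f'(0) = −1, so F(x) = 1/x − 1/x² + O(x^{−3}):

```python
from math import exp

# F(x) = \int_0^\infty exp(-x t)/(1+t) dt, by the midpoint rule on [0, 1];
# the discarded tail beyond t = 1 is smaller than e^{-x}.
def F(x, steps=400_000, cutoff=1.0):
    dt = cutoff / steps
    return dt * sum(exp(-x * (i + 0.5) * dt) / (1 + (i + 0.5) * dt)
                    for i in range(steps))

for x in (20.0, 50.0):
    two_terms = 1 / x - 1 / x ** 2
    assert abs(F(x) - two_terms) < 3 / x ** 3   # next term is 2/x^3
```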

6. Generating functions

6.1. A brief overview

Generating functions are a wonderfully powerful and versatile tool, and most asymptotic

estimates are derived from them. The most common ones in combinatorial enumeration are

the ordinary and exponential generating functions. If a 0 , a1 , . . ., is any sequence of real or

complex numbers, the ordinary generating function is

$$f(z) = \sum_{n=0}^{\infty} a_n z^n, \tag{6.1}$$

while the exponential generating function is

$$\hat{f}(z) = \sum_{n=0}^{\infty} \frac{a_n z^n}{n!}. \tag{6.2}$$
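As a small illustration of coefficient arithmetic with exponential generating functions (a sketch, using the standard fact that exp(−z)/(1 − z) is the EGF of the derangement numbers, consistent with the sum in Eq. (1.1)):

```python
from math import factorial

# Multiply the truncated series for exp(-z) and 1/(1-z); then n! times the
# n-th coefficient of the product exp(-z)/(1-z) is the n-th derangement number.
N = 8
exp_neg = [(-1) ** k / factorial(k) for k in range(N)]   # exp(-z)
geom = [1.0] * N                                         # 1/(1-z)
prod = [sum(exp_neg[j] * geom[n - j] for j in range(n + 1)) for n in range(N)]
derangements = [round(factorial(n) * prod[n]) for n in range(N)]
assert derangements == [1, 0, 1, 2, 9, 44, 265, 1854]
```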
