
As was noted before, p(n) is larger than any of the other terms, but not by enough to dominate

the sum. We therefore try the other approaches that were listed at the beginning of this section.

We use only the estimate (1.6). Since (1 − x)^{1/2} < 1 − x/2 for 0 ≤ x ≤ 1, we find that for large n,

  Σ_{k < n − n^{2/3}} p(k) ≤ n p(n − n^{2/3})
                           ≤ exp(C(n − n^{2/3})^{1/2})                    (5.23)
                           ≤ exp(Cn^{1/2} − Cn^{1/6}/2)
                           = O(p(n) exp(−Cn^{1/6}/3)) .

Thus most of the values of k contribute a negligible amount to the sum. For k = n − j, 0 ≤ j ≤ n^{2/3}, we find that

  p(n − j)/p(n) = (1 + O(n^{−1/3})) exp(C(n − j)^{1/2} − Cn^{1/2}) .

Since

  (n − j)^{1/2} = n^{1/2} − jn^{−1/2}/2 + O(j^2 n^{−3/2}) ,

  p(n − j)/p(n) = exp(−Cjn^{−1/2}/2 + O(n^{−1/6}))                        (5.24)
                = (1 + O(n^{−1/6})) exp(−Cjn^{−1/2}/2) .


Thus the ratios p(n − j)/p(n) decrease geometrically, and so

  p(n)^{−1} Σ_{0 ≤ j ≤ n^{2/3}} p(n − j) = (1 + O(n^{−1/6})) / (1 − exp(−Cn^{−1/2}/2))    (5.25)
                                         = 2C^{−1} n^{1/2} (1 + O(n^{−1/6})) .

Therefore, combining all the estimates,

  T_n = Σ_{k=1}^{n} p(k) = (1 + O(n^{−1/6})) e^{Cn^{1/2}} / (2 · C · 3^{1/2} · n^{1/2}) .    (5.26)

The O(n^{−1/6}) error term above can easily be improved with a little more care to O(n^{−1/2}), even if we continue to rely only on (1.6).
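These estimates are easy to test numerically. The following sketch (a rough check, not part of the derivation; it assumes the Hardy–Ramanujan value C = π(2/3)^{1/2} for the constant of (1.6)) computes p(n) exactly by Euler's pentagonal-number recurrence and compares T_n/p(n) with the factor 2C^{−1}n^{1/2} from (5.25):

```python
import math

def partitions_upto(n):
    """Exact values of the partition function p(0), ..., p(n) via
    Euler's pentagonal-number recurrence."""
    p = [1] + [0] * n
    for m in range(1, n + 1):
        total, k = 0, 1
        while True:
            g1 = k * (3 * k - 1) // 2   # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            if g1 > m:
                break
            sign = 1 if k % 2 == 1 else -1
            total += sign * p[m - g1]
            if g2 <= m:
                total += sign * p[m - g2]
            k += 1
        p[m] = total
    return p

n = 2000
C = math.pi * math.sqrt(2.0 / 3.0)   # assumed Hardy-Ramanujan constant of (1.6)
p = partitions_upto(n)
ratio = (sum(p[1:]) / p[n]) / (2 * math.sqrt(n) / C)   # compare with (5.25)
print(ratio)
```

The ratio should be near 1, approaching it only at the slow rate indicated by the O(n^{−1/6}) error term.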

Before presenting further examples, we discuss some of the problems that can arise even

in the simple setting of estimating positive sums. We then introduce the basic technique of

approximating sums by integrals.

The lack of uniform convergence is a frequent cause of incorrect estimates. If a_n(k) ∼ c_n(k) for each k as n → ∞, it does not necessarily follow that

  b_n = Σ_k a_n(k) ∼ Σ_k c_n(k) as n → ∞ .    (5.27)

A simple counterexample is given by a_n(k) = \binom{n}{k} and c_n(k) = \binom{n}{k}(1 + k/n). To conclude that (5.27) holds, it is usually necessary to know that a_n(k) ∼ c_n(k) as n → ∞ uniformly in k. Such uniform convergence does hold if we replace c_n(k) in the counterexample above by c_n(k) = \binom{n}{k}(1 + k/n^2), for example.

There is a general principle that sums of terms that vary smoothly with the index of summation should be replaced by integrals, so that for α > 0, say,

  Σ_{k=1}^{n} k^{α} ∼ ∫_1^{n+1} u^{α} du as n → ∞ .    (5.28)

The advantage of replacing a sum by an integral is that integrals are usually much easier to handle. Many more closed-form expressions are available for definite and indefinite integrals than for sums. We will discuss extensions of this principle of replacing sums by integrals further in Section 5.3, when we present the Euler-Maclaurin summation formula. Usually, though, we do not need anything sophisticated, and the application of the principle to situations like that of (5.28) is easy to justify. If a_n = g(n) for some function g(x) of a real argument x, then

  | g(n) − ∫_n^{n+1} g(u) du | ≤ max_{n ≤ u ≤ n+1} |g(u) − g(n)| ,    (5.29)


and so

  | Σ_n g(n) − ∫ g(u) du | ≤ Σ_n max_{n ≤ u ≤ n+1} |g(u) − g(n)| ,    (5.30)

where the integral is over [a, b + 1] if the sum is over a ≤ n ≤ b, a, b ∈ Z. If g(u) is continuously differentiable, then |g(u) − g(n)| ≤ max_{n ≤ v ≤ n+1} |g′(v)| for n ≤ u ≤ n + 1. This gives the estimate

  | Σ_{n=a}^{b} g(n) − ∫_a^{b+1} g(u) du | ≤ Σ_{n=a}^{b} max_{n ≤ v ≤ n+1} |g′(v)| .    (5.31)

Often one can find a simple explicit function h(w) such that |g′(v)| ≤ h(w) for any v and w with |v − w| ≤ 1, in which case Eq. (5.31) can be replaced by

  | Σ_{n=a}^{b} g(n) − ∫_a^{b+1} g(u) du | ≤ ∫_a^{b+1} h(v) dv .    (5.32)

For good estimates to be obtained from integral approximations to sums, it is usually necessary

for individual terms to be small compared to the sum.
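As a quick illustration of (5.28), one can compare the power sum with the corresponding integral; this sketch just watches the relative error shrink as n grows:

```python
def power_sum_vs_integral(n, alpha):
    """Ratio of sum_{k=1}^n k^alpha to the integral of u^alpha over [1, n+1],
    as in (5.28); the ratio should tend to 1 as n grows."""
    s = sum(k ** alpha for k in range(1, n + 1))
    integral = ((n + 1) ** (alpha + 1) - 1) / (alpha + 1)
    return s / integral

for n in [100, 10_000, 1_000_000]:
    print(n, power_sum_vs_integral(n, 0.5))
```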

Example 5.3. Sum of exp(−αk^2). In the final stages of an asymptotic approximation one often encounters sums of the form

  h(α) = Σ_{k=−∞}^{∞} exp(−αk^2) ,   α > 0 .    (5.33)

There is no closed form for the indefinite integral of exp(−αu^2) (it is expressible in terms of the Gaussian error function only), but there is the famous evaluation of the definite integral

  ∫_{−∞}^{∞} exp(−αu^2) du = (π/α)^{1/2} .    (5.34)

Thus it is natural to approximate h(α) by (π/α)^{1/2}. If g(u) = exp(−αu^2), then g′(u) = −2αu g(u), and so for n ≥ 0,

  max_{n ≤ v ≤ n+1} |g′(v)| ≤ 2α(n + 1) g(n) .    (5.35)

For the integral in Eq. (5.30) to yield a good approximation to the sum we must show that the error term is smaller than the integral. The largest term in the sum occurs at n = 0 and equals 1. The error bound (5.35) that comes from approximating g(0) = 1 by the integral of g(u) over 0 ≤ u ≤ 1 is 2α. Therefore we cannot expect to obtain a good estimate unless α → 0. We find that

  2α(n + 1) g(n) ≤ 4α u g(u/2) for n ≥ 1, n ≤ u ≤ n + 1 ,


so (integral approximation again!)

  Σ_{n=1}^{∞} 2α(n + 1) g(n) ≤ 4α ∫_1^{∞} u g(u/2) du
                             ≤ 4α ∫_0^{∞} u g(u/2) du = (8α)^{1/2} .    (5.36)

Therefore, taking into account the error for n = 0 which was not included in the bound (5.36), we have

  h(α) = Σ_{n=−∞}^{∞} exp(−αn^2) = ∫_{−∞}^{∞} exp(−αu^2) du + O(α^{1/2} + α)    (5.37)
       = (π/α)^{1/2} + O(α^{1/2}) as α → 0+ .

For this sum much more precise estimates are available, as will be shown in Example 5.9. For many purposes, though, (5.37) is sufficient.
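A short numerical check of (5.37); the truncation point kmax is an arbitrary implementation choice, taken large enough that the discarded terms are negligible:

```python
import math

def h(alpha, kmax=100_000):
    """Truncated version of h(alpha) = sum over all integers k of exp(-alpha k^2)."""
    # Terms with |k| > kmax are negligible once alpha * kmax^2 is large.
    return 1.0 + 2.0 * sum(math.exp(-alpha * k * k) for k in range(1, kmax + 1))

for alpha in [1.0, 0.1, 0.01]:
    approx = math.sqrt(math.pi / alpha)
    print(alpha, h(alpha), approx, h(alpha) - approx)
```

In this experiment the error is in fact far smaller than the O(α^{1/2}) bound guarantees, consistent with the sharper estimates promised for Example 5.9.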

Example 5.3 showed how to use the basic tool of approximating a sum by an integral.

Moreover, the estimate (5.37) that it provides is ubiquitous in asymptotic enumeration, since

many approximations reduce to it. This is illustrated by the following example.

Example 5.4. Bell numbers (cf. [63]). The Bell number, B(n), counts the partitions of an n-element set. It is given by [81]

  B(n) = e^{−1} Σ_{k=1}^{∞} k^n / k! .    (5.38)

In this sum no single term dominates. The ratio of the (k + 1)-st to the k-th term is

  ( (k + 1)^n / (k + 1)! ) · ( k! / k^n ) = (1/(k + 1)) (1 + 1/k)^n .    (5.39)

As k increases, this ratio strictly decreases. We search for the point where it is about 1. For k ≥ 2,

  (1 + 1/k)^n = exp( n log(1 + 1/k) ) = exp( n/k + O(n/k^2) ) ,    (5.40)

so the ratio is close to 1 for n/k close to log(k + 1). We choose k_0 to be the closest integer to w, the solution to

  n = w log(w + 1) .    (5.41)


For k = k_0 + j, 1 ≤ j ≤ k_0/2, we find, since log(1 + i/k_0) = i/k_0 − i^2/(2k_0^2) + O(i^3/k_0^3),

  k^n/k! = (k_0^n / k_0!) · (1 + j/k_0)^n / ( k_0^j Π_{i=1}^{j} (1 + i/k_0) )    (5.42)
         = (k_0^n / k_0!) exp( jn/k_0 − j log k_0 − j^2 (n + k_0)/(2k_0^2) + O(nj^3/k_0^3 + j/k_0) ) .

The same estimate applies for −k_0/2 ≤ j ≤ 0. The term jn/k_0 − j log k_0 is small, since |k_0 − w| ≤ 1/2 and w satisfies (5.41). We find

  n/k_0 − log k_0 = n/w − log(w + 1) + O(n/w^2 + 1/w)    (5.43)
                  = O(n/w^2 + 1/w) .

By (5.41), w ∼ n/log n as n → ∞. We now further restrict j to |j| ≤ n^{1/2} log n. Then (5.42) and (5.43) yield

  k^n/k! = (k_0^n / k_0!) exp( −j^2 (n + k_0)/(2k_0^2) + O((log n)^6 n^{−1/2}) ) .    (5.44)

Approximating the sum by an integral, as in Example 5.3, shows that

  Σ_{|k−k_0| ≤ n^{1/2} log n} k^n/k! = (k_0^n / k_0!) k_0 (2π)^{1/2} (n + k_0)^{−1/2} (1 + O((log n)^6 n^{−1/2})) .    (5.45)

(An easy way to obtain this is to apply the estimate of Example 5.3 to the sum from −∞ to ∞, and show that the range |j| > n^{1/2} log n contributes little.) To estimate the contribution of the remaining summands, with |j| > n^{1/2} log n, we observe that the ratio of successive terms is ≤ 1, so the range 1 ≤ k ≤ k_0 − n^{1/2} log n contributes at most k_0 (the number of terms) times the largest term, which arises for k = k_0 − ⌈n^{1/2} log n⌉. By (5.44), this largest term is

  O( k_0^n (k_0!)^{−1} exp(−(log n)^3) ) .

For k ≥ k_1 ≥ k_0 + ⌈n^{1/2} log n⌉, we find that the ratio of the (k + 1)-st to the k-th term is, for large n,

  ≤ (1/(k_1 + 1)) (1 + 1/k_1)^n = exp( n/k_1 − log(k_1 + 1) − n/(2k_1^2) + O(n/k_1^3) )
                                ≤ exp( −(k_1 − k_0) n/k_1^2 + O(n/k_1^3) )    (5.46)
                                ≤ exp(−2n^{−1/2}) ≤ 1 − n^{−1/2} ,

and so the sum of these terms, for k_1 ≤ k < ∞, is bounded above by n^{1/2} times the term for k = k_1. Therefore the estimate on the right-hand side of (5.45) applies even when we sum on all k, 1 ≤ k < ∞.


To obtain an estimate for B(n), it remains only to estimate k_0^n/k_0!. To do this, we apply Stirling's formula and use the property that |k_0 − w| ≤ 1/2 to deduce that

  B(n) ∼ (log w)^{−1/2} w^{n−w} e^{w−1} as n → ∞ ,    (5.47)

where w is given by (5.41).

There is no explicit formula for w in terms of n, and substituting various asymptotic approximations to w, such as

  w = n/log n + O( n/(log n)^2 )    (5.48)

(see Example 5.10) yields large error terms in (5.47), so for accuracy it is usually better to

use (5.47) as is. There are other approximations to B(n) in the literature (see, for example,

[33, 63]). They differ slightly from (5.47) because they estimate B(n) in terms of roots of

equations other than (5.41).

Other methods of estimating B(n) are presented in Examples 12.5 and 12.6.
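A rough numerical check of (5.47) is possible; the sketch below reads (5.47) as B(n) ∼ (log w)^{−1/2} w^{n−w} e^{w−1}, finds w by simple bisection (an implementation choice), and, since the error terms decay only logarithmically, tests no more than that the ratio is of order 1 at moderate n:

```python
import math

def bell_numbers(n):
    """Exact Bell numbers B(0), ..., B(n) via the Bell triangle."""
    row, bells = [1], [1]
    for _ in range(n):
        new_row = [row[-1]]
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
        bells.append(row[0])
    return bells

def solve_w(n):
    """Solve n = w log(w + 1) by bisection, as in (5.41)."""
    lo, hi = 1.0, float(n)
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.log(mid + 1) < n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n = 200
B = bell_numbers(n)[n]
w = solve_w(n)
# log of the approximation (log w)^{-1/2} w^{n-w} e^{w-1}
log_approx = -0.5 * math.log(math.log(w)) + (n - w) * math.log(w) + (w - 1)
ratio = math.exp(math.log(B) - log_approx)
print(ratio)
```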

5.2. Alternating sums and the principle of inclusion-exclusion

At the beginning of Section 5, the reader was advised in general to search for identities and

transformations when dealing with general sums. This advice is even more important when

dealing with sums of terms that have alternating or irregularly changing coefficients. Finding

the largest term is of little help when there is substantial cancellation among terms. Several

general approaches for dealing with this difficulty will be presented later. Generating function

methods for dealing with complicated sums are discussed in Section 6. Contour integration

methods for alternating sums are mentioned in Section 10.3. The summation formulas of the

next section can sometimes be used to estimate sums with regularly varying coefficients as

well. In this section we present some basic elementary techniques that are often sufficient.

Sometimes it is possible to obtain estimates of sums with positive and negative summands

by approximating separately the sums of the positive and of the negative summands. Methods

of the preceding section or of the next section are useful in such situations. However, this

approach is to be avoided as much as possible, because it often requires extremely precise

estimates of the two sums to obtain even rough bounds on the desired sums. One method that

often works and is much simpler consists of a simple pairing of adjacent positive and negative

terms.


Example 5.5. Alternating sum of square roots. Let

  S_n = Σ_{k=1}^{n} (−1)^k k^{1/2} .    (5.49)

We have

  (2m)^{1/2} − (2m − 1)^{1/2} = (2m)^{1/2} ( 1 − (1 − 1/(2m))^{1/2} )
                              = (2m)^{1/2} ( 1 − ( 1 − 1/(4m) + O(m^{−2}) ) )    (5.50)
                              = (8m)^{−1/2} + O(m^{−3/2}) ,

so

  Σ_{k=1}^{2⌊n/2⌋} (−1)^k k^{1/2} = Σ_{m=1}^{⌊n/2⌋} (8m)^{−1/2} + O(1)    (5.51)
                                  = n^{1/2}/2 + O(1) .

Hence

  S_n = {  n^{1/2}/2 + O(1)   if n is even ,    (5.52)
        { −n^{1/2}/2 + O(1)   if n is odd .
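The sketch below checks (5.52) numerically; the O(1) remainder appears to settle down to a bounded quantity, which is all that (5.52) asserts:

```python
import math

def S(n):
    """Partial sums S_n = sum_{k=1}^n (-1)^k k^{1/2} of (5.49)."""
    return sum((-1) ** k * math.sqrt(k) for k in range(1, n + 1))

for n in [10**4, 10**4 + 1]:
    main = math.sqrt(n) / 2 if n % 2 == 0 else -math.sqrt(n) / 2
    print(n, S(n), S(n) - main)   # remainder stays bounded
```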

In Example 5.5, the sums of the positive terms and of the negative terms can easily be

estimated accurately (for example, by using the Euler-Maclaurin formula of the next section)

to obtain (5.52). In other cases, though, the cancellation is too extensive for such an approach

to work. This is especially true for sums arising from the principle of inclusion-exclusion.

Suppose that X is some set of objects and P is a set of properties. For R ⊆ P, let N_=(R) be the number of objects in X that have exactly the properties in R and none of the properties in P \ R. We let N_≥(R) denote the number of objects in X that have all the properties in R and possibly some of those in P \ R. The principle of inclusion-exclusion says that

  N_=(R) = Σ_{R ⊆ Q ⊆ P} (−1)^{|Q\R|} N_≥(Q) .    (5.53)

(This is a basic version of the principle. For more general results, proofs, and references, see [81, 173, 351].)


Example 5.6. Derangements of n letters. Let X be the set of permutations of n letters, and suppose that P_i, 1 ≤ i ≤ n, is the property that the i-th letter is fixed by a permutation, and P = {P_1, . . . , P_n}. Then d_n, the number of derangements of n letters, equals N_=(φ), where φ is the empty set, and so by (5.53)

  d_n = Σ_{Q ⊆ P} (−1)^{|Q|} N_≥(Q) .    (5.54)

However, N_≥(Q) is just the number of permutations that leave all letters specified by Q fixed, and thus

  d_n = Σ_{Q ⊆ P} (−1)^{|Q|} (n − |Q|)!
      = Σ_{k=0}^{n} (−1)^k \binom{n}{k} (n − k)!    (5.55)
      = n! Σ_{k=0}^{n} (−1)^k / k! .
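The inclusion-exclusion formula (5.55) is easy to verify for small n against the standard derangement recurrence d_n = (n − 1)(d_{n−1} + d_{n−2}):

```python
from math import comb, factorial

def derangements(n):
    """d_n via the inclusion-exclusion formula (5.55)."""
    return sum((-1) ** k * comb(n, k) * factorial(n - k) for k in range(n + 1))

# Cross-check against the recurrence d_n = (n-1)(d_{n-1} + d_{n-2}).
d = [1, 0]
for n in range(2, 11):
    d.append((n - 1) * (d[-1] + d[-2]))
print([derangements(n) for n in range(11)])
```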
