
\[
\sum_{n=1}^{\infty} c_n z^n = 1 - \Bigl( 1 + \sum_{n=1}^{\infty} n! \, z^n \Bigr)^{-1} . \tag{7.17}
\]

We apply Theorem 7.2 with a_n = n! for n ≥ 1 and F(x, y) = 1 − (1 + y)^{−1}. We easily obtain

\[
c_n \sim n! \quad \text{as} \quad n \to \infty , \tag{7.18}
\]

so that almost all permutations are indecomposable.
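As a check on (7.17) and (7.18), the series can be expanded mechanically. The following sketch (plain Python with exact integer arithmetic; the function name is our own choosing) recovers the first coefficients c_n, which count indecomposable permutations, by formal power series inversion.

```python
from math import factorial

def indecomposable_counts(N):
    """Coefficients c_1, ..., c_N of 1 - (1 + sum_{n>=1} n! z^n)^(-1),
    obtained by formal power-series inversion of A(z) = 1 + sum n! z^n."""
    a = [factorial(n) for n in range(N + 1)]   # a_0 = 1, a_n = n!
    b = [1] + [0] * N                          # B(z) = 1/A(z), so b_0 = 1
    for n in range(1, N + 1):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return [-b[n] for n in range(1, N + 1)]    # c_n = -[z^n] 1/A(z)

c = indecomposable_counts(8)
print(c)                      # [1, 1, 3, 13, 71, 461, 3447, 29093]
print(c[7] / factorial(8))    # c_8 / 8!: the ratio approaches 1 slowly, as in (7.18)
```

The convergence c_n/n! → 1 is slow (the ratio is 1 − 2/n + O(n^{−2})), which is why c_8/8! is still visibly below 1.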


Some further useful expansions for functional inverses and computations of formal power series have been obtained by Bender and Richmond [40].

8. Elementary estimates for convergent generating functions

The word “elementary” in the title of this section is a technical term that means the proofs do not use complex variables. It does not necessarily imply that the proofs are simple. While some, such as those of Section 8.1, are easy, others are more complicated. The main advantage of elementary methods is that they are much easier to use, and since they impose much weaker requirements on the generating functions, they are more widely applicable. Usually they only impose conditions on the generating function f(z) for z ∈ ℝ^+.

The main disadvantage of elementary methods is that the estimates they give tend to be much weaker than those derived using analytic function approaches. It is easy to explain why that is so by considering the two generating functions

\[
f_1(z) = \sum_{n=0}^{\infty} z^n = (1 - z)^{-1} \tag{8.1}
\]

and

\[
f_2(z) = \frac{3}{2} + \sum_{n=1}^{\infty} 2 z^{2n} = \frac{3}{2} + 2 z^2 (1 - z^2)^{-1} . \tag{8.2}
\]

Both series converge for |z| < 1 and diverge for |z| > 1, and both blow up as z → 1. However,

\[
f_1(z) - f_2(z) = - \frac{1 - z}{2(1 + z)} \to 0 \quad \text{as} \quad z \to 1 . \tag{8.3}
\]

Thus these two functions behave almost identically near z = 1. Since f_1(z) and f_2(z) are both ∼ (1 − z)^{−1} as z → 1^−, z ∈ ℝ^+, and their difference is O(|z − 1|) for z ∈ ℝ^+, it would require exceptionally delicate methods to detect the differences in the coefficients of the f_j(z) just from their behavior for z ∈ ℝ^+. There is a substantial difference in the behavior of f_1(z) and f_2(z) for real z if we let z → −1, so our argument does not completely exclude the possibility of obtaining detailed information about the coefficients of these functions using methods of real variables only. However, if we consider the function

\[
f_3(z) = 2 + \sum_{n=1}^{\infty} 3 z^{3n} = 2 + 3 z^3 (1 - z^3)^{-1} , \tag{8.4}
\]

then f_1(z) and f_3(z) are both ∼ (1 − z)^{−1} as z → 1^−, z ∈ ℝ^+, yet now

\[
|f_1(z) - f_3(z)| = O(|z - 1|) \quad \text{for all} \quad z \in \mathbb{R} .
\]


This difference is comparable to what would be obtained by modifying a single coefficient of one generating function. To determine how such slight changes in the behavior of the generating functions affect the behavior of the coefficients we would need to know much more about the functions if we were to use real variable methods. On the other hand, analytic methods, discussed in Section 10 and later, are good at dealing with such problems. They require less precise knowledge of the behavior of a function on the real line. Instead, they impose weak conditions on the function in a wider domain, namely that of the complex numbers.

For reasons discussed above, elementary methods cannot be expected to produce precise estimates of individual coefficients. They often do produce good estimates of summatory functions of the coefficients, though. In the examples above, we note that

\[
\sum_{n=1}^{N} [z^n] f_j(z) \sim N \quad \text{as} \quad N \to \infty \tag{8.5}
\]

for 1 ≤ j ≤ 3. This holds because the f_j(z) have the same behavior as z → 1^−, and is part of a more general phenomenon. Good knowledge of the behavior of the generating function on the real axis combined with weak restrictions on the coefficients often leads to estimates for the summatory function of the coefficients.
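The common behavior in (8.5) is easy to observe numerically. The sketch below (an illustration of our own, not part of the original argument) builds the coefficient sequences of f_1, f_2, f_3 straight from their series definitions and compares each partial sum with N.

```python
def coeffs_f1(N):
    # f1(z) = sum_{n>=0} z^n = (1 - z)^(-1)
    return [1.0] * (N + 1)

def coeffs_f2(N):
    # f2(z) = 3/2 + sum_{n>=1} 2 z^(2n)
    c = [0.0] * (N + 1)
    c[0] = 1.5
    for n in range(2, N + 1, 2):
        c[n] = 2.0
    return c

def coeffs_f3(N):
    # f3(z) = 2 + sum_{n>=1} 3 z^(3n)
    c = [0.0] * (N + 1)
    c[0] = 2.0
    for n in range(3, N + 1, 3):
        c[n] = 3.0
    return c

N = 10 ** 5
ratios = [sum(coeffs(N)[1:]) / N for coeffs in (coeffs_f1, coeffs_f2, coeffs_f3)]
print(ratios)   # all three ratios are essentially 1, as (8.5) asserts
```

The three coefficient sequences look quite different term by term, yet their partial sums agree to first order, exactly the phenomenon described in the text.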

There are cases where elementary methods give precise bounds for individual coefficients. Typically when we wish to estimate f_n, with ordinary generating function f(z) = ∑ f_n z^n that converges for |z| < 1 but not for |z| > 1, we apply the methods of this section to

\[
g_n = f_n - f_{n-1} \quad \text{for} \quad n \ge 1 , \qquad g_0 = f_0 , \tag{8.6}
\]

with generating function

\[
g(z) = \sum_{n=0}^{\infty} g_n z^n = (1 - z) f(z) . \tag{8.7}
\]

Then

\[
\sum_{k=0}^{n} g_k = f_n , \tag{8.8}
\]

and so estimates of the summatory function of the g_k yield estimates for f_n. The difficulty with this approach is that now g(z) and not f(z) has to satisfy the hypotheses of the theorems, which requires more knowledge of the f_n. For example, most of the Tauberian theorems apply only to power series with nonnegative coefficients. Hence to use the differencing trick above to obtain estimates for f_n we need to know that f_{n−1} ≤ f_n for all n. In some cases (such as that of f_n = p(n), the number of ordinary partitions of n) this is easily seen to hold


through combinatorial arguments. In other situations where one might like to apply elementary methods, though, f_{n−1} ≤ f_n is either false or else is hard to prove. When that happens, other methods are required to estimate f_n.
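As an illustration of the differencing trick (8.6)–(8.8), the sketch below applies it to f_n = p(n), for which f_{n−1} ≤ f_n does hold. The partition numbers are generated with Euler's pentagonal number recurrence, a standard device not described in this section.

```python
def partitions(N):
    """p(0), ..., p(N) via Euler's pentagonal number recurrence."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            g1 = k * (3 * k - 1) // 2      # generalized pentagonal numbers
            g2 = k * (3 * k + 1) // 2
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

N = 200
p = partitions(N)
# g_n = f_n - f_{n-1}: the coefficients of (1 - z) f(z), as in (8.6)-(8.7)
g = [p[0]] + [p[n] - p[n - 1] for n in range(1, N + 1)]
assert all(gn >= 0 for gn in g)                            # p(n) is nondecreasing
assert all(sum(g[: n + 1]) == p[n] for n in range(N + 1))  # the identity (8.8)
print(p[100])   # 190569292
```

The nonnegativity check is exactly the hypothesis the Tauberian theorems need; for a generating function whose coefficients are not monotone, the g_n would change sign and the trick would fail.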

8.1. Simple upper and lower bounds

A trivial upper bound method turns out to be widely applicable in asymptotic enumeration, and is surprisingly powerful. It relies on nothing more than the nonnegativity of the coefficients of a generating function.

Lemma 8.1. Suppose that f(z) is analytic in |z| < R, and that [z^n]f(z) ≥ 0 for all n ≥ 0. Then for any x, 0 < x < R, and any n ≥ 0,

\[
[z^n] f(z) \le x^{-n} f(x) . \tag{8.9}
\]

Example 8.1. Lower bound for factorials. Let f(z) = exp(z). Then Lemma 8.1 yields

\[
\frac{1}{n!} = [z^n] e^z \le x^{-n} e^x \tag{8.10}
\]

for every x > 0. The logarithm of x^{−n}e^x is x − n log x, and differentiating and setting it equal to 0 shows that the minimum value is attained at x = n. Therefore

\[
\frac{1}{n!} = [z^n] e^z \le n^{-n} e^n , \tag{8.11}
\]

and so n! ≥ n^n e^{−n}. This lower bound holds uniformly for all n, and is off only by an asymptotic factor of (2πn)^{1/2} from Stirling's formula (4.1).
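A quick numerical sanity check of Example 8.1 (a sketch; the sample values of n are arbitrary): the bound n! ≥ n^n e^{−n} holds at every n, and the ratio of the two sides settles onto (2πn)^{1/2}.

```python
from math import exp, factorial, log, pi, sqrt

checks = []
for n in (1, 5, 10, 100, 1000):
    log_fact = log(factorial(n))        # math.log handles arbitrarily large ints
    log_bound = n * log(n) - n          # log of the bound n^n e^(-n) from (8.11)
    assert log_fact >= log_bound        # n! >= n^n e^(-n) for every n
    # ratio n! / (n^n e^(-n)), normalized by the Stirling factor (2 pi n)^(1/2)
    checks.append(exp(log_fact - log_bound) / sqrt(2 * pi * n))
print(checks)   # tends to 1 as n grows
```

Working with logarithms avoids overflowing floating point for large n, a useful habit whenever bounds of the form x^{−n}f(x) are evaluated numerically.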

fn z n . Lemma 8.1 is proved by noting that for 0 < x < R, the n-th

Suppose that f (z) =

term, fn xn , in the power series expansion of f (x), is ¤ f (x). As we will see in Section 10, it

is often possible to derive a similar bound on the coe¬cients f n even without assuming that

they are nonnegative. However, the proof of Lemma 8.1 shows something more, namely that

f0 x’n + f1 x’n+1 + · · · + fn’1 x’1 + fn ¤ x’n f (x) (8.12)

for 0 < x < R. When x ≤ 1, this yields an upper bound for the summatory function of the coefficients. Because (8.12) holds, we see that the bound of Lemma 8.1 cannot be sharp in general. What is remarkable is that the estimates obtainable from that lemma are often not far from best possible.


Example 8.2. Upper bound for the partition function. Let p(n) denote the partition function. It has the ordinary generating function

\[
f(z) = \sum_{n=0}^{\infty} p(n) z^n = \prod_{k=1}^{\infty} (1 - z^k)^{-1} . \tag{8.13}
\]

Let g(s) = log f(e^{−s}), and consider s > 0, s → 0. There are extremely accurate estimates of g(s). It is known [13, 23], for example, that

\[
g(s) = \pi^2/(6s) + (\log s)/2 - (\log 2\pi)/2 - s/24 + O(\exp(-4\pi^2/s)) . \tag{8.14}
\]

If we use (8.14), we find that x^{−n}f(x) is minimized at x = exp(−s) with

\[
s = \pi/(6n)^{1/2} - 1/(4n) + O(n^{-3/2}) , \tag{8.15}
\]

which yields

\[
p(1) + p(2) + \cdots + p(n) \le 2^{-3/4} e^{-1/4} n^{-1/4} (1 + o(1)) \exp(2\pi 6^{-1/2} n^{1/2}) . \tag{8.16}
\]

Comparing this to the asymptotic formula for the sum that is obtainable from (1.6) (see Example 5.2), we see that the bound of (8.16) is too high by a factor of n^{1/4}. If we use (8.16) to bound p(n) alone, we obtain a bound that is too large by a factor of n^{3/4}.

The application of Lemma 8.1 outlined above depended on the expansion (8.14), which is complicated to derive, involving modular transformation properties of p(n) that are beyond the scope of this survey. (See [13, 23] for derivations.) Weaker estimates that are still useful are much easier to derive. We obtain one such bound here, since the arguments illustrate some of the methods from the preceding sections.

Consider

\[
g(s) = \sum_{k=1}^{\infty} \bigl( - \log(1 - e^{-ks}) \bigr) . \tag{8.17}
\]

If we replace the sum by the integral

\[
I(s) = \int_1^{\infty} - \log(1 - e^{-us}) \, du , \tag{8.18}
\]

we find on expanding the logarithm that

\[
I(s) = \int_1^{\infty} \sum_{m=1}^{\infty} m^{-1} e^{-mus} \, du = s^{-1} \sum_{m=1}^{\infty} m^{-2} e^{-ms} , \tag{8.19}
\]


since the interchange of summation and integration is easy to justify, as all the terms are positive. Therefore as s → 0^+,

\[
s I(s) \to \sum_{m=1}^{\infty} m^{-2} = \pi^2/6 , \tag{8.20}
\]

so that I(s) ∼ π²/(6s) as s → 0^+. It remains to show that I(s) is indeed a good approximation to g(s). This follows easily from the bound (5.32), since it shows that

\[
g(s) = I(s) + O\left( \int_1^{\infty} \frac{s e^{-vs}}{1 - e^{-vs}} \, dv \right) . \tag{8.21}
\]

We could estimate the integral in (8.21) carefully, but we only need rough upper bounds for it, so we write it as

\[
\begin{aligned}
\int_1^{\infty} \frac{s e^{-vs}}{1 - e^{-vs}} \, dv
&= \int_s^{\infty} \frac{e^{-u}}{1 - e^{-u}} \, du \\
&= \int_s^{1} \frac{e^{-u}}{1 - e^{-u}} \, du + \int_1^{\infty} \frac{e^{-u}}{1 - e^{-u}} \, du \\
&= \int_s^{1} \frac{du}{e^u - 1} + c \le \int_s^{1} \frac{du}{u} + c = c - \log s
\end{aligned} \tag{8.22}
\]

for some constant c. Thus we find that

\[
g(s) = I(s) + O(\log(s^{-1})) \quad \text{as} \quad s \to 0^+ . \tag{8.23}
\]

Combining (8.23) with (8.20) we see that

\[
g(s) \sim \pi^2/(6s) \quad \text{as} \quad s \to 0^+ . \tag{8.24}
\]

Therefore, choosing s = π/(6n)^{1/2}, x = exp(−s) in Lemma 8.1, we obtain a bound of the form

\[
p(n) \le \exp\bigl( (1 + o(1)) \, \pi (2/3)^{1/2} n^{1/2} \bigr) \quad \text{as} \quad n \to \infty . \tag{8.25}
\]
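The estimate (8.24) can be tested directly by summing the series (8.17); a small sketch (the truncation tolerance and sample values of s are our own choices):

```python
from math import exp, log, pi

def g(s):
    # g(s) = -sum_{k>=1} log(1 - exp(-k s)), truncated once terms are negligible
    total, k = 0.0, 1
    while True:
        t = exp(-k * s)
        if t < 1e-18:
            break
        total -= log(1 - t)
        k += 1
    return total

ratios = [s * g(s) / (pi ** 2 / 6) for s in (0.5, 0.1, 0.02, 0.004)]
print(ratios)   # increases toward 1 as s -> 0+, in line with (8.24)
```

The slow approach to 1 reflects the O(log(1/s)) error term in (8.23): relative to the main term π²/(6s), the error decays only like s log(1/s).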

Lemma 8.1 yields a lower bound for n! that is only a factor of about n^{1/2} away from optimal. That is common. Usually, when the function f(z) is reasonably smooth, the best bound obtainable from Lemma 8.1 will only be off from the correct value by a polynomial factor of n, and often only by a factor of n^{1/2}.

The estimate of Lemma 8.1 can often be improved with some additional knowledge about the f_n. For example, if f_{n+1} ≥ f_n for all n ≥ 0, then we have

\[
x^{-n} f(x) \ge f_n + f_{n+1} x + f_{n+2} x^2 + \cdots \ge f_n (1 - x)^{-1} . \tag{8.26}
\]


For f_n = p(n), the partition function, this then yields an upper bound for p(n) that is too large by a factor of n^{1/4}.

To optimize the bound of Lemma 8.1, one should choose x ∈ (0, R) carefully. Usually there is a single best choice. In some pathological cases the optimal choice is obtained by letting x → 0^+ or x → R^−. However, usually we have lim_{x→R^−} f(x) = ∞, and [z^m]f(z) > 0 for some m with 0 ≤ m < n as well as for some m > n. Under these conditions it is easy to see that

\[
\lim_{x \to 0^+} x^{-n} f(x) = \lim_{x \to R^-} x^{-n} f(x) = \infty . \tag{8.27}
\]

Thus it does not pay to make x too small or too large. Let us now consider

\[
g(x) = \log(x^{-n} f(x)) = \log f(x) - n \log x . \tag{8.28}
\]

Then

\[
g'(x) = \frac{f'(x)}{f(x)} - \frac{n}{x} , \tag{8.29}
\]

and the optimal choice must be at a point where g′(x) = 0. For most commonly encountered functions f(x), there exists a constant x_0 > 0 such that

\[
\left( \frac{f'}{f} \right)'(x) > 0 \tag{8.30}
\]

for x_0 < x < R, and so g″(x) > 0 for all x ∈ (0, R) if n is large enough. For such n there is then a unique choice of x that minimizes the bound of Lemma 8.1. However, one major advantage of Lemma 8.1 is that its bound holds for all x. To apply this lemma, one can use any x that is convenient to work with. Usually if this choice is not too far from the optimal one, the resulting bound is fairly good.
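As a toy instance of this optimization, take f(z) = (1 − z)^{−1}, so that every coefficient equals 1. Solving g′(x) = 0, i.e. f′(x)/f(x) = n/x, gives x = n/(n + 1), and the optimized bound x^{−n}f(x) = (1 + 1/n)^n (n + 1) ≈ e(n + 1) exceeds the true coefficient by only a polynomial factor, as expected. A sketch:

```python
from math import exp

n = 1000
# f(z) = (1 - z)^(-1): f'(x)/f(x) = 1/(1 - x), so g'(x) = 0 at x = n/(n + 1)
x = n / (n + 1)
bound = x ** (-n) / (1 - x)          # x^(-n) f(x) = (1 + 1/n)^n (n + 1)
assert bound >= 1                    # Lemma 8.1: an upper bound on [z^n] f(z) = 1
print(bound / (exp(1) * (n + 1)))    # close to 1: the bound is about e(n + 1)
```

Here the saddle point sits very close to the singularity at z = 1, and the factor e(n + 1) is exactly the polynomial-in-n slack the text describes.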

We have already remarked above that the bound of Lemma 8.1 is usually close to best possible. It is possible to prove general lower bounds that show this for a wide class of functions. The method, which originated in [277] and was developed in [305], relies on simple elementary arguments. However, the lower bounds it produces are substantially weaker than the upper bounds of Lemma 8.1. Furthermore, to apply them it is necessary to estimate accurately the minimum of x^{−n}f(x), instead of selecting any convenient values of x. A more general version of the bound below is given in [305].

Theorem 8.1. Suppose that f(x) = ∑ f_n x^n converges for |x| < 1, f_n ≥ 0 for all n, f_{m_0} > 0 for some m_0, and ∑ f_n = ∞. Then for n ≥ m_0, there is a unique x_0 = x_0(n) ∈ (0, 1) that minimizes x^{−n}f(x). Let s_0 = − log x_0, and

\[
A = \frac{\partial^2}{\partial s^2} \log f(e^{-s}) \Big|_{s = s_0} . \tag{8.31}
\]

If A ≥ 10^6 and for all t with

\[
s_0 \le t \le s_0 + 20 A^{-1/2} \tag{8.32}
\]

we have

\[
\left| \frac{\partial^3}{\partial s^3} \log f(e^{-s}) \Big|_{s = t} \right| \le 10^{-3} A^{3/2} , \tag{8.33}
\]

then

\[
\sum_{k=0}^{n} f_k \ge x_0^{-n} f(x_0) \exp(-30 s_0 A^{1/2} - 100) . \tag{8.34}
\]

As is usual for Tauberian theorems, Theorem 8.1 only provides bounds on the sum of coefficients of f(z). As we mentioned before, this is unavoidable when one relies only on information about the behavior of f(z) for z a positive real number. The conditions that Theorem 8.1 imposes on the derivatives are usually satisfied in combinatorial enumeration applications and are easy to verify.

Example 8.3. Lower bound for the partition function. Let f(z) and g(s) be as in Example 8.2. We showed there that g(s) satisfies (8.24), and similar rough estimates show that g′(s) ∼ −π²/(6s²), g″(s) ∼ π²/(3s³), and g‴(s) ∼ −π²/s⁴ as s → 0^+. Therefore the hypotheses of Theorem 8.1 are satisfied, and we obtain a lower bound for p(0) + p(1) + ··· + p(n). If we only use the estimate (8.24) for g(s), then we can only conclude that for x = e^{−s},

\[
\log(x^{-n} f(x)) = ns + g(s) \sim ns + \pi^2/(6s) \quad \text{as} \quad s \to 0 , \tag{8.35}
\]

and so the minimum value occurs at s ∼ π/(6n)^{1/2} as n → ∞. This only allows us to conclude that for every ε > 0 and n large enough,

\[
\log(p(0) + \cdots + p(n)) \ge (1 - \varepsilon) \pi (2/3)^{1/2} n^{1/2} . \tag{8.36}
\]

However, we can also conclude even without further computations that this lower bound will be within a multiplicative factor of exp(c n^{1/4}) of the best upper bound that can be obtained from Lemma 8.1 for some c > 0 (and therefore within a multiplicative factor of exp(c n^{1/4}) of the correct value). In particular, if we use the estimate (8.14) for g(s), we find that for some c′ > 0,

\[
p(0) + \cdots + p(n) \ge \exp(\pi (2/3)^{1/2} n^{1/2} - c' n^{1/4}) . \tag{8.37}
\]


Since p(k) ≤ p(k + 1), the quantity on the right-hand side of (8.37) is also a lower bound for p(n) if we increase c′, since (n + 1) p(n) ≥ p(0) + ··· + p(n).

The differencing trick described at the introduction to Section 8 could also be used to estimate p(n), since Theorem 8.1 can be applied to the generating function of p(n + 1) − p(n). However, since the error term is a multiplicative factor of exp(c n^{1/4}), it is simpler to use the approach above, which bounds p(n) below by (p(0) + ··· + p(n))/(n + 1).
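With exact values of p(n) one can watch the exponent π(2/3)^{1/2} n^{1/2} of (8.25) bound log p(n) from above, and verify the simple step p(n) ≥ (p(0) + ··· + p(n))/(n + 1) used in the lower-bound route. A sketch (the pentagonal number recurrence is again an implementation convenience):

```python
from math import log, pi, sqrt

def partitions(N):
    # p(0), ..., p(N) via Euler's pentagonal number recurrence
    p = [1] + [0] * N
    for n in range(1, N + 1):
        total, k = 0, 1
        while k * (3 * k - 1) // 2 <= n:
            g1 = k * (3 * k - 1) // 2
            g2 = k * (3 * k + 1) // 2
            sign = -1 if k % 2 == 0 else 1
            total += sign * p[n - g1]
            if g2 <= n:
                total += sign * p[n - g2]
            k += 1
        p[n] = total
    return p

N = 1000
p = partitions(N)
gaps = []
for n in (100, 400, 1000):
    main = pi * sqrt(2.0 * n / 3.0)            # the exponent in (8.25) and (8.37)
    assert log(p[n]) <= main                   # consistent with the upper bound (8.25)
    assert (n + 1) * p[n] >= sum(p[: n + 1])   # p(n) >= (p(0)+...+p(n))/(n+1)
    gaps.append(main - log(p[n]))
print(gaps)   # each gap is small next to the main term, within the n^(1/4) slack of (8.37)
```

The observed gaps grow very slowly with n, comfortably inside the exp(c n^{1/4}) window that separates the elementary upper and lower bounds.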
