Archive for February, 2009

Curious Constants (part 4 of 4)

February 19, 2009

I hope you have enjoyed at least parts of this series (links at the bottom). I learned quite a bit along the way, and would like to point you to the following…

Resources

As evidenced by the links above, Wikipedia played a huge role in my preparation for this talk, as did Mathworld. The constant I was originally inspired to talk about was the last one in the talk above, Khinchin’s. I first learned about it in the book “Gamma” by Havil, a book I definitely recommend. This book also introduced me to the first integral (1/x^x), and the Kempner series. I have already mentioned Khinchin’s book “Continued Fractions,” which I had a great time reading. Once I decided to give a talk on scattered constants, the book “Mathematical Constants” by Finch proved hugely valuable. In addition to bringing many more constants to my attention, each of its sections has a huge list of references.

The proof above about the divergence of the series of prime reciprocals was from W. Dunham’s book, “Euler: The Master of Us All,” another great book. The proofs mentioned about the alternating sum of factorials came from the paper “Euler Subdues a Very Obstreperous Series” by E.J. Barbeau, which I found in the book “The Genius of Euler,” (edited by W. Dunham, part of the MAA Tercentenary Euler Celebration). Though I may not have used it explicitly, I regularly peeked into Hardy and Wright’s “An Introduction to the Theory of Numbers” (a book which I now hope to dig into even more). Another fun book, which I consulted but did not use explicitly, is Conway’s “Book of Numbers.”

The results I found the most difficult to grasp were the probabilistic arguments associated with the twin prime constant and Artin’s conjecture. For Artin’s conjecture, I looked at the paper “Artin’s Conjecture for Primitive Roots”, by M. Ram Murty, available here. As I have already mentioned, the paper “An Amazing Prime Heuristic” by Caldwell (here) was the one that finally made things seem reasonable about the twin prime constant. I also read the papers “The Twin Prime Constant” and “Heuristic Reasoning in the Theory of Numbers”, by Golomb and Polya, respectively. Both of these papers I was able to access through UVA on JSTOR. The book “The Prime Numbers and Their Distribution”, by G. Tenenbaum and M. M. France also had a section on twin primes and on Mertens’ theorems.

Finally, some journal articles. I apologize for any incorrect formatting.

  • P. Erdos, On the density of the abundant numbers, J. London Math. Soc. 9 (1934), 278-282.
  • G.N. Watson, Theorems stated by Ramanujan. VIII: Theorems on divergent series, J. London Math. Soc. 4 (1929) 82-86.
  • M. Schroeder, How Probable is Fermat’s last theorem?, Math. Intellig. 16 (1994) 19-20.
  • R.A. Knoebel, Exponentials reiterated, Amer. Math. Monthly 88 (1981) 235-252.

Other posts in this series

  1. The First Batch
  2. Continued Fractions
  3. The Second Batch
  4. Resources

Curious Constants (part 3 of 4)

February 19, 2009

With our background on continued fractions behind us, let’s get back into some constants.

The Second Batch

0.10841… & 0.27394419… & 0.48267728…

I include more decimals here (than for previous constants in this talk), because without them, the interest is lost. While the discussion of continued fractions above only deals with simple continued fractions, I’ll take the decimals above as the values for non-simple continued fractions, and then notice that

\begin{array}{rcl} 0.10841 &\approx& \cfrac{1}{1+\cfrac{1}{0+\cfrac{1}{8+\cfrac{1}{4+\cfrac{1}{1+\ddots}}}}} \\ 0.27394419 &\approx& \cfrac{2}{7+\cfrac{3}{9+\cfrac{4}{4+\cfrac{1}{9+\ddots}}}} \\ 0.48267728 &\approx& \cfrac{4}{8+\cfrac{2}{6+\cfrac{7}{7+\cfrac{2}{8+\ddots}}}} \end{array}

These constants go by the name of their discoverer, M. Trott, who wrote about them just in the past decade. Existence and uniqueness of such constants is an open question.

In order to introduce some notation I’ll use in the next section, let me also write the second example above as \frac{2}{7+}\frac{3}{9+}\frac{4}{4+}\frac{1}{9+}.
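If you want to play with these yourself, here’s a minimal sketch (in Python, purely my choice for illustration) that evaluates a truncated generalized continued fraction; the element lists below are just the first few digits of the second constant, read off alternately as numerators and denominators.

```python
from fractions import Fraction

def eval_generalized_cf(numerators, denominators):
    """Evaluate b1/(a1 + b2/(a2 + ... + bn/an)), working from the innermost term outward."""
    value = Fraction(0)
    for b, a in zip(reversed(numerators), reversed(denominators)):
        value = Fraction(b) / (a + value)
    return value

# First digits of 0.27394419..., read alternately as numerators and denominators.
print(float(eval_generalized_cf([2, 3, 4, 1], [7, 9, 4, 9])))  # ~0.27394, matching the leading digits
```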

0.596…

This has continued fraction expansion

0.596 \approx \frac{1}{1+}\frac{1}{1+}\frac{1}{1+}\frac{2}{1+}\frac{2}{1+}\frac{3}{1+}\frac{3}{1+}\frac{4}{1+}\frac{4}{1+}\cdots

That is, after an initial 1, all “numerators” repeat twice and then increment by 1. This expression is obtained from the function

u(x)=\frac{1}{1+}\frac{x}{1+}\frac{x}{1+}\frac{2x}{1+}\frac{2x}{1+}\frac{3x}{1+}\frac{3x}{1+}\frac{4x}{1+}\frac{4x}{1+}\cdots

by evaluating at x=1. So, where did this continued fraction come from? Euler came to this continued fraction as one for

u(x)=1-x+2x^2-6x^3+(4!)x^4-(5!)x^5+\cdots.

That is, u(1)=\sum_{n=0}^{\infty}(-1)^n(n!)\approx 0.596. Euler, in fact, determined this sum in 4 different ways, obtaining approximations near 0.59 each time. Another way he did this was to notice that

s(x)=xu(x)=x-x^2+2x^3-(3!)x^4+(4!)x^5-\cdots

satisfies (formally) the differential equation s'+\frac{s}{x^2}=\frac{1}{x}. Using integrating factors, this means that s(x)=e^{1/x}\int_0^x\frac{e^{-1/t}}{t}dt. Making the substitution v=e^{1-1/t} and then evaluating at x=1, we get s(1)=1\cdot u(1)=\int_0^1 \frac{dv}{1-\ln v}\approx 0.596. The infinite sum of alternating factorials can be recovered from this integral by repeatedly applying integration by parts.
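As a quick numerical sanity check (a sketch assuming scipy is available; Euler, of course, needed nothing of the sort), the integral at the end really does land near 0.596:

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda v: 1.0 / (1.0 - np.log(v)), 0.0, 1.0)
print(value)   # ~0.5963..., Euler's value for 1 - 1 + 2 - 6 + 24 - ...
```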

3.275… & 2.685

In the exposition on continued fractions we saw lower bounds on the denominators of convergents. It turns out an upper bound exists as well (at least, for the continued fractions of almost all \alpha). Even stronger, it can be shown that there is a constant which I will denote \gamma (following Khinchin, and not to be confused with the Euler-Mascheroni constant) such that for almost all \alpha, \sqrt[n]{q_n}\rightarrow \gamma. Levy showed that \ln(\gamma)=\frac{\pi^2}{12\ln 2}\approx 1.186, and so \gamma\approx 3.275.

Khinchin, in the mid 1930s, proved the following theorem. Suppose that f:\mathbf{N}\to \mathbf{R}_{\geq 0} and there are positive constants C,\delta such that f(r)<Cr^{\frac{1}{2}-\delta}. Then for almost all \alpha\in (0,1),

\displaystyle \lim_{n\to \infty}\frac{1}{n}\sum_{k=1}^{n}f(a_k)=\sum_{r=1}^{\infty}f(r)\frac{\ln(1+1/(r(r+2)))}{\ln(2)}

Take, for example, f(r) to be 1 if r=k and 0 otherwise. Then the sum on the left above counts the number of occurrences of k in the first n elements of \alpha, so the average determines the density of ks. The theorem then tells us that for almost any \alpha\in (0,1), the density of ks in the elements of \alpha is a positive constant d(k). For instance, d(1)=\frac{\ln(4)-\ln(3)}{\ln(2)}\approx 0.41503.

A more interesting example is found by taking f(r)=\ln(r). The statement of the theorem then gives that, almost everywhere,

\displaystyle\sqrt[n]{a_1\cdots a_n}\rightarrow \prod_{r=1}^{\infty}\left(1+\frac{1}{r(r+2)}\right)^{\frac{\ln r}{\ln 2}}\approx 2.685

Humorously, “almost everywhere” leaves out two of the examples of continued fractions you might first consider, the golden ratio \phi=[1;1,1,1,\ldots] and e=[2;1,2,1,1,4,1,1,6,1,1,8,\ldots].
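If you’d like to watch the two limits (slowly) emerge, here’s a rough sketch using \pi as the “typical” \alpha. I lean on mpmath for extra precision, since with ordinary floating point the elements go bad after a dozen or so terms; and fifty elements is far too few to expect close agreement with either constant.

```python
from mpmath import mp, mpf, floor, log, exp

mp.dps = 120                      # enough digits to trust ~50 elements
x = mp.pi
elems = []
for _ in range(50):
    a = int(floor(x))
    elems.append(a)
    x = 1 / (x - a)
a_ks = elems[1:]                  # a_1, a_2, ... (drop a_0 = 3)
n = len(a_ks)

# Geometric mean of the elements: tends to Khinchin's ~2.685 for almost every alpha.
print(exp(sum(log(k) for k in a_ks) / n))

# Denominators via the standard recurrence q_k = a_k q_{k-1} + q_{k-2};
# Levy's result says the n-th root tends to ~3.275 almost everywhere.
q_prev, q_cur = 0, 1              # q_{-1}, q_0
for a_k in a_ks:
    q_prev, q_cur = q_cur, a_k * q_cur + q_prev
print(mpf(q_cur) ** (mpf(1) / n))
```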

Other posts in this series

  1. The First Batch
  2. Continued Fractions
  3. The Second Batch
  4. Resources

Curious Constants (part 2 of 4)

February 19, 2009

This continues my series of notes for my talk on some mathematical constants. I’d like to make a small diversion from talking about these constants to provide some background on continued fractions. What follows is a brief, abridged summary of Khinchin’s short book, “Continued Fractions,” which I highly recommend.

Continued Fractions

A (simple, regular) continued fraction is an expression (possibly finite)

a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\ddots}}}

which I abbreviate [a_0;a_1,\ldots,a_n,\ldots]. Here the values a_i are natural numbers (allowing a_0 to be 0, but all others positive), and we call them the elements of the continued fraction. If the continued fraction is finite, ending at a_k, we will say that it has order k.

Finite continued fractions clearly correspond to rational numbers. We can convert any number to a continued fraction by repeatedly splitting off the integer part and then inverting. For example

\frac{13}{9}=1+\frac{4}{9}=1+\frac{1}{9/4}=1+\cfrac{1}{2+\frac{1}{4}}

The same process works for real numbers; we just iterate without end.
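Here’s a minimal sketch of that procedure (Python, purely for illustration), using exact rational arithmetic so the finite expansion terminates cleanly.

```python
from fractions import Fraction

def continued_fraction(x: Fraction):
    """Elements of the (finite) simple continued fraction of a rational x."""
    elems = []
    while True:
        a = x.numerator // x.denominator   # split off the integer part
        elems.append(a)
        x = x - a
        if x == 0:
            break
        x = 1 / x                          # invert the fractional part
    return elems

print(continued_fraction(Fraction(13, 9)))   # [1, 2, 4], i.e. [1; 2, 4]
```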

Fix a continued fraction [a_0;a_1,\ldots,a_n,\ldots], corresponding to the real number \alpha. Let s_k=[a_0;a_1,\ldots,a_k], which we call the kth segment. This is a fraction that is already in lowest terms, which we typically denote p_k/q_k, and call the kth convergent. We highlight, now, a few equations and inequalities concerning convergents:

\displaystyle \frac{p_{k-1}}{q_{k-1}}-\frac{p_k}{q_k} =\frac{(-1)^k}{q_{k-1}q_k}

q_k \geq 2^{\frac{k-1}{2}}

\displaystyle \frac{1}{q_k(q_{k+1}+q_k)}<\left| \alpha-\frac{p_k}{q_k}\right|<\frac{1}{q_kq_{k+1}}

The convergents of even order form an increasing sequence, and those of odd order form a decreasing sequence, with the limit \alpha in between.
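To see these facts in action, here’s a small sketch checking them for \alpha=\sqrt{2}=[1;2,2,2,\ldots]. The recurrences p_k=a_kp_{k-1}+p_{k-2} and q_k=a_kq_{k-1}+q_{k-2} used below are the standard ones (they are in Khinchin’s book; I haven’t derived them here).

```python
import math
from fractions import Fraction

alpha = math.sqrt(2)
a = [1] + [2] * 12                       # elements of sqrt(2)

p_prev, p_cur = 1, a[0]                  # p_{-1}, p_0
q_prev, q_cur = 0, 1                     # q_{-1}, q_0
conv = [Fraction(p_cur, q_cur)]
for ak in a[1:]:
    p_prev, p_cur = p_cur, ak * p_cur + p_prev
    q_prev, q_cur = q_cur, ak * q_cur + q_prev
    conv.append(Fraction(p_cur, q_cur))

for k in range(1, len(conv) - 1):
    qk, qk1 = conv[k].denominator, conv[k + 1].denominator
    # the exact difference identity between consecutive convergents
    assert conv[k - 1] - conv[k] == Fraction((-1) ** k, conv[k - 1].denominator * qk)
    # the two-sided error bound
    err = abs(alpha - float(conv[k]))
    assert 1 / (qk * (qk1 + qk)) < err < 1 / (qk * qk1)
    # even-order convergents sit below alpha, odd-order ones above
    assert (float(conv[k]) < alpha) == (k % 2 == 0)
print("all checks pass for the first dozen convergents of sqrt(2)")
```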

One of the most interesting things about continued fractions is their relation to best rational approximations. Loosely, every convergent is a best rational approximation, and vice-versa. For more specifics, see Khinchin’s book. Along the same lines, if a/b is an irreducible fraction with |\alpha-a/b|<1/(2b^2), then a/b is a convergent of the continued fraction for \alpha.

For every irrational \alpha with bounded elements, and small enough value for c, |\alpha-p/q|<c/q^2 has no integer solutions for p and q. However, if the elements are unbounded, then for any c, there are infinitely many solutions. More generally, Liouville showed that if \alpha is an irrational algebraic number of degree n (i.e., its minimal polynomial has degree n), then there is a c>0 such that for any p,q, |\alpha-p/q|>c/q^n. Thus, if an irrational \alpha is such that for every positive c, and every positive integer n, there are integers p,q with |\alpha-p/q|\leq c/q^n, then \alpha is transcendental. In a sense, this is saying that transcendental numbers have a rapidly converging sequence of rational approximations. This led to the first proofs that specific numbers were transcendental. To construct a transcendental, you can pick any collection of positive integers a_0,\ldots,a_k, then form the continued fraction [a_0;a_1,\ldots,a_k], find its denominator q_k, and choose any a_{k+1}>q_k^{k+1}. Repeating this process will generate an infinite continued fraction with rapidly growing elements that is provably transcendental. Considerations like the above, concerning just how good rational approximations can be, earned Klaus Roth a Fields Medal in 1958.

Other posts in this series

  1. The First Batch
  2. Continued Fractions
  3. The Second Batch
  4. Resources

Curious Constants (part 1 of 4)

February 19, 2009

I’m giving a talk tomorrow about a handful of mathematical constants. I’ve been working on digging through references, finding the constants I want to talk about, and seeing how much I can make sense of, and I think it’s coming into good shape. I don’t want to focus on the more common constants, like e,\pi,\phi, though, of course, they have a tendency of sneaking in, sometimes apparently out of nowhere. I’d also like not to pick specific integers to talk about, though I will end up talking about sequences and subsets of integers.

Without further ado, here’s my hand-picked collection of curious constants. I’ve truncated (not even bothered rounding) the decimals off at about 3 places, because the number itself doesn’t strike me as the interesting part. Resources are mentioned in the last post in this series.

The First Batch

1.291… & 0.704…

These are the values of the integrals \int_0^1 \frac{1}{x^x}dx and \int_1^{\infty} \frac{1}{x^x}dx. More interesting are some series representations of these integrals. We can write the first as

\begin{array}{rcl}\displaystyle\int_0^1\frac{1}{x^x}dx &=& \displaystyle\int_0^1 e^{-x\ln x}dx\\ &=& \displaystyle\int_0^1 \sum_{n=0}^{\infty} \frac{(-x\ln x)^n}{n!}dx \\ &=& \displaystyle\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\int_0^1 x^n(\ln x)^n dx \\ &=& \displaystyle1+\sum_{n=1}^{\infty}\frac{(-1)^n}{n!}\cdot \frac{(-1)^nn!}{(n+1)^{(n+1)}}\\ &=&\displaystyle\sum_{n=1}^{\infty} \frac{1}{n^n}\end{array}

Notice that if we just integrated x^x, instead of its reciprocal, the minus signs wouldn’t cancel, and we’d just get the alternating series. By the way, Ramanujan rewrote the second integral as \int_1^{\infty} \frac{1}{x^x}dx=\sum_{n=0}^{\infty}(-1)^nn^n, using a technique known as Borel summation. This won’t be the last divergent series we’ll see.
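The convergent pieces here are easy to check numerically (the Borel-summed series is another matter). A quick sketch, assuming scipy for the quadrature:

```python
from scipy.integrate import quad

integrand = lambda x: x ** (-x) if x > 0 else 1.0      # the limit at 0 is 1
head, _ = quad(integrand, 0.0, 1.0)
tail, _ = quad(integrand, 1.0, float("inf"))
series = sum(1.0 / n ** n for n in range(1, 30))       # terms die off very quickly
print(head, series)   # both ~1.2913
print(tail)           # ~0.7041
```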

1.444… & 0.0659…

While we’re on iterated exponentials (tetrations), let’s make an infinite tower. But first, some notation. Define {}^1x=x, {}^2x=x^x, and in general {}^nx=x^{({}^{n-1}x)}. So {}^nx is a tower of exponentials, with n (count ’em) xs. Then let h(x)=\lim_{n\to \infty} {}^nx, if the limit exists. That is, h(x)=x^{x^{x^{.^{.^.}}}}. Suppose that a is a point for which this limit exists. Then a=x^a, so a^{1/a}=x. What this means is that if we define g(x)=x^{1/x}, then g and h are inverse functions. This helps us identify the domain of h(x), because g(x) has a maximum value of e^{1/e}\approx 1.444. So this forms the upper limit of the domain of h(x). It turns out that the lower limit is e^{-e}\approx 0.0659. I, personally, like the number e^{1/e}, because it is the unique value a so that a^x and \log_a x have a common tangent.
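Here’s a tiny sketch of the tower converging (or crawling) for a few values of x inside the stated domain; nothing fancy, just repeated exponentiation.

```python
import math

def tower(x, n):
    """Value of the height-n tower x^(x^(...^x)) with n copies of x."""
    t = x
    for _ in range(n - 1):
        t = x ** t
    return t

for x in (math.sqrt(2), math.e ** (1 / math.e), 1.2):
    print(x, [round(tower(x, n), 6) for n in (5, 20, 200)])
# sqrt(2): the heights approach 2, since 2 = sqrt(2)**2
# e^(1/e) ~ 1.4447: the endpoint, creeping up very slowly toward e
# 1.2: converges quickly, since 1.2 sits comfortably inside (e^-e, e^(1/e))
```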

16.177… & 23.103…

Let me write \sum_n'\frac{1}{n} for the sum of the reciprocals of those natural numbers n whose decimal expansion contains no 1. Let’s first convince ourselves that this series converges. To do so, notice that there are 8\cdot 9^{k-1} values for n that have k digits, none of which is a 1. For such an n, the reciprocal is never greater than 1/10^{k-1}. Therefore, \sum_n'\frac{1}{n} is no greater than the convergent geometric series \sum_{k=1}^{\infty} 8 \left(\frac{9}{10}\right)^{k-1}. In fact, the sum is around 16.177. There was nothing particularly special about choosing to exclude the digit 1, and excluding any digit still gives a convergent series. The greatest of these sub-harmonic series is when 0 is excluded, with a sum around 23.103. The others lie between 16.177 and 23.103. This series is known as the Kempner series, after Kempner’s 1914 paper on the topic.
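The counting argument is easy to check directly for small k. A quick sketch (note that nothing here computes the 16.177; that takes far more cleverness than summing terms, since the series converges absurdly slowly):

```python
def count_without_digit_one(k):
    """Brute-force count of k-digit numbers containing no digit 1."""
    return sum(1 for n in range(10 ** (k - 1), 10 ** k) if "1" not in str(n))

for k in (1, 2, 3, 4):
    print(k, count_without_digit_one(k), 8 * 9 ** (k - 1))   # the two columns agree

# The geometric bound from above sums to 8 / (1 - 9/10) = 80, a crude cap on 16.177...
print(sum(8 * (9 / 10) ** (k - 1) for k in range(1, 500)))
```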

.577…

Perhaps the most well-known constant on the lineup, Euler’s (shared with Mascheroni) constant, \gamma, given by \lim_{n\to \infty} \left(\sum_{i=1}^{n}\frac{1}{i}-\ln n\right). I mention this mostly as a reminder for when it shows up shortly.
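For what it’s worth, the partial sums settle in at a rate of roughly 1/(2n); a trivial sketch:

```python
import math

for n in (10, 1000, 100000):
    print(n, sum(1 / i for i in range(1, n + 1)) - math.log(n))
# 0.6263..., 0.5777..., 0.5772..., heading toward gamma ~ 0.5772
```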

.261…

Before getting to what this constant is, I’d like to present Euler’s proof that the sum of the reciprocals of the primes diverges (an easy corollary being that there are infinitely many primes). Set

M=\sum_{n}\frac{1}{n}=\prod_{p}\left(1-(1/p)\right)^{-1}=\prod_{p}\left(1+\frac{1}{p}+\frac{1}{p^2}+\frac{1}{p^3}+\cdots\right)

Here we have expanded each factor \left(1-(1/p)\right)^{-1} using the usual expansion 1/(1-x)=\sum_{n=0}^{\infty} x^n, and the products are taken over all primes. This helps see why the product is the same as the harmonic series, if you stare at it for a few minutes. Now, taking the natural log on both sides of M=\prod_p \left(1-(1/p)\right)^{-1}, and using the Taylor expansion \ln(1-x)=-\sum_{n=1}^{\infty} \frac{x^n}{n}, we get

\begin{array}{rcl}\ln M &=& -\ln(1-1/2)-\ln(1-1/3)-\ln(1-1/5)-\cdots \\ &=& \phantom{+}\frac{1}{2}+\frac{1}{2}\left(\frac{1}{2}\right)^2+\frac{1}{3}\left(\frac{1}{2}\right)^3+\cdots \\ &{} & +\frac{1}{3}+\frac{1}{2}\left(\frac{1}{3}\right)^2+\frac{1}{3}\left(\frac{1}{3}\right)^3+\cdots\\ &{}& + \frac{1}{5}+\frac{1}{2}\left(\frac{1}{5}\right)^2+\frac{1}{3}\left(\frac{1}{5}\right)^3+\cdots\\ &{}& + \ddots\end{array}

Here, the rows are indexed by the primes, and the columns by the natural numbers. Next, sum down columns to obtain

\ln M=\sum_p \frac{1}{p}+\frac{1}{2}\sum_p \frac{1}{p^2}+\frac{1}{3}\sum_p \frac{1}{p^3}+\frac{1}{4}\sum_p \frac{1}{p^4}+\cdots

Thinking about the integral test, if s>1 then \sum_{k=2}^{\infty} \frac{1}{k^s}\leq \frac{1}{s-1}, so if we only sum over primes, the inequality still holds. Therefore

\begin{array}{rcl}\ln M &=& \sum_p \frac{1}{p}+\sum_{n=2}^{\infty}\left( \frac{1}{n}\sum_p \frac{1}{p^n}\right) \\ &\leq & \sum_p \frac{1}{p}+\sum_{n=2}^{\infty} \frac{1}{n(n-1)}\end{array}

Since the final sum on the right is convergent, and we know the harmonic series diverges (so \ln M “diverges”), we conclude that \sum_p \frac{1}{p} diverges.

In 1874, German mathematician Franz Mertens established the following two results (p indicates we index over primes):

  1. \sum_{p\leq x}\frac{\ln p}{p}=\ln(x)+O(1)
  2. \prod_{p\leq x}\left(1-\frac{1}{p}\right)^{-1}=e^{\gamma}\ln(x)+O(1)

He also determined that \lim_{n\rightarrow \infty}\left(\sum_{p\leq n}\frac{1}{p}-\ln(\ln(n))\right)\approx .261. He was beaten to this result, though he obtained it independently, by Ernst Meissel, who determined the same limit in 1866.
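This limit is pleasant to watch numerically; here’s a sketch assuming sympy for the prime sieve.

```python
import math
from sympy import primerange

for n in (10 ** 4, 10 ** 5, 10 ** 6):
    s = sum(1 / p for p in primerange(2, n + 1))
    print(n, s - math.log(math.log(n)))   # settles in near 0.261
```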

1.902… & 0.660…

Though the sum of the reciprocals of all primes diverges, Brun, in 1919, showed that the sum of the reciprocals of the twin primes converges. In fact,

\left(\frac{1}{3}+\frac{1}{5}\right)+\left(\frac{1}{5}+\frac{1}{7}\right)+\left(\frac{1}{11}+\frac{1}{13}\right)+\cdots\approx 1.902

This value is known as Brun’s constant. Let \pi(x) denote the number of primes less than x and \pi_2(x) denote the number of primes, p, less than x, for which p+2 is also a prime. One way to state the prime number theorem is to say \pi(x)/x\sim 1/\ln(x). Similarly, it is reasonably conjectured that \pi_2(x)/x\sim A/(\ln x)^2 for a constant A. If the events “p is prime” and “p+2 is prime” were independent, we would expect A=1. However, this seems to not be the case, and in fact A=2C_2 where C_2=\prod_{p\geq 3} \left(1-\frac{1}{(p-1)^2}\right)\approx 0.660 is the “twin primes constant.” Before discussing where this constant comes from, it is worth mentioning that computer work with twin primes in the mid ’90s led to the discovery of the Pentium bug.

Several papers have aimed at motivating the appearance of C_2 in the asymptotic estimates for \pi_2(x). The one that finally made things click for me is Caldwell’s “An Amazing Prime Heuristic” (mentioned again in the resources post), and its methods are what I discuss below. Suppose f_1,\ldots,f_k are polynomials with integer coefficients, and let’s see what we can say about the probability that they are simultaneously prime when evaluated at n. If they were independent, the probability would be the product of the probabilities that the f_i(n) were prime, which is 1/\ln(f_i(n)), by the prime number theorem. However, this is, in general, too much to ask for, and we introduce a scaling factor.

Define w(p) to be the number of values x\in \{0,\ldots,p-1\} for which f_1(x)\cdot f_2(x)\cdots f_k(x)\equiv 0\mod p. That means x gets counted toward w(p) if its value under any of the polynomials is a multiple of p. Now we recognize that the probability that p does not divide any f_i(n), for a randomly chosen n, is (p-w(p))/p. This gets divided by the probability that p does not divide any k randomly chosen integers, which is \left(\frac{p-1}{p}\right)^k. This gives us an appropriate scaling factor for each prime p, and we take the product over all of them to obtain \prod_p \frac{1-w(p)/p}{(1-1/p)^k}.

Let’s check if any of this made sense, by checking the prime number theorem. We want the probability that a random n is prime to be 1/\ln(n), so we want the scaling factor above to be 1. The only polynomial is f_1(n)=n, so k=w(p)=1 and we see that each term in the product above is simply 1. That’s comforting. To apply these ideas to the case of twin primes, let f_1(n)=n, f_2(n)=n+2. So k=2,w(2)=1, and w(p)=2 for p>2. So we write the above product as:

\begin{array}{rcl} \displaystyle\prod_p \frac{1-w(p)/p}{(1-1/p)^k} &=& \displaystyle\frac{1-1/2}{(1-1/2)^2}\cdot \prod_{p>2}\frac{1-2/p}{(1-1/p)^2} \\ &=& \displaystyle 2\prod_{p>2}\frac{p(p-2)}{(p-1)^2}=2\prod_{p>2}\left(1-\frac{1}{(p-1)^2}\right)\end{array}
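If you’d like to see the numbers, here’s a sketch (assuming sympy for the primes) that evaluates a partial product for C_2 and compares the resulting heuristic with an honest count of twin primes below 10^6.

```python
import math
from sympy import primerange, isprime

# Partial product for the twin prime constant over primes up to 10^6.
C2 = 1.0
for p in primerange(3, 10 ** 6):
    C2 *= 1 - 1 / (p - 1) ** 2
print(C2)                                   # ~0.6601...

# Compare the heuristic 2 * C2 * x / (ln x)^2 with an actual count of twin prime pairs.
x = 10 ** 6
heuristic = 2 * C2 * x / math.log(x) ** 2
actual = sum(1 for p in primerange(3, x) if isprime(p + 2))
print(heuristic, actual)   # same ballpark; an integral version of the heuristic does better
```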

0.373… & 0.247…

For any prime p, if 1/p is a repeating decimal with period k, then we can write 1/p=M/(10^k-1) for some integer M, which we rearrange to say 10^k\equiv 1\mod p. Say that 10 has order k, mod p, if k is the least value for which this equivalence is satisfied. Fermat’s Little Theorem tells us that 10^{p-1}\equiv 1\mod p (for p other than 2 and 5), and so we know that the order (and hence, length of the decimal period) will never be greater than p-1. One might call a prime p with maximal decimal period p-1 a long prime; an example is 7, whose decimal period is 142857. There’s nothing particularly special about using 10 – let us say that a has order k, mod p, if a^k\equiv 1\mod p and k is the least (positive integer) value satisfying this condition. Call a a primitive root, mod p, if it is not a multiple of p and has order p-1, mod p.

Define N_a(x) to be the number of primes no larger than x for which a is a primitive root. Artin, in 1927, conjectured that N_a(x)\sim A(a)\frac{x}{\ln x} and, furthermore, that A(a) was independent of a, and had a value \prod_p \left(1-\frac{1}{p(p-1)}\right)\approx 0.373. This can be interpreted as a sort of probability – if you pick a value for a and then a prime p randomly, the probability that a will be a primitive root for p is about 0.373. The conjecture can also be justified by thinking about how probable it is to be a primitive root.
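Here’s a sketch (leaning on sympy only for the primes) that evaluates a partial product for Artin’s constant and then checks the long-prime density empirically for a=10.

```python
from sympy import primerange

# Partial product for Artin's constant over primes up to 10^6.
A = 1.0
for p in primerange(2, 10 ** 6):
    A *= 1 - 1 / (p * (p - 1))
print(A)                                            # ~0.3739...

def order_mod(a, p):
    """Multiplicative order of a modulo p, assuming p does not divide a."""
    k, x = 1, a % p
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

# Fraction of primes (not dividing 10) below 10^4 for which 10 is a primitive root,
# i.e. for which 1/p is a "long prime" with decimal period p - 1.
primes = [p for p in primerange(3, 10 ** 4) if p != 5]
long_count = sum(1 for p in primes if order_mod(10, p) == p - 1)
print(long_count / len(primes))                     # to be compared with ~0.373
```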

While we’re talking about probabilities and densities like this, I can hardly resist mentioning abundant numbers. Remember that a number is abundant if the sum of its proper divisors is greater than the number itself. So, for example, 12 is abundant, because 1+2+3+4+6=16, while 7 is not abundant (making it deficient), because its only proper divisor is 1. What is the probability that a randomly chosen integer is abundant? Approximately 0.247.
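A brute-force estimate is easy; here’s a small sketch.

```python
def sum_proper_divisors(n):
    """Sum of the proper divisors of n, by trial division up to sqrt(n)."""
    total = 1 if n > 1 else 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

N = 10 ** 5
abundant = sum(1 for n in range(1, N + 1) if sum_proper_divisors(n) > n)
print(abundant / N)    # roughly 0.247-0.248, in line with the density quoted above
```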

Other posts in this series

  1. The First Batch
  2. Continued Fractions
  3. The Second Batch
  4. Resources