Yesterday, I related the logarithm of $\zeta(s)$ to a piecewise linear function $J(x)$. You may recall that $J$ was defined for positive reals by setting it equal to 0 at $x = 0$, and then jumping by $1/n$ whenever $x = p^n$, for some prime $p$ and positive integer $n$. At the end of the day, we got to

$$J(x) = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty} \log\zeta(s)\, x^s\, \frac{ds}{s},$$

where $a > 1$. Today, we'll analyze $\log\zeta(s)$ some more, and re-write the formula above.

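To make the jump description of $J$ concrete, here's a short Python sketch (my own, not from the original post) that computes $J(x)$ directly: sum $1/n$ over all prime powers $p^n \leq x$.

```python
from math import isqrt

def is_prime(m):
    """Trial-division primality test; fine for small m."""
    if m < 2:
        return False
    return all(m % d for d in range(2, isqrt(m) + 1))

def J(x):
    """J(x) = sum of 1/n over prime powers p^n <= x."""
    total = 0.0
    n = 1
    while 2 ** n <= x:  # 2 is the smallest prime, so this bounds n
        # largest integer b with b^n <= x (guard against float roots)
        b = int(round(x ** (1.0 / n)))
        while b ** n > x:
            b -= 1
        while (b + 1) ** n <= x:
            b += 1
        total += sum(1 for p in range(2, b + 1) if is_prime(p)) / n
        n += 1
    return total

# J(20) = pi(20) + pi(20^(1/2))/2 + pi(20^(1/3))/3 + pi(20^(1/4))/4
#       = 8 + 2/2 + 1/3 + 1/4
print(J(20))
```

This is just the definition made mechanical; nothing clever is happening, which is rather the point.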
When I introduced $\zeta$, I ended with the following formula:

$$\zeta(s) = \frac{\Gamma(1-s)}{2\pi i}\int_{+\infty}^{+\infty} \frac{(-z)^s}{e^z - 1}\,\frac{dz}{z},$$

where the bounds on that integral are supposed to represent a curve that "starts" at the right-hand "end" of the real line, loops around 0, and then goes back out the positive axis to infinity. I'm not good enough at complex line integrals at this point to say any more about this. But apparently if you are good at these sorts of integrals, using Cauchy's integral formula and things you can find the so-called "functional equation"

$$\zeta(s) = \Gamma(1-s)\,(2\pi)^{s-1}\, 2\sin\!\left(\frac{\pi s}{2}\right)\zeta(1-s).$$
If you then use the relation I mentioned previously:

$$\Gamma(s)\Gamma(1-s) = \frac{\pi}{\sin(\pi s)}$$

(well, you use this for $s/2$), and one I haven't mentioned:

$$\Gamma\!\left(\frac{s}{2}\right)\Gamma\!\left(\frac{s+1}{2}\right) = \sqrt{\pi}\,2^{1-s}\,\Gamma(s),$$

and move some symbols around, you arrive at a more symmetric equation:

$$\pi^{-s/2}\,\Gamma\!\left(\frac{s}{2}\right)\zeta(s) = \pi^{-(1-s)/2}\,\Gamma\!\left(\frac{1-s}{2}\right)\zeta(1-s).$$
Notice that if you plug $1-s$ in on the left-hand side, you obtain the right-hand side.

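If you have Python's mpmath library handy (an assumption on my part; none of this code is from the original post), you can at least spot-check this symmetry numerically at a random point:

```python
from mpmath import mp, mpc, gamma, zeta, pi

mp.dps = 30  # work with 30 decimal digits

def lhs(s):
    """The left-hand side: pi^(-s/2) * Gamma(s/2) * zeta(s)."""
    return pi ** (-s / 2) * gamma(s / 2) * zeta(s)

s = mpc(0.3, 4.0)  # arbitrary test point, away from the poles
err = abs(lhs(s) - lhs(1 - s))
print(err)  # tiny: the two sides agree to working precision
```

Not a proof, of course, but it's reassuring to watch the two sides agree to 25-plus digits.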
This function on the left-hand side apparently has poles at 0 and 1, so if we define

$$\xi(s) = \frac{s}{2}(s-1)\,\pi^{-s/2}\,\Gamma\!\left(\frac{s}{2}\right)\zeta(s),$$

then we obtain an entire analytic function satisfying $\xi(s) = \xi(1-s)$. Using the factorial relation $\Gamma(s+1) = s\,\Gamma(s)$, we can re-write $\xi(s)$ as

$$\xi(s) = (s-1)\,\pi^{-s/2}\,\Gamma\!\left(\frac{s}{2}+1\right)\zeta(s).$$
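
The re-written form avoids evaluating $\Gamma$ at its pole at 0, so it's convenient for another mpmath spot-check (again my own sketch, with mpmath assumed installed):

```python
from mpmath import mp, mpc, gamma, zeta, pi

mp.dps = 30

def xi(s):
    """xi(s) = (s-1) * pi^(-s/2) * Gamma(s/2 + 1) * zeta(s)."""
    s = mpc(s)
    return (s - 1) * pi ** (-s / 2) * gamma(s / 2 + 1) * zeta(s)

s = mpc(2.5, 1.0)
print(abs(xi(s) - xi(1 - s)))  # tiny: xi is symmetric about s = 1/2
print(xi(0))  # xi(0) = 1/2
```

The value $\xi(0) = 1/2$ falls out of $\zeta(0) = -1/2$ and $\Gamma(1) = 1$.
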
I get the impression that if you know what you are doing, then the things above aren't too hard to justify. Apparently the next part is a bit trickier. Apparently, you can write

$$\xi(s) = \xi(0)\prod_\rho\left(1 - \frac{s}{\rho}\right),$$

where the product is indexed over the roots, $\rho$, of $\xi$ (so $\xi(\rho) = 0$).

If you’ve heard anything about the Riemann hypothesis, you know that the roots (the “non-trivial” ones, I didn’t talk about the trivial ones) of $\zeta$ are a big deal. Our second formula for $\xi$ shows that they are (basically) the same as the roots of $\xi$, and so they are the $\rho$ that the product above is indexed over. The symmetric equation from earlier has a little something to say about the zeroes, and it has been shown that all of the zeroes have real part bigger than 0 and less than 1 (this region is called the “critical strip”). The hypothesis (whose truth won’t affect what we’re saying below) is that all of the zeroes have real part 1/2 (this is the “critical line”). Apparently Riemann didn’t need this hypothesis for the things in the paper that introduced it, so I don’t really have much more to say about it right now. Although, honestly, I still don’t see what all the fuss is about 🙂 The formulas we’ll get below and tomorrow work even if the roots aren’t on the critical line (unless I’m missing something important. If I am, please comment).

Anyway, back to the topic at hand. Let me try to convince you that it isn’t horribly unreasonable to think about writing a function as a product over its roots, as I’ve done above. For the sake of example, let $p(x) = 4x^2 - 12x + 8$ (or pick your own favorite polynomial). The usual way this would get factored, in all the classes I’ve ever taken or taught, is (up to permutation) $p(x) = 4(x-1)(x-2)$, showing that the roots are 1 and 2. However, if you keep the 4 in front and factor a $-1$ and a $-2$ out of the other terms, you can also write $p(x) = 8(1-x)\left(1-\frac{x}{2}\right)$. You still see all the zeroes when you write the polynomial this way. You can also see that the coefficient in the front is $p(0)$. So we’ve written $p(x) = p(0)\prod_\rho\left(1-\frac{x}{\rho}\right)$, which is the same goal as what we’re doing with $\xi$ above. Incidentally, the idea of writing a function this way was also used by Euler to establish $\sum_{n\geq 1}\frac{1}{n^2} = \frac{\pi^2}{6}$ (I’ve mentioned this briefly elsewhere).

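Here's that factoring trick as a tiny Python check, using a sample polynomial with roots 1 and 2 (my example; substitute your own favorite):

```python
def p(x):
    """A sample polynomial, 4(x-1)(x-2), with roots 1 and 2."""
    return 4 * x ** 2 - 12 * x + 8

roots = [1, 2]

def p_from_roots(x):
    """p(0) times the product of (1 - x/rho) over the roots rho."""
    value = p(0)  # the constant out front is p(0) = 8
    for rho in roots:
        value *= 1 - x / rho
    return value

for x in [-2, 0.5, 3, 7]:
    assert abs(p(x) - p_from_roots(x)) < 1e-9
print("both factorizations agree at the test points")
```

For a polynomial this is just algebra; the content of the claim about $\xi$ is that the same trick works for a suitable entire function.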
We now have two formulas for $\xi(s)$, so we can put them together to get

$$(s-1)\,\pi^{-s/2}\,\Gamma\!\left(\frac{s}{2}+1\right)\zeta(s) = \xi(0)\prod_\rho\left(1-\frac{s}{\rho}\right).$$

Recalling that our formula for $J(x)$, at the beginning, involved $\log\zeta(s)$, let’s take the log of the equation above and solve for the $\log\zeta(s)$ term:

$$\log\zeta(s) = \log\xi(0) + \sum_\rho\log\left(1-\frac{s}{\rho}\right) - \log(s-1) + \frac{s}{2}\log\pi - \log\Gamma\!\left(\frac{s}{2}+1\right).$$
The idea is now to plug this in to the formula for $J(x)$. Apparently if you do, though, you’ll have some issues with convergence. So before substituting, actually try to do the integral for $J(x)$ using integration by parts (hint: $\frac{d}{ds}x^s = x^s\log x$). The “$uv$” boundary term goes to 0 and you obtain

$$J(x) = -\frac{1}{2\pi i}\cdot\frac{1}{\log x}\int_{a-i\infty}^{a+i\infty}\frac{d}{ds}\!\left[\frac{\log\zeta(s)}{s}\right]x^s\,ds,$$
where, as before, $a > 1$. Now plug in the 5 terms we’ve got above for $\log\zeta(s)$, and you get a formula for $J(x)$. What happens to the individual terms? Can you actually work out any of the integrals?

Well, you might be able to. I’m not. Not right now anyway. But I can tell you about what others have figured out (rather like I’ve been doing all along, in fact)…

It’s clear that the $\frac{s}{2}\log\pi$ term drops out, because you divide by $s$ and then take the derivative of a constant and just get 0. The term with $\log\xi(0)$ ends up just giving you $\log\xi(0)$, which is $-\log 2$ (since $\xi(0) = \frac{1}{2}$).

The term corresponding to the term with $\log\Gamma\!\left(\frac{s}{2}+1\right)$ in it can be rewritten as

$$\int_x^\infty \frac{dt}{t(t^2-1)\log t}$$

(as if that were helpful).

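It is at least checkable numerically. For $t > 1$ we have $\frac{1}{t(t^2-1)} = \sum_{n\geq 1} t^{-2n-1}$, and integrating term by term turns the integral into a fast-converging sum of exponential integrals $E_1(2n\log x)$. A quick mpmath comparison (my own check, not from the post):

```python
from mpmath import mp, quad, e1, log, inf

mp.dps = 25
x = 10

# the integral, computed directly by numerical quadrature
direct = quad(lambda t: 1 / (t * (t ** 2 - 1) * log(t)), [x, inf])

# the same integral via the geometric-series expansion of
# 1/(t(t^2-1)): each term integrates to E_1(2n log x)
series = sum(e1(2 * n * log(x)) for n in range(1, 40))

print(direct, series)  # the two agree
```

The substitution $u = \log t$ in the $n$-th term gives $\int_{\log x}^\infty e^{-2nu}\,du/u = E_1(2n\log x)$, which is where the exponential integrals come from.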
The important terms seem to involve the function

$$\mathrm{Li}(x) = \int_0^x\frac{dt}{\log t}.$$

Of course, this integrand has a bit of an asymptote at $t = 1$, so really $\mathrm{Li}(x)$ (in Edwards’ book, anyway) is the “Cauchy principal value” of this integral, namely

$$\mathrm{Li}(x) = \lim_{\epsilon\to 0^+}\left[\int_0^{1-\epsilon}\frac{dt}{\log t} + \int_{1+\epsilon}^x\frac{dt}{\log t}\right].$$
This function is, rather famously, related to approximating the number of primes less than a given bound. In fact, tomorrow I plan on having more to say about this. But back to the terms in our integral for $J(x)$.

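Before getting back to those terms, here's a quick taste of that relationship (my sketch; conveniently, mpmath's li is exactly this principal-value integral):

```python
from mpmath import li

def prime_count(n):
    """pi(n), the number of primes <= n, by a simple sieve."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

x = 10_000
print(prime_count(x), li(x))  # 1229 primes; Li(10^4) is about 1246
```

Even at this modest size, $\mathrm{Li}(x)$ overshoots the true count by only a percent or so.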
The term corresponding to the sum over the roots $\rho$ ends up giving you

$$-\sum_{\Im\rho > 0}\left[\mathrm{Li}(x^\rho) + \mathrm{Li}(x^{1-\rho})\right].$$

But apparently the dominant term is the term corresponding to $-\log(s-1)$. It actually gives you

$$\mathrm{Li}(x).$$
So, finally, we have written

$$J(x) = \mathrm{Li}(x) - \sum_{\Im\rho > 0}\left[\mathrm{Li}(x^\rho) + \mathrm{Li}(x^{1-\rho})\right] + \int_x^\infty\frac{dt}{t(t^2-1)\log t} - \log 2.$$
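
To see that the formula actually computes something, here's a sketch (mine, with mpmath; the post doesn't do this) that evaluates the right-hand side, truncating the sum at the first 50 roots, which mpmath's zetazero supplies. $\mathrm{Li}(x^\rho)$ is evaluated as $\mathrm{Ei}(\rho\log x)$, and since every computed root lies on the critical line, $1-\rho$ is just $\bar\rho$, so each root with $\Im\rho > 0$ contributes twice a real part. Directly from the definition, $J(20) = 8 + \frac{2}{2} + \frac{1}{3} + \frac{1}{4} \approx 9.583$, and the truncated formula should land close to that.

```python
from mpmath import mp, mpf, li, ei, log, quad, zetazero, inf

mp.dps = 20

def J_explicit(x, n_roots=50):
    """Riemann's formula for J(x), with the root sum truncated."""
    x = mpf(x)
    total = li(x) - log(2)
    # the integral term (tiny for moderate x)
    total += quad(lambda t: 1 / (t * (t ** 2 - 1) * log(t)), [x, inf])
    # Li(x^rho) = Ei(rho log x); rho and 1 - rho = conj(rho) pair up,
    # so each root with positive imaginary part contributes 2*Re(...)
    for n in range(1, n_roots + 1):
        rho = zetazero(n)
        total -= 2 * ei(rho * log(x)).real
    return total

print(J_explicit(20))  # compare with 8 + 2/2 + 1/3 + 1/4 = 9.5833...
```

The truncated root sum oscillates around the truth, so don't expect many digits from 50 roots; the point is that the roots visibly steer the formula toward the actual staircase.
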
Doesn’t that make you feel better? We started with the reasonably understandable

$$J(x) = \sum_{p^n \leq x}\frac{1}{n},$$

and created the monstrosity above. I guess this is why I’m not an analyst. To me, it seems worse to write $J(x)$ as this terrible combination of lots of integrals. But apparently it’s useful in analysis to have such formulas. I guess we’ll see a use tomorrow…

Tags: gamma, mablowrimo, riemann, zeta

November 24, 2009 at 11:55 pm
