Let us consider linear fractional transformations $z \mapsto \frac{az+b}{cz+d}$ where the coefficients $a, b, c, d$ are all integers and $ad - bc = 1$. Since multiplying all of the coefficients by a constant doesn’t change the transformation, we may multiply everything by $-1$ whenever we want (for instance, to make a chosen coefficient positive). Recall, also, that writing the transformation as a matrix, $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, is not an entirely unreasonable thing to do, as composites of transformations are correctly determined by matrix multiplication. I’ll be talking about sequences of these matrices, so let my original matrix be

$$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
If $c = 0$, then the requirement $ad - bc = 1$ means that $ad = 1$, and we may choose $a = d = 1$. Then our transformation is simply $z \mapsto z + b$. Since $b$ is an integer, we could write this as $z \mapsto z + n$ for an integer $n$.

If $c \neq 0$, then there is a $q_1$ so that $a = q_1 c + r$ with $0 \le r < c$. I’ll leave you to check the following:

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} q_1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} c & d \\ a - q_1 c & b - q_1 d \end{pmatrix}.$$
Writing this right-most matrix as $M_1 = \begin{pmatrix} a_1 & b_1 \\ c_1 & d_1 \end{pmatrix}$, we are looking at another matrix with integer coefficients whose determinant is $\pm 1$. So we may repeat the process, finding integers $q_i$ and LFTs $M_i$ with $M_i = \begin{pmatrix} q_{i+1} & 1 \\ 1 & 0 \end{pmatrix} M_{i+1}$. The process must stop eventually, because the lower-left entries $c > c_1 > c_2 > \cdots$ are all non-negative integers. Eventually, one of them will be 0, and we already dealt with transformations like that in the previous paragraph.

Let $T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, and note that $T^n = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}$, which we’ve used before. Also, let $V = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ (you might note that $V^2$ is the identity matrix). Then our relation above is $\begin{pmatrix} q & 1 \\ 1 & 0 \end{pmatrix} = T^q V$. That is,

$$M = T^{q_1} V M_1 = T^{q_1} V T^{q_2} V M_2 = \cdots = T^{q_1} V T^{q_2} V \cdots T^{q_k} V M_k.$$
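This peeling-off step is short enough to sketch in code. The function below is my own illustration (the name `factor_TqV` and the tuple-of-rows representation are not from the post); it records the exponents $q_i$ and stops when the lower-left entry hits 0, assuming that entry starts out positive.

```python
def factor_TqV(M):
    """Peel M = ((a, b), (c, d)) into exponents [q1, q2, ..., qk] and a
    final matrix M_k with lower-left entry 0, so that
    M = T^q1 V T^q2 V ... T^qk V M_k,
    where T = ((1, 1), (0, 1)) and V = ((0, 1), (1, 0))."""
    (a, b), (c, d) = M
    qs = []
    while c != 0:
        q = a // c  # the Euclid-style quotient
        # (T^q V)^(-1) M has rows (c, d) and (a - q*c, b - q*d)
        a, b, c, d = c, d, a - q * c, b - q * d
        qs.append(q)
    return qs, ((a, b), (c, d))
```

For instance, `factor_TqV(((3, 1), (2, 1)))` gives quotients `[1, 2]` and the identity matrix as the leftover piece, matching $\begin{pmatrix} 3 & 1 \\ 2 & 1 \end{pmatrix} = TV\,T^2V$.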
The collection of matrices that we have been working with, 2×2 matrices with integer coefficients whose determinant is 1, tends to get called the **modular group**, and we have just shown that $T$ and $V$ can be multiplied together to give you any element of the group. More officially, they “generate” the group. You might fuss and say that technically these matrices really form the **special linear group** $SL_2(\mathbb{Z})$, and note that the modular group is really the quotient of this group where you say two matrices are the same if one is $-1$ times the other.

One interesting thing about this process is that it is really just the process you go through to find the greatest common divisor of $a$ and $c$. The standard algorithm (Euclid’s) is to note that, if $c \neq 0$, there is an integer $q_1$ such that $a = q_1 c + r_1$ where $0 \le r_1 < c$. The next step is to repeat the process, writing $c = q_2 r_1 + r_2$, with $0 \le r_2 < r_1$. Iterating, you find $q_{i+1}$ and $r_{i+1}$ so that $r_{i-1} = q_{i+1} r_i + r_{i+1}$, and eventually some $r_i$ is 0, and the process stops.
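A plain Euclid loop (a sketch, with my own function name) produces exactly the same quotients that the matrix factorization did:

```python
def euclid_quotients(a, c):
    """Run Euclid's algorithm on (a, c), recording each quotient q_i.
    Returns the list of quotients and gcd(a, c)."""
    qs = []
    while c != 0:
        q, r = divmod(a, c)  # a = q*c + r with 0 <= r < c (for c > 0)
        qs.append(q)
        a, c = c, r
    return qs, a
```

Running it on the first column of $\begin{pmatrix} 3 & 1 \\ 2 & 1 \end{pmatrix}$, `euclid_quotients(3, 2)` returns `([1, 2], 1)`: the quotients 1 and 2, and gcd 1.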

You might be a little worried (justifiably) that this process only seems to look at $a$ and $c$, leaving $b$ and $d$ out of the picture. If you look at what we did with the matrices, our process took $M$ and wrote it as $T^{q_1} V T^{q_2} V \cdots T^{q_k} V M_k$, and the quotients $q_i$ depended on $a$ and $c$. In the final step of the process, where the $c$-coefficient is 0, we ended up with an $a$-coefficient of 1, but the $b$-coefficient will be a function of the values $b$ and $d$, so those values do show up and get used, eventually. Of course, since $ad - bc = 1$, knowing one of $b$ or $d$ is enough to determine the other (assuming you know $a$ and $c$). You should be thinking that the $b$ and $d$ are the extra variables that run around in Euclid’s algorithm when you do the “extended” version.
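That extended version can be sketched like so (standard extended Euclid, my variable names), with the Bézout coefficients riding along beside the remainders:

```python
def extended_gcd(a, c):
    """Return (g, x, y) with a*x + c*y == g == gcd(a, c)."""
    old_r, r = a, c
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r  # same remainders as plain Euclid
        old_x, x = x, old_x - q * x  # extra variables updated in lockstep
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y
```

For example, `extended_gcd(240, 46)` returns a triple $(g, x, y)$ with $240x + 46y = g = 2$.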

We’ve been talking about all of this in terms of matrices, but remember that the matrices represent linear fractional transformations. The matrix $T$ is the transformation $z \mapsto z + 1$, and the power $T^n$ is then $z \mapsto z + n$. These are just horizontal translations. The matrix $V$ is the transformation $z \mapsto 1/z$. With our relations among the matrices above, we see that we could write

$$\frac{az+b}{cz+d} = q_1 + \cfrac{1}{q_2 + \cfrac{1}{\ddots + \cfrac{1}{q_k + \cfrac{1}{M_k(z)}}}}$$
Hurray for continued fractions! Of course, I feel like I set something up backwards. Plugging in $z = 0$ will give me a continued fraction for $b/d$, but I was saying that the $q_i$ come from thinking about the greatest common divisor of $a$ and $c$. Ah well.
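At least the gcd side of the story checks out numerically: the Euclid quotients of $a$ and $c$ assemble into a continued fraction whose value is $a/c$. A quick sketch (example and helper name mine), using exact rational arithmetic:

```python
from fractions import Fraction

def cf_value(qs):
    """Evaluate q1 + 1/(q2 + 1/(... + 1/qk)) exactly, inside out."""
    val = Fraction(qs[-1])
    for q in reversed(qs[:-1]):
        val = q + 1 / val
    return val

# Quotients of 3/2 from Euclid: 3 = 1*2 + 1, then 2 = 2*1 + 0.
assert cf_value([1, 2]) == Fraction(3, 2)
```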

[**Update** 20091203: The error I’m concerned about above stems from some loose dealings with the end of the iterative procedure. Letting $M_k$ be the final matrix of the process, we will get to something like $M = T^{q_1} V \cdots T^{q_k} V M_k$ where $M_k = \begin{pmatrix} 1 & n \\ 0 & 1 \end{pmatrix}$, i.e. $M_k = T^n$. So we end up saying $\frac{az+b}{cz+d} = q_1 + \cfrac{1}{q_2 + \cfrac{1}{\ddots + \cfrac{1}{q_k + \cfrac{1}{z + n}}}}$. And that does depend on the initial $z$, presumably in a manner such that plugging in a value for $z$ makes the fraction make sense.]

What I’ve said is that every 2×2 matrix with coefficients in $\mathbb{Z}$ and determinant 1 can be written in terms of $T$ and $V$. However, you might want to use $S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ instead of $V$. Going through essentially the same process as before, with some minus signs sprinkled in appropriately, one can determine that $S$ and $T$ can be used, instead of $V$ and $T$, to generate any of the matrices we are considering. What is the advantage of $S$ over $V$? One way to state the advantage, for our story, is that applying $S$ to points in the upper half plane leaves them in the upper half plane (and similarly for the lower half), whereas $V$ flips points in the upper half plane to the lower half plane. We should think of this as an advantage, because all of our Ford circles lie in the upper half plane. If you go back to yesterday’s discussion, you’ll note that I used $S$ in the factorization.
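You can see the difference at a sample point (example mine): as LFTs, $S$ is $z \mapsto -1/z$ and $V$ is $z \mapsto 1/z$, and only the former keeps a point with positive imaginary part in the upper half plane.

```python
z = 1 + 2j              # a point in the upper half plane
S_of_z = -1 / z         # the transformation for S
V_of_z = 1 / z          # the transformation for V
assert S_of_z.imag > 0  # S keeps the upper half plane
assert V_of_z.imag < 0  # V flips it to the lower half plane
```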

So, anyway, $S$ and $T$ are the only LFTs you need. You can quickly check that $S^2 = -I$ and also (not quite as quickly) that $(ST)^3 = -I$, so both represent the identity in the modular group. If you write $U = ST$, then I guess you get to say that the modular group is generated by $S$ and $U$, and can establish an isomorphism between the modular group and the free product of the cyclic group of order 2 with the cyclic group of order 3. That’s something.
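Those relations are quick to verify with 2×2 integer matrix multiplication (a sketch; the helper name is mine):

```python
def matmul(A, B):
    """Multiply 2x2 matrices given as tuples of rows."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

S = ((0, -1), (1, 0))
T = ((1, 1), (0, 1))
negI = ((-1, 0), (0, -1))

assert matmul(S, S) == negI             # S^2 = -I
U = matmul(S, T)                        # U = ST
assert matmul(matmul(U, U), U) == negI  # (ST)^3 = -I
```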

Tags: continued fraction, gcd, linear fractional transformation, mablowrimo, modular group
