# Rosio Pavoris: a blog

## Julia settee

I’ve written this, so I might as well share it.

In my earlier post on the Mandelbrot set, I mentioned the Julia sets of the quadratic polynomial f_c(z) = z² + c, where c is a given (constant) complex number and z ranges over the points of the complex plane. Because I wanted to visualise how those Julia sets change as c varies, I’ve written a short program to do that for me.
You can find it here. As usual, you’ll need Allegro, and the compilation instruction is on the first line.

What it does is take two complex numbers as parameters, plus the number of steps it should take to go from the first to the second. At each step, it will calculate and display the Julia set of the quadratic polynomial with that complex number as c, and hopefully your computer is fast enough that the successive Julia sets look like an animation.
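In outline, it does something like this (a Python sketch of the idea, not the actual C source; the grid size and iteration cap below are arbitrary stand-ins for the ZOOM and ITERS defines):

```python
# A sketch of the idea: step c linearly from one endpoint to the other,
# and compute an escape-time grid for f(z) = z^2 + c at each step.
def julia_grid(c, size=21, iters=30):
    """Escape-time counts over the square from (-2, 2) to (2, -2)."""
    grid = []
    for row in range(size):
        line = []
        for col in range(size):
            z = complex(-2 + 4 * col / (size - 1), 2 - 4 * row / (size - 1))
            n = 0
            while abs(z) <= 2 and n < iters:
                z = z * z + c
                n += 1
            line.append(n)  # slower escape -> higher count -> brighter pixel
        grid.append(line)
    return grid

c1, c2, steps = complex(-0.8, -1), complex(-0.8, 1), 20
for step in range(steps + 1):
    c = c1 + (c2 - c1) * step / steps
    frame = julia_grid(c)  # the real program draws each frame with Allegro
```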
For example, if you invoke it as:

./julia -0.8 -1 -0.8 1 200

you’ll see the following:

Though probably not at the same speed. I’ve made no effort to maintain a certain frame rate; the whole thing moves as quickly as your CPU can keep up, because I just wanted a visualisation of how Julia sets change, not a screensaver. If it’s moving too slowly for you, you can try reducing the number of steps, or lowering the numbers in the ZOOM or ITERS #defines, though the first one will make the image smaller and the second will make it darker. If you aren’t interested in the window title, you can also remove the snprintf and set_window_title steps for a significant speed-up.
If it’s too fast, you can do the reverse, or you can build in a delay with Allegro’s install_timer and rest, or POSIX’s usleep or nanosleep.

(Once it’s done, it will just pause at the last Julia set. Press any key to close it. If you want to close it before it’s done, you’ll have to kill it manually.)

The interesting points to explore are the ones inside the Mandelbrot set, as anything else will just be Fatou dust (though you’ll probably still be able to see it because of the grey). For those points, the salient area is the one within 2 unit lengths of the origin, which is why the field displayed ranges from (-2, 2) in the top left corner to (2, -2) in the bottom right (or probably (-2, -2) to (2, 2), I don’t remember). If you need a bigger plane, replace all instances of ZOOM * 4 with ZOOM * (bigger number), and all instances of ZOOM * 2 with ZOOM * (half of bigger number) (if you want to keep the origin in the center of the window).
If you actually want to save the animations, man 3alleg save_bitmap and assemble the images yourself in something like the GIMP. I initially started out doing it this way, but animated GIFs get really big really quickly, so I went with this instead.

Enjoy.

## Mandelbrots

I was bored, so I made this.

Basic introduction to the Mandelbrot set and what this image represents follows.

## Xlib hates me

Having finished another popsci book on chaos theory recently (Ian Stewart’s Does God Play Dice?), I thought it’d be an interesting exercise to visualise the Lorenz attractor, and since it’s been a while since I’ve done anything new in programming, to take the opportunity to get into Xlib, the X Window System C library. Results aren’t very encouraging.

I mean, I got something to work easily enough, but any attempt at introducing color beyond black and white for clarity fails miserably and in non-deterministic ways. Eventually I gave up and redid it using something I know.
Compare:

(It’s prettier animated, so do compile the code yourself and see.)
In both cases, the screen represents the Cartesian plane (X-axis horizontal, Y-axis vertical, origin right in the center; one unit is ten pixels). In the Xlib version (left) the Z component is ignored entirely (so it’s really a projection of the attractor onto the Cartesian plane), in the Allegro version (right) some attempt at representing it using shades of gray has been made, with z=0 being black and z=55 being white (though because it is drawn with no real care, it will happily scribble dark lines over light ones if it has to).
You can mess with the variables and starting condition to see how it behaves, or swap around some Xs and Ys and Zs to get different angles, and at least in the Allegro version, messing with color is trivial enough.
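For reference, the system being drawn can be sketched like this (a Python sketch with the classic σ, ρ, β parameters; the step size and starting condition are my own choices, not necessarily what the posted code uses):

```python
# The Lorenz system, integrated with a crude Euler step.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt = 0.001

x, y, z = 1.0, 1.0, 1.0  # arbitrary starting condition
points = []
for _ in range(50000):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    points.append((x, y, z))  # plot (x, y), shade by z, as described above
```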

Which brings me to my question: does anyone know of decent introductions to Xlib? The Internet is full of tutorials, and as usual, all of them seem to suck. I know Xlib isn’t really supposed to be used directly, but I want to.

## Quadratic spline interpolation

You’ve had this problem before: you have a bunch of data points, and you want to interpolate between them.
For various reasons, higher order polynomial interpolation (where you try to find an nth-degree polynomial through n + 1 of your data points) can be a bad idea, so you decide that rather than using a simple equation, you’ll use a series of them to connect your data points. These equations are splines, and the simplest form of spline interpolation is just, well, connecting your data points directly:

That’s pretty ugly, though. Is there a way to achieve something like this:

instead?
Yes, obviously, and one of those ways is to use quadratic splines instead. Let’s use a simpler example, though. Suppose we only have four data points, (x0, y0) through (x3, y3):

The black dots are our actual data points, the red lines are our linear splines. What we’d actually like, though, is this:

Turns out that’s not that hard to do. As you can see, every spline is a quadratic equation, which obviously is of the form f(x) = ax² + bx + c. So each spline equation has three unknowns (a, b, and c), and there are three splines, for a total of nine unknowns (let’s call them a1 through a3 and so on).

Since two points are known for each spline equation, that gives us the following six equations:

$\begin{array}{rcl}a_1x_0^2+b_1x_0+c_1&=&y_0\\a_1x_1^2+b_1x_1+c_1&=&y_1\\a_2x_1^2+b_2x_1+c_2&=&y_1\\a_2x_2^2+b_2x_2+c_2&=&y_2\\a_3x_2^2+b_3x_2+c_3&=&y_2\\a_3x_3^2+b_3x_3+c_3&=&y_3\end{array}$

To solve for nine unknowns, obviously we need nine equations. So what else do we know?

Well, the reason the linear spline interpolation looks like crap is the sharp breaks at the spline edges, so we would like our neighboring quadratic splines to have the same slope at the point they share. In other words, if our spline equations are f, g, and h, we want the derivative of f to equal the derivative of g at the point (x1, y1), and we want the derivative of g to equal the derivative of h at the point (x2, y2).
The derivative is easy enough to find:

$f'(x)=2ax+b$

Filling in, this gives us two more equations:

$\begin{array}{rcl}2a_1x_1+b_1&=&2a_2x_1+b_2\\2a_2x_2+b_2&=&2a_3x_2+b_3\end{array}$

Or equivalently:

$\begin{array}{rcl}2a_1x_1-2a_2x_1+b_1-b_2&=&0\\2a_2x_2-2a_3x_2+b_2-b_3&=&0\end{array}$

Which brings our total to eight equations. We aren’t going to squeeze another legitimate equation out of this, so let’s just fill in one of the unknowns ourselves. If we make one of the as equal to 0, one of the quadratic splines becomes a linear spline, which is fine. Let’s take a1 for simplicity’s sake.
This enables us to construct the following matrix:

The first three columns are the as, the next three the bs, the next three the cs, and the final column will hold the solutions after reduction.
Filled in and solved for our particular dataset, the reduction yields:

$\begin{array}{lll}a_1=0,&b_1=1.5,&c_1=1.5\\a_2=-1.25,&b_2=9,&c_2=-9.75\\a_3=1.125,&b_3=-14.75,&c_3=49.625\end{array}$

Which gives us the following equations for our splines:

$\begin{array}{rcl}f(x)&=&1.5x+1.5\\g(x)&=&-1.25x^2+9x-9.75\\h(x)&=&1.125x^2-14.75x+49.625\end{array}$

Obviously this is a lot of work, but it’s mechanical work that doesn’t require a lot of judgement. Which is why I’ve written this Python script to do it for you. Feed it data points and it’ll produce gnuplot code to plot your splines:

$ python qsi.py < data.txt
plot 1.000000 <= x && x <= 3.000000 ? 0.000000 * x * x + 1.500000 * x + 1.500000 :\
3.000000 <= x && x <= 5.000000 ? -1.250000 * x * x + 9.000000 * x + -9.750000 :\
5.000000 <= x && x <= 9.000000 ? 1.125000 * x * x + -14.750000 * x + 49.625000 : 0/0 notitle

As you can tell, it’s not necessarily gorgeous, but it (probably) works, and it’s not like anyone has to see the code itself.
Format for the input file is as you’d expect: two numbers per line, the first being x and the second y, sorted by x. If gnuplot’s output looks jagged, increase the sampling (set samples 1000).
And if it doesn’t work, fix it.
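For the curious, the linear-algebra core of such a script can be sketched in plain Python (this is a reconstruction, not the actual qsi.py; the data points are the ones implied by the output above):

```python
# Build the nine equations for data points (1,3), (3,6), (5,4), (9,8)
# and solve them by Gauss-Jordan elimination.
def solve(m):
    """Reduce an augmented matrix in place; return the solution column."""
    n = len(m)
    for col in range(n):
        # partial pivoting: bring the largest entry in this column up
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        d = m[col][col]
        m[col] = [v / d for v in m[col]]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [row[-1] for row in m]

xs, ys = [1, 3, 5, 9], [3, 6, 4, 8]
rows = []
# columns: a1 a2 a3 b1 b2 b3 c1 c2 c3 | right-hand side
# interpolation: spline i passes through points i and i+1
for i in range(3):
    for x, y in ((xs[i], ys[i]), (xs[i + 1], ys[i + 1])):
        row = [0.0] * 10
        row[i], row[3 + i], row[6 + i], row[9] = x * x, x, 1, y
        rows.append(row)
# smoothness: equal derivatives at the two interior points
for i in range(2):
    x = xs[i + 1]
    row = [0.0] * 10
    row[i], row[3 + i] = 2 * x, 1
    row[i + 1], row[3 + i + 1] = -2 * x, -1
    rows.append(row)
# the extra constraint: a1 = 0
row = [0.0] * 10
row[0] = 1.0
rows.append(row)

coeffs = solve(rows)
print(coeffs)  # a1..a3, b1..b3, c1..c3
```

The solution matches the coefficients in the gnuplot output above.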

Edit: In light of overwhelming demand, this is a script that interpolates using a higher-order polynomial, as mentioned above. Here’s how the approaches compare for our sample dataset:

This script will fail if you only have one datapoint and its x value is 0, but everything else should work.

## Strange attractor

You know Sierpiński gaskets, right? I used one in my Christmas tree last December. They’re fractals created by taking a triangle, connecting the midpoints of the sides to divide it into four, removing the middle one, and then repeating that on the remaining triangles, ad infinitum (literally). They have an area of 0 and a Hausdorff dimension of log2(3).

There’s another, more interesting way of constructing them, though: take the three corner points of a triangle, and a random starting point x. Roll a three-sided die1 to select a random corner point, and mark the midpoint between that point and x. Then, this midpoint becomes x. Repeat forever.
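The whole construction fits in a few lines (a Python sketch that only generates the points; the linked script uses PIL to actually draw them):

```python
# The chaos game: repeatedly jump halfway toward a randomly chosen corner.
import random

corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]  # equilateral triangle
x, y = random.random(), random.random()  # random starting point

points = []
for _ in range(10000):
    cx, cy = random.choice(corners)  # the three-sided die
    x, y = (x + cx) / 2, (y + cy) / 2  # move to the midpoint and mark it
    points.append((x, y))
```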

It turns out the Sierpiński gasket is the attractor for this system. I’ve written a Python script to save you some paper and a large number of pencils.2 Here is the result I got after 10,000 iterations:

To run the script, you’ll need to have the Python Imaging Library installed. It takes three optional arguments: the side of the triangle in pixels (defaults to 1,000), the number of iterations (default 10,000), and the output filename (default out.png).

Strange attractors are fun. Coming up: more of the same.

Edit: If you’d prefer something you can see on the screen to something that dumps to a file, this may interest you. You can change the WIDTH and HEIGHT #defines to your actual resolution if you like (or basically any value, really; it should produce an equilateral gasket for any realistic resolution, though it might not for odd ones, including ones that are higher than they are wide).
You’ll need (besides a C compiler) the Allegro libraries. If you’re using a Debian-based distro, the package is liballegro-dev, IIRC, or you can get them here.

1 Or a six-sided one where you divide the result by two, rounding down, if you like.

2 It’s not exactly the same thing: raster images don’t have an infinite resolution, IEEE floating point numbers don’t have infinite precision, and you (probably) don’t have the patience to let your computer run forever.3

3 If you do, consider cracking tripcodes instead.

## Here’s a math problem for you

$\frac{64^8!}{64^{8n}(64^8-n)!}=\frac{1}{2}$

Solve for n.

Edit: This is basically the reverse birthday problem, with the fixed probability being 50%. Just applying the approximation formula is a bit easier than working out the problem above exactly:

$n(p)\approx\sqrt{2\times 64^8\times \ln\left(2\right)}\approx 19753662$
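You can check the arithmetic directly:

```python
# The birthday-problem approximation above: draws from 64^8 possibilities
# before a collision becomes more likely than not.
import math

n = math.sqrt(2 * 64 ** 8 * math.log(2))
print(int(n))  # 19753662
```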

Which is much lower than I expected and also crap. It means that the algorithm I’m using for my work-in-progress distributable tripcode searcher is broken. There are some obvious ways to fix it, but finding a way that’s both good and not likely to slow things down too much1 (though obviously the bottleneck is still going to be crypt itself) requires some thought.

1 Right now it’s running at about 330,000 tripcodes per second on my laptop, which is over four times the speed of tripper+ and over sixteen times the speed of my Haskell tripcode implementation, so I guess some slow-down wouldn’t kill it.
Though Asztal claimed 4 million tripcodes per second with Tripcode Explorer on his machine, so I suppose there’s still room for improvement, too.

## The Sieve of Eratosthenes

Prime numbers are ridiculously important in cryptography, and I recently found myself needing a way to generate them (for a toy implementation of the Diffie-Hellman key exchange algorithm). It never ceases to amaze me how few languages actually have libraries for generating prime numbers, or even just for primality testing.
Because I didn’t feel like thinking, and I didn’t need incredibly massive primes, I just wrote a quick implementation of the sieve of Eratosthenes, which is an interesting enough algorithm that I thought I’d share it (this is first-year stuff even for us, but I guess I have non-programmers in my audience).

The basic idea is that you start with a list of the integers from 2 up to your limit. You take the smallest number in the list (2) and cross off all of its multiples (except 2 itself). You then repeat this for each remaining number, in order, and what you’re left with is exactly the list of primes.

Wikipedia has a visualisation which could be clearer but is still pretty good:

The implementation (in Python) is dead simple:

def sieve1(n):
    l = range(n)
    l[0], l[1] = None, None
    for i in xrange(int(__import__('math').sqrt(n)) + 1):
        if l[i] is not None:
            j = i * 2
            while j < n:
                l[j] = None
                j += i
    return filter(None, l)

We could just use range(2, n) and then work with an offset, but the convenience of having the indices equal to the values at those indices is too useful to complain about the extra memory required for two additional integers. Note also the obvious optimisation of only going up to sqrt(n) + 1 (no apologies for the unorthodox import).
filter(None, l) gets rid of the Nones in our list. Passing None as the function argument just applies the identity function, which has the effect of getting rid of None, False, 0, and empty lists, tuples, and dictionaries (anything which would evaluate to False in an if statement).

This is nice, but it does have the disadvantage of having to generate the whole list every time you want more primes. You can’t just input a list of pregenerated primes and have it continue from there.
The fix is easy enough:

def sieve2(n, primes = [2, 3]):
    if n <= primes[-1]:
        return filter(lambda m: m <= n, primes)

    offset = primes[-1] + 1
    l = range(offset, n)

    max = int(__import__('math').sqrt(n))
    for p in primes:
        if p > max:
            break
        i = p * 2 - offset
        while i < 0:
            i += p
        while i < len(l):
            l[i] = None
            i += p

    for k in xrange(max - offset + 1):
        if l[k] is not None:
            i = k * 2 + offset
            while i < len(l):
                l[i] = None
                i += l[k]

    l = filter(None, l)
    primes.extend(l)

    return primes

This function uses memoisation, and takes advantage of a feature of Python’s function defaults which some believe to be a bug: if mutable default arguments are altered, they will remain altered for the next time the function is called.
This is a side-effect of the fact that a function stores its default arguments in a single place (a tuple called func_defaults, an attribute of the function object), which isn’t duplicated into a working copy when the function is called. Most default arguments will be things like integers and strings, which are immutable, so most people never even run into this. Being able to use it for memoisation is really handy, though. Alternatively, you could just use a global variable and access it from inside the function.
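A minimal illustration of the effect (in Python 3, where the tuple is spelled __defaults__ rather than func_defaults):

```python
# The persistent-default behaviour in miniature.
def remember(x, seen=[]):
    seen.append(x)
    return seen

print(remember(1))  # [1]
print(remember(2))  # [1, 2] -- the default list survived between calls
print(remember.__defaults__[0])  # [1, 2]
```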

The algorithm is basically the same, except that if we’ve already generated the primes requested, we can just return them. Then we construct a list of all integers larger than our largest prime, and sieve it with our existing primes (now we really do have to work with an offset). Then we just sieve the rest classically. At the end, we add our newly sieved list to our old primes, and return that.
Next time we invoke the function, primes will still have the new primes. You can access them from outside the function at sieve2.func_defaults[0].

Evaluating sieve2(100) and then sieve2(1000) will be slower than just evaluating sieve1(1000), but it will be faster than evaluating sieve1(100) and then sieve1(1000). And if you happen to have a large number of pregenerated primes, you can save some more time.

The sieve of Eratosthenes is just one of a number of sieve-based prime number generators, but it’s the oldest one, and it compares favorably to modern improvements.

## Diffie-Hellman Key Exchange

I mentioned Diffie-Hellman key exchange in the context of asymmetric cryptography. I think it’s time to look at the algorithm a bit more closely.

As usual, Alice and Bob are trying to securely exchange some information, and they’re trying to agree on a key they can use for a symmetric algorithm. Perhaps they don’t know about asymmetric encryption, or they’re trying to exchange too much data for it to be feasible, or they need a stream cipher. Either way, they want to use symmetric encryption.
They have no secure way to exchange keys, because Eve is listening. After all, this is why they want to use the symmetric cipher as well. This is a very common situation on the internets.
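As a preview of where this is going, the exchange itself only takes a few lines (a sketch with toy numbers; a real deployment uses a prime hundreds of digits long):

```python
# Diffie-Hellman with toy parameters, for illustration only.
import random

p, g = 23, 5  # public knowledge: a prime modulus and a generator

a = random.randrange(1, p - 1)  # Alice's secret exponent
b = random.randrange(1, p - 1)  # Bob's secret exponent

A = pow(g, a, p)  # Alice sends this to Bob, in the clear
B = pow(g, b, p)  # Bob sends this to Alice, in the clear

# Each side raises what it received to its own secret exponent:
alice_key = pow(B, a, p)  # (g^b)^a mod p
bob_key = pow(A, b, p)    # (g^a)^b mod p
assert alice_key == bob_key  # the shared key; Eve only ever saw p, g, A, B
```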

## 0.9999999999999999999999999…

This is something most people will have seen in high school, but since this is neat and some people haven’t (like me, though obviously I skipped a year) and it’s a discussion that comes up surprisingly often for a concept so simple, I thought I’d talk about it:
The repeating decimal 0.9999999999999… (which can also be written as 0.(9), which I’ll be doing from now on), and why it’s equal to 1.

This usually comes up in the context of limits.
A lot of people have misgivings over it, usually because they don’t really understand infinity. Maybe they instinctively believe there has to be a last 9, so it doesn’t go on forever, or they don’t think an infinitely small number really is equal to 0, or they feel that a single number can only have a single decimal expansion (which, for 1, would be 1.00000…, I guess).

The simplest and least controversial way to show 0.(9) = 1 seems to be:

$\begin{array}{rcl}\frac{1}{3}&=&0.(3)\\\frac{1}{3}\times 3&=&0.(3)\times 3\\1&=&0.(9)\end{array}$

(Where 0.(3) is another way of writing 0.333…, as you’ll remember.)

If this seems too easy, or you feel like this just moves the problem to whether or not 0.(3) is equal to 1/3, here’s a fancier one:

$\begin{array}{rcl}x&=&0.(9)\\10x&=&9.(9)\\10x-x&=&9.(9)-0.(9)\\9x&=&9\\x&=&1\\0.(9)&=&1\end{array}$

The bit that may not be immediately obvious is why 0.(9) * 10 = 9.(9), in the second step.
When you multiply a number by 10 in a base 10 system, you move the decimal separator a step to the right. The number of 9s to the right of the separator is infinite, and ∞ – 1 is still ∞, so all that happens is that a nine gets moved to the position before the period.

There are other, less obvious proofs, but they’re not that important.

The consequences aren’t as fascinating as all that (numbers don’t have a unique decimal expansion, the same principle applies in every base, not just base 10 (0.11111111… = 1 in binary, for example)), and practical applications are few (I’ve been told the Cantor set makes use of it, and you could say Zeno’s paradoxes are kind of like it), but by now the question has become a classic internet argument for reasons that are entirely beyond me, so I thought it was worth addressing.
At least I got to use Mimetex again.

## More John Allen Paulos

I finished these ages ago, but I never got around to reviewing them.
John Allen Paulos is the guy who wrote Innumeracy, as I’m sure you’ll remember.

Once Upon A Number: The Hidden Mathematical Logic of Stories talks about the relation between statistics and storytelling, in a rather loose sense. He discusses the difference between information and meaning, and how logic and language hang together.
It’s hard to describe, obviously, but quite entertaining to read. It’s sprinkled liberally with random math problems, and along the way he also talks about how probability affects religion, which was a nice unexpected bonus (Paulos, like most intelligent, educated people, is an atheist).

The book’s only about two hundred pages, but it’s definitely worth picking up. It’s not as good or important as Innumeracy, but still quite awesome.

A Mathematician Plays the Market (one edition is titled “A Mathematician Plays the Stock Market”, for some reason) is a completely different book. It describes Paulos’ real-life experience with the stock market, and how he lost a significant sum of money in the WorldCom debacle in 2002, due to a combination of poor planning, bad luck, and ignoring his own advice.

He explores the psychological driving forces behind human decision-making when it comes to probability in general, and how they apply to the stock market itself.
Along the way, he explains some things about how the stock market itself works, and why stock market analysts are a boil on the face of society, and economists are mostly idiots.
Well, maybe he didn’t intend to say that, but it’s quite clear from his writing.

Of course, there are the usual mathematical problems as well, but significant parts of them (as well as his plot suggestion for a movie centered around probability) have just been lifted from Once Upon a Number, which was disappointing.
Still, it’s a very interesting book, and it’s one that should be required reading for anyone planning on buying some stock.

## Imagining Numbers

Imagining Numbers (Particularly the Square Root of Minus Fifteen), by Barry Mazur, is about the history and mathematics of imaginary numbers, and how mathematical imagination lines up with the more classic, “poetic” imagination.
That’s quite an ambitious undertaking, and I don’t think the book quite lives up to it.

Maybe I just have a really idiosyncratic way of looking at poetry, but most of the poems he brings into play don’t seem very interesting to me at all, and his interpretations of them strike me as too personal to be of much use in this general kind of topic. I could be wrong.

Either way, the bit I bought the book for was, of course, the history of mathematics, and the mathematics itself.
It’s possible there just isn’t a lot of history to imaginary numbers, but I was disappointed to find he only talks about a handful of European mathematicians, and never even mentions similar concepts in other civilisations. Maybe there just aren’t any.
The mathematics themselves are kind of all over the place, too. It’s like Mazur either couldn’t decide between an entry-level book and a “proper” work on mathematics, or he just got tired of explaining things somewhere along the way. He devotes most of a chapter to very tediously explaining the associative and distributive properties of addition and multiplication, and later on just breezes past important concepts with a simple “Here is an exercise for you”.

The various exercises throughout the book are pretty interesting and fun to do, though it does mean it’s hard to read it between classes and on the train and whatnot, which I tend to do.
Still, figuring out what $\left(\frac{1+\sqrt{-3}}{2}\right)^3$ equals (and what that means), while not particularly hard, is the type of mathematics I haven’t been able to do in a long time, and it’s a nice change of pace.

So, on the whole, I thought Imagining Numbers was a pretty good book, though I doubt most people would enjoy it much. It’s not as good as some other popular science type mathematics books I’ve read, but still.
I do think it could have been much better if it had been twice as long, though. It’s about 230 pages (not including notes), and it feels kind of superficial and rushed in places.

## √2 and irrationality

I like this proof. It’s simple and easy to follow, but it still gives an idea of the power of mathematics. It’s the sort of thing most people probably saw in high school, but most people forget.

What we’re trying to prove is that the square root of 2 (1.4142135623&c.) is an irrational number. A number is irrational if it cannot be expressed as a fraction; that is, the ratio of two whole numbers.
To prove this, we’ll use a reductio ad absurdum—we’ll assume √2 is rational, and we’ll see if this leads us to a contradiction.

If it is rational, the expression

$\sqrt{2}=\frac{A}{B}$,

where A and B are integers without common divisors, must be true.
Squaring both sides, we get

$2=\frac{A^2}{B^2}$.

Let’s also multiply both sides by B².

$2B^2=A^2$

So, we clearly see A² is even, and since the square of an odd number is always odd itself (right?), it follows that A itself is even as well.
So A is the double of a whole number, let’s say C. Let’s substitute C in the equation.

$2B^2=(2C)^2=4C^2$

Dividing both sides by 2, we get

$B^2=2C^2$

So B is an even number too!
We’ve said that A and B didn’t have any common divisors, and if they’re both even they have 2 as a common divisor, so this is clearly a contradiction. It follows that √2 cannot be expressed as the ratio of two whole numbers.
As such, √2 is irrational. ~~Quantum electrodynamics~~ Quod erat demonstrandum.

By posting this, I’ve probably done someone’s homework for them. Oh well.
Another interesting thing about √2, and really the square root of any positive integer that isn’t a perfect square, is that it can be expressed as a periodic continued fraction.

$\sqrt{2}=1+\frac{1}{2+\frac{1}{2+\frac{1}{2+\frac{1}{2+\frac{1}{2+...}}}}}$

In a sense, they are the simplest of the irrational numbers. This is rather beyond my ability to prove at the moment, though.
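You can at least watch the convergence by truncating the fraction (a quick sketch using exact rationals):

```python
# Truncations of the continued fraction above converge on sqrt(2).
from fractions import Fraction

def sqrt2_convergent(depth):
    """1 + 1/(2 + 1/(2 + ...)), with `depth` nested 2s."""
    x = Fraction(2)
    for _ in range(depth - 1):
        x = 2 + 1 / x
    return 1 + 1 / x

print(sqrt2_convergent(1), sqrt2_convergent(4), float(sqrt2_convergent(12)))
```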

## Innumeracy

Innumeracy, by John Allen Paulos, is about (surprisingly) innumeracy, or mathematical illiteracy. Distinct from the more pathological dyscalculia, it instead implies something closer to functional illiteracy: an inability to grasp simple mathematical concepts, not because one isn’t smart enough or the concepts are too arcane, but simply because whatever mathematical skills the person once might have had have eroded from lack of use.

Most of the book deals with simple probability and how misunderstanding it is both quite harmful (both to individuals and to society as a whole) and extremely widespread.
The problem, as Paulos sees it, is that it’s harder and harder to get away with being (functionally) illiterate, but society almost encourages innumeracy, with a lot of people seeing no shame in declaring they’re “not a math person” (even taking some perverse pride in it, sometimes), in part because so many people see mathematics as dry and boring, and not a good avenue for creativity. Much of the blame falls on shitty education, of course (and it’s getting worse).

It’s a great little book. Despite the very serious subject, Paulos manages to keep a light-hearted tone, and manages to be pretty engaging and funny. He clearly loves mathematics, and manages to convey this love to the reader quite readily.
There are a lot of small math problems interspersed throughout the book, and even though I didn’t have any trouble with them (and nobody should, really), I can clearly imagine the average person just not getting them at all. And that, of course, is why everyone should probably read this.

## Oh yeah?

Well I have an Erdős number of infinity! O\__/O

## The Emperor’s New Mind

This book took far longer to finish than it should have. This is in large part due to the silly conclusion Penrose tries to reach, which just makes my brain bleed.
The thesis of the book is simply this:

Consciousness is not deterministic, therefore the brain works by the grace of God through quantum.

It starts promisingly enough, with an explanation of Turing machines, computability, and the Turing test. Having explained these concepts, Penrose then tries to argue that the brain is, in fact, not a deterministic Turing machine.
Central to this claim seem to be the Chinese Room experiment and similar thought experiments (which I’ll grant can be difficult to grasp properly), and various minor things like “flashes of insight” (which clearly can’t be explained deterministically!) and whatnot.

Fortunately, he quickly abandons this line of reasoning and starts talking about mathematics and quantum physics for a few hundred pages. I’m not sure why he does this, since neither has any relevance to the subject at hand, but I’m not complaining.
He tries to return to the brain in the final chapters, but he just makes an ass of himself in the process.

Penrose doesn’t understand psychology, physiology, fetal development, evolution or natural selection (at some points coming perilously close to endorsing ID), the (often counter-intuitive) capabilities of actual computers, or cognitive science, but he tries to venture into each of these fields to make his point and fails spectacularly.
His whole point is essentially a giant, infuriatingly dense argument from incredulity and personal pride (the human brain can’t be a deterministic Turing machine, that would make it too common!), and his attempts to involve quantum physics are more reminiscent of Deepak fucking Chopra than of a theoretical physicist of Penrose’s stature.

Now, does this make The Emperor’s New Mind a bad book? Well, yes. Let me rephrase.
Does this mean The Emperor’s New Mind isn’t worth reading? Absolutely not.

Like I said, most of the book is just seemingly irrelevant stuff about mathematics and quantum physics, and it really is quite interesting. It’s worth keeping in mind that Penrose isn’t just some random woo artist, but an accomplished mathematician and actually a rather competent theoretical physicist.
He talks about Turing, fractals (including the Mandelbrot set), Penrose tilings, the history of physics, and plenty of fascinating concepts in theoretical physics ranging from well-known to rather obscure. If you’re willing to gloss over his forays into cognitive science and AI, and maybe skip the last chapter entirely, it’s actually a very good read.

As far as philosophy of mind goes, though, I’d just leave that to people like Daniel Dennett.

## The Nothing That Is

I read too quickly. The Nothing That Is, by Robert Kaplan, is about the history of zero.

Most of the book is, obviously, about the concept of zero in mathematics. It starts with the Sumerians, who came up with the concept, and then goes back and forth between the Greeks and the Indians, to figure out who came up with the symbol for it, and at which point it went from a type of punctuation to an actual number.
He pauses briefly on the Mayans and their psychopathic obsessions regarding zero and its significance in their calendar, and then moves on to Western Europe. It took a ridiculous amount of time for zero to be accepted as an actual number, apparently.

Like Barrow in The Infinite Book, Kaplan has to talk about religion and theology a lot because of the nature of the subject matter, but unlike Barrow, he manages to remain neutral about it; not the faux neutrality that affords theology the same kind of credibility reality-based philosophies deserve, but actual neutrality, examining where the march of zero was slowed down because of it, and where it was accelerated.

Near the end, he tries to move away from mathematics and into physics, but it doesn’t really work. He tries to crowbar the concept into a number of places where it doesn’t belong, and is quickly reduced to weak philosophising.
There’s a chapter on the psychological implications of zero that’s really just painful to read as well.

Still, those are only a small part of the book, and the vast majority of it is extremely interesting and very well-written. Somewhere along the way he manages to talk about every mathematical concept twelve years of education tried to address, and explain it in a way that made considerably more sense than anything our teachers ever tried to tell us.

If you’re at all interested in history or mathematics (but aren’t an expert mathematician, probably), you’ll enjoy this book. Buy it.

## The Infinite Book

I managed to finish a book before buying new ones!
The Infinite Book, by John D. Barrow, tries to explain the concept of infinity in several fields, and talks about how it has historically been regarded.

The first half of the book deals with infinity in mathematics. It starts with the obvious — Zeno’s paradoxes — and works its way through the history of mathematics all the way up to Georg Cantor and his infinite sets (א is a neat symbol), taking care to introduce complicated concepts in a way anyone could understand.

The second half deals with infinity in physics: infinite density, temperature, &c. in singularities, the infinity of space and time, and the infinity of the multiverse. It touches on things like the Big Bang (obviously), whether or not the universe will continue to expand forever, time travel, &c.
All of it is at least moderately interesting, though it does get repetitive.

The final chapter tries to philosophise a bit about what life would be like if we could live forever, in a stoner stream-of-consciousness kind of way. Barrow may be a good mathematician and theoretical physicist (though if he is, the scope of this book didn’t exactly allow him to show it off), but he’s no great philosopher.

But other than that, it was a pretty decent book. It made for easy reading, but it doesn’t treat readers like idiots, which is a hard balance to find. I certainly learned a few things.

One thing that did bother me, though: he touches on theology rather more than I thought was needed (though some mention is obviously going to be necessary, given the subject matter and the historical context), and he seemed to be extremely careful not to comment on its inanities.
Dunno. Maybe I’m more sensitive to that sort of thing than most. Still, since Barrow apparently won the Templeton Prize in 2006, I don’t think it’s just in my head.

One thing that did amuse me, though: at one point, he points out how advances in science and a deeper understanding of the world around us meant that the concept of God retreated further and further over the course of history, being confined to things science could not yet explain, time and time again.
The punchline? John D. Barrow is a deist.

Anyway. If you’re willing to ignore all that, it really isn’t a bad book. I’ll probably buy more of his books at some point, at least.

## Fun fact

Israeli schools are teaching kids to use ﬩ instead of + (and have been since at least the 1950s), because + looks too much like a Christian cross.
Unicode has this symbol at position U+FB29, as “Hebrew letter alternative plus sign”. It’s not just the Saudis, apparently.
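You can confirm that codepoint for yourself; a quick sketch using Python’s standard unicodedata module:

```python
import unicodedata

# U+FB29 is the character Israeli schoolbooks use in place of +.
alt_plus = "\ufb29"
print(hex(ord(alt_plus)))          # 0xfb29
print(unicodedata.name(alt_plus))  # HEBREW LETTER ALTERNATIVE PLUS SIGN
```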

I could understand if they did it to avoid confusion with other uses of + (such as marking a number as positive, or in some logic notations, or for concatenation in some contexts), but this is quite possibly the worst excuse to waste a Unicode codepoint they could’ve come up with.

## Descartes’ Theorem

I was reading the xkcd archives a while ago, and I came across this comic. This got me thinking about how to go about constructing this sort of layout: when you have three circles that are tangent to each other, how do you construct a fourth that’s tangent to all three?
(This is the type of person I am. I’m sorry. I can’t help it.)

I played with it for a bit, but I got distracted before I figured it out and forgot about it. Then, yesterday, my Sierpinski gasket thing left me browsing Wikipedia looking for random other fractals I could construct. Eventually, I found the Apollonian gasket, and from there, behold, Descartes’ theorem.
I’m not going to prove the theorem, or even say anything the wiki article doesn’t, but I want to talk about it because it’s interesting.

The theorem states the following:
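In symbols, with $k_1, k_2, k_3, k_4$ the curvatures of the four circles:

```latex
(k_1 + k_2 + k_3 + k_4)^2 = 2\,(k_1^2 + k_2^2 + k_3^2 + k_4^2)
```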

k, in this case, is the curvature of the circle, which is just $\pm\frac{1}{r}$.
Why plus and minus? Because there are two circles that are tangent to all three circles: one inscribed between them and one circumscribed around them. The circumscribed one gets the negative curvature, because it encloses the other three.
For example:

Three circles, tangent to each other. As you can see, their radii are 1, 2, and 3, and their curvatures are 1, 1/2, and 1/3, respectively. (I picked these radii because they make for pretty round numbers in the results, but it works with any numbers.)
According to the theorem:
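Solving the theorem’s equation for the fourth curvature and plugging in $k_1 = 1$, $k_2 = \frac{1}{2}$, $k_3 = \frac{1}{3}$:

```latex
k_4 = k_1 + k_2 + k_3 \pm 2\sqrt{k_1 k_2 + k_2 k_3 + k_3 k_1}
    = \frac{11}{6} \pm 2\sqrt{\tfrac{1}{2} + \tfrac{1}{6} + \tfrac{1}{3}}
    = \frac{11}{6} \pm 2
```

which gives $k_4 = \frac{23}{6}$ or $k_4 = -\frac{1}{6}$.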

So the radii for the tangent circles are 6 and $\frac{6}{23}$ (the absolute value of the reciprocal of the curvature). When we construct this, it looks like this:

Ta-dah~
(To construct those circles, you first have to find the center. You can do this by drawing a helper circle around the center of each of the three circles, with a radius equal to the radius of the fourth circle plus or minus the radius of the circle whose center you’re using (more precisely, the absolute value of the sum of the reciprocal of the fourth circle’s signed curvature and the radius of the circle whose center you’re using). These three helper circles will intersect at the center of the fourth circle.
In this case, the center of the circumscribed circle lies on the largest of the initial three circles, because 6 − 3 = 3, obviously.)
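The whole procedure can be sketched in a few lines of Python. Note that the coordinates below are my own choice for this example, not part of the theorem: since mutually tangent circles of radii 1, 2, 3 have pairwise center distances 3, 4 and 5 (a right triangle), we can place the centers at (0, 0), (3, 0) and (0, 4).

```python
import math

def fourth_curvatures(k1, k2, k3):
    """Descartes' theorem solved for the fourth curvature."""
    s = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return k1 + k2 + k3 + s, k1 + k2 + k3 - s

def circle_intersections(c1, r1, c2, r2):
    """Both intersection points of two circles (assumed to intersect)."""
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)  # distance from c1 to the chord
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))   # half the chord length
    mx, my = x1 + a * (x2 - x1) / d, y1 + a * (y2 - y1) / d
    ox, oy = h * (y2 - y1) / d, h * (x2 - x1) / d
    return (mx + ox, my - oy), (mx - ox, my + oy)

def fourth_circle(centers, radii, k4):
    """Center and radius of the tangent circle with signed curvature k4.

    As in the construction above: each helper circle is centered on one
    of the three given circles, with radius |1/k4 + r_i|; the fourth
    circle's center is where all three helper circles meet.
    """
    helper = [abs(1 / k4 + r) for r in radii]
    p, q = circle_intersections(centers[0], helper[0], centers[1], helper[1])
    # Pick whichever intersection also lies on the third helper circle.
    def err(pt):
        return abs(math.dist(pt, centers[2]) - helper[2])
    return min((p, q), key=err), abs(1 / k4)

centers, radii = [(0, 0), (3, 0), (0, 4)], [1, 2, 3]
k_in, k_out = fourth_curvatures(1, 1 / 2, 1 / 3)  # 23/6 and -1/6
print(fourth_circle(centers, radii, k_in))        # inscribed: radius 6/23
print(fourth_circle(centers, radii, k_out))       # circumscribed: radius 6
```

As a sanity check, the circumscribed circle comes out with its center at (3, 4), which is indeed on the largest of the three initial circles.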

I like this. Geometry is intuitive and circles are pretty.

(Told you I’d find a use for that CGI script.)