Category Archives: math

Infinity is Weird
by robert

Infinity is weird.

This post is about an odd little thing I learned about involving infinite sets quite recently. First, let’s introduce some notation. We let N = {0, 1, 2, … } denote the set of natural numbers, and Q denote the set of rational numbers (recall a number is rational if we can write it as a fraction of two integers. So 1/2 is rational while √2 is not). If A and B are sets then a function f mapping A to B is one-to-one if everything in B is mapped to by at most one thing in A (so, for every y in B there is at most one element x in A such that f(x) = y), and it is onto if everything in B is mapped to by something in A (so for every y in B there is some element x in A such that f(x) = y). Finally, f is bijective if it is one-to-one and onto. In other words, f is bijective if everything in B is mapped to by exactly one element in A.

In elementary set theory we use bijections to define what we mean by the “size” of a set. In other words, two sets A and B have the “same size” (now called cardinality) if there is some bijection f mapping A to B. For example, if A = {1, 2, 3} and B = {a, b, c}, then we can say that A and B have the same size since the function f mapping f(1) = a, f(2) = b, f(3) = c is a bijection.

What do we get by defining “size” in this manner? Well, clearly we recover “size” in the “regular” sense. If A and B are finite sets with a bijection between them and A has 10 elements, then B certainly has 10 elements as well. The nice thing about this definition of size is that it generalizes to infinite sets in a clean way. Once you realize this you can get some remarkable observations. Here is a nice one:

Theorem 1: Let N be the set of natural numbers and Q be the set of rational numbers. Then there is a bijection f from N to Q.
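One concrete way to realize such a bijection is the Calkin–Wilf sequence, which walks through every positive rational exactly once. Here is a minimal sketch in Python (the function names are my own, and this is just one of several standard enumerations; the familiar diagonal walk through a grid of fractions works too):

```python
from fractions import Fraction
from itertools import islice

def positive_rationals():
    """Calkin-Wilf sequence: yields every positive rational exactly once."""
    x = Fraction(1, 1)
    while True:
        yield x
        # Newman's formula for the next term: 1 / (2*floor(x) + 1 - x)
        x = 1 / (2 * (x.numerator // x.denominator) + 1 - x)

def all_rationals():
    """A bijection f: N -> Q, listing 0 and then each positive rational with its negative."""
    yield Fraction(0)
    for q in positive_rationals():
        yield q
        yield -q

# f(0), f(1), ..., f(8)
print([str(q) for q in islice(all_rationals(), 9)])
# ['0', '1', '-1', '1/2', '-1/2', '2', '-2', '1/3', '-1/3']
```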

So, even though there are “clearly more” rational numbers than natural numbers, really the two sets have the same size. Do all infinite sets have the same size? Cantor showed that this is false via, famously, the diagonal argument.

Theorem 2 (Cantor’s Theorem): Let N be the set of natural numbers and R be the set of real numbers. Then there is no bijection from N to R.

Now, Theorem 1 tells us that there is a bijection f from N to Q. This gives us a natural way of ordering the set of rational numbers: just order them according to f! That is, we can define Q by

Q = {f(0), f(1), f(2), …}.

In particular, for every natural number x, this gives us a finite set Q(x) defined by

Q(x) = {f(0), f(1), …, f(x-1), f(x)}.

Note that for any x < y we have Q(x) is strictly contained in Q(y), and the union of Q(x) over all natural numbers x gives us every rational number! In the language of order theory the sequence of sets

{}, Q(0), Q(1), …, Q(n), …

yields an infinite chain in the lattice of all subsets of rational numbers. Moreover, this infinite chain is countable: there is a bijection between it and the natural numbers (just map any natural x to the set Q(x)).
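Since each Q(x) is built directly from f, we can even compute these sets, reusing the all_rationals sketch from above:

```python
from itertools import islice

def Q(x):
    """The finite set Q(x) = {f(0), f(1), ..., f(x)} from the chain above."""
    return set(islice(all_rationals(), x + 1))

# Strict containment along the chain (proper-subset comparison on sets):
assert Q(3) < Q(5)
```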

Now, let us consider different subsets of rational numbers. For any real number t, define the set P(t) to be the collection of all rational numbers x < t.

Now things are getting interesting. Like before, for any pair of real numbers t, u with t < u we have that P(t) is strictly contained in P(u) (strictly, because there is always a rational number between t and u). In the language of order theory the collection

{P(t) : t >= 0}

is an infinite chain in the lattice of all subsets of rational numbers. Also, like before, if we take the union of every set P(t) for every positive real number t, we get Q again. So, we get that there is a natural bijection from non-negative real numbers to this new infinite chain (just map t to P(t)).

But, by combining Theorem 1 and Theorem 2, there is no bijection from the set of rational numbers to the set of real numbers. Why is this weird? Well, if we consider the set of all subsets of rational numbers, we get that

  • There is an infinite chain {{}, Q(0), Q(1), …} starting from the empty set that (in the limit) covers all rational numbers, and which is countable: it is totally ordered (for all i, j with i < j, Q(i) is strictly contained in Q(j)), and its elements are in bijection with the natural numbers.
  • There is another infinite chain {P(t) : t >= 0} that (in the limit) also covers all rational numbers, and which is provably not countable: it is totally ordered (for all real numbers t, u with t < u we have P(t) is strictly contained in P(u)), but there is no bijection between its elements and the natural numbers.

It is not as paradoxical as it first seems, once you realize that the rational numbers are dense in the real numbers. Moreover, these sets P(t) can (in a loose sense) be used to define the real numbers (this is the notion of a Dedekind cut). A similar construction was used by John Conway to define the surreal numbers, which is detailed in a fairly entertaining little novel by Donald Knuth (titled, appropriately, Surreal Numbers). But, it is late and I have already ranted for too long. Math is cool.

G’night!

Codes and Gödel
by robert

SHORT POST TIME. Which means I thought of this off-hand (and it’s certainly not new information, but is kind of fun).

Coding theory is concerned with encoding messages so as to minimize their length for transmission over some sort of channel. The mathematical formalization of this goes all the way back to Shannon’s information theory, so I’ll give some basics and then mention the RANDOM CONNECTION.

Here’s the idea. We have two parties, Alice and Bob, who are trying to communicate over some sort of digital channel. (For convenience, let’s assume that the channel transmits every message sent across it without corruption.) Alice has a message M that she wants to send, composed of symbols drawn from some alphabet \Sigma. Concretely, let’s assume the symbols come from the English alphabet

\displaystyle \Sigma = \{a, b, c, \ldots, x, y, z, \#\},

where we use # as a placeholder for a blank space. Let \Sigma^* denote the set of all messages we can compose out of the symbols in the alphabet \Sigma. For example,

what\#up\#dog \in \Sigma^*.

Now, suppose the channel is binary, so it can only send 0s and 1s. Obviously, Alice needs some way to encode her alphabet \Sigma into the alphabet \{0,1\} to send her pressing messages over to Bob.


To bring this about, let’s define a binary code to be a function

\displaystyle C: \Sigma \rightarrow \{0,1\}^*.

That is, a binary code is any map from our source alphabet to a sequence of bits. Note that if we have a binary code C we can easily extend it to messages (i.e. to elements of \Sigma^*) by defining, for any sequence of symbols \alpha_1 \alpha_2 \cdots \alpha_n \in \Sigma^*, the map

\displaystyle C(\alpha_1 \alpha_2 \cdots \alpha_n) = C(\alpha_1) C(\alpha_2) \cdots C(\alpha_n).

Now, most codes are useless. Indeed, under our above definition, the map C(\alpha) = 0 for every English letter \alpha is a code. Unfortunately, if Alice used this code over the channel, Bob would have a tough time decoding it.

So, we need some condition that allows us to actually decode the bloody things! We’ll start with a useful type of code called a prefix-free code.

Definition: A binary code C: \Sigma \rightarrow \{0,1\}^* is prefix-free if, for every pair of distinct symbols \alpha, \beta \in \Sigma, neither C(\alpha) is a prefix of C(\beta) nor vice-versa.

An example of a prefix-free binary code (for the first four letters of the English alphabet) could be the following:

\displaystyle C(a) = 0, C(b) = 10, C(c) = 110, C(d) = 111.

Let’s encode a message with C: if Alice encoded the message badcab via C and sent it to Bob, Bob would receive

100111110010.

Now, the beautiful property of prefix-free codes is the following: Bob can actually decode this message online. That is, he can read the bits one at a time, appending each to a buffer; as soon as the buffer matches a codeword, he can decode that character, clear the buffer, and keep going!

To illustrate, Bob first reads a 1 off the string. He convinces himself that 1 is not the code for anything, so he reads the next bit, a 0. He now has the string “10”, which is the code for b. Now, is it possible that this could be the beginning of the code for another letter? NO! Because “10” is the code for b and is not the prefix of any other codeword. So Bob can translate the b, and move on.
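To see the whole pipeline in one place, here is a minimal sketch of the code C above together with Alice’s encoder and Bob’s online decoder (the buffering scheme is just my illustration of the procedure described):

```python
# The prefix-free code C from above.
CODE = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
DECODE = {bits: symbol for symbol, bits in CODE.items()}

def encode(message):
    """Extend C from symbols to messages by concatenating codewords."""
    return ''.join(CODE[symbol] for symbol in message)

def decode_online(bits):
    """Read bit by bit; emit a symbol the moment the buffer matches a codeword.
    Prefix-freeness guarantees a matched buffer cannot extend to a longer codeword."""
    buffer, message = '', []
    for bit in bits:
        buffer += bit
        if buffer in DECODE:
            message.append(DECODE[buffer])
            buffer = ''
    assert buffer == '', "transmission ended mid-codeword"
    return ''.join(message)

assert encode('badcab') == '100111110010'
assert decode_online('100111110010') == 'badcab'
```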

We define non-singular codes to be the codes that can actually be decoded (information theory texts usually call these uniquely decodable). After seeing the above example, it’s clear that prefix-free codes are non-singular. However, is it possible for there to be non-prefix-free, non-singular codes? That is, are there codes that are decodable, but require us to read the entire message before we can decode any of it? (NOTE: such codes are practically useless, from an efficiency point of view. This is just a thought experiment to test the definition.)

The answer is YES, and a natural example is Gödel numbering! (Strictly speaking, this encodes a whole message at once rather than symbol-by-symbol, but it makes the point.) Here is how it works: for each letter \alpha in the alphabet \Sigma choose a distinct positive integer z_\alpha. Now, to encode a message

\displaystyle \alpha_1 \alpha_2 \cdots \alpha_n

let M be the positive integer defined as

\displaystyle M = 2^{z_{\alpha_1}} 3^{z_{\alpha_2}} 5^{z_{\alpha_3}} \cdots p_n^{z_{\alpha_n}}

where p_n is the nth prime number. We then send the binary expansion of M as our message.

How does Bob decode it? Easily: he reads ALL of M, factors it, and reads off the exponents: the order of the message is preserved if we read the primes off from lowest to highest, where the exponent of the ith prime is the code of the ith symbol in the message. Bob has to read all of the message (and he has to make sure he’s transcribed it correctly), or else he cannot recover any of it! Marvelously useless.
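Here is a small sketch of the scheme (leaning on sympy for the prime machinery and the factoring; the particular assignment z below is an arbitrary choice of mine):

```python
from sympy import factorint, prime

def godel_encode(message, z):
    """Encode the i-th symbol of the message as the exponent of the i-th prime."""
    M = 1
    for i, symbol in enumerate(message, start=1):
        M *= prime(i) ** z[symbol]
    return M

def godel_decode(M, z):
    """Factor M; the exponents, read from smallest prime to largest,
    recover the symbol codes in order."""
    inv = {code: symbol for symbol, code in z.items()}
    factors = factorint(M)  # dict {prime: exponent}
    return ''.join(inv[factors[p]] for p in sorted(factors))

z = {'d': 1, 'g': 2, 'o': 3}   # a distinct positive integer per symbol
M = godel_encode('dog', z)     # 2**1 * 3**3 * 5**2 = 1350
assert godel_decode(M, z) == 'dog'
```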

OR IS IT USELESS? Similar ideas lurk under regular RSA encryption, which everyone uses a billion times a day without even realizing it (thank you blaggerwebs). If factoring integers is as hard as complexity theorists believe it is, then Alice has just sent Bob a frustratingly uncrackable message. (Well, almost: our M is built from the first n primes, so trial division cracks it quickly; the genuinely hard instances involve huge prime factors. But the spirit is there.)


Two Principles of Mathematics
by robert

I was explaining something in probability theory to somebody last night, and I offhandedly made the following remark:

You know, it’s interesting what sorts of mathematics come up. For example, a usual exercise in undergraduate probability is the following: Flip a coin repeatedly until a heads comes up. What’s the expected number of coin flips required?

The person asked me what the number was, and I realized that I actually didn’t know. I gave an offhand guess of three, since we’re asking about a very particular sequence of coin flips (which has exponentially small density among all sequences of coin flips, and so the answer should be small). I sat down to work on it before bed, and rather quickly derived the following expression.

Let X be the random variable in \{1,2, \ldots\} = \mathbb{N} with the interpretation that X = i if the ith coin flip in a sequence of flips is a head after i-1 tails. It’s straightforward to calculate \Pr[X = i] — assuming we’re flipping a fair coin, the probability of getting i-1 tails followed by a single head is (1/2)^{i}. This means our expected value will be E[X] = \sum_{i=1}^{\infty} i \Pr[X = i] = \sum_{i=1}^{\infty} i 2^{-i}.

And, wait a minute, this sum is not trivial to evaluate! At first I did what any self-respecting mathematician/computer scientist would do (i.e. HIT IT WITH YOUR HARDEST SLEDGEHAMMERULTRATOOL AND DYNAMITE THE PROBLEM TO ACCESS ITS SWEET GOOEY INSIDES) and applied generating functions.

MMMMMMMMMMMMM

This (alas) didn’t work and I fell asleep dejected.

And I woke up with the cutest solution!

To begin, here’s a secret that your math teacher just won’t ever bloody tell you:

(1) Every inequality/sum/identity in the history of mathematics just comes from writing the same thing in two different ways.

Of course, with our friend hindsight bias this is obvious — once we have the identity x = y in front of us, it’s easy to say “oh, well of COURSE x = y, it’s so obvious, duh!”.

Now, here is a second secret that your math teacher won’t ever bloody tell you:

(2) Every result ever obtained in mathematics can be broken down to a sequence of tiny, local, or otherwise easy steps.

When you keep these two principles in mind, the questions of mathematics suddenly become significantly less daunting. To illustrate both of these principles, I’ll use them to evaluate our sum \sum_{i=1}^{\infty} i2^{-i} from the probabilistic puzzle above. First, let’s recall what an infinite sum actually is, as it’s kind of easy to forget: the sum

\displaystyle \sum_{i=1}^\infty i2^{-i}

is really defined as a limit of partial sums

\displaystyle \lim_{n \rightarrow \infty} \sum_{i=1}^n i2^{-i}.

So, applying our first principle from above, we’re going to rewrite the partial sum \sum_{i=1}^n i2^{-i} in a closed form so that we can actually evaluate the limit above.

Now, how do we do this? First, just to simplify notation for ourselves, let f(n) = \sum_{i=1}^n i2^{-i}. Let’s apply our second principle from above — what are some really stupendously obvious facts about the sum f(n) = \sum_{i=1}^n i2^{-i}? Well, since it’s a frigging sum, we know that

\displaystyle f(n+1) = \sum_{i=1}^{n+1} i2^{-i} = f(n) + (n+1)2^{-(n+1)}.

Alright, here is a start. If we can apply our first principle to the sum f(n+1) and write it down in another way then maybe we’ll end up somewhere interesting. Well, what about this sum? Let’s write it down explicitly, so that we can actually see some of the structure of the terms. I’m also going to make the substitution r = 1/2 and instead write

\displaystyle f(n) = \sum_{i=1}^{n} ir^i.

Time for a side rant. Now, math teachers, jerks that they are, will tell you to do this kind of substitution because your result is more general (or, even worse, tell you nothing at all, leaving you swimming in a soup of variables/indeterminates with no flotation device).

[Image: THE OCEAN IS VARIABLES. Caption: Everyone in any math class, ever.]

As usual, this is the correct information but stated in a way so that humans can’t understand it. Another way to say this “generality” assumption is, simply, that people hate magic numbers! Notice that NOTHING! about the sums we’ve considered so far has needed the 2 to be there (other than the fact that our problem happens to be about coins). Well, if there’s no reason for it to be there, then why should it be there? The sum \sum_{i=1}^n ir^i is even a bit easier to swallow visually. Anyways, side rant over.

Back on track, here are the sums f(n) and f(n+1), both written down explicitly:

\displaystyle f(n) = r + 2r^2 + 3r^3 + \cdots + nr^n

\displaystyle f(n+1) = r + 2r^2 + 3r^3 + \cdots + nr^n + (n+1)r^{n+1}.

Well, recall that I said that we were trying to rewrite f(n+1) in a way other than

\displaystyle f(n+1) = f(n) + (n+1)r^{n+1}.

Applying our first principle — and this is really the leap of intuition — let’s just transform f(n) into f(n+1) in another way! How? Well, multiply f(n) by r and compare it to f(n+1):

\displaystyle rf(n) = r^2 + 2r^3 + \cdots + (n-1)r^n + nr^{n+1}.

\displaystyle f(n+1) = r + 2r^2 + 3r^3 + \cdots + nr^n + (n+1)r^{n+1}.

We’ve almost got f(n+1)! The only thing that’s missing is a single copy of each of the powers r, r^2, \ldots, r^{n+1}! Phrased mathematically, we now have the identity

\displaystyle f(n+1) = rf(n) + (r + r^2 + r^3 + \ldots + r^n + r^{n+1}).

Now, the sum \sum_{i=1}^n r^i is a geometric sum which has a simple formula (fact: this simple formula can be derived in a way similar to our current investigation):

\displaystyle \sum_{i=1}^n r^i = \frac{1 - r^{n+1}}{1 - r} - 1.

So, substituting in this new simple formula gives

\displaystyle f(n+1) = rf(n) + \frac{1 - r^{n+2}}{1 - r} - 1

and then, finally finishing our application of the first principle, we can apply our earlier “stupid” identity for f(n+1) and get

\displaystyle f(n) + (n+1)r^{n+1} = rf(n) + \frac{1 - r^{n+2}}{1 - r} - 1.

The rest is algebra/boilerplate. Collecting the f(n) terms on the left hand side, we get

\displaystyle (1- r)f(n) = \frac{1 - r^{n+2}}{1 - r} - 1 - (n+1)r^{n+1},

then dividing both sides by (1-r) finally gives

\displaystyle f(n) = (1-r)^{-1}\left(\frac{1 - r^{n+2}}{1 - r} - 1 - (n+1)r^{n+1}\right).
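(A quick numerical sanity check of this closed form never hurts; here is a sketch comparing it against the partial sums computed directly, with the tolerance an arbitrary choice of mine:)

```python
def f_direct(n, r=0.5):
    """The partial sum f(n) = sum_{i=1}^n i * r^i, computed term by term."""
    return sum(i * r**i for i in range(1, n + 1))

def f_closed(n, r=0.5):
    """The closed form we just derived."""
    return (1 / (1 - r)) * ((1 - r**(n + 2)) / (1 - r) - 1 - (n + 1) * r**(n + 1))

for n in (1, 5, 20):
    assert abs(f_direct(n) - f_closed(n)) < 1e-12
```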

Taking the limit as n \rightarrow \infty and using our knowledge that r = 1/2 < 1, we see that the terms involving r^{n+1} will disappear. This leaves

\displaystyle \lim_{n \rightarrow \infty} f(n) = \frac{1}{1-r}\left(\frac{1}{1 - r} - 1\right).

Substituting in r = 1/2, we get

\displaystyle \lim_{n \rightarrow \infty} f(n) = 2(2 - 1) = 2.

And we’re done. In expectation, you will see a heads after 2 coin flips.
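(And if you distrust the algebra entirely, the answer is easy to corroborate by simulation; a sketch, with the number of trials an arbitrary choice:)

```python
import random

def flips_until_heads():
    """Flip a fair coin until the first head; return the number of flips."""
    flips = 1
    while random.random() < 0.5:   # tails with probability 1/2
        flips += 1
    return flips

trials = 10**6
print(sum(flips_until_heads() for _ in range(trials)) / trials)  # ~2.0
```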

You see, math is not mystical. Unless you’re a Newton or an Euler (viz. an absolute genius), math proceeds pretty much the same for everybody. There are underlying principles and heuristics that help you do math, principles that every established mathematician actually uses; the secret is that no one ever tells you them. Of course, I have a sneaking suspicion that this is due to the fact that our high school math teachers don’t actually understand the principles themselves (while this may seem like a bit of an attack, I did graduate with people who were going to be math teachers. Most of those people should not have been math teachers).

Pet Peeves and Inductive Horses
by robert

Put your math pants on, ladies and gentlemen, this one’s gonna be a vomitbuster.

This post will be about mathematical induction (if you aren’t familiar with induction, it’s a simple and powerful proof technique that is ubiquitous in mathematics. For the intuitive picture, go here). Well, it will sort of be about mathematical induction. This post will use mathematical induction to help express my hatred for incomplete explanations. We will prove the following theorem (anybody who has seen induction will have seen this theorem and the proof before.)

Theorem: All horses are the same colour.

Proof. We will proceed by induction on the size of the set of horses. Suppose that we have a single horse. Then, clearly all (one) of our horses have the same colour. Assume inductively that in every collection of n horses, all the horses have the same colour. We will show that in every collection of n+1 horses, all the horses have the same colour.

Let H = {h(1), h(2), …, h(n+1)} be a collection of n+1 horses. Then we can choose two sub-collections of H: A = {h(1), h(2), …, h(n)} and B = {h(2), h(3), …, h(n+1)}. By our inductive assumption, all the horses in A have the same colour, and likewise for B. It follows that the colour of h(1) is the same as the colour of h(2) (both are in A), which is the same as the colour of h(n+1) (both h(2) and h(n+1) are in B), and so all the horses in H must have the same colour. Q.E.D.

Now, clearly all horses are not the same colour.

[Images of two horses. Captions: “And so pretty!” / “Not the same colour.”]

Let’s name those two horses Flowerpot and Daffodil. Most explanations of why the above theorem is wrong go like this:

Teacher: Clearly, the set of horses P = {Flowerpot, Daffodil} is a contradiction to our theorem. So, class, what have we learned? When you’re doing induction, always check your base cases!

Right, but quite an unsatisfying explanation. This explains why the theorem is wrong. But why is the proof wrong? That’s actually a bit of a head scratcher.

[Image of a head scratcher. Captions: “Mmmmmm. Yellow.” / “Go on. Do it. Scratch that head. Get them lices out.”]

I mean, we followed all the steps of mathematical induction: we chose a base case and proved the theorem holds for it. We made an inductive assumption for some n, and using the assumption we proved the theorem holds for n+1. Where’s the problem?

Well, let’s look at our proof. We take a set H of size n+1, which is clearly fine. We choose two subsets of H, A and B, both of size n; also fine. Next, we apply the inductive hypothesis to A and B, so all the horses in A have the same colour, and likewise for B. Now, since A and B have a non-empty intersection… Ahah. There’s the problem. What if n = 1? Then H = {h(1), h(2)}, A = {h(1)}, and B = {h(2)}. But then A and B have an empty intersection. If A is the set containing Flowerpot and B is the set containing Daffodil, then we cannot conclude that Flowerpot is the same colour as Daffodil. Okay! Great!
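The overlap argument is concrete enough that you can watch it fail in a few lines of Python (my own illustration, with horses as strings):

```python
def overlap(n):
    """Intersection of A = {h(1),...,h(n)} and B = {h(2),...,h(n+1)}."""
    H = [f'h({i})' for i in range(1, n + 2)]   # n + 1 horses
    A, B = set(H[:-1]), set(H[1:])
    return A & B

print(overlap(1))   # set()    -- empty: the step from 1 horse to 2 breaks
print(overlap(2))   # {'h(2)'} -- non-empty: the step is fine for n >= 2
```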

I think that this is fundamentally different from the broad sweep of “You didn’t check all your base cases”. This is really saying that the logic used in the inductive step does not apply to every case it needs to cover (here, it fails exactly at the step from one horse to two). It’s subtle, but a very important difference. The pedagogical conclusion: teachers, think hard about what you’re presenting. When you think you understand something, it turns out that you usually don’t.

One more thing. In a way, the problem really comes from how I wrote H in the first place! Remember I wrote H = {h(1), h(2), …, h(n+1)}. When we split H into A = {h(1), h(2), …, h(n)} and B = {h(2), h(3), …, h(n+1)}, the “…” notation itself suggests that the intersection of A and B is non-empty. This is an interesting instance of the form of our text affecting the content of our text (at least to us). This is an idea I will hopefully return to in later posts.

Finally, a moment of silence. Searching for an image of a head scratcher led me deep into a rabbit hole from which I may never return.

[Image of a nose straightener. Captions: “And you can barely even see it!” / “A nose straightener.”]