Category Archives: rant

Giant Title That Broke My Formatting, by Emily

Remember the time I worked at a gym? Who could have seen that in my future, eh? So, I’ve picked up some shifts on the front desk to pad my cheque because, contrary to popular opinion, personal training is not a lucrative business. Unfortunately. So I am currently bored out of my tree because it’s 6am at the gym and I’m sitting on my ass serving my only purpose, which is smiling at people as they walk in… thank God for Wikipedia. Sooooo, since I’ve thoroughly exhausted my daily wiki (Happy Polish Mother’s Day) I figured writing to youse guys would be more productive than picking my teeth (although I did have loose-leaf tea this morning…). Who knows, maybe it’ll become a regular thang.

So the other day we had an interesting…encounter…at work. Guy comes up to the front desk, seems to be having trouble articulating his problem, but from what I can gather he stopped authorizing payments for his gym membership a few months ago. Yet he kept coming to the gym…not much, but nonetheless… Anyways, turns out he wanted us to waive his service charges. Probably not going to happen but our Assistant Manager is rad so she was at least humouring the poor guy. She asks him why he thought we would be able to do that, to which he responds along the lines of, “Well, I don’t have any money. I’m obliged to give everything away.” “Oh?” “Yes. You see, I’m the Lord and Savior. I was sent by God to save man from the asteroids.”

I shit you not. You would think a false prophet would be more concerned about his health…

And this was a good day.

Had a woman come in this morning for a tan (comes in every second morning, like clockwork, 5:30am). She always asks for a towel to put over her face so she has this super tanned body and ghostly white face. I saw her come in this morning so I said, “Good Morning! Nine minutes in the stand up?” She smiled and said, “You’ve got such a good memory!” Of course I remember you, you look like a goblin.

Chances are only three people might read this (maybe 4 if Soucy’s Dad still looks us up once in a while) so it’s a good thing I know my audience. I miss you guys. Bro-cation needed. Will you guys move to Scotland with me? Before you say no, I have two words for you….Eggplant Lasagna.

And I figure it’s only fitting to wish my three favorite computer scientists a joyous Alan Turing’s Birthday:) and if I had even a moderate amount of technical ability I would have been able to figure out how to insert a picture of him shopped with a birthday hat, but I don’t so I couldn’t. So you’ll just have to use your imagination.

Two Principles of Mathematics, by Robert

I was explaining something in probability theory to somebody last night, and I offhandedly said the following remark:

You know, it’s interesting what sorts of mathematics come up. For example, a usual exercise in undergraduate probability is the following: Flip a coin repeatedly until a heads comes up. What’s the expected number of coin flips required?

The person asked me what the number was, and I realized that I actually didn’t know. I gave an offhand guess of three, since we’re asking about a very particular sequence of coin flips (which has exponentially small density in the measure of all sequences of coin flips, and so it should be small). I sat down to work on it before bed, and rather quickly derived the following expression.

Let X be the random variable in \{1,2, \ldots\} = \mathbb{N} with the interpretation that X = i if the ith coin flip in a sequence of flips is a head after i-1 tails. It’s straightforward to calculate \Pr[X = i] — assuming we’re flipping a fair coin, the probability of getting i-1 tails followed by a single head is (1/2)^{i}. This means our expected value will be E[X] = \sum_{i=1}^{\infty} i \Pr[X = i] = \sum_{i=1}^{\infty} i 2^{-i}.
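Written out term by term, just to see what we’re up against, that expectation is

\displaystyle E[X] = 1 \cdot \tfrac{1}{2} + 2 \cdot \tfrac{1}{4} + 3 \cdot \tfrac{1}{8} + 4 \cdot \tfrac{1}{16} + \cdots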

And, wait a minute, this sum is not trivial to evaluate! At first I did what any self-respecting mathematician/computer scientist would do (i.e. HIT IT WITH YOUR HARDEST SLEDGEHAMMERULTRATOOL AND DYNAMITE THE PROBLEM TO ACCESS ITS SWEET GOOEY INSIDES) and applied generating functions.

[Image caption: “MMMMMMMMMMMMM”]

This (alas) didn’t work and I fell asleep dejected.

And I woke up with the cutest solution!

To begin, here’s a secret that your math teacher just won’t ever bloody tell you:

(1) Every inequality/sum/identity in the history of mathematics just comes from writing the same thing in two different ways.

Of course, with our friend hindsight bias this is obvious — once we have the identity x = y in front of us, it’s easy to say “oh, well of COURSE x = y, it’s so obvious, duh!”.

Now, here is a second secret that your math teacher won’t ever bloody tell you:

(2) Every result ever obtained in mathematics can be broken down to a sequence of tiny, local, or otherwise easy steps.

When you say things as simply as I did in these two principles, the questions of mathematics suddenly become significantly less daunting. To illustrate both of these principles, I’ll use them to evaluate our sum \sum_{i=1}^{\infty} i2^{-i} from the probabilistic puzzle above. First, let’s recall what an infinite sum actually is, as it’s kind of easy to forget: the sum

\displaystyle \sum_{i=1}^\infty i2^{-i}

is really defined as a limit of partial sums

\displaystyle \lim_{n \rightarrow \infty} \sum_{i=1}^n i2^{-i}.

So, applying our first principle from above, we’re going to rewrite \sum_{i=1}^n i2^{-i} as another function f(n) so that we can actually evaluate the limit above.

Now, how do we do this? First, just to simplify notation for ourselves, let f(n) = \sum_{i=1}^n i2^{-i}. Let’s apply our second principle from above — what are some really stupendously obvious facts about the sum f(n) = \sum_{i=1}^n i2^{-i}? Well, since it’s a frigging sum, we know that

\displaystyle f(n+1) = \sum_{i=1}^{n+1} i2^{-i} = f(n) + (n+1)2^{-(n+1)}.

Alright, here is a start. If we can apply our first principle to the sum f(n+1) and write it down in another way then maybe we’ll end up somewhere interesting. Well, what about this sum? Let’s write it down explicitly, so that we can actually see some of the structure of the terms. I’m also going to make the substitution r = 1/2 and instead write

\displaystyle f(n) = \sum_{i=1}^{n} ir^i.

Time for a side rant. Now, math teachers, jerks that they are, will tell you to do this kind of substitution because it makes your result more general (or, even worse, will tell you nothing at all, leaving you swimming in a soup of variables/indeterminates with no flotation device).

[Image caption: “Everyone in any math class, ever. THE OCEAN IS VARIABLES”]

As usual, this is the correct information, but stated in a way that humans can’t understand. Another way to say this “generality” advice is, simply: people hate magic numbers! Notice that NOTHING about the sums we’ve considered so far has needed the 2 to be there (other than the fact that our problem happens to be about coins). Well, if there’s no reason for it to be there, then why should it be there? The sum \sum_{i=1}^n ir^i is even a bit easier to swallow visually. Anyways, side rant over.

Back on track, here are the sums f(n) and f(n+1), both written down explicitly:

\displaystyle f(n) = r + 2r^2 + 3r^3 + \cdots + nr^n

\displaystyle f(n+1) = r + 2r^2 + 3r^3 + \cdots + nr^n + (n+1)r^{n+1}.

Well, recall that I said that we were trying to rewrite f(n+1) in a way other than

\displaystyle f(n+1) = f(n) + (n+1)r^{n+1}.

Applying our first principle — and this is really the leap of intuition — let’s just transform f(n) into f(n+1) in another way! How? Well, multiply f(n) by r and compare it to f(n+1):

\displaystyle rf(n) = r^2 + 2r^3 + \cdots + (n-1)r^n + nr^{n+1}.

\displaystyle f(n+1) = r + 2r^2 + 3r^3 + \cdots + nr^n + (n+1)r^{n+1}.

We’ve almost got f(n+1)! The only thing that’s missing is a single copy of each term in the sum! Phrased mathematically, we now have the identity

\displaystyle f(n+1) = rf(n) + (r + r^2 + r^3 + \ldots + r^n + r^{n+1}).

Now, the sum \sum_{i=1}^n r^i is a geometric sum which has a simple formula (fact: this simple formula can be derived in a way similar to our current investigation):

\displaystyle \sum_{i=1}^n r^i = \frac{1 - r^{n+1}}{1 - r} - 1.
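For the curious, that derivation uses exactly the same multiply-by-r trick we just pulled on f(n). Write

\displaystyle S = r + r^2 + \cdots + r^n,

multiply through by r to get

\displaystyle rS = r^2 + r^3 + \cdots + r^{n+1},

subtract the second line from the first to find S - rS = r - r^{n+1}, and divide by 1 - r:

\displaystyle S = \frac{r - r^{n+1}}{1 - r} = \frac{1 - r^{n+1}}{1 - r} - 1.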

So, applying this formula with n+1 in place of n (since our sum runs up to r^{n+1}) gives

\displaystyle f(n+1) = rf(n) + \frac{1 - r^{n+2}}{1 - r} - 1

and then, finally finishing our application of the first principle, we can apply our earlier “stupendously obvious” identity for f(n+1) and get

\displaystyle f(n) + (n+1)r^{n+1} = rf(n) + \frac{1 - r^{n+2}}{1 - r} - 1.

The rest is algebra/boilerplate. Collecting the f(n) terms on the left hand side, we get

\displaystyle (1- r)f(n) = \frac{1 - r^{n+2}}{1 - r} - 1 - (n+1)r^{n+1},

then dividing both sides by (1-r) finally gives

\displaystyle f(n) = (1-r)^{-1}\left(\frac{1 - r^{n+2}}{1 - r} - 1 - (n+1)r^{n+1}\right).

Taking the limit as n \rightarrow \infty and using our knowledge that r = 1/2 < 1, we see that the terms involving r^{n+1} and r^{n+2} will disappear. This leaves

\displaystyle \lim_{n \rightarrow \infty} f(n) = \frac{1}{1-r}\left(\frac{1}{1 - r} - 1\right).

Substituting in r = 1/2, we get

\displaystyle \lim_{n \rightarrow \infty} f(n) = 2(2 - 1) = 2.

And we’re done. In expectation, you will see a heads after 2 coin flips.
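If you don’t fully trust the algebra (I rarely trust mine), here’s a quick sanity check in Python; this is just an illustration, not part of the derivation. The partial sums of \sum_{i=1}^{n} i2^{-i} and a straightforward simulation of the coin-flipping experiment should both hover right around 2.

    import random

    # Partial sum of sum_{i=1}^{50} i * 2^{-i}; already very close to the limit.
    print(sum(i * 2**-i for i in range(1, 51)))

    # Monte Carlo: flip a fair coin until the first heads, count the flips, average.
    def flips_until_heads():
        flips = 1
        while random.random() < 0.5:  # call this outcome "tails"
            flips += 1
        return flips

    trials = 100_000
    print(sum(flips_until_heads() for _ in range(trials)) / trials)  # roughly 2

Both printed numbers should land right around 2, which is reassuring.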

You see, math is not mystical. Unless you’re a Newton or an Euler (viz. an absolute genius), math proceeds pretty much the same for everybody. There are underlying principles and heuristics that every established mathematician actually uses to do math — the secret is that no one ever tells you about them. Of course, I have a sneaking suspicion that this is due to the fact that our high school math teachers don’t actually understand the principles themselves (while this may seem like a bit of an attack, I did graduate with people who were going to be math teachers. Most of those people should not have been math teachers).

Pet Peeves and Inductive Horses, by Robert

Put your math pants on, ladies and gentlemen, this one’s gonna be a vomitbuster.

This post will be about mathematical induction (if you aren’t familiar with induction, it’s a simple and powerful proof technique that is ubiquitous in mathematics. For the intuitive picture, go here). Well, it will sort of be about mathematical induction. This post will use mathematical induction to help express my hatred for incomplete explanations. We will prove the following theorem (anybody who has seen induction will have seen this theorem and the proof before.)

Theorem All horses are the same colour.

Proof. We will proceed by induction on the size of the set of horses. Suppose that we have a single horse. Then, clearly all (one) of our horses have the same colour. Assume inductively that for every collection of n horses, all the horses in the collection have the same colour. We will show that in every collection of n+1 horses, all the horses have the same colour.

Let H = {h(1), h(2), …, h(n+1)} be a collection of n+1 horses. Then we can choose two sub-collections of H: A = {h(1), h(2), …, h(n)} and B = {h(2), h(3), …, h(n+1)}. By our inductive assumption, all the horses in A and B have the same colour. It follows that the colour of h(1) is the same as the colour of h(2) which is the same as the colour of h(n+1), and so all the horses in H must have the same colour. Q.E.D.

Now, clearly all horses are not the same colour.

[Images of two horses. Captions: “And so pretty!” / “Not the same colour.”]

Let’s name those two horses Flowerpot and Daffodil. Most explanations of why the above theorem is wrong go like this:

Teacher: Clearly, the set of horses P = {Flowerpot, Daffodil} is a contradiction to our theorem. So, class, what have we learned? When you’re doing induction, always check your base cases!

Right, but quite an unsatisfying explanation. This explains why the theorem is wrong. But why is the proof wrong? That’s actually a bit of a head scratcher.

[Image captions: “Mmmmmm. Yellow.” / “Go on. Do it. Scratch that head. Get them lices out.”]

I mean, we followed all the steps of mathematical induction — we chose a base case and proved the theorem was correct for it. We made an inductive assumption for some n, and using that assumption we proved the theorem holds for n+1. Where’s the problem?

Well, let’s look at our proof. We have a set H of size n+1, which can clearly be done. We chose two subsets of H, A and B, both of size n. Also can be done. Next, we apply the inductive hypothesis to A and B, and so all the horses in A and all the horses in B have the same colour. Now, since A and B have a non-empty intersection… Aha. There’s the problem. What if n = 1? Then H = {h(1), h(2)}, and A = {h(1)}, B = {h(2)}. But then A and B have an empty intersection. If A is the set containing Flowerpot and B is the set containing Daffodil, then we cannot say that Flowerpot is the same colour as Daffodil. Okay! Great!
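If it helps to see that overlap (or lack of it) concretely, here’s a tiny sketch; the numbering is mine, I’m just labelling the horses 1 through n+1:

    # Label the n+1 horses 1..n+1; A is the first n of them, B is the last n.
    def overlap(n):
        horses = list(range(1, n + 2))
        A, B = set(horses[:-1]), set(horses[1:])
        return A & B

    print(overlap(3))  # {2, 3}: non-empty, so the colour argument links A to B
    print(overlap(1))  # set(): empty, so nothing links Flowerpot to Daffodil

The overlap always has exactly n - 1 horses in it, and n - 1 = 0 precisely when n = 1, which is exactly where the argument breaks.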

I think that this is fundamentally different from the broad sweep of “You didn’t check all your base cases”. This is really saying your logic used in the inductive step does not apply to all of the base cases. It’s subtle, but a very important difference. The pedagogical conclusion: teachers, think hard about what you’re presenting. When you think you understand something, it turns out that you usually don’t.

One more thing. In a way, the problem really comes from how I wrote H in the first place! Remember, I wrote H = {h(1), h(2), …, h(n+1)}. Splitting H into A = {h(1), h(2), …, h(n)} and B = {h(2), h(3), …, h(n+1)} in this notation suggests that the intersection of A and B is non-empty. This is an interesting instance of the form of our text affecting the content of our text (at least to us). This is an idea I will hopefully return to in later posts.

Finally, a moment of silence. Searching for an image of a head scratcher led me deep into a rabbit hole from which I may never return.

[Image captions: “And you can barely even see it!” / “A nose straightener.”]