Cauchy’s Functional Equation and Zorn’s Lemma

This is a draft of an appendix chapter for my Napkin project.

In the world of olympiad math, there’s a famous functional equation that goes as follows:

\displaystyle f : {\mathbb R} \rightarrow {\mathbb R} \qquad f(x+y) = f(x) + f(y).

Everyone knows what its solutions are! There’s an obvious family of solutions {f(x) = cx}. Then there’s also this family of… uh… noncontinuous solutions (mumble grumble) pathological (mumble mumble) Axiom of Choice (grumble).

Don’t worry, I know what I’m doing!

There’s also this thing called Zorn’s Lemma. It sounds terrifying, because it’s equivalent to the Axiom of Choice, which is also terrifying because why not.

In this post I will try to de-terrify these things, because they’re really not terrifying and I’m not sure why no one bothered to explain this properly yet. I have yet to see an olympiad handout that explains how you would construct a pathological solution, even though it’s really quite natural. So let me fix this problem now…

1. Let’s Construct a Monster

Let’s try to construct a “bad” function {f} and see what happens.

By scaling, let’s assume WLOG that {f(1) = 1}. Thus {f(n) = n} for every integer {n}, and you can easily show from here that

\displaystyle f\left( \frac mn \right) = \frac mn.
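
If you want the one-line reason: additivity applied {n-1} times gives

\displaystyle n \cdot f\left( \frac mn \right) = f\left( \frac mn + \frac mn + \dots + \frac mn \right) = f(m) = m,

and dividing by {n} finishes the job.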

So {f} is determined for all rationals. And then you get stuck.

None of this is useful for determining, say, {f(\sqrt 2)}. You could add and subtract rational numbers all day and, say, {\sqrt 2} isn’t going to show up at all.

Well, we’re trying to set things on fire anyways, so let’s set

\displaystyle f(\sqrt 2) = 2015

because why not? By the same induction, we get {f(n\sqrt2) = 2015n}, and then that

\displaystyle f\left( a + b \sqrt 2 \right) = a + 2015b.

Here {a} and {b} are rationals. Well, so far so good — as written, this is a perfectly good solution, other than the fact that we’ve only defined {f} on a tiny portion of the real numbers.
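
As a sanity check, the functional equation really does hold on this little domain: for rationals {a}, {b}, {a'}, {b'} we have

\displaystyle f\left( (a + b\sqrt 2) + (a' + b'\sqrt 2) \right) = (a+a') + 2015(b+b') = f\left( a + b\sqrt 2 \right) + f\left( a' + b'\sqrt 2 \right).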

Well, we can do this all day:

\displaystyle f\left( a + b \sqrt 2 + c \sqrt 3 + d \pi \right) = a + 2015b + 1337c - 999d.

Perfectly consistent.

You can kind of see how we should keep going now. Just keep throwing in new real numbers which are “independent” of the previous ones, assigning them to whatever junk we want. It feels like it should be workable…

In a moment I’ll explain what “independent” means (though you might be able to guess already), but at the moment there’s a bigger issue: no matter how many numbers we throw in, it seems like we’ll never finish. Let’s address this second issue first.

2. Review of Finite Induction

When you do induction, you get to count off {1}, {2}, {3}, … and so on. So for example, suppose we had a “problem” such as the following: Prove that the intersection of {n} open intervals is either {\varnothing} or an open interval. You can do this by induction easily: it’s true for {n = 2}, and for the larger cases it’s similarly easy.
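
If you want to see the base case spelled out: for two intervals,

\displaystyle (a,b) \cap (c,d) = \left( \max(a,c), \min(b,d) \right),

which is an open interval when {\max(a,c) < \min(b,d)} and {\varnothing} otherwise.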

But you can’t conclude from this that infinitely many open intervals intersect at some open interval. Indeed, this is false: consider the intervals

\displaystyle \left( -1, 1 \right), \quad \left( -\frac12, \frac12 \right), \quad \left( -\frac13, \frac13 \right), \quad \left( -\frac14, \frac14 \right), \quad \dots

This infinite family of intervals intersects in the single point {\{0\}}, which is neither {\varnothing} nor an open interval!
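
Indeed, for any {x \neq 0} we have

\displaystyle x \notin \left( -\frac1n, \frac1n \right) \qquad \text{once } n > \frac{1}{|x|},

so {0} is the only point surviving every interval.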

The moral of the story is that induction doesn’t let us reach infinity. Too bad, because we’d have loved to use induction to help us construct a monster. That’s what we’re doing, after all — adding things in one by one.

3. Transfinite Induction

Well, it turns out we can, but we need a new notion of number.

For this we need a notion of an ordinal number. I defined these in their full glory in a previous blog post, but I won’t need the full details here. Here’s what I want to say: after all the natural numbers

\displaystyle 0, \; 1, \; \dots,

I’ll put a new number called {\omega}, representing how large the natural numbers are. After that there are numbers called

\displaystyle \omega+1, \; \omega+2, \; \dots

and eventually a number called {2\omega}.

The list goes on:

\displaystyle \begin{aligned} 0, & 1, 2, 3, \dots, \omega \\ & \omega+1, \omega+2, \dots, \omega+\omega \\ & 2\omega+1, 2\omega+2, \dots, 3\omega \\ & \vdots \\ & \omega^2 + 1, \omega^2+2, \dots \\ & \vdots \\ & \omega^3, \dots, \omega^4, \dots, \omega^\omega \\ & \vdots, \\ & \omega^{\omega^{\omega^{\dots}}} \\ \end{aligned}

Pictorially, it kind of looks like this:

[Figure: a picture of the ordinals up to {\omega^\omega} (“Omega-exp-omega-labeled”).]

Anyways, in the same way that natural numbers dominate all finite sets, the ordinals dominate all the sets.

Theorem 1 For every set {S} there’s some ordinal {\alpha} which is bigger than it (in the sense of cardinality).

But it turns out (and you can intuitively see) that as large as the ordinals grow, there is no infinite descending chain. Meaning: if I start at an ordinal (like {2 \omega + 4}) and jump down, I can only take finitely many jumps before I hit {0}. (To see this, try writing down a chain starting at {2 \omega + 4} yourself; one example is {2\omega+4 > 2\omega > \omega+2015 > \omega > 1000000 > 3 > 0}, and any chain you write down will bottom out after finitely many jumps.) Hence, induction and recursion still work verbatim:

Theorem 2 Given a statement {P(-)}, suppose that

  • {P(0)} is true, and
  • If {P(\alpha)} is true for all {\alpha < \beta}, then {P(\beta)} is true.

Then {P(\beta)} is true for every ordinal {\beta}.

Similarly, you’re allowed to do recursion to define {x_\beta} if you know the value of {x_\alpha} for all {\alpha < \beta}.

The difference from normal induction or recursion is that we’ll often only do things like “define {x_{n+1} = \dots}”. But this is not enough to define {x_\alpha} for all {\alpha}. To see this, try using our normal induction and see how far we can climb up the ladder.

Answer: you can’t get {\omega}! It’s not of the form {n+1} for any of our natural numbers {n} — our finite induction only lets us get up to the ordinals less than {\omega}. Similarly, the simple {+1} doesn’t let us hit the ordinal {2\omega}, even if we already have {\omega+n} for all {n}. Such ordinals are called limit ordinals. The ordinals that are of the form {\alpha+1} are called successor ordinals.

So a transfinite induction or recursion is very often broken up into three cases. In the induction phrasing, it looks like

  • (Zero Case) First, resolve {P(0)}.
  • (Successor Case) Show that from {P(\alpha)} we can get {P(\alpha+1)}.
  • (Limit Case) Show that {P(\lambda)} holds given {P(\alpha)} for all {\alpha < \lambda}, where {\lambda} is a limit ordinal.

Similarly, transfinite recursion often is split into cases too.

  • (Zero Case) First, define {x_0}.
  • (Successor Case) Define {x_{\alpha+1}} from {x_\alpha}.
  • (Limit Case) Define {x_\lambda} from {x_\alpha} for all {\alpha < \lambda}, where {\lambda} is a limit ordinal.

In both situations, finite induction only does the first two cases, but if we’re able to do the third case we can climb far above the barrier {\omega}.

4. Wrapping Up Functional Equations

Let’s return to solving our problem.

Let {S_n} denote the set of “base” numbers we have at the {n}th step. In our example, we might have

\displaystyle S_1 = \left\{ 1 \right\}, \quad S_2 = \left\{ 1, \sqrt 2 \right\}, \quad S_3 = \left\{ 1, \sqrt 2, \sqrt 3 \right\}, \quad S_4 = \left\{ 1, \sqrt 2, \sqrt 3, \pi \right\}, \quad \dots

and we’d like to keep building up {S_i} until we can express all real numbers. For completeness, let me declare {S_0 = \varnothing}.

First, I need to be more precise about “independent”. Intuitively, this construction is working because

\displaystyle a + b \sqrt 2 + c \sqrt 3 + d \pi

is never going to equal zero for rational numbers {a}, {b}, {c}, {d} (other than all zeros). In general, a set {X} of numbers is “independent” if the combination

\displaystyle c_1x_1 + c_2x_2 + \dots + c_mx_m = 0

never holds for rational numbers {c_1, \dots, c_m} unless {c_1 = c_2 = \dots = c_m = 0}. Here {x_i \in X} are distinct. Note that even if {X} is infinite, I can only take finite sums! (This notion has a name: we want {X} to be linearly independent over {{\mathbb Q}}.)
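
For example, {\left\{ 1, \sqrt 2 \right\}} is independent: a relation

\displaystyle c_1 \cdot 1 + c_2 \cdot \sqrt 2 = 0

with {c_2 \neq 0} would force {\sqrt 2 = -c_1/c_2} to be rational, and if {c_2 = 0} then {c_1 = 0} too. On the other hand, {\left\{ 1, \sqrt 2, 1 + \sqrt 2 \right\}} is not independent.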

When do we stop? We’d like to stop when we have a set {S_{\text{something}}} that’s so big, every real number can be written in terms of the independent numbers. (This notion also has a name: it’s called a {{\mathbb Q}}-basis.) Let’s call such a set spanning; we stop once we hit a spanning set.

The idea that we can induct still seems okay: suppose {S_\alpha} isn’t spanning. Then there’s some number that is independent of {S_\alpha}, say {\sqrt{2015}\pi} or something. Then we just add it to get {S_{\alpha+1}}. And we keep going.

Unfortunately, as I said before it’s not enough to be able to go from {S_\alpha} to {S_{\alpha+1}} (successor case); we need to handle the limit case as well. But it turns out there’s a trick we can do. Suppose we’ve constructed all the sets {S_0}, {S_1}, {S_2}, …, one for each nonnegative integer {n}, and none of them are spanning. The next thing I want to construct is {S_\omega}; somehow I have to “jump”. To do this, I now take the infinite union

\displaystyle S_\omega \overset{\text{def}}{=} S_0 \cup S_1 \cup S_2 \cup \dots.

The elements of this set are also independent (why?).
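
In case you want a hint for the “why”: any purported relation

\displaystyle c_1x_1 + c_2x_2 + \dots + c_mx_m = 0

involves only finitely many elements, and each {x_i} already lives in some {S_{n_i}}; so all of them lie in the single set {S_{\max n_i}}, which we already knew was independent.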

Ta-da! With the simple trick of “union all the existing sets”, we’ve just jumped the hurdle to the first limit ordinal {\omega}. Then we can construct {S_{\omega+1}}, {S_{\omega+2}}, …, and for the next limit we just do the same trick of “union-ing” all the previous sets.

So we can formalize the process as follows:

  1. Let {S_0 = \varnothing}.
  2. For a successor stage, if {S_\alpha} isn’t spanning yet, add any element independent of {S_\alpha} to obtain {S_{\alpha+1}}.
  3. For a limit stage {S_{\lambda}}, take the union {\bigcup_{\gamma < \lambda} S_\gamma}.

How do we know that we’ll stop eventually? Well, the thing is that this process consumes a lot of real numbers. In particular, the ordinals get larger than the size of {{\mathbb R}}. Hence if we don’t stop we will quite literally reach a point where we have used up every single real number. Clearly that’s impossible, because by then the elements can’t possibly be independent! (EDIT Dec 20 2015: To be clear, the claim that “ordinals get larger than the size of the reals” requires the Axiom of Choice; one can’t do this construction using transfinite induction alone. Thanks reddit for calling me out on this.)

So by transfinite recursion (and Choice), we eventually hit some {S_\gamma} which is spanning: the elements are all independent, but every real number can be expressed using it. Done! This set has a name: a Hamel basis.
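
To spell out the punch line: since {S_\gamma} is independent and spanning, every real number {x} can be written uniquely as a finite combination

\displaystyle x = c_1 s_1 + c_2 s_2 + \dots + c_m s_m

with rational {c_i} and distinct {s_i \in S_\gamma}, and our monster is simply

\displaystyle f(x) = c_1 f(s_1) + c_2 f(s_2) + \dots + c_m f(s_m),

where the values {f(s_i)} are whatever junk we assigned along the way (like {f(1) = 1} and {f(\sqrt 2) = 2015}). Additivity follows immediately from this formula, and since {f(\sqrt 2) = 2015 \neq \sqrt 2 \cdot f(1)}, this {f} is not of the form {f(x) = cx}.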

Thus this problem is dead, dead, dead. Any questions?

5. Zorn’s Lemma

Now I can tell you what Zorn’s Lemma is: it lets us do the same thing in any poset.

We can think of the above example as follows: consider all sets of independent elements. These form a partially ordered set by inclusion, and what we did was quite literally climb up a chain

\displaystyle S_0 \subset S_1 \subset S_2 \subset \dots.

It’s not quite climbing since we weren’t just going one step at a time: we had to do “jumps” to get up to {S_\omega} and resume climbing. But the main idea is to climb up a poset until we’re at the very top; in the previous case, when we reached the spanning set.

The same thing works verbatim with any partially ordered set {\mathcal P}. Let’s define some terminology. A local maximum (or maximal element) of the entire poset {\mathcal P} is an element which has no other elements strictly greater than it.

Now a chain of length {\gamma} is a set of elements {p_\alpha} for every {\alpha < \gamma} such that {p_0 < p_1 < p_2 < \dots}. (Observe that a chain has a last element if and only if {\gamma} is a successor ordinal, like {\omega+3}.) An upper bound to a chain is an element {\tilde p} which is greater than or equal to all elements of the chain. In particular, if {\gamma} is a successor ordinal, then just taking the last element of the chain works.

In this language, Zorn’s Lemma states that

Theorem 3 (Zorn’s Lemma) Let {\mathcal P} be a nonempty partially ordered set. If every chain has an upper bound, then {\mathcal P} has a local maximum.

Chains with length equal to a successor ordinal always have upper bounds, but this is not true in the limit case. So the hypothesis of Zorn’s Lemma is exactly what lets us “jump” up to define {p_\omega} and other limit ordinals. And the proof of Zorn’s Lemma is straightforward: keep climbing up the poset at successor stages, using Zorn’s condition to jump up at limit stages, and thus building a really long chain. But we have to eventually stop, or we literally run out of elements of {\mathcal P}. And the only possible stopping point is a local maximum.
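
In symbols, the climb is the same transfinite recursion as before: pick any {p_0 \in \mathcal P}, and as long as we haven’t hit a local maximum, set

\displaystyle p_{\alpha+1} = \text{some element greater than } p_\alpha, \qquad p_\lambda = \text{some upper bound of } \left\{ p_\alpha : \alpha < \lambda \right\} \text{ for limit } \lambda.

If this never stopped, then {\alpha \mapsto p_\alpha} would inject every ordinal into {\mathcal P}, which contradicts Theorem 1; so the climb stops, and the only place it can stop is at a local maximum.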

If we want to phrase our previous solution in terms of Zorn’s Lemma, we’d say: Proof: Look at the poset whose elements are sets of independent real numbers. Every chain {S_0 \subset S_1 \subset \dots} has an upper bound {\bigcup S_\alpha} (which you have to check is actually an element of the poset). Thus by Zorn, there is a local maximum {S}. Then {S} must be spanning, because otherwise we could add an element to it. \Box

So really, Zorn’s Lemma is encoding all of the work of climbing that I argued earlier. It’s a neat little package that captures all the boilerplate, and tells you exactly what you need to check.

One last thing you might ask: where is the Axiom of Choice used? Well, the idea is that for any chain there could be lots of {\tilde p}‘s, and you need to pick one of them. Since you are making arbitrary choices infinitely many times, you need the Axiom of Choice. But really, it’s nothing special. [EDIT: AM points out that in order to talk about cardinalities in Theorem 1, one also needs the Axiom of Choice.]

6. Conclusion

In the words of Timothy Gowers,

If you are building a mathematical object in stages and find that (i) you have not finished even after infinitely many stages, and (ii) there seems to be nothing to stop you continuing to build, then Zorn’s lemma may well be able to help you.

Really, there’s nothing tricky at all here. People seem scared of Zorn’s Lemma, and claim it’s not intuitive or something. But really, all we’re doing is climbing up a poset. Nothing tricky at all.

14 thoughts on “Cauchy’s Functional Equation and Zorn’s Lemma”

  1. Hi, thanks a lot for this post. I am not a mathematician but an economist (PhD). I do not get why Zorn’s lemma works only on partially ordered sets and not on more general ordered sets. I would really appreciate your help in explaining why antisymmetry is required and whether there is a way around it.

    1. Actually, perhaps by antisymmetry you mean

      {a \le b \text{ and } b \le a \implies a = b}.

      The reason you need this condition is to guarantee that as you climb up the poset, all the elements in your chain are distinct. Otherwise you could just be going in circles:

      \displaystyle  a \le b \le a \le b \le a \le \dots

      and the argument that you’d eventually run out of elements in {\mathcal P} doesn’t hold!

      I’m not sure if there’s a way around this, because I think if you have a relation that does not satisfy antisymmetry, a poset isn’t the right way to think about it — put another way, I can’t think of any relation without this property where I would still want to think about a “maximal” element. But if you have a specific example in mind, I would love to hear it! :)

  2. Hi, thanks a lot for the reply, you got my question perfectly.
    The example:
    Let D be an acyclic binary relation on a set X, and let P be the set of all reflexive and transitive relations on X. The binary relation M on P is defined by the rule (≽,≽′)∈M if for all x,y with (x,y)∈D, x≽′y implies x≽y. M is reflexive and transitive on P but is not antisymmetric. For example: for X={x,y,z}, D={(x,y)}, ≽={(x,y)}, and ≽’={(x,y),(x,z)}, we have (≽,≽′)∈M and (≽′,≽)∈M but ≽≠≽′.
    As for your argument about not running out of elements: if a=b then you simply cut this chain (and any other chain with two sequential identical elements), right? But then can’t we cut, in the same way, any sequential pairs (≽,≽′)∈M and (≽′,≽)∈M?
    If you have time I can be more precise about this last idea.
    Thanks again,
    Guy

  3. Sorry for writing too much, but I was too eager to answer earlier, so I immediately gave my example instead of a much simpler one. Let ≽ on ℝ² be defined by the rule A≽B if a₁⋅a₂ ≥ b₁⋅b₂. While ≽ is not antisymmetric, it has a natural meaning of a ≽-maximal element.

    1. OK, I see. :)

      You do *need* the antisymmetry condition; for example, without it the ordering \prec on a two-element set \{a,b\} with a \prec b and b \prec a is a counterexample. This “poset” has no “maximal elements” yet every element is an “upper bound” to any chain!

      I’ll have to think about what new hypotheses you’d have to add in order to get this to work.

    2. When I was taking this course, a reflexive and transitive relation was called a preorder.

      The thing is that a preorder generates a partial order, by taking the quotient by the equivalence relation ($a \sim b$ when $a\leq b$ and $b\leq a$), so you can apply Zorn’s lemma to the induced partial order and get:

      If a preordered set has an upper bound for every chain, then it has a “maximal” element, where a “maximal” element $a$ means that $a\leq b$ implies $b\leq a$.

  4. It seems that this construction will fail to yield a non-linear additive map from \mathbf{R} to \mathbf{R} if you pick specific values for the coefficients of the terms that make up f(x), no?

    To explain what I mean, if we set f(\sqrt{2}) = \sqrt{2}, we must have that f(a + b \sqrt{2}) = a + \sqrt{2} b, which means that f is linear, at least for numbers of the form a + b \sqrt{2}. Thus we must add the requirement that f(x_{i}) \neq x_{i} , no?

  5. I guess I am not getting a simple fact about ordinals here:

    If I start at \omega and jump down, why will I reach 0 in a finite number of steps? \omega has no immediate predecessor.

  6. Hi Evan, thanks for the post. I don’t get how the Axiom of Choice is used in the transfinite induction. Why do you need it in Theorem 1 and to show that the ordinals eventually exceed the size of the reals?
