August 1, 2019

When Does Nothing Mean Something?


I thought I was done writing about this topic, but it just keeps coming back.  The internet just cannot seem to leave this sort of problem alone:
I don't know what it is about expressions of the form \(a\div b(c+d)\) that fascinates us as a species, but fascinate it does.  I've written about this before (as well as why "PEMDAS" is terrible), but the more I've thought about it, the more sympathy I've found for those on the minority side of the debate, and as a result my position has evolved somewhat.

So I'm going to go out on a limb, and claim that the answer should be \(1\).

Before you walk away shaking your head and saying "he's lost it, he doesn't know what he's talking about", let me assure you that I'm obviously not denying the left-to-right convention for how to do explicit multiplication and division.  Nobody's arguing that.*  Rather, there's something much more subtle going on here.

What we may be seeing here is evidence of a mathematical "language shift".

It's easy to forget that mathematics did not always look as it does today, but has arrived at its current form through very human processes of invention and revision.  There's an excellent page by Jeff Miller that catalogues the earliest recorded uses of symbols like the operations and the equals sign -- symbols that seem timeless, symbols we take for granted every day.

People also often don't realize that this process of invention and revision still happens to this day.  The modern notation for the floor function is a great example that was only developed within the last century.  Even today on the internet, you occasionally see discussions in which people debate how mathematical notation can be improved.  (I'm still holding out hope that my alternative notation for logarithms will one day catch on.)

Of particular note is the evolution of grouping symbols.  We usually think only of parentheses (as well as their variations like square brackets and curly braces) as denoting grouping, but an even earlier symbol used to group expressions was the vinculum -- a horizontal bar found over or under an expression.  Consider the following expression: \[3-(1+2)\] If we wrote the same expression with a vinculum, it would look like this: \[3-\overline{1+2}\] Vincula can even be stacked: \[13-\overline{\overline{1+2}\cdot 3}=4\] This may seem like a quaint way of grouping, but it does in fact survive in our notation for fractions and radicals!  You can even see both uses in the quadratic formula: \[x=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}\]

Getting back to the original problem, what I think we're seeing is evidence that concatenation -- placing symbols next to each other with no sort of explicit symbol -- has become another way to represent grouping.

"But wait", you might say, "concatenation is used to represent multiplication, not grouping!"  That's certainly true in many cases, for example in how we write polynomials.  However, there are a few places in mathematics that provide evidence that there's more to it than that.

First of all, as a beautifully-written Twitter thread by EnchantressOfNumbers (@EoN_tweets) points out, we use concatenation to show a special importance of grouping when we write out certain trigonometric expressions without putting their arguments in parentheses.  Consider the following identity:
\[\sin 4u=2\sin 2u\cos 2u\] When we write such an equation, we're saying that not only do \(4u\) and \(2u\) represent multiplications, but that this grouping is so tight that they constitute the entire arguments of the sine and cosine functions.  In fact, the space between \(\sin 2u\) and \(\cos 2u\) can also be seen as a somewhat looser form of concatenation.  Then again, so can the space between \(\sin\) and \(x\), which represents a different thing -- the connection of a function to its argument.  Perhaps this is why the popular (and amazing) online graphing calculator Desmos is only so permissive when it comes to parsing concatenation:

In contrast, where we do draw the line is with an expression like the following:\[\sin x+y\] We always interpret this as \((\sin(x))+y\), never \(\sin(x+y)\).  To drive home just how much stronger implicit multiplication feels to us than explicit multiplication, just take a look at the following expression: \[\sin x\cdot y\] Does this mean \((\sin(x))\cdot y\) or \(\sin(x\cdot y)\)?  If that expression makes you writhe uncomfortably, while if it had been written as \(\sin xy\) it would be fine, then you might see what I'm getting at.

An even more curious case is mixed numbers.  When writing mixed numbers, concatenation actually stands for addition, not multiplication. \[3\tfrac{1}{2}=3+\tfrac{1}{2}\] In fact, concatenation actually makes addition come before multiplication when we multiply mixed numbers! \[3\tfrac{1}{2}\cdot 5\tfrac{5}{6}=(3+\tfrac{1}{2})\cdot(5+\tfrac{5}{6})=20\tfrac{5}{12}\]

Now, you may feel that this example shows how mixed numbers are an inelegance in mathematical notation (and I would agree with you).  Even so, I argue that this is evidence that we fundamentally view concatenation as a way to represent grouping.  It just so happens that, since multiplication takes precedence over addition anyway in the absence of other grouping symbols, concatenation usually ends up standing for multiplication when we write it.  This all stems from a sort of "laziness" in how we write things -- laying out precedence rules allows us to avoid writing parentheses, and once we've established those precedence rules, we don't even need to write out the multiplication at all.

So how does the internet's favorite math problem fit into all this?

The most striking feature of the expression \(8\div 2(2+2)\) is that it's written all in one line.

Mathematical typesetting is difficult.  LaTeX is powerful but has a steep learning curve, though various other editors have made things a bit easier, such as Microsoft Word's Equation Editor (which has improved a lot since I first used it!).  Calculators have also recognized this difficulty, which is why TI calculators now have MathPrint templates (though their entry is quite clunky compared to Desmos's "as-you-type" formatting via MathQuill).

Even so, all of these input methods exist in very specific applications.  What about when you're writing an email?  Or sending a text?  Or a Facebook message?  (If you're wondering "who the heck writes about math in a Facebook message", the answer at least includes "students who are trying to study for a test".)  The evolution of these sorts of media has made one-line representations of mathematics, built from easily accessible symbols, increasingly important.  When you don't have the ability (or the time) to neatly typeset a fraction, you're going to find a way to use the tools you've got.  And that matters even more as we realize that everybody can (and should!) engage with mathematics, not just mathematicians or educators.

So that might explain why a physics student might type "hbar = h / 2pi", and others would know that this clearly means \(\hbar=\dfrac{h}{2\pi}\) rather than \(\hbar=\dfrac{h}{2}\pi\).  Remember, mathematics is not just about answer-getting.  It's about communicating ideas.  And when the medium of communication limits how those ideas can be represented, the method of communication often changes to accommodate it.

What the infamous problem points out is that while almost nobody has laid out any explicit rules for how to deal with concatenation, we seem to have developed some implicit ones, which we use without thinking about them.  We just never had to deal with them until recently, as more "everyday" people communicate mathematics on more "everyday" media.
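To make those implicit rules concrete, here is a minimal sketch of a toy evaluator (entirely my own construction, purely illustrative -- not anyone's official parsing rules) in which concatenation binds more tightly than explicit multiplication and division:

```python
import math
import re

# Toy grammar (my own, for illustration):
#   expr := term (('+'|'-') term)*
#   term := juxt (('*'|'/') juxt)*      explicit ops, left to right
#   juxt := atom atom*                  concatenation binds tighter than * and /
#   atom := NUMBER | NAME | '(' expr ')'

TOKEN = re.compile(r"\s*(?:(\d+(?:\.\d+)?)|([A-Za-z_]+)|(.))")

def tokenize(s):
    tokens = []
    for num, name, op in TOKEN.findall(s):
        if num:
            tokens.append(("num", float(num)))
        elif name:
            tokens.append(("name", name))
        elif op.strip():
            tokens.append(("op", op))
    tokens.append(("end", None))
    return tokens

def evaluate(s, env=None):
    env = env or {"pi": math.pi}
    toks = tokenize(s)
    pos = 0

    def take():
        nonlocal pos
        pos += 1
        return toks[pos - 1]

    def atom():
        kind, val = take()
        if kind == "num":
            return val
        if kind == "name":
            return env[val]
        if (kind, val) == ("op", "("):
            value = expr()
            take()  # consume the closing ")" -- no error handling in this sketch
            return value
        raise SyntaxError(f"unexpected token {val!r}")

    def juxt():
        # Concatenation: keep multiplying while the next token starts a new atom.
        value = atom()
        while toks[pos][0] in ("num", "name") or toks[pos] == ("op", "("):
            value *= atom()
        return value

    def term():
        # Explicit * and /, evaluated left to right, looser than concatenation.
        value = juxt()
        while toks[pos] in (("op", "*"), ("op", "/")):
            _, op = take()
            value = value * juxt() if op == "*" else value / juxt()
        return value

    def expr():
        value = term()
        while toks[pos] in (("op", "+"), ("op", "-")):
            _, op = take()
            value = value + term() if op == "+" else value - term()
        return value

    return expr()

print(evaluate("8 / 2(2+2)"))     # 1.0  -- the concatenation groups first
print(evaluate("8 / 2 * (2+2)"))  # 16.0 -- explicit operators, left to right
print(evaluate("h / 2pi", {"h": 6.62607015e-34, "pi": math.pi}))  # h/(2*pi)
```

Under those rules, the infamous expression comes out to \(1\), while writing the division and multiplication explicitly gives \(16\) -- which is exactly the distinction people seem to be making intuitively.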

Perhaps it's time that we address this convention explicitly and admit that concatenation really has become a way to represent grouping, just like parentheses or the vinculum.  This is akin to taking a more descriptivist, rather than prescriptivist, approach to language: all we would be doing is recognizing that this is already how we do things everywhere else.

Of course, this would throw a wrench in PEMDAS, but that just means we'd need to actually talk about the mathematics behind it rather than memorizing a silly mnemonic.  After all, as inane as these internet math problems can be, they've shown that (whether they admit it or not) people really do want to get to the bottom of mathematics, to truly understand it.

I'd say that's a good thing.


* If your argument for why the answer is \(16\) starts with "Well, \(2(2+2)\) means \(2\cdot(2+2)\), so...", then you have missed the point entirely.

January 12, 2019

Who Says You Can't Do That? --- Trig Identities


Ahh, trig identities...  a rite of passage for any precalculus student.

This is a huge stumbling block for many students, because up until this point, many have been perfectly successful (or at least have gotten by) in their classes by learning canned formulas and procedures and then doing a bunch of exercises that just change a \(2\) to a \(3\) here and a plus to a minus there.  Now, all of a sudden, there's no set way of going about things.  No "step 1 do this, step 2 do that".  Now they have to rely on their intuition and "play" with an identity until they prove that it's correct.

And to make matters worse, many textbooks --- and, as a result, many teachers --- make this subject arbitrarily and artificially harder for the students.

They insist that students are not allowed to work on both sides of the equation, but instead must specifically start at one end and work their way to the other.  I myself once subscribed to this "rule", because it's how I'd always been taught, and I always fed students the old line of "you can't assume the thing you're trying to prove because that's a logical fallacy".

Then one of my Honors Precalculus students called me on it.

He asked me to come up with an example of a trig non-identity where adding the same thing to both sides would lead to a false proof that the identity was correct.  After some thought, I realized not only that I couldn't think of one, but that, mathematically, there's no reason one should exist.

To begin with, one valid way to prove an identity is to work with each side of the equation separately and show that they are both equal to the same thing.  For example, suppose you want to verify the following identity:

\[\dfrac{\cot^2{\theta}}{1+\csc{\theta}}=\dfrac{1-\sin{\theta}}{\sin{\theta}}\]
Trying to work from one side to the other would be a nightmare, but it's much simpler to show that each side is equal to \(\csc{\theta}-1\).  This in fact demonstrates one of the oldest axioms in mathematics, as written by Euclid:  "things which are equal to the same thing are equal to each other."
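For the skeptical, a quick check with SymPy (purely a verification aid, assuming you have the library installed -- not the proof a student would write) shows both sides collapsing to \(\csc\theta-1\):

```python
import sympy as sp

theta = sp.symbols('theta')
lhs = sp.cot(theta)**2 / (1 + sp.csc(theta))
rhs = (1 - sp.sin(theta)) / sp.sin(theta)
target = sp.csc(theta) - 1

# Each side minus csc(theta) - 1 should simplify to zero.
print(sp.simplify(lhs - target))  # expect 0
print(sp.simplify(rhs - target))  # expect 0
```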

But what about doing the same thing to both sides of an equation?

There are two important points to realize about what's going on behind the scenes here.

The first is that if your "thing you do to both sides" is a reversible step --- that is, if you're applying a one-to-one function to both sides of an equation --- then it's perfectly valid to use that as part of your proof because it establishes an if-and-only-if relationship.  If that function is not one-to-one, all bets are off.  You can't prove that \(2=-2\) by squaring both sides to get \(4=4\), because the function \(x\mapsto x^2\) maps multiple inputs to the same output.
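The same observation as a throwaway check (nothing deep, just the arithmetic):

```python
# Squaring merges 2 and -2 (not one-to-one, so not a reversible step);
# cubing keeps them distinct (one-to-one, so reversible).
a, b = 2, -2
print(a == b)        # False
print(a**2 == b**2)  # True  -- squaring lost the distinction
print(a**3 == b**3)  # False -- cubing didn't
```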

It baffles me that most Precalculus textbooks mention one-to-one functions in the first chapter or two, yet completely fail to connect the idea to solving equations.*  A notable exception is UCSMP's Precalculus and Discrete Mathematics book, which establishes the following on p. 169:


Reversible Steps Theorem
Let \(f\), \(g\), and \(h\) be functions.  Then, for all \(x\) in the intersection of the domains of functions \(f\), \(g\), and \(h\),
  1. \(f(x)=g(x) \Leftrightarrow f(x)+h(x)=g(x)+h(x)\)
  2. \(f(x)=g(x) \Leftrightarrow f(x)\cdot h(x)=g(x)\cdot h(x)\) [We'll actually come back to this one in a bit -- there's a slight issue with it.]
  3. If \(h\) is 1-1, then for all \(x\) in the domains of \(f\) and \(g\) for which \(f(x)\) and \(g(x)\) are in the domain of \(h\), \[f(x)=g(x) \Leftrightarrow h(f(x))=h(g(x)).\]

Later on p. 318, the book says:

"...there is no new or special logic for proving identities.  Identities are equations and all the logic that was discussed with equation-solving applies to them."

Yes, that whole "math isn't just a bunch of arbitrary rules" thing applies here too.

The second important point, which you may have noticed while looking at the statement of the Reversible Steps Theorem, is that the implied domain of an identity matters a great deal.  When you're proving a trig identity, you are trying to establish that it is true for all inputs that are in the domain of both sides.  Most textbooks at least pay lip service to this fact, even though they don't follow it to its logical conclusion.

To illustrate why domain is so important, consider this example:

\[\dfrac{\cos{x}}{1-\sin{x}} = \dfrac{1+\sin{x}}{\cos{x}}\]
To verify this identity, I'm going to do something that may give you a visceral reaction:  I'm going to "cross-multiply".  Or, more properly, I'm going to multiply both sides by the expression \((1 - \sin x)\cos x\).  I claim that this is a perfectly valid step to take, and what's more, it makes the rest of the proof downright easy by reducing to everyone's favorite Pythagorean identity:

\[
\begin{align*}
(\cos{x})(\cos{x}) &= (1+\sin{x})(1-\sin{x})\\
\cos^2{x} &= 1-\sin^2{x}\\
\sin^2{x} + \cos^2{x} &= 1 \quad\blacksquare
\end{align*}
\]
"But wait," you ask, "what if \(x=\pi/2\)?  Then you're multiplying both sides by zero, and that's certainly not reversible!"

True.  But if \(x=\pi/2\), then the denominators of both sides of the equation are zero, so the identity isn't even true in the first place.  For any value of \(x\) that does not yield a zero in either denominator, though, \((1-\sin x)\cos x\) is nonzero, and multiplying both sides of an equation by a nonzero quantity is a reversible operation and therefore completely valid.
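Once again, a quick SymPy check (a verification aid only, assuming the library is available) backs this up -- both the cross-multiplied form and the original identity simplify to a zero difference:

```python
import sympy as sp

x = sp.symbols('x')

# The cross-multiplied form is the Pythagorean identity in disguise...
print(sp.simplify(sp.cos(x)**2 - (1 + sp.sin(x)) * (1 - sp.sin(x))))  # expect 0

# ...and the original identity holds wherever neither denominator vanishes.
lhs = sp.cos(x) / (1 - sp.sin(x))
rhs = (1 + sp.sin(x)) / sp.cos(x)
print(sp.simplify(lhs - rhs))  # expect 0
```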

Now, this isn't to say that multiplying both sides of an equation by a function can't lead to problems --- for example, if \(h\) is the zero function, then \(f(x)\cdot h(x)=g(x)\cdot h(x)\) no matter what \(f\) and \(g\) are.  This can even lead to problems in more subtle cases: suppose \(f\) and \(g\) are equal everywhere except at a single point \(a\); for example, perhaps \(f(a)=1\) and \(g(a)=2\).  If it just so happens that \(h(a)=0\), then \(f\cdot h\) and \(g\cdot h\) will be equal as functions, even though \(f\) and \(g\) are not themselves equal.

The real issue here can be explained via a quick foray into higher mathematics.  Functions form what's called a ring -- basically meaning you can add, subtract, and multiply them, and these operations have all the nice properties we'd expect.  But preserving that if-and-only-if relationship when multiplying both sides of an equation by a function requires a special kind of ring called an integral domain, in which it's impossible to multiply two nonzero functions together and get the zero function.

Unfortunately, functions in general don't form an integral domain --- not even continuous functions, or differentiable functions, or even infinitely differentiable functions do!  But if we move up to the complex numbers (where everything works better!), then the set of analytic functions --- functions that can be written as power series (infinite polynomials) on an open domain --- is an integral domain.  And most of the functions that precalculus students encounter generally turn out to be analytic**:  polynomial, rational, exponential, logarithmic, trigonometric, and even inverse trigonometric.  This means that when proving trigonometric identities, multiplying both sides by the same function is a "safe" operation.
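Here's a tiny illustration of that failure (a toy example of my own, not from any textbook): two continuous functions, neither of which is the zero function, whose product is identically zero.

```python
# f and g are both continuous and neither is identically zero, yet their
# product is zero everywhere -- so continuous functions are not an integral domain.
def f(x): return max(x, 0)
def g(x): return min(x, 0)

xs = [-2, -1, 0, 1, 2]
print([f(t) for t in xs])         # [0, 0, 0, 1, 2]
print([g(t) for t in xs])         # [-2, -1, 0, 0, 0]
print([f(t) * g(t) for t in xs])  # [0, 0, 0, 0, 0]
```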

So in sum, when proving trigonometric identities, as long as you're careful to only use reversible steps (what a great time to spiral back to one-to-one functions, by the way!), you are welcome to apply all the same algebraic operations that you would when solving equations, and the chain of equalities you establish will prove the identity.  Even "cross-multiplying" is fair game, because any input that would make the denominator zero would invalidate the identity anyway.***  Since trigonometric functions are generally "safe" (analytic), we're guaranteed to never run into any issues.

Now, none of this is to say that there isn't intrinsic merit to learning how to prove an identity by working from one side to the other.  Algebraic "tricks" --- like multiplying by an expression over itself (\(1\) in disguise!) to conveniently simplify certain expressions --- are important tools for students to have under their belts, especially when they encounter limits and integrals next year in calculus.

What we need to do, then, is encourage our students to come up with multiple solution methods, and perhaps present working from one side to the other as an added challenge to build their mathematical muscles.  And if students are going to work on both sides of an equation at once, then we need to hold them to high standards and make them explicitly state in their proofs that all the steps they have taken are reversible!  If they're unsure on whether or not a step is valid, have them investigate it until they're convinced one way or the other.

If we're artificially limiting our students by claiming that only one solution method is correct, we're sending the wrong message about what mathematics really is.  Instead, celebrating and cultivating our students' creativity is the best way to prepare them for problem-solving in the real world.

--

* Rather, I would say it baffles me, except that I'm quite used to seeing textbooks treat mathematical topics as disparate and unconnected, like how a number of Precalculus books teach vectors in one chapter and matrices in the next, yet never once mention how beautifully they are tied together via transformations.

** Except perhaps at a few points.  The more correct term for rational functions and certain trigonometric functions is actually meromorphic, which describes functions that are analytic everywhere except a discrete set of points, called the poles of the function, where the function blows up to infinity because of division by zero.

*** If you extend the domains of the trig functions to allow for division by zero, you do need to be more careful.  Not because there's anything intrinsically wrong with dividing by zero, but because \(0\cdot\infty\) is an indeterminate expression and causes problems that algebra simply can't handle.

January 2, 2019

When Math is in Jeopardy!

(Forgive the somewhat dramatic title, but it was too good to pass up.)

I absolutely love game shows.  It's always fun to imagine you're there on stage ... or at least to yell at the people on TV when they don't know a question that was just so easy!

One of my favorite game shows, of course, is Jeopardy --- it has an air of intellectualism about it, with such an eclectic collection of topics.  I don't even mind that I'm admittedly pretty terrible at it... I always find myself thinking, "Man, if they just had some math questions, I could knock those out of the park!"

Well, a couple of days ago, as we were getting ready to celebrate the advent of 2019, I was elated to see that there was in fact a math question!  (Or math answer, rather.  You know what, I'm just going to call them "clue" and "response" to avoid confusion.)

The category was "Ends in 'ITE'", and the clue was as follows:
"In math, when the number of elements in a set is countable, it's this type of set."
Thanks to @GoGoGadgetGabe on Twitter for the screenshot!
As I was trying to think of what response would end in those three letters, a contestant buzzed in and said:
"What is 'finite'?"
Alex Trebek notified the contestant that they were correct.

Meanwhile, I was speechless because I knew they weren't.

Well, at least not completely correct.  See, in math, words have very specific definitions in order to describe very specific phenomena.  In this case, the word "countable" is used in set theory to describe a set --- a collection of things --- that can be put into one-to-one correspondence with a subset of the natural numbers, \(\mathbb{N}=\{0,1,2,3,\ldots\}\).*  So, while finite sets are indeed countable, there are also infinite countable sets --- the integers, the even numbers, and the rational numbers are all well-known examples.  (That last one still amazes me --- in some sense, there are exactly as many fractions as there are whole numbers, even though it seems like the former should outnumber the latter!)
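To make the distinction concrete, here's a tiny sketch (my own illustration) of the back-and-forth pairing that shows the integers are countable even though they're infinite:

```python
# Pair each natural number with an integer, hitting every integer exactly once:
# 0, 1, -1, 2, -2, 3, -3, ...  That pairing is what "countably infinite" means.
def nth_integer(n):
    return (n + 1) // 2 if n % 2 else -(n // 2)

print([nth_integer(n) for n in range(9)])  # [0, 1, -1, 2, -2, 3, -3, 4, -4]
```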

That means that the statement "When the number of elements in a set is countable, it's a finite set" is actually incorrect.

Naturally, I took to the internet to voice my displeasure with how my favorite subject was represented on national television.  When talking with a friend of mine (who knows more about game shows than I ever will), I learned of two other cases in the past couple of months where math-centric Jeopardy clues had issues.
  • In one case, the clue was, "If \(x^2+2=18\), then \(x\) equals this."  The response that was judged to be correct was "What is '\(4\)'?".  Any algebra teacher reading this right now is shaking their head, because that answer won't get you full credit on any test.  There are two possible values of \(x\): \(4\) and \(-4\).
  • In another case, the response to the clue was supposed to be "What is the commutative property?".  Nobody got it correct (which makes me sad in itself), but when Alex Trebek read the correct response out loud, he said it as "COM-myoo-TAY-tive" instead of "com-MYOO-tuh-TIVE".
Now, in isolation, any one of these would be only a minor annoyance.  After all, there are plenty of clues on Jeopardy that have multiple possible correct responses, and contestants aren't expected to give all correct responses, but rather just one.

But the fact that questions about mathematics appear so infrequently on the show compared to topics such as history, combined with the fact that these kinds of details are not attended to, seems to send a message that mathematics is considered to be not as important, not worth researching fully in the spirit of the subject.

We already live in a culture in which any time I tell somebody I teach math, the inevitable response is "Oh, haha, I was never any good at math."  Somehow people seem proud to admit and even proclaim this.  I'm willing to wager (maybe even make this a true Daily Double) that those people would be much more reluctant to say something like "Oh, haha, I was never any good at reading."  There's a pervasive attitude that mathematics is a torturous and frivolous subject, devoid of the awe-inspiring beauty and sheer fun that those who embrace the subject know it to have.

With that said, I'd like to challenge the writers of Jeopardy --- and perhaps other game shows as well --- to make a conscious effort not only to ask more questions about mathematics, but to take care to do them well, perhaps even consulting one or more mathematicians to make sure the precision and nuance of the subject are properly represented.  (I know math teachers who have come up with versions of game shows for their classes with only mathematically-oriented questions... the students love it!)

It doesn't have to be something like "\(1\times 2\times 3\times 4\times 5\)"** either.  There's such a rich amount of material to pull from --- why not ask questions about, say, fractals?  Or famous mathematicians?  Or even unsolved problems (and those who eventually solved them)?  I would be giddy to see something like,
"Shot and killed in a duel when he was only twenty, this mathematician spent the last night of his life writing down what he'd discovered about quintic equations."
Doesn't that make you want to find out what the story was, why he felt fifth-degree equations were so important that he just had to share it, knowing he would soon die?  Mathematics is full of stories like this, and perhaps letting people know those stories exist, that there's more to math than doing arithmetic problems, might change how people view the subject.

Of course, if winning prize money is what you like about game shows, there's always the million-dollar Millennium Prize Problems...

--

* You might be thinking, "But zero isn't a natural number!"  As it turns out, there's no real consensus on whether zero is considered a natural number.  Some mathematicians choose to include it, while others don't.  To some extent it depends on your field of study --- for example, number theorists may be more likely to exclude zero because it doesn't play very nicely with things like prime factorizations, while computer scientists are used to counting from zero instead of one.  Peano actually started his axiomatization of the natural numbers with one as the starting point, but later changed his mind and started with zero!

** This was actually a clue on a Kid's Jeopardy episode, under the category "Non-Common Core Math".  That will eventually be the impetus for another blog post in the future on the way we currently view mathematics teaching.

October 21, 2017

Discreet Discrete Calculus

Over the past week, I went to the Georgia Mathematics Conference (GMC) at Rock Eagle, held by the Georgia Council of Teachers of Mathematics (GCTM).  The GMC is one of the events I look forward to most every year --- tons of math educators and advocates sharing lessons, techniques, and ideas about how to best teach math to students from kindergarten through college.  I always enjoy sharing my own perspectives as well (even when they do get a bit bizarre!)

This time, I got to share the results of a lesson that I guinea-pigged on my Honors Precalculus class last year, where they explored the relationships between polynomial sequences, common differences, and partial sums.  The presentation from the GMC uses the techniques we looked at to develop the formula for the sum of the first \(n\) perfect squares:

\[1^2+2^2+3^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}\]

At the GMC, we did go a bit further than my class did --- they didn't do the full development of discrete calculus --- but some knowledge of where the ideas lead is never a bad thing, and a different class at a different school may even be able to go further!
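If you want a taste of the underlying idea before opening the slides, here's a minimal numerical sketch (my own illustration, not taken from the presentation): the partial sums of the squares form a cubic sequence, so their third differences are constant, and the closed form above agrees with brute force.

```python
# Partial sums of 1^2 + 2^2 + ... + n^2, and their successive differences.
def differences(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

partial_sums = [sum(k * k for k in range(1, n + 1)) for n in range(1, 11)]
print(partial_sums)               # [1, 5, 14, 30, 55, 91, 140, 204, 285, 385]
print(differences(partial_sums))  # the squares themselves: 4, 9, 16, ..., 100
print(differences(differences(differences(partial_sums))))  # [2, 2, 2, 2, 2, 2, 2]

# The closed form n(n+1)(2n+1)/6 matches the brute-force sums.
assert all(n * (n + 1) * (2 * n + 1) // 6 == sum(k * k for k in range(1, n + 1))
           for n in range(1, 100))
```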

Here is the PowerPoint from the presentation.  If you find it useful, or have any questions, please don't hesitate to leave a comment!

(I recommend downloading the PPTX file and viewing the slide show in PowerPoint, rather than using Google's online viewer.)

September 15, 2017

I'm Done with Two Column Proofs.

Wow.

It really has been a while since I've posted here, hasn't it?

But I suppose having a mini-crisis in my geometry class that has forced me to reject our textbook's philosophy on what "proofs" should be is a good enough reason to resurrect this blog.  I finally have the time this year, and I feel the need to share my probably overly opinionated beliefs about math education.

This is the first year I have tried to integrate proofs into our school's geometry curriculum across the board.  (In the past, proofs were only discussed in honors classes, but I felt somehow that reasoning through how you knew something was true was important for everyone.)  I was trying to justify the two-column format, as much as I hate it, as a way to "scaffold" student thinking --- yay educational buzzwords!  But when I actually did it, it got exactly the reaction that I knew it would --- it just served to overly obfuscate the material and utterly drain the life out of it.  I realized I should have stuck to my guns and listened to the likes of Paul Lockhart and Ben Orlin.

So, after some reflection and course correction, here's the email I just sent my students.

---
Hello mathematicians. I have a rather bizarre request. 
Don't do your geometry homework this weekend. 
Yes, you read that right. Don't. 
Let me explain.
We've spent the past couple of days looking at "proofs" in geometry. The reason I say "proofs" in quotations is that, in all honesty, I don't believe the two-column proofs that our book does are all that useful. I have actually long been opposed to them, but against my better judgment, decided to give them a shot anyway and make them sound reasonable. But you know what they say ... if you put lipstick on a chazir*, it's still a chazir. (They do say that, right?) The thing is, that style of proof just ends up sounding like an overly repetitive magical incantation rather than an actual logical argument --- as some of you pointed out in class today. I truly do value that honesty, by the way, and I hope you continue to be that honest with me.
Here is what I actually will expect of you going forward. It's quite simple: 
I expect you to be able to tell me how you know something is true, and back it up with evidence. 
That's it. 
It may be a big-picture kind of question, or it may be telling me how we get from step A to step B, but when it really comes down to it, it's all just "here's why I know this is true, based on this evidence". You don't have to cite some stilted-sounding name like the "Congruent Supplements Theorem" either --- just explain it in your own words. That doesn't mean that any explanation is correct --- it still has to be valid mathematical reasoning. You can't tell me that two segments on a page are congruent because they're drawn in the same color, or something silly like that. But it doesn't have to be in some prescribed way --- just as long as you show me you really do understand it.
With that in mind, by the way, I'm also not going to be giving you a quiz on Monday, either. Instead, we're going to focus on how to make arguments that are a lot more convincing than just saying the same thing in different words. I think you'll find that Monday's class will make a lot more sense than the past few classes combined. 
So, relax, take a much-deserved Shabbat, and when we come back, I hope to invite you to see geometry the way I see it --- not as a set of arbitrary rules, but as something both logical and beautiful. 
Shabbat Shalom.

*  I teach at a Jewish private school.  "Chazir" is Hebrew for "pig", which has the added bonus of being non-Kosher.  Two-column proofs are treif... at least in the context of introductory geometry.

P.S.  I am not saying that two-column proofs NEVER have a place in mathematics.  I am merely saying that introductory geometry, when kids are still getting used to much of geometry as a subject, is not the proper place to introduce the building of an axiomatic system.  Save that for later courses for the students who choose to become STEM majors.

August 14, 2015

A Real-Life Paradox: The Banach-Tarski Burrito

Who knew the Axiom of Choice could help me decide whether to get guacamole for an extra $1.95?

A couple of weeks ago, the popular YouTube channel Vsauce released a video that tackles what it describes as “one of the strangest theorems in modern mathematics”: the Banach-Tarski Paradox.  In the video, Michael Stevens explains how a single sphere can be decomposed into peculiar-looking sets, after which those sets can be recombined to form two spheres, each perfectly identical to the original in every way.  If you haven’t had a chance to watch the video, go ahead and do so here:


Although this seems like a purely theoretical abstraction of mathematics, the video leaves us wondering if perhaps there could be a real-world application of such a bizarre phenomenon.  Stevens asks, “is [the Banach-Tarski paradox] a place where math and physics separate?  We still don’t know … The Banach-Tarski Paradox could actually happen in our real world … some scientists think it may be physically valid.”

Well, my friends, I would like to make the bold claim that I have indeed discovered a physical manifestation of this paradox.

And it happened a few years ago at my local Chipotle.

Let me start off by saying this:  I love Chipotle.  It’s a particularly good day for me when I walk in and get my burrito with brown rice, fajita veggies, steak, hot salsa, cheese, pico de gallo, corn, sour cream, guacamole (yes I know it’s extra, just put it on my burrito already!), and a bit of lettuce.  No chips, Coke, and about a half hour later I’m one happily stuffed math teacher.

The only thing that I don’t like about Chipotle is that the construction of said burritos often ends up failing at the most crucial step – the rolling into one coherent, tasty package.  Given the sheer amount of food that gets crammed into a Chipotle burrito, it’s unsurprising that they eventually lose their structural integrity and burst, somewhat defeating the purpose of ordering a burrito in the first place.

If you have ever felt the pain of seeing your glorious Mexican monstrosity explode with toppings like something out of an Alien movie because of an unlucky burrito-roller, you have probably been offered the opportunity to “double-wrap” your burrito for no extra charge, giving it an extra layer of tortilla to ensure the safe deliverance of guacamole-and-assorted-other-ingredients into your hungry maw.

Now, being a mathematically-minded kind of guy, I asked the employee who made me this generous offer:

“Well, could I just get my ingredients split between two tortillas instead?”

The destroyer-of-burritos gave that look that you always get from anybody who works at a business that bandies about words like “company policy” when they realize they have to deny a customer’s request even in the face of logic, and said:

“If you do that, we’ll have to charge you for two burritos.”

I was dumbfounded.

“Wait … so you’re saying that if you put a second tortilla around my burrito, you’ll charge me for one burrito, but if you rearrange the exact same ingredients, you’ll charge me for two?”

“Yes sir – company policy.”

Utterly defeated, I begrudgingly accepted the offer to give my burrito its extra layer of protection, doing my best to smile at the girl who probably knew as well as I did the sheer absurdity of the words that had come out of her mouth.  I paid the cashier, let out an audible “oof” as I lifted the noticeably heavy paper bag covered with trendy lettering, and exited the store.

When I arrived home, I took what looked like an aluminum foil-wrapped football out of the bag (which was a great source of amusement for my housemates), laid it out on the kitchen table, and decided to dismantle the burrito myself and arrange it into two much more manageable Mexican morsels.  I wondered whether I should have done this juggling of ingredients right there at Chipotle, just to see whether the staff’s heads would explode.

It was in that moment, with my head still throbbing from the madness of the entire experience, that I began to realize what had just happened.  How was it possible that a given mass of food could cost one amount one moment and another amount the next?  I immediately began to deconstruct my burrito, laying out the extra tortilla onto a plate and carefully making sure that precisely one-half of the ingredients – especially the guacamole – found their way into their new home.  As I carefully re-wrapped both tortillas, my suspicions were confirmed.  Sitting right in front of me were two delicious burritos, each identical in price to my original.

I had discovered the Banach-Tarski Burrito.

April 24, 2015

A Radical New Look for Logarithms

"A good notation has a subtlety and suggestiveness which at times make it almost seem like a live teacher." — Bertrand Russell

"We could, of course, use any notation we want; do not laugh at notations; invent them, they are powerful. In fact, mathematics is, to a large extent, invention of better notations." — Richard Feynman

"By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race." — Alfred North Whitehead


Notation is perhaps one of the most important aspects of mathematics.  The right choice of notation can make a concept clear as day; the wrong choice can make extracting its meaning hopeless.  Of course, one great thing about notation is that even if there's a poor choice of notation out there (such as \(\left[x\right]\) or \(\pi\)), often someone comes along and creates a better one (such as \(\lfloor x\rfloor\) for the floor function or multiples of tau, \(\tau\approx 6.28318\), for radian measure of angles).

Which brings me to one such poor choice of notation, one that I believe needs fixing:  the rather asymmetrical notation of powers, roots, and logarithms.

Here we have three very closely related concepts — both roots and logarithms are ways to invert exponentiation, the former returning the base and the latter returning the exponent.  And yet their notation couldn't be more different:
\[2^3=8\\
\sqrt[3]{8}=2\\
\log_2{8}=3\] This always struck me as annoyingly inelegant.  Wouldn't it be nice if these notations bore at least some resemblance to each other?

After giving it some thought, I believe I have found a possible solution.  As an alternative to writing \(\log_2 {8}\), I propose the following notation:


This notation makes use of a reflected radical symbol, such that the base of the logarithm is written in a similar manner to the index of a radical but below the "point" (the pointy part of the radical symbol), and the argument of the logarithm is written "inside".  The use of this notation has a number of advantages:

  1. The symmetry between the normal radical for roots and the reflected radical for logarithms highlights both their similarities and their differences — each one "undoes" an exponential expression, but each one gives a different part of the expression (the base and the exponent, respectively.)
  2. The radical symbol can be looked at as a modified lowercase letter "r".  (This may actually be the origin of the symbol, where the "r" stands for radix, the Latin word for "root".)  In a similar way, the new symbol for logarithms resembles a capital "L".
  3. The placement of the "small number" and the "point" can take on a secondary spatial meaning: 
    • The "small number" represents a piece of information we know about an exponential expression, and its placement indicates which part we know.
      • For a root, the "small number" is on top, so we know the exponent.
      • For a logarithm, the "small number" is on bottom, so we know the base.
    • The symbol seems to "point" to the piece of information that we are looking for.
      • For a root, the "point" is pointing downward, so we are looking for the base.
      • For a logarithm, the "point" is pointing upward, so we are looking for the exponent.
    • Looking at the image above, the new notation seems to say "We know the base is 2, so what's the exponent that will get us to 8?"
    • Similarly, the expression \(\sqrt[3]{8}\) now can be interpreted as saying "We know the exponent is 3, so what's the base that will get us to 8?"

This notation would obviously not make much of a difference for seasoned mathematicians who are perfectly comfortable with the \(\log\) and \(\ln\) functions.  But from a pedagogical standpoint, the reflected radical, with its multi-layered meaning and auto-mnemonic properties, could help students become more comfortable with a concept that many look at as just meaningless manipulation of symbols.

When I first came up with this reflected-radical notation, I had originally imagined that it should replace the current notation.  However, after some feedback from various people and some further consideration, I think a better course of action would be to have this notation be used alongside the current notation, much in the way that we have multiple notations for other concepts in math (such as the many ways to write derivatives).  However, I would suggest that, if it were to become commonplace*, this notation would be best to use when first introducing the concept in schools.  The current notation isn't wrong per se — it's just not very evocative of the underlying concept.  Anything that can better elucidate that concept can't be a bad thing when it comes to students learning mathematics!

It may seem like a radical idea.
But it's a logical one.


* Of course, for this notation to become commonplace, somebody would need to figure out how to replicate it in LaTeX.  Any takers?
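For what it's worth, here's one rough way someone might fake it with graphicx's \scalebox (the macro name \rlog, the spacing, and the whole approach are just my own guesses, and the result would certainly need polish -- the vertical alignment in particular):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{graphicx} % provides \scalebox

% Rough sketch: typeset an ordinary radical whose index and argument have been
% pre-flipped, then flip the whole thing vertically, so the "point" faces up,
% the base sits below it, and the argument reads right-side up inside.
% (The extra braces around the optional argument hide its brackets from \sqrt.)
\newcommand{\rlog}[2]{%
  \raisebox{\depth}{\scalebox{1}[-1]{$\sqrt[{\scalebox{1}[-1]{$\scriptstyle #1$}}]{\,\scalebox{1}[-1]{$#2$}\,}$}}%
}

\begin{document}
\[ \rlog{2}{8} = 3 \qquad \sqrt[3]{8} = 2 \]
\end{document}
```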
