August 14, 2015

A Real-Life Paradox: The Banach-Tarski Burrito

Who knew the Axiom of Choice could help me decide whether to get guacamole for an extra $1.95?

A couple of weeks ago, the popular YouTube channel Vsauce released a video that tackles what it describes as “one of the strangest theorems in modern mathematics”: the Banach-Tarski Paradox.  In the video, Michael Stevens explains how a single sphere can be decomposed into peculiar-looking sets, after which those sets can be recombined to form two spheres, each perfectly identical to the original in every way.  If you haven’t had a chance to watch the video, go ahead and do so here:


Although this seems like a purely theoretical abstraction of mathematics, the video leaves us wondering if perhaps there could be a real-world application of such a bizarre phenomenon.  Stevens asks, “is [the Banach-Tarski paradox] a place where math and physics separate?  We still don’t know … The Banach-Tarski Paradox could actually happen in our real world … some scientists think it may be physically valid.”

Well, my friends, I would like to make the bold claim that I have indeed discovered a physical manifestation of this paradox.

And it happened a few years ago at my local Chipotle.

Let me start off by saying this:  I love Chipotle.  It’s a particularly good day for me when I walk in and get my burrito with brown rice, fajita veggies, steak, hot salsa, cheese, pico de gallo, corn, sour cream, guacamole (yes I know it’s extra, just put it on my burrito already!), and a bit of lettuce.  No chips, Coke, and about a half hour later I’m one happily stuffed math teacher.

The only thing that I don’t like about Chipotle is that the construction of said burritos often ends up failing at the most crucial step – the rolling into one coherent, tasty package.  Given the sheer amount of food that gets crammed into a Chipotle burrito, it’s unsurprising that they eventually lose their structural integrity and burst, somewhat defeating the purpose of ordering a burrito in the first place.

If you have ever felt the pain of seeing your glorious Mexican monstrosity explode with toppings like something out of an Alien movie because of an unlucky burrito-roller, you have probably been offered the opportunity to “double-wrap” your burrito for no extra charge, giving it an extra layer of tortilla to ensure the safe deliverance of guacamole-and-assorted-other-ingredients into your hungry maw.

Now, being a mathematically-minded kind of guy, I asked the employee who made me this generous offer:

“Well, could I just get my ingredients split between two tortillas instead?”

The destroyer-of-burritos gave that look that you always get from anybody who works at a business that bandies about words like “company policy” when they realize they have to deny a customer’s request even in the face of logic, and said:

“If you do that, we’ll have to charge you for two burritos.”

I was dumbfounded.

“Wait … so you’re saying that if you put a second tortilla around my burrito, you’ll charge me for one burrito, but if you rearrange the exact same ingredients, you’ll charge me for two?”

“Yes sir – company policy.”

Utterly defeated, I begrudgingly accepted the offer to give my burrito its extra layer of protection, doing my best to smile at the girl who probably knew as well as I did the sheer absurdity of the words that had come out of her mouth.  I paid the cashier, let out an audible “oof” as I lifted the noticeably heavy paper bag covered with trendy lettering, and exited the store.

When I arrived home, I took what looked like an aluminum foil-wrapped football out of the bag (which was a great source of amusement for my housemates), laid it out on the kitchen table, and decided to dismantle the burrito myself and arrange it into two much more manageable Mexican morsels.  I wondered whether I should have done this juggling of ingredients right there at Chipotle, just to see whether the staff’s heads would explode.

It was in that moment, with my head still throbbing from the madness of the entire experience, that I began to realize what had just happened.  How was it possible that a given mass of food could cost one amount one moment and another amount the next?  I immediately began to deconstruct my burrito, laying out the extra tortilla onto a plate and carefully making sure that precisely one-half of the ingredients – especially the guacamole – found their way into their new home.  As I carefully re-wrapped both tortillas, my suspicions were confirmed.  Sitting right in front of me were two delicious burritos, each identical in price to my original.

I had discovered the Banach-Tarski Burrito.

April 24, 2015

A Radical New Look for Logarithms

"A good notation has a subtlety and suggestiveness which at times make it almost seem like a live teacher." — Bertrand Russell

"We could, of course, use any notation we want; do not laugh at notations; invent them, they are powerful. In fact, mathematics is, to a large extent, invention of better notations." — Richard Feynman

"By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race." — Alfred North Whitehead


Notation is perhaps one of the most important aspects of mathematics.  The right choice of notation can make a concept clear as day; the wrong choice can make extracting its meaning hopeless.  Of course, one great thing about notation is that even if there's a poor choice of notation out there (such as \(\left[x\right]\) or \(\pi\)), often someone comes along and creates a better one (such as \(\lfloor x\rfloor\) for the floor function or multiples of tau, \(\tau\approx 6.28318\), for radian measure of angles).

Which brings me to one such poor choice of notation, one that I believe needs fixing:  the rather asymmetrical notation of powers, roots, and logarithms.

Here we have three very closely related concepts — both roots and logarithms are ways to invert exponentiation, the former returning the base and the latter returning the exponent.  And yet their notation couldn't be more different:
\[2^3=8\\
\sqrt[3]{8}=2\\
\log_2{8}=3\]This always struck me as annoyingly inelegant.  Wouldn't it be nice if these notations bore at least some resemblance to each other?

After giving it some thought, I believe I have found a possible solution.  As an alternative to writing \(\log_2 {8}\), I propose the following notation:

[Image: the proposed notation - a reflected radical symbol with the base 2 written below its "point" and the argument 8 inside]
This notation makes use of a reflected radical symbol, such that the base of the logarithm is written in a similar manner to the index of a radical but below the "point" (the pointy part of the radical symbol), and the argument of the logarithm is written "inside".  The use of this notation has a number of advantages:

  1. The symmetry between the normal radical for roots and the reflected radical for logarithms highlights both their similarities and their differences — each one "undoes" an exponential expression, but each one gives a different part of the expression (the base and the exponent, respectively).
  2. The radical symbol can be looked at as a modified lowercase letter "r".  (This may actually be the origin of the symbol, where the "r" stands for radix, the Latin word for "root".)  In a similar way, the new symbol for logarithms resembles a capital "L".
  3. The placement of the "small number" and the "point" can take on a secondary spatial meaning: 
    • The "small number" represents a piece of information we know about an exponential expression, and its placement indicates which part we know.
      • For a root, the "small number" is on top, so we know the exponent.
      • For a logarithm, the "small number" is on bottom, so we know the base.
    • The symbol seems to "point" to the piece of information that we are looking for.
      • For a root, the "point" is pointing downward, so we are looking for the base.
      • For a logarithm, the "point" is pointing upward, so we are looking for the exponent.
    • Looking at the image above, the new notation seems to say "We know the base is 2, so what's the exponent that will get us to 8?"
    • Similarly, the expression \(\sqrt[3]{8}\) now can be interpreted as saying "We know the exponent is 3, so what's the base that will get us to 8?"

This notation would obviously not make much of a difference for seasoned mathematicians who are perfectly comfortable with the \(\log\) and \(\ln\) functions.  But from a pedagogical standpoint, the reflected radical, with its multi-layered meaning and auto-mnemonic properties, could help students become more comfortable with a concept that many look at as just meaningless manipulation of symbols.

When I first came up with this reflected-radical notation, I had originally imagined that it should replace the current notation.  However, after some feedback from various people and some further consideration, I think a better course of action would be to have this notation be used alongside the current notation, much in the way that we have multiple notations for other concepts in math (such as the many ways to write derivatives).  However, I would suggest that, if it were to become commonplace*, this notation would be best to use when first introducing the concept in schools.  The current notation isn't wrong per se — it's just not very evocative of the underlying concept.  Anything that can better elucidate that concept can't be a bad thing when it comes to students learning mathematics!

It may seem like a radical idea.
But it's a logical one.


* Of course, for this notation to become commonplace, somebody would need to figure out how to replicate it in LaTeX.  Any takers?
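
Here's one rough proof of concept - not a polished implementation, just a sketch that leans on the standard graphicx package's \scalebox to mirror an ordinary \sqrt.  The spacing, baseline, and exact placement of the base would all need tuning:

```latex
\documentclass{article}
\usepackage{graphicx} % provides \scalebox, used here to mirror the radical

% \rlog{base}{argument}: a rough sketch of the reflected radical.
% \scalebox{1}[-1]{...} flips its contents upside down, so we flip the whole
% radical and then flip the argument back so it reads right side up.  The
% base is attached at the lower left, roughly "below the point".
\newcommand{\rlog}[2]{%
  {}_{#1}\scalebox{1}[-1]{$\sqrt{\scalebox{1}[-1]{$#2$}}$}%
}

\begin{document}
$\rlog{2}{8} = 3$
\end{document}
```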

September 15, 2014

Infinity is my favorite number.



Yes, you read that right.

I've recently been embroiled in a lovely debate on Numberphile's video, "Infinity is bigger than you think", in which Dr. James Grime starts off:  "We're going to break a rule.  We're breaking one of the rules of Numberphile.  We're talking about something that isn't a number.  We're going to talk about infinity."



I, too, was a longtime believer in what high school students all over are told:  "Infinity is not a number; infinity is a concept."  As my studies of mathematics progressed, however, I began to see that perhaps the things I had always taken for granted were not as black-and-white as they had seemed.  There was a lot more nuance to mathematics than I had ever realized, and learning those nuances opened up an entire new level of understanding, unlocking all sorts of links between concepts that had previously seemed worlds apart.  So it's no wonder that "Infinity Is Not A Number" (which I will occasionally abbreviate as "IINAN") was one of the first claims to which I took a fine-tooth comb.  What I learned changed my stance on infinity and firmly cemented it as my favorite number - not just a concept, but an honest-to-god number.*

The most common argument made by IINAN proponents involves the curious property that \(\infty +1=\infty\).  This, they say, leads to all sorts of contradictions, because all one has to do is simply subtract \(\infty\) from both sides:
\[
\infty+1=\infty\\
\underline{-\infty\ \ \ \ \ \ \ \ -\infty}\ \ \\
\ \ \ \ \ \ 1=0\]Oh no!  We know that the statement \(1=0\) is obviously false, so there must be a false assumption somewhere.  Many IINAN defenders claim that the false assumption was that we tried to treat \(\infty\) as a number.  But that's not actually where the problem with infinity lies.

The problem is that we tried to do algebra with it.

For mathematicians, the most convenient place to do algebra is in a structure called a field.  If you're already familiar with what a field is, great, but if not, you can think of a field as a number system in which the age-old operations of addition, subtraction, multiplication, and division (the four operations that my father often notes are the only ones he ever needs when I talk about the kinds of math I teach) work exactly as we'd like them to.  The fields with which we are most familiar are the rational numbers (\(\mathbb{Q}\)), the badly-named so-called "real" numbers (\(\mathbb{R}\)), and often the complex numbers (\(\mathbb{C}\)).  One basic property of a field is that the subtraction property of equality holds:  For any numbers \(a\), \(b\), and \(c\) in our field, if \(a=b\), then \(a-c=b-c\).

What about \(\infty\) though?  When we attempted to use \(\infty\) in an algebra problem, we got back complete garbage.  And we know that the subtraction property of equality should hold for any numbers in a field.  What this means, then, is that \(\infty\) is not part of that field (or any field as far as I'm aware).  So, when someone says "Infinity is not a number", what they really mean is "Infinity is not a real number."  (It's not a complex number, either, for that matter.)  It doesn't follow the same rules that the real numbers do.

But that doesn't mean it's not a number at all.

We've seen this sort of thing happen before.  The Greek mathematician Diophantus, when faced with an equation equivalent to \(4x+20=4\), dismissed its negative solution as "absurd" — yet now students learn about negative numbers as early as elementary school, and we barely blink an eye at their use in everyday life.  Square roots of negative numbers seemed equally preposterous to the Italian mathematician Gerolamo Cardano, and the French mathematician René Descartes called them "imaginary", a term that we're unfortunately stuck with today.  But imaginary numbers — and the complex numbers we build from them — are a vital part of physics, from alternating currents to quantum mechanics.

So what makes infinity any different from \(i\)?

Sure, it seems bizarre that a number plus one could equal itself.  But it's equally bizarre that the square of a number could be negative.  And sure, we can get a contradiction if we do certain things to infinity.  But that happens with \(i\) as well!  If we attempt to use the identity \(\sqrt{a}\cdot\sqrt{b}=\sqrt{ab}\), which holds for nonnegative real numbers, we can arrive at a similar contradiction:
\[\sqrt{-1}\cdot\sqrt{-1}=\sqrt{-1\cdot-1}\\
\ \ \ i\cdot i=\sqrt{1}\\
-1=1\ \]When this equation fails, you don't see mathematicians clamoring that "\(i\) isn't a number"!  Instead, the response is that the original equation doesn't work like we thought it did when we extend our real number system to include the complex numbers — instead, the square root function takes on a new life as a multi-valued function.  There's that nuance again!  For the same reason, infinity makes us look closer at something as simple as subtraction, at which point we find that \(\infty-\infty\) is an indeterminate form, something that we need the tools of calculus to properly deal with.

The truth is, mathematicians have been treating \(\infty\) as a number** for quite some time now.

In real analysis, which was developed to give the techniques of calculus a rigorous footing, points labelled \(+\infty\) and \(-\infty\) can be added to either end of the real number line to give what we call the extended real number line, often denoted \(\overline{\mathbb{R}}\) or \(\left[-\infty,+\infty\right]\).  The extended real number line is useful in describing concepts in measure theory and integration, and it has algebraic rules of its own, though analysts are still careful to mention that these two extra points are not real numbers.  What's more, the extended real line is not a field, because it doesn't satisfy all the nice properties that a field does.  (But that just makes us appreciate working in a field that much more!)
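
As an aside (my own illustration, not something from the analysis textbooks), the floating-point numbers in most programming languages behave a lot like the extended real line, with \(+\infty\) and \(-\infty\) built in - and even they refuse to assign a value to \(\infty-\infty\):

```python
inf = float("inf")

print(inf + 1 == inf)   # True  -- adding a finite number doesn't change infinity
print(2 * inf)          # inf
print(1 / inf)          # 0.0
print(inf - inf)        # nan   -- the indeterminate form: no single sensible value
print(-inf < 42 < inf)  # True  -- every finite number sits between the two endpoints
```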

Projective geometry gives us a different sort of infinity, what I like to call an "unsigned infinity", one that is obtained by letting \(-\infty\) and \(+\infty\) overlap and creating what is known as the real projective line.  And complex analysis, which extends calculus to the complex plane, takes it even further, letting all the different infinities in all directions overlap to create a sort of "complex infinity", sometimes written \(\tilde{\infty}\), sitting atop the Riemann sphere.  What I particularly like about these projective infinities is that, using them, you can actually divide by zero! ***
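
If you'd like to play with these different flavors of infinity yourself, the SymPy library (a Python package for symbolic math) distinguishes them explicitly - oo for the signed infinity of the extended reals and zoo for complex (projective) infinity.  A small sketch, based on SymPy's behavior as I understand it:

```python
from sympy import S, oo, zoo

print(oo + 1)       # oo   -- extended-real infinity absorbs finite additions
print(oo - oo)      # nan  -- indeterminate, just as in the extended reals
print(S(1) / S(0))  # zoo  -- exact division by zero gives complex infinity
print(1 / zoo)      # 0    -- and it works in reverse, too
```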

So, since there are actually a number of different kinds of infinity that can be referred to, I would say that, more specifically, complex infinity is my favorite number.

The tough thing about this situation is that the concept of "number" is a very difficult one to precisely and universally define — similar to how linguists still struggle to come up with a universal definition of "word".  By trying to come up with such a description, you end up either including things that you don't want to be numbers (such as matrices) or excluding things that you do want to be numbers (such as complex numbers).  The best we can really do is keep an open mind about what a "number" is.

After all, there's infinitely many of them already — so there's bound to be new ones we haven't seen yet sooner or later.





* I'm not saying that infinity isn't a concept.  When it really comes down to it, every number is a concept.  That's the beauty of having abstracted the number "two" from an adjective, as in "two sheep", into "two" as a noun in its own right.

** There's an argument to be made that treating something like a number doesn't mean it is a number.  But at some point, the semantic distinction between these two becomes somewhat blurred.

*** Don't worry, I'll make a post about how to legitimately divide by zero in the near future!

September 8, 2014

Throw Outdated Notation to the Floor

There's no excuse for bad notation.

Well, there used to be.  In older times, mathematical notation and terminology were anything but standard - especially when the mathematics behind them was just coming into existence.  What Newton called fluxions, Leibniz called differentials.  Even the now-ubiquitous \(\pi\) was just one of many symbols used by various mathematicians to represent the constant \(3.14159...\) - and even then they weren't aware they only had half of the most fundamental circle constant.*  But as the human species has progressed, so has our ability to communicate with each other, allowing us to collaborate and spread ideas more quickly and more extensively than ever before.  It's no wonder that with the acceleration in communications technology, mathematicians have replaced clunky symbols with more elegant and consistent notations.

The only problem is that the math textbooks haven't quite caught up.

Let's take a look at a function that causes glazed-over eyes for many high schoolers every time it shows up in an Algebra II or Precalculus book:  the greatest integer function, usually notated as \(\left[x\right]\) or occasionally \(\left[\!\left[x\right]\!\right]\).  This function is defined as "the greatest integer that is less than or equal to \(x\)."  For example:

\[\left[5\right]=5\\\left[5.1\right]=5\\\left[5.9\right]=5\\\left[-5.1\right]=-6\]
The bracket notation was invented by Carl Friedrich Gauss, who used it in his 1808 proof of the law of quadratic reciprocity.**

If you've taught high school students this function, you're probably familiar with the looks of confusion that come shortly after writing down the definition.  "So wait, do we want something less than the number or greater than the number?"  You could break down that definition piece by piece and hope your students can follow the roundabout logic, but perhaps you've found it's easier to link it to something that students are more familiar with - rounding.  To find \(\left[x\right]\), all we need to do is round \(x\) down to the nearest integer (meaning to the left on the real number line).  And of course, if \(x\) is already an integer, then \(\left[x\right]=x\), since no rounding needs to happen.

So now it's a bit easier to describe \(\left[x\right]\), but the notation seems rather arbitrary, doesn't it?  We already use square brackets for other things - mainly for grouping terms together in large expressions and to denote closed intervals.  Writing \(\left[\!\left[x\right]\!\right]\) is at least unique, but even more clunky.  And that's what Kenneth Iverson, the Canadian computer scientist who invented APL in 1962, must have thought as well.  In his book A Programming Language, he gave the function a new name - the floor function - and a new notation:

\[\lfloor x\rfloor\]
Now there's some solid notation!  The name and the bottom half-brackets suggest exactly what to do: round \(x\) down to the nearest integer, just as described earlier.  Take a look:

\[\lfloor5\rfloor=5\\\lfloor5.1\rfloor=5\\\lfloor5.9\rfloor=5\\\lfloor-5.1\rfloor=-6\]
I showed this to my own students, most of whom vaguely remembered the "greatest integer function" and only half of whom knew which direction it went.  It stuck.  The light bulbs went off all around the room, and the comments were to the effect of "well, that makes a lot more sense!" and "why didn't they teach it this way in the first place?"

But it gets better.  Mathematics is all about patterns and symmetry.  One of the students asked, "so if there's a floor function, is there a ceiling function?"  Yes there is.  The ceiling function is defined as "the least integer that is greater than or equal to \(x\)", and does exactly what you'd expect - it rounds the number up to the nearest integer.  Can you guess what the notation is?

\[\lceil x\rceil\]
And can you guess the answers to the following problems?

\[\lceil5\rceil=?\\\lceil5.1\rceil=?\\\lceil5.9\rceil=?\\\lceil-5.1\rceil=?\]
If you guessed \(5\), \(6\), \(6\), and \(-5\), you're correct.  See how easy math can be when the notation is evocative of the concept behind it?
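
Iverson's names stuck in programming as well - most languages ship floor and ceiling functions under exactly these names.  Here's a quick check in Python (and a reminder that plain int() is neither of them for negative inputs):

```python
import math

print(math.floor(5.1))    # 5
print(math.floor(-5.1))   # -6  -- "down" means to the left on the number line
print(math.ceil(5.9))     # 6
print(math.ceil(-5.1))    # -5
print(int(-5.1))          # -5  -- int() truncates toward zero instead
```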

At this point, you may be wondering, as you should:  why aren't textbooks using this notation, given its obvious pedagogical advantage?  If this notation had been invented only in the past few years, it might be excusable that the publishers haven't caught up yet.  But come on.  It's been over 50 years.  And with the way that textbook companies churn out new editions as often as they can, you'd think that one reason to do so would be to keep their math and their teaching up-to-date.

...right?

It seems that many people are under the unfortunate impression that math, unlike science or social studies, is already set in stone and nothing new really ever comes out of it.  This impression couldn't be further from the truth.  Mathematics is an ever-growing and ever-evolving body of knowledge.

And our potential to understand it better - and teach it better - hasn't hit the ceiling yet.




To read further about the floor and ceiling functions, visit the Wikipedia article on the subject.

* http://www.tauday.com/tau-manifesto

** The law of quadratic reciprocity is a theorem in number theory.  Gauss thought it was so profound and beautiful that he occasionally referred to it as the "golden theorem".

July 31, 2014

Poorly Executed Mnemonics Definitely Addle Students

If you've read my past two posts, you know by now that PEMDAS* (Parentheses, Exponents, Multiplication, Division, Addition, Subtraction) is one of my most hated mnemonics.  There are two main reasons:
  • It's misleading.
  • It's unnecessary.
First of all, why is PEMDAS misleading?

Let's start with the "P" (Parentheses).  People claim that PEMDAS is "the order of operations."  This is already problematic because parentheses aren't really a mathematical operation.**  Operations do things.  Parentheses don't actually do anything - they just group things together.  This distinction may seem like a mere technicality, but it actually brings to light the main issue:  what matters isn't parentheses specifically, but the idea of grouping in general.  There are lots of ways to group expressions.
  • You can group expressions using a fraction bar.  \[\frac{1+2}{3+4}\]
  • You can group expressions under a radical.  \[\sqrt{3^2+4^2}\]
  • You can even group expressions inside an exponent!  \[2^{4+1}\](How are you supposed to "do" exponents before addition if there's addition in the exponent and you don't have parentheses to tell you what to do?)
So in terms of grouping, PEMDAS is at best incomplete.

Though the "E" (Exponents) is pretty much unambiguous, the entire rest of the mnemonic causes problems.  By putting the "MDAS" in linear order, a number of students get the idea that all Multiplication should be done before any Division, and that all Addition should be done before any Subtraction.  Thus you get students who will make the following mistakes:\[4-1+2\\=4-3\\=1(?!)\]\[6\div2\times3\\=6\div6\\=1(?!)\]
What's even scarier is that PEMDAS has become so ingrained in our math education culture that some teachers actually teach it this blatantly incorrect way.  Don't believe me?  Take a look at this video from TED-Ed:


Try to tell me that doesn't lead you to believe that MDAS is done in linear order.  And we wonder why kids have trouble.

Of course, many teachers are careful to explain how the order of operations is actually supposed to work - that multiplication and division (which is just multiplication by the multiplicative inverse) are done in order from left to right as they appear.  Likewise, subtraction is just addition of the additive inverse, so addition and subtraction are done left to right as well.  Some teachers write the mnemonic as PE(MD)(AS) or in some other sort of arrangement to emphasize this fact.  Others further extend the ridiculous mnemonic-for-a-mnemonic to say "Please Excuse My Dear Aunt Sally ... and Let her Rest".

But now why is PEMDAS unnecessary?

There is a way to bypass all of this mnemonic madness and teach the order of operations in a way that actually makes sense.  How?

By teaching why it works that way.

When I've asked fellow teachers why the order of operations is the way it is, those who have been able to answer often gave something to the effect of "well, we needed to decide on some kind of convention to deal with possible ambiguity, so we decided on what we have today."  This is half correct - getting rid of ambiguity certainly matters.  But it wasn't an arbitrary decision.  It's not as though we could just as easily have decided that addition and subtraction come first, then multiplication and division, and then exponents.  There's a very good reason that the operations fall naturally into the order that they do.

Think back to elementary school when all you knew about was addition and subtraction.  Eventually you ran into expressions that looked like this:\[3+3+3+3+3+3+3\]You didn't want to write so many 3's, so you were introduced to a shorthand to write this expression.  Since there were seven 3's being added together, you learned you could instead write:\[3\times7\]Thus you learned that multiplication is repeated addition.

Fast forward a few years, when you had multiplication and division under your belt.  Now you saw expressions like this instead:\[4\times4\times4\times4\times4\]Again, you were introduced to a shorthand to keep from having to write all those 4's.  Since there were five 4's being multiplied together, you wrote:\[4^5\]Thus you learned that exponentiation is repeated multiplication.

Now we come to an expression like this.\[5^2+4\times3\]What do we do first?  Well, remembering that exponentiation is repeated multiplication, we rewrite our exponent to say what it really means.\[5\times5+4\times3\]Next, remembering that multiplication is repeated addition, we rewrite our multiplication in even more basic terms.\[5+5+5+5+5+4+4+4\]Now the expression is a cinch to evaluate - anyone can add!  The value just comes out to 37.  But, more remarkably, what we've just done is uncovered the reason why the order of operations is as it is:

The most compact shorthand is evaluated first.

Once students understand this, they won't need to actually write out the additions explicitly - they'll just evaluate things in the order they should be handled.  But they'll know why to do it.
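
If you want to convince yourself (or your students) that the rewriting above really does preserve the value, it only takes a few lines in any language with these operators - here it is in Python:

```python
compact   = 5**2 + 4*3          # the original expression, using the shorthands
unwound_1 = 5*5 + 4*3           # exponentiation rewritten as repeated multiplication
unwound_2 = 5+5+5+5+5 + 4+4+4   # multiplication rewritten as repeated addition

print(compact, unwound_1, unwound_2)   # 37 37 37
```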

With this in mind, I propose a better way to teach order of operations.  Students need only remember two things.
  1. Pay attention to grouping.
  2. Shorthand comes first.
If we must use an acronym, instead of PEMDAS, let's use something like ... say ... GEMA.  (Grouping, Exponentiation, Multiplication, Addition.)   But if we do use GEMA or something similar, we shouldn't deprive students of the understanding that comes from knowing why the order of operations works.



* In other countries, variations on PEMDAS are used, such as BODMAS or BIDMAS.  The "B" stands for Brackets, another word for parentheses, and "O" and "I" stand for Orders and Indices, respectively, which are both alternate words for exponents.  Note that in BODMAS and BIDMAS, the "D" and "M" are interchanged - think about how much confusion that could cause for students!

** Thanks to Quintopia for pointing this out:  Parentheses in the context of computer science CAN in fact be thought of as operators which let the computer know to make a call to a subroutine.  It would be even harder to make this into an acronym ... unless it's a recursive acronym in which the G stands for GEMA, similar to how WINE stands for WINE Is Not an Emulator!

July 17, 2014

The Implications of Being Implicit

In my previous post, I presented three "tricky" (read: "inane") math problems from the Internet, the last of which was the following:

\[6\div 2(1+2)\]
I promised I'd come back to this one because it merits special discussion.  Now it's time to do exactly that, as this one (as well as its many variations) has piqued my ire every time I've seen it.

Most people who have had at least a basic prealgebra class tend to agree that the \(1+2\) in parentheses should be evaluated first, reducing the problem to \[6\div 2(3).\]
After that the battlefield gets bloody.

Do we do the multiplication first, or do we do the division first?

Some people say the multiplication needs to come first, valiantly shouting "PEMDAS" as their battle cry, arguing that since "M" comes before "D", the answer is \(1\).

Those who have more experience with order of operations and don't just rely on a silly (and wrong) mnemonic say that the division comes first, since multiplication and division are really the same under the hood - after all, division is equivalent to multiplication by the reciprocal - and by convention* are performed from left to right in the order they appear.  For these more seasoned warriors, the answer is "obviously" \(9\).

Those in the latter camp definitely are applying better mathematical reasoning than those in the former.  But do they have "the" correct answer?  If you're like me, though you know that multiplication and division are supposed to happen in order from left to right, there's just something about that \(2(3)\) that catches your eye, that makes you feel like for some reason it "should" come first.

And that's why we need to talk about implicit (or implied) multiplication.

When we first learn multiplication, we write it with a cross (\(\times\)).  But once variables like \(x\) start to come into play, we have to find new, less confusing ways to write multiplication.  So instead of writing \(a\times b\), we have a few options.
  • We can use a dot: \[a\cdot b\]
  • We can use parentheses: \[a(b)\]
  • Or as long as both factors aren't numerals, we can just concatenate (attach) them:  \[ab\]
There's actually a subtle difference between multiplication with a cross or a dot and multiplication with parentheses or concatenation.  The former two are called explicit multiplication, because we've explicitly indicated our operation using a symbol.  The latter two are called implicit multiplication - the multiplication is implied by the way the factors are written, with no explicit symbol at all.

Why does this matter?  As it turns out, in some conventions, implicit multiplication may actually take precedence over explicit multiplication - and therefore over division as well!  For example, the "Style and Notation Guide" for Physical Review, an American scientific journal, specifies that implicit multiplication should come before division when submitting manuscripts**.  Of course there are other conventions in which this is not the case, but the point to understand here is that multiple conventions do exist.

What's more, we can't even turn to our trusty calculators to tell us which way is "the" correct way, because different calculators may follow different conventions!
[Image: two Casio calculators]

[Image: two TI calculators]

(Try this on your own calculator and see which convention it uses!)

I'd like to posit one more reason that the \(2(3)\) may feel like it "should" come first.  One unfortunate side effect of trying to use existing punctuation when possible to represent mathematics is that certain symbols become overloaded - the same symbols can represent different things.  In this case, the notation \(2(3)\) for multiplication bears a very strong resemblance to the notation \(f(3)\) for function evaluation!  If the question were \[6\div f(1+2),\] even though we have no idea what function \(f\) is, there's no question that it would be evaluated before the division took place!  This may be a possible reason that the implicit-trumps-explicit convention is followed in some circles.

The inevitable conclusion is that there is no single correct answer - it all depends on what convention you're using.  At this point you may be ready to throw your hands up in despair.  But there is hope.  The best way to solve this kind of problem is ... you guessed it ... to use better notation in the first place!  (I mean who uses the obelus (\(\div\)) anymore past 5th grade anyway?)
  • If you mean for multiplication to be done first, then say so!  \[\frac{6}{2(1+2)}=1\]
  • If you mean for division to be done first, then say so!  \[\frac{6}{2}(1+2)=9\]
And if you're using a calculator, it never hurts to have too many parentheses.***
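
Incidentally, most programming languages settle the matter by simply refusing implicit multiplication - you're forced to write what you mean.  A quick Python illustration:

```python
print(6 / 2 * (1 + 2))    # 9.0 -- * and / are applied strictly left to right
print(6 / (2 * (1 + 2)))  # 1.0 -- extra parentheses make "multiply first" explicit
# 6 / 2(1 + 2) isn't multiplication at all here: Python reads 2(1 + 2) as an
# attempt to call the number 2 like a function, and raises a TypeError.
```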

It all comes back to the point of the previous article, which I will make explicit one more time:  Math isn't about symbols.  Math is about ideas.  If your symbols don't unambiguously convey your ideas, then use better symbols.



* The left-to-right convention is probably so because those who established the convention spoke the sorts of European languages for which \(6\div 2\) would be vocalized in that order - not always the case if you've ever heard how fractions are read out loud in Japanese or Korean!

** The guide may seem like it's claiming that all multiplication should come before division, but this is because they don't use explicit multiplication at all except in specific contexts such as indicating dimensions and performing operations on vectors.

*** If you have a newer calculator, you may have a nifty fraction template that you can use to clear up confusion even further!  If you're using a TI-84+, and you've got the newest operating system on your calculator, try hitting [ALPHA] and then [Y=].  If you see a little menu come up, choose "n/d" and voilà - you can now do fractions without getting lost in a sea of parentheses!

July 5, 2014

99% of People Get This Wrong!

If you've been around social media recently, you've no doubt seen the influx of math questions whose sole purpose is supposedly to test whether people know their order of operations.

Sometimes, it's a simple question of whether people know not to just blindly apply the operations in order from left to right (the results of which are often worrisome).



Sometimes, the spacing is modified to try to trick you into grouping things incorrectly.


But the worst offenders of all are the ones that combine single-line division, either with an obelus (÷) or a slash (/), with implied multiplication, i.e. multiplication without explicitly writing a dot (⋅) or cross (×) symbol.


Most of the time, these pictures are accompanied by a caption such as "99% of People Get This Wrong!" in an attempt to amp up the clickbait factor.  And you know what?  They're right.  But not for the reason you might think.

While the first question isn't too bad, the second and third questions commit a major faux pas in mathematics:  introducing ambiguity.  Even as a math teacher, I had to do a double-take at the second one because of the deceptive spacing.  And the ambiguity of the third example is so profound that I'll be dedicating a separate blog post to that problem specifically.

You can cite "order of operations" or "PEMDAS"* all you want - and many people do, with an air of intellectual superiority - but in doing so you're missing the point.  The point of establishing the order of operations is to reduce ambiguity.  And if there's still a possibility of ambiguity after that, well, that's what we have parentheses for!  By deliberately trying to deceive people, those who create and share these images aren't showing how clever they are - they're working directly against the very purpose of the convention they're supposedly testing people on.

I stress this to my students all the time:  Math isn't about symbols.  It's about ideas.  If the symbols on your page aren't clearly conveying those ideas to the reader, then you need to use better symbols.  (And words don't hurt either.)

So why do 99%** of people get these questions wrong?

Because they bother to answer them at all.



* Please don't cite PEMDAS.  It's a horrible mnemonic, it's not even correct, and it too will be the subject of a future post.
** 99% of statistics are made up on the spot.

