October 15, 2019

Orbit-Stabilizer Theorem

This is going to be a different sort of post from the ranting-about-math-education kind I usually make.

As many of you know, when I'm not doing math (yes, I do take a break sometimes), I write electronic music, some of which makes its way into rhythm games.  Most recently, I wrote a track called Orbit Stabilizer for the Korean dance game Pump It Up XX.


If you didn't figure it out from watching the background animation, there's a mathematical connection to the song.  This isn't the first time I've made references to mathematics in my music:
  • One of my songs on Dance Dance Revolution, called ΔMAX, changes by one BPM every beat.  It goes up to 573 BPM, which is significant because the number 573 can be read aloud in Japanese as "Konami" (5 = "ko", 7 = "na", 3 = "mi"), the company that makes DDR.
  • My track on J-Rave Nation is called π・ρ・maniac, which is really just a play on the word "pyromaniac" but uses the well-known constant \(\pi\).  (Also, there's a mission in Pump It Up Infinity that makes you do math problems in the middle of the song.)
  • Another song from Pump It Up Prime, called Annihilator Method, is a reference to a technique for solving differential equations.  (You have to admit that's a pretty hardcore-sounding name for something from a math class!)
This time, the reference is to something from a branch of mathematics called group theory, which is part of the deep abstract foundations for algebra.  It was pioneered simultaneously by French mathematician Évariste Galois (who shows up in the background of the Orbit Stabilizer video!) and Norwegian mathematician Niels Henrik Abel.  Both are worth reading up on ― heartbreaking stories of geniuses who died tragically young.

Group theory is, in essence, the study of symmetry.  It can be used to describe everything from the shuffling of cards and the positions of a Rubik's cube to the fundamental laws of physics that govern our universe.  I'm not going to go into an in-depth explanation of group theory ― there are plenty of YouTube videos that do that (including my own).  Rather I'm going to go just far enough to explain what the orbit-stabilizer theorem is.

Imagine you have a square sheet of paper in front of you.


You look away, and then I sneak in and potentially move the paper in some way.  When you look back, the square looks the same as it did before.  What could I have done to the paper?

The possible transformations I could have made are called symmetries of the square.  In this case, there are three overall types of such symmetries:
  • I could have left the paper alone entirely, doing nothing to it.  This is called the identity transformation.
  • I could have rotated the paper by \(90^\circ\), \(180^\circ\), or \(270^\circ\).
  • I could have flipped the paper over across one of its four axes of symmetry ― the horizontal axis, the vertical axis, or one of the two diagonal axes.
All in all, I have eight possible transformations that would take the square back to itself.  Notice that translations (shifts) aren't included ― if I'd moved the paper a bit to the left, you would have noticed (we assume)!  We're only considering transformations that would leave you completely unable to tell what I did.  To visualize the effects of these transformations better, we can label the square and watch what happens:



These transformations form what mathematicians call a group,  which means that they obey four fundamental laws (axioms):
  1. Combining any two of these transformations ― such as, say, rotating by \(90^\circ\) counterclockwise and then flipping across the horizontal axis ― also takes the square back to itself, and is equivalent to one of the original eight transformations ― in this case, flipping across the "backslash" diagonal axis.  We call this the closure law.
  2. Combining transformations is associative, which means that if I have any transformations \(a\), \(b\), and \(c\), then \((ab)c\) ― that is, doing \(c\) first and then doing whatever \((ab)\) is equivalent to ― gives the same result as \(a(bc)\) ― that is, doing whatever \((bc)\) is equivalent to first and then doing \(a\) afterward.  This is just like how you can move parentheses around when doing addition and multiplication.
  3. The set of transformations includes an identity transformation, which essentially does nothing.  (I say "essentially" because you could also think of a \(360^\circ\) rotation as an identity transformation since it has no overall effect.)
  4. Every transformation has an inverse transformation which "undoes" it.  For example, a rotation of \(90^\circ\) can be undone by a rotation of \(270^\circ\), or a reflection can be undone by doing that same reflection again.
We usually use the letter \(G\) to talk about a group in general.  We'll also use the notation \(|G|\) to talk about the "size" of a group ― so in this case, we have \(|G|=8\).
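
If you like to experiment, here's a quick computational check of everything so far: a minimal Python sketch (the corner encoding is my own, not anything from the video) that represents each symmetry as a permutation of the four corner positions and verifies the closure, identity, and inverse laws.

```python
from itertools import product

# Corner positions 0..3, read counterclockwise.  Each symmetry is a tuple p,
# where the corner at position i is sent to position p[i].
identity = (0, 1, 2, 3)
r90 = (1, 2, 3, 0)                     # rotate 90 degrees counterclockwise

def compose(p, q):
    """Do q first, then p."""
    return tuple(p[q[i]] for i in range(4))

r180 = compose(r90, r90)
r270 = compose(r180, r90)
flip_h = (3, 2, 1, 0)                  # flip across the horizontal axis
G = [identity, r90, r180, r270] + [compose(r, flip_h)
                                   for r in (identity, r90, r180, r270)]

assert len(set(G)) == 8                                           # |G| = 8
assert all(compose(p, q) in G for p, q in product(G, G))          # closure
assert all(any(compose(p, q) == identity for q in G) for p in G)  # inverses
```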

So we know that these transformations take the square back to itself, but what if we ask what happens to just a single point inside of the square?

Suppose we let \(x\) mark the midpoint of segment \(AB\).  Then we can see where \(x\) lands after each of the transformations:

The set of possible places where \(x\) can land using our transformations is called the orbit of \(x\) under our group, and we denote it as \(\text{orb}(x)\).  Since there are four different places that \(x\) can land ― it can end up on any one of the four midpoints of one of the sides ― we say that \(|\text{orb}(x)|=4\).

Looking a little closer, we can find that there are two particular transformations that do something special.  Focus on the identity transformation and the vertical axis flip.  What happens to \(x\) under these two transformations?  Well, it stays where it is!  The other six transformations, on the other hand, all move \(x\) to a different point of the square.  These two special transformations form what's called the stabilizer of \(x\) under our group, and we denote it as \(\text{stab}(x)\).  The stabilizer forms what's called a subgroup of our original group, because if you limit yourself to only doing those things, you still satisfy the same basic group laws.  Again, we can measure the "size" of the stabilizer, which in this case is \(|\text{stab}(x)|=2\).
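
Here's how that computation might look in code, as a small sketch (the coordinates and names are my own choices, with the square centered at the origin and \(x\) at the midpoint of the bottom side):

```python
# The eight symmetries as coordinate maps, for a square centered at the origin
# with corners at (1, 1), (-1, 1), (-1, -1), (1, -1).
rotations = [
    lambda x, y: (x, y),        # identity
    lambda x, y: (-y, x),       # rotate 90 degrees
    lambda x, y: (-x, -y),      # rotate 180 degrees
    lambda x, y: (y, -x),       # rotate 270 degrees
]
reflections = [
    lambda x, y: (x, -y),       # flip across the horizontal axis
    lambda x, y: (-x, y),       # flip across the vertical axis
    lambda x, y: (y, x),        # flip across one diagonal
    lambda x, y: (-y, -x),      # flip across the other diagonal
]
G = rotations + reflections

def orbit(p):
    return {g(*p) for g in G}

def stabilizer(p):
    return [g for g in G if g(*p) == p]

x = (0, -1)                     # the midpoint of one side
print(len(orbit(x)))            # 4: the four side midpoints
print(len(stabilizer(x)))       # 2: the identity and the vertical-axis flip
```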

Now, look at the three "sizes" we've computed.
\[
\begin{align*}
|G|&=8\\
|\text{orb}(x)|&=4\\
|\text{stab}(x)|&=2
\end{align*}
\]
Those three numbers (\(8\), \(4\), and \(2\)) are just begging to be related to each other!

\[|\text{orb}(x)|\cdot|\text{stab}(x)|=|G|\]

Is this a coincidence?  Well, you can find out for yourself by picking different places for \(x\) (like, say, a corner point, or the very center, or just some random point in the square), then calculating the sizes of its orbit and its stabilizer.  Or you can even try it for a different shape, like a triangle or a rhombus or a pentagon, which will have a different size group.  You'll notice that you always get that same relationship!
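
If you'd rather let the computer do the experimenting, the sketch above extends directly (the sample points here are my own choices):

```python
# Continuing the earlier sketch: |orb(x)| * |stab(x)| should always equal |G|.
for p in [(0, -1),         # midpoint of a side:  4 * 2 = 8
          (1, -1),         # corner point:        4 * 2 = 8
          (0, 0),          # center:              1 * 8 = 8
          (0.25, -0.5)]:   # generic point:       8 * 1 = 8
    assert len(orbit(p)) * len(stabilizer(p)) == len(G)
```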

That relationship is called the Orbit-Stabilizer Theorem.

...well, almost.

We actually run into a bit of a problem if our group is infinite.  In that case, multiplying and dividing with infinite quantities gets hairy, making it difficult to show that the relationship is meaningful.  To deal with that case, we usually write the Orbit-Stabilizer Theorem in a slightly different way:

\[|\text{orb}(x)|=[G : \text{stab}(x)]\]

The quantity \([G : \text{stab}(x)]\) is called the index of the stabilizer as a subgroup of \(G\).  Essentially this stands for how many copies of \(\text{stab}(x)\) can fit inside \(G\).  (Technically it's how many "cosets" the stabilizer has, if you want to look that up.)  This is the version of the equation that shows up in the video.
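
To make the coset idea concrete with our square (the element names here are my own shorthand): write \(r_{90}\), \(r_{180}\), and \(r_{270}\) for the rotations, \(e\) for the identity, and \(v\) for the vertical-axis flip, so that \(\text{stab}(x)=\{e,v\}\).  Then the cosets are
\[
\begin{align*}
e\,\text{stab}(x)&=\{e,\,v\}\\
r_{90}\,\text{stab}(x)&=\{r_{90},\,r_{90}v\}\\
r_{180}\,\text{stab}(x)&=\{r_{180},\,r_{180}v\}\\
r_{270}\,\text{stab}(x)&=\{r_{270},\,r_{270}v\}
\end{align*}
\]
Each coset consists of exactly the transformations that send \(x\) to one particular midpoint, so the four cosets match up perfectly with the four points of the orbit: \([G:\text{stab}(x)]=4=|\text{orb}(x)|\).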

Okay, so that explains what the Orbit-Stabilizer Theorem is.  But, you might be wondering, is it also useful?

Well, one famous result in combinatorics (the study of counting) that makes great use of the Orbit-Stabilizer Theorem is Burnside's Lemma.  Burnside's Lemma can be used to solve problems like counting the number of possible ways that the sides of a cube can be painted with three different colors.  (Painting the top side blue and all the others white is considered the same as, say, painting the front side blue and all the others white, since you can just rotate one to get the other.)  Looking for something more "real-world"?  Burnside's Lemma has applications in chemistry ― like if you need to find out how many different ways certain groups can be placed around a central carbon atom ― as well as other areas such as electronic circuits and even (to bring things full circle) music theory!
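
In case you want to see Burnside's Lemma in action, here's a brute-force Python sketch for the cube-painting problem (the face encoding and generator choices are mine): it builds the 24 rotations of the cube as permutations of the six faces, then averages the number of colorings each rotation fixes.

```python
# Faces indexed 0..5: up, down, front, back, left, right.  A rotation is a
# tuple p where face i is carried to face p[i].
spin = (0, 1, 5, 4, 2, 3)   # quarter turn about the up-down axis
roll = (2, 3, 1, 0, 4, 5)   # quarter turn about the left-right axis

def compose(p, q):
    """Do q first, then p."""
    return tuple(p[q[i]] for i in range(6))

# Generate the whole rotation group from the two quarter turns.
group = {(0, 1, 2, 3, 4, 5)}
frontier = set(group)
while frontier:
    frontier = {compose(g, s) for g in frontier for s in (spin, roll)} - group
    group |= frontier
assert len(group) == 24

def cycle_count(p):
    """Number of cycles in the permutation p."""
    seen, cycles = set(), 0
    for i in range(6):
        if i not in seen:
            cycles += 1
            while i not in seen:
                seen.add(i)
                i = p[i]
    return cycles

# Burnside: a rotation fixes colors**(number of face cycles) colorings, and
# the number of distinct colorings is the average over the whole group.
colors = 3
print(sum(colors ** cycle_count(p) for p in group) // len(group))   # 57
```

Running this gives \(57\) distinct ways to paint the cube with three colors.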

Hopefully this gives you an idea of why the Orbit-Stabilizer Theorem is interesting, and why I would name a song after it (besides just being a cool-sounding name).  Feel free to let me know if you have any questions.  And of course, if you play Pump It Up, give the track a shot!

P.S.  If you want to know more about group theory, I would highly recommend this YouTube series by Dr. Matt Salomone at Bridgewater State University.  He has a knack for making these sorts of abstract algebraic topics very accessible with excellent examples and intuitive explanations!

August 1, 2019

When Does Nothing Mean Something?


I thought I was done writing about this topic, but it just keeps coming back.  The internet just cannot seem to leave this sort of problem alone:
I don't know what it is about expressions of the form \(a\div b(c+d)\) that fascinates us as a species, but fascinate it does.  I've written about this before (as well as why "PEMDAS" is terrible), but the more I've thought about it, the more sympathy I've found with those in the minority of the debate, and as a result my position has evolved somewhat.

So I'm going to go out on a limb, and claim that the answer should be \(1\).

Before you walk away shaking your head and saying "he's lost it, he doesn't know what he's talking about", let me assure you that I'm obviously not denying the left-to-right convention for how to do explicit multiplication and division.  Nobody's arguing that.*  Rather, there's something much more subtle going on here.

What we may be seeing here is evidence of a mathematical "language shift".

It's easy to forget that mathematics did not always look as it does today, but has arrived at its current form through very human processes of invention and revision.  There's an excellent page by Jeff Miller that catalogues the earliest recorded uses of symbols like the operations and the equals sign -- symbols that seem timeless, symbols we take for granted every day.

People also often don't realize that this process of invention and revision still happens to this day.  The modern notation for the floor function is a great example that was only developed within the last century.  Even today on the internet, you occasionally see discussions in which people debate on how mathematical notation can be improved.  (I'm still holding out hope that my alternative notation for logarithms will one day catch on.)

Of particular note is the evolution of grouping symbols.  We usually think only of parentheses (as well as their variations like square brackets and curly braces) as denoting grouping, but an even earlier symbol used to group expressions was the vinculum -- a horizontal bar found over or under an expression.  Consider the following expression: \[3-(1+2)\] If we wrote the same expression with a vinculum, it would look like this: \[3-\overline{1+2}\] Vincula can even be stacked: \[13-\overline{\overline{1+2}\cdot 3}=4\] This may seem like a quaint way of grouping, but it does in fact survive in our notation for fractions and radicals!  You can even see both uses in the quadratic formula: \[x=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}\]

Getting back to the original problem, what I think we're seeing is evidence that concatenation -- placing symbols next to each other with no sort of explicit symbol -- has become another way to represent grouping.

"But wait", you might say, "concatenation is used to represent multiplication, not grouping!"  That's certainly true in many cases, for example in how we write polynomials.  However, there are a few places in mathematics that provide evidence that there's more to it than that.

First of all, as a beautifully-written Twitter thread by EnchantressOfNumbers (@EoN_tweets) points out, we use concatenation to show a special importance of grouping when we write out certain trigonometric expressions without putting their arguments in parentheses.  Consider the following identity:
\[\sin 4u=2\sin 2u\cos 2u\] When we write such an equation, we're saying that not only do \(4u\) and \(2u\) represent multiplications, but that this grouping is so tight that they constitute the entire arguments of the sine and cosine functions.  In fact, the space between \(\sin 2u\) and \(\cos 2u\) can also be seen as a somewhat looser form of concatenation.  Then again, so can the space between \(\sin\) and \(u\), which represents a different thing -- the connection of a function to its argument.  Perhaps this is why the popular (and amazing) online graphing calculator Desmos is only so permissive when it comes to parsing concatenation:

In contrast, here is where we do draw the line: \[\sin x+y\] We always interpret this as \((\sin(x))+y\), never \(\sin(x+y)\).  To drive home just how much stronger implicit multiplication feels to us than explicit multiplication, just take a look at the following expression: \[\sin x\cdot y\] Does this mean \((\sin(x))\cdot y\) or \(\sin(x\cdot y)\)?  If that expression makes you writhe uncomfortably, while \(\sin xy\) would seem perfectly fine, then you might see what I'm getting at.

An even more curious case is mixed numbers.  When writing mixed numbers, concatenation actually stands for addition, not multiplication. \[3\tfrac{1}{2}=3+\tfrac{1}{2}\] In fact, concatenation actually makes addition come before multiplication when we multiply mixed numbers! \[3\tfrac{1}{2}\cdot 5\tfrac{5}{6}=(3+\tfrac{1}{2})\cdot(5+\tfrac{5}{6})=20\tfrac{5}{12}\]

Now, you may feel that this example shows how mixed numbers are an inelegance in mathematical notation (and I would agree with you).  Even so, I argue that this is evidence that we fundamentally view concatenation as a way to represent grouping.  It just so happens that, since multiplication takes precedence over addition anyway in the absence of other grouping symbols, we use concatenation when we write it.  This all stems from a sort of "laziness" in how we write things -- laying out precedence rules allows us to avoid writing parentheses, and once we've established those precedence rules, we don't even need to write out the multiplication at all.

So how does the internet's favorite math problem fit into all this?

The most striking feature of the expression \(8\div 2(2+2)\) is that it's written all in one line.

Mathematical typesetting is difficult.  LaTeX is powerful, but has a steep learning curve, though various other editors have made things a bit easier, such as Microsoft Word's Equation Editor (which has improved a great deal since I first used it!).  Calculators have also recognized this difficulty, which is why TI calculators now have MathPrint templates (though their entry is quite clunky compared to Desmos's "as-you-type" formatting via MathQuill).

Even so, all of these input methods exist in very specific applications.  What about when you're writing an email?  Or sending a text?  Or a Facebook message?  (If you're wondering "who the heck writes about math in a Facebook message", the answer at least includes "students who are trying to study for a test".)  The evolution of these sorts of media has led to the importance of one-line representations of mathematics with easily-accessible symbols.  When you don't have the ability (or the time) to neatly typeset a fraction, you're going to find a way to use the tools you've got.  And that's even more important as we realize that everybody can (and should!) engage with mathematics, not just mathematicians or educators.

So that might explain why a physics student might type "hbar = h / 2pi", and others would know that this clearly means \(\hbar=\dfrac{h}{2\pi}\) rather than \(\hbar=\dfrac{h}{2}\pi\).  Remember, mathematics is not just about answer-getting.  It's about communicating ideas.  And when the medium of communication limits how those ideas can be represented, the method of communication often changes to accommodate it.

What the infamous problem points out is that while almost nobody has laid out any explicit rules for how to deal with concatenation, we seem to have developed some implicit ones, which we use without thinking about them.  We just never had to deal with them until recently, as more "everyday" people communicate mathematics on more "everyday" media.
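
If you want to see just how small the difference between the two conventions is, here's a toy expression evaluator (entirely my own sketch, with a made-up flag name) in which a single switch decides whether concatenation binds tighter than explicit division:

```python
import re

def evaluate(s, implicit_binds_tighter):
    toks = re.findall(r"\d+|[()+\-*/]", s)
    pos = 0

    def peek():
        return toks[pos] if pos < len(toks) else None

    def atom():
        nonlocal pos
        t = toks[pos]; pos += 1
        if t == "(":
            v = expr()
            pos += 1            # skip the closing ")"
            return v
        return int(t)

    def tight():
        # Juxtaposed atoms, like the 2(2+2) in 8/2(2+2), multiply immediately.
        v = atom()
        while peek() == "(" or (peek() or "").isdigit():
            v *= atom()
        return v

    def term():
        nonlocal pos
        unit = tight if implicit_binds_tighter else atom
        v = unit()
        while True:
            t = peek()
            if t in ("*", "/"):
                pos += 1
                v = v * unit() if t == "*" else v / unit()
            elif t == "(" or (t or "").isdigit():
                v *= unit()     # implicit multiplication, same level as * and /
            else:
                return v

    def expr():
        nonlocal pos
        v = term()
        while peek() in ("+", "-"):
            t = toks[pos]; pos += 1
            v = v + term() if t == "+" else v - term()
        return v

    return expr()

print(evaluate("8/2(2+2)", implicit_binds_tighter=False))   # 16.0
print(evaluate("8/2(2+2)", implicit_binds_tighter=True))    # 1.0
```

Same tokens, same left-to-right rules for explicit operators; the only thing that changes is where concatenation sits in the hierarchy.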

Perhaps it's time that we address this convention explicitly and admit that concatenation really has become a way to represent grouping, just like parentheses or the vinculum.  This is akin to taking a more descriptivist, rather than prescriptivist, approach to language: all we would be doing is recognizing that this is already how we do things everywhere else.

Of course, this would throw a wrench in PEMDAS, but that just means we'd need to actually talk about the mathematics behind it rather than memorizing a silly mnemonic.  After all, as inane as these internet math problems can be, they've shown that (whether they admit it or not) people really do want to get to the bottom of mathematics, to truly understand it.

I'd say that's a good thing.


* If your argument for why the answer is \(16\) starts with "Well, \(2(2+2)\) means \(2\cdot(2+2)\), so...", then you have missed the point entirely.

January 12, 2019

Who Says You Can't Do That? --- Trig Identities


Ahh, trig identities...  a rite of passage for any precalculus student.

This is a huge stumbling block for many students, because up until this point, many have been perfectly successful (or at least have gotten by) in their classes by learning canned formulas and procedures and then doing a bunch of exercises that just change a \(2\) to a \(3\) here and a plus to a minus there.  Now, all of a sudden, there's no set way of going about things.  No "step 1 do this, step 2 do that".  Now they have to rely on their intuition and "play" with an identity until they prove that it's correct.

And to make matters worse, many textbooks --- and, as a result, many teachers --- make this subject arbitrarily and artificially harder for the students.

They insist that students are not allowed to work on both sides of the equation, but instead must specifically start at one end and work their way to the other.  I myself once subscribed to this "rule", because it's how I'd always been taught, and I always fed students the old line of "you can't assume the thing you're trying to prove because that's a logical fallacy".

Then one of my Honors Precalculus students called me on it.

He asked me to come up with an example of a trig non-identity where adding the same thing to both sides would lead to a false proof that the identity was correct.  After some thought, I realized that not only could I not think of one, but that mathematically, there's no reason one should exist.

To begin with, one valid way to prove an identity is to work with each side of the equation separately and show that they are both equal to the same thing.  For example, suppose you want to verify the following identity:

\[\dfrac{\cot^2{\theta}}{1+\csc{\theta}}=\dfrac{1-\sin{\theta}}{\sin{\theta}}\]
Trying to work from one side to the other would be a nightmare, but it's much simpler to show that each side is equal to \(\csc{\theta}-1\).  This in fact demonstrates one of the oldest axioms in mathematics, as written by Euclid:  "things which are equal to the same thing are equal to each other."
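
For the curious, here's one way those two separate reductions might go (the algebra is mine, not from any particular textbook):
\[
\begin{align*}
\dfrac{\cot^2{\theta}}{1+\csc{\theta}} &= \dfrac{\csc^2{\theta}-1}{1+\csc{\theta}} = \dfrac{(\csc{\theta}+1)(\csc{\theta}-1)}{1+\csc{\theta}} = \csc{\theta}-1\\
\dfrac{1-\sin{\theta}}{\sin{\theta}} &= \dfrac{1}{\sin{\theta}}-1 = \csc{\theta}-1
\end{align*}
\]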

But what about doing the same thing to both sides of an equation?

There are two important points to realize about what's going on behind the scenes here.

The first is that if your "thing you do to both sides" is a reversible step --- that is, if you're applying a one-to-one function to both sides of an equation --- then it's perfectly valid to use that as part of your proof because it establishes an if-and-only-if relationship.  If that function is not one-to-one, all bets are off.  You can't prove that \(2=-2\) by squaring both sides to get \(4=4\), because the function \(x\mapsto x^2\) maps multiple inputs to the same output.

It baffles me that most Precalculus textbooks mention one-to-one functions in the first chapter or two, yet completely fail to understand how this applies to solving equations.*  A notable exception is UCSMP's Precalculus and Discrete Mathematics book, which establishes the following on p. 169:


Reversible Steps Theorem
Let \(f\), \(g\), and \(h\) be functions.  Then, for all \(x\) in the intersection of the domains of functions \(f\), \(g\), and \(h\),
  1. \(f(x)=g(x) \Leftrightarrow f(x)+h(x)=g(x)+h(x)\)
  2. \(f(x)=g(x) \Leftrightarrow f(x)\cdot h(x)=g(x)\cdot h(x)\) [We'll actually come back to this one in a bit -- there's a slight issue with it.]
  3. If \(h\) is 1-1, then for all \(x\) in the domains of \(f\) and \(g\) for which \(f(x)\) and \(g(x)\) are in the domain of \(h\), \[f(x)=g(x) \Leftrightarrow h(f(x))=h(g(x)).\]

Later on p. 318, the book says:

"...there is no new or special logic for proving identities.  Identities are equations and all the logic that was discussed with equation-solving applies to them."

Yes, that whole "math isn't just a bunch of arbitrary rules" thing applies here too.

The second important point, which you may have noticed while looking at the statement of the Reversible Steps Theorem, is that the implied domain of an identity matters a great deal.  When you're proving a trig identity, you are trying to establish that it is true for all inputs that are in the domain of both sides.  Most textbooks at least pay lip service to this fact, even though they don't follow it to its logical conclusion.

To illustrate why domain is so important, consider this example:

\[\dfrac{\cos{x}}{1-\sin{x}} = \dfrac{1+\sin{x}}{\cos{x}}\]
To verify this identity, I'm going to do something that may give you a visceral reaction:  I'm going to "cross-multiply".  Or, more properly, I'm going to multiply both sides by the expression \((1 - \sin x)\cos x\).  I claim that this is a perfectly valid step to take, and what's more, it makes the rest of the proof downright easy by reducing to everyone's favorite Pythagorean identity:

\[
\begin{align*}
(\cos{x})(\cos{x}) &= (1+\sin{x})(1-\sin{x})\\
\cos^2{x} &= 1-\sin^2{x}\\
\sin^2{x} + \cos^2{x} &= 1 \quad\blacksquare
\end{align*}
\]
"But wait," you ask, "what if \(x=\pi/2\)?  Then you're multiplying both sides by zero, and that's certainly not reversible!"

True.  But if \(x=\pi/2\), then the denominators of both sides of the equation are zero, so the identity isn't even true in the first place.  For any value of \(x\) that does not yield a zero in either denominator, though, multiplying both sides of an equation by that value is a reversible operation and therefore completely valid.

Now, this isn't to say that multiplying both sides of an equation by a function can't lead to problems --- for example, if \(h(x)=0\) (as in the zero function), then \(f(x)\cdot h(x)=g(x)\cdot h(x)\) no matter what.  This can even lead to problems in more subtle cases: suppose \(f\) and \(g\) are equal everywhere but a single point \(a\); for example, perhaps \(f(a)=1\) and \(g(a)=2\).  If it just so happens that \(h(a)=0\), then \(f\cdot h\) and \(g\cdot h\) will be equal as functions, even though \(f\) and \(g\) are not themselves equal.

The real issue here can be explained via a quick foray into higher mathematics.  Functions form what's called a ring -- basically meaning you can add, subtract, and multiply them, and these operations have all the nice properties we'd expect.  But preserving that if-and-only-if relationship when multiplying both sides of an equation by a function requires a special kind of ring called an integral domain, in which it's impossible to multiply two nonzero functions together and get the zero function.

Unfortunately, functions in general don't form an integral domain --- not even continuous functions, or differentiable functions, or even infinitely differentiable functions do!  But if we move up to the complex numbers (where everything works better!), then the set of analytic functions --- functions that can be written as power series (infinite polynomials) on an open domain --- is an integral domain.  And most of the functions that precalculus students encounter generally turn out to be analytic**:  polynomial, rational, exponential, logarithmic, trigonometric, and even inverse trigonometric.  This means that when proving trigonometric identities, multiplying both sides by the same function is a "safe" operation.
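
If you'd like a concrete instance of that failure (my own choice of functions), take
\[f(x)=\max(x,0)\qquad\text{and}\qquad g(x)=\max(-x,0).\]
Both are continuous, and neither is the zero function, yet \(f(x)\,g(x)=0\) for every \(x\), since at each point at least one of the two factors vanishes.  So the ring of continuous functions has zero divisors, which is exactly what an integral domain forbids.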

So in sum, when proving trigonometric identities, as long as you're careful to only use reversible steps (what a great time to spiral back to one-to-one functions, by the way!), you are welcome to apply all the same algebraic operations that you would when solving equations, and the chain of equalities you establish will prove the identity.  Even "cross-multiplying" is fair game, because any input that would make the denominator zero would invalidate the identity anyway.***  Since trigonometric functions are generally "safe" (analytic), we're guaranteed to never run into any issues.

Now, none of this is to say that there isn't intrinsic merit to learning how to prove an identity by working from one side to the other.  Algebraic "tricks" --- like multiplying by an expression over itself (\(1\) in disguise!) to conveniently simplify certain expressions --- are important tools for students to have under their belts, especially when they encounter limits and integrals next year in calculus.

What we need to do, then, is encourage our students to come up with multiple solution methods, and perhaps present working from one side to the other as an added challenge to build their mathematical muscles.  And if students are going to work on both sides of an equation at once, then we need to hold them to high standards and make them explicitly state in their proofs that all the steps they have taken are reversible!  If they're unsure on whether or not a step is valid, have them investigate it until they're convinced one way or the other.

If we're artificially limiting our students by claiming that only one solution method is correct, we're sending the wrong message about what mathematics really is.  Instead, celebrating and cultivating our students' creativity is the best way to prepare them for problem-solving in the real world.

--

* Rather, I would say it baffles me, except that I'm quite used to seeing textbooks treat mathematical topics as disparate and unconnected, like how a number of Precalculus books teach vectors in one chapter and matrices in the next, yet never once mention how beautifully they are tied together via transformations.

** Except perhaps at a few points.  The more correct term for rational functions and certain trigonometric functions is actually meromorphic, which describes functions that are analytic everywhere except a discrete set of points, called the poles of the function, where the function blows up to infinity because of division by zero.

*** If you extend the domains of the trig functions to allow for division by zero, you do need to be more careful.  Not because there's anything intrinsically wrong with dividing by zero, but because \(0\cdot\infty\) is an indeterminate expression and causes problems that algebra simply can't handle.

January 2, 2019

When Math is in Jeopardy!

(Forgive the somewhat dramatic title, but it was too good to pass up.)

I absolutely love game shows.  It's always fun to imagine you're there on stage ... or at least to yell at the people on TV when they don't know a question that was just so easy!

One of my favorite game shows, of course, is Jeopardy --- it has an air of intellectualism about it, with such an eclectic collection of topics.  I don't even mind that I'm admittedly pretty terrible at it... I always find myself thinking, "Man, if they just had some math questions, I could knock those out of the park!"

Well, a couple of days ago, as we were getting ready to celebrate the advent of 2019, I was elated to see that there was in fact a math question!  (Or math answer, rather.  You know what, I'm just going to call them "clue" and "response" to avoid confusion.)

The category was "Ends in 'ITE'", and the clue was as follows:
"In math, when the number of elements in a set is countable, it's this type of set."
Thanks to @GoGoGadgetGabe on Twitter for the screenshot!
As I was trying to think of what response would end in those three letters, a contestant buzzed in and said:
"What is 'finite'?"
Alex Trebek notified the contestant that they were correct.

Meanwhile, I was speechless because I knew they weren't.

Well, at least not completely correct.  See, in math, words have very specific definitions in order to describe very specific phenomena.  In this case, the word "countable" is used in set theory to describe a set --- a collection of things --- that can be put into one-to-one correspondence with a subset of the natural numbers, \(\mathbb{N}=\{0,1,2,3,\ldots\}\).*  So, while finite sets are indeed countable, there are also infinite countable sets --- the integers, the even numbers, and the rational numbers are all well-known examples.  (That last one still amazes me --- in some sense, there are exactly as many fractions as there are whole numbers, even though it seems like the former should outnumber the latter!)
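
To see such a one-to-one correspondence in action, here's a short Python sketch of one classic enumeration of the positive rationals, the Calkin-Wilf sequence (the code itself is my own):

```python
from fractions import Fraction
from math import floor

def calkin_wilf():
    """Yield every positive rational exactly once, paired off with 1, 2, 3, ..."""
    q = Fraction(1)
    while True:
        yield q
        q = 1 / (2 * floor(q) - q + 1)   # the Calkin-Wilf recurrence

gen = calkin_wilf()
print([str(next(gen)) for _ in range(7)])
# ['1', '1/2', '2', '1/3', '3/2', '2/3', '3']
```

Every positive fraction shows up exactly once, so the list pairs the rationals off with the counting numbers.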

That means that the statement "When the number of elements in a set is countable, it's a finite set" is actually incorrect.

Naturally, I took to the internet to voice my displeasure with how my favorite subject was represented on national television.  When talking with a friend of mine (who knows more about game shows than I ever will), I learned that there had been two other cases in the past couple of months where math-centric Jeopardy clues had issues.
  • In one case, the clue was, "If \(x^2+2=18\), then \(x\) equals this."  The response that was judged to be correct was "What is '\(4\)'?".  Any algebra teacher reading this right now is shaking their head, because that answer won't get you full credit on any test.  There are two possible values of \(x\): \(4\) and \(-4\).
  • In another case, the response to the clue was supposed to be "What is the commutative property?".  Nobody got it correct (which makes me sad in itself), but when Alex Trebek read the correct response out loud, he said it as "COM-myoo-TAY-tive" instead of "com-MYOO-tuh-TIVE".
Now, in isolation, any one of these would be only a minor annoyance.  After all, there are plenty of clues on Jeopardy that have multiple possible correct responses, and contestants aren't expected to give all correct responses, but rather just one.

But the fact that questions about mathematics appear so infrequently on the show compared to topics such as history, combined with the fact that these kinds of details are not attended to, seems to send a message that mathematics is considered to be not as important, not worth researching fully in the spirit of the subject.

We already live in a culture in which any time I tell somebody I teach math, the inevitable response is "Oh, haha, I was never any good at math."  Somehow people seem proud to admit and even proclaim this.  I'm willing to wager (maybe even make this a true Daily Double) that those people would be much more reluctant to say something like "Oh, haha, I was never any good at reading."  There's a pervasive attitude that mathematics is a torturous and frivolous subject, devoid of the awe-inspiring beauty and sheer fun that those who embrace the subject know it to have.

With that said, I'd like to challenge the writers of Jeopardy --- and perhaps other game shows as well --- to make a conscious effort not only to ask more questions about mathematics, but to take care to do them well, perhaps even consulting one or more mathematicians to make sure the precision and nuance of the subject are properly represented.  (I know math teachers who have come up with versions of game shows for their classes with only mathematically-oriented questions... the students love it!)

It doesn't have to be something like "\(1\times 2\times 3\times 4\times 5\)"** either.  There's such a rich amount of material to pull from --- why not ask questions about, say, fractals?  Or famous mathematicians?  Or even unsolved problems (and those who eventually solved them)?  I would be giddy to see something like,
"Shot and killed in a duel when he was only twenty, this mathematician spent the last night of his life writing down what he'd discovered about quintic equations."
Doesn't that make you want to find out what the story was, why he felt fifth-degree equations were so important that he just had to share it, knowing he would soon die?  Mathematics is full of stories like this, and perhaps letting people know those stories exist, that there's more to math than doing arithmetic problems, might change how people view the subject.

Of course, if winning prize money is what you like about game shows, there's always the million-dollar Millennium Prize Problems...

--

* You might be thinking, "But zero isn't a natural number!"  As it turns out, there's no real consensus on whether zero is considered a natural number.  Some mathematicians choose to include it, while others don't.  To some extent it depends on your field of study --- for example, number theorists may be more likely to exclude zero because it doesn't play very nicely with things like prime factorizations, while computer scientists are used to counting from zero instead of one.  Peano actually started off his axiomatization of the natural numbers with one as the starting point, but then changed his mind later and started with zero!

** This was actually a clue on a Kid's Jeopardy episode, under the category "Non-Common Core Math".  That will eventually be the impetus for another blog post in the future on the way we currently view mathematics teaching.

October 21, 2017

Discreet Discrete Calculus

Over the past week, I went to the Georgia Mathematics Conference (GMC) at Rock Eagle, held by the Georgia Council of Teachers of Mathematics (GCTM).  The GMC is one of the events I look forward to most every year --- tons of math educators and advocates sharing lessons, techniques, and ideas about how to best teach math to students from kindergarten through college.  I always enjoy sharing my own perspectives as well (even when they do get a bit bizarre!).

This time, I got to share the results of a lesson that I guinea-pigged on my Honors Precalculus class last year, where they explored the relationships between polynomial sequences, common differences, and partial sums.  The presentation from the GMC uses the techniques we looked at to develop the formula for the sum of the first \(n\) perfect squares:

\[1^2+2^2+3^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}\]
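
As a quick taste of the common-differences idea (this check is my own, not part of the presentation): the partial sums of the squares form a cubic sequence, so their third differences are constant, and the closed form above can be verified directly.

```python
def diffs(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

partial_sums = [sum(k * k for k in range(1, m + 1)) for m in range(1, 11)]
print(partial_sums)                       # [1, 5, 14, 30, 55, ...]
print(diffs(partial_sums))                # the squares: [4, 9, 16, 25, ...]
print(diffs(diffs(diffs(partial_sums))))  # constant: [2, 2, 2, 2, 2, 2, 2]

# And the closed form checks out:
assert all(sum(k * k for k in range(1, m + 1)) == m * (m + 1) * (2 * m + 1) // 6
           for m in range(1, 101))
```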

At the GMC, we did go a bit further than my class did --- they didn't do the full development of discrete calculus --- but some knowledge of where the ideas lead is never a bad thing, and a different class at a different school may even be able to go further!

Here is the PowerPoint from the presentation.  If you find it useful, or have any questions, please don't hesitate to leave a comment!

(I recommend downloading the PPTX file and viewing the slide show in PowerPoint, rather than using Google's online viewer.)

September 15, 2017

I'm Done with Two Column Proofs.

Wow.

It really has been a while since I've posted here, hasn't it?

But I suppose having a mini-crisis in my geometry class that has forced me to reject our textbook's philosophy on what "proofs" should be is a good enough reason to resurrect this blog.  I finally have the time this year, and I feel the need to share my probably overly opinionated beliefs about math education.

This is the first year I have tried to integrate proofs into our school's geometry curriculum across the board.  (In the past, proofs were only discussed in honors classes, but I felt somehow that reasoning through how you knew something was true was important for everyone.)  I was trying to justify the two-column format, as much as I hate it, as a way to "scaffold" student thinking --- yay educational buzzwords!  But when I actually did it, it got exactly the reaction that I knew it would --- it just served to overly obfuscate the material and utterly drain the life out of it.  I realized I should have stuck to my guns and listened to the likes of Paul Lockhart and Ben Orlin.

So, after some reflection and course correction, here's the email I just sent my students.

---
Hello mathematicians. I have a rather bizarre request. 
Don't do your geometry homework this weekend. 
Yes, you read that right. Don't. 
Let me explain.
We've spent the past couple of days looking at "proofs" in geometry. The reason I say "proofs" in quotations is that, in all honesty, I don't believe the two-column proofs that our book does are all that useful. I have actually long been opposed to them, but against my better judgment, decided to give them a shot anyway and make them sound reasonable. But you know what they say ... if you put lipstick on a chazir*, it's still a chazir. (They do say that, right?) The thing is, that style of proof just ends up sounding like an overly repetitive magical incantation rather than an actual logical argument --- as some of you pointed out in class today. I truly do value that honesty, by the way, and I hope you continue to be that honest with me.
Here is what I actually will expect of you going forward. It's quite simple: 
I expect you to be able to tell me how you know something is true, and back it up with evidence. 
That's it. 
It may be a big-picture kind of question, or it may be telling me how we get from step A to step B, but when it really comes down to it, it's all just "here's why I know this is true, based on this evidence". It doesn't have to be some stilted-sounding name like the "Congruent Supplements Theorem" either --- just explain it in your own words. That doesn't mean that any explanation is correct --- it still has to be valid mathematical reasoning. You can't tell me that two segments on a page are congruent because they're drawn in the same color, or something silly like that. But it doesn't have to be in some prescribed way --- just as long as you show me you really do understand it. 
With that in mind, by the way, I'm also not going to be giving you a quiz on Monday, either. Instead, we're going to focus on how to make arguments that are a lot more convincing than just saying the same thing in different words. I think you'll find that Monday's class will make a lot more sense than the past few classes combined. 
So, relax, take a much-deserved Shabbat, and when we come back, I hope to invite you to see geometry the way I see it --- not as a set of arbitrary rules, but as something both logical and beautiful. 
Shabbat Shalom.

*  I teach at a Jewish private school.  "Chazir" is Hebrew for "pig", which has the added bonus of being non-Kosher.  Two-column proofs are treif... at least in the context of introductory geometry.

P.S.  I am not saying that two-column proofs NEVER have a place in mathematics.  I am merely saying that introductory geometry, when kids are still getting used to much of geometry as a subject, is not the proper place to introduce the building of an axiomatic system.  Save that for later courses for the students who choose to become STEM majors.

August 14, 2015

A Real-Life Paradox: The Banach-Tarski Burrito

Who knew the Axiom of Choice could help me decide whether to get guacamole for an extra $1.95?

A couple of weeks ago, the popular YouTube channel Vsauce released a video that tackles what it details as “one of the strangest theorems in modern mathematics”: the Banach-Tarski Paradox.  In the video, Michael Stevens explains how a single sphere can be decomposed into peculiar-looking sets, after which those sets can be recombined to form two spheres, each perfectly identical to the original in every way.  If you haven’t had a chance to watch the video, go ahead and do so here:


Although this seems like a purely theoretical abstraction of mathematics, the video leaves us wondering if perhaps there could be a real-world application of such a bizarre phenomenon.  Stevens asks, “is [the Banach-Tarski paradox] a place where math and physics separate?  We still don’t know … The Banach-Tarski Paradox could actually happen in our real world … some scientists think it may be physically valid.”

Well, my friends, I would like to make the bold claim that I have indeed discovered a physical manifestation of this paradox.

And it happened a few years ago at my local Chipotle.

Let me start off by saying this:  I love Chipotle.  It’s a particularly good day for me when I walk in and get my burrito with brown rice, fajita veggies, steak, hot salsa, cheese, pico de gallo, corn, sour cream, guacamole (yes I know it’s extra, just put it on my burrito already!), and a bit of lettuce.  No chips, Coke, and about a half hour later I’m one happily stuffed math teacher.

The only thing that I don’t like about Chipotle is that the construction of said burritos often ends up failing at the most crucial step – the rolling into one coherent, tasty package.  Given the sheer amount of food that gets crammed into a Chipotle burrito, it’s unsurprising that they eventually lose their structural integrity and burst, somewhat defeating the purpose of ordering a burrito in the first place.

If you have ever felt the pain of seeing your glorious Mexican monstrosity explode with toppings like something out of an Alien movie because of an unlucky burrito-roller, you have probably been offered the opportunity to “double-wrap” your burrito for no extra charge, giving it an extra layer of tortilla to ensure the safe deliverance of guacamole-and-assorted-other-ingredients into your hungry maw.

Now, being a mathematically-minded kind of guy, I asked the employee who made me this generous offer:

“Well, could I just get my ingredients split between two tortillas instead?”

The destroyer-of-burritos gave that look that you always get from anybody who works at a business that bandies about words like “company policy” when they realize they have to deny a customer’s request even in the face of logic, and said:

“If you do that, we’ll have to charge you for two burritos.”

I was dumbfounded.

“Wait … so you’re saying that if you put a second tortilla around my burrito, you’ll charge me for one burrito, but if you rearrange the exact same ingredients, you’ll charge me for two?”

“Yes sir – company policy.”

Utterly defeated, I begrudgingly accepted the offer to give my burrito its extra layer of protection, doing my best to smile at the girl who probably knew as well as I did the sheer absurdity of the words that had come out of her mouth.  I paid the cashier, let out an audible “oof” as I lifted the noticeably heavy paper bag covered with trendy lettering, and exited the store.

When I arrived home, I took what looked like an aluminum foil-wrapped football out of the bag (which was a great source of amusement for my housemates), laid it out on the kitchen table, and decided to dismantle the burrito myself and arrange it into two much more manageable Mexican morsels.  I wondered whether I should have done this juggling of ingredients right there at Chipotle, just to see whether the staff’s heads would explode.

It was in that moment, with my head still throbbing from the madness of the entire experience, that I began to realize what had just happened.  How was it possible that a given mass of food could cost one amount one moment and another amount the next?  I immediately began to deconstruct my burrito, laying out the extra tortilla onto a plate and carefully making sure that precisely one-half of the ingredients – especially the guacamole – found their way into their new home.  As I carefully re-wrapped both tortillas, my suspicions were confirmed.  Sitting right in front of me were two delicious burritos, each identical in price to my original.

I had discovered the Banach-Tarski Burrito.
