January 12, 2019

Who Says You Can't Do That? --- Trig Identities


Ahh, trig identities...  a rite of passage for any precalculus student.

This is a huge stumbling block, because up until this point, many students have been perfectly successful (or at least have gotten by) in their classes by learning canned formulas and procedures and then doing a bunch of exercises that just change a \(2\) to a \(3\) here and a plus to a minus there.  Now, all of a sudden, there's no set way of going about things --- no "step 1 do this, step 2 do that".  They have to rely on their intuition and "play" with an identity until they prove that it's correct.

And to make matters worse, many textbooks --- and, as a result, many teachers --- make this subject arbitrarily and artificially harder for the students.

They insist that students are not allowed to work on both sides of the equation, but instead must specifically start at one end and work their way to the other.  I myself once subscribed to this "rule", because it's how I'd always been taught, and I always fed students the old line of "you can't assume the thing you're trying to prove because that's a logical fallacy".

Then one of my Honors Precalculus students called me on it.

He asked me to come up with an example of a trig non-identity where adding the same thing to both sides would lead to a false proof that the identity was correct.  After some thought, I realized not only that I couldn't think of one, but that, mathematically, there's no reason one should exist.

To begin with, one valid way to prove an identity is to work with each side of the equation separately and show that they are both equal to the same thing.  For example, suppose you want to verify the following identity:

\[\dfrac{\cot^2{\theta}}{1+\csc{\theta}}=\dfrac{1-\sin{\theta}}{\sin{\theta}}\]
Trying to work from one side to the other would be a nightmare, but it's much simpler to show that each side is equal to \(\csc{\theta}-1\).  This in fact demonstrates one of the oldest axioms in mathematics, as written by Euclid:  "things which are equal to the same thing are equal to each other."
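
In case you'd like to see it, here's a sketch of both computations, using the Pythagorean identity \(1+\cot^2{\theta}=\csc^2{\theta}\) on the left-hand side and splitting the fraction on the right:

\[
\begin{align*}
\dfrac{\cot^2{\theta}}{1+\csc{\theta}} &= \dfrac{\csc^2{\theta}-1}{1+\csc{\theta}} = \dfrac{(\csc{\theta}+1)(\csc{\theta}-1)}{1+\csc{\theta}} = \csc{\theta}-1\\
\dfrac{1-\sin{\theta}}{\sin{\theta}} &= \dfrac{1}{\sin{\theta}}-\dfrac{\sin{\theta}}{\sin{\theta}} = \csc{\theta}-1
\end{align*}
\]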

But what about doing the same thing to both sides of an equation?

There are two important points to realize about what's going on behind the scenes here.

The first is that if your "thing you do to both sides" is a reversible step --- that is, if you're applying a one-to-one function to both sides of an equation --- then it's perfectly valid to use that as part of your proof because it establishes an if-and-only-if relationship.  If that function is not one-to-one, all bets are off.  You can't prove that \(2=-2\) by squaring both sides to get \(4=4\), because the function \(x\mapsto x^2\) maps multiple inputs to the same output.
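
To see that loss of information in action, here's a quick Python sketch (trivial on purpose):

```python
# Squaring is not one-to-one, so applying it to both sides can make a
# false equation look true -- the sign information is destroyed.
square = lambda x: x ** 2
print(2 == -2)                  # False
print(square(2) == square(-2))  # True: 4 == 4, but that proves nothing

# Cubing IS one-to-one on the reals, so it preserves truth both ways.
cube = lambda x: x ** 3
print(cube(2) == cube(-2))      # False, just like the original equation
```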

It baffles me that most Precalculus textbooks mention one-to-one functions in the first chapter or two, yet completely fail to connect the concept to solving equations.*  A notable exception is UCSMP's Precalculus and Discrete Mathematics book, which establishes the following on p. 169:


Reversible Steps Theorem
Let \(f\), \(g\), and \(h\) be functions.  Then, for all \(x\) in the intersection of the domains of functions \(f\), \(g\), and \(h\),
  1. \(f(x)=g(x) \Leftrightarrow f(x)+h(x)=g(x)+h(x)\)
  2. \(f(x)=g(x) \Leftrightarrow f(x)\cdot h(x)=g(x)\cdot h(x)\) [We'll actually come back to this one in a bit --- there's a slight issue with it.]
  3. If \(h\) is 1-1, then for all \(x\) in the domains of \(f\) and \(g\) for which \(f(x)\) and \(g(x)\) are in the domain of \(h\), \[f(x)=g(x) \Leftrightarrow h(f(x))=h(g(x)).\]

Later on p. 318, the book says:

"...there is no new or special logic for proving identities.  Identities are equations and all the logic that was discussed with equation-solving applies to them."

Yes, that whole "math isn't just a bunch of arbitrary rules" thing applies here too.

The second important point, which you may have noticed while looking at the statement of the Reversible Steps Theorem, is that the implied domain of an identity matters a great deal.  When you're proving a trig identity, you are trying to establish that it is true for all inputs that are in the domain of both sides.  Most textbooks at least pay lip service to this fact, even though they don't follow it to its logical conclusion.

To illustrate why domain is so important, consider this example:

\[\dfrac{\cos{x}}{1-\sin{x}} = \dfrac{1+\sin{x}}{\cos{x}}\]
To verify this identity, I'm going to do something that may give you a visceral reaction:  I'm going to "cross-multiply".  Or, more properly, I'm going to multiply both sides by the expression \((1 - \sin x)\cos x\).  I claim that this is a perfectly valid step to take, and what's more, it makes the rest of the proof downright easy by reducing to everyone's favorite Pythagorean identity:

\[
\begin{align*}
(\cos{x})(\cos{x}) &= (1+\sin{x})(1-\sin{x})\\
\cos^2{x} &= 1-\sin^2{x}\\
\sin^2{x} + \cos^2{x} &= 1 \quad\blacksquare
\end{align*}
\]
"But wait," you ask, "what if \(x=\pi/2\)?  Then you're multiplying both sides by zero, and that's certainly not reversible!"

True.  But if \(x=\pi/2\), then the denominators of both sides of the equation are zero, so the identity isn't even true in the first place.  For any value of \(x\) that does not yield a zero in either denominator, though, multiplying both sides of an equation by that value is a reversible operation and therefore completely valid.
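
For quick numerical evidence --- evidence, mind you, not proof --- here's a Python sketch that samples both sides of the identity away from those excluded inputs:

```python
import numpy as np

# Compare the two sides of the identity at many random inputs,
# skipping points where a denominator (1 - sin x or cos x) vanishes.
rng = np.random.default_rng(0)
x = rng.uniform(-10, 10, 100_000)
x = x[(np.abs(1 - np.sin(x)) > 1e-3) & (np.abs(np.cos(x)) > 1e-3)]

lhs = np.cos(x) / (1 - np.sin(x))
rhs = (1 + np.sin(x)) / np.cos(x)
print(np.max(np.abs(lhs - rhs) / (1 + np.abs(lhs))))  # tiny: rounding error only
```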

Now, this isn't to say that multiplying both sides of an equation by a function can't lead to problems --- for example, if \(h\) is the zero function, then \(f(x)\cdot h(x)=g(x)\cdot h(x)\) no matter what \(f\) and \(g\) are.  Things can go wrong in subtler cases, too: suppose \(f\) and \(g\) are equal everywhere except at a single point \(a\); for example, perhaps \(f(a)=1\) and \(g(a)=2\).  If it just so happens that \(h(a)=0\), then \(f\cdot h\) and \(g\cdot h\) will be equal as functions, even though \(f\) and \(g\) are not themselves equal.
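
Here's that subtle case in a tiny Python sketch, with made-up functions on a small finite domain:

```python
# f and g agree everywhere except at a = 0, where h happens to vanish,
# so f*h and g*h are equal as functions even though f and g are not.
domain = [-2, -1, 0, 1, 2]
f = lambda x: 1 if x == 0 else x
g = lambda x: 2 if x == 0 else x
h = lambda x: x  # h(0) = 0

print(all(f(x) * h(x) == g(x) * h(x) for x in domain))  # True
print(all(f(x) == g(x) for x in domain))                # False
```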

The real issue here can be explained via a quick foray into higher mathematics.  Functions form what's called a ring --- basically meaning you can add, subtract, and multiply them, and these operations have all the nice properties we'd expect.  But preserving that if-and-only-if relationship when multiplying both sides of an equation by a function requires a special kind of ring called an integral domain, meaning it's impossible to multiply two nonzero functions together and get the zero function.

Unfortunately, functions in general don't form an integral domain --- not even continuous functions, or differentiable functions, or even infinitely differentiable functions do!  But if we move up to the complex numbers (where everything works better!), then the set of analytic functions --- functions that can be written as power series (infinite polynomials) on an open domain --- is an integral domain.  And most of the functions that precalculus students encounter turn out to be analytic**:  polynomial, rational, exponential, logarithmic, trigonometric, and even inverse trigonometric.  This means that when proving trigonometric identities, multiplying both sides by the same function is a "safe" operation.
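
For a concrete instance of that failure, take \(f(x)=\max(x,0)\) and \(g(x)=\max(-x,0)\): both are continuous and neither is the zero function, yet their product is identically zero.  A quick sketch:

```python
import numpy as np

# Two continuous, nonzero functions whose product is identically zero:
# f lives on the right half-line, g on the left, so they never overlap.
f = lambda x: np.maximum(x, 0.0)
g = lambda x: np.maximum(-x, 0.0)

x = np.linspace(-5, 5, 1001)
print(np.max(np.abs(f(x) * g(x))))  # 0.0 -- zero divisors in this ring
```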

So in sum, when proving trigonometric identities, as long as you're careful to only use reversible steps (what a great time to spiral back to one-to-one functions, by the way!), you are welcome to apply all the same algebraic operations that you would when solving equations, and the chain of equalities you establish will prove the identity.  Even "cross-multiplying" is fair game, because any input that would make the denominator zero would invalidate the identity anyway.***  Since trigonometric functions are generally "safe" (analytic), we're guaranteed to never run into any issues.

Now, none of this is to say that there isn't intrinsic merit to learning how to prove an identity by working from one side to the other.  Algebraic "tricks" --- like multiplying by an expression over itself (\(1\) in disguise!) to conveniently simplify certain expressions --- are important tools for students to have under their belts, especially when they encounter limits and integrals next year in calculus.

What we need to do, then, is encourage our students to come up with multiple solution methods, and perhaps present working from one side to the other as an added challenge to build their mathematical muscles.  And if students are going to work on both sides of an equation at once, then we need to hold them to high standards and make them explicitly state in their proofs that all the steps they have taken are reversible!  If they're unsure whether a step is valid, have them investigate it until they're convinced one way or the other.

If we're artificially limiting our students by claiming that only one solution method is correct, we're sending the wrong message about what mathematics really is.  Instead, celebrating and cultivating our students' creativity is the best way to prepare them for problem-solving in the real world.

--

* Rather, I would say it baffles me, but actually I'm quite used to seeing textbooks treat mathematical topics as disparate and unconnected, like how a number of Precalculus books teach vectors in one chapter and matrices in the next, yet never once mention how they are so beautifully tied together via transformations.

** Except perhaps at a few points.  The more correct term for rational functions and certain trigonometric functions is actually meromorphic, which describes functions that are analytic everywhere except a discrete set of points, called the poles of the function, where the function blows up to infinity because of division by zero.

*** If you extend the domains of the trig functions to allow for division by zero, you do need to be more careful.  Not because there's anything intrinsically wrong with dividing by zero, but because \(0\cdot\infty\) is an indeterminate expression and causes problems that algebra simply can't handle.

January 2, 2019

When Math is in Jeopardy!

(Forgive the somewhat dramatic title, but it was too good to pass up.)

I absolutely love game shows.  It's always fun to imagine you're there on stage ... or at least to yell at the people on TV when they don't know a question that was just so easy!

One of my favorite game shows, of course, is Jeopardy --- it has an air of intellectualism about it, with such an eclectic collection of topics.  I don't even mind that I'm admittedly pretty terrible at it... I always find myself thinking, "Man, if they just had some math questions, I could knock those out of the park!"

Well, a couple of days ago, as we were getting ready to celebrate the advent of 2019, I was elated to see that there was in fact a math question!  (Or math answer, rather.  You know what, I'm just going to call them "clue" and "response" to avoid confusion.)

The category was "Ends in 'ITE'", and the clue was as follows:
"In math, when the number of elements in a set is countable, it's this type of set."
Thanks to @GoGoGadgetGabe on Twitter for the screenshot!
As I was trying to think of what response would end in those three letters, a contestant buzzed in and said:
"What is 'finite'?"
Alex Trebek notified the contestant that they were correct.

Meanwhile, I was speechless because I knew they weren't.

Well, at least not completely correct.  See, in math, words have very specific definitions in order to describe very specific phenomena.  In this case, the word "countable" is used in set theory to describe a set --- a collection of things --- that can be put into one-to-one correspondence with a subset of the natural numbers, \(\mathbb{N}=\{0,1,2,3,\ldots\}\).*  So, while finite sets are indeed countable, there are also infinite countable sets --- the integers, the even numbers, and the rational numbers are all well-known examples.  (That last one still amazes me --- in some sense, there are exactly as many fractions as there are whole numbers, even though it seems like the former should outnumber the latter!)
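
To make that concrete, here's a small Python sketch of one such one-to-one correspondence, pairing the natural numbers \(0,1,2,3,4,\ldots\) with the integers \(0,1,-1,2,-2,\ldots\):

```python
# An explicit bijection from N = {0, 1, 2, 3, ...} onto the integers Z:
# odd naturals map to the positive integers, even naturals to zero and
# the negatives, so every integer gets hit exactly once.
def nat_to_int(n: int) -> int:
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

print([nat_to_int(n) for n in range(9)])  # [0, 1, -1, 2, -2, 3, -3, 4, -4]
```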

That means that the statement "When the number of elements in a set is countable, it's a finite set" is actually incorrect.

Naturally, I took to the internet to voice my displeasure with how my favorite subject was represented on national television.  When talking with a friend of mine (who knows more about game shows than I ever will), I learned that two other math-centric Jeopardy clues had run into similar issues in the past couple of months.
  • In one case, the clue was, "If \(x^2+2=18\), then \(x\) equals this."  The response that was judged to be correct was "What is '\(4\)'?".  Any algebra teacher reading this right now is shaking their head, because that answer won't get you full credit on any test.  There are two possible values of \(x\): \(4\) and \(-4\).
  • In another case, the response to the clue was supposed to be "What is the commutative property?".  Nobody got it correct (which makes me sad in itself), but when Alex Trebek read the correct response out loud, he said it as "COM-myoo-TAY-tive" instead of "com-MYOO-tuh-TIVE".
Now, in isolation, any one of these would be only a minor annoyance.  After all, there are plenty of clues on Jeopardy that have multiple possible correct responses, and contestants aren't expected to give all correct responses, but rather just one.

But the fact that questions about mathematics appear so infrequently on the show compared to topics such as history, combined with the fact that these kinds of details are not attended to, sends the message that mathematics is somehow less important --- not worth researching as fully or representing in the true spirit of the subject.

We already live in a culture in which any time I tell somebody I teach math, the inevitable response is "Oh, haha, I was never any good at math."  Somehow people seem proud to admit and even proclaim this.  I'm willing to wager (maybe even make this a true Daily Double) that those people would be much more reluctant to say something like "Oh, haha, I was never any good at reading."  There's a pervasive attitude that mathematics is a torturous and frivolous subject, devoid of the awe-inspiring beauty and sheer fun that those who embrace the subject know it to have.

With that said, I'd like to challenge the writers of Jeopardy --- and perhaps other game shows as well --- to make a conscious effort not only to ask more questions about mathematics, but to take care to do them well, perhaps even consulting one or more mathematicians to make sure the precision and nuance of the subject are properly represented.  (I know math teachers who have come up with versions of game shows for their classes with only mathematically-oriented questions... the students love it!)

It doesn't have to be something like "\(1\times 2\times 3\times 4\times 5\)"** either.  There's such a rich amount of material to pull from --- why not ask questions about, say, fractals?  Or famous mathematicians?  Or even unsolved problems (and those who eventually solved them)?  I would be giddy to see something like,
"Shot and killed in a duel when he was only twenty, this mathematician spent the last night of his life writing down what he'd discovered about quintic equations."
Doesn't that make you want to find out what the story was, why he felt fifth-degree equations were so important that he just had to share it, knowing he would soon die?  Mathematics is full of stories like this, and perhaps letting people know those stories exist, that there's more to math than doing arithmetic problems, might change how people view the subject.

Of course, if winning prize money is what you like about game shows, there's always the million-dollar Millennium Prize Problems...

--

* You might be thinking, "But zero isn't a natural number!"  As it turns out, there's no real consensus on whether zero is considered a natural number.  Some mathematicians choose to include it, while others don't.  To some extent it depends on your field of study --- for example, number theorists may be more likely to exclude zero because it doesn't play very nicely with things like prime factorizations, while computer scientists are used to counting from zero instead of one.  Peano actually began his axiomatization of the natural numbers with one as the starting point, but later changed his mind and started with zero!

** This was actually a clue on a Kid's Jeopardy episode, under the category "Non-Common Core Math".  That will eventually be the impetus for another blog post in the future on the way we currently view mathematics teaching.

October 21, 2017

Discreet Discrete Calculus

Over the past week, I went to the Georgia Mathematics Conference (GMC) at Rock Eagle, held by the Georgia Council of Teachers of Mathematics (GCTM).  The GMC is one of the events I look forward to most every year --- tons of math educators and advocates sharing lessons, techniques, and ideas about how to best teach math to students from kindergarten through college.  I always enjoy sharing my own perspectives as well (even when they do get a bit bizarre!).

This time, I got to share the results of a lesson that I guinea-pigged on my Honors Precalculus class last year, where they explored the relationships between polynomial sequences, common differences, and partial sums.  The presentation from the GMC uses the techniques we looked at to develop the formula for the sum of the first \(n\) perfect squares:

\[1^2+2^2+3^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}\]
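
As a quick illustration of where those ideas lead (a sketch, not the classroom development itself), the partial sums form a cubic sequence --- so their third common differences are constant --- and the closed form matches exactly:

```python
# The partial sums 1^2 + ... + n^2 form a cubic sequence, so their
# third differences are constant, and n(n+1)(2n+1)/6 reproduces them.
def diffs(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

sums = [sum(k * k for k in range(1, n + 1)) for n in range(1, 9)]
print(sums)                       # [1, 5, 14, 30, 55, 91, 140, 204]
print(diffs(diffs(diffs(sums))))  # [2, 2, 2, 2, 2] -- constant!

closed = [n * (n + 1) * (2 * n + 1) // 6 for n in range(1, 9)]
print(closed == sums)             # True
```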

At the GMC, we did go a bit further than my class did --- they didn't do the full development of discrete calculus --- but some knowledge of where the ideas lead is never a bad thing, and a different class at a different school may even be able to go further!

Here is the PowerPoint from the presentation.  If you find it useful, or have any questions, please don't hesitate to leave a comment!

(I recommend downloading the PPTX file and viewing the slide show in PowerPoint, rather than using Google's online viewer.)

September 15, 2017

I'm Done with Two Column Proofs.

Wow.

It really has been a while since I've posted here, hasn't it?

But I suppose having a mini-crisis in my geometry class that has forced me to reject our textbook's philosophy on what "proofs" should be is a good enough reason to resurrect this blog.  I finally have the time this year, and I feel the need to share my probably overly opinionated beliefs about math education.

This is the first year I have tried to integrate proofs into our school's geometry curriculum across the board.  (In the past, proofs were only discussed in honors classes, but I felt that reasoning through how you know something is true is important for everyone.)  I was trying to justify the two-column format, as much as I hate it, as a way to "scaffold" student thinking --- yay educational buzzwords!  But when I actually did it, it got exactly the reaction I knew it would: it just served to obfuscate the material and utterly drain the life out of it.  I realized I should have stuck to my guns and listened to the likes of Paul Lockhart and Ben Orlin.

So, after some reflection and course correction, here's the email I just sent my students.

---
Hello mathematicians. I have a rather bizarre request. 
Don't do your geometry homework this weekend. 
Yes, you read that right. Don't. 
Let me explain.
We've spent the past couple of days looking at "proofs" in geometry. The reason I say "proofs" in quotations is that, in all honesty, I don't believe the two-column proofs that our book does are all that useful. I have actually been long opposed to them, but against my better judgment, decided to give them a shot anyway and make them sound reasonable. But you know what they say ... if you put lipstick on a chazir*, it's still a chazir. (They do say that, right?) The thing is, that style of proof just ends up sounding like an overly repetitive magical incantation rather than an actual logical argument --- as some of you pointed out in class today. I truly do value that honesty, by the way, and I hope you continue to be that honest with me. 
Here is what I actually will expect of you going forward. It's quite simple: 
I expect you to be able to tell me how you know something is true, and back it up with evidence. 
That's it. 
It may be a big-picture kind of question, or it may be telling me how we get from step A to step B, but when it really comes down to it, it's all just "here's why I know this is true, based on this evidence". You don't have to cite some stilted-sounding name like the "Congruent Supplements Theorem" either --- just explain it in your own words. That doesn't mean that any explanation is correct --- it still has to be valid mathematical reasoning. You can't tell me that two segments on a page are congruent because they're drawn in the same color, or something silly like that. But it doesn't have to be in some prescribed way --- just as long as you show me you really do understand it. 
With that in mind, by the way, I'm also not going to be giving you a quiz on Monday, either. Instead, we're going to focus on how to make arguments that are a lot more convincing than just saying the same thing in different words. I think you'll find that Monday's class will make a lot more sense than the past few classes combined. 
So, relax, take a much-deserved Shabbat, and when we come back, I hope to invite you to see geometry the way I see it --- not as a set of arbitrary rules, but as something both logical and beautiful. 
Shabbat Shalom.

*  I teach at a Jewish private school.  "Chazir" is Hebrew for "pig", which has the added bonus of being non-Kosher.  Two-column proofs are treif... at least in the context of introductory geometry.

P.S.  I am not saying that two-column proofs NEVER have a place in mathematics.  I am merely saying that introductory geometry, when kids are still getting used to much of geometry as a subject, is not the proper place to introduce the building of an axiomatic system.  Save that for later courses for the students who choose to become STEM majors.

August 14, 2015

A Real-Life Paradox: The Banach-Tarski Burrito

Who knew the Axiom of Choice could help me decide whether to get guacamole for an extra $1.95?

A couple of weeks ago, the popular YouTube channel Vsauce released a video that tackles what it describes as “one of the strangest theorems in modern mathematics”: the Banach-Tarski Paradox.  In the video, Michael Stevens explains how a single sphere can be decomposed into peculiar-looking sets, after which those sets can be recombined to form two spheres, each perfectly identical to the original in every way.  If you haven’t had a chance to watch the video, go ahead and do so here:


Although this seems like a purely theoretical abstraction of mathematics, the video leaves us wondering if perhaps there could be a real-world application of such a bizarre phenomenon.  Stevens asks, “is [the Banach-Tarski paradox] a place where math and physics separate?  We still don’t know … The Banach-Tarski Paradox could actually happen in our real world … some scientists think it may be physically valid.”

Well, my friends, I would like to make the bold claim that I have indeed discovered a physical manifestation of this paradox.

And it happened a few years ago at my local Chipotle.

Let me start off by saying this:  I love Chipotle.  It’s a particularly good day for me when I walk in and get my burrito with brown rice, fajita veggies, steak, hot salsa, cheese, pico de gallo, corn, sour cream, guacamole (yes I know it’s extra, just put it on my burrito already!), and a bit of lettuce.  No chips, Coke, and about a half hour later I’m one happily stuffed math teacher.

The only thing that I don’t like about Chipotle is that the construction of said burritos often ends up failing at the most crucial step – the rolling into one coherent, tasty package.  Given the sheer amount of food that gets crammed into a Chipotle burrito, it’s unsurprising that they eventually lose their structural integrity and burst, somewhat defeating the purpose of ordering a burrito in the first place.

If you have ever felt the pain of seeing your glorious Mexican monstrosity explode with toppings like something out of an Alien movie because of an unlucky burrito-roller, you have probably been offered the opportunity to “double-wrap” your burrito for no extra charge, giving it an extra layer of tortilla to ensure the safe deliverance of guacamole-and-assorted-other-ingredients into your hungry maw.

Now, being a mathematically-minded kind of guy, I asked the employee who made me this generous offer:

“Well, could I just get my ingredients split between two tortillas instead?”

The destroyer-of-burritos gave that look that you always get from anybody who works at a business that bandies about words like “company policy” when they realize they have to deny a customer’s request even in the face of logic, and said:

“If you do that, we’ll have to charge you for two burritos.”

I was dumbfounded.

“Wait … so you’re saying that if you put a second tortilla around my burrito, you’ll charge me for one burrito, but if you rearrange the exact same ingredients, you’ll charge me for two?”

“Yes sir – company policy.”

Utterly defeated, I begrudgingly accepted the offer to give my burrito its extra layer of protection, doing my best to smile at the girl who probably knew as well as I did the sheer absurdity of the words that had come out of her mouth.  I paid the cashier, let out an audible “oof” as I lifted the noticeably heavy paper bag covered with trendy lettering, and exited the store.

When I arrived home, I took what looked like an aluminum foil-wrapped football out of the bag (which was a great source of amusement for my housemates), laid it out on the kitchen table, and decided to dismantle the burrito myself and arrange it into two much more manageable Mexican morsels.  I wondered whether I should have done this juggling of ingredients right there at Chipotle, just to see whether the staff’s heads would explode.

It was in that moment, with my head still throbbing from the madness of the entire experience, that I began to realize what had just happened.  How was it possible that a given mass of food could cost one amount one moment and another amount the next?  I immediately began to deconstruct my burrito, laying out the extra tortilla onto a plate and carefully making sure that precisely one-half of the ingredients – especially the guacamole – found their way into their new home.  As I carefully re-wrapped both tortillas, my suspicions were confirmed.  Sitting right in front of me were two delicious burritos, each identical in price to my original.

I had discovered the Banach-Tarski Burrito.

April 24, 2015

A Radical New Look for Logarithms

"A good notation has a subtlety and suggestiveness which at times make it almost seem like a live teacher." — Bertrand Russell

"We could, of course, use any notation we want; do not laugh at notations; invent them, they are powerful. In fact, mathematics is, to a large extent, invention of better notations." — Richard Feynman

"By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race." — Alfred North Whitehead


Notation is perhaps one of the most important aspects of mathematics.  The right choice of notation can make a concept clear as day; the wrong choice can make extracting its meaning hopeless.  Of course, one great thing about notation is that even if there's a poor choice of notation out there (such as \(\left[x\right]\) or \(\pi\)), often someone comes along and creates a better one (such as \(\lfloor x\rfloor\) for the floor function or multiples of tau, \(\tau\approx 6.28318\), for radian measure of angles).

Which brings me to one such poor choice of notation, one that I believe needs fixing:  the rather asymmetrical notation of powers, roots, and logarithms.

Here we have three very closely related concepts — both roots and logarithms are ways to invert exponentiation, the former returning the base and the latter returning the exponent.  And yet their notation couldn't be more different:
\[
2^3=8\\
\sqrt[3]{8}=2\\
\log_2{8}=3
\]

This always struck me as annoyingly inelegant.  Wouldn't it be nice if these notations bore at least some resemblance to each other?

After giving it some thought, I believe I have found a possible solution.  As an alternative to writing \(\log_2 {8}\), I propose the following notation:


This notation makes use of a reflected radical symbol, such that the base of the logarithm is written in a similar manner to the index of a radical but below the "point" (the pointy part of the radical symbol), and the argument of the logarithm is written "inside".  The use of this notation has a number of advantages:

  1. The symmetry between the normal radical for roots and the reflected radical for logarithms highlights both their similarities and their differences — each one "undoes" an exponential expression, but each one gives a different part of the expression (the base and the exponent, respectively.)
  2. The radical symbol can be looked at as a modified lowercase letter "r".  (This may actually be the origin of the symbol, where the "r" stands for radix, the Latin word for "root".)  In a similar way, the new symbol for logarithms resembles a capital "L".
  3. The placement of the "small number" and the "point" can take on a secondary spatial meaning: 
    • The "small number" represents a piece of information we know about an exponential expression, and its placement indicates which part we know.
      • For a root, the "small number" is on top, so we know the exponent.
      • For a logarithm, the "small number" is on bottom, so we know the base.
    • The symbol seems to "point" to the piece of information that we are looking for.
      • For a root, the "point" is pointing downward, so we are looking for the base.
      • For a logarithm, the "point" is pointing upward, so we are looking for the exponent.
    • Looking at the image above, the new notation seems to say "We know the base is 2, so what's the exponent that will get us to 8?"
    • Similarly, the expression \(\sqrt[3]{8}\) now can be interpreted as saying "We know the exponent is 3, so what's the base that will get us to 8?"

This notation would obviously not make much of a difference for seasoned mathematicians who are perfectly comfortable with the \(\log\) and \(\ln\) functions.  But from a pedagogical standpoint, the reflected radical, with its multi-layered meaning and auto-mnemonic properties, could help students become more comfortable with a concept that many look at as just meaningless manipulation of symbols.

When I first came up with this reflected-radical notation, I had originally imagined that it should replace the current notation.  However, after some feedback from various people and some further consideration, I think a better course of action would be to have this notation be used alongside the current notation, much in the way that we have multiple notations for other concepts in math (such as the many ways to write derivatives).  However, I would suggest that, if it were to become commonplace*, this notation would be best to use when first introducing the concept in schools.  The current notation isn't wrong per se — it's just not very evocative of the underlying concept.  Anything that can better elucidate that concept can't be a bad thing when it comes to students learning mathematics!

It may seem like a radical idea.
But it's a logical one.


* Of course, for this notation to become commonplace, somebody would need to figure out how to replicate it in LaTeX.  Any takers?
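
Here's one crude first attempt --- very much a sketch, assuming the graphicx package (whose \scalebox command can flip a box vertically), with spacing that would surely need tuning:

```latex
\documentclass{article}
\usepackage{graphicx} % \scalebox{1}[-1] reflects a box vertically

% Sketch of a reflected-radical logarithm: flip an ordinary radical
% upside down, then flip its contents back so they read normally.
% The \raisebox is a rough baseline correction after the flip.
\newcommand{\rlog}[2]{%
  \raisebox{0.6em}{\scalebox{1}[-1]{%
    $\sqrt[\scalebox{1}[-1]{$\scriptstyle #1$}]{\scalebox{1}[-1]{$#2$}}$%
  }}%
}

\begin{document}
$\rlog{2}{8} = 3$  % "the exponent that gets base 2 to 8"
\end{document}
```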

September 15, 2014

Infinity is my favorite number.



Yes, you read that right.

I've recently been embroiled in a lovely debate on Numberphile's video, "Infinity is bigger than you think", in which Dr. James Grime starts off:  "We're going to break a rule.  We're breaking one of the rules of Numberphile.  We're talking about something that isn't a number.  We're going to talk about infinity."



I, too, was a longtime believer in what high school students all over are told:  "Infinity is not a number; infinity is a concept."  As my studies of mathematics progressed, however, I began to see that perhaps the things I had always taken for granted were not as black-and-white as they had seemed.  There was a lot more nuance to mathematics than I had ever realized, and learning those nuances opened up an entire new level of understanding, unlocking all sorts of links between concepts that had previously seemed worlds apart.  So it's no wonder that "Infinity Is Not A Number" (which I will occasionally abbreviate as "IINAN") was one of the first claims to which I took a fine-tooth comb.  What I learned changed my stance on infinity and firmly cemented it as my favorite number — not just a concept, but an honest-to-god number.*

The most common argument made by IINAN proponents involves the curious property that \(\infty +1=\infty\).  This, they say, leads to all sorts of contradictions, because all one has to do is simply subtract \(\infty\) from both sides:

\[
\begin{align*}
\infty+1 &= \infty\\
\infty+1-\infty &= \infty-\infty\\
1 &= 0
\end{align*}
\]

Oh no!  We know that the statement \(1=0\) is obviously false, so there must be a false assumption somewhere.  Many IINAN defenders claim that the false assumption was that we tried to treat \(\infty\) as a number.  But that's not actually where the problem with infinity lies.

The problem is that we tried to do algebra with it.

For mathematicians, the most convenient place to do algebra is in a structure called a field.  If you're already familiar with what a field is, great, but if not, you can think of a field as a number system in which the age-old operations of addition, subtraction, multiplication, and division (the four operations that my father often notes are the only ones he ever needs when I talk about the kinds of math I teach) work exactly as we'd like them to.  The fields with which we are most familiar are the rational numbers (\(\mathbb{Q}\)), the badly-named so-called "real" numbers (\(\mathbb{R}\)), and often the complex numbers (\(\mathbb{C}\)).  One basic thing about a field is that the subtraction property of equality holds:  For any numbers \(a\), \(b\), and \(c\) in our field, if \(a=b\), then \(a-c=b-c\).

What about \(\infty\) though?  When we attempted to use \(\infty\) in an algebra problem, we got back complete garbage.  And we know that the subtraction property of equality should hold for any numbers in a field.  What this means, then, is that \(\infty\) is not part of that field (or any field as far as I'm aware).  So, when someone says "Infinity is not a number", what they really mean is "Infinity is not a real number."  (It's not a complex number, either, for that matter.)  It doesn't follow the same rules that the real numbers do.

But that doesn't mean it's not a number at all.

We've seen this sort of thing happen before.  The Greek mathematician Diophantus, when faced with the equation \(4x+20=0\), called its solution of \(-5\) "absurd" — yet now students learn about negative numbers as early as elementary school, and we barely blink an eye at their use in everyday life.  Square roots of negative numbers seemed equally preposterous to the Italian mathematician Gerolamo Cardano, and the French mathematician René Descartes called them "imaginary", a term that we're unfortunately stuck with today.  But imaginary numbers — and the complex numbers we build from them — are a vital part of physics, from alternating currents to quantum mechanics.

So what makes infinity any different from \(i\)?

Sure, it seems bizarre that a number plus one could equal itself.  But it's equally bizarre that the square of a number could be negative.  And sure, we can get a contradiction if we do certain things to infinity.  But that happens with \(i\) as well!  If we attempt to use the identity \(\sqrt{a}\cdot\sqrt{b}=\sqrt{ab}\), we can arrive at a similar contradiction:
\[
\begin{align*}
\sqrt{-1}\cdot\sqrt{-1} &= \sqrt{(-1)\cdot(-1)}\\
i\cdot i &= \sqrt{1}\\
-1 &= 1
\end{align*}
\]

When this equation fails, you don't see mathematicians clamoring that "\(i\) isn't a number"!  Instead, the response is that the original identity doesn't work like we thought it did when we extend our real number system to include the complex numbers: the square root function takes on a new life as a multi-valued function.  There's that nuance again!  For the same reason, infinity makes us look closer at something as simple as subtraction, at which point we find that \(\infty-\infty\) is an indeterminate form, something that we need the tools of calculus to properly deal with.

The truth is, mathematicians have been treating \(\infty\) as a number** for quite some time now.

In real analysis, which was developed to give the techniques of calculus a rigorous footing, points labelled \(+\infty\) and \(-\infty\) can be added to either end of the real number line to give what we call the extended real number line, often denoted \(\overline{\mathbb{R}}\) or \(\left[-\infty,+\infty\right]\).  The extended real number line is useful in describing concepts in measure theory and integration, and it has algebraic rules of its own, though analysts are still careful to mention that these two extra points are not real numbers.  What's more, the extended real line is not a field, because it doesn't satisfy all the nice properties that a field does.  (But that just makes us appreciate working in a field that much more!)
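
Amusingly, most programmers have already met a rough computational model of the extended reals: IEEE floating-point arithmetic includes \(+\infty\) and \(-\infty\) and follows rules much like these.  A quick Python sketch:

```python
# IEEE-754 floats behave much like the extended real number line:
inf = float('inf')
print(inf + 1 == inf)  # True: adding a finite number doesn't change infinity
print(1 / inf)         # 0.0
print(inf - inf)       # nan -- an indeterminate form, not a number
print(0 * inf)         # nan -- another indeterminate form
```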

Projective geometry gives us a different sort of infinity, what I like to call an "unsigned infinity", one that is obtained by letting \(-\infty\) and \(+\infty\) overlap and creating what is known as the real projective line.  And complex analysis, which extends calculus to the complex plane, takes it even further, letting all the different infinities in all directions overlap to create a sort of "complex infinity", sometimes written \(\tilde{\infty}\), sitting atop the Riemann sphere.  What I particularly like about these projective infinities is that, using them, you can actually divide by zero! ***

So, since there are actually a number of different kinds of infinity that can be referred to, I would say that, more specifically, complex infinity is my favorite number.

The tough thing about this situation is that the concept of "number" is a very difficult one to precisely and universally define — similar to how linguists still struggle to come up with a universal definition of "word".  By trying to come up with such a description, you end up either including things that you don't want to be numbers (such as matrices) or excluding things that you do want to be numbers (such as complex numbers).  The best we can really do is keep an open mind about what a "number" is.

After all, there are infinitely many of them already — so sooner or later, there are bound to be new ones we haven't seen yet.





* I'm not saying that infinity isn't a concept.  When it really comes down to it, every number is a concept.  That's the beauty of having abstracted "two" from an adjective, as in "two sheep", into "two" as a noun.

** There's an argument to be made that treating something like a number doesn't mean it is a number.  But at some point, the semantic distinction between these two becomes somewhat blurred.

*** Don't worry, I'll make a post about how to legitimately divide by zero in the near future!
