Monday, December 29, 2014

Big Numbers (I mean REALLY BIG NUMBERS)

Try playing this game at a party:

Two people each get a blank index card and do the following:


You have fifteen seconds. Using standard math notation, English words, or both, name a single whole number—not an infinity—on a blank index card. Be precise enough for any reasonable modern mathematician to determine exactly what number you’ve named, by consulting only your card and, if necessary, the published literature.

Whoever writes the larger number wins.  Of course, the rules don't allow entries like "the number of grains of sand in the Sahara Desert," since that isn't well defined.  After trying this game on a number of people, here is what I found.

Most people attempt the following:

999999999999999999999999999999999999999999999999999999999999999999999999999999999

But what they don't realize is that 1's take less time to write than 9's, so in fifteen seconds you can squeeze in more digits, and more digits means a bigger number.  But of course, this is squashed by the clever kid who writes the following:

9999999999999999999999999999999999999999999999999999999999^9999999999999999999999

But even this is quite small, as one can do the following:

9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9

which completely squashes the one before.  (Note: after I told someone that 1's are quicker to write and explained the 9^9^9... strategy, she tried the following:

1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1

which, of course, equals 1.)

Place value, multiplication, exponentials, and so on can all express the same numbers; what distinguishes them is how concisely they do it.  The key to the biggest number contest is not swift penmanship, but rather a potent paradigm for concisely capturing the gargantuan.
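To get a feel for that conciseness, note that you don't need to build a number to measure it; logarithms give its digit count directly. A quick sketch in Python (remember the tower associates top-down, so 9^9^9 means 9^(9^9)):

import math

# digits of 9^(9^9) = floor(9^9 * log10(9)) + 1
digits = math.floor(9**9 * math.log10(9)) + 1
print(digits)   # 369693100 -- five characters of notation, ~370 million digits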

Humankind has always been curious about the ever larger: the stars in the sky, the grains of sand in the desert.  And a whole new dimension of numbers opens up with each revolution in the numerical system.  The Roman numerals I, II, III, IV, V, ... served well enough for counting, but as operations beyond addition, such as multiplication, grew important, Roman numerals gave way to Arabic place-value notation.  So we find that the evolution of big numbers accompanies major revolutions not just in mathematics but in the wider world, the Enlightenment that followed the scientific revolution being a prime example.  Big numbers mark major steps in human progress.

Large numbers also shrink things that once seemed enormous.  In ancient times, the number of grains of sand in a desert was a byword for an uncountable quantity.  The historian Ilan Vardi cites the ancient Greek term sand-hundred, colloquially meaning zillion, as well as a passage from Pindar's Olympic Ode II asserting that "sand escapes counting."  Yet once Archimedes put exponential-style notation to work, such a number no longer looked so large.  As Archimedes reckoned it, the number of grains of sand needed to fill the universe takes only 4 characters to write: 10^63 (he didn't count the caret).

However, the reverse is also true within large numbers: numbers that are seemingly small within a more potent paradigm may be actually gargantuan.  Consider, for example, the oft-repeated legend of the Grand Vizier in Persia who invented chess. The King, so the legend goes, was delighted with the new game, and invited the Vizier to name his own reward. The Vizier replied that, being a modest man, he desired only one grain of wheat on the first square of a chessboard, two grains on the second, four on the third, and so on, with twice as many grains on each square as on the last. The innumerate King agreed, not realizing that the total number of grains on all 64 squares would be 2^64-1, or about 18.4 quintillion, equivalent to the world's present wheat production for 150 years.
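The arithmetic behind the legend is easy to check (a quick sketch in Python):

# One grain on square 1, doubling through square 64.
total = sum(2**k for k in range(64))
assert total == 2**64 - 1        # the doubling series telescopes
print(total)                     # 18446744073709551615, about 1.84 * 10^19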

Returning to our bigger number contest: are exponentials the best we can do?  It seems humankind developed no stronger encoding scheme until the twentieth century, when the forefathers of computer science produced one.  These people, called formalists, wished to make the axioms of mathematics rigorous, and one key question was what "computable" ought to mean.  Until then, most took computable to mean primitive recursive, roughly, computable by loops whose lengths are fixed in advance.  In 1928, however, Wilhelm Ackermann exposed the flaw in this definition with a function that is clearly computable, yet grows far too explosively to be primitive recursive.  The annoyingly recursive rules for his function go as follows:

A(m, n) =

Case 1 (m = 0):             n + 1
Case 2 (m > 0, n = 0):      A(m - 1, 1)
Case 3 (m > 0, n > 0):      A(m - 1, A(m, n - 1))
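Here is a direct transcription of those rules in Python (a minimal sketch; the memoization only postpones the inevitable, since the values explode almost immediately):

from functools import lru_cache

@lru_cache(maxsize=None)
def A(m: int, n: int) -> int:
    if m == 0:
        return n + 1                  # Case 1
    if n == 0:
        return A(m - 1, 1)            # Case 2
    return A(m - 1, A(m, n - 1))      # Case 3

print(A(2, 3))   # 9
print(A(3, 3))   # 61
# A(4, 1) = 65533 already strains the recursion stack;
# A(4, 2) has 19,729 digits -- don't even try.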

And the function explodes up the ladder of operations: the first argument selects the operation, addition, then multiplication, then exponentiation, then power towers, so the one-argument Ackermann sequence is often illustrated as

A(1) = 1+1
A(2) = 2*2
A(3) = 3^3
A(4) = 4^4^4^4 (a power tower of four 4's)
etc...

And Ackermann proved that no matter what whole number inputs are given for m and n, the function will eventually give an answer, albeit after a LONG time.  Wielding the Ackermann sequence, we can clobber unschooled opponents in the biggest-number contest.  Just for the fun of it, I recommend writing the following on your index card:

Ackermann(Ackermann(googolplex, googolplex), Ackermann(googolplex, googolplex))  ;)


Ackermann numbers are pretty big, but they’re not yet big enough. The quest for still bigger numbers takes us back to the formalists. After Ackermann demonstrated that ‘primitive recursive’ isn’t what we mean by ‘computable,’ the question still stood: what do we mean by ‘computable’? In 1936, Alonzo Church and Alan Turing independently answered this question. While Church answered using a logical formalism called the lambda calculus, Turing answered using an idealized computing machine—the Turing machine—that, in essence, is equivalent to every Compaq, Dell, Macintosh, and Cray in the modern world. Turing’s paper describing his machine, "On Computable Numbers," is rightly celebrated as the founding document of computer science.
"Computing," said Turing,
is normally done by writing certain symbols on paper. We may suppose this paper to be divided into squares like a child’s arithmetic book. In elementary arithmetic the 2-dimensional character of the paper is sometimes used. But such use is always avoidable, and I think it will be agreed that the two-dimensional character of paper is no essential of computation. I assume then that the computation is carried out on one-dimensional paper, on a tape divided into squares.
Turing continued to explicate his machine using ingenious reasoning from first principles. The tape, said Turing, extends infinitely in both directions, since a theoretical machine ought not be constrained by physical limits on resources. Furthermore, there’s a symbol written on each square of the tape, like the ‘1’s and ‘0’s in a modern computer’s memory. But how are the symbols manipulated? Well, there’s a ‘tape head’ moving back and forth along the tape, examining one square at a time, writing and erasing symbols according to definite rules. The rules are the tape head’s program: change them, and you change what the tape head does.
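To make the tape-head picture concrete, here is a toy simulator (a minimal sketch in Python; the rule-table format is my own illustrative choice, not Turing's notation):

from collections import defaultdict

# Rules map (state, symbol read) -> (symbol to write, move L/R, next state).
# This little machine flips every bit and halts at the first blank square.
RULES = {
    ("flip", "0"): ("1", "R", "flip"),
    ("flip", "1"): ("0", "R", "flip"),
    ("flip", "_"): ("_", "R", "halt"),
}

def run(rules, tape_input, state="flip", max_steps=1000):
    tape = defaultdict(lambda: "_", enumerate(tape_input))   # blank = "_"
    head = steps = 0
    while state != "halt" and steps < max_steps:
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return "".join(tape[i] for i in sorted(tape)), steps

print(run(RULES, "1011"))   # ('0100_', 5)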
Turing’s august insight was that we can program the tape head to carry out any computation. Turing machines can add, multiply, extract cube roots, sort, search, spell-check, parse, play Tic-Tac-Toe, list the Ackermann sequence. If we represented keyboard input, monitor output, and so forth as symbols on the tape, we could even run Windows on a Turing machine. But there’s a problem. Set a tape head loose on a sequence of symbols, and it might stop eventually, or it might run forever—like the fabled programmer who gets stuck in the shower because the instructions on the shampoo bottle read "lather, rinse, repeat." If the machine’s going to run forever, it’d be nice to know this in advance, so that we don’t spend an eternity waiting for it to finish. But how can we determine, in a finite amount of time, whether something will go on endlessly? If you bet a friend that your watch will never stop ticking, when could you declare victory? But maybe there’s some ingenious program that can examine other programs and tell us, infallibly, whether they’ll ever stop running. We just haven’t thought of it yet.

Nope. Turing proved that the halting problem, as it is called, is uncomputable, by a stunning piece of self-reference.  Suppose we had a program H that could infallibly decide whether any given program halts.  Then we could write the following program:

If H says this program halts, run forever; otherwise, halt.

Whatever H predicts about this program is wrong, so no such H can exist: the halting problem is uncomputable.
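In pseudo-Python the self-referential program looks like this (a sketch only; `halts` is the hypothetical infallible checker, and the whole point is that no such function can exist):

def paradox():
    # halts(f) is hypothetical: it would return True iff f() halts.
    if halts(paradox):      # ask the checker about ourselves...
        while True:         # ...and do the opposite of its verdict
            pass
    # ...otherwise halt immediately

# If halts(paradox) says True, paradox runs forever: the checker was wrong.
# If it says False, paradox halts at once: wrong again. Contradiction.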

What does this have to do with big numbers?  Well, in 1962 Tibor Rado published an article describing the biggest numbers anyone had ever heard of.

His idea was the following.  He defined the Busy Beaver function BB(n) to be the maximum number of steps an n-rule Turing machine can take and still halt (machines that run forever don't count).  Now suppose BB(n) were computable.  Then to decide whether any given n-rule machine halts, we could simply run it for BB(n) steps: if it is still running at that point, it never stops.  That would make the halting problem computable, a contradiction, so BB(n) is uncomputable.  More than that: suppose some computable function D(n) were larger than BB(n) for every n.  Then running a machine for D(n) steps would decide halting just as well, the same contradiction.  Therefore, BB(n) grows faster than ANY computable function.  This beast can knock out Ackermann or any other function that comes along.
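The contradiction fits in a few lines (a sketch; `bb`, `rule_count`, and `halts_within` are hypothetical helpers, and `bb` is exactly the function that cannot exist):

def would_halt(machine):
    # bb(n) is hypothetical: the n-rule Busy Beaver number.
    bound = bb(rule_count(machine))
    # Run the machine for `bound` steps. If it hasn't halted by then,
    # the definition of BB guarantees it never will.
    return halts_within(machine, bound)

# A working would_halt() solves the halting problem, which Turing proved
# impossible -- so bb() cannot be computable.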

What if your friend knows about BB(n) too?  Well, suppose we had an Oracle that could tell us whether any given program halts.  Running Rado's construction on Turing machines equipped with that Oracle yields level-2 Busy Beaver numbers, uncomputable even by the Oracle.  Then we can posit an oracle for Oracle machines, and another above that, and so on, building a hierarchy of ever more potent Busy Beaver functions.  Now you've got your friend beat.

Will we ever upgrade from this Busy Beaver mess to something greater? It's hard to see how, but then again, no one could have predicted that anything would beat

9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9.

You might wonder why we can’t transcend the whole parade of paradigms, and name numbers by a system that encompasses and surpasses them all. Suppose you wrote the following in the biggest number contest:

The biggest whole number nameable with 1,000 characters of English text

Surely this number exists. Using 1,000 characters, we can name only finitely many numbers, and among these numbers there has to be a biggest. And yet we’ve made no reference to how the number’s named. The English text could invoke Ackermann numbers, or Busy Beavers, or higher-level Busy Beavers, or even some yet more sweeping concept that nobody’s thought of yet. So unless our opponent uses the same ploy, we’ve got him licked. What a brilliant idea! Why didn’t we think of this earlier?

Unfortunately it doesn’t work. We might as well have written
One plus the biggest whole number nameable with 1,000 characters of English text
This number takes at least 1,001 characters to name. Yet we’ve just named it with only 80 characters! Like a snake that swallows itself whole, our colossal number dissolves in a tumult of contradiction. What gives?
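(If you doubt the count, Python will confirm it:

phrase = "One plus the biggest whole number nameable with 1,000 characters of English text"
print(len(phrase))   # 80 characters, versus the 1,001 it supposedly requires

)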

The paradox I’ve just described was first published by Bertrand Russell, who attributed it to a librarian named G. G. Berry. The Berry Paradox arises not from mathematics, but from the ambiguity inherent in the English language. There’s no surefire way to convert an English phrase into the number it names (or to decide whether it names a number at all), which is why I invoked a "reasonable modern mathematician" in the rules for the biggest number contest. To circumvent the Berry Paradox, we need to name numbers using a precise, mathematical notational system, such as Turing machines—which is exactly the idea behind the Busy Beaver sequence. So in short, there’s no wily language trick by which to surpass Archimedes, Ackermann, Turing, and Rado, no royal road to big numbers.


You might also wonder why we can’t use infinity in the contest. The answer is: for the same reason we can’t use a rocket car in a bike race. Infinity is fascinating and elegant, but it’s not a whole number. Nor can we ‘subtract from infinity’ to yield a whole number. Infinity minus 17 is still infinity, whereas infinity minus infinity is undefined: it could be 0, 38, or even infinity again. Actually I should speak of infinities, plural. For in the late nineteenth century, Georg Cantor proved that there are different levels of infinity: for example, the infinity of points on a line is greater than the infinity of whole numbers. What’s more, just as there’s no biggest number, so too is there no biggest infinity. But the quest for big infinities is more abstruse than the quest for big numbers. And it involves, not a succession of paradigms, but essentially one: Cantor’s.


So here we are, at the frontier of big numbers, and if you try to tell your artist friend about this, they might say, "Who cares?"  It's a reasonable question, since there seems to be no application.  I have a few answers to it.

1.  Big numbers, as mentioned before, spark revolutions.  For example, the use of scientific notation to write big numbers helped fuel the Enlightenment, which in turn shaped the era of European imperialism.

2.  We see the effects of these big numbers today.  Without their study, we might never have probed the axioms of mathematics, an inquiry that revolutionized math and gave birth to computer science, something we all value today.

3.  We must remember that math's value doesn't lie solely in its applications.  As the physicist Albert Einstein remarked, "Pure mathematics is, in its way, the poetry of logical ideas," something your artist friend ought to appreciate.
