Monday, December 29, 2014

Big Numbers (I mean REALLY BIG NUMBERS)

Try playing this game at a party:

Two people each get an index card and do the following:


You have fifteen seconds. Using standard math notation, English words, or both, name a single whole number—not an infinity—on a blank index card. Be precise enough for any reasonable modern mathematician to determine exactly what number you’ve named, by consulting only your card and, if necessary, the published literature.

Whoever writes the larger number wins.  Of course, the rules don't allow "The number of grains of sand in the Sahara Desert" as that isn't well defined.  After trying this game on multiple people, this is what I found.

Most people attempt the following:

999999999999999999999999999999999999999999999999999999999999999999999999999999999

But what they don't realize is that writing 1s takes less time than writing 9s, so in the same fifteen seconds you can fit more digits and therefore a bigger number.  But of course, this is squashed by the clever kid who writes the following:

9999999999999999999999999999999999999999999999999999999999^9999999999999999999999

But even this is quite small, as one can do the following:

9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9

which completely squashes the one before.  (Note: after I told someone about how 1's are easier to write, and the 9^9^9... strategy, she tried the following

1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1^1)

Place value, multiplication, exponentials, and so on can all express the same numbers.  What makes them different is how concisely they express them.  The key to the biggest number contest is not swift penmanship, but rather a potent paradigm for concisely capturing the gargantuan.
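
Here's a quick back-of-the-envelope comparison in Python, just to see how fast the notations pull apart (digit counts via log10):

import math

log10_9 = math.log10(9)

print(len("9" * 83))                 # 83 digits: the string of 9s above
print(int(9 * log10_9) + 1)          # 9 digits: 9^9 = 387,420,489
print(int(9**9 * log10_9) + 1)       # 369,693,100 digits: 9^9^9, a three-story tower
# For a four-story tower 9^9^9^9, even the digit COUNT is itself a number with
# roughly 370 million digits, and the 42-story tower above is far beyond that.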

Humankind has always been curious about the very large: the stars in the sky, the grains of sand in the desert.  And a whole new dimension of numbers tends to arrive with each revolution in the number system itself.  Roman numerals (I, II, III, IV, V, ...) were adequate for addition, but as operations beyond addition, such as multiplication, grew in importance, they gave way to Arabic numerals.  And so we find that the evolution of big numbers accompanies major revolutions not just in mathematics but in the world, the Enlightenment that followed the Scientific Revolution being a prime example.  Big numbers mark major steps in human progress.

Large numbers also cut previously enormous things down to size.  For example, in ancient times, the number of grains of sand in a desert was the standard example of a ginormous amount.  Historian Ilan Vardi cites the ancient Greek term sand-hundred, colloquially meaning zillion, as well as a passage from Pindar's Olympic Ode II asserting that "sand escapes counting."  However, once Archimedes put exponentiation to work, such a number didn't look too large.  As Archimedes put it, the number of grains of sand needed to fill the universe takes only 4 characters to write: 10^63 (he didn't count the caret).

However, the reverse is also true within large numbers: numbers that seem small within a more potent paradigm may actually be gargantuan.  Consider, for example, the oft-repeated legend of the Grand Vizier in Persia who invented chess. The King, so the legend goes, was delighted with the new game, and invited the Vizier to name his own reward. The Vizier replied that, being a modest man, he desired only one grain of wheat on the first square of a chessboard, two grains on the second, four on the third, and so on, with twice as many grains on each square as on the last. The innumerate King agreed, not realizing that the total number of grains on all 64 squares would be 2^64-1, or about 18.4 quintillion, equivalent to the world's present wheat production for 150 years.
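
A quick two-line check of that grain count:

grains = sum(2**square for square in range(64))   # 1 + 2 + 4 + ... + 2^63
print(grains)                                     # 18446744073709551615
print(grains == 2**64 - 1)                        # True: about 18.4 quintillion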

Returning to our bigger number contest, are exponentials the best we can do?  It seems as though humans never developed a better encoding scheme until the twentieth century, when the forefathers of computer science built a much stronger paradigm.  These people, called formalists, wished to put the axioms of mathematics on rigorous footing, and one key area of interest was the definition of "computable."  Until then, primitive recursiveness defined computability for most people; roughly speaking, anything you could compute with loops whose lengths are fixed in advance.  However, in 1928, Wilhelm Ackermann showed the flaw in this definition through a function that was clearly computable, yet grows too explosively to be primitive recursive.  The annoyingly recursive rules for his function go as follows:

A(m,n) = 

Case 1: m = 0 => n+1
Case 2: m>0 and n = 0 => A(m-1, 1)
Case 3: m>0 and n>0 => A(m-1, A(m, n-1))
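
Here's a direct transcription of those three rules in Python, just for illustration; only tiny inputs are feasible, because the values and the recursion depth explode almost immediately:

import sys
sys.setrecursionlimit(1_000_000)   # the naive recursion gets deep very quickly

def A(m, n):
    """Ackermann's function, following the three cases above."""
    if m == 0:
        return n + 1                      # Case 1
    if n == 0:
        return A(m - 1, 1)                # Case 2
    return A(m - 1, A(m, n - 1))          # Case 3

for m in range(4):
    print([A(m, n) for n in range(5)])
# A(4, 2) already has 19,729 digits; this naive version would never finish computing it.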

To get a feel for its growth, collapse it into a single-argument sequence: each term iterates the operation of the one before it (addition, then multiplication, then exponentiation, then towers of towers), roughly like this:

A(1) = 1+1
A(2) = 2*2*2
A(3) = 3^3^3^3
etc...

And Ackermann proved that no matter what whole number inputs are given for m and n, the function will eventually give an answer, albeit after a LONG time.  Wielding the Ackermann sequence, we can clobber unschooled opponents in the biggest-number contest.  Just for the fun of it, I recommend writing the following on your card:

Ackermann(Ackermann(googolplex, googolplex), Ackermann(googolplex, googolplex))  ;)


Ackermann numbers are pretty big, but they’re not yet big enough. The quest for still bigger numbers takes us back to the formalists. After Ackermann demonstrated that ‘primitive recursive’ isn’t what we mean by ‘computable,’ the question still stood: what do we mean by ‘computable’? In 1936, Alonzo Church and Alan Turing independently answered this question. While Church answered using a logical formalism called the lambda calculus, Turing answered using an idealized computing machine—the Turing machine—that, in essence, is equivalent to every Compaq, Dell, Macintosh, and Cray in the modern world. Turing’s paper describing his machine, "On Computable Numbers," is rightly celebrated as the founding document of computer science.
"Computing," said Turing,
is normally done by writing certain symbols on paper. We may suppose this paper to be divided into squares like a child’s arithmetic book. In elementary arithmetic the 2-dimensional character of the paper is sometimes used. But such use is always avoidable, and I think it will be agreed that the two-dimensional character of paper is no essential of computation. I assume then that the computation is carried out on one-dimensional paper, on a tape divided into squares.
Turing continued to explicate his machine using ingenious reasoning from first principles. The tape, said Turing, extends infinitely in both directions, since a theoretical machine ought not be constrained by physical limits on resources. Furthermore, there’s a symbol written on each square of the tape, like the ‘1’s and ‘0’s in a modern computer’s memory. But how are the symbols manipulated? Well, there’s a ‘tape head’ moving back and forth along the tape, examining one square at a time, writing and erasing symbols according to definite rules. The rules are the tape head’s program: change them, and you change what the tape head does.
Turing’s august insight was that we can program the tape head to carry out any computation. Turing machines can add, multiply, extract cube roots, sort, search, spell-check, parse, play Tic-Tac-Toe, list the Ackermann sequence. If we represented keyboard input, monitor output, and so forth as symbols on the tape, we could even run Windows on a Turing machine. But there’s a problem. Set a tape head loose on a sequence of symbols, and it might stop eventually, or it might run forever—like the fabled programmer who gets stuck in the shower because the instructions on the shampoo bottle read "lather, rinse, repeat." If the machine’s going to run forever, it’d be nice to know this in advance, so that we don’t spend an eternity waiting for it to finish. But how can we determine, in a finite amount of time, whether something will go on endlessly? If you bet a friend that your watch will never stop ticking, when could you declare victory? But maybe there’s some ingenious program that can examine other programs and tell us, infallibly, whether they’ll ever stop running. We just haven’t thought of it yet.

Nope: Turing proved that the halting problem, as it is called, is uncomputable, by a stunning piece of self-reference.  Suppose such a halt-checking program existed.  Then one could write the following program:

If the checker says this program halts, run forever; if it says this program runs forever, halt.

Whatever the checker predicts, this program does the opposite, and that contradiction proves the halting problem's uncomputability.
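
Here's the same trick written as a Python-flavored sketch; the halts function is hypothetical (that's the whole point: it can't actually be written):

def halts(program, program_input):
    """Hypothetical, assumed-perfect halting checker. It cannot exist; bear with me."""
    ...

def contrary(program):
    # Ask the checker what `program` does when fed itself, then do the opposite.
    if halts(program, program):
        while True:      # predicted to halt? loop forever instead
            pass
    else:
        return           # predicted to run forever? halt immediately

# Now consider contrary(contrary): if it halts, the if-branch makes it run forever;
# if it runs forever, the else-branch makes it halt. Either way we contradict the
# checker, so no program with the power of halts() can exist.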

What does this have to do with big numbers?  Well, in 1962 Tibor Rado published an article about some of the biggest numbers anyone had ever heard of.

His idea was the following.  He defined the function Busy Beaver(n), or BB(n), to be the maximum number of steps an n-state machine on our Turing tape can run, started from a blank tape, while still eventually halting.  Now suppose BB(n) were computable.  Then to see whether any given n-state machine halts, we could just run it for BB(n) steps: if it hasn't halted by then, it never will.  That would make the halting problem computable, a contradiction, so BB(n) is uncomputable.  And more than this: suppose we had a computable function D(n) that is at least as large as BB(n) for every n.  Then running a machine for D(n) steps would settle whether it halts, and the halting problem would again be computable, another contradiction.  Therefore BB(n) eventually grows faster than ANY computable function.  This beast can knock out Ackermann, or any other computable function that comes along.
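
To make this concrete, here's a tiny Turing-machine simulator in Python (a rough sketch, not Rado's exact formalism).  The machine in it is the known 2-state, 2-symbol busy-beaver champion, which halts after 6 steps with four 1s on the tape, so BB(2) = 6:

from collections import defaultdict

champion = {
    ('A', 0): (1, +1, 'B'),   # in state A reading 0: write 1, move right, go to B
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'HALT'),
}

def run(machine, max_steps):
    """Run a machine on a blank tape; return (steps, ones written), or None if it exceeds max_steps."""
    tape = defaultdict(int)
    pos, state = 0, 'A'
    for step in range(1, max_steps + 1):
        symbol, move, state = machine[(state, tape[pos])]
        tape[pos] = symbol
        pos += move
        if state == 'HALT':
            return step, sum(tape.values())
    return None   # still running; with max_steps = BB(n) this would PROVE the machine
                  # never halts, which is exactly why BB(n) cannot be computable

print(run(champion, 100))   # (6, 4)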

What if your friend also knows about BB(n)?  Well, suppose we had control of some oracle that can figure out whether any program halts.  If we use the same busy-beaver construction, but on oracle computers, we get level-2 BB numbers, which are uncomputable even with the oracle's help.  And we can construct an oracle for oracle machines, and so on, building a hierarchy of ever more potent BB numbers.  Now you've got your friend beat.

Will we ever upgrade from this Busy Beaver business to something greater?  It's hard to see how, but then again, no one predicted that anything would beat

9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9^9.

You might wonder why we can’t transcend the whole parade of paradigms, and name numbers by a system that encompasses and surpasses them all. Suppose you wrote the following in the biggest number contest:

The biggest whole number nameable with 1,000 characters of English text

Surely this number exists. Using 1,000 characters, we can name only finitely many numbers, and among these numbers there has to be a biggest. And yet we’ve made no reference to how the number’s named. The English text could invoke Ackermann numbers, or Busy Beavers, or higher-level Busy Beavers, or even some yet more sweeping concept that nobody’s thought of yet. So unless our opponent uses the same ploy, we’ve got him licked. What a brilliant idea! Why didn’t we think of this earlier?

Unfortunately it doesn’t work. We might as well have written
One plus the biggest whole number nameable with 1,000 characters of English text
This number takes at least 1,001 characters to name. Yet we’ve just named it with only 80 characters! Like a snake that swallows itself whole, our colossal number dissolves in a tumult of contradiction. What gives?

The paradox I’ve just described was first published by Bertrand Russell, who attributed it to a librarian named G. G. Berry. The Berry Paradox arises not from mathematics, but from the ambiguity inherent in the English language. There’s no surefire way to convert an English phrase into the number it names (or to decide whether it names a number at all), which is why I invoked a "reasonable modern mathematician" in the rules for the biggest number contest. To circumvent the Berry Paradox, we need to name numbers using a precise, mathematical notational system, such as Turing machines—which is exactly the idea behind the Busy Beaver sequence. So in short, there’s no wily language trick by which to surpass Archimedes, Ackermann, Turing, and Rado, no royal road to big numbers.


You might also wonder why we can’t use infinity in the contest. The answer is, for the same reason why we can’t use a rocket car in a bike race. Infinity is fascinating and elegant, but it’s not a whole number. Nor can we ‘subtract from infinity’ to yield a whole number. Infinity minus 17 is still infinity, whereas infinity minus infinity is undefined: it could be 0, 38, or even infinity again. Actually I should speak of infinities, plural. For in the late nineteenth century, Georg Cantor proved that there are different levels of infinity: for example, the infinity of points on a line is greater than the infinity of whole numbers. What’s more, just as there’s no biggest number, so too is there no biggest infinity. But the quest for big infinities is more abstruse than the quest for big numbers. And it involves, not a succession of paradigms, but essentially one: Cantor’s.


So here we are, at the frontier of big numbers, and if you try to tell your artist friend about this, they might say, "Who cares?"  In this case, that is quite a reasonable question, since there seems to be no application.  I have a couple of answers.

1.  Big numbers, as mentioned before, go hand in hand with revolutions.  Compact notation for writing big numbers accompanied the Scientific Revolution and the Enlightenment, which in turn reshaped Europe and the rest of the world.

2.  We see the effects of these big numbers today.  Without the study of big numbers, we might never have probed the foundations of mathematics, a probing that revolutionized math and created computer science, something we all value today.

3.  We must remember that math's value doesn't lie only in its applications.  Albert Einstein, a famous physicist, once remarked that "pure mathematics is, in its way, the poetry of logical ideas," something your artist friend would appreciate.

Wednesday, November 26, 2014

The Banach Tarski Paradox for dummies

The Banach-Tarski Paradox is a historically significant result showing that it is possible to take a ball, break it into finitely many pieces, and rearrange the pieces to form two new balls, thereby doubling the mass and volume.  (Note: the following is a sketch of the proof; it has many untied ends and is NOT airtight.)  (Knowledge of sets and countability is assumed.  Learn more at: http://en.wikipedia.org/wiki/Set_%28mathematics%29 and http://en.wikipedia.org/wiki/Countable_set)

We will prove this in 6 simple steps:

Step 1:  Background

The set F2 is the set of all simplified "words" in a and b.  In simpler terms, it is every finite string built from a, a^-1 (which, for ease of typing, I will call -a), b, and b^-1 (likewise -b).  A sample word looks like the following:

                                                          a, b, b, -a, -b, -a, b 

Now, whenever one has an 'a' and a negative 'a' next to each other, they cancel, and the same goes for b and -b.  So, F2 doesn't include:

                                        -a, a, b, b            but it does include                 b, b
We also define S(x) as the subset of F2 that starts with x.

A good way to visualize this is the diagram below, where every word is a point; to reach that point, one takes the corresponding moves, where a is a step right, -a a step left, b a step up, -b a step down, and e is the empty word:

Therefore, S(a) is represented by the green circle and S(-a) by the red circle (the gaps in the circles are due to size limits on Blogger uploads).  But how does aS(-a) give the blue circle?  First of all, aS(x) means all strings that start with x, with an 'a' concatenated in front.  Whenever we concatenate the 'a' with a word that starts with '-a', the pair cancels, and we get a word that can start with any letter except 'a': if 'a' started our new word, that would mean our original word began -a, a, ..., which isn't simplified.  Conversely, any word that doesn't start with 'a' arises this way (stick -a in front of it, then put the 'a' back on).  Therefore

                                                                 F2 - S(a) = aS(-a)
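
To make the bookkeeping concrete, here's a little Python sketch that simplifies words and checks this identity on a couple of examples:

def inverse(letter):
    """-a <-> a, -b <-> b."""
    return letter[1:] if letter.startswith('-') else '-' + letter

def simplify(word):
    """Cancel adjacent inverse pairs (like -a, a or b, -b) until none remain."""
    out = []
    for letter in word:
        if out and out[-1] == inverse(letter):
            out.pop()                      # a cancellation: ..., -a, a, ... vanishes
        else:
            out.append(letter)
    return out

def in_S(word, letter):
    """Is this simplified word in S(letter), i.e. does it start with that letter?"""
    return bool(word) and word[0] == letter

w = ['-a', 'b', 'b', '-a']                 # a word in S(-a)
aw = simplify(['a'] + w)                   # prepend 'a'; the leading -a cancels
print(aw, in_S(aw, 'a'))                   # ['b', 'b', '-a'] False -- lands outside S(a)

v = ['b', '-a', 'b']                       # any simplified word NOT in S(a) ...
pre = simplify(['-a'] + v)                 # ... equals 'a' times this word in S(-a)
print(in_S(pre, '-a'), simplify(['a'] + pre) == v)   # True True: so F2 - S(a) = aS(-a)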

Also note:  b, b, b, b, b = b^5 (the exponent counts the number of repetitions)

We define the following:

W = S(a)                 X = S(-a)                     Y=S(b)         Z = S(-b)

The reason we defined the sets the following way was to get the following properties:

if x ∈ W, then ax ∈ W, and this holds for X with -a, Y with b, and Z with -b.

Also, the union of all 4 sets is F2-{empty}, and they have no elements in common.

WE ARE ALMOST DONE WITH THE BACKGROUND.  DON'T GIVE UP!

Two sets are "equidecomposable" (symbol: ~) if you can take one set, cut it into pieces, move them around and rotate them (no dilations, inversions, etc, otherwise the size of the pieces is changed) and get the other, as shown below:

For those of you with a greater mathematical background (I don't know why you are reading this then), this notion is an equivalence relation.  Challenge yourself to prove that it is.

We are finally ready to actually begin the paradox.

Step 2: Prove that a "broken circle" ~ a circle

A broken circle is a circle with a missing point on the circumference, without loss of generality call it D.  We take the circle and "cut" it into two pieces

A = {all points that can be reached by taking the point D and applying a fixed rotation r to it some whole number of times (including zero)}

A will look like this: {D, Dr, Dr^2, Dr^3, ...}  (Note: these points can also be written as D*e^(i*theta), but the r notation conveys the same idea.)

B = {All points on the circle that aren't in A}

If we rotate set A by r, all the elements of A are multiplied by r, as shown below

rA = {Dr, Dr^2, Dr^3, ...}

We want to make sure that Dr^n is never equal to Dr^m when n ≠ m (otherwise the points of A would repeat).  We can do so with the following argument:

For each pair n ≠ m, only countably many rotation angles make r^n = r^m (the angle would have to be a rational multiple of a full turn), and a countable union of countable sets is countable.  But there are uncountably many possible angles, so there exists a rotation r for which no two powers coincide.

Therefore, by rotating A by r and keeping B the same, a point disappeared on the circle: D.
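
Here's a quick sanity check of that bookkeeping in Python: label the point Dr^n by the integer n, so rotating by r just adds 1 to every label:

A = set(range(1000))           # a finite stand-in for {D, Dr, Dr^2, ...}; label n means D*r^n
rA = {n + 1 for n in A}        # rotating the whole set by r shifts every label up by one

print(0 in A, 0 in rA)         # True False: label 0, the point D itself, has vanished
# (The stray label 1000 appearing in rA is only an artifact of truncating an infinite
# set; for the true infinite A, rotating by r gives exactly A with D removed.)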

Step 3: A circle ~ A circle missing a countable number of points

We start with the circle.  We use Step 2 to remove a countable number of points, taking them out one at a time.  We must make sure that none of the points involved in our procedure overlap.  We will do so with the following argument:

The number of points needed to remove 1 point using the procedure in Step 2 is countable.  The number of points removed is countable.  So, we know by the fact that the union of countably many sets is countable (For those of you who are bothered by my usage of the axiom of countable choice, the ride only gets rougher.  I suggest stopping now.) that the number of points that we have to guarantee don't overlap is countable. But the number of rotations is uncountable, therefore we can find a "good" angle to rotate by which satisfies all our demands.

Step 4: A sphere ~ A sphere missing countably many points

We use the same method as in Step 3, but we now rotate about an axis of the sphere, chosen so that it misses the countably many points being removed.

Step 5: A ball missing its center ~ 2 balls missing their centers (notice how the steps are progressively getting shorter.)

Theorem: there exist two rotations of the ball whose combinations behave exactly like the elements of F2 (no nontrivial simplified word in them is the identity).

So, we can define W, X, Y, and Z as we did before, except that a and b are now rotations, and -a and -b are their inverses.  (Making this precise is where the axiom of choice earns its keep: pick one representative point from each orbit of the rotations; every other point is reached from its representative by a unique word in F2, and we sort the points into W, X, Y, and Z by the first letter of that word.)  If we rotate X by a to get aX, we find the following:

F2 - S(a) = aS(-a)   =>   F2 - W = aX   =>   F2 = W ∪ aX, and since a is just a rotation, those two pieces assemble into a whole ball, because F2 corresponds to the entire ball.  We can use the same logic on Y and Z and say that they make a second ball.  We must do all of this ignoring the centers of the balls, as the union of W, X, Y, and Z doesn't include the center.
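
I won't prove the theorem here, but here's a little Python sketch that at least pokes at it.  The two rotations below (by arccos(3/5), about the x- and z-axes) are a standard choice in write-ups of this theorem, and their matrices have exactly rational entries, so exact arithmetic can check that no short nonempty simplified word of them collapses to the identity:

from fractions import Fraction
from itertools import product

F = Fraction
A = ((F(1), F(0), F(0)),
     (F(0), F(3, 5), F(-4, 5)),
     (F(0), F(4, 5), F(3, 5)))          # rotation 'a': angle arccos(3/5) about the x-axis
B = ((F(3, 5), F(-4, 5), F(0)),
     (F(4, 5), F(3, 5), F(0)),
     (F(0), F(0), F(1)))                # rotation 'b': the same angle about the z-axis

def transpose(M):                       # a rotation's inverse is its transpose
    return tuple(zip(*M))

def mul(M, N):
    return tuple(tuple(sum(M[i][k] * N[k][j] for k in range(3)) for j in range(3))
                 for i in range(3))

identity = ((F(1), F(0), F(0)), (F(0), F(1), F(0)), (F(0), F(0), F(1)))
gens = {'a': A, '-a': transpose(A), 'b': B, '-b': transpose(B)}
inv = {'a': '-a', '-a': 'a', 'b': '-b', '-b': 'b'}

def simplified_words(max_len):
    for length in range(1, max_len + 1):
        for word in product(gens, repeat=length):
            if all(word[i + 1] != inv[word[i]] for i in range(length - 1)):
                yield word

for word in simplified_words(5):
    M = identity
    for letter in word:
        M = mul(M, gens[letter])
    assert M != identity, word          # never triggers: no short word collapses
print("no simplified word of length <= 5 acts as the identity rotation")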

Step 6: Ball ~ 2 Balls

By the trick from Step 2 (applied to a small circle inside each ball that passes through its center), we can insert the missing center point back into each ball.




Monday, October 6, 2014

How powerful is the most powerful computer possible?

(Note:  this article requires a knowledge of algebra)

As our computers get faster and smaller, the obvious question to ask is, "How far can they go?"  To answer this, we must understand what limits the power of an idealized computer.  The limits are the mass/energy of the system, Heisenberg's uncertainty principle, and the speed at which light can travel across the device.

The formulas for the uncertainty principle and energy-mass equivalence (to convert between mass and energy) are as follows:

∆E*∆t ≥ h/(4π)     and     E^2 = m^2*c^4 + p^2*c^2
   
We know that to maximize the energy of the system, we want our computer's mass flying around at as close to the speed of light as possible.  Since p = m*v, setting v = c in the energy-momentum relation turns it into the bound

E^2 ≤ 2*m^2*c^4

We plug this bound into the uncertainty formula:

∆t ≥ h/(4π*√2*m*c^2)

Now, we must also remember to add the time it takes to send information from one side of the computer to the other.  If we call the length of the computer L, this additional time is L/c.  Can L be as small as we want?  No: if the length gets too small for the mass, the computer collapses into a black hole.  The relevant scale is the Schwarzschild radius, which (up to a factor of 2) requires

L > G*m/c^2

Divide both sides by c to get the additional time.

L/c > G*m/c^3

We add this to our time equation

∆t ≥ h/(4π*√2*m*c^2) + G*m/c^3

We want to minimize this, so we take the derivative with respect to m and set it to zero.  (Note: since we want the ideal machine, we will treat the inequality as an equality, which makes this an approximation.)

d∆t/dm = -h/(4π*√2*m^2*c^2) + G/c^3 = 0

h/(4π*√2*m^2*c^2) = G/c^3

m^2 = (h*c^3)/(G*4π*√2*c^2) = (h*c)/(G*4π*√2) ≈ 1.67 * 10^-16 kg^2, so the optimal mass is m ≈ 1.3 * 10^-8 kg (roughly the Planck mass).
Plugging this mass back in makes the two delays equal, about 3.2 * 10^-44 seconds each, so the minimum time per bit is roughly 6.4 * 10^-44 seconds.  The maximum rate is one over that, which is

roughly 1.6 * 10^43 bits per second
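
Here's a quick numerical check of the minimization above, with rough SI values plugged in:

import math

h = 6.626e-34      # Planck's constant, J*s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

m = math.sqrt(h * c / (4 * math.pi * math.sqrt(2) * G))    # optimal mass, ~1.3e-8 kg
quantum_term = h / (4 * math.pi * math.sqrt(2) * m * c**2) # uncertainty-principle delay
light_term = G * m / c**3                                  # light-crossing delay
dt = quantum_term + light_term

print(m)                              # ~1.3e-8 kg, around the Planck mass
print(quantum_term, light_term)       # the two delays agree at the optimum, ~3.2e-44 s each
print(dt, 1 / dt)                     # ~6.4e-44 s per bit, i.e. ~1.6e43 bits per second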

So, how long would it take to "solve" chess with our ideal computer?  An article on chess.com
(read here: http://www.chess.com/blog/watcha/limits-of-quantum-computing-in-solving-chess)
has suggested that it would take up to 10^42 bits, so to find the amount of time it takes for our
computer to find the answer, we divide the bits by the rate:

10^42 bits / 1.6 * 10^43 bits per second ≈ 0.06 seconds, which, to put in perspective, is nothing
next to the age of the universe, about 4.35 * 10^17 seconds.  At the limits of physics, then, raw
speed is not the obstacle; actually building anything close to this ideal computer is.

Sunday, September 28, 2014

Euphemisms

In World War I, many soldiers developed a psychological condition caused by the rapid gunfire on the field.  This condition was called "shell shock".  Simple, to the point, and anyone who hears the term understands the condition.  It even sounds like the guns themselves.  In World War II, the same condition was termed battle fatigue.  We doubled the number of syllables.  But fatigue is a nice word.  SHELL SHOCK!  Battle fatigue.  In the Vietnam War, the exact same condition was called operational exhaustion.  The meaning is lost in the dramatization and euphemization of the language.  Eventually, we renamed the syndrome post-traumatic stress disorder.  We added a hyphen!  Maybe this is why we study SAT vocabulary.

Americans can't handle reality.  They need soft language to conceal their fears.  The bathroom became the "powder room".  Car crashes became automobile accidents.  Partly cloudy became partly sunny.  Handicapped became handicapable.  And this is all to hide the rich people's fears while people live in slums.  Oh sorry, excuse me: the economically disadvantaged living in substandard housing within underdeveloped areas of the region.  This doesn't make them any richer.  They are still poor!

Changing the name doesn't change the condition.  Does anyone read King Lear anymore?  People are convinced that they are doing better than they are because of the language.  In schools, you are either outstanding or exceptional.  What about those people who don't do anything except play videogames all day?  They are "minimally exceptional".  We are all told that we can do whatever we want in life.  NO!  I can't become the best wrestler in the world.  It just won't happen.  And the American Dream forces us to forget that.

The area where people most often fool themselves is age.  People always say, "Oh, I am just a tiny bit older now."  Just admit it: you are old, and accept it.  We all age at some point.  I once heard someone say that they are 90 years young.  Are we so obsessed as a society with removing all traces of fear that we describe a human property using its antonym?  This all is going to make me vomit, or actually, engage in a proprietary protein spill.

Tuesday, June 17, 2014

Why is Science so Serious?

If you've ever read a scientific paper, it probably looks something like this:

Recall that G is a compact, connected, simply connected, semisimple Lie group of rank n. Let T ⊂ G be a maximal torus in G. Letting t denote the Lie algebra of T, we have T ≃ t/Λ, where Λ = π1(T) ≃ Z^n. Let Tˆ be the dual torus to T, defined as Tˆ = t*/Λ*. Let t_1, ..., t_n be a basis for Λ and t^1, ..., t^n the dual basis. Using H^1(Tˆ, Z) ≃ Λ, we identify t_1, ..., t_n with a basis of 1-forms on Tˆ. Similarly t^1, ..., t^n define a basis of 1-forms for T. The projection π : G → G/T is a principal torus bundle of rank n and has a Chern class c ∈ H^2(G/T, Λ). Using the basis t_1, ..., t_n, we write c = c_i t_i, where c_i ∈ H^2(G/T, Z). This defines a twisting class κ = c_i ∪ t^i ∈ H^3(G/T × Tˆ, Z).

(Credit to David Baraglia and Pedram Hekmati, just in case I might be sued for stealing someone's discovery.)

I have no doubt that these guys have done an excellent job in their paper, figuring out something cool that could plausibly intrigue the rest of us.  But it doesn't.  Why?  Because it's illegible.  All I understood from reading that introductory excerpt was Lie group (whatever that is), algebra (a nightmare for most of us), and torus (like my morning donut?).  After doing a lot more research on the topic, I learned that the paper talks about how tori (like donuts) can rotate and twist in space, which is pretty cool.

Why do papers have to be so serious?  They have to be because it gives them a bit of secrecy, a bit of mystery, and most importantly, it gives the work value.  For example, three men named Alpher, Bethe, and Gamow wrote a paper (actually, only Alpher and Gamow wrote it; Bethe was Gamow's friend, and his name was added as a play on the first three letters of the Greek alphabet).  Although the work was groundbreaking, both scientists and non-scientists scoffed at the paper as less than profound, just because it wasn't as serious as it was supposed to be.  Another example is a paper named "General second order scalar-tensor theory, self tuning, and the Fab Four."  The journal forced the last part of the title to be removed, on the grounds that "the Beatles had little to do with physics," along with all references to the band.  Thankfully, the original version still exists on arXiv.  All links are provided at the end of the article.

With science this serious, we dissuade the next generation from entering these fields, because the writing purposely makes them feel off-limits to ordinary people.  As Tyler DeWitt points out in his TED talk, telling a story about bacteriophages, instead of stating their function in fancy terminology, makes kids much more interested in science.  Scientists need to realize that keeping their work cryptic doesn't increase respect for it; it just decreases understanding of it.

Links:

"Alpha Beta Gamma" paper : http://journals.aps.org/pr/pdf/10.1103/PhysRev.73.803

Fab Four paper : http://arxiv.org/pdf/1106.2000v2.pdf

Tyler DeWitt's TED talk : https://www.youtube.com/watch?v=6OaIdwUdSxE

Sunday, June 15, 2014

Why Yugioh has gotten out of hand

When I was 6, I saw my first anime show, Yugioh, and I was hooked.  I forced my parents to buy me set upon set of Yugioh cards to play with.  And I still do play occasionally, but this is probably the end.

Yugioh was fun because it combined aspects of intellectual card games and trading card games.  It has an extremely complicated rule set to learn, and playing requires a great deal of strategy.  If you look up chaining, for example, you will find hundreds of pages of documentation on how to properly use the chain.  The cool aspect of the game was that you could reduce luck by creating combos that work no matter what you draw.  And you have no idea what cards your opponents have.

Now, this was all during the peak of the game, which was around 2002.  Since then, it's been on the decline.  The reason is that Konami, the creator of the game, keeps making the game cheaper and cheaper.  Here's why.  Back in 2002, there were normal cards and cards called Fusion cards, which had a slight advantage over normal cards.  Everyone purchased a lot of Fusion cards, but after one point, everyone was content and sales went down.  To increase sales, Konami created a new genre of cards called Synchro cards, which were much stronger than Fusion cards.  Now, all the people who were content with their Fusion cards had to buy Synchros to keep up with the new game.  Konami did this again in 2010 with Xyz monsters, and now they are trying to do it AGAIN with new cards called Pendulum monsters, which completely ruin the strategy of the game, because once someone draws them, it's game over for the opponent, no matter what combos they have.

By bringing money into Yugioh, Konami has messed up the best aspects of the game.  Nowadays, when people play, they say you must not use any Pendulums or Xyzs or Synchros.  This is unfortunate because it hurts the fans, and it hurts Konami too, as many players have walked away from the game because of this.  Konami has made a big mistake.

Was Brian Cox Wrong?


Recently, I watched Brian Cox, a well known physicist, on a TV lecture called Night With the Stars.  Here's a link to it: https://www.youtube.com/watch?v=5TQ28aA9gGo.  

The lecture was quite fascinating, as Brian is well known for making physics accessible to all.  But Brian might have stepped too far by oversimplifying one of the greatest achievements in the history of physics: the Pauli exclusion principle.

The Pauli exclusion principle basically says that no two identical fermions (particles with half-integer spin) can occupy the same quantum state.  For an electron in an atom, that state is described by 4 quantum numbers, so no two electrons in the atom can share the same 4 numbers.  The numbers are the quantum "spin" (not actually spin in the everyday sense; no one knows what quantum spin looks like), the energy level, the azimuthal quantum number, and the magnetic quantum number (in simple terms, describing its magnetic properties).
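
To see the principle in action, here's a tiny Python count of how many distinct combinations of those numbers the usual rules allow at each energy level, recovering the familiar 2, 8, 18, 32 shell capacities (the 2n^2 pattern):

def states_in_shell(n):
    """All allowed (n, l, m_l, m_s) combinations for principal quantum number n."""
    return [(n, l, m_l, m_s)
            for l in range(n)                    # azimuthal number: 0 .. n-1
            for m_l in range(-l, l + 1)          # magnetic number: -l .. +l
            for m_s in (-0.5, +0.5)]             # spin "up" or "down"

for n in range(1, 5):
    print(n, len(states_in_shell(n)))            # 2, 8, 18, 32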

In the presentation, Brian Cox basically ignored all of the quantum numbers except for the energy level (formally known as the principal quantum number) and made it seem as if everything were connected, in the way some religious philosophies claim.  After the presentation, many people started talking about how quantum mechanics "supports" the idea that everything is spiritually connected and that we are connected with god.

Many physicists accused Brian of distorting the meaning of quantum mechanics by oversimplifying it and giving scientific spiritualists room to make absurd comments.  Brian published an article to patch up his blunder and to explain how uncertainty due to the observer effect and the uncertainty principle (two things that are also very often confused; I'll make a post about that one too) makes a "spiritual connection" impossible.

Was Brian Cox wrong to simplify the Pauli exclusion principle?  In my opinion, no, because he was just trying to make it easier for non-physicists to understand and appreciate.  Imagine him trying to explain quantum spin when we don't even know what it is!  Sure, he made a few technical errors, but he shouldn't be criticized as much as he has been.