Introduction

*Skewes' Number* is a bounding value that tends to get honorable mention in discussions about large numbers. It was once recognized as the largest number ever used in a serious mathematical proof (a title that was superseded by *Graham's Number* in 1977) and is notable for being larger than a *googolplex*.

The research that led to its creation has to do with the distribution of primes. Prime numbers appear to have no discernible pattern. However, upon closer inspection one finds that they show signs of both order and chaos. This can be illustrated in the image appearing at the top of this page. By organizing the numbers around a spiral path, and highlighting the primes in red, interesting patterns appear to emerge but are also permeated by a certain randomness. This is known as Ulam's spiral, and it's just one example of how primes can display tantalizing order, yet fail to be entirely predictable.

Throughout history mathematicians have tried to come up with a formula that would generate the nth prime number, or at least a sequence that contained only primes, but neither of these endeavors has ever succeeded. The next best thing seemed to be a way of understanding the distribution of primes. That is, could you at least approximate the number of primes within a given range?

Mathematicians use the notation π(x) to represent the *prime-counting function*. If *x* is a real number we can say that:

π(x) = Number of primes less than or equal to x.

The first few non-negative integral values for this function are:

π(0) = 0

π(1) = 0

π(2) = 1

π(3) = 2

π(4) = 2

π(5) = 3


One consequence of the nature of primes is that π(x) becomes increasingly difficult to compute the larger x is. Note also that just because we've given the function a name doesn't mean we have provided a way to compute it.

One of the best ways to determine whether a number, n, is prime or not is to try to divide it by every prime whose square is less than or equal to n. This method is pretty quick even for larger double-digit numbers.

For example, to prove that 97 is prime consider the following:


97/2 = 48 + 1/2

97/3 = 32 + 1/3

97/5 = 19 + 2/5

97/7 = 13 + 6/7


The next prime is 11, but since 11^2 = 121 and 121 > 97, we only need to test 2, 3, 5, and 7. Since 97 is not divisible by any of these, it follows that it is prime.
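The procedure above can be sketched in code. This is a minimal trial-division check; for simplicity it divides by every integer up to √n rather than only primes, which gives the same verdict:

```python
def is_prime(n):
    """Return True if n is prime, by trial division up to the square root."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:   # stop once d^2 > n, just like the hand test
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(97))  # True: 97 survives division by 2, 3, 5, and 7
```

For 97 the loop only tries d = 2 through 9, mirroring the hand calculation.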

One way to compute π(x) therefore is to simply test every positive integer less than or equal to x for primality and then add up the results. The problem is that the amount of computation would increase slightly faster than the value x. This means that to compute π(100) would require at least ten times the computation of π(10). In truth though we needn't test primality for each number. Any number greater than 9 ending in 0, 2, 4, 5, 6, or 8 is clearly not prime because it would either be even, divisible by 5, or both. This means the only numbers above 9 that are eligible to be prime are those ending in 1, 3, 7, and 9. However this is not as great an improvement as it might at first seem. When x becomes very large, the fact that we only have to test about 40% of all numbers less than x makes little difference. To compute π(1000) we would still have to test about 400 numbers. Also, as the numbers increase we will have to divide each one by more primes, and the divisions will get more difficult. Furthermore, this method tells us nothing about the


*pattern* of distribution. We can assume that the primes must be thinning out as we go along, but in what manner? Is there some kind of shortcut we could devise for computing π(x) that doesn't involve testing for primality?
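For modest x there is at least a way around testing each number separately: a sieve of Eratosthenes crosses off composites wholesale. A minimal sketch of computing π(x) this way (it still enumerates primes, so it is not the kind of analytic shortcut pursued below):

```python
def prime_count(x):
    """pi(x): count of primes <= x, via a sieve of Eratosthenes."""
    if x < 2:
        return 0
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= x:
        if is_prime[p]:
            # cross off every multiple of p, starting at p^2
            for multiple in range(p * p, x + 1, p):
                is_prime[multiple] = False
        p += 1
    return sum(is_prime)

print(prime_count(100), prime_count(1000))  # 25 168
```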

**An Approximation for π(x)**

Prime numbers have been known about at least as far back as classical Greece, but interest in them waned after this golden age of mathematics. A renewed interest in primes began sometime in the 18th century (1700s). Large tables of primes were being compiled around this time. For example, in 1777 Anton Felkel published a table of complete decompositions of all integers not divisible by 2, 3, and 5 from 1 to 408,000[1]. This and similar tables were used as the basis for making an educated guess on the distribution of primes.

In 1798 Adrien-Marie Legendre[2] conjectured that π(x) can be approximated by the function:


x / (A ln x + B)

where ln x is the natural logarithm of x, and A and B are unspecified constants. In 1808 he further proposed that A = 1 and B = -1.08366.

As we saw in our prime number list in chapter 1.5 (see here) there are only 25 primes less than 100. So it follows that:


π(100) = 25

plugging in 100 for x, and the suggested values for A and B we would obtain:

100 /(ln100-1.08366) ~ 100/(4.60517-1.08366) = 100/3.52151 ~ 28.40

A slight overestimate, but not too bad. Now let's try 1000. Again, from our table from 1.5 we know that:

π(1000) = 168

Again plugging in we obtain:

1000 / (ln1000-1.08366) ~ 171.70

Again we obtain a slight overestimate, but a close value.
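These two evaluations are easy to reproduce; a quick sketch of Legendre's approximation with his proposed constants A = 1 and B = -1.08366:

```python
import math

def legendre_pi(x):
    """Legendre's approximation to pi(x): x / (A*ln(x) + B) with A = 1, B = -1.08366."""
    return x / (math.log(x) - 1.08366)

print(round(legendre_pi(100), 2))   # ~ 28.40
print(round(legendre_pi(1000), 2))  # ~ 171.70
```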

In any case Legendre's function was eventually superseded by the Logarithmic Integral function. It is usually denoted by li(x). It can be defined for all non-negative real numbers (except at 1) by the following integral form[3]:

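Written out, the integral in question is the area under 1/ln t (taken as a principal value, since the integrand blows up at t = 1):

```latex
\operatorname{li}(x) = \int_0^x \frac{dt}{\ln t}
```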

We needn't worry ourselves too much about what this all means exactly, but in a nutshell li(x) computes the area under the curve defined by 1/ln t from 0 to x. A graph of li(x) looks something like this:

When we compute li(x) we also find that it is larger than π(x). For example:


li(2) ~ 1.05 > 1 = π(2)

li(10) ~ 6.17 > 4 = π(10)

li(100) ~ 30.13 > 25 = π(100)

li(1000) ~ 177.61 > 168 = π(1000)
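The li values quoted here can be reproduced without doing the integral numerically, using the standard series li(x) = γ + ln(ln x) + Σ_{k≥1} (ln x)^k / (k·k!), valid for x > 1. A sketch (the cutoff of 200 terms is an arbitrary choice, ample for these inputs):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def li(x):
    """Logarithmic integral for x > 1, via gamma + ln(ln x) + sum of (ln x)^k / (k*k!)."""
    u = math.log(x)
    total = EULER_GAMMA + math.log(u)
    term = 1.0  # runs through u^k / k!
    for k in range(1, 200):
        term *= u / k
        total += term / k
    return total

for x in (2, 10, 100, 1000):
    print(x, round(li(x), 2))  # ~ 1.05, 6.17, 30.13, 177.61 respectively
```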

Interestingly it seems that Legendre's function is a closer fit, but that's because the values for A and B were probably calibrated to carefully match with known values of π(x) at the time.

104,729 is the 10,000th prime[4]. Therefore π(104,729) = 10,000. Plugging 104,729 into the two functions we obtain:


104,729/(ln104,729-1.08366) ~ 9997.55

li(104,729) ~ 10,039.74


Legendre's function can be seen to lag behind π(x) at this point. The Logarithmic Integral, however, can be seen to be larger than the expected value. In every instance we see that li(x) is larger than π(x). This would suggest that it acts as an upper bound, letting us know for certain that π(x) can not be above a certain value. But is it always larger for every value of x?

*Trivially*, li(x) is less than π(x) when x is between 0 and 1, because π(x) must be zero in this range, but li(x) takes on negative values over this interval (see graph). At some point past 1 however, li(x) overtakes π(x) just before π(x) reaches the value of 1. This occurs at a value known as the *Ramanujan–Soldner Constant*[5], which is approximately 1.45. So we can rephrase the question as: *Is π(x) < li(x) for all real x > 2?*

We can at least try to come up with even larger prime sequences to see if this remains true, however. In our own times you can find online lists of sequential primes all the way up to at least the 50 millionth prime[6]! This makes my little list of the first 200 primes seem absolutely picayune! According to one site the 50 millionth prime is:

982,451,653

Thus π(982,451,653) = 50,000,000 = 5E7. Computing for li(x) I obtain:

li(982,451,653) ~ 50,002,080.64

Although relatively close, li(x) is STILL a little bit larger! To test an even vaster value, we can use the fact that the *one trillionth* prime is 29,996,224,275,833[7]. That suggests that only about 1 in 30 numbers are prime between 1 and 30 trillion. Computing li(x) I obtain:

li(29,996,224,275,833) ~ 1,000,000,155,430

li(x) is STILL a bit larger! It would seem that, if even into the trillions li(x) is still larger than π(x), we could expect this trend to continue indefinitely!

Some mathematicians suspected this was the case, but there was at least a reasonable doubt because of the random nature of primes. It was conceivable that the primes might at certain points percolate above li(x) and then return to their inferior status. The question of whether li(x) is always larger was finally answered at the beginning of the 20th century. In many ways, the answer is more surprising than the question!

**Skewes' Numbers & The Riemann Hypothesis**

In 1914 John Edensor Littlewood[8] showed that there must exist an integer greater than 2, call it z, such that:

li(z) < π(z)

Because there exists such an integer, by necessity there must be a least integer (greater than 2) for which this is true. In fact he was able to show that the sign of li(x) - π(x) changes infinitely often! What this means is this: As li(x) and π(x) make their way towards infinity they are constantly crossing over each other! That is, the functions keep on outpacing each other, but without any permanent lead.

The next natural question would be: when does π(x) first cross over li(x)? This question was first tackled by Littlewood's student Stanley Skewes.

In 1933 what Skewes was able to prove was that the first crossing must occur before:


assuming the Riemann hypothesis is true. This is


*Skewes' Number*. Essentially it acts as an upper bound for the first crossing. "e" here refers to the transcendental number which can be defined as:

e = 1 + 1/1! + 1/2! + 1/3! + ...

where the terms continue on infinitely. This value can be approximated as:

e ~ 2.718281828459...
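The factorial series converges very quickly; twenty terms already exhaust double precision. A quick sketch:

```python
import math

total = 0.0
term = 1.0  # starts at 1/0!
for k in range(20):
    total += term
    term /= k + 1  # next term is 1/(k+1)!

print(total)  # ~ 2.718281828459...
```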

The Riemann hypothesis is more difficult to explain. It has to do with the "zeroes" of the *Riemann Zeta function*. The Riemann Zeta function[9] can be defined as:

zeta(C) = 1/1^C + 1/2^C + 1/3^C + ...

when C is a complex number whose real part is greater than 1. In this form the function is known as the "Euler Sum". The function is said to converge provided the real part is greater than 1. When C = 1 we get the harmonic series:

zeta(1) = 1/1 + 1/2 + 1/3 + ...

which diverges to infinity. When C=2 we get:

zeta(2) = 1/1+1/4+1/9+1/16+1/25+...

This is known to converge to (pi^2)/6 ~ 1.644934. As we increase the value of C along the real axis, we find that the value of convergence approaches 1. For example:

zeta(3) ~ 1.202

zeta(4) ~ 1.0823

etc.

If we get closer to 1 we find that the values get larger and larger:

zeta(1.5) > 2.412

zeta(1.3) > 3.095

zeta(1.1) > 4.278

zeta(1.01) > 5.083

zeta(1.001) > 5.176

etc.
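These figures behave like partial sums of the defining series: every finite partial sum is a lower bound (hence the ">" signs), and near C = 1 the sums creep upward without limit. A quick sketch:

```python
import math

def zeta_partial(c, terms):
    """Partial sum of the Euler sum: 1/1^c + 1/2^c + ... + 1/terms^c."""
    return sum(1.0 / n ** c for n in range(1, terms + 1))

print(zeta_partial(2, 100_000))     # approaches pi^2/6 ~ 1.644934
print(zeta_partial(1.01, 1_000))    # a lower bound on zeta(1.01), still climbing
print(zeta_partial(1.01, 100_000))  # noticeably larger with more terms
```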


This makes some sense since we are really approaching the harmonic series. Furthermore if the real part of a complex number, C, is greater than 1, it follows that the absolute value of C must also be greater than 1. Since the real numbers greater than 1 converge, it seems reasonable that complex numbers with radius greater than 1 would also converge.

Conversely, if the complex number has real part equal to or less than 1, this sum is known to diverge. In this case the "Euler Sum" must be distinguished from the "zeta function". Let E(C) be the Euler sum. For example:


E(0) = 1/1^0 + 1/2^0 + 1/3^0 + ... = 1 + 1 + 1 + ... = ∞

This will also happen with any negative real, since 1/n^(-x) = n^x, and since x > 0 it follows that n^x >= 1. The sum must then be at least as large as an infinite number of 1s and will therefore diverge. As an example we have:

E(-2) = 1/1^(-2) + 1/2^(-2) + 1/3^(-2) + ... = 1 + 4 + 9 + ... = ∞

So -2 leads to the sum of all squares. -3 would lead to the sum of all cubes, etc.

So basically we come to this conclusion: if the real part of C, denoted "re(C)", is greater than 1, then the Euler sum converges to some non-zero value, and if re(C) =< 1 then it diverges, so how can the Riemann Zeta function have any "zeroes" then? That's where Riemann comes in. Recall that the Riemann function is only equal to the Euler sum when re(C) > 1. When re(C) =< 1 this is no longer true. Consequently all of the zeroes must occur when re(C) =< 1.

In 1859 Bernhard Riemann defined his *Riemann Zeta function* as the analytic continuation of the Euler sum to all complex numbers except 1. Basically, he defined a function which is equivalent to the Euler sum for all re(C) > 1, but which is also defined everywhere else except at 1. The *Riemann Zeta function* can be defined as[10]:

Here is a complex graph of the Zeta function:

The origin is roughly in the center of the graph, to the immediate right of the "dark zone" and prior to the "red zone". Each point represents a complex input, and the output is displayed as a color whose hue defines the angle of the complex output, and whose brightness defines its absolute value. The red hue corresponds to the positive real direction, cyan to the negative reals, chartreuse (yellow-green) to positive imaginary numbers, and violet to negative imaginary numbers. The "red zone" is the Euler sum region, and the other half is the analytic region. Black corresponds to "zero". Although the "dark region" seems black, in fact this is merely a dim area where the outputs have small non-zero magnitudes.

The Riemann Hypothesis is a conjecture proposed by Bernhard Riemann, also in 1859[11]. It states that all of the non-trivial zeroes of the *Riemann Zeta function* occur when the real part of C = 1/2. The trivial zeroes refer to the negative even integers C = -2, -4, -6, etc. On the graph you can actually see the non-trivial zeroes. They lie on a vertical line that separates the Euler region from the analytic region. This line intersects the "tips" of the "color bands". At the furthest extremity you might be able to observe a black dot. Those are the Riemann zeroes. So far these zeroes have always been found along this line, known as the "critical line", and the question is: do they always?

Riemann wasn't able to prove his conjecture, and so far, no one else has been able to either. The Riemann Hypothesis is one of the great unsolved problems of 19th and 20th century mathematics. It is listed amongst Hilbert's 23 Unsolved Problems. If this wasn't enough to give it a place of importance in mathematics, it also has important consequences for the distribution of primes! In fact the zeta function is closely related to primes:

Where p stands here for the nth prime. Whether or not the Riemann Hypothesis is true has consequences for when the first crossing of li(x) and π(x) should occur. Mathematicians have been somewhat divided on whether or not the hypothesis is in fact true. Although some flatly state that there is no real proof or reason to believe it's true, the general consensus seems to be that it's

*probably true.* As strong supporting evidence in its favor, it's worth noting that the first trillion zeroes are known to lie on the line re(C) = 1/2. Then again, we've been down that road before: even though li(x) - π(x) is positive up to the trillions, Littlewood proved that the value actually changes signs infinitely often. So just because there are many examples that do not contradict the Riemann Hypothesis doesn't necessarily mean that a counter-example doesn't exist. Numerical evidence alone can never conclusively prove the Riemann Hypothesis.

Riemann's Hypothesis has garnered a lot of attention since its inception, not only because it's a difficult unsolved problem in the study of complex functions, but also because it is connected to a lot of other areas of mathematics, especially number theory.

In any case, when Skewes published his result it drew attention not only for the sheer size of the upper bound, but also because it made Littlewood's proof more manifest, bringing us perhaps one step closer to understanding the distribution of primes.

*Skewes' Number* was also quickly recognized as the largest number ever used in a mathematical proof! It was also larger than a *googolplex*, which had only been defined some 13 years earlier in 1920.

The story isn't over yet though. In 1955 Skewes also provided a lesser known second upper bound without assuming the truth of the Riemann Hypothesis:

This is the *2nd Skewes' Number*. Either way, this means that the first crossing can not occur after the *2nd Skewes' Number*.

There are some remarkable things about this result. Firstly, it is surprisingly big for an upper bound. Since both numbers were larger than a *googolplex*, this helped contribute to their popularity. They became hailed as the largest numbers in mathematics until 1977 when *Graham's Number* hit the scene (see Chapter 3.2). Another reason Skewes' Number should be important to large number aficionados is because it shows that interesting and unexpected things do happen out there with very large numbers, and their sole mathematical property is not merely being very large.

Unfortunately, *Skewes' Numbers* are now more of a mathematical footnote because much better bounds have since been proved.

In 1966 Lehman was able to show that the first crossing occurs somewhere between 1.53E1165 and 1.65E1165, without assuming the Riemann Hypothesis. This was further improved: in 1987 H.J.J. te Riele proved an upper bound of 7E370. An even better upper bound of 1.39822E316 was discovered by Bays & Hudson in 2000. Furthermore, they showed that there are at least 1E153 consecutive integers where π(x) is greater than li(x). They went on to suggest that there is probably a consecutive streak as long as 1E311 integers within this range. This is about as small as the upper bounds have gotten.

From the other end, in 1962 Rosser & Schoenfeld rigorously proved there are no crossover points below 1E8. This lower bound was further improved by Brent in 1975 to 8E10, and by Kotnik in 2008 to 1E14.

Thus, if we wanted to find the crossover point with manual calculation, we'd at least have to check beyond the first 100 trillion integers! It should also be stated that at this time there is no explicitly known value of x with the property π(x)>li(x), despite the fact that π(x) has been computed up to E23 as of 2008[12].

Graphs of li(x)-π(x) reveal the erratic nature of the primes:

This graph reveals that the difference is very small relative to x, and therefore li(x) is a very good approximation. However, the unpredictable behavior of π(x) means that every so often it overtakes li(x) and the difference becomes negative. When this first happens is anybody's guess, but we know it must lie below E317. The problem is that even though this number "seems" small compared to something like *Skewes' Number*, it is still impractical to impossible to actually compute π(E317). Even though today's supercomputers can perform *quadrillions* of operations per second, this only amounts to E15. The big bang only happened a mere E18 seconds ago, so there is no way for us to have computed more than E33 operations at this time, so we're not even close to E317! So even though an actual x satisfying li(x) - π(x) < 0 seems tantalizingly close, and would be a number we could actually write out in full, we still can't actually say what the exact value of the smallest such integer x is!

As the lower and upper bounds of the first crossing have improved,

*Skewes' Number* has become mainly an interesting bit of obscure mathematics history. However, the upper bounds of Skewes have developed a life of their own outside of their initial purpose, as simply surprisingly big numbers, and have been appropriated by the googologists. For this reason, I think Skewes' upper bounds will live on, along with many other popular large numbers, like Archimedes' 10^(8*10^8), *googolplex, Mega, Moser, Graham's Number,* etc.

Now that we know where Skewes' Number really comes from, *just how phenomenally big is it anyway? And how would we compute this?*

**Sizing Up Skewes' Numbers**

What we need to do first to understand *Skewes' Number* is to convert it into a "base 10" power tower. To do this we need to first revisit logarithms. Recall that:

log_b(a) := x if and only if b^x = a

From this definition we can prove that b^(log_b(a)) = a:

Since log_b(a) := x only if b^x = a, it follows that b^(log_b(a)) = b^x = a.

From this we can now convert any exponential expression into base 10:
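For *Skewes' Number* the identity gets applied three times, once per level of the tower. A sketch of the chain (writing log for the base 10 logarithm throughout), which is where the expression 79loge + logloge evaluated next comes from:

```latex
e^{e^{e^{79}}} = 10^{\,e^{e^{79}}\log e}
             = 10^{\,10^{\,e^{79}\log e + \log\log e}}
             = 10^{\,10^{\,10^{\,\log\left(e^{79}\log e + \log\log e\right)}}}
             \approx 10^{\,10^{\,10^{\,79\log e + \log\log e}}}
```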

79loge+logloge ~ 33.947048381657431173562152093056910892139046390512...
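This figure is ordinary arithmetic with common logarithms, and easy to double-check:

```python
import math

log_e = math.log10(math.e)                 # log e ~ 0.4342944819...
exponent = 79 * log_e + math.log10(log_e)  # 79loge + logloge

print(exponent)  # ~ 33.9470483816...
```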

We now would like to know what the effect is of the last logloge being added. To find this we first must compute 10^33.947...

E33.9470483816... ~ 8,852,142,197,543,270,606,106,100,452,735,038 . 912 895 198 150 688 2...

We now compute:

logloge ~ -0.362215688463210877...

Combining our two results we obtain:


8,852,142,197,543,270,606,106,100,452,735,038 . 912 895 198 150 688 200...

- 0,000,000,000,000,000,000,000,000,000,000,000 . 362 215 688 463 210 877...

-----------------------------------------------------------------------------

8,852,142,197,543,270,606,106,100,452,735,038 . 550 679 509 451 225 ...

We now take the log of the above result to obtain the new top exponent:

log(8,852,142,197,543,270,606,106,100,452,735,038 . 550 679 509 451 225 ...)

~

33. 947048381657431173562152093056910874368401573643 ...


Observe how small the difference is between the value we obtained for 79loge+logloge, and our final result:

79loge+logloge ~ 33.9470483816574311735621520930569108**92139046390512...**

FINAL RESULT ~ 33.9470483816574311735621520930569108**74368401573643...**

The effect is so minuscule we could have ignored the effect of the last logloge in the expression. So basically we can say that:

e^e^e^79 ~ 10^10^10^33.947048381...

This is usually approximated as 10^10^10^34. It's important to note that 10^10^10^34 IS NOT *SKEWES' NUMBER*. It is merely an approximation. *Skewes' Number* is defined to be *exactly* e^e^e^79. Interestingly, although "e" is in the definition, it has not been proven that *Skewes' Number* is irrational, even though it almost certainly is! The issue would be easily solved if we could compute it directly, but because it's too large to compute its decimal expansion we would have to rely on analysis instead.

Okay, so how can we get a handle on this number? How about we compare it to something which we can just barely wrap our brains around, a *googolplex*. A *googolplex* is 1 followed by a *googol* zeroes, where a *googol* is 1 followed by 100 zeroes:

*googolplex* = 10^10^100

One way to think about this number is that it's a number so large that you couldn't write out its decimal expansion even if you wrote a zero on every atom in the observable universe! Note that it's incorrect to say we can't "write down" this number, because we just did! It's 10^10^100 (a number's expressibility is related to the form in which it is expressed).

In any case, a *googolplex* is a really mind-bogglingly huge number, but it's infinitesimal when compared to *Skewes' Number.* How many times larger is *Skewes' Number* than a *googolplex?* To find that we'd have to divide *Skewes'* by a *googolplex*:

(10^10^10^33.947)/(10^10^100)

Recall that:

(10^n)/(10^m) = 10^(n-m)

We can let N = 10^10^33.947 and M = 10^100. Then we can say:

(10^N)/(10^M) = 10^(N-M)

But what does that result in? Well:

(10^10^33.947) - (10^100)

Although a *googol* is large, it is nothing compared to 10^10^33.947. Therefore the subtraction would practically result in 10^10^33.947. Let's decrease the last digit by 1 to approximate the effect. The actual effect will be a lot smaller, so by doing so we're actually creating a number smaller than *Skewes'/googolplex.* We can then say that *Skewes' Number* is **at least** 10^10^10^33.946 times larger than a *googolplex*! What!? That's practically the size of *Skewes'* itself! So this is of absolutely no help in getting us to understand its size, other than to say that it **transcends** a *googolplex.*

Let's try something else. Let's just multiply a *googolplex* by itself and see what happens. Maybe we can work our way up to

*Skewes' Number.* Well, a *googolplex* times itself is:

(10^10^100)(10^10^100) = 10^(10^100+10^100) = 10^(2*10^100) ~ 10^10^100.3

So a

*googolplex googolplexes* is only 10^10^100.3?! Yep. Power towers act very strangely on account of the fact that they involve stacked exponents. Just as exponentiation is already a very compact form of numbers, power towers are even more drastically compact, and therefore even more counter-intuitive! Also note that:

10^10^100 = 10^10^10^2

&

10^10^100.3 ~ 10^10^10^2.0013
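These manipulations can be checked numerically: the second-level exponents (numbers around 10^100) still fit in double-precision floats, so we can compute the exponents directly without ever forming the towers themselves. A sketch:

```python
import math

# Squaring a googolplex: 10^10^100 * 10^10^100 = 10^(2*10^100)
second_level = math.log10(2 * 10 ** 100)  # ~ 100.30103
third_level = math.log10(second_level)    # ~ 2.0013

print(second_level, third_level)
```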


Yikes! That's barely an improvement, and *Skewes'* leading exponent is 33.947! Okay, we're going to have to get more ambitious. How about we multiply a *googolplex* by itself twice:

(10^10^100)(10^10^100)(10^10^100) = 10^(10^100+10^100+10^100) = 10^(3*10^100) ~ 10^10^100.47 = 10^10^10^2.002

Wow! We've already dwarfed a

*googolplex* yet we're just getting started. Try to imagine this: imagine a universe with a *googolplex* particles! Now imagine that massive universe ... is only a single member of a multiverse with a *googolplex* such universes!! Now imagine an omniverse with a *googolplex* such multiverses!!! That's what we have, and it's nothing.

Okay, you may already realize that a *googolplex googolplex* is just a *googolplex*^2, and a *googolplex googolplex googolplex* is just a *googolplex*^3. So why not try raising a *googolplex* to a larger power. How about *googolplex*^10. That would bring us up to a 10th order *googolplex universe*:

(10^10^100)^10 = 10^(10*10^100) = 10^10^101 = 10^10^10^2.0043

The leading exponent is still growing extremely slowly ... how about raising to the 100th power!

(10^10^100)^100 = 10^(100*10^100) = 10^10^102 = 10^10^10^2.0086

Still not enough ... the 1000th power:

(10^10^100)^1000 = 10^(1000*10^100) = 10^10^103 = 10^10^10^2.0128

Hmm. This is still way too slow to get anywhere, but all is not lost. You may have already noticed a pattern. The number of zeroes in the number we raise a *googolplex* to is also the number that the leading exponent is increased by! It follows then that:

(10^10^100)^(10^100) = 10^10^200

Now we're making some progress. Yet even if we raise a

*googolplex* to the power of a *googol* we still aren't close to *Skewes' Number.* What if we raise a *googolplex* to a *googolplex?* Then we get:

(10^10^100)^(10^10^100) ~ 10^10^10^100

This will actually overshoot the value. So we know that the answer lies somewhere between a *googol* and a *googolplex.* It turns out that we need to raise a *googolplex* to the power of a number "just a little" smaller than the logarithm of *Skewes'.* Therefore:

(10^10^100)^(10^10^33.946) ~ 10^10^10^33.947

So to reach


Can we substantiate the physicists claim? Sure. First let's assume we are playing with roughly 10^80 particles (the standard figure for the number of particles in the observable universe). How many ways can these be arranged? Roughly (10^80)!. That's the factorial of 10^80. How can we compute that? We'll it turns out that the factorial can be approximated as:

*Skewes' Number* we need to go up to the 10^10^33.946 order googolplex universe! Suffice it to say that it is now a silly understatement to say we can't write out the decimal expansion of *Skewes' Number*, and yet this is the most popular statement for any large number greater than a *googolplex* -_- But now you know better! Even if we had a

*googol googol googol googol googol ... ... ... ... ... ... ... ... ... ... ... ... googol googol googol googol* (say 10^31 times) observable universes you still wouldn't have enough particles to write out *Skewes' Number*!!! Now THAT is a proper description. If you need proof of this statement consider the following:

*Skewes' Number* is approximately 10^10^10^33.947 and is much larger than 10^10^10^33. 10^10^10^33 has exactly 1+10^10^33 digits, so we can say that *Skewes' Number* has **at least** 10^10^33 digits. There are 10^80 particles in the observable universe. There are (10^100)(10^80) particles in a *googol* observable universes, there are (10^100)(10^100)(10^80) particles in a *googol googol* observable universes, etc. That means that the number of particles in our example would be:

(10^100)(10^100) ... ... (10^100)(10^100)(10^80) w/ 10^31 (10^100)s

Adding up the exponents we would get:

10^(80+100*10^31) = 10^(80+10^33) ~ 10^10^33
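This exponent arithmetic is easy to confirm directly; a quick sketch in Python:

```python
import math

# (10^100)^(10^31) * 10^80 = 10^(80 + 100*10^31); add up the exponent:
exponent = 80 + 100 * 10**31
assert exponent == 10**33 + 80
print(round(math.log10(exponent), 6))  # ≈ 33.0, i.e. roughly 10^10^33
```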

So we would have roughly 10^10^33 particles, and therefore would not have enough to write out all the digits of

*Skewes' Number* even if we wrote a digit on each particle! So does

*Skewes' Number* have absolutely no physical meaning?! Well there are ways we can imagine *Skewes' Number* that are tangentially related to reality. One physicist described it this way: Imagine playing "chess" with the entire known universe. A move involves taking any two particles and having them exchange places. You keep exchanging particles until the exact same arrangement has occurred three times. The number of possible such games is roughly

*Skewes' Number!* That's got to make your head spin. Even the amount of time it would take for an arrangement to repeat, if you make a move once per second, would be staggering. The number of possible games would be even more mind boggling! Can we substantiate the physicist's claim? Sure. First let's assume we are playing with roughly 10^80 particles (the standard figure for the number of particles in the observable universe). How many ways can these be arranged? Roughly (10^80)!. That's the factorial of 10^80. How can we compute that? Well, it turns out that the factorial can be approximated as:

n! ~ (n/e)^n

So ...

(10^80)! ~ (10^80)^(10^80)/(e^10^80) ~ 10^10^80

Wow! That's how many ways the particles could be arranged. Basically that's how many states the "chessboard" can be in. So we can estimate that it would take roughly 10^10^80 moves for the same arrangement to repeat 3 times. How many possible moves do we have at every turn? Well we can pick any of 10^80 particles and switch it with any of the other 10^80-1 particles. Note however that switching particle A with B is the same as switching B with A, so we have to divide the product of 10^80 and 10^80-1 by 2. Thus the number of possible moves on any turn is:

(10^80)(10^80-1)/2 ~ 10^160

So now imagine that we have a game with 10^160 choices on every turn that goes on for 10^10^80 turns. The number of possible games can then be estimated as:

(10^160)^(10^10^80) ~ 10^10^10^80
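Here's a quick log-space check of that last estimate in Python; as before we only ever handle the exponents, never the numbers themselves:

```python
import math

# games ≈ (10^160)^(10^10^80) = 10^(160 * 10^10^80) = 10^10^(10^80 + log10(160));
# the extra log10(160) is a rounding error next to 10^80:
second_exponent = 10**80 + math.log10(160)
print(round(math.log10(second_exponent), 6))  # ≈ 80.0, so about 10^10^10^80 games
```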

So we get a number very much like


*Skewes'* though larger. To get *Skewes' Number* we would just need to adjust the number of particles in play to about 10^33.947, which is actually less than the number of particles the earth is made up of (about 10^50). Okay, if your head is still spinning, get your bearings fast, because we're about to encounter another surprise.

Recall that

*Skewes' Number* is usually approximated as 10^10^10^34. We know however that this is an upper bound on *Skewes'*. The question is, how much larger is this than *Skewes' Number*? Previously when we compared a *googolplex* to *Skewes' Number* we found we have to raise to a ridiculously large power, but that was because a *googolplex* was so small in comparison. However 10^10^10^33.947 and 10^10^10^34 are *very close* in value, ... right? How many times larger could 10^10^10^34 be than *Skewes' Number?*

So we carry out the division:

(10^10^10^34)/(10^10^10^33.947) = 10^(10^10^34 - 10^10^33.947)

Surely subtracting 10^10^33.947 has to have a significant effect on 10^10^34, unlike subtracting 10^100. Well, consider this:

10^10^34 = 10^10,000,000,000,000,000,000,000,000,000,000,000

10^10^33.947 ~ 10^8,851,156,000,000,000,000,000,000,000,000,000


So basically 10^10^34 is 1 followed by 10 decillion zeroes, while 10^10^33.947 is 1 followed by 8.8 decillion zeroes. What this means is that 10^10^34 is "1 followed by roughly 1.15 decillion zeroes" times larger! Wait, it gets worse. This means that subtracting 10^10^33.947 is like removing a googol-scopic particle from 10^10^34. In fact:

10^(-1+10^34) < 10^10^34 - 10^10^33.947 < 10^10^34

The effect is even more minuscule than subtracting 1 from the larger exponent! In fact, the difference is


*much much* closer to 10^10^34 than it is to 10^(-1+10^34). The change in the leading exponent will then end up being even more mind-bogglingly small. We can say that:

(10^10^10^34)/(10^10^10^33.947) > 10^10^10^33.99999999999999999999999999999999999

It seems like it barely had an effect on 10^10^10^34 at all! So again we can't use multiplication to wrap our brains around the difference in size. We must again use exponents. So here's the question, what real power, x, would raise

*Skewes' Number* to 10^10^10^34?

(e^e^e^79)^x = 10^10^10^34

I asked my dad to give a guess just to see what he would say. He guessed 5 :)

5 doesn't even come close. It turns out that:


(10^10^10^33.947)^(10^10^33.061) ~ 10^10^10^34

To prove this just observe:

(10^10^10^33.947)^(10^10^33.061) =

10^(10^10^33.947 * 10^10^33.061) =

10^10^(10^33.947+10^33.061) =

10^10^[(10^33)(10^0.947+10^0.061)] ~

10^10^[(10^33)(8.85+1.15)] =

10^10^[(10^33)(10)] =

10^10^10^34
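If you'd rather let a computer check the arithmetic, here's a sketch in Python that verifies both the 8.85 + 1.15 ≈ 10 step and the size of the power itself, working from the exact form e^e^e^79 for *Skewes' Number* (the two-decimal rounding is my choice):

```python
import math

# the key step above: 10^0.947 + 10^0.061 ≈ 8.85 + 1.15 ≈ 10
step = round(10**0.947 + 10**0.061, 2)

# and the power itself: (e^e^e^79)^x = 10^10^10^34 gives
# log10(x) = 10^34 - e^79 * log10(e), so x ≈ 10^10^33.06
top = round(math.log10(10**34 - math.exp(79) * math.log10(math.e)), 2)
print(step, top)
```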


So to reach 10^10^10^34 you'd have to go to the 10^10^33.061 order

*Skewes' universe. o_O;*

Mind you, each sphere is a


*Skewes' Number* times larger than the last. And may I remind you that this is merely the difference between 10^10^10^33.947 and 10^10^10^34. These kinds of "large numbers" are radically different than the large numbers we ordinarily encounter, like a *million, billion,* or *trillion.*

And we aren't done yet. We still have *2nd Skewes' Number* to consider. This number is e^e^e^e^7.705 (credit goes to Giga Gerard for bringing the exact value of *2nd Skewes'* to my attention). The first thing we want to do is get an approximation of this value in base-10 power tower form:

e^e^e^e^7.705 =

10^(loge*10^(loge*10^(loge*10^(7.705loge)))) =

10^10^(logloge+10^(logloge+10^(7.705loge+logloge))) ~

10^10^10^(logloge+10^(7.705loge+logloge)) ~

10^10^10^(10^(2.98402329437)-0.3622156)... ~

10^10^10^(963.880722156-0.3622156)... ~

10^10^10^963.518506467...
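This conversion is easy to replicate with ordinary floating point; a quick sketch in Python, where `loge` and `logloge` are the log(e) and loglog(e) values used above:

```python
import math

loge = math.log10(math.e)    # log(e)    ≈  0.43429
logloge = math.log10(loge)   # loglog(e) ≈ -0.36222
# e^e^e^e^7.705 ≈ 10^10^10^x where:
x = logloge + 10**(7.705 * loge + logloge)
print(round(x, 4))  # ≈ 963.5185
```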


So we'll say that


*2nd Skewes'* is approximately 10^10^10^963.5185. Skewes himself gave a base-10 upper-bound of 10^10^10^1000 in his original paper. This number is also frequently cited as 10^10^10^963 for obvious reasons. There are two important things to note about this value. First off, although it is often not stated, 10^10^10^963 is ONLY AN APPROXIMATION; it is not *2nd Skewes'.* Secondly, since the approximation is less than the actual value it is incorrect to say that Skewes proved that the first crossing must occur before 10^10^10^963 without assuming the Riemann Hypothesis. This is NOT what his paper shows. It only shows it must occur before e^e^e^e^7.705. For this reason I recommend always rounding upper-bounds up, never down. So the cited value should be 10^10^10^964. That might seem like a fine point, because shouldn't mathematicians have rounded it up to 10^10^10^964 anyway, since 963.51 is closer to 964? Well, as it turns out, it is actually much much closer to 10^10^10^963! Don't believe me? Well, what does it mean to say a number is closer to one value rather than another? In the most natural sense it means that the distance from the number to "A" is shorter than the distance from the number to "B". We can therefore figure out what

*2nd Skewes'* is closer to by computing the absolute differences between the two possible approximations. Let A = 10^10^10^963 and B = 10^10^10^964. We then want to find:

|*2nd Skewes' - A*|

&

|2nd Skewes' - B|


and find out which is larger. Since B is larger than

*2nd Skewes'* we know the difference *2nd Skewes'* - B is negative. This means the absolute value is the difference taken in reverse:

*|2nd Skewes' - B| = B - 2nd Skewes'*

Since

*2nd Skewes'* is larger than A we know the difference *2nd Skewes'* - A is positive. Therefore we can simply drop the absolute value signs:

*|2nd Skewes' - A| = 2nd Skewes' - A*

So let's first compute its distance from A:

*2nd Skewes' - A* = 10^10^10^963.5185 - 10^10^10^963 ~ 10^10^10^963.518499999999999999999999...

That is not an exaggeration ... it's an understatement! So the difference is essentially no smaller than


*2nd Skewes'* so it must be very very far away from 10^10^10^963, and closer to 10^10^10^964, right? Nope. Let's compute the other difference now: B -

*2nd Skewes'* = 10^10^10^964 - 10^10^10^963.5185 ~ 10^10^10^963.9999999999999999999999999...

Again this is not an exaggeration. So it turns out that the distance from B is 10^10^10^963.99 while the distance to A is only 10^10^10^963.51. Therefore,


*2nd Skewes'* is much much closer to 10^10^10^963 than 10^10^10^964. So if anything, it should be rounded down, not up! So let's just go with

*2nd Skewes'* being approximately 10^10^10^963.5185, rather than the simplified approximations, because there is a big difference! Now how does

*2nd Skewes' Number* compare to the original *Skewes' Number*? Well by now you can guess that we'd have to raise *Skewes' Number* to quite a significant power to get *2nd Skewes' Number.* Let's use 10^10^10^34 as *Skewes'* in this computation, just to keep on the up and up. First off, we know that the necessary power must be less than 10^10^963.5185 (the logarithm of

*2nd Skewes'*). Why? Because:

(10^10^10^34)^(10^10^963.5185) > (10)^(10^10^963.5185) = 10^10^10^963.5185

Alright, so let's just try 10^10^962. Surely that's got to be enough right? So we have:

(10^10^10^34)^(10^10^962) =

10^(10^10^34*10^10^962) =

10^10^(10^34+10^962)


And here we see the problem. Adding 10^34 isn't going to get 10^962 to 10^963.5185. It will barely have any effect at all. But now we can see how to compute the leading exponent. We simply need to find:

log(10^963.5185 - 10^34)

On the big number calculator I get:

963.518499999999999999999999999999999999999999999999999999999...

with about 900 9s! So:

(10^10^10^34)^(10^10^963.518499999......(900 9s)......999999999) = 10^10^10^963.5185
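You don't need a special big number calculator for this; Python's standard `decimal` module at high precision works too (the 1100-digit precision is just my choice of headroom over the ~930 digits the subtraction needs):

```python
from decimal import Decimal, getcontext

getcontext().prec = 1100  # the subtraction only bites ~930 digits in
d = (Decimal(10) ** Decimal("963.5185") - Decimal(10) ** 34).log10()
s = str(d)
nines = 0
for ch in s[len("963.5184"):]:  # count the run of 9s after "963.5184"
    if ch != "9":
        break
    nines += 1
print(s[:25] + "...", nines)  # 963.518499999... with 900-odd 9s
```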

Lastly, let's compare

*2nd Skewes' Number* to its two most popular approximations: 10^10^10^963 and 10^10^10^1000. Since *2nd Skewes'* is larger than 10^10^10^963, we use the following:

log(10^963.5185 - 10^963)

This time I get:

963.361707835183669490901548...

Therefore:

(10^10^10^963)^(10^10^963.361707835183669...) = 10^10^10^963.5185
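The same `decimal` trick reproduces this value (a sketch; far less precision is needed here since the two exponents are so close):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
d = (Decimal(10) ** Decimal("963.5185") - Decimal(10) ** 963).log10()
print(d)  # 963.3617078...
```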

You'll notice the significant change from 963.5185 to 963.3617. This is due to the fact that the numbers are quite close together (relatively speaking of course!). Now let's see how close

*2nd Skewes'* is to Skewes' own approximation:

log(10^1000 - 10^963.5185) ~ 999.999999999999999999999999999999999999856687...

Therefore:


(10^10^10^963.5185)^(10^10^999.9999999999...) = 10^10^10^1000

Basically we are raising to the power of 10^10^1000. So we need to raise

*2nd Skewes' Number* to the "1 followed by a *googol*^10 zeroes"th power to get Skewes' approximation of 10^10^10^1000.
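And one last `decimal` check of that final logarithm (a sketch; 60 digits of precision is plenty, since the subtraction only bites 37 digits in):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60
d = (Decimal(10) ** 1000 - Decimal(10) ** Decimal("963.5185")).log10()
print(d)  # 999.9999...856687 (thirty-six 9s, then 856687...)
```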

Well, I think that's enough for today. Hopefully your brain isn't throbbing at this point. I'm pretty sure you've never heard large numbers explained like this before! Perhaps now you have a better appreciation for what googologists really mean when they say a number is

*very large*:)