Pointless Gigantic List of Functions


Lists of large numbers are commonly found on large number sites, but lists of functions are rather rare. Googology Wiki has a list of googological functions, and Ikosarakt1 made an increasing list of functions on his googology site as well (credit goes to him for the idea), but that's about it. So I decided to make a list of large number functions as well.

Since this is a list of large number functions rather than just a list of functions, there are several restrictions put in place on functions that are allowed in this list.

1. The function must map every natural number (numbers in the set {1, 2, 3, 4, 5 ... }) to a real number. In other words, no matter which natural number you plug into the function, the function will output a real number. Functions that output a negative number for some natural-number input are not allowed either. Functions that output integers are strongly preferred to those that output non-integers, though there are exceptions to this (e.g. log(x)).

2. The function must be non-decreasing for x ≥ 0. This means that for all x ≥ 0, f(x+1) ≥ f(x). So functions like f(x) = (x-3)^2 are not allowed, because (for example) f(2) is less than f(1).

3. If the function is not a constant function (i.e. a function defined as f(x) = N where N is any number), it should tend to infinity. This means that as the input of the function approaches infinity, the output will as well. This is not as restricting of a rule as you might think!

Also, how are functions sorted in this list? They are sorted by growth rate: if a function f(n) eventually has a larger output than a function g(n) for all sufficiently large n, then f(n) grows faster than g(n) and will thus be listed after g(n).

So what are we waiting for? Let's start this list of functions.

The Constant Range

(constant functions)

Entries: 3

The zero function

f(x) = 0

This is the very slowest-growing function allowed on this list. No matter what value you plug into the function, it will always output zero! The function is non-decreasing and maps every natural number to a real number (in this case 0), which is why it's allowed on this list.

This function is the base function of the slow-growing hierarchy, meaning that it's the same function as g0(n). It's also a constant function.

The unity function

f(x) = 1

This is another constant function. It outputs 1 no matter what value you plug into the function. It's notable as one of the four fundamental functions in Sbiis Saibian's idea, along with the identity function, the predecessor function, and the successor function (see this article by Sbiis Saibian for details). Sbiis Saibian denotes it u(). Note that he chooses not to give it an input because the function does not need to have an input, as it outputs the same value for all inputs.

f(x) = 2

This function outputs 2 no matter what value you choose. We can make a constant function for any whole number, but it will always be slower-growing than any non-constant function allowed on this list, since constant functions do not tend to infinity but the non-constant functions allowed on this list do. This means that even if you define f(x) = N where N is whatever googolism you choose, it will still be slower growing than any non-constant function allowed on this list!

The Googologically Slow Range

(faster than constant functions but slower than logarithms)

Entries: 10


To get some functions that grow EXTREMELY slowly but still faster than the constant functions, we can use this trick.

Consider any non-constant function F(x) allowed on this list. Then define Fi(x) (notation used by Ikosarakt1; the i stands for inverse) as the smallest number n such that x < F(n).
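As a concrete illustration, here is the trick in Python (a minimal sketch; the helper names inverse and Fi are mine, and the factorial stands in for a fast-growing function, since the googological functions below can't actually be computed):

```python
# A sketch of the inverse trick: Fi(x) is the smallest n with x < F(n).
# F must be non-decreasing and tend to infinity, so the search terminates.
from math import factorial

def inverse(F):
    def Fi(x):
        n = 1
        while F(n) <= x:
            n += 1
        return n
    return Fi

# Demo with the factorial standing in for a fast-growing function:
fact_i = inverse(factorial)
print(fact_i(100))      # 5, since 4! = 24 <= 100 < 5! = 120
print(fact_i(10**100))  # 70, since 69! < a googol < 70!
```

The slower F grows... wait, the faster F grows, the slower its inverse Fi crawls upward, which is exactly why inverses of googological functions sit at the bottom of this list.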

Then we can use that trick on the FOOT function, which is currently the fastest-growing function in googology and the function used to define the current largest number in googology (namely BIG FOOT) - for more on it see the FOOT function at the end of the list. In fact we can do this to any googological function to get functions like:


The inverse of Rayo's function, a function that diagonalizes over first-order set theory (FOST for short) and used to define Rayo's number, which for a while was the largest named number.


The inverse of Adam Goucher's xi function, an uncomputable function that is one of the fastest-growing functions known.


The inverse of the busy beaver function, the original uncomputable function. The only fully known values of BB(x) are 1, 4, 6, and 13 for x = 1, 2, 3, 4 respectively - BB(5) is at least 4098 (believed to be exactly 4098), BB(6) is greater than 10^18,276, BB(7) is greater than 10^10^10^10^18,705,353, and BB(22) is the smallest value of BB(n) that is known to be greater than Graham's number.

This means that we can work out some small values of BBi(n):

BBi(x) does not hit 6 until x is at least 4098. It does not hit 7 until x is at least 10^18,276, and it does not hit 8 until x is at least the unfathomable 10^10^10^10^18,705,353. Think about that: you need to input a number with over 18,000 digits (and maybe even more than that!) just to get the function to output 7, and even if you input a googolduplex the function still wouldn't output anything more than 7! Even if you input Graham's number the output would be at most 23, and probably less than that! As you can see this is an EXTREMELY slow-growing function alright. Here are some more inverses of fast-growing functions:


The inverse of Loader's D function, which diagonalizes over the calculus of constructions and is used to define Loader's number, the largest named computable number.


The inverse of Harvey Friedman's SCG function, the function used to define SCG(13), a number used in serious mathematics that is larger than both Graham's number and TREE(3).


The inverse of Harvey Friedman's TREE function, used to define TREE(3) which is notable because it's a number used in serious mathematics that is MUCH larger than Graham's number.

Gi(x) (G is the Goodstein function)

The inverse of the Goodstein function, a fast-growing function that arises out of a class of integer sequences. The sequences are quite simple, but they lead to a surprisingly powerful function that reaches the same level as Bowers' tetrational arrays, or epsilon-zero in the fast-growing hierarchy.


The inverse of Harvey Friedman's n function, a function with a growth rate comparable to Bowers's linear arrays, or w^w in the fast-growing hierarchy.

Gi(x) (G is Graham's function)

The inverse of the function that is commonly used to define the famous Graham's number, with a growth rate of w+1 in the fast-growing hierarchy.

The inverse Ackermann function


The inverse Ackermann function is the inverse of the function A(n,n), where A is the Ackermann function; it is commonly denoted α(n) using the Greek letter alpha. The first few values of the function f(n) = A(n,n) are 1, 3, 7, and 61 for n = 0, 1, 2, and 3 respectively, but the next (f(4)) is equal to exactly 2^2^2^65,536 - 3, which is larger than a googolduplex - the next one after that (f(5)) is the unfathomable 2^^^8 - 3 using Knuth's up-arrows. In general, A(n,n) = 2↑^(n-2)(n+3) - 3, using n-2 of Knuth's up-arrows.

For any practical-sized input, α(n) will not be any greater than 4. If n = 1 or 2, then α(n) = 1; if n = 3, 4, 5, or 6, α(n) = 2; if n is an integer between 7 and 60 inclusive, α(n) = 3; and if n is at least 61 but anything less than 2^2^2^65,536 - 3, α(n) = 4. This means that if you plug in a million, a googol, a centillion, a googolplex, or even a googolduplex, the function will still only output 4, and even if you input a giggol (10^^100) or a giggolplex (10^^10^^100) you'd only get 5. To even get an output of ten you'd need an utterly unfathomable number in the enneation range.
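The piecewise behavior above is easy to capture in code - a sketch in Python (the name alpha and the table-based approach are mine; real inputs never get past the k = 3 threshold):

```python
# A sketch of the inverse Ackermann function for modest inputs, using the
# diagonal values A(0,0)=1, A(1,1)=3, A(2,2)=7, A(3,3)=61 quoted above.
# alpha(n) returns the smallest k with A(k,k) > n, matching the text's
# thresholds; A(4,4) is far too large to ever matter for practical inputs.

DIAGONAL = [1, 3, 7, 61]  # A(k,k) for k = 0..3

def alpha(n):
    for k, value in enumerate(DIAGONAL):
        if value > n:
            return k
    return 4  # correct for every n from 61 up to A(4,4) - 1

print(alpha(6))        # 2
print(alpha(60))       # 3
print(alpha(10**100))  # still just 4 for a googol
```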

However, the inverse Ackermann function certainly grows much quicker than something like the inverse busy beaver function, since (for example) if you input Graham's number into the inverse busy beaver function the output will be no more than 23, but if you input Graham's number (noted G64) into the inverse Ackermann function you'd get approximately G63.


Megafuga(n) is equal to n^^n = n^n^n^ ... ^n with n copies of n, so the first few values of megafuga(n) are 1, 4, 7,625,597,484,987, and about 10^(8*10^153) (a number somewhat larger than a googolplex) for n = 1, 2, 3, 4 respectively. The inverse of the function grows extremely slowly, and here are some examples:

If you input 1, 2, or 3 the function outputs 2; if you input a number greater than 3 but less than 7,625,597,484,987 (for example a million) it outputs 3; if you input a number at least 7,625,597,484,987 but less than 4^4^4^4 (about 10^(8*10^153)), for example a quadrillion, googol, or googolplex, the function outputs 4; if you input a number between 4^4^4^4 and 5^5^5^5^5 ~ 10^10^10^2184 (like a googolduplex) the function outputs 5. If you input something like a giggol (10^^100) however, the function would output about 100, meaning that this function certainly grows faster than the previous function.

The Everyday Function Range

(functions comparable to the logarithm up to slightly above exponential)

Entries: 18

The common logarithm


Now we're finally past functions that are just inverses of googological functions, and on to slow-growing functions that are actually used in mainstream mathematics. This is a rather small scope on the scale of non-decreasing integer functions, running from around this function up to about the hyper-exponential range.

This function is the common logarithm (also called base-10 logarithm or simply logarithm). It is noted log(x) or log10(x) and it's defined as the value n such that 10^n = x. It can be generalized to the base-a logarithm (noted loga(x)), which is defined as the value n such that a^n = x.

The common logarithm has a fair amount of use in googology, since it can be used to approximate numbers like the Skewes' numbers in base 10 form. To calculate the common logarithm of a number expressed as 10^n (e.g. 10^(8.07*10^153)), just chop off the bottom ten. For example, the common logarithm of a googol (10^100) is 100. As you can see, this function is still pretty slow-growing, but much faster than the previous function. For example, if you apply the inverse of the megafuga- function to a giggol you get 100, but if you take the common logarithm of a giggol you get 10^^99, which isn't any less unfathomable of a number than a giggol itself.
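For exact powers of ten, "chopping off the bottom ten" amounts to counting zeros, which Python's arbitrary-precision integers make trivial (a toy sketch; the helper name is mine):

```python
# For an exact power of ten, the common logarithm is just the number of
# zeros, which for a Python integer is one less than its digit count.
def log10_of_power_of_ten(n):
    return len(str(n)) - 1

googol = 10**100
print(log10_of_power_of_ten(googol))   # 100
print(log10_of_power_of_ten(10**303))  # 303 (a centillion)
```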

The natural logarithm


The natural logarithm function, denoted as ln(x), is defined as the value n such that e^n = x, where e is the important mathematical constant equal to about 2.71828... (read its entry on my number list for more on e). Here are some values of the natural logarithm:

ln(0) is undefined (as x becomes close to 0, ln(x) approaches negative infinity)

ln(0.00001) ~ -11.5129

ln(0.5) ~ -0.693

ln(1) = 0

ln(2) ~ 0.693 (has an entry on my number list)

ln(e) = 1

ln(3) ~ 1.099

ln(30) ~ 3.402

This function is a pretty slow-growing function alright. For example, plug a googol into the function and you only get about 230.2585. Think about how big a googol is (read this article if you need to), and how small the output is in comparison.

Additional note: You can calculate the common logarithm from the natural logarithm with the formula log(x) = ln(x)/ln(10).

The square root function

√x / x^(1/2)

Now for another slow-growing function that is commonly seen in mathematics. This function is defined as the number n such that n^2 = x - for example, the square root of 16 is the number n such that n^2 = 16, which is 4. It is most commonly denoted √x with the notation for square roots, or as x^(1/2) with fractional exponents. Here are some example values:

√0 = 0

√(1/4) = 1/2

√1 = 1

√2 ~ 1.41421

√3 ~ 1.73205

√4 = 2

√10 ~ 3.16228

√64 = 8

√225 = 15

√1000 ~ 31.6228

The square root function grows slowly by any reasonable standard, but definitely not "extremely slow". For example, apply the function to a googol and you get 10^50, which is still a pretty big number (roughly the number of atoms that make up Earth).

The prime counting function


The prime counting function, denoted π(x) using the Greek letter pi, is defined as the number of prime numbers less than or equal to x. For example, π(5) is 3 because there are 3 prime numbers less than or equal to 5, and π(100) is 25. It's notable to googology because it comes up in the context of the Skewes' numbers, two large numbers that arose in a problem involving the distribution of primes. Read here for a detailed coverage of them.

The prime counting function can be approximated as x/ln(x) due to the prime number theorem, or more accurately using the logarithmic integral li(x), the integral of 1/ln(t) dt (that integral itself crops up in the context of the Skewes' numbers).

The function is fast-growing enough that applying it to an astronomical-sized number creates a number that doesn't seem too much smaller than it. For example, apply it to a septillion (10^24) and you get about 1.84355*10^22, and applying it to a googol gives you about 4.34*10^97 (using the approximation x/ln(x)).
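A quick way to see the x/ln(x) approximation in action is to sieve the exact count for small x (a sketch; prime_pi is my own helper name):

```python
# Comparing the exact prime count with the x/ln(x) estimate from the
# prime number theorem, using a simple sieve of Eratosthenes.
from math import log

def prime_pi(x):
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return sum(sieve)

for x in (100, 10**6):
    print(x, prime_pi(x), round(x / log(x)))
# prime_pi(100) = 25, as quoted above; the estimate undershoots slightly.
```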

f(x) = x/2

This function is an example of a linear function, a term you probably learned in algebra which refers to a function that can be expressed as f(x) = a*x+b - the name comes from the fact that the graph of a linear function is a line. This specific function returns half of the input, so it grows exactly half as fast as the basic function f(x) = x, the identity function.

The predecessor function

f(x) = x-1

The predecessor function is one of the four fundamental functions, along with the unity function (f(x) = 1) and the next two functions. It is defined as f(x) = x-1, the number exactly one less than x. It is the inverse of the important successor function (f(x) = x+1), a function notable because it is very fundamental.

The predecessor function is important to googology because it crops up in the definition of almost all googological functions. For example, take the 3-rule definition of the Ackermann function A(m,n):

A(0,n) = n+1

A(m,0) = A(m-1,1)

A(m,n) = A(m-1,A(m,n-1))

The definition uses the predecessor function three times, and it's even more prominent in more complex googological functions.
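Here are those three rules translated directly into Python (a sketch; memoization is my addition so that A(3,3) finishes instantly):

```python
# The 3-rule Ackermann definition above, translated directly; note the
# predecessor (m-1, n-1) appearing in two of the three rules.
from functools import lru_cache

@lru_cache(maxsize=None)
def A(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))

print(A(2, 2))  # 7
print(A(3, 3))  # 61
# A(4, 4) = 2^2^2^65,536 - 3 is hopelessly out of reach, of course.
```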

The identity function / input/output function

f(x) = x

This is the identity function (referred to by Sbiis Saibian as the input/output function), which is the simplest non-constant function of all. It simply outputs the same number you plug into the function, meaning that it's the only non-constant function that doesn't do anything with the input.

Besides being fundamental, this function can be thought of as a "neutral point" for functions - the slower-growing functions on this list are inverses of the faster-growing ones (for example, the predecessor function vs. the successor function). Functions faster than the identity function can be considered fast, and functions slower than it can be considered slow, analogous to Sbiis Saibian's idea that 1 is the boundary between small and large numbers (see here).

Other than that there isn't much else to say about the function. It doesn't have much use in the large number field since it does not do anything to the number you input, and in googology your function should obviously do something with the input. That said, this is still a function I can't exclude from this list. Up next is a very important function in googology:

The successor function

S(x) = x+1

The successor function is the most fundamental function that does something with its input and is not a constant function. It is noted S(x) and is defined as S(x) = x+1 - for example, S(2) is 3 and S(594) is 595. This function is important because it is fundamental - so fundamental is it that its existence is usually treated as an axiom, something that is not true by proof but by definition. Specifically, the property that S(x) is a total function (a function that returns a value no matter what number you input) is usually treated as an axiom, in systems such as Peano arithmetic.

Almost all functions in googology are based in some way upon the successor function. The fast-growing hierarchy, the slow-growing hierarchy, and the Hardy hierarchy are all based around the successor function - S(x) can be expressed as f0(x) in the fast-growing hierarchy, H1(x) in the Hardy hierarchy, and gw+1(x) in the slow-growing hierarchy. Other functions that are based directly upon the successor function include the Ackermann function and Wythagoras's dollar function. Other functions in googology are based upon functions that themselves are built from the successor function (usually exponents), such as Knuth's up-arrows, Conway chain arrows, Bowers' and Bird's arrays, Sbiis Saibian's Extensible-E system, Hyp cos's R notation, Hollom's hyperfactorials, etc. In addition the fast-growing sequences like Goodstein sequences make heavy use of the successor function.

Not much else to say about this function other than that it is fundamental of course. Even counting uses the successor function: if you want to count from a number X to a number Y, you repeatedly apply the successor function to X until you get to Y. Let's move on to some other functions.

f(x) = 2x

This function returns twice its input - in other words it applies the successor function to its input X times where X is the input. It's another simple linear function that easily surpasses the identity function, but still doesn't grow very fast even by ordinary standards.

This function is equivalent to f1(x) using the fast-growing hierarchy, Hw(x) using the Hardy hierarchy, and gw2(x) using the slow-growing hierarchy.

The squaring function / The gar- prefix

f(x) = x^2

This function returns the square of its input, i.e. it multiplies the input by itself. Its graph looks like a letter U with its bottom point at the origin. It's both a quadratic function and a power function. As you probably learned in algebra, a quadratic function is any function of the form f(x) = ax^2+bx+c (where a is not 0), and the graph of such a function always looks like either a U or an upside-down U depending on whether a is positive or negative. A power function is a function of the form f(x) = b*x^a (not to be confused with exponential functions, seen later).

Here are some example values of the function:

0.5^2 = 0.25

1^2 = 1

1.5^2 = 2.25

2^2 = 4

10^2 = 100

24^2 = 576

1,000,000^2 = 1,000,000,000,000

The function f(x) = x^2 grows reasonably quickly, but nothing too mind-boggling. For example, imagine every second you go to the next member of the sequence Sx = x^2, starting with 1. You'd start with 1 the first second, 4 the next second, 9 the third second, etc. Even reaching a googol this way would take 3.17*10^42 years, which is pretty insanely long.
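The arithmetic behind that estimate is easy to check (a sketch using Python's exact integer square root):

```python
# One perfect square per second: you reach a googol at step
# sqrt(googol) = 10^50, and 10^50 seconds is about 3.17*10^42 years.
from math import isqrt

steps = isqrt(10**100)           # 10**50 steps, one per second
years = steps / (365.25 * 24 * 3600)
print(steps == 10**50)           # True
print(f"{years:.3e}")            # about 3.169e+42 years
```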

On the other hand, repeatedly squaring a number easily makes some large numbers in a small amount of squarings (see f(x) = 2^2^x later). Also, squaring a number is indeed a common retort to any googolism (albeit an extremely naive one).

The gar- prefix is a prefix that squares the number you apply it to, and it's therefore equivalent to this function. The prefix was based on "gargoogolplex" which young Kieran Cockburn came up with as a name for a googolplex googolplexes. The gar- prefix is the least powerful member of the fuga- family of prefixes, and the other members of that family are seen later in this list.

f(x) = x^3

This function returns the cube of its input. It's another simple power function, but it isn't as likely to be mentioned in googology as the previous function. It's also the simplest cubic function, a function of the form f(x) = ax^3+bx^2+cx+d, where a is not 0.

f(x) = 2^x

This is an example of an exponential function, a function of the form y = b*a^x. It returns 2 to the power of its input. It crops up in a few places, such as computing, which heavily uses powers of two, and radioactive decay.


f(x) = x*2^x

This function is equal to f2(x) using the fast-growing hierarchy. Its growth rate falls among the exponential functions. Here are some example values:

f2(1) = 2

f2(2) = 8

f2(3) = 24

f2(4) = 64

f2(5) = 160

f2(6) = 384

f2(7) = 896

f2(8) = 2048

f2(9) = 4608

f2(10) = 10,240

These first few values aren't all that big, even by ordinary standards. But it doesn't take long for this function to get to sizable values:

f2(20) = 20,971,520

That's about 21 million. And better would be:

f2(100) ~ 1.2677*10^32

That's about 127 nonillion. It has exactly 33 digits, which is quite a sizable astronomical number. Better still would be:

f2(1000) ~ 1.0715*10^304

That number is about 10^120 times bigger than the volume of the observable universe in Planck volumes. Suffice to say that this is a number that we just can't hope to comprehend.

So although this function takes a little bit to grow, once it does it gets to some pretty large numbers. However this function is pretty weak among googological functions.

The ratio between successive values of the function (f2(n) and f2(n+1)) is not the same for all values, but as we input larger and larger numbers, that ratio approaches 2. Therefore this function grows slower than even f(x) = 2.0001^x, where the ratio between successive values is always 2.0001.

The exponential function

f(x) = ex

This is another exponential function (often referred to as the exponential function), and it's a particularly special one because it has several unique properties and many applications. The most notable property of this function is that it is its own derivative - in other words, the slope of the graph of the function at any point is exactly equal to the output of the function at that point. In fact, the functions of the form f(x) = n*e^x (including f(x) = 0) are the only functions that are their own derivatives.

This function is used in many fields, including physics, chemistry, mathematics, mathematical biology, and even economics - in fact, this very function is where the mathematical constant e was first used. It's sometimes denoted exp(x). The inverse of the function is the also commonly used function ln(x) (seen earlier). See the Wikipedia article for more on this function.

The -plex function / the antilogarithm

f(x) = 10^x

This function takes a number x and returns 10 to the power of x, or 1 followed by x zeros. It's notable in googology as the function that defines the -plex suffix, a suffix backformed from the name googolplex for 10 to the power of a googol that is defined as 10 to the power of x.

A few googological notations are based upon this function. Sbiis Saibian's Extensible-E system has the base rule En = 10^n, and Hyp cos's R notation and my own alphabetical hierarchy notation are based on this function as well.

The factorial / the -bang suffix

n! = n*(n-1)* ... *3*2*1

The factorial of n is defined as the product of all positive integers 1 to n. It's another common function in mathematics, and it calculates the number of different ways to arrange n objects in a row. The first few values are:

0! = 1 (by definition)

1! = 1

2! = 2

3! = 6

4! = 24

5! = 120

6! = 720

7! = 5040

8! = 40,320

9! = 362,880

10! = 3,628,800

Some larger values are:

20! ~ 2.4329*10^18

50! ~ 3.0414*10^64

100! ~ 9.3326*10^157

So the factorial grows pretty quickly by any reasonable standard - in fact it's one of the fastest-growing functions most people ever learn about. As such it's a great example of what non-googologists see as a fast-growing function, and the factorial is part of the layman's toolbox for generating large numbers. A few googological notations (most notably Hollom's hyperfactorials) extend upon the factorial to huge heights.
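The values above are easy to verify with Python's built-in factorial:

```python
# Checking the factorial table with the standard library.
from math import factorial

print(factorial(10))             # 3,628,800
print(len(str(factorial(20))))   # 19 digits, since 20! ~ 2.4329*10^18
print(len(str(factorial(100))))  # 158 digits, since 100! ~ 9.3326*10^157
```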

The factorial function grows faster than all exponential functions. To see why, consider that in an exponential function the ratio between successive values (for example f(2) and f(3)) is constant, but with the factorial, the ratio between successive values is constantly increasing - for example, 5! is 5 times larger than 4!, 6! is 6 times larger than 5!, etc. However, this function is still roughly on par with exponentiation in terms of how powerful it is.

This function is also equivalent to the -bang suffix, a suffix coined in analogy to the -plex suffix seen a little earlier on "Cantor's Attic", a wiki whose main topic is infinite numbers. A notable number defined with the -bang suffix is the googolbang (which has an entry on my number list), the factorial of a googol.

The fz- prefix

f(x) = xx

This function takes its input to its own power. It's another function that laymen commonly use to come up with a large number, and it's one of the most common such functions. Therefore it's a function that has some relevance to googology. This function is the function that defines the fz- prefix, a prefix devised by Alistair Cockburn as a more powerful version of the gar- prefix (they're both part of the fuga- family of prefixes). It's also used as the base of Steinhaus-Moser notation, a notation devised by Hugo Steinhaus and extended by Leo Moser as a demonstration of how easy it is to name very large numbers.

This function additionally serves as an upper bound on the factorial - to see why, compare

n! = n*(n-1)*(n-2)* ... *3*2*1

with

n^n = n*n*n* ... *n*n with n copies of n.

The googo- prefix

f(x) = (2x)^x

The googo- prefix is a prefix devised by Andre Joyce from a curious observation about the name "googol". He observed that "googol" ends in L (Roman numerals for 50), and a googol can be expressed as (2*50)^50. Then he decided to generalize this to a prefix "googo-", which can be applied to any number n written out in Roman numerals and turns n into (2n)^n. For example, "googoc" is 200^100 ~ 1.2676*10^230, since C is Roman numerals for 100.
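The prefix is simple enough to state as a one-line function (a sketch; the name googo is mine):

```python
# The googo- prefix as a function: googo(n) = (2n)^n. With n = 50 (L in
# Roman numerals) it gives exactly a googol.
def googo(n):
    return (2 * n) ** n

print(googo(50) == 10**100)  # True: (2*50)^50 = 100^50 = 10^100
print(len(str(googo(100))))  # 231 digits: googoc ~ 1.2676*10^230
```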

Joyce has devised similar prefixes to this, and in fact he came up with some sort of crazy vowel and letter repeating scheme that further extends upon the name "googol". To learn about them read this page.

This function doesn't grow very much faster than exponential, but the functions from here on quickly clobber exponential growth. Since exponential growth is the fastest growth rate of functions encountered in everyday life (unless you count the occasional hyper-exponential growth), when we transcend that, that's when we'll really enter the world of googology.

The Primitive-Recursive Range

(between exponential functions and fw(n))

f(x) = x^(2x)

This is an example of a function whose growth rate is in the often-skipped-over area between exponential functions (and those a bit beyond) and hyper-exponential functions. Hyper-exponential functions (also called double exponential functions) are functions comparable in growth rate to those of the form f(x) = a^(b^x), i.e. a constant raised to the power of an exponential function.

Sloane and Plouffe's superfactorial

n$ = n!*(n-1)!* ... *3!*2!*1!

Sloane and Plouffe's version of the superfactorial (not to be confused with Pickover's superfactorial, also denoted n$) is defined as taking n, and outputting the product of the factorials of all the integers 1 through n. It's an extension of the factorial that grows significantly faster than exponential but significantly slower than hyper-exponential. Here are some example values:

1$ = 1

2$ = 2

3$ = 12

4$ = 288

5$ = 34,560

6$ = 24,883,200

10$ ~ 6.6586*10^27

20$ ~ 1.2744*10^156

100$ ~ 2.7031*10^6940

It isn't hard to see that this function is upper-bounded by f(x) = x^(x^2). Here's why:

n$ = n!*(n-1)!* ... *3!*2!*1!

< n^n*(n-1)^(n-1)* ... *3^3*2^2*1^1 (since n! is always upper-bounded by n^n)

< n^n*n^n* ... *n^n*n^n with n copies of n^n

= (n^n)^n

= n^(n^2)
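The inequality chain above can be spot-checked numerically (a sketch; superfactorial is my own name for n$):

```python
# Numerically checking the bound above: n$ < n^(n^2) for small n.
from math import factorial

def superfactorial(n):  # Sloane and Plouffe's n$
    product = 1
    for k in range(1, n + 1):
        product *= factorial(k)
    return product

print(superfactorial(4))                          # 288, as in the table
for n in (3, 5, 8):
    print(n, superfactorial(n) < n ** (n * n))    # True each time
```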

The hyperfactorial

H(n) = n^n*(n-1)^(n-1)* ... *3^3*2^2*1^1

The hyperfactorial is another sibling of the factorial. It's defined as taking n, and turning it into the product of all integers 1 to n each raised to their own power. Some example values:

H(1) = 1

H(2) = 4

H(3) = 108

H(4) = 27,648

H(5) = 86,400,000

H(6) = 4,031,078,400,000

H(10) ~ 2.1578*10^44

H(20) ~ 1.0076*10^230

H(100) ~ 3.4553*10^9014

It grows noticeably faster than Sloane and Plouffe's superfactorial, although the two functions are roughly in the same ballpark, as we can see by comparing these values with the previous entry's examples.

The hyperfactorial's growth rate is upper-bounded by f(n) = n^(n^2) - we already found that out in the previous entry, where we upper-bounded Sloane and Plouffe's superfactorial with the hyperfactorial, which can in turn be upper-bounded by f(n) = n^(n^2).

Notably, just as the factorial has a generalization to real and even complex numbers called the gamma function, the hyperfactorial has a similar generalization known as the K function.

f(x) = 2^2^x

This is a simple example of a hyper-exponential function, a function whose exponent grows at an exponential rate, so that it looks like an exponential function when graphed on a logarithmic scale. You can imagine the growth rate of this function like so: Start with 2, and square the number every second, so that you get 4, 16, 256, and so on. How fast would that sequence grow? Just nine squarings are enough to exceed the number of atoms in the observable universe (10^80) and also a googol (10^100), and one more gets us past a centillion (10^303). How long would it take to reach a googolplex that way? Surely an eternity, right? Amazingly, in under 6 minutes we reach a googolplex. In only a day we would reach about 10^10^26,008, an insanely huge number. However, to reach a number like Skewes' number (about 10^10^10^34) it would take about 3*10^34 seconds, or roughly an octillion years! So this function easily surpasses any with use in everyday life, but it's barely anything in the world of googology!
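Counting the squarings directly confirms these milestones (a sketch; a googolplex itself is far too large to store as an integer, so the last step takes logarithms twice instead):

```python
# After k squarings of 2 you have 2^(2^k); count how many squarings it
# takes to pass various milestones.
from math import log2, ceil

def squarings_to_exceed(target):
    value, k = 2, 0
    while value <= target:
        value *= value
        k += 1
    return k

print(squarings_to_exceed(10**100))  # 9 squarings pass a googol
print(squarings_to_exceed(10**303))  # 10 pass a centillion

# A googolplex won't fit in memory, but taking logs base 2 twice shows
# 2^(2^k) first exceeds 10^(10^100) at k = 334 -- under six minutes
# at one squaring per second.
print(ceil(log2(10**100 * log2(10))))  # 334
```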

The -duplex suffix

f(x) = 10^10^x

The -duplex suffix is a sibling of the -plex suffix: it indicates applying -plex (which turns n into 10^n) to a number twice. It originates from the name googolduplex for 10 to the power of a googolplex (10^10^10^100), a number also known as "googolplexian", "googolplexplex", and some other names. It's another hyper-exponential function, and here are some example values:

1-duplex = 10^10^1 = 10^10 = 10,000,000,000 (ten billion)

2-duplex = 10^10^2 = 10^100 (a googol)

3-duplex = 10^10^3 = 10^1000 (named googolchime by Sbiis Saibian)

10-duplex = 10^10^10 = 10^10,000,000,000 (named trialogue by Sbiis Saibian)

100-duplex = 10^10^100 (a googolplex)

The ultrafactorial / Factorexation

U(x) or x\ = x!^(x!)

The ultrafactorial (also named "factorexation" by SpongeTechX of Googology Wiki) is a function defined as raising a number's factorial to the power of itself. It's a typical example of what a non-googologist would see as a really fast-growing function, although its growth rate is only roughly hyper-exponential, not all that impressive to a googologist. Its growth rate is slightly slower than the fuga- prefix. For example, 100 ultrafactorial is roughly 10^10^160, while applying the fuga- prefix to 100 gives us exactly 10^(2*10^198).

A sibling of this function is Pickover's superfactorial, which is MUCH more powerful.

The fuga- prefix

f(x) = x^x^(x-1)

The fuga- prefix is another one of the prefixes in the fuga- family of prefixes, and like the gar- and fz- prefixes it was devised by Alistair Cockburn. It's defined as fuga-x = ((...(x^x)^x)...)^x with x copies of x (which simplifies to x^(x^(x-1))), or x weak-tetrated to x using the weak hyper-operators (can be written as x↓↓x using down-arrow notation). For example, fuga-5 = (((5^5)^5)^5)^5 ~ 7.1821*10^436, and fuga-100 (a "fugahundred") = 10^(2*10^198). It's a good example of a function whose growth rate is slightly above hyper-exponential, but it's still a rather humble function in googology.
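Building fuga- up exactly as described is straightforward (a sketch; only fuga-5 is computed, since even fuga-100 has about 2*10^198 digits):

```python
# The fuga- prefix built up exactly as described: ((...(x^x)^x)...)^x with
# x copies of x, which collapses to x^(x^(x-1)).
def fuga(x):
    result = x
    for _ in range(x - 1):
        result = result ** x
    return result

print(fuga(5) == 5 ** (5 ** 4))  # True: (((5^5)^5)^5)^5 = 5^625
print(len(str(fuga(5))))         # 437 digits, matching ~7.1821*10^436
```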

The -triplex suffix

f(x) = 10^10^10^x

This is an example of a double-hyper-exponential or triple-exponential function, a function whose growth rate is like that of a constant raised to the power of a hyper-exponential function. This particular function applies -plex to a number 3 times (for example, 10^10^10^10^100 is a googoltriplex). In this list, it mostly serves as a transition point into the crazy world of tetrational growth rates.

The -stack suffix

f(x) = 10^^x

The -stack suffix is another suffix on analogy to the -plex suffix, from Cantor's Attic. Just as -plex takes 10 to the power of a number, -stack takes 10 tetrated to a number - the name comes from the fact that 10 tetrated to a number can be written as a stack (i.e. a power tower) of that number of tens. A number defined with the -stack suffix is the googol-stack, which is a power tower of a googol tens.

This suffix is an example of a function with tetrational growth, a growth rate where each number is roughly a constant raised to the power of the previous. That is INCREDIBLY fast by a non-googologist's standards, and it's just starting to blast off into the world of googology.

The megafuga- prefix

megafuga(x) = x^^x

The megafuga- prefix is a member of the fuga- family of prefixes. It's a more powerful variant of the fuga- prefix, and it originated like so: Alistair Cockburn introduced his gar-, fz-, and fuga- prefixes to a forum page titled "Really Big Numbers". A friend of his named Stephan Houben noticed an ambiguity in the description of fuga-x as "x to the power of x x times" - does that refer to ((...(x^x)^x)...)^x with x copies of x, or to the much larger x^(x^(...x^(x^x))...))) with x copies of x? Cockburn explained that he had the former in mind, and that a "coffee buddy" of his suggested the name "megafuga-" for the latter function.

Examples of this function:

megafuga-1 = 1^^1 = 1

megafuga-2 = 2^^2 = 4

megafuga-3 = 3^^3 = 7,625,597,484,987

megafuga-4 = 4^^4 ~ 10^(8*10^153)

megafuga-5 = 5^^5 ~ 10^10^10^2184

megafuga-10 = 10^^10, a power tower of 10 10's

megafuga-googolplex = a power tower of a googolplex googolplexes
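A power-tower loop captures megafuga- for the few inputs small enough to compute (a sketch):

```python
# megafuga(x) = x^^x as a right-associated power tower of x copies of x.
def megafuga(x):
    result = x
    for _ in range(x - 1):
        result = x ** result
    return result

print(megafuga(2))  # 4
print(megafuga(3))  # 7,625,597,484,987
# megafuga(4) = 4^4^4^4 already has about 8*10^153 digits -- don't try it!
```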


f3(x) = f2^x(x) [roughly 2^x#x in Hyper-E notation]

This is f3(n) in the fast-growing hierarchy. It grows at a rate slightly above tetrational growth, meaning that it transcends the growth rate of all functions of the form f(n) = x^^n where x is some constant, but it doesn't really grow a great deal faster. Since f2(n) is equal to n*2^n, which doesn't grow faster than even functions like f(n) = 2.001^n, f3(n), which is equal to f2(f2( ... f2(n) ... )) with n copies of f2, can be approximated as 2^2^...^2^n with n 2's, which can be more compactly written using Hyper-E notation as 2^n#n. It's upper-bounded by the function f(n) = n^^(n+1).

Examples of this function:

f3(1) = 2

f3(2) = 2048

f3(3) ~ 10^121,210,694

f3(4) ~ 10^10^10^20.5506

f3(10) ~ 10^10^10^10^10^10^10^10^10^3086
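The hierarchy up to f3 can be sketched with a generic iteration helper (the names iterate and f0 ... f3 follow the fast-growing hierarchy convention used above):

```python
def iterate(f, n):
    # f_{k+1}(n) applies f_k to n, n times over
    value = n
    for _ in range(n):
        value = f(value)
    return value

f0 = lambda n: n + 1           # the successor function
f1 = lambda n: iterate(f0, n)  # = 2n
f2 = lambda n: iterate(f1, n)  # = n * 2^n
f3 = lambda n: iterate(f2, n)

print(f2(5))  # 160, matching the f2 table above
print(f3(2))  # 2048
# f3(3) already has over 121 million digits -- don't print it!
```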

f(x) = x^^(x+1)

This is a function that serves as an upper-bound for the function f3(x) in the fast-growing hierarchy. Its growth rate is roughly on par with the megafuga- prefix.

Pickover's superfactorial

n$ = n!^^n!

This is the more powerful of the two functions known by the word "superfactorial" (the other one is Sloane and Plouffe's superfactorial). It was defined by Clifford Pickover in a book of his titled "Keys to Infinity", a book that talks mostly about infinite numbers. It appears that this function was designed to give people a sense of how big infinity is, as is the case with many of the popular large numbers.

This function is defined as tetrating a number's factorial to itself. For example, a mere 3 superfactorial is already the incomprehensibly huge 6^^6 ~ 10^10^10^10^36,305. So this is an incredibly fast function ... by a non-googologist's standards. Its growth rate is within the vast sea of functions that are significantly faster than tetration but below pentation.

- - - MORE COMING SOON - - -