3.3.2 Theoretical Asymptotic Analysis for Extremely Fast Growing Functions

Introduction

Megalo-arithmology (the study of large numbers) quickly leads to the study of functions, and then to the study of the “growth rates” of those functions. There is no fastest growing function, and as a general rule, a faster growing function with even a very small input (less than 10) will win out against a slower function even for reasonably large inputs (any decimal you could explicitly write out; generally anything with fewer than millions of digits!). So the trick to defining really large numbers becomes less about putting a “really large” input into a fast growing function, and much more about creating faster and faster growing functions. Once one has created a sufficiently fast growing function, call it f(n), you can simply put in a relatively small input, for example f(100), and you’ve got a large number that will be difficult to beat without using f(n). Trivially f(101) will be larger, but this doesn’t really count because it adds no new information to the system. Even f(f(100)) adds no new information to the system, and can therefore be considered a negligible contribution to the advancement of megalo-arithmology. In order to do something new in megalo-arithmology you have to at least devise a new kind of function which grows faster than all previously constructed functions.
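
To make this concrete, here is a minimal Python sketch using Knuth’s up-arrow notation (my choice of example; the text above doesn’t commit to any particular construction). The function f(n) = 3↑ⁿ3 climbs one rung of a ladder of ever-faster operations per increment of n, and the remark about f(f(100)) applies to it just as described above.

```python
def hyper(a, n, b):
    """Knuth's up-arrow a ^(n) b: each level n iterates the level
    below it (n = 1 is exponentiation, n = 2 is tetration, ...)."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return hyper(a, n - 1, hyper(a, n, b - 1))

def f(n):
    """A fast growing function of one argument: f(n) = 3 ^(n) 3."""
    return hyper(3, n, 3)

print(f(1))  # 3^3 = 27
print(f(2))  # 3^^3 = 3^27 = 7625597484987
# f(3) = 3^^^3 is already a power tower of over 7.6 trillion 3s:
# far too large to evaluate directly, which is exactly the point.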

As such, megalo-arithmology becomes the study of functions and their respective growth rates. Because of this we will want some theory of what it means to say one function grows faster than another. Computer Science is also interested in the growth rates of functions, and even has a practical application for them. Because of this we will be borrowing, at least at the outset, the basic theory of growth rates offered by Computer Science. This will lead to a highly stratified system of growth rates which we’ll find useful for distinguishing subtle differences in growth rate. I’d like to make two important points, however, about the limitations of the theory of growth rates used in computer science. (1) In computer science the slower growing function is preferable. Therefore asymptotic analysis is rarely applied to functions with a growth rate faster than n^n. Thus we will actually be extending the basic theory well beyond its ordinary application into purely theoretical territory. (2) The theory used in computer science is specifically geared towards applications in computer science, in which constant factors may be ignored for the purpose of classifying growth rates. However this is not the only way to classify growth rates, nor is it necessarily the definition best suited to the purposes of megalo-arithmology.

With that being said, we will look at asymptotic analysis as used in Computer Science as a jumping-off point, and as a way to connect megalo-arithmology to already existing theory. However, we aren’t going to stay in familiar territory for long! Remember that megalo-arithmology is above all a theoretical study, and as such it should always eventually outstrip any application, since it has no “practical limit”. Remember the 2nd law of googology: “If it has an application outside of googology … it’s too small”.

Big-Oh Notation

So we’ll begin with the definition of growth rate as used in computer science. Before that, however, let’s try to get an intuitive understanding of what we usually mean when we say one function grows faster than another.

Most people would agree that x^2 grows faster than 2x+1. We can look at a table of values:

x     2x+1     x^2
0       1        0
1       3        1
2       5        4
3       7        9
4       9       16
5      11       25

As you can see, x^2 quickly overtakes 2x+1. But notice that x^2 is not always larger than 2x+1. Specifically, we have 2x+1 larger than x^2 when x = 0, 1, 2. Yet we understand that x^2 grows faster because it eventually overtakes 2x+1. This idea can be formalized as:

Given two functions, f(n) and g(n), if there exists a constant, m, such that whenever n > m we have f(n) < g(n), we can say g(n) “grows faster” than f(n).
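
As a sanity check (not a proof, since only finitely many values of n can ever be tested), here is a small Python sketch of this definition; the name grows_faster and the finite horizon are my own choices:

```python
def grows_faster(f, g, m, horizon=10_000):
    """Empirically check the definition above: g 'grows faster'
    than f if f(n) < g(n) for every n > m.  A finite horizon can
    only provide evidence for a candidate m, never a proof."""
    return all(f(n) < g(n) for n in range(m + 1, horizon))

# x^2 vs 2x+1: m = 2 witnesses the definition (x^2 wins from x = 3 on),
# but m = 1 does not, since 2(2)+1 = 5 > 2^2 = 4.
print(grows_faster(lambda n: 2 * n + 1, lambda n: n ** 2, m=2))  # True
print(grows_faster(lambda n: 2 * n + 1, lambda n: n ** 2, m=1))  # False
```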

This forms the basis of the idea of growth rate in computer science. This idea works pretty well most of the time, but it has certain limitations. For example, we can say x+1 “grows faster” than x under this definition, simply because x < x+1 for all values of x, and so you could pick any value for m and show that x < x+1 whenever x > m. Yet intuitively x and x+1 don’t seem to “grow” at different rates. That’s because, implicitly, most people understand growth to mean “rate of change”, and we can say that both x and x+1 have a “rate of change” of 1 and therefore increase at the same rate.

Computer Science also treats x and x+1 as having the same growth rate, though for slightly different reasons. To understand the motivations for the definition used in computer science, it would be best to begin with the application it’s actually used for.

We tend to think of computers as able to perform any task instantaneously. This is, of course, false. Not only does it actually take a small fraction of a second to perform any task, but the specific task can be the difference between apparently instantaneous and seconds, minutes, hours, days, weeks, months, years, and ultimately time scales so great they are impractical or literally impossible, such as billions or trillions of years! This property has nothing to do with how fast computers are today. The general trend, practically since the inception of computers, is that their processing speed doubles roughly every 2 years. Even if this trend were to continue indefinitely we would still see the same problem: some tasks would seem instantaneous while others would seem to take virtually forever. This may seem counter-intuitive to the average person. After all, what task takes a computer that long?! It seems like they are always getting better at processing huge amounts of data for things like games and the internet. The reason for this is that all the things you typically associate with computers are tasks which can be performed efficiently. However, certain mathematical problems prove difficult to solve efficiently, and it is tasks such as these that can mean the difference between a solution in a few minutes and the computer never being able to output an answer in any practical time frame. (A year is a pretty reasonable upper limit for any task. Anything much beyond that leads to having to constantly replace the hardware as it burns out, and if the task took long enough the programmer wouldn’t live to see his program terminate, which would defeat the purpose of even executing the program from the programmer’s perspective.)
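
As a rough illustration (the figure of 10^9 operations per second is an assumption for the sake of arithmetic, not a claim about any particular machine), here is a back-of-the-envelope Python sketch of why the growth rate of the step count, rather than hardware speed, makes the difference:

```python
# Estimated runtimes at an assumed 10^9 operations per second.
OPS_PER_SEC = 10 ** 9
SECONDS_PER_YEAR = 3.15e7  # roughly one year in seconds

for n in (20, 50, 100):
    poly = n ** 2 / OPS_PER_SEC                      # quadratic step count
    expo = 2 ** n / OPS_PER_SEC / SECONDS_PER_YEAR   # exponential step count
    print(f"n = {n:3d}: n^2 takes {poly:.1e} s, 2^n takes {expo:.1e} years")
```

At n = 100 the quadratic task still finishes in a hundred-thousandth of a second, while the exponential task would take on the order of 10^13 years. Doubling the hardware speed merely extends the feasible n by one in the exponential case, which is why faster machines never rescue an inefficient algorithm.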