Time complexity and space complexity are important criteria to measure the efficiency of an algorithm.
The same algorithm runs at different speeds on different computers; the actual running time is difficult to predict theoretically and tedious to measure in practice. Therefore, what we usually consider is not the actual running time of an algorithm, but the number of basic operations it needs to perform.
On an ordinary computer, addition, subtraction, multiplication, division, accessing a variable (a variable of a basic data type, and likewise below), and assigning a value to a variable can all be regarded as basic operations.
A count or estimate of the number of basic operations can serve as an indicator of how long an algorithm takes.
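As a rough sketch of this idea (the function and the per-iteration operation tally below are our own illustrative model, not part of the text), we can instrument a simple summation loop to count its basic operations:

```python
def sum_and_count(numbers):
    """Sum a list while tallying basic operations under a simple model:
    each assignment, addition, and variable access counts as one operation."""
    ops = 0
    total = 0          # one assignment
    ops += 1
    for x in numbers:  # per iteration: access x, add it, assign the result
        total += x
        ops += 3
    return total, ops

total, ops = sum_and_count([2, 4, 6])
print(total, ops)  # 12 is the sum; 1 + 3*3 = 10 operations
```

The exact tally depends on what one chooses to call a basic operation, but the count grows linearly with the input length either way, which is what complexity analysis cares about.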
Definition
To measure the speed of an algorithm, we must take the size of the data into account. Data size generally refers to quantities such as the number of input numbers, or the number of vertices and edges of a graph given in the input. Generally speaking, the larger the data, the longer the algorithm takes. In algorithm competitions, when we measure the efficiency of an algorithm, what matters most is not the time it takes at some particular data size, but how its running time grows with the data size, that is, its time complexity.
Introduction
The main reasons to consider how running time grows with data size are as follows:
A modern computer can perform hundreds of millions of basic operations per second, so the data we deal with is often very large. If algorithm A takes 100n operations on data of size n while algorithm B takes n², then B is faster when the data size is below 100; but within the same time budget, A can process millions of items while B can only handle tens of thousands. The longer an algorithm is allowed to run, the more pronounced the effect of time complexity on the processable data size becomes, and this effect far outweighs differences in running time at the same data size.
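The comparison above can be checked numerically. A minimal sketch, assuming a budget of 10⁸ basic operations (roughly one second on a typical machine; the figure is an assumption for illustration):

```python
# Operation counts for the two hypothetical algorithms from the text.
def ops_a(n): return 100 * n   # algorithm A: 100n operations
def ops_b(n): return n * n     # algorithm B: n^2 operations

# B is cheaper below the crossover point n = 100, A is cheaper above it.
assert ops_b(50) < ops_a(50)
assert ops_a(1000) < ops_b(1000)

# Largest n each algorithm can process within the assumed budget.
BUDGET = 10 ** 8
max_n_a = BUDGET // 100            # A: n up to 1,000,000 (millions)
max_n_b = int(BUDGET ** 0.5)       # B: n up to 10,000 (tens of thousands)
print(max_n_a, max_n_b)
```

Doubling the budget doubles the n that A can handle but multiplies B's only by √2, which is why the gap widens as the allowed running time grows.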
We use the number of basic operations to represent an algorithm's running time, but different basic operations actually take different amounts of time; for example, addition and subtraction are much faster than division. By ignoring the differences between different basic operations, and the difference between one basic operation and ten, computing the time complexity eliminates the effect of these varying operation times.
Of course, an algorithm's running time is not determined entirely by the input size; it also depends on the content of the input. Time complexity is therefore divided into several kinds, such as:
The worst-case time complexity is the time complexity of the slowest input at each input size. In algorithm competitions, since the input can be anything within the given constraints, to guarantee that an algorithm passes all data within a certain range, we generally consider the worst-case time complexity.
The average (expected) time complexity is the complexity averaged over all possible inputs at each input size (that is, the expected time complexity over random inputs).
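Linear search (our choice of example; the text does not name one) illustrates both notions: finding a value in a list of length n takes n comparisons in the worst case, but only (n + 1) / 2 on average when the target is equally likely to sit at any position:

```python
def linear_search_comparisons(values, target):
    """Return how many comparisons a linear search performs."""
    for i, v in enumerate(values):
        if v == target:
            return i + 1
    return len(values)  # target absent: every element was compared

data = [3, 1, 4, 5, 9]  # distinct values, so each search is unambiguous

# Worst case: the target is the last element, costing n comparisons.
assert linear_search_comparisons(data, 9) == len(data)

# Average case: (1 + 2 + ... + n) / n = (n + 1) / 2 comparisons.
avg = sum(linear_search_comparisons(data, x) for x in data) / len(data)
assert avg == (len(data) + 1) / 2
```

Both quantities grow linearly with n here, but for algorithms such as quicksort the worst case (n²) and the average case (n log n) differ, which is why the distinction matters.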