Time Complexity Analysis: With Time and Space Complexity Cheat Sheets!

Aditya Chatterjee and Ue Kiao, PhD.

Introduction to this Book

This book "Time Complexity Analysis" introduces you to the basics of Time Complexity notations, the meaning of the Complexity values and how to analyze various Algorithmic problems. We have tackled several significant problems, demonstrated the approach to analyze them, and arrived at the Time and Space Complexity of the problems and Algorithms.

This is a MUST-READ book for all Computer Science students and Programmers. Do not miss this opportunity. You will get a better idea of which approach will work better and will be able to make better judgements in your development work. See the "Table of content" for the list of exciting topics you will learn about.

Some of the key points you will understand:

Random Access Memory does not take O(1) time. It is complicated and, in general, has a Time Complexity of O(√N).
Multiplication takes O(N^2) time, but the most optimal Algorithm (developed in 2019) takes O(N logN) time, which is believed to be the theoretical limit.
In terms of Time Complexity, finding the largest element and the ith largest element takes the same order of time.

It is recommended that you go through this book twice. The first time, you may skip the minute details that you may not understand at first go and get the overview. In the second reading, you will get all the ideas, and this will strengthen your insights.

In the 1950s, Computing was not a Science.

It became one through the collective effort of several Computer Scientists, such as Robert Tarjan and Philippe Flajolet, who analyzed several computational problems to demonstrate that Computation Problems are as complicated as Physics and Mathematics Problems. The ideas captured in this book include some of these analyses, which glorified Computer Science and made it a Scientific field.

Book: Time Complexity Analysis
Authors: Aditya Chatterjee; Ue Kiao, PhD.
Contributors (7): Vansh Pratap Singh, Shreya Shah, Vikram Shishupalsingh Bais, Mallika Dey, Siddhant Rao, Shweta Bhardwaj, K. Sai Drishya
Published: August 2021
Contact: [email protected]

If you would like to participate in OpenGenus's remote Internship program, apply at: internship.opengenus.org. It will help you gain valuable practical experience and contribute to the community.

Table of content

1. Introduction to Time and Space Complexity (+ different notations)
2. How to calculate Time Complexity?
3. Meaning of different Time Complexity
4. Brief Background on NP and P
5. Does O(1) time exist?: Cost of accessing Memory
6. Time Complexity of Basic Arithmetic Operations
   6.1 Bitwise operations
   6.2 Addition
   6.3 Subtraction
   6.4 Multiplication
   6.5 Division
7. Analysis of Array
8. Analysis of Dynamic Array
9. Find largest element
10. Find Second largest element
11. Find ith largest element
12. Time Complexity Bound for comparison-based sorting
    12.1 Analysis of Selection Sort
    12.2 Analysis of Insertion Sort
    12.3 Analysis of Bubble Sort
    12.4 Analysis of Quick Sort
13. Bound for non-comparison-based sorting
    13.1 Analysis of Counting Sort
    13.2 Analysis of Bucket Sort
14. Analysis of Linked List
15. Analysis of Hash functions
16. Analysis of Binary Search
17. Time and Space Complexity Cheat Sheets

In addition to this book, you should read:

“Binary Tree Problems: Must for Interviews and Competitive Coding” by Aditya Chatterjee, Ue Kiao and Srishti Guleria.

Introduction to Time and Space Complexity (+ different notations)

Time Complexity is a notation/analysis that is used to determine how the number of steps in an algorithm increases with the increase in input size. Similarly, we analyze the space consumption of an algorithm for different operations.

This comes into play in the analysis of Computing Problems, where a theoretical minimum time complexity is defined. For some difficult Computing Problems, the minimum time complexity has not been established. For example, it is known that the Problem of finding the i-th largest element has a theoretical minimum time complexity of O(N), while common approaches may have O(N logN) time complexity.

One of the most used notations is known as Big-O notation. For example, if an algorithm has a Time Complexity Big-O of O(N^2), then the number of steps is of the order of N^2, where N is the size of the input data. Note that the number of steps is not exactly N^2. The actual number of steps may be 4 * N^2 + N/3, but only the dominant term without constant factors is considered.

Different notations of Time Complexity

Different notations of Time Complexity include:

Big-O notation
Little-o notation
Big Omega notation
Little Omega notation
Big Theta notation

In short:

Notation        Symbol   Meaning
Big-O           O        Upper bound
Little-o        o        Strict upper bound
Big Omega       Ω        Lower bound
Little Omega    ω        Strict lower bound
Big Theta       Θ        Tight (upper + lower) bound

This will make sense as you go through the details. We will revisit this table after covering each notation.

1. Big-O notation

Big-O notation denotes the time complexity which is the upper bound for the function f(N) within a constant factor.

f(N) = O(G(N)), where G(N) is the Big-O bound and f(N) is the function we are trying to bound.

There exists an N1 and a constant c such that:

f(N) <= c * G(N) for all N > N1

Therefore, a Big-O bound is always greater than or equal to the actual number of steps.

The following are true for Big-O, but would not be true if you used little-o:

x² ∈ O(x²)
x² ∈ O(x² + x)
x² ∈ O(200 * x²)
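As a quick worked check of this definition, take the earlier example f(N) = 4 * N^2 + N/3; the constants c = 5 and N1 = 1 below are our own choice, any large enough constants work:

    \[
    f(N) = 4N^2 + \tfrac{N}{3} \;\le\; 4N^2 + N^2 \;=\; 5N^2 \;=\; c \cdot N^2 \qquad \text{for all } N \ge N_1 = 1,
    \]

so f(N) = O(N^2): the bound c * G(N) stays above the actual number of steps for all large N.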

2. Little-o notation

Little-o notation denotes the time complexity which is a strict upper bound for the function f(N): f(N) grows strictly slower than G(N).

f(N) = o(G(N)), where G(N) is the little-o bound and f(N) is the function we are trying to bound.

For every constant c > 0, there exists an N1 such that:

f(N) < c * G(N) for all N > N1

Note that for Big-O, a single constant c with <= was enough; little-o must hold for every constant c, with a strict <. Therefore, a little-o bound always grows strictly faster than the actual number of steps.

3. Big Omega Ω notation

Big Omega Ω notation denotes the time complexity which is the lower bound for the function f(N) within a constant factor.

f(N) = Ω(G(N)), where G(N) is the Big Omega bound and f(N) is the function we are trying to bound.

There exists an N1 and a constant c > 0 such that:

f(N) >= c * G(N) for all N > N1

Therefore, a Big Omega Ω bound is always less than or equal to the actual number of steps.

4. Little Omega ω notation

Little Omega ω notation denotes the time complexity which is a strict lower bound for the function f(N): f(N) grows strictly faster than G(N).

f(N) = ω(G(N)), where G(N) is the little Omega bound and f(N) is the function we are trying to bound.

For every constant c > 0, there exists an N1 such that:

f(N) > c * G(N) for all N > N1

Note that for Big Omega, a single constant c with >= was enough; little Omega must hold for every constant c, with a strict >. Therefore, a little Omega bound always grows strictly slower than the actual number of steps.

5. Big Theta Θ notation

Big Theta Θ notation denotes the time complexity which is the tight bound (both upper and lower) for the function f(N) within constant factors.

f(N) = Θ(G(N)), where G(N) is the Big Theta bound and f(N) is the function we are trying to bound.

There exist an N1 and constants c1 and c2 such that:

0 <= c1 * G(N) <= f(N) <= c2 * G(N) for all N > N1

Therefore, a Big Theta Θ bound sandwiches the actual number of steps from both sides.

How to calculate Time Complexity?

One standard technique is to formulate the recurrence relation of an algorithm and solve it. For example, consider the recurrence:

T(N) = 2 * T(N/2) + O(N)

Expanding it level by level:

=> T(N) = N + 2 * (N/2) + 4 * (N/4) + ... (logN levels, each contributing N work)
=> T(N) = O(N logN)

This analysis comes up in a sorting algorithm, Quick Sort. For another sorting algorithm known as Stooge Sort, the recurrence relation is:

T(N) = 3 * T(2N/3) + O(1)

Solving this equation, you will get:

T(N) = O(N ^ (log3 / log1.5)) = O(N^2.7095)

Solving such recurrence relations requires you to have a decent hold on solving algebraic equations.
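To build intuition for such recurrences, here is a small sketch (our own illustration, not from the book) that numerically expands T(N) = 2 * T(N/2) + N and compares it against N logN; the base case T(1) = 1 is an assumption:

    #include <cmath>
    #include <cstdint>
    #include <iostream>

    // Numerically expand T(N) = 2*T(N/2) + N and compare it with N*log2(N).
    uint64_t T(uint64_t n) {
        if (n <= 1) return 1;            // assumed base case
        return 2 * T(n / 2) + n;
    }

    int main() {
        for (uint64_t n = 1 << 10; n <= (1 << 20); n <<= 5) {
            double ratio = double(T(n)) / (n * std::log2(double(n)));
            std::cout << "N = " << n << "  T(N) / (N logN) = " << ratio << "\n";
        }
    }

The printed ratio settles towards a constant as N grows, which is exactly what T(N) = Θ(N logN) means.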

Some examples of recurrence relations:

Recurrence                    Algorithm         Big-O
T(N) = T(N/2) + O(1)          Binary Search     O(logN)
T(N) = 2 * T(N/2) + O(N)      Quick Sort        O(N logN)
T(N) = 3 * T(2N/3) + O(1)     Stooge Sort       O(N^2.7095)
T(N) = T(N-1) + O(1)          Linear Search     O(N)
T(N) = T(N-1) + O(N)          Insertion Sort    O(N^2)
T(N) = 2 * T(N-1) + O(1)      Tower of Hanoi    O(2^N)
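As a concrete instance of the last row, here is a minimal Tower of Hanoi sketch (our own illustration); its move count follows T(N) = 2 * T(N-1) + O(1) and is therefore O(2^N):

    #include <iostream>

    // Tower of Hanoi: move n disks from peg 'from' to peg 'to' using peg 'via'.
    void hanoi(int n, char from, char to, char via, long long &moves) {
        if (n == 0) return;
        hanoi(n - 1, from, via, to, moves);   // T(N-1)
        moves++;                              // O(1): move the largest disk
        hanoi(n - 1, via, to, from, moves);   // T(N-1)
    }

    int main() {
        long long moves = 0;
        hanoi(20, 'A', 'C', 'B', moves);
        std::cout << moves << " moves\n";     // prints 1048575 = 2^20 - 1
    }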

Other techniques are not fixed and depend on the Algorithm you are dealing with. We have illustrated such techniques in the analysis of different problems; going through them, you will get a decent idea of how to analyze different computing problems. Note: recurrence relation techniques can be used to analyze algorithms but not general computing problems.

Let us solve some simple code examples.

Example 1:

    for (i = 0; i < N; i++) {
        // sequence of statements of O(1)
    }

The Time Complexity of this code snippet is O(N), as there are N steps, each of O(1) time complexity.

Example 2:

    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            // sequence of statements of O(1)
        }
    }

The Time Complexity of this code snippet is O(N^2), as there are N steps of the outer loop, each of O(N) time complexity (two nested for loops).

Example 3:

    for (i = 0; i < N; i++) {
        for (j = 0; j < N-i; j++) {
            // sequence of statements of O(1)
        }
    }

Number of steps = N + (N-1) + (N-2) + ... + 2 + 1
Number of steps = N * (N+1) / 2 = (N^2 + N)/2

The Time Complexity of this code snippet is O(N^2), as the dominant term in the total number of steps is N^2, and the division by 2 is a constant factor, so it is not considered.

Meaning of different Time Complexity

Various Time Complexity Big-O values have different names. The following table lists some of the common Big-O values in increasing order of growth/time:

Big-O        Known as
O(1)         Constant Time
O(log* N)    Iterative Logarithmic Time
O(logN)      Logarithmic Time
O(N)         Linear Time
O(N logN)    Log Linear Time
O(N^2)       Quadratic Time
O(N^p)       Polynomial Time
O(c^N)       Exponential Time
O(N!)        Factorial Time
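To get a rough numeric feel for how these grow relative to each other, here is a small sketch (our own illustration; the sample values of N are arbitrary):

    #include <cmath>
    #include <iostream>

    // Print rough step counts for a few of the growth rates listed above.
    // Constant factors are ignored, exactly as Big-O does.
    int main() {
        for (double N : {10.0, 100.0, 1000.0}) {
            std::cout << "N = " << N
                      << "  logN = " << std::log2(N)
                      << "  N logN = " << N * std::log2(N)
                      << "  N^2 = " << N * N
                      << "  2^N = " << std::pow(2.0, N) << "\n";  // blows up very fast
        }
    }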

Following is the visualization of some of the above notations:

Some points:

The target is to achieve the lowest possible time complexity for solving a problem.
For some problems, we need to go through all elements to determine the answer. In such cases, the minimum Time Complexity is O(N), as this is the time required to read the input data.
For some problems, the theoretical minimum time complexity is not proved or known. For example, for Multiplication, it is believed that the minimum time complexity is O(N logN), but this is not proved. Moreover, the algorithm with this time complexity was developed only in 2019.
If a problem has only exponential time algorithms, the approach to take is to use approximate algorithms. Approximate algorithms get an answer close to the actual answer (but not exact) at a better time complexity (usually polynomial time). Several real-world problems have only exponential time algorithms, so approximate algorithms are used in practice. One such problem is the Travelling Salesman Problem; such problems are NP-hard. (A simple polynomial-time heuristic for it is sketched below.)
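A minimal sketch of such a polynomial-time heuristic (our own illustration: a greedy nearest-neighbour tour with O(N^2) running time; it is not the approximation algorithm analyzed in the book and offers no strong quality guarantee):

    #include <iostream>
    #include <vector>

    // Greedy nearest-neighbour TSP heuristic: always visit the closest
    // unvisited city next. Polynomial time, approximate answer.
    double approxTourLength(const std::vector<std::vector<double>>& dist) {
        int n = dist.size();
        std::vector<bool> visited(n, false);
        int cur = 0;
        visited[0] = true;
        double total = 0.0;
        for (int step = 1; step < n; step++) {
            int next = -1;
            for (int j = 0; j < n; j++)
                if (!visited[j] && (next == -1 || dist[cur][j] < dist[cur][next]))
                    next = j;
            total += dist[cur][next];
            visited[next] = true;
            cur = next;
        }
        return total + dist[cur][0];          // return to the starting city
    }

    int main() {
        std::vector<std::vector<double>> dist = {
            {0, 2, 9, 10}, {2, 0, 6, 4}, {9, 6, 0, 3}, {10, 4, 3, 0}};
        std::cout << "approximate tour length: " << approxTourLength(dist) << "\n";
    }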

Brief Background on NP and P

Computation problems are classified into different complexity classes based on the minimum time complexity required to solve the problem. Different complexity classes include:

P (Polynomial)
NP (Nondeterministic Polynomial)
NP-Hard
NP-Complete

P: The class of problems that have polynomial-time deterministic algorithms (solvable in a reasonable amount of time). P is the set of problems that can be solved in polynomial time.

NP: The class of problems that are solvable in polynomial time by a non-deterministic machine; equivalently, the set of problems for which a solution can be verified in polynomial time.

P vs NP: If the solution to a problem is easy to check for correctness, must the problem be easy to solve?

P is a subset of NP: any problem that can be solved by a deterministic machine in polynomial time can also be solved by a non-deterministic machine in polynomial time.

NP-Hard: Problems that are "at least as hard as the hardest problems in NP".

NP-Complete: Problems that are in NP (the correctness of a candidate solution can be verified quickly, and a brute-force search can find a solution by trying all possibilities) and that are also NP-Hard.

Relation between P and NP:

P is contained in NP.
NP-Complete belongs to NP.
NP-Complete and P are believed to be mutually exclusive (this holds if P != NP).

One of the biggest unsolved problems in Computing is to prove whether the complexity class P is the same as NP or not:

P = NP or P != NP

If P = NP, then it means that all these difficult problems can be solved in polynomial time.

There are techniques to determine whether a problem belongs to NP or not. In short, all NP-Complete problems are related: if any one of them is solved in polynomial time, every problem in NP can be solved in polynomial time.
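To make the "verified in polynomial time" idea concrete, here is a minimal sketch (our own illustration; the function name and the adjacency-matrix representation are our assumptions): checking whether a proposed ordering of vertices forms a Hamiltonian cycle takes only O(N^2) time, even though finding such a cycle is believed to require super-polynomial time.

    #include <iostream>
    #include <vector>

    // Verify an NP certificate: does 'order' visit every vertex exactly once,
    // with every consecutive pair (including last-to-first) joined by an edge?
    bool verifyHamiltonianCycle(const std::vector<std::vector<bool>>& adj,
                                const std::vector<int>& order) {
        int n = adj.size();
        if ((int)order.size() != n) return false;
        std::vector<bool> seen(n, false);
        for (int v : order) {                             // each vertex exactly once
            if (v < 0 || v >= n || seen[v]) return false;
            seen[v] = true;
        }
        for (int i = 0; i < n; i++)                       // consecutive vertices adjacent
            if (!adj[order[i]][order[(i + 1) % n]]) return false;
        return true;
    }

    int main() {
        // A 4-cycle: 0-1-2-3-0
        std::vector<std::vector<bool>> adj = {
            {false, true,  false, true },
            {true,  false, true,  false},
            {false, true,  false, true },
            {true,  false, true,  false}};
        std::cout << std::boolalpha
                  << verifyHamiltonianCycle(adj, {0, 1, 2, 3}) << "\n";   // true
    }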

Does O(1) time exist?: Cost of accessing Memory

In this chapter, we take an in-depth look at the operations and algorithms that have constant time in terms of asymptotic notation of time complexity. Is O(1) really a practical way of representing the time complexity of certain algorithms/operations? This is one of the most important chapters of this book, so follow along carefully to build your fundamentals.

Sub-topics:

1. Getting started with the Memory model
2. Let's talk about O(1)
3. Digging deep into access time
4. Testing our claim
5. Conclusions
   a. Physical analysis
   b. Analysis of algorithms

Myth: Accessing memory at a particular address takes constant time O(1).

1. Getting started with the Memory model

Before we dive into the Time Complexity analysis, we need to understand the memory model of Modern Computers. You will see that the real Time Complexity has a direct link with this structure and, moreover, we will show that no Physical Computer can go beyond a certain limit while following the rules of Physics.

1.1 Random Access Machine (RAM) and External Memory Model

Let us talk about the Random Access Machine (RAM). A RAM physically consists of a central processing unit (CPU) and a memory. The memory has various cells that are indexed by positive integers, and each cell is capable of holding a bit-string. The CPU has a finite number of registers (an accumulator and an address register, to be specific). In a single step, a RAM can either perform an operation (simple arithmetic or Boolean operations) on its registers or access memory. When memory is accessed, the content of the memory cell indexed by the content of the address register is either loaded into the accumulator or written from the accumulator.

Two timing models are used:

The unit-cost RAM, in which each operation has cost one, and the length of the bit-strings that can be stored in the memory cells and registers is bounded by the logarithm of the size of the input.
The logarithmic-cost RAM, in which the cost of an operation is equal to the sum of the lengths (in bits) of the operands, and the contents of memory cells and registers do not have any specific constraints.

Now that you have an idea about the RAM, let us look at the External Memory (EM) Model. An EM machine is basically a RAM that has two levels of memory. The levels are referred to as cache and main memory, or memory and disk, respectively. The CPU operates on the data present in the cache. Both cache and main memory are divided into blocks of B cells, and data is transferred between them in blocks. The cache has size M and hence consists of M/B blocks, whereas the main memory is infinite in size. The analysis of algorithms in the EM model bounds both the number of CPU steps and the number of block transfers. The time taken for a block transfer to complete is equal to the time taken by Θ(B) CPU steps. The hidden constant factor is significantly large, and this is why the number of block transfers is taken into consideration in the analysis.

1.2 Levels/Hierarchy of Memory

The CPU cache memory is divided into three levels. The division is done by taking speed and size into consideration.

L1 cache: The L1 (Level 1) cache is the fastest memory present in a computer system. The L1 cache is generally divided into two sections:

The instruction cache: handles the information about the operations that the CPU must perform.
The data cache: holds the data on which the operations are to be performed.

The L1 cache is usually around 100 times faster than the system RAM.

L2 cache: The L2 (Level 2) cache is usually slower than the L1 cache but has the upper hand in terms of size. The size of the L2 cache depends on the CPU, but generally it is in the range of 256 KB to 8 MB. It is usually around 25 times faster than the system RAM.

L3 cache: The L3 (Level 3) cache is the largest but also the slowest cache memory unit. Modern CPUs include the L3 cache on the CPU itself. The L3 cache is also faster than the system RAM, although the difference is not very significant.

1.3 Virtual Memory

Virtual memory is a section of storage that is used temporarily as if it were volatile memory, for the purpose of handling concurrent processes and compensating for high RAM usage. Virtual memory is usually considerably slower than the main memory, because processing power is consumed by the transportation of data instead of the execution of instructions. A rise in latency is also observed when a system needs to use virtual memory.

1.4 Virtual Address Translation (VAT) Model

Virtual Address Translation (VAT) machines are RAM machines that use virtual addresses and also account for the cost of address translations; this cost has to be taken into consideration during the analysis of algorithms. For the purpose of understanding and demonstration, let us assume that:

P = page size (P = 2^p)
K = arity of the translation tree (K = 2^k)
d = depth of the translation tree

W = size of the translation cache (TC)
τ = cost of a cache fault (measured in number of RAM instructions)

Both physical and virtual addresses are strings in {0, ..., K-1}^d {0, ..., P-1}. Such a string corresponds in a natural way to a number in the interval [0, K^d * P - 1]. The {0, ..., K-1}^d part of the address is the index; it has length d, which is an execution parameter fixed prior to the execution. It is assumed that d = ceil(log_K(maximum used virtual address / P)). The {0, ..., P-1} part of the address is called the page offset, where P is the page size (as specified before).

Coming to the actual translation process, it is basically a tree walk/traversal. We have a K-ary tree T of height d. The nodes of the tree are pairs (l, i) with l >= 0 and i >= 0, where l is the layer of the node and i is its index within that layer. The leaves of the tree are on layer zero, and a node (l, i) on layer l >= 1 has K children on layer l-1. In particular, the root (d, 0) has children (d-1, 0), ..., (d-1, K-1). The leaves of the tree are the physical pages of the main memory of a RAM machine.

For example, to translate the virtual address x_(d-1)...x_0 y, we start from the root of T and follow the path described by the digits x_(d-1), ..., x_0. This path is referred to as the translation path for the address, and it ends in the leaf (0, x_0*K^0 + x_1*K^1 + ... + x_(d-1)*K^(d-1)).
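The same walk can be written down as a small code sketch (our own illustration; the struct and function names and the example parameters are assumptions, not taken from the book). It extracts the K-ary digits of the index part of a virtual address and computes the leaf index x_0*K^0 + ... + x_(d-1)*K^(d-1):

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Hypothetical parameters: page size P = 2^p, arity K = 2^k, depth d.
    struct VatParams { unsigned p, k, d; };

    // Fills 'path' with the digits x_(d-1), ..., x_0 followed from the root and
    // returns the index of the leaf (layer 0) that the walk ends in.
    uint64_t translationLeaf(uint64_t vaddr, const VatParams& prm,
                             std::vector<uint64_t>& path) {
        uint64_t index = vaddr >> prm.p;               // strip the page offset (p bits)
        path.assign(prm.d, 0);
        uint64_t leaf = 0, power = 1;
        for (unsigned i = 0; i < prm.d; i++) {
            uint64_t digit = (index >> (prm.k * i)) & ((1ULL << prm.k) - 1);  // x_i
            path[prm.d - 1 - i] = digit;               // walk order: x_(d-1) first
            leaf += digit * power;                     // accumulate x_i * K^i
            power <<= prm.k;                           // next power of K
        }
        return leaf;
    }

    int main() {
        VatParams prm{12, 9, 4};                       // e.g. 4 KiB pages, K = 512, d = 4
        std::vector<uint64_t> path;
        std::cout << "leaf index: "
                  << translationLeaf(0x7f1234567000ULL, prm, path) << "\n";
    }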

The figure accompanying this section depicts the translation process in a generic way.

Conclusion: If D and i are integers with D >= 0 and i >= 0, then the translation paths for the addresses i and i+D differ in at least max(0, log_K(D/P)) nodes.

1.5 Translation costs and Cache faults

In research conducted earlier, six simple programs were timed for different inputs, namely:

permuting the elements of an array of size n
a random scan of an array of size n
n random binary searches in an array of size n
heap sort of n elements
introsort (a hybrid of quick sort, heap sort and insertion sort) of n elements
a sequential scan of an array of size n

It was found that for some of the programs, the measured running time coincided with the original predictions of the models. However, the running time of the random scan seems to grow as O(N log²N), which does not coincide with the models' original prediction of O(N).

You might be wondering why the predicted and the measured run times differ (by a factor of about log²N). We will have to do some digging in order to understand that.

Modern computers have virtual memory, and each individual process has its own virtual address space. Whenever a process accesses memory, the virtual address has to be translated into a physical address (conceptually similar to NAT, network address translation). This translation of virtual addresses into physical addresses is not trivial, and hence has a cost of its own. The translation process is usually implemented as a hardware-supported walk in a prefix tree. This tree is stored in the memory hierarchy, and because of this, the translation process may itself incur cache faults. Cache faults are a type of page fault that occur when a program tries to reference a section of an open file that is not currently present in physical memory. The frequency of cache faults depends on the locality of the memory accesses (less locality results in more cache faults). The depth of the translation tree is logarithmic in the size of an algorithm's address space and hence, in the worst case, every memory access may lead to a logarithmic number of cache faults during translation.

2. Let's talk about O(1)

We know that the time complexity for iterating through an array or a linked list is O(N), selection sort is O(N²), binary search is O(logN) and a lookup in a hash table is O(1). However, we will argue that accessing memory is not an O(1) operation; in practice it behaves like an O(√N) operation. We will support this claim through both theoretical and practical means.

3. Digging deep into access time

Let us take an example. Try to relate this example with how memory access works in a working

system. Suppose you run a big shop that deals in games. Your shop has circular storage for the games, you have placed them in an orderly manner, and you remember which game can be found in which drawer/place. A customer comes to your shop and asks for game X. You know where X is placed in your shop. Now, what would be the time taken by you to grab the game and bring it back to the customer? It would obviously be bounded by the distance you have to walk to get that game, the worst case being when the game is at the far edge of the shop, i.e., you have to walk the full radius r.

Now let us take another example, and again try to relate it to how memory access works in a working system. Suppose you have upgraded your shop's infrastructure, and its storage capacity has increased tremendously. The radius of your shop is now twice the original radius, and hence the storage capacity has increased too. In the worst-case scenario, you will now have to walk twice the distance to retrieve a game. But note that the area of the shop has quadrupled, and therefore it can contain four times the original quantity of games. From this relation, we can infer that the number of games N that can fit into our shop storage is proportional to the square of the radius r of the shop. Hence,

N ∝ r²

Since the time taken T to retrieve a game is proportional to the radius r of the shop, we can infer the following relation:

==> N ∝ T²
==> T ∝ √N
==> T = O(√N)

This scenario is roughly comparable to a CPU which has to retrieve a piece of memory from its library, which is the RAM. The speed obviously differs significantly, but it is after all bounded by the

speed of light. In our example, we assumed that the storage space of our shop was circular. So how much data can be fitted within a specific distance r from the CPU? What if the shape of the storage space were spherical? In that case, the amount of memory that could be fitted within a radius r would be proportional to r³. However, in practice, computers are rather flat because of many factors such as form factor, cooling issues, etc.

4. Testing our claim

Let us say that we have a program that iterates over a linked list of length N, consisting of about 64 to 400 million elements. Each node contains a 64-bit pointer and 64 bits of dummy data. The nodes of the linked list are jumbled around in memory so that the memory access for each node is random. We measure the time for iterating through the same list a few times, and then plot the time taken per element.
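Such a measurement can be sketched roughly as follows (our own illustration, not the book's benchmark code; the node count, the shuffled cycle construction and the timing method are assumptions that merely follow the description above):

    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <iostream>
    #include <numeric>
    #include <random>
    #include <vector>

    // A node with a 64-bit "pointer" (index of the next node) and 64 bits of
    // dummy data, mirroring the layout described in the text.
    struct Node { uint64_t next; uint64_t dummy; };

    int main() {
        const size_t N = 1 << 26;                  // assumption: ~64 million nodes
        std::vector<Node> nodes(N);
        std::vector<uint64_t> order(N);
        std::iota(order.begin(), order.end(), 0);
        std::shuffle(order.begin(), order.end(), std::mt19937_64{42});  // jumble in memory

        for (size_t i = 0; i + 1 < N; i++) nodes[order[i]].next = order[i + 1];
        nodes[order[N - 1]].next = order[0];       // close the list into a cycle

        auto start = std::chrono::steady_clock::now();
        uint64_t sum = 0, cur = order[0];
        for (size_t i = 0; i < N; i++) {           // one pass: every access is random
            sum += nodes[cur].dummy;
            cur = nodes[cur].next;
        }
        auto elapsed = std::chrono::steady_clock::now() - start;
        std::cout << "ns per element: "
                  << std::chrono::duration<double, std::nano>(elapsed).count() / N
                  << "  (checksum " << sum << ")\n";
    }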

If the access time really were O(1), then we would get a flat plot. However, interestingly, that is not what we get. The plot is a log-log graph, so the differences visible in the figure are actually huge: there is a noticeable jump from about one nanosecond per element all the way up to a microsecond. Why is that happening? Try to recall what we spoke about in the "Getting started with the Memory model" section. The answer is caching. Off-chip or distant communication in RAM can be quite slow, and in order to combat this, the concept of a cache was introduced. The cache is on-chip storage that is much faster and closer. The system on which these tests were conducted had three levels of cache, called L1, L2 and L3, of 32 kiB, 256 kiB and 4 MiB respectively, with 8 GiB of RAM (of which about 6 GiB was free at the time of the experiment). In the following figure, the vertical lines represent the cache sizes and the RAM size:

You might notice a pattern in the figure above, and the importance and role of the caches also become very clear. There is roughly a 10x slowdown between 10 kB and 1 MB. The same is true between 1 MB and 100 MB, and between 100 MB and 10 GB as well. It appears that for each 100-fold increase in memory usage, we get a 10-fold slowdown. Let us compare that with our claim:

In the figure above, the blue line corresponds to an O(√N) cost for each memory access. A 10-fold slowdown for every 100-fold increase in memory is exactly the √N behaviour we predicted, since √100 = 10. So, what happens when we reach the far (right) side of the graph? Will the graph keep rising, or will it become flat? It will actually flatten for a while, until the data can no longer fit on the SSD and the help of the HDD is needed. From there, we would go to a disk server and then to a datacenter.

For each such jump, the graph would flatten for a while, but the rise will arguably always come back.

5. Conclusions

5.1 Physical analysis

The cost of a memory access depends on the amount of memory being accessed, as O(√N), where N is the amount of memory being touched between each memory access. From this, we can conclude that if one touches the same list/table in a repetitive manner, then:

Iterating through a linked list becomes an O(N√N) operation.
The binary search algorithm becomes an O(√N) operation.
A hash map lookup also becomes an O(√N) operation.

What we do between subsequent operations performed on a list/table also matters. If a program is periodically touching N amount of memory, then any one memory access should be O(√N). So if we are iterating over a list of size K, it will be an O(K√N) operation. If we then iterate over it again (without accessing any other part of the memory first), it becomes an O(K√K) operation. In the case of an array of size K, the operation will be O(√N + K), because only the first memory access is actually random; iterating over it again becomes an O(K) operation. That makes an array more suitable for scenarios where we already know that iteration has to be performed. Hence, memory access patterns can be of significant importance.

5.2 Analysis of algorithms

Before proceeding with our conclusive remarks on certain algorithms, some assumptions that are necessary for the analysis to be meaningful are listed below:

Moving a single translation path to the translation cache costs more than a single instruction but does not cost more than the number of instructions equivalent to the size of a page.
If at least one instruction is performed for each cell in a page, then the cost of translating the index of that page can be

amortized.

Analysis of Insertion Sort

Consider, for example, the following code that reads an array of n elements, sorts it using Insertion Sort, and prints the result:

    int n;
    cin >> n;
    int arr[n];
    for (int i = 0; i < n; i++) cin >> arr[i];           // input the array
    insertionSort(arr, n);                               // sort the array
    for (int i = 0; i < n; i++) cout << arr[i] << " ";   // print the sorted array
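The insertionSort helper itself is not included in this extract; a minimal sketch consistent with the call above (our own version, not necessarily the book's exact code):

    void insertionSort(int arr[], int n) {
        for (int i = 1; i < n; i++) {
            int key = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > key) {   // shift larger elements one slot right
                arr[j + 1] = arr[j];
                j--;
            }
            arr[j + 1] = key;                  // place key in its sorted position
        }
    }

In the worst case, the inner while loop runs i times in iteration i, giving 1 + 2 + ... + (N-1) comparisons in total, which matches the recurrence T(N) = T(N-1) + O(N) from the earlier table and hence O(N^2) time.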