
Algorithms’ complexity analysis - Big-O examples with explanations

Since Big-O time complexity is a very common theoretical concept, investing some time in understanding it can pay off when implementing a solution to a problem, because determining time complexity presumes a deep understanding of the code behind it. The important thing, though, is not to overdo it: estimating the time complexity of more complex algorithms can consume more resources than the coding itself. Another worthy guideline is to first devise any working solution to the problem, and only then, according to the needs and priorities of the current project, think about optimization, not vice versa. In this article we discuss the Big-O time complexity of 10 algorithm implementations in C#, Java, Python and pseudocode.

Example 1.

for (int a = 0; a < arrA.length; a++)
{
    for (int b = 0; b < arrB.length; b++)
    {
        System.out.println(a + "," + b);
    }
}

Here we have two nested loops iterating over two different arrays. Since we don’t know anything about those arrays, the total worst-case execution time is the product of their lengths, O(A*B). If we had two nested loops iterating over the same array, we’d have O(N^2). Now consider two separate loops that iterate over different arrays:

for (int a = 0; a < arrA.length; a++)
{
    System.out.println(a);
}
for (int b = 0; b < arrB.length; b++)
{
    System.out.println(b);
}

We simply add those loops and say O(A+B). Again, there is no way to simplify further, as the details of arrays A and B are unknown. Analogously, if we had two such loops iterating over the same array, we could simplify and state: O(A+A) ~ O(A), i.e. linear time.

Example 2.

def Func(n):
    i = 1
    j = 1
    while j < n:
        i += 1      # i grows by 1 on each iteration
        j += i      # j accumulates the sum of the successive values of i
        print("#")

In order to calculate the Big-O time in this example, we should ask ourselves under what condition the while loop exits and how many steps that takes. The value of i is incremented on each iteration, and j accumulates the sum of the previous values of i. The loop terminates when j >= n, meaning it stops once the sum of all the increments reaches and overcomes n. So if we label the total number of iterations y, we can state: 1+2+…+y > n. We then simplify using the well-known formula for the sum of the first y natural numbers, y(y+1)/2, which gives y(y+1)/2 > n, hence y is on the order of sqrt(n) => O(sqrt(n)).
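To sanity-check this estimate, here is a small, illustrative experiment (func_iterations is a hypothetical helper, not part of the original example) that counts the loop’s iterations and compares them to sqrt(2n):

import math

def func_iterations(n):
    # Count how many times the while loop in Func(n) runs
    i, j, count = 1, 1, 0
    while j < n:
        i += 1
        j += i
        count += 1
    return count

# The iteration count grows roughly like sqrt(2n), i.e. O(sqrt(n))
for n in (100, 10_000, 1_000_000):
    print(n, func_iterations(n), round(math.sqrt(2 * n)))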

Example 3.

search 7 in [1, 3, 4, 7, 10, 12, 22, 25, 30]
  if 7 equals the middle of the array (10), return it
  else, since 7 is smaller than 10:
    search in [1, 3, 4, 7]
      if 7 is smaller than the middle of the sub-array ((3+4)/2 = 3.5):
        search in [1, 3]
      else, since 7 is bigger than 3.5:
        search in [4, 7]
          if 7 is smaller than 4, compare with 4
          else compare with 7: it matches,
            return it

Here we examine binary search, which takes a sorted array (by the way, this is a fine example of how sorting reduces a problem’s complexity) and, by comparing the searched value to the middle element in successive iterations, ultimately finds it (or doesn’t). It starts with N elements to search in the first iteration, then N/2, N/4 and so on, until the value either is found (either by hitting it in the middle of the current sub-array or by narrowing the range down to a single element, the one searched for) or isn’t. So the total running time is the number of steps it takes N to reach 1 through repeated division by 2:

N=9 → N=4 → N=2 → N=1 (halving at each step)

This number is the same as if we reverse the process and ask how many times we need to multiply 1 by 2 until we reach N:

N=1 → N=2 → N=4 → N=9 (doubling at each step)

That number of times is the logarithm: we seek k in the statement 2^k = N, i.e. k = log2(N). We will disregard here the base of the log and the fact that in this particular example we start with an odd number; neither matters for the Big-O estimate. So, in binary search, the worst-case execution time is O(log n).
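For reference, here is a minimal iterative binary search sketch in Python, assuming a sorted list; it performs exactly the halving described above:

def binary_search(arr, target):
    # Halve the search interval until target is found or the interval is empty
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid        # found: return the index
        elif target < arr[mid]:
            hi = mid - 1      # continue in the left half
        else:
            lo = mid + 1      # continue in the right half
    return -1                 # not found

print(binary_search([1, 3, 4, 7, 10, 12, 22, 25, 30], 7))  # prints 3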

Example 4.

def Func(n):
    count = 0
    for i in range(int(n / 2), n):  # runs n/2 times
        a = 1
        while a + n / 2 <= n:       # runs n/2 times
            b = 1
            while b <= n:           # runs log2(n) times
                count += 1
                b *= 2              # doubling b makes this loop logarithmic
            a += 1
    print(count)

Here we have three nested loops: one outer and two inner ones. The first two loops each execute n/2 times; the third one, the while b <= n loop, is log n because the line b *= 2 halves the remaining iterations, reducing its time from linear to logarithmic. We multiply the counts, as the loops are nested: (n/2)^2 * log n => O(n^2 log n).
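As a rough sanity check (an illustrative helper, not part of the original example), we can compare the count printed by Func with the closed-form estimate:

import math

def estimate(n):
    # outer loop: n - n//2 iterations; middle loop: n//2 iterations;
    # inner loop: floor(log2(n)) + 1 doublings of b until b > n
    return (n - n // 2) * (n // 2) * (math.floor(math.log2(n)) + 1)

# Func(64) prints 32 * 32 * 7 = 7168, and estimate(64) == 7168
print(estimate(64))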

Example 5.

int Func(int n)
{
    if (n <= 1)
    {
        return 1;
    }
    return Func(n - 1) + Func(n - 1);
}

Calling Func(3) corresponds to:

Func(3) = Func(2) + Func(2); Func(2) = Func(1) + Func(1) = 2; therefore Func(3) = 2 * Func(2) = 4, Func(4) = 8, Func(5) = 16 and so on. We have a clear example of geometric progression: this is a recursive function that spawns two instances of itself at every level, so the number of calls grows as the powers of 2. That number of calls defines its complexity, which is 2^N => O(2^N).
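A quick way to see the 2^N growth is to instrument the function with a call counter; the following Python sketch (a hypothetical variation, for illustration only) counts the invocations:

calls = 0

def func(n):
    # Same recursion as above, but counting how many times it is invoked
    global calls
    calls += 1
    if n <= 1:
        return 1
    return func(n - 1) + func(n - 1)

for n in range(1, 6):
    calls = 0
    func(n)
    print(n, calls)  # 1, 3, 7, 15, 31 -> roughly doubling, i.e. O(2^n)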

Example 6.

def Func(n):
    count = 0
    if n <= 0:
        return 0
    j = 1
    for i in range(0, n):  # runs n times
        while j < n:       # j is never reset: at most n-1 increments in total
            j += 1
            count += 1
    print(count)

Because j is not reset between iterations of the for loop, the inner while loop performs at most n-1 increments over the whole run: the outer loop contributes n constant-time checks and the inner loop n-1 steps in total, so the function executes in n * const. time => O(n).
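As a quick illustration, assuming the Func above is defined, calling it for growing n makes the linear behavior visible:

# j persists across the outer iterations, so the while loop does n-1 total steps
for n in (10, 100, 1000):
    Func(n)  # prints 9, 99, 999 -> count grows linearly with n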

Example 7.

void ArrayPrint(int[] arrayA, int[] arrayB)
{
    for (int i = 0; i < arrayA.Length; i++)
    {
        for (int j = 0; j < arrayB.Length; j++)
        {
            for (int k = 0; k < 100000; k++)
            {
                Console.WriteLine(arrayA[i] + "," + arrayB[j]);
            }
        }
    }
}

Here we have a variation of example 1, with the addition of one more inner loop that iterates up to a constant number. Regardless of its size, that loop’s contribution is constant, so we discard it and the total running time remains O(A*B).

Example 8.

def Func(n):
    for i in range(0, n // 4):  # executes n/4 times
        j = 1
        while j <= n:           # executes n/4 times (step of 4)
            j += 4
            print("*")

Total execution time: (n/4) * (n/4) = n^2/16 => O(n^2)
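A small counting variant (func_count is a hypothetical helper, for illustration only) confirms the quadratic growth:

def func_count(n):
    # Same loops as above, counting the prints instead of performing them
    count = 0
    for i in range(0, n // 4):
        j = 1
        while j <= n:
            j += 4
            count += 1
    return count

for n in (40, 400, 4000):
    print(n, func_count(n), (n // 4) ** 2)  # count == (n/4)^2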

Example 9.

Which of the following have time complexity O(n)?

  1. O(n+p), where p<n/2
  2. O(2n)
  3. O(n+logn)
  4. O(n+m)

Answer: Cases 1, 2 and 3 are all O(n). Case 1: since p < n/2, the contribution of p is guaranteed to be smaller than that of n, so we can drop it. Case 2: we can drop the constant 2, as already mentioned. Case 3: log n will always be smaller than n, so we drop it. Case 4 can’t be simplified, as we don’t know anything about the relationship between n and m, so its complexity stays as stated.

Example 10.

count = 0
def Logs(n):
    i = 1
    global count
    while i <= n:      # i doubles: log2(n) iterations
        j = n
        while j > 0:   # j halves: log2(n) iterations
            j = j // 2
            count += 1
        i *= 2
    return count

Here, the outer loop advances by doubling i, which halves the number of remaining executions, giving log n iterations, as discussed in the previous examples. Similarly, the inner loop halves j each time and so also runs in logarithmic time. Multiplying the nested loops, the total complexity is O(log^2 n).
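As a sanity check (illustrative only), the value returned by Logs(n) should match (floor(log2(n)) + 1)^2, which grows as O(log^2 n):

import math

# Expected counts: 25 for n=16, 121 for n=1024, 441 for n=2^20;
# compare with Logs(n), resetting the global count between calls
for n in (16, 1024, 2 ** 20):
    k = math.floor(math.log2(n)) + 1
    print(n, k * k)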

Conclusion

The concept of Big-O notation may seem somewhat too abstract at first, but by analyzing simple algorithm implementations one gets a grasp of its basic principles. Having worked through a handful of such examples, we should be in a position to draw conclusions about the worst-case time complexity of a given algorithm’s implementation and to compare it with another one. It is a term often found in computer science books, so developers should at least understand the basic concepts behind it.

This post is licensed under CC BY 4.0 by the author.