Space and Time Complexity

‘Survival of the fittest’ is commonly used to denote progress measured by strength or success. The phrase applies to programming too. We are often unaware of what happens behind the scenes when our program runs, and there is usually more than one way to solve a problem, so we need to choose the best algorithm and optimize our code. That is why, while writing a program, we analyze its space and time complexity.
The actual running time of a program depends on many factors: hardware, operating system, processor, compiler, and so on. To compare algorithms independently of these factors, we analyze how the running time grows with the size of the input rather than measuring execution time directly.

Let us understand some basic terminologies.

Space complexity of an algorithm/program is the total space it uses as a function of the input size. It includes both the space used by the input and the temporary/run-time (auxiliary) space required by the program.
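For example (the function names below are our own, chosen for illustration): summing an array needs only a fixed number of extra variables regardless of the input size, while building a reversed copy needs extra memory proportional to the input.

```cpp
#include <vector>

// O(1) auxiliary space: only a fixed number of extra variables,
// no matter how large v is.
long long sumAll(const std::vector<int>& v) {
    long long total = 0;
    for (int x : v) total += x;
    return total;
}

// O(n) auxiliary space: allocates a second array of the same size,
// so the extra memory grows linearly with the input.
std::vector<int> reversedCopy(const std::vector<int>& v) {
    return std::vector<int>(v.rbegin(), v.rend());
}
```

Both functions do O(n) work, but their auxiliary space differs: O(1) for the first, O(n) for the second.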

Time complexity of an algorithm quantifies the amount of time taken by the algorithm to run as a function of the length of the input. It is expressed using the order of growth.
E.g. the time complexity of a loop that runs up to n is generally O(n).

Order of growth describes how the execution time of a program depends on the length of the input. We use different asymptotic notations to describe this behaviour:
Big O, Omega and Theta notation.

Big O notation gives an upper bound: it represents a curve that bounds the algorithmic function from above. It is commonly used to express the worst-case time complexity of an algorithm.
Omega notation gives a lower bound: it represents a curve that bounds the algorithmic function from below. It is commonly used to express the best-case time complexity.
Theta notation gives a tight bound: the function is bounded both above and below by the same curve, up to constant factors. (It is sometimes loosely described as the ‘average’ case, but strictly it means the upper and lower bounds coincide.)
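A quick illustration of best versus worst case (the function name is our own): linear search makes one comparison when the target is at the front, and n comparisons when it is absent, so its running time is Ω(1) in the best case and O(n) in the worst case.

```cpp
#include <vector>

// Linear search: returns the index of target in v, or -1 if absent.
// Best case  (target at index 0): 1 comparison  -> Omega(1).
// Worst case (target not in v):   n comparisons -> O(n).
int linearSearch(const std::vector<int>& v, int target) {
    for (std::size_t i = 0; i < v.size(); ++i)
        if (v[i] == target) return static_cast<int>(i);
    return -1;
}
```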

For large values of ‘n’ (input size), the increasing order of Time Complexities is as follows:

O(1) < O(log(log n)) < O(√(log n)) < O(log n) < O(√n) < O(n) < O(n log n) < O(n√n) < O(n²) < O(n³) < O(nᵏ) < O(n^(log n)) < O(2ⁿ) < O(n!)

*O(1) denotes ‘constant’ time-complexity.

The inverse relation between time and space complexity is known as the space-time trade-off. When an algorithm is optimized to reduce its time complexity, this generally leads to an increase in its space complexity, and vice versa.
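A classic sketch of this trade-off (our own example): the naive recursive Fibonacci recomputes the same subproblems over and over, taking exponential time with almost no extra memory, while a tabulated version runs in linear time at the cost of a linear-size table.

```cpp
#include <vector>

// Exponential time (roughly O(2^n)), no extra table:
// the same subproblems are recomputed many times.
long long fibSlow(int n) {
    if (n < 2) return n;
    return fibSlow(n - 1) + fibSlow(n - 2);
}

// Linear time, but O(n) extra space: each subproblem is computed
// once and stored -- memory is traded for speed.
long long fibFast(int n) {
    std::vector<long long> table(n + 1, 0);
    for (int i = 1; i <= n; i++)
        table[i] = (i < 2) ? i : table[i - 1] + table[i - 2];
    return table[n];
}
```

Both return the same values; only the time/space balance differs.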

Let us understand how to find the time complexity of a simple algorithm with an example.

sum = 0;
for (int i = 1; i <= n; i++ )
    for (int j = 1; j <= i*i; j++)
        if (j % i ==0)
            for (int k = 0; k < j; k++)
                sum++;

To calculate time complexity, count how many times (at most) each statement runs. Here the input size is ‘n’.

  1. The outermost loop runs n times.
  2. The middle loop runs from j = 1 to j = i*i, i.e. up to n² times for a given i. Multiplying the loop bounds (n · n² · n²) gives a quick upper bound of O(n⁵) for the whole nest.
  3. However, the ‘if’ condition j % i == 0 holds only when j is a multiple of i, which happens for exactly i of the i² values of j. So the innermost loop runs only for j = i, 2i, …, i·i.
  4. For a fixed i, the innermost statement sum++ therefore runs i + 2i + … + i·i = i · i(i+1)/2 ≈ i³/2 times.
  5. Summing over i = 1 to n gives roughly n⁴/8 executions of sum++. (The if-check itself runs about n³ times in total, which is dominated by this.)

Hence O(n⁵) is a correct but loose upper bound; the tight time complexity of this loop nest is Θ(n⁴).
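We can check this analysis empirically. The sketch below (countOps is our own name) reproduces the loop nest and returns the final value of sum, which the closed form Σᵢ i · i(i+1)/2 predicts exactly.

```cpp
// Reproduces the loop nest from the example and returns how many
// times sum++ executed, so the Theta(n^4) analysis can be checked.
long long countOps(int n) {
    long long sum = 0;
    for (int i = 1; i <= n; i++)
        for (long long j = 1; j <= 1LL * i * i; j++)
            if (j % i == 0)
                for (long long k = 0; k < j; k++)
                    sum++;
    return sum;
}
```

For n = 2 this gives 1 + 6 = 7, and for n = 3 it gives 25, matching the formula: a polynomial of degree 4 in n, not degree 5.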

Let us take another example.

int count = 0;
for (int i = N; i > 0; i /= 2) 
    for (int j = 0; j < i; j++) 
        count++;

At first glance, it seems like the complexity is O(N log N):
N for the j-loop and log N for the i-loop. However, this is incorrect.

Consider the statement count++.
When i = N, the j-loop runs N times.
When i = N/2, it runs N/2 times.
When i = N/4, it runs N/4 times, and so on.
The total number of times count++ runs is the sum over all values of i (covering ALL combinations of i and j):
N + N/2 + N/4 + … + 1 < 2N (a geometric series).
So the time complexity is O(N).
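This, too, can be verified by counting (countHalving is our own helper name): the function below reproduces the loop and returns the final count, which always stays below 2N.

```cpp
// Reproduces the halving loop and returns how many times count++
// executed, to check the O(N) analysis.
long long countHalving(int N) {
    long long count = 0;
    for (int i = N; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count++;
    return count;
}
```

For N = 16 the count is 16 + 8 + 4 + 2 + 1 = 31, just under 2N.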

To sum up, here are a few examples for a better understanding of time complexities:

If there is a single loop and the loop variable increases linearly, the complexity is O(n), e.g. for (i = 0; i < n; i++) // This is O(n)

If the loop variable increases geometrically, the complexity is O(log n),
e.g. for (k = 1; k < n; k = k * 4) // O(log n)
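To see why, count the iterations (countGeometric is our own helper): k takes the values 1, 4, 16, …, so the loop runs about log₄ n times, which is O(log n).

```cpp
// Counts the iterations of the geometric loop k = 1, 4, 16, ...
// while k < n; the result grows like log base 4 of n.
int countGeometric(int n) {
    int iters = 0;
    for (long long k = 1; k < n; k *= 4)
        iters++;
    return iters;
}
```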

If there is a nested loop where one loop has a complexity of O(n) and the other O(log n), the overall complexity is O(n log n),
e.g. for (i = 0; i < n; i++) // O(n)
{
    for (j = 1; j < n; j = j * 3) // O(log n)
}
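Counting iterations confirms this as well (countNested is our own name): the inner loop runs about log₃ n times for each of the n outer iterations, for roughly n log₃ n operations in total.

```cpp
// Counts the iterations of the nested O(n log n) loop above:
// n outer iterations, each with about log base 3 of n inner ones.
long long countNested(int n) {
    long long ops = 0;
    for (int i = 0; i < n; i++)
        for (int j = 1; j < n; j = j * 3)
            ops++;
    return ops;
}
```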

There are other methods for finding the time complexity of different algorithms, which we will discuss in future posts, so stay tuned!

For any doubts or ambiguities, mail us at everythinggeeky.101@gmail.com

Don’t forget to Like, Comment and Share with your friends.

Stay Tuned for Everything Geeky. See you next week!

Cheers! 🙂

Team Bitxorbytes

Follow us on insta @bitxorbytes

cover image credits: https://www.csetutor.com/time-complexity-and-space-complexity-of-an-algorithm/ , https://courses.cs.washington.edu/courses/cse373/19sp/resources/math/asymptotic-analysis/
