March 18, 2020 || 6 min

- Data structures are used for storing and organizing data so that the required operations, like sorting and inserting, can be performed efficiently.
- An algorithm is a finite set of instructions designed to perform a specific task.
- Data structures and algorithms together make a program.

There are two types of complexity we analyze when we design an algorithm -

- Time Complexity tells us how much time an algorithm takes to complete.
- Space Complexity tells us how much storage an algorithm needs for completion.
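Time and space often trade off against each other. As a sketch (the function names here are illustrative, not from any library), here are two ways to check a list for duplicates: one slow but memory-frugal, one fast but memory-hungry.

```python
def has_duplicate_quadratic(items):
    """O(n^2) time, O(1) extra space: compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicate_linear(items):
    """O(n) time, O(n) extra space: trade memory for speed with a set."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both return the same answers; they differ only in how much time and memory they spend getting there.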

- **Worst case**: upper bound on running time for *any* input. T(n) = maximum time of an algorithm on any input of size **n**.

- **Average case**: expected running time over *all* inputs. T(n) = expected running time of an algorithm over all inputs of size **n**.

- **Best case**: lower bound on running time for *any* input. T(n) = minimum time of an algorithm on any input of size **n**.
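A quick sketch of how one algorithm can have different best and worst cases, using plain linear search (the function name is just illustrative):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if it is absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i   # best case: target is the first element -> 1 comparison
    return -1          # worst case: target is absent -> n comparisons
```

The same code runs in constant time when the target sits up front, and in linear time when it is missing; worst-case analysis is about the second scenario.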

Asymptotic notations are a very important part of this course, and they will really haunt you (>_<) if you don't understand them properly. Most of us face difficulties at first in understanding asymptotic notations; in my case, I read class notes, Stack Overflow, and this book before my mid-semester exam to understand them. The trouble usually starts when we move on to the o (little-oh) and ω (little-omega) notations. So, I have summarised all I have learned from the above resources to explain them in the best way possible. Pay attention y'all, or maybe take a break and come back with a fresh mind?

The asymptotic notations are of the following five types -

O-notation, pronounced "big-oh notation", is used to describe the asymptotic upper bound of an algorithm. Formally, it is defined as:

For a given function, g(n), we denote by O(g(n)) the set of functions,
O(g(n)) = { f(n): there exist **positive constants c and n_0** such that 0 <= f(n) <= c.g(n) for all n >= n_0 }.
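The definition is all about finding witnesses c and n_0. As a minimal sketch (a finite spot-check over a range of n, not a proof), we can test candidate constants numerically; the helper name is hypothetical:

```python
def holds_big_o(f, g, c, n0, n_max=1000):
    """Spot-check 0 <= f(n) <= c*g(n) for all n0 <= n <= n_max."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))


# f(n) = 2n + 10 is O(n): the constants c = 3, n0 = 10 witness the definition,
# since 2n + 10 <= 3n exactly when n >= 10.
print(holds_big_o(lambda n: 2 * n + 10, lambda n: n, c=3, n0=10))

# c = 2 can never work, because 2n + 10 > 2n for every n.
print(holds_big_o(lambda n: 2 * n + 10, lambda n: n, c=2, n0=10))
```

Only one pair (c, n_0) needs to exist for membership in O(g(n)); failing with one candidate c proves nothing by itself.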

Some common O notations are -

- O(1) - constant, independent of size n
- O(log n) - logarithmic
- O(n) - linear
- O(n log n) = O(log n!) - loglinear, quasilinear or linearithmic
- O(n^2) - quadratic
- O(n^c) - polynomial or algebraic
- O(c^n) where c > 1 - exponential
- O(n!) - factorial
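To get a feel for how far apart these classes really are, here is a small sketch that evaluates one sample function from each class at the same n (the dictionary and its labels are my own illustration):

```python
import math

# One representative function per growth class, evaluated at the same n
# to show how quickly the classes pull apart.
growth = {
    "O(1)":       lambda n: 1,
    "O(log n)":   lambda n: math.log2(n),
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)":     lambda n: n ** 2,
    "O(2^n)":     lambda n: 2 ** n,
}

for name, f in growth.items():
    print(f"{name:>10} at n=20: {f(20):,.0f}")
```

Even at n = 20, the exponential entry is already six orders of magnitude beyond the quadratic one, which is why the constant factors hidden by O-notation rarely matter across classes.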

Ω-notation, pronounced "big-omega notation", is used to describe the asymptotic lower bound of an algorithm. Formally, it is defined as:

For a given function, g(n), we denote by Ω(g(n)) the set of functions,
Ω(g(n)) = { f(n): there exist **positive constants c and n_0** such that 0 <= c.g(n) <= f(n) for all n >= n_0 }.

We talked about O notation above. The disadvantage of describing the complexity of an algorithm in terms of O notation alone is that if we say an algorithm runs in f(n) = O(n^2), that algorithm could actually run in linear time, because O only gives an upper bound.

Here comes the role of Θ notation. Usually, we want to be able to say that an algorithm runs no slower and no faster than a particular function. This is called a **tight bound**. Hence, we define Θ notation as -

For a given function, g(n), we denote by Θ(g(n)) the set of functions,
Θ(g(n)) = { f(n): there exist **positive constants c_1, c_2, and n_0** such that 0 <= c_1.g(n) <= f(n) <= c_2.g(n) for all n >= n_0 }.
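The tight bound sandwiches f(n) between two scaled copies of g(n). As a sketch, we can spot-check a classic example, f(n) = n^2/2 − 3n = Θ(n^2), with the witnesses c_1 = 1/14, c_2 = 1/2, n_0 = 7 (the helper name is hypothetical, and a finite check is not a proof):

```python
def holds_theta(f, g, c1, c2, n0, n_max=1000):
    """Spot-check 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n0 <= n <= n_max."""
    return all(0 <= c1 * g(n) <= f(n) <= c2 * g(n)
               for n in range(n0, n_max + 1))


# f(n) = n^2/2 - 3n is Θ(n^2), witnessed by c1 = 1/14, c2 = 1/2, n0 = 7:
# n^2/2 - 3n >= n^2/14 rearranges to n >= 7, and n^2/2 - 3n <= n^2/2 always.
print(holds_theta(lambda n: n * n / 2 - 3 * n, lambda n: n * n,
                  c1=1 / 14, c2=1 / 2, n0=7))
```

Note that at n = 7 the lower bound is met with equality (both sides are 3.5), which is exactly why n_0 = 7 is the smallest threshold that works for this choice of c_1.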

- o-notation, pronounced "little-oh notation", is used to denote an upper bound that is not **asymptotically tight**.
- **Asymptotically tight**: We say a bound f(n) = O(g(n)) is asymptotically tight if f(n) = Θ(g(n)), that is, if also g(n) = O(f(n)). The bound 2n^2 = O(n^2) is asymptotically tight since n^2 = O(2n^2) also holds, but the bound 2n = O(n^2) is not, as n^2 != O(2n) because there is no constant c for which c.2n >= n^2 for all n >= n_0. So 2n = o(n^2) but 2n^2 != o(n^2).
- For a given function, g(n), we denote by o(g(n)) the set of functions
o(g(n)) = { f(n): for any positive constant **c > 0**, there exists a constant **n_0 > 0** such that 0 <= f(n) < c.g(n) for all n >= n_0 }.
- The implication of this is that f(n) becomes insignificant relative to g(n) as n approaches infinity: lim (n→∞) f(n)/g(n) = 0.
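The limit view of little-oh can be sketched numerically by watching the ratio f(n)/g(n) for the two bounds discussed above:

```python
# f(n) = 2n is o(n^2): the ratio f(n)/g(n) = 2/n shrinks toward 0.
# f(n) = 2n^2 is NOT o(n^2): the ratio is stuck at the constant 2.
for n in (10, 100, 1000, 10000):
    print(n, (2 * n) / n ** 2, (2 * n ** 2) / n ** 2)
```

The first ratio can be driven below any c > 0 by taking n large enough; the second can never get below c = 2, which is exactly what disqualifies 2n^2 from o(n^2).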

So the main differences between Big-O and little-o are that in little-o the inequality must hold for *every* positive constant c rather than just some particular c, and that the inequality is strict (< instead of <=). Apart from these, some other differences (picked from here) are -

- In Big-O, it is only necessary that you find a particular multiplier c for which the inequality holds beyond some minimum n. In little-o, there must be a minimum n after which the inequality holds no matter how small you make c, as long as it is positive.
- Although somewhat counter-intuitively, Little-oh is the stronger statement. There is a much larger gap between the growth rates of f and g if f ∈ o(g) than if f ∈ O(g).
- One illustration of the disparity is this: f ∈ O(f) is true, but f ∈ o(f) is false.
- Therefore, Big-Oh can be read as "f ∈ O(g) means that f's asymptotic growth is no faster than g's", whereas little-o can be read as "f ∈ o(g) means that f's asymptotic growth is strictly slower than g's". It's like <= versus <.

- ω-notation, pronounced little-omega notation, is used to denote a lower bound that is not asymptotically tight.
- For a given function, g(n), we denote by ω(g(n)) the set of functions
ω(g(n)) = { f(n): for any positive constant **c > 0**, there exists a constant **n_0 > 0** such that 0 <= c.g(n) < f(n) for all n >= n_0 }.
- The implication of this is that g(n) becomes insignificant relative to f(n) as n approaches infinity: lim (n→∞) f(n)/g(n) = ∞.
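Mirroring the little-oh sketch, the little-omega limit can be watched numerically:

```python
# f(n) = n^2 / 2 is ω(n): the ratio f(n)/g(n) = n/2 grows without bound.
# f(n) = n^2 / 2 is NOT ω(n^2): the ratio is stuck at the constant 1/2.
for n in (10, 100, 1000, 10000):
    print(n, (n ** 2 / 2) / n, (n ** 2 / 2) / n ** 2)
```

The first ratio eventually exceeds any c > 0, which is what membership in ω(n) demands; the second never climbs past 1/2, so n^2/2 is Ω(n^2) but not ω(n^2).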

We can analyze the differences between little and big omega in a similar manner as done above.

Some relational properties of the notations are given below -

- **Transitivity**: f(n) = Θ(g(n)) and g(n) = Θ(h(n)) imply f(n) = Θ(h(n)); the same holds for O, Ω, o, and ω.
- **Reflexivity**: f(n) = Θ(f(n)), f(n) = O(f(n)), and f(n) = Ω(f(n)).
- **Symmetry**: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
- **Transpose symmetry**: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)), and f(n) = o(g(n)) if and only if g(n) = ω(f(n)).

Thanks for bearing with me, this was a long one for all of us. I hope you understood. If you have any doubts/suggestions/reviews, please let me know on twitter. In the coming articles, we will use the notations described above, so if you are lost about what these notations are for, read the next article.