
Complexity


When we talk about complexity, we mean:

"How long does an algorithm take for a given input?" (Time complexity) OR

"How much memory is needed to perform this algorthim for a given input?"(Space complexity)

These days memory is abundant, so we don't usually consider space complexity as important as time complexity, which in some cases can mean the difference between taking less than a second and taking thousands of years!

What we actually measure

What we want is a complexity class for a given algorithm, not the running time of a particular implementation. This means that instead of just running the program and timing how long it takes, we express the time complexity in terms of the size of the input.

So we say "for a given input n, the time complexity is T(n)", or "T in terms of n". We get to this point by counting the number of operations that happen for an input of size n.

 

The following can be considered as operations:

  • Assignment to a variable.

  • Arithmetic operations (+, -, *, /, etc.).

  • Incrementing or decrementing.

  • Searching for a value in an array.

  • Comparing two things together.

 

So we look through the algorithm and count every time one of these operations occurs. We then add the counts together, but keep only the highest class of value: O(n) + 3 + 1 = O(n). Since O(n) is the highest class, we drop the smaller terms.
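As a minimal sketch (the function and the exact counts are illustrative, not from the original), here is how the counting works for a simple loop:

```python
def sum_list(values):
    total = 0              # 1 operation: assignment
    for v in values:       # loop body runs n times
        total = total + v  # 2 operations per pass: addition + assignment
    return total

# For an input of size n: T(n) = 2n + 1 operations.
# Dropping the constant term and the coefficient leaves O(n).
```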

Similarly: O(n) + O(n^2) = O(n^2), and O(n log n) + O(log n) = O(n log n).
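As a minimal sketch (the function name is illustrative, not from the original), two pieces of code run in sequence still reduce to the dominant class:

```python
def process(values):
    n = len(values)
    for v in values:      # O(n): runs n times
        print(v)
    for i in range(n):    # O(n^2): n * n passes in total
        for j in range(n):
            print(i, j)

# Total work is O(n) + O(n^2); the quadratic term dominates,
# so the overall time complexity is O(n^2).
```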

Amortised complexity

Amortised complexity is a method of analysing algorithms that considers the cost of an entire sequence of operations, rather than a single operation in isolation. An occasionally expensive operation can be averaged out over the many cheap operations around it, giving a lower cost per operation overall.
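The classic illustration is appending to a dynamic array that doubles its capacity when full: most appends are O(1), the occasional resize is O(n), but spread over the whole sequence each append is amortised O(1). A toy sketch (this class is illustrative, not a real library API):

```python
class DynamicArray:
    """Toy dynamic array that doubles capacity when full (illustrative)."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, item):
        if self.size == self.capacity:
            # Rare O(n) step: copy everything into a bigger array.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.data = new_data
        # Common O(1) step: write into the next free slot.
        self.data[self.size] = item
        self.size += 1

# Over n appends, the copies cost at most 1 + 2 + 4 + ... < 2n in total,
# so each append is amortised O(1) even though a single append can
# occasionally cost O(n).
```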

Worst case vs average case complexity

When considering which algorithm or data structure to choose, we need to decide which is more important. If the data is usually small and only very rarely huge (so a slow run is rare), then an algorithm with better average case complexity is probably the better choice.

Conversely, if the inputs always involve a lot of data, then it is probably better to choose a solution with better worst case complexity.

 

Most of the time, saving time overall is what matters, so it is better to go with better average case complexity.

However, if the situation is time critical (like in a space shuttle or another hazardous situation), then it may be unacceptable for the algorithm ever to perform slowly, so we may have to choose better worst case complexity.
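As a hedged illustration (the instrumented helper below is hypothetical, not from the original), counting comparisons in a linear search shows how best, average, and worst cases diverge; for some algorithms, such as quicksort (average O(n log n), worst O(n^2)), the cases even fall into different complexity classes:

```python
def linear_search(values, target):
    """Return (index or -1, comparisons made) — instrumented for counting."""
    comparisons = 0
    for i, v in enumerate(values):
        comparisons += 1
        if v == target:
            return i, comparisons
    return -1, comparisons

data = list(range(100))
print(linear_search(data, 0))    # best case: 1 comparison
print(linear_search(data, 99))   # worst case: n = 100 comparisons
print(linear_search(data, -1))   # absent target: also n comparisons

# Averaged over all positions, a present target takes about n/2
# comparisons — still O(n), but the worst and average cases clearly differ.
```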

Big Oh notation for time complexity

We describe how fast or slow an algorithm is, relative to its input size, using "Big Oh" notation.

Here are some example complexity classes (fastest to slowest):

O(1) - Constant - The time taken does not depend on the input n.

O(log n) - Logarithmic - with each step, the size of the remaining input halves. This is considered highly efficient, and is why searching a binary search tree is much faster than searching an unordered tree (see the binary search sketch after this list).

O(n) - Linear - the time taken is proportional to the input n.

O(n^2) (O of n squared) - Quadratic - the time taken is proportional to the square of the input n.

O(n^3) (O of n cubed) - Cubic - the time taken is proportional to the cube of the input n.

O(n!) - Factorial - very slow indeed as n increases.
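To see the halving in action, here is a minimal binary search sketch over a sorted list (illustrative code that counts how many times the search range halves):

```python
def binary_search(sorted_values, target):
    """Return (index or -1, steps) — each step halves the search range."""
    low, high = 0, len(sorted_values) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if sorted_values[mid] == target:
            return mid, steps
        elif sorted_values[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps

data = list(range(1_000_000))
print(binary_search(data, 999_999))  # roughly log2(1,000,000) ≈ 20 steps
# A linear O(n) scan of the same list could take up to 1,000,000 steps.
```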
