Principles of Big O Notation

Melissa Guachun
Dec 31, 2021


Photo by Icons8 Team on Unsplash

For my birthday I was gifted “Cracking the Coding Interview”, possibly one of the most appropriate gifts any new programmer could receive. I couldn’t wait to elevate my understanding of Big O notation, so I figured I would review the principles to get started!

I've kept my explanations brief so that other beginners can follow these concepts too.

Big O notation describes how an algorithm's runtime scales with respect to the size of its input.

Gayle Laakmann McDowell references an experiment in which students in a remote part of the world wanted to test whether information would travel faster by carrier pigeon or over the internet. To their surprise, the carrier pigeon beat their slow internet connection at carrying data from one point to another. McDowell identifies the method of transferring data via carrier pigeon as an example of "Constant Time," O(1).

O(1), constant time, means the runtime does not change with the size of the input. No matter how much data is on the thumb drive, it will not take the carrier pigeon any longer to deliver it to its destination.

The method of transferring data via the students' slow internet is referred to by McDowell as "Linear Transfer Time," O(N). An internet transfer scales linearly with the amount of data: we can assume that twice the data will take roughly twice as long to reach its destination.

McDowell's example illustrates how Big O notation gives us insight into how runtime scales with respect to the input. This helps us reason about the scalability of our algorithms and build more time-efficient solutions.
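To make the two runtimes concrete, here is a minimal sketch of my own (not from the book): looking up a single element takes the same amount of time no matter how large the array is, while summing every element scales with the array's length.

function firstElement(array) {
  return array[0]; // O(1): one lookup, regardless of the array's size
}

function sumElements(array) {
  let total = 0;
  for (const value of array) { // O(n): one step per element,
    total += value;            // so twice the data takes about twice the time
  }
  return total;
}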

4 Important Rules of Big O Notation:

1. Different steps get added:

function something() {
  doFirstStep();  // O(a)
  doSecondStep(); // O(b)
}
// Together, this would be O(a + b): sequential steps add.

2. Drop constants:

function minMax1(array) {
  let min = Infinity;
  let max = -Infinity;
  for (const e of array) {   // O(n): first pass finds the min
    min = Math.min(e, min);
  }
  for (const e of array) {   // O(n): second pass finds the max
    max = Math.max(e, max);
  }
  return [min, max];
}

function minMax2(array) {
  let min = Infinity;
  let max = -Infinity;
  for (const e of array) {   // O(n): a single pass updates both min and max
    min = Math.min(e, min);
    max = Math.max(e, max);
  }
  return [min, max];
}

The functions minMax1 and minMax2 do the same work, just organized differently: minMax1 finds the min and the max in two separate passes over the array, while minMax2 finds both in a single pass. This is why it is important to drop constants in Big O. Even though minMax1 makes two O(n) passes, we do not add them and call it O(2n). Instead, we drop the constant and describe both functions as O(n).

Do not describe things as O(2n) and so on; in Big O we are only interested in roughly how things scale, and constant factors do not change how the runtime grows.
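As a quick sanity check (my own sketch, not from the book), you can count how many element visits minMax1 makes: two passes means roughly 2n visits, and doubling n doubles that count. The constant 2 never changes how the count scales, which is exactly what O(n) captures.

function countVisits(array) {
  let visits = 0;
  for (const e of array) visits++; // first pass, like finding the min
  for (const e of array) visits++; // second pass, like finding the max
  return visits;                   // roughly 2n visits in total
}

// Doubling the input size doubles the visit count: linear scaling.
console.log(countVisits(new Array(1000).fill(0))); // 2000
console.log(countVisits(new Array(2000).fill(0))); // 4000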

3. Different inputs → different variables:

function intersectionSize(arrayA, arrayB) {
  let count = 0;
  for (const a of arrayA) {     // runs once per element of arrayA
    for (const b of arrayB) {   // runs once per element of arrayB, for each a
      if (a === b) {
        count = count + 1;
      }
    }
  }
  return count;                 // total work: O(a * b)
}

This function takes two different inputs, arrayA and arrayB, so each gets its own variable. The Big O is O(a*b), where a is the length of arrayA and b is the length of arrayB: for every element of arrayA, the inner loop walks all of arrayB, so we multiply the two lengths.
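As a small usage sketch of my own (not from the book), calling the function on a 3-element and a 4-element array runs the inner comparison 3 * 4 = 12 times:

const arrayA = [1, 2, 3];
const arrayB = [2, 3, 5, 7];
console.log(intersectionSize(arrayA, arrayB)); // 2 (the shared values 2 and 3)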

4. Drop non-dominant terms:

function hello(array) {
  let max = -Infinity;
  for (const a of array) {      // O(n): single pass to find the max
    max = Math.max(a, max);
  }
  console.log(max);
  for (const a of array) {      // O(n^2): nested loops over the same array
    for (const b of array) {
      console.log(a, b);
    }
  }
}

The function above does O(n) work followed by O(n²) work, so strictly the runtime is O(n + n²). When dropping non-dominant terms, we keep only the fastest-growing term: for large inputs it is the n² term that dictates how the runtime changes, so we simply call this O(n²).
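A quick numeric sketch of my own (not from the book) shows why the smaller term can be ignored: as n grows, n² dwarfs n, so n + n² and n² are practically the same.

for (const n of [10, 100, 1000]) {
  // n + n^2 versus n^2 alone: the linear term barely registers as n grows
  console.log(n, n + n * n, n * n);
}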

Conclusion:
I'm still wrapping my mind around these concepts while leaving space for my understanding to change as I progress through the book. I'm forever learning, so if there are any mistakes in my explanations, please feel free to reach out!

Resources:

“Cracking the Coding Interview” by Gayle Laakmann McDowell
