A not-so-intimidating look at Big O
I have begun the search for my first job in technology after graduating from a coding bootcamp. I have built multiple projects and solved plenty of “coding problems,” and whenever my code worked I was proud and moved on. It turns out that, as I start to prepare for technical challenges (an area I already need to improve), solving the problem is not good enough. Space and time are on the line. If your program solves things in a “brute-force” way, then with enough data it will be slow and lame. To better prepare for technical interviews I looked into Big O notation, and every explanation of it seemed…overly complicated? This is my attempt to turn Big O notation into something a human can read and understand.
What is Big O notation? Why?
This seems like a good place to start. In programming there are infinite ways to solve a problem. Let’s say we have a real-life problem: you need to give something to your neighbor. There are many ways the item can end up at their house. You could mail it to them, but it is probably quickest to just walk next door and hand it to them. This is what I learned the hard way with programming.
Just because your code works doesn’t mean it is done.
When you are developing an application and solving a problem, you should find the most efficient solution. We find the most efficient solution with Big O notation by imagining our code will have 1 billion pieces of data to go through.
Here I will be going over the Big O “labels.” I like to think of them as labels that basically say, “This function is THIS speed.”
Some vocabulary we need to understand:
n: input size. I like to think of n as the number of things being plugged into the solution.
Basic computer steps: Operations the computer has to perform. This could be addition, subtraction, assigning variables, etc.
Drop constants to simplify: this is hard to explain now but will be shown later in the blog. Basically, to decide which label we give to an algorithm, we just look at the big picture and drop the integers that are not relevant in the end. Examples are down below.
Labels in order of speed, from fastest to slowest
O(1) or Constant Time
We label something as O(1) when the solution doesn’t require n (the input) at all. The solution will always take the same number of computer steps.
x = 1 + 1
With this algorithm we will always add once and assign the result to x once. It does not depend on any amount of data. Say we have multiple lines of code. You add the lines together, then “ignore constants.”
x = 1 + 1 //Big O(1)
y = 2 + 1 //This one is Big O(1) as well
z = x + y //Also Big O(1)
To calculate, we add the labels up: O(1) + O(1) + O(1), giving us 3*O(1). This is where we “ignore the constant.” We don’t care that there are 3 things firing off incredibly fast combined with addition. The whole algorithm is O(1), and that is the only thing that is important in the end.
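To make this concrete, here is a minimal Python sketch of a constant-time operation. The function name `get_first` is my own hypothetical example, not from any library:

```python
# A hypothetical O(1) function: the number of steps never changes,
# no matter how large the input list is.
def get_first(items):
    return items[0]  # one indexing step, one return: constant time

print(get_first([7, 8, 9]))           # 7
print(get_first(list(range(10**6))))  # 0 -- same amount of work either way
```

Whether the list has 3 items or a million, the computer does the same tiny amount of work.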
O(n) or Linear Time
We use this label for an algorithm that uses basic computer steps, but those steps happen as many times as you use n. The main example of this is looping and iteration through data.
If we wanted to loop through our n and do computer steps each time, and n was 1,000, we would do those computer steps 1,000 times. If n was 1 billion, we would perform them 1 billion times. The key takeaway here is that the amount of time scales in sync with however many inputs (n) we use.
for every number in 0 to n
1 + number
As you can see, the number of times your computer needs to perform “1 + number” depends on how large n is. It is important to note that with loops we multiply instead of adding like before. We know from earlier that the “1 + number” inside the loop is O(1). When we combine these, it is n*O(1), or simply O(n). One more example of O(n):
friends = 0 //Big O(1)
for every person in n //Big O(n)
friends = friends + 1 //Big O(1)
The total is the sum of these two pieces: O(1) + O(n). The O(1) is not important to the speed of this solution in the end, so we drop it, just like the constant, to give us the simple label O(n).
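The friends example above can be written as a runnable Python sketch (the function name `count_friends` is hypothetical, just for illustration):

```python
# A hypothetical O(n) function: one pass through the data,
# so the step count grows in sync with len(people).
def count_friends(people):
    friends = 0                # O(1): one assignment
    for person in people:      # loop body runs n times
        friends = friends + 1  # O(1) work per iteration
    return friends             # total: O(1) + n * O(1) = O(n)

print(count_friends(["Ann", "Ben", "Cam"]))  # 3
```

Double the length of the list and you double the work: that is linear time.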
O(n²) or Quadratic Time
This will be the last label I describe in this blog post. It happens mostly when you loop inside of a loop, and should be avoided if possible. When working on a small amount of data you will not notice how slow these solutions can be, because computers are so powerful. However, when you start working with large n’s, your solution will get out of hand quickly. The best way to explain O(n²) in words is with this example: say you are hosting a party and you want everyone to know everyone else’s name. The O(n²) way to handle this would be: one at a time, each person (n) asks every person at the party (n) their name, and when they are done, the next person asks everyone their names all over again. If you are like me and only have 3 people at your party, this isn’t so bad, but if you have 100 people, or 239582305 people, it would be the worst party ever. Try to avoid these solutions.
//Pseudocode
for number in 0 to n //O(n)
for otherNumber in 0 to n //O(n)
number + otherNumber
As I mentioned before, inside nested loops we multiply the labels together, so we do O(n) * O(n), or simply O(n²).
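The party scenario above can be sketched in Python as well. The function `count_introductions` is a made-up name for this example:

```python
# A hypothetical O(n^2) "party introductions" sketch: every guest
# asks every guest their name, so the work is n * n pairs.
def count_introductions(guests):
    introductions = 0
    for asker in guests:         # outer loop: runs n times
        for other in guests:     # inner loop: runs n times per outer pass
            introductions += 1   # O(1) per pair
    return introductions         # n * n = O(n^2) total steps

print(count_introductions(["Ann", "Ben", "Cam"]))  # 9, i.e. 3 * 3
```

Notice how 3 guests already means 9 introductions; 100 guests would mean 10,000. That is quadratic growth.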
These are just the 3 basic labels, and I plan to continue learning and writing more efficient code for my projects and future employer.
For brevity, this blog focuses on speed (time) rather than space.
I just wanted to list a few more examples of cutting “constants”:
O(2n) = O(n)
O(500) = O(1)
O(13n²) = O(n²)
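Here is one more small Python sketch showing why O(2n) becomes O(n) in practice. The function `min_and_max` is a hypothetical example of my own:

```python
# A hypothetical sketch: two separate passes over the data is
# 2n steps, i.e. O(2n) -- but we drop the constant and call it O(n),
# because doubling n still doubles the work either way.
def min_and_max(numbers):
    smallest = min(numbers)   # first pass: n steps
    largest = max(numbers)    # second pass: n more steps
    return smallest, largest  # 2n total steps, still labeled O(n)

print(min_and_max([4, 1, 9, 2]))  # (1, 9)
```

Two passes or one, the growth rate is the same, and the growth rate is all Big O cares about.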
Here is a graph that shows the speed at which these labels perform: the lower the line, the faster it is.
Thank you for reading and good luck on your problem solving!