Big Boy Notation

Roman Opalacz
6 min read · Jan 5, 2021


I can remember when I was excited just to see my code work. It gave the desired output. That was the first metric I used to evaluate my code, and it’s remained the most important one, but many other layers of evaluation have since been added: code length, readability, conventionality; and now, complexity.

Big O notation is a shorthand way to write the upper bound, or worst case, for the rate at which an algorithm’s complexity grows as its input grows. So if a function takes an array of n items as input, and it’s able to execute all its operations on that array in x time, what happens when we double the length of the input array? Does it take twice as long to run? Less? More? Much, much more? Big O notation indicates what that rate of change is.

Big O is written as a capital O followed by parentheses containing the mathematical relationship. So a function that runs in linear time is written as O(n), while a function that runs in quadratic time is written as O(n²).

Reducing the time complexity of an algorithm is generally a good thing, although it’s worth noting that algorithms tend to become more logically complex to implement in exchange for a more efficient runtime. That logical complexity has its own cost: it often means more lines of code, which introduces more potential for bugs and makes troubleshooting harder. That aside, it’s still important to understand what kind of runtime you can expect from your algorithms, and how those algorithms could be refactored to reduce that runtime.

With that in mind, I want to discuss the basics of Big O complexity as it pertains to JavaScript. I’m going to go over how common JS operations relate to complexity, and talk about some basic strategies for reducing it.

Constant time operations, O(1), are the simplest and add the least complexity to an algorithm. The term ‘constant’, in this context, means that the cost stays the same and does not increase in proportion to the input. As inputs become large enough, the impact of these operations becomes negligible, which is why they are represented with a ‘1’. These kinds of operations include mathematical operations like addition and multiplication, comparisons, and assignments to variables, array indices, or object properties. The array methods .push() and .pop() are also constant time.
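For example, a function made up entirely of constant time operations does the same amount of work no matter how large its input is. Here’s a minimal sketch:

```javascript
function constantDemo(arr) {
  const first = arr[0];      // array index access: O(1)
  const doubled = first * 2; // arithmetic: O(1)
  arr.push(doubled);         // adding to the end: O(1)
  return doubled > 10;       // comparison: O(1)
}
```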

*not a very useful function, but it runs fast*

Linear time operations O(n) scale in direct proportion to the size of the input. A function that loops over every item in an array has linear time complexity. For-loops and while loops are common ways a function might iterate over each item in an input like an array.

Keep in mind that Big O notation describes the relationship between the size of the input and the work the algorithm does; just because an algorithm has linear time does not necessarily mean its input is an array or some other data structure with elements. Running a loop a number of times equal to the length of an input array is a linear relationship, but running a loop a number of times equal to an input integer also creates a linear relationship, as in the two sketches below.
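Both versions are rough sketches; the point is only the shape of the loop:

```javascript
// a loop driven by a plain integer input: still O(n)
function countTo(n) {
  for (let i = 1; i <= n; i++) {
    console.log(i);
  }
}

// a loop driven by an array's length: O(n)
function printAll(arr) {
  for (const item of arr) {
    console.log(item);
  }
}
```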

*notice the top example takes just an integer input but still has a linear time relationship*

Many array methods have linear relationships with array length: .reverse(), .filter(), .map(), .reduce(), .find(), etc. All of these have O(n) time, as they have the potential to do an operation on every item in the array (.sort() is even costlier, typically O(n log n)). Even though a method like .includes() stops iterating after it finds its first match, it still has a worst case where it compares all the way to the last item, and since Big O cares about the upper bound, or worst case, it still has O(n) time.
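A quick sketch of that best case versus worst case with .includes():

```javascript
const letters = ['a', 'b', 'c', 'd'];

letters.includes('a'); // lucky: one comparison and done
letters.includes('z'); // worst case: checks all n elements before returning false
```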

The methods .shift() and .unshift() also have an O(n) relationship, since removing from or adding to the front of an array requires shifting the index of every subsequent element. In fact, inserting data anywhere in an array besides the end incurs the cost of shifting all subsequent index values. Meanwhile, as mentioned earlier, .push() and .pop() are O(1).
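To keep those costs straight, here’s a side-by-side sketch:

```javascript
const nums = [1, 2, 3, 4];

nums.push(5);    // O(1): nothing else moves
nums.pop();      // O(1): nothing else moves
nums.unshift(0); // O(n): every existing element shifts up one index
nums.shift();    // O(n): every remaining element shifts back down
```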

Linear time in algorithms is… fine. It’s generally the result of iterating, and iterating is an inevitable part of programming. The thing to keep in mind is that any operation you put inside an iterative code block multiplies with the loop itself, since that code runs on every iteration. This is what leads to algorithms with quadratic time or worse.

Quadratic time, or O(n²), scales with the square of the input size and is most commonly caused by nesting linear operations. You might not notice the difference in runtime between a linear and a quadratic operation for small inputs, but as the inputs grow, the difference becomes jarring. Nesting for loops can cause quadratic time relationships, but so can nesting any linear time operation inside a loop; using .shift() or .includes() inside a loop can also cause quadratic time. If you nest a loop within a loop within a loop, you can end up with O(n³) time, and so on for each layer of nesting.
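As a sketch, here are two versions of a hypothetical duplicate check, one with an explicit nested loop and one where the second loop hides inside .includes():

```javascript
// nested loops: compares every pair of elements, so O(n²)
function hasDuplicate(arr) {
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) return true;
    }
  }
  return false;
}

// a hidden O(n²): .includes() is itself O(n), inside an O(n) loop
function hasDuplicateToo(arr) {
  const seen = [];
  for (const item of arr) {
    if (seen.includes(item)) return true;
    seen.push(item);
  }
  return false;
}
```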

Quadratic time or worse is… less fine than linear. I don’t think the takeaway should be to avoid nesting iterative code at all costs. Just realize that there is a multiplicative cost to iterative code, and that if you find yourself implementing nested loops or quadratically complex code, you ought to seek out ways to refactor and optimize your loops.

When it comes to optimizing algorithms that involve loops, if you can avoid using linearly complex operations inside the loops, that’s great. Data structures and algorithms guides will teach you concepts like pointers to keep track of key values so you can avoid looping through a structure more than you need to. In addition, learning to make smart assumptions about your data, so that you can either reduce how much data you need to iterate over or know when you can safely break out of iteration, will net you savings on runtime.
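As a sketch of that kind of refactor, the duplicate check from above can swap its linear .includes() lookup for a Set, whose lookups are constant time on average, dropping the whole function from O(n²) to O(n):

```javascript
// same duplicate check, refactored: Set lookups are O(1) on average,
// so the loop's body no longer multiplies into a second O(n) pass
function hasDuplicateFast(arr) {
  const seen = new Set();
  for (const item of arr) {
    if (seen.has(item)) return true;
    seen.add(item);
  }
  return false;
}
```

The trade-off is the extra memory the Set uses, which is exactly the kind of cost-benefit judgment these optimizations usually come down to.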

Big O and time complexity are concepts that many software engineers first encounter as they learn about data structures and algorithms, which they typically start studying while preparing for job interviews. But data structures and algorithms form an incredibly deep subject with implications well beyond doing well in interviews, and as you continue to learn, I encourage you to focus on developing a solid framework for analyzing the performance of your code, not just for interviews, but for yourself. Just try not to make it any more complicated than it needs to be.
