Big O notation is written as O(expr), where the expression inside is usually in terms of n, the size of the input.
To figure out the complexity of a given algorithm (a short sketch applying these steps follows the list):
1. Count the number of operations each step performs relative to the size of the input.
2. Keep the term with the largest growth rate (for example, f(n) = n^2 grows faster than f(n) = 2n as the magnitude of n increases).
3. In the growth rate, any constant that does not depend on the input size n can be treated as 1, i.e. n + 2 and 2n + 100 are both equivalent to n.
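As a minimal sketch of these steps in Python (count_pairs_and_sum is a hypothetical function invented for illustration, not taken from the original text):

```python
def count_pairs_and_sum(items):
    # The nested loop performs roughly n * n = n^2 comparisons,
    # where n is the size of the input.
    pairs = 0
    for a in items:
        for b in items:
            if a == b:
                pairs += 1
    # The single loop performs n additions.
    total = 0
    for x in items:
        total += x
    # Returning is a single constant-time operation.
    return pairs, total

# Step 1: count operations -> about n^2 + n + 1.
# Step 2: keep the fastest-growing term -> n^2.
# Step 3: drop constants -> O(n^2).
```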
For example, suppose a program does:
1. print out all of the letters of an input string
2. print out the string "apple"
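In Python, that program might look like the following sketch (print_letters is a hypothetical name; the original describes only the behavior):

```python
def print_letters(s):
    # Step 1: print every letter of the input string (n print operations).
    for ch in s:
        print(ch)
    # Step 2: print the fixed string "apple" (always 5 letters).
    print("apple")
```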
The expressions representing the cost of each step would be:
1. (time to print a single letter) * (the length of the input string)
2. (time to print a single letter) * (the length of "apple", which is 5)
Given that the time it takes to print a single letter is a constant t, Step 1 takes t * n time, where n is the length of the input string; this grows linearly as the input gets longer (O(n)). Step 2 takes t * 5, which remains constant no matter how big the input is (O(1)). Hence Step 1 has a greater growth rate than Step 2. Viewing the two steps as functions of n, this process is essentially a comparison of the asymptotic behavior of the two functions.
As a result, the total cost of this algorithm is t * n + t * 5. Keeping only the term with the greater growth rate leaves t * n, and since t is a constant that does not depend on the input, it is omitted as well, leaving n.
Hence, the time complexity is denoted as O(n).
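As a rough, machine-dependent sketch (an addition for illustration, not part of the original example), the linear growth of Step 1 can be observed by timing the loop on inputs of increasing size, skipping the actual printing so I/O does not dominate:

```python
import timeit

for n in (1_000, 2_000, 4_000):
    s = "a" * n
    # Iterate over every letter, as Step 1 does, but without printing.
    elapsed = timeit.timeit(lambda: [ch for ch in s], number=1_000)
    print(f"n={n}: {elapsed:.4f}s")
# Doubling n should roughly double the elapsed time: linear growth, i.e. O(n).
```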