As we design functions, we have to consider how they will work in the real world. There are two common considerations as we compare solutions:

**Time complexity** – The amount of time it will take for our solution to complete

**Space complexity** – The amount of memory that our solution will consume

Unfortunately, because we can’t afford to test every solution in the exact conditions under which it will operate (and, let’s face it, we usually have no idea what those conditions will be), we have to find a way to estimate complexity without actual data. This is where mathematics comes to the rescue.

### Big-O Notation

There are three common bounds we could describe:

- The lower bound, or best case (Big-Omega notation)
- The upper bound, or worst case (Big-O notation)
- The tight bound, where the upper and lower bounds meet (Big-Theta notation)

Big-O notation shows us how a function scales with bigger and smaller data sets. We use the variable “n” to hold the size of the input, and then we look at the function to determine how a change in “n” changes the time or space requirements. Common answers look like this:

- O(1) – The size of the data has no effect on time or space
- O(log(n)) – Doubling the data adds only a constant amount of extra work (looks like a logarithmic curve)
- O(n) – Doubling the size doubles the cost (a consistent, linear increase)
- O(n^2) – Larger data sets increase the cost much faster than smaller ones (looks like a square curve)
- O(n^y) – Higher powers of n REALLY increase the cost

That probably doesn’t make a lot of sense, so let’s work through a few examples.

### Example 1: A for loop

Let’s start with an ordinary for loop:

```c
int i;
int n;   /* n is the size of the data set */
for (i = 0; i < n; i++) {
    /* ... */
}
```

In this case, we can easily determine the big-O notation, because we know that the for loop will execute once for every value of n. That means that the actual time cost of the for loop is x*n, where x is the time it takes to run one iteration of the loop body. The x is a constant factor that doesn’t depend on n, so we ignore it.

Thus, the big-O notation is:

O(n)

### Example 2: The selection sort

Now we’ll look at a sorting algorithm. Nice mathematicians have proved for us that any comparison-based sort needs at least on the order of n*log(n) comparisons, and that simple sorts like this one cost O(n^2). This algorithm takes a set of unsorted data and sorts it in place:

```c
int i, j;
int data[n];   /* the unsorted data set */
int hold;

/* Go through all the data in the set */
for (i = 0; i < n; i++) {
    /* We're going to put the smallest value in the i-th position,
       comparing against all the unsorted data */
    for (j = i; j < n; j++) {
        /* If we find something smaller, swap it into the i-th place */
        if (data[j] < data[i]) {
            hold = data[i];
            data[i] = data[j];
            data[j] = hold;
        }
    }
    /* Now the i-th position holds the smallest remaining value */
}
```

If we think about it, as the algorithm walks through the set, the inner loop has to scan everything that is still unsorted at each stage. Mathematically, the number of comparisons looks like this:

n + (n-1) + (n-2) + (n-3) + … + (n – (n-1))

This is an arithmetic series, and if we add it up and simplify, it looks like this:

(n^2 + n) / 2

The n term and the division by 2 are dwarfed by n^2 as n grows, so we can safely ignore them, leaving us with a Big-O notation of:

O(n^2)

### The Formula

When you need to solve for Big-O notation, you go through these steps:

- Write out your function
- Step through and figure out how many loops depend on the size of the data (n)
- Find the mathematical formula that describes just those loops
- Simplify
- Consider only the largest factor for Big-O notation
- Write a paper to convince your professor that this was really hard
- Win