# Comp 110 Time Complexity

## Time Complexity

Time complexity refers to the time efficiency of a function or algorithm you have written in your code. In other words, it is a way of gauging how many steps it takes our computers to execute an algorithm. Two solutions to the same problem can have extremely different time complexities!

Why is this important?

Time complexity is a hot topic in computer science that you will get a chance to explore more in-depth later on in your programming career. We won't worry too much about it for the purposes of this class, but it's a good thing to consider when designing your own functions.

For example, placing recursive calls in different places in your implementations can mean the difference between linear and exponential growth of the time taken to complete the steps in your program. Not only is your own time valuable, but your computer will be much happier with you if you give it efficient code to execute.

## Common Time Complexities

• Constant: Given an input of size n, the algorithm takes a single step (or some fixed number of steps) regardless of n.
• Logarithmic: Given an input of size n, the remaining work is cut by a constant factor (often in half) at each step, so roughly log n steps are needed in total.
• Linear: Given an input of size n, the number of steps grows in direct proportion to n.
• Quadratic: Given an input of size n, the number of steps grows in proportion to n * n.
• Exponential: Given an input of size n, the number of steps is a constant raised to the power of n.
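To make these growth rates concrete, here is a hedged sketch with one tiny function per rate. These helpers are made up for illustration (they are not from lecture), and they use plain arrays rather than the course's List type for simplicity.

```ts
// Constant: one step no matter how long the array is.
let firstItem = (xs: number[]): number => xs[0];

// Linear: touches each of the n items exactly once.
let sum = (xs: number[]): number => {
    let total: number = 0;
    for (let x of xs) {
        total += x;
    }
    return total;
};

// Quadratic: compares every item against every other item, about n * n steps.
let hasDuplicate = (xs: number[]): boolean => {
    for (let i = 0; i < xs.length; i++) {
        for (let j = 0; j < xs.length; j++) {
            if (i !== j && xs[i] === xs[j]) {
                return true;
            }
        }
    }
    return false;
};
```

Doubling the length of the input doubles the work `sum` does, but quadruples the work `hasDuplicate` does in the worst case.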

Let's take a look at some examples!

1. Imagine you have a room full of 100 students and one of them has stolen Kris' prized possession... his CPU hat! You want to find the thief, but you have to make it to your next class on time, so you brainstorm the most efficient way to do this.

Option 1: You ask the first person in the class if they have the CPU hat. Then you ask each student, one at a time, to check with all of the other students and report back to you about the status of the hat. This algorithm for finding the hat executes in quadratic time! In the worst case it will take 10,000 (100 × 100) steps to complete.

Option 2: You walk around the room and ask each student individually if they have the CPU hat. In the worst case, this will take 100 steps to complete. This is an example of an algorithm with linear time complexity.

Option 3: You saved the best idea for last! You divide the class into two groups and ask whether the CPU hat is on the left or the right side of the room. You then take the group that has it, divide it in half again, and repeat until the only student remaining is the thief. This results in logarithmic time complexity: with 100 students it takes only about 7 questions, since each question cuts the group in half, making it the most efficient of the three options.
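Option 3 can be sketched in code as a repeated-halving search. Everything here is hypothetical and just for illustration: `hasHat` stands in for asking a group of students whether the hat is among them.

```ts
// Repeatedly halve the group of suspects until one student remains.
// `hasHat` answers: "is the hat somewhere in this group?"
let findHatIndex = (students: string[], hasHat: (group: string[]) => boolean): number => {
    let low = 0;
    let high = students.length - 1;
    while (low < high) {
        let mid = Math.floor((low + high) / 2);
        // Ask the left half; keep whichever half contains the hat.
        if (hasHat(students.slice(low, mid + 1))) {
            high = mid;
        } else {
            low = mid + 1;
        }
    }
    return low; // Only one student remains: the thief!
};
```

Each loop iteration discards half of the remaining suspects, which is exactly why the number of questions grows like log n rather than n.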

2. max1 vs max2 from Lecture 9.

```ts
let max1 = (values: List<number>): number => {
    if (values === null) {
        return Number.MIN_SAFE_INTEGER;
    } else {
        let front: number = first(values);
        let maxRest: number = max1(rest(values)); // Recursive call!
        if (front > maxRest) {
            return front;
        } else {
            return maxRest;
        }
    }
};
```
```ts
let max2 = (values: List<number>): number => {
    if (values === null) {
        return Number.MIN_SAFE_INTEGER;
    } else {
        if (first(values) > max2(rest(values))) { // Recursive call!!
            return first(values);
        } else {
            return max2(rest(values)); // Recursive call!!
        }
    }
};
```

These two functions do the exact same thing: find the maximum in a list of numbers. However, max1 runs with linear time complexity and is much more efficient than max2. The time taken by max2 grows exponentially as the number of items in the list increases. This example underscores the importance of carefully placing recursive calls inside a function. Every time max1 is called, it calls itself at most one time, because it saves the result of the recursive call in the variable maxRest and reuses it. max2, on the other hand, calls itself at least once and possibly twice each time it executes: the recursive call in the if-condition runs every single time, and the else branch then makes a second, identical recursive call whose earlier result was thrown away.
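To see the difference concretely, here is a sketch that counts the recursive calls each version makes on a three-item list. The `List` type and the `first`/`rest` helpers below are simplified stand-ins for the course's list library, not the real implementations, and the counters are added purely for illustration.

```ts
// Simplified cons-list standing in for the course's List type:
// null is the empty list.
type List<T> = { head: T; tail: List<T> } | null;
let first = <T>(l: List<T>): T => {
    if (l === null) { throw new Error("empty list"); }
    return l.head;
};
let rest = <T>(l: List<T>): List<T> => {
    if (l === null) { throw new Error("empty list"); }
    return l.tail;
};

let calls1 = 0;
let max1 = (values: List<number>): number => {
    calls1++;
    if (values === null) { return Number.MIN_SAFE_INTEGER; }
    let front: number = first(values);
    let maxRest: number = max1(rest(values)); // One recursive call, always.
    return front > maxRest ? front : maxRest;
};

let calls2 = 0;
let max2 = (values: List<number>): number => {
    calls2++;
    if (values === null) { return Number.MIN_SAFE_INTEGER; }
    if (first(values) > max2(rest(values))) { // First recursive call...
        return first(values);
    } else {
        return max2(rest(values)); // ...and possibly a second one.
    }
};

// The list 3 -> 1 -> 2.
let items: List<number> = { head: 3, tail: { head: 1, tail: { head: 2, tail: null } } };
let m1 = max1(items); // calls1 ends up at 4: once per item, plus the empty list.
let m2 = max2(items); // calls2 ends up at 6, and the gap widens fast on longer lists.
```

Both return 3, but max2 already makes 50% more calls on a list of just three items; on long lists the duplicated calls compound into exponential blowup.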

3. Fibonacci algorithms.

The Fibonacci sequence is a series of numbers where each number is found by adding the two numbers preceding it, starting with 0 and 1.

0, 1, 1, 2, 3, 5, 8, 13, ...

Let's look at two recursive implementations of finding the nth number in the Fibonacci sequence.

```ts
let fib1 = (n: number): number => {
    if (n === 0) {
        return 0;
    } else if (n === 1) {
        return 1;
    } else {
        return fib1(n - 1) + fib1(n - 2);
    }
};
```

This algorithm calculates each term in the sequence in exponential time: every call beyond the base cases spawns two more recursive calls, so the total number of calls grows exponentially as n increases.
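One way to see the blowup is to count the calls. In this hypothetical sketch, the counter and variable names are added purely for illustration:

```ts
let calls = 0;
let fib1 = (n: number): number => {
    calls++;
    if (n === 0) {
        return 0;
    } else if (n === 1) {
        return 1;
    } else {
        return fib1(n - 1) + fib1(n - 2); // Two more calls, almost every time.
    }
};

let result = fib1(10); // result is 55...
// ...but computing it took 177 calls, because the same small subproblems
// (fib1(2), fib1(3), ...) are recomputed over and over.
```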

```ts
let fib2 = (n: number, i: number = 1, j: number = 0): number => {
    if (n === 0) {
        return j;
    } else if (n === 1) {
        return i;
    } else {
        return fib2(n - 1, i + j, i);
    }
};
```

This second version of the Fibonacci algorithm executes in linear time! It may look strange that the parameters i and j are initialized to 1 and 0: they act as accumulators that carry the two most recent Fibonacci numbers forward through the recursion. We won't see examples like this in class, but it works exactly the same as fib1 as long as you pass in only a single argument, n, when you call it.
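To see why only n calls are needed, here is fib2 restated from above (so the sketch is self-contained) along with a trace of how the accumulators walk forward one Fibonacci step per call:

```ts
let fib2 = (n: number, i: number = 1, j: number = 0): number => {
    if (n === 0) {
        return j;
    } else if (n === 1) {
        return i;
    } else {
        // Shift the pair (i, j) one step along the sequence.
        return fib2(n - 1, i + j, i);
    }
};

// Trace for fib2(5) — each call does a constant amount of work:
// fib2(5, 1, 0) -> fib2(4, 1, 1) -> fib2(3, 2, 1) -> fib2(2, 3, 2) -> fib2(1, 5, 3) -> 5
```

Because each call shrinks n by exactly one and never branches into two recursive calls, the total work is proportional to n.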

Wrapping your mind around these ideas will take some time, and you shouldn't worry about it too much if this is still a little fuzzy to you. You've got the rest of your programming career to master it!