Remember that we started the course by defining an algorithm as a sequence of instructions, independent of any programming language, that performs some useful task.
Taking that definition a little further, we can add a few more statements,
- An algorithm is a sequence of unambiguous instructions
- An algorithm has a range of legitimate inputs and should produce the correct result for all values within the range.
- Legitimate inputs should be specified for the algorithm
- There are different ways of solving many problems
- Different algorithms can be developed that perform the same function
- The performance of two algorithms that perform the same function may differ in terms of speed or use of resources
When working with large amounts of data, it becomes important to consider the efficiency of the algorithms being used. The computational complexity of an algorithm is a measure of its efficiency in terms of speed and space.
- The time complexity of an algorithm shows how fast it executes
- The space complexity of an algorithm shows how much memory is required for execution
## Big O Notation
Most algorithms run for longer when the inputs are larger. For example, it will take more basic machine operations to sort 100 numbers than it does to sort 10.
The rate of growth of an algorithm depends on the nature of the algorithm itself. Some algorithms become unworkable as soon as the input grows beyond a very small number; others remain efficient.
Big O notation is a way of expressing this rate of growth, allowing us to compare algorithms. We write f = O(g) ("f is big O of g") to state that the running time f grows no faster than the function g, up to a constant factor.
For example, say we have an algorithm that totals our inputs. Assume that a loop is used to examine each input and add it to the total. For each input we add, the algorithm requires one additional iteration of its loop, so it performs one additional operation for each increase in input. This algorithm is therefore of the class O(n). It grows in linear time: if you graph the function, you see a straight line.
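The totalling algorithm described above can be sketched as follows (the course does not specify a language, so this is a Python sketch with a function name of our choosing):

```python
def total(values):
    """Sum the inputs with a single loop: one iteration per item, so O(n)."""
    result = 0
    for v in values:  # executes exactly len(values) times
        result += v
    return result

print(total([2, 4, 6, 8]))  # 20
```

Doubling the number of inputs doubles the number of loop iterations, which is exactly what linear growth means.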
Some algorithms grow in polynomial time. Study the bubble sort algorithm in the programming section. It consists of a loop within a loop, and increasing the input increases the number of times each loop must be executed. The bubble sort algorithm is of the class O(n²) and gets out of hand when the input gets large. Higher powers (O(n³), O(n⁴), etc.) also fall into the polynomial time category.
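As an illustration of the loop-within-a-loop structure, here is a minimal bubble sort sketch (the version in the programming section may differ in its details):

```python
def bubble_sort(items):
    """Bubble sort: a loop within a loop, giving O(n^2) comparisons."""
    items = list(items)              # work on a copy
    n = len(items)
    for i in range(n - 1):           # outer loop: n-1 passes
        for j in range(n - 1 - i):   # inner loop shrinks each pass
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

Each extra input item adds roughly n more comparisons, which is why the total work grows with the square of the input size.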
An algorithm executes in exponential time if its growth rate follows an exponential function. These algorithms take the form O(2ⁿ), O(3ⁿ) and so on, and tend to be unworkable with large inputs.
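The classic example of roughly exponential growth, mentioned in the table below, is the naive recursive Fibonacci function. A Python sketch:

```python
def fib(n):
    """Naive recursive Fibonacci: each call spawns two more calls,
    so the number of calls grows roughly as O(2^n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

fib(30) already makes over a million recursive calls; each increase of n roughly doubles the work, which is what makes exponential algorithms unworkable.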
The table shows how the functions grow as the value of the input n grows. A mix of number formats is used, but you get the idea.
| n | O(1) | O(log₂n) | O(n) | O(n log₂n) | O(n²) | O(n³) | O(2ⁿ) | O(n!) |
|---|---|---|---|---|---|---|---|---|
| 100 | 1 | 6.64 | 100 | 664.39 | 10000 | 1000000 | 1.27 × 10³⁰ | 9.33 × 10¹⁵⁷ |
| 1000 | 1 | 9.97 | 1000 | 9965.78 | 1000000 | 1000000000 | 1.07 × 10³⁰¹ | |
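You can reproduce the values in the table yourself; a short Python sketch (the function name is ours):

```python
import math

def growth_row(n):
    """Evaluate each complexity function at n, matching the table columns."""
    return [1, math.log2(n), n, n * math.log2(n),
            n**2, n**3, 2**n, math.factorial(n)]

row = growth_row(100)
print(round(row[1], 2))  # log2(100) -> 6.64
print(round(row[3], 2))  # 100 * log2(100) -> 664.39
```

Printing the later columns for n = 100 makes the point vividly: 2¹⁰⁰ and 100! are numbers with 31 and 158 digits respectively.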
Linear and logarithmic time are the goal of any algorithm designer; these are efficient algorithms. Polynomial and exponential algorithms grow very quickly, and the right-hand end of the table shows the classes of algorithm that we would normally want to avoid.
| Class | Growth | Example |
|---|---|---|
| O(1) | The algorithm does not grow as the size of the input grows | Determining whether a number is odd or even |
| O(log₂n) | The algorithm grows at a rate less than the growth of the input | Binary search |
| O(n) | The algorithm grows in a linear fashion | Linear search |
| O(n log₂n) | The algorithm grows at a rate slightly quicker than the input | Best case quicksort |
| O(n²) | The algorithm grows at a polynomial rate | Bubble sort (as described on this site) |
| O(n³) | The algorithm grows at a polynomial rate | |
| O(2ⁿ) | Exponential growth | Recursive Fibonacci algorithm (roughly) |
| O(n!) | Factorial growth, faster even than exponential | Brute force solution to the Travelling Salesperson Problem |
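The table's O(log₂n) example, binary search, earns its class by halving the remaining search space on every step. A minimal Python sketch:

```python
def binary_search(sorted_items, target):
    """Binary search on a sorted list: halves the search space each
    iteration, so the number of steps is O(log2 n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid           # found: return the index
        if sorted_items[mid] < target:
            lo = mid + 1         # discard the lower half
        else:
            hi = mid - 1         # discard the upper half
    return -1                    # not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```

Searching a million sorted items takes at most about 20 comparisons, because log₂(1,000,000) ≈ 20; this is the kind of efficiency the left end of the table represents.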