# Algorithmic Efficiency

There are often multiple ways to achieve the same result – in other words, more than one algorithm can do the same job.

Some factors are non-negotiable – for example, correctness: an algorithm must give the **correct** output for a given set of data. Beyond this, we are concerned with **how efficient** an algorithm is.

## Measuring efficiency

Let’s say we have an algorithm which adds up all the positive integers up to *n*. We achieve this using a loop, so that if *n* is 100, we loop through 100 times, keeping a running total and adding each value in turn – 1 + 2 + 3 + 4 + 5 + … + 98 + 99 + 100.

```
sub AddIntegers(n)
    total = 0
    for i = 1 to n
        total = total + i
    end for
    return total
end sub
```
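The pseudocode above can be sketched directly in Python (the function name is a translation, not part of the original):

```python
def add_integers(n):
    """Sum the integers from 1 to n using a loop: one addition per value."""
    total = 0
    for i in range(1, n + 1):
        total = total + i
    return total

print(add_integers(100))  # 5050
```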

If I said this algorithm was great and could complete adding up the first 100,000 numbers in 2 seconds, would that mean anything?

**No!** It depends on too many things – how fast is my computer? How fast is yours? How much RAM or storage do I have compared to you? What about when I get a new computer, or we revisit the algorithm in two years’ time when technology has moved on?

## Order of growth

Instead, we discuss algorithms in terms of their **time complexity**, expressed in Big O notation – the order of growth. This is also referred to as time efficiency.

This looks at things differently. What if I have an algorithm and supply 100 values to it? What if I supply 200 – does the time taken to process the data double? Does it remain the same? Does the time taken quadruple?

In this way, when faced with more than one solution to a problem, we can opt for the algorithm which **scales more efficiently**. This measure does not rely on the underlying hardware. Of course, the hardware will affect the absolute time taken, but it is more useful to know **how the operation time changes** as the size of the data set changes.

## Adding the first n natural numbers

The example above uses a loop to iterate over a range of numbers. If *n* changes from 100 to 200, it is clear that we have twice as many values to iterate over and add together. This will take twice as long.
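One way to see this doubling without relying on wall-clock time is to count the loop's operations directly. The sketch below adds an illustrative counter (not part of the original algorithm):

```python
def add_integers_counted(n):
    """Sum 1..n with a loop, also counting how many additions are performed."""
    total = 0
    operations = 0
    for i in range(1, n + 1):
        total = total + i
        operations += 1
    return total, operations

_, ops_100 = add_integers_counted(100)
_, ops_200 = add_integers_counted(200)
print(ops_100, ops_200)  # 100 200 -- doubling n doubles the work
```

The count is independent of any particular machine, which is exactly what order-of-growth analysis gives us.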

Why does this matter? Well, the following is an alternative algorithm that achieves the same result:

```
sub AddIntegers(n)
    total = n * (n+1) / 2
    return total
end sub
```

What’s different? The result is the same, but now only one calculation is performed, regardless of the value of *n*. This is a more efficient algorithm: no matter how large *n* is, it always takes the same length of time to complete.
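The closed-form version can be sketched in Python as follows (integer division is safe here, since one of *n* and *n*+1 is always even):

```python
def add_integers_formula(n):
    """Sum 1..n using the closed-form formula: one calculation for any n."""
    return n * (n + 1) // 2

# Same results as the loop, but constant work regardless of n:
print(add_integers_formula(100))     # 5050
print(add_integers_formula(100000))  # 5000050000
```

Whether *n* is 100 or 100,000, the formula performs the same handful of arithmetic operations, whereas the loop's work grows in step with *n*.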