Asymptotic Analysis in Practice

In this final article we will look at analysing the complexity of an algorithm without the use of asymptotic analysis, and in doing so showcase the benefits of using Big O notation. Additionally, we will look at the everyday practical uses of Big O when developing software applications.

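The original post showed the code as a screenshot; below is a minimal Java sketch reconstructing it from the description that follows (the method and array names are invented for illustration):

static void processFives(int[] numbers)
{
    for (int i = 0; i < numbers.length; i++)
    {
        if (numbers[i] == 5)
        {
            System.out.println(numbers[i] * 6); // multiply by 6 and print
        }
    }
}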

The above code doesn't do anything meaningful: given an array of integers of size n, it iterates through the array, and for any element equal to 5 it simply multiplies it by 6 and prints the result.

Without asymptotic analysis, how would we go about evaluating this algorithm for efficiency without performing some kind of benchmark? Well, the simple answer would be to count the number of instructions executed. The more instructions that have to be executed, the more time it's going to take for the computer to finish running the algorithm. Therefore, knowing how many instructions the algorithm is made of seems like a good starting point.

The example can be broken down into the following: a for loop that executes n times, an if statement, a multiplication, and a print statement.

A problem arises, though, when trying to base our performance on the number of instructions in the algorithm: our algorithm won't always execute all of them. The multiplication and print instructions only occur if the element is 5, and we cannot be sure how many elements in an array will be equal to 5, if any.

Additionally, the code we write is translated into something that is understood by a computer, and the number of instructions it is broken down into depends on the hardware and compiler. This means our algorithm is again only being tested against a specific hardware/software configuration.

As we have already seen, asymptotic analysis removes these dependencies, allowing us to evaluate the algorithm independent of any specific implementation. In asymptotic analysis the constant values are ignored, which makes sense in our case, as these relate to the instructions themselves, which, as we said, may differ between configurations.

Therefore, we simply need to look at the algorithm and determine its order of growth: the highest power related to its input, which in this case is simply n. We loop over all of the elements once and possibly do something if we have a value of 5.

Now we can use Big O to describe this algorithm's upper bound, or worst-case scenario. The algorithm is at its worst when every element in the array is equal to 5, as we would then execute two extra instructions, the multiplication and the print, for every element, on top of the loop itself. Counting those gives a function like f(n) = 2n + 1, with the exact constants depending on how we count. As we stated, though, we are not concerned with constant values, and we know the order of growth is n, so we can simply say that our algorithm has a growth rate of O(n).

Asymptotic analysis allows us to reason about the efficiency of code based on its input. The best use case for Big O is when developing scalable algorithms: algorithms that have to perform well even when their input is huge. A good example of scalable algorithms are those related to searching for data, which have to work as well for 10 elements as for 100,000. We know that as n gets bigger the algorithm takes more time, so an algorithm with a low order of growth will perform much better on large input than one with a higher growth rate.

However, this best use case may not end up being how you use asymptotic analysis; it all depends on what you end up developing. Most developers won't be writing scalable algorithms, and will instead rely on asymptotic analysis for evaluation and decision making. That is, asymptotic analysis is useful for identifying bottlenecks in a piece of software, and acts as a useful tool when deciding which algorithms or data structures to use to solve a given problem. Without such knowledge you will be less able to identify performance issues, and more likely to write code that causes them.

I hope this article has provided some context for the theory previously discussed, whilst showing practical uses of asymptotic analysis. Don't worry: as you become a more experienced programmer, using Big O will become second nature.

Describing Algorithms with Big O Asymptotic Notation

In the previous article we looked into techniques for determining an algorithm's efficiency, specifically conducting asymptotic analysis of our algorithms. Today we will look at describing our analysis using a specific type of asymptotic notation, Big O. Additionally, we will look into common classifications of algorithms in terms of describing their runtime complexity with Big O.

Asymptotic analysis helps us define the limiting behaviour of a function; for our purposes, we can use it to determine the efficiency of an algorithm based on the size of its input. Asymptotic notation helps us describe the result of that analysis.

We can describe the upper bound (Big O), the lower bound (Big Omega), or both at once (Big Theta). The focus of this article is Big O notation, as it is the most commonly used notation when describing algorithms in computer science.

I will save you from suffering through the formal definition of Big O, especially since it's not all that helpful for our purposes. If you are interested, though, you can read about it on Wikipedia.

Big O is a notation that describes the upper bound of our algorithm. The upper bound can be seen as the worst-case scenario, measured against some metric, e.g. execution time or memory consumption. The notation is written O(f(n)), read "order f(n)", where f(n) describes the largest term within the function.

The largest term of a function, also known as the highest-order term, is the term with the highest power of the input. For example, given a function f(n) = 2n^2 + 3n + 5, we can see that the largest term is 2n^2. This means we can describe the function as having an order of growth of n^2, written O(n^2).

The order of growth of a function is dependent upon its largest term. If f(n) = 2n^2, and g(n) = 2n, we can say that f(n) has a larger order of growth than g(n). As the value of n increases the output of f(n) is going to be much larger than that of g(n).

There are a few things that need to be mentioned before we look at common classifications.

Firstly, just to reiterate: Big O is a form of notation for describing the upper bound of our algorithm, and asymptotic notation in general is simply a shorthand for describing the behaviour of an algorithm. That is, if we analysed our algorithm in terms of its worst-case performance, we would use Big O notation to denote this behaviour.

Secondly, if an algorithm is described as O(f(n)), we can assume that it will behave at worst like f(n), but we cannot assume that it will always do so. Therefore, be cautious: O(f(n)) might not tell you everything you need to know about an algorithm, e.g. its best or average case.

Finally, it is worth noting that Big O is a guideline and not a guarantee. When constants are large and n is small, a quadratic algorithm can be more efficient than a linear one. For example, given f(n) = 2,000n + 50 and g(n) = 2n^2, the quadratic g(n) beats the linear f(n) for any n below roughly 1,000, despite having the larger order of growth.

This means that asymptotic analysis only holds when the value of n is large; if it's not, you cannot assume that a function with a lower order of growth is more efficient than one with a higher order. Remember, asymptotic analysis is only concerned with the largest term, not its constants or any other terms.
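
To see this concretely, here is a small Java sketch; f and g are the hypothetical cost functions from the example above, not measurements of real code:

static long f(long n) { return 2_000 * n + 50; } // linear, large constant
static long g(long n) { return 2 * n * n; }      // quadratic, small constant

// Evaluating both at a few input sizes:
// n = 10:     f(n) = 20,050      g(n) = 200         -> quadratic wins
// n = 1,000:  f(n) = 2,000,050   g(n) = 2,000,000   -> roughly the crossover
// n = 10,000: f(n) = 20,000,050  g(n) = 200,000,000 -> linear wins from here on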

With all that out of the way, we can look at how to describe certain algorithms in terms of O(f(n)). Below are a few common orders of growth, and how execution time is affected as the value of n increases.

O(1) (constant): An algorithm that always requires the same amount of time to execute, regardless of input; its order of growth is constant. Any single statement of code is constant, e.g. print("Hello World"); or int a = b - c;.

O(log n) (logarithmic): An algorithm in which the time required is dependent upon the logarithm of the input n; its order of growth is proportional to log n, where log is to the base 2. The algorithm takes longer to execute as n increases, but the growth slows dramatically: doubling n adds only one more step. An example of an O(log n) algorithm is a binary search.
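
A minimal Java sketch of a binary search, assuming the array is already sorted; each pass of the loop halves the portion of the array left to examine, which is where the log n comes from:

static int binarySearch(int[] sorted, int target)
{
    int low = 0;
    int high = sorted.length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2; // midpoint, written to avoid overflow
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) low = mid + 1; // discard the lower half
        else high = mid - 1;                     // discard the upper half
    }
    return -1; // not found
}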

O(n) (linear): An algorithm in which the time required to execute is dependent upon the size of the input n; its order of growth is proportional to n. That is, as n increases, the time taken to execute the algorithm grows at the same rate. A typical example is an algorithm that uses a single loop iterating n times, such as a linear search.
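
A sketch of a linear search in the same style; in the worst case (the target is absent, or is the last element) every one of the n elements is examined once:

static int linearSearch(int[] items, int target)
{
    for (int i = 0; i < items.length; i++)
    {
        if (items[i] == target) return i; // found it at index i
    }
    return -1; // not found after checking all n elements
}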

O(n^2) (quadratic): An algorithm in which the time required is dependent upon the size of the input n squared; its order of growth is proportional to n^2. That is, the execution time will increase dramatically as n gets larger. A typical example is any algorithm that makes use of two nested loops, for instance an insertion sort.
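
A sketch of an insertion sort; the nested loops are what make the worst case (a reverse-sorted array) quadratic, since each element may be compared against every element before it:

static void insertionSort(int[] a)
{
    for (int i = 1; i < a.length; i++)
    {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) // shift larger elements one place right
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key; // drop the current element into its sorted position
    }
}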

To conclude, asymptotic analysis is a means of measuring the performance of an algorithm based on its input, and Big O is a form of asymptotic notation that denotes the worst-case scenario of our algorithm. Hopefully this has all made some sense, but if not, don't worry: the next article will put asymptotic analysis and Big O into practice through some examples.

An Introduction to Theoretically Evaluating Algorithm Efficiency with Asymptotic Notation

Don't get scared away by the title; what we are going to talk about isn't all that complicated. In a previous post I introduced the concept of an algorithm, and gave a brief description of how an algorithm is deemed to be efficient. This article will take this further by discussing techniques available to us for testing the efficiency of our algorithms before we go about implementing them in code.

Before we get started, let's first go back over efficiency. An algorithm is efficient if it meets the memory or running-time requirements imposed on it. Basically, our algorithm must use less than a maximum amount of memory, or run no slower than a specified amount of time. The restrictions imposed are dependent upon the problem we are trying to solve.

In order to test for efficiency, an algorithm goes through a theoretical analysis, using asymptotic analysis, before it is implemented.

The reason for this theoretical analysis is simple: without it, our algorithms could only be tested through implementation.

Why is this bad? Well, firstly, we would have to perform the implementation before we had any idea of how the algorithm will run, meaning you could spend a long time developing something only to realise that the algorithm will not run the way you want it to.

Secondly, by testing an algorithm through implementation we make our algorithm dependent upon a specific platform, programming language, and hardware configuration; altering any of these variables could produce different results. Given the sheer amount of variation, we could never test our algorithm for all possible configurations.

Having a way of analysing our algorithm before we start implementing it allows us to save time, but more importantly allows us to judge efficiency independent of any hardware or software.

As described by Wikipedia, asymptotic analysis is the field of mathematics for describing the limiting behaviour of functions. A limit of a function is the value a function approaches as the input of that function approaches some value, usually zero or infinity.

Therefore, we are looking at how the output of our function behaves as the values we pass into it approach some particular value.

If we have the function f(x) = e^x, we could look at the output of that function as x tends towards infinity: the output grows exponentially as the value of x gets larger.

Asymptotic notation is a language for describing the behaviour of a function with respect to its growth. What I mean by this is that given a function f(n) = 2n^2 + 600n + 200, we are only concerned with the most significant term, n^2, because as n tends towards infinity the other terms and constants become irrelevant.

As n increases, the n^2 term comes to dominate, producing a significantly larger output than the other terms.

There are a few different types of notation, and in the next article we will go into a lot more detail about one of them, but for now let's talk about how all this relates back to algorithms.

This idea can be applied to our algorithms, whereby the input to our function is the size of the algorithm's input. Input size is the metric we use because algorithms are designed to work on data; an algorithm is useless without it. A search algorithm requires elements to search through, just as a sorting algorithm needs data to sort.

As the input increases in size, an algorithm might take longer to complete, or require more memory. It takes far fewer CPU cycles, or steps, to search through 100 items than it does to search through 100,000.

This leads us on to the output of our function, which is whatever we want to measure about our algorithm. If we are measuring running time, we would like to see how long our algorithm takes to complete as the input size increases. If we are measuring memory, we would want to see how much memory is used as the input size increases.

Therefore, asymptotic analysis lets us measure the running time or memory required by our algorithms as the input size increases, and asymptotic notation is how we describe our function as a rate of growth, using the most significant term and removing any insignificant terms or constants. We end up with an implementation-independent method for determining the efficiency of an algorithm.

In the next article we will look at a specific form of asymptotic notation, Big O notation, which is commonly used in computer science for measuring an algorithm's performance.

Programming Fundamentals: Algorithms


Welcome to the final article in this series on programming fundamentals. Over the last several articles we have looked at many important concepts that are applicable to any programming language you will be using. These concepts included: variables, data structures, conditions, repetition, and functions. In this last article, we look at algorithms, something that requires the use of all the concepts previously discussed.

At the most basic level we can define an algorithm as a set of steps that, when completed, result in the completion of a task, or the solution to a problem. The first article in this series introduced an algorithm for making a cup of tea. Under this definition we could easily deduce that entire programs are algorithms, as they are made up of a series of steps, albeit many steps, for completing a task. However, when we discuss algorithms in the realm of computer science, they are generally seen as small, concise sets of steps intended to complete a specific task.

Algorithms can be classified according to how they go about solving a problem. Some examples of types of algorithms include divide and conquer, greedy, and brute-force algorithms; the classification gives details of how the algorithm performs. A brute-force algorithm is one that tries all possible solutions until a match is found. For example, if we wanted to find out a person's PIN, we could try every 4-digit combination until we entered the correct one, as sketched below.
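
A minimal Java sketch of that brute-force idea; checkPin is a hypothetical stand-in for whatever verifies a guess:

static int crackPin(java.util.function.IntPredicate checkPin)
{
    for (int guess = 0; guess <= 9999; guess++) // every 4-digit combination
    {
        if (checkPin.test(guess))
        {
            return guess; // found the correct PIN
        }
    }
    return -1; // no 4-digit combination matched
}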

Over the years, a multitude of algorithms have been developed and applied to solve a wide range of problems, from searching and sorting data within data structures, to rendering realistic graphics in games. In most cases, it is up to the developer to use an existing algorithm suited to the problem at hand. In some situations, though, you may have to modify an existing algorithm to suit your needs, or even design your own.

Algorithm design involves developing a series of steps that can be reused to solve a specific problem. There is a lot that goes into designing an algorithm. We must understand the problem we are trying to solve, ensure that our algorithm works for all the values we expect to be input, and that the algorithm is efficient. Efficiency generally refers to how much memory we need to use whilst our algorithm runs, and how long it takes for our algorithm to complete.

Algorithms are essential in computer science. They are designed to solve problems, but also to be reusable, so that they can be applied by developers for whatever they need. A sorting algorithm could be used, for instance, to order a range of numbers from highest to lowest in a leaderboard. We decide how to use them, and with so many algorithms already designed for us, we are not short of options.

So there we have it, a quick overview of algorithms. I purposely left this last article light on details, as algorithms are such a broad topic that they cannot easily be explained in this article alone. But at least you now have some understanding of what they are.

I hope this series has provided a useful introduction, so that if you look elsewhere on your journey to becoming a programmer and run into the word algorithm, variable, data structure, or any of the other things we have discussed, you will know exactly what is going on, and a little bit about the why.

The last point to make is that this is, unfortunately, only the beginning. There are a lot of concepts I haven't discussed, some big ones such as object-oriented programming, recursion, nesting, scope, and many more. But there are plenty of helpful people out there to guide you on your way. Good luck, and have fun!

Programming Fundamentals: Functions


In the previous post we looked at repetition, the process of telling a computer to repeatedly execute a set of instructions based on some condition. In this article, we will delve into functions: what they are, how we use them, and how best to design functions in our programs.

Yet again, before we delve into functions, there are some things we need to know first. Mostly we need to look at statements and compound statements.

In most programming languages, the smallest element we can use to create a program is known as a statement, which up until now we have been calling an instruction. We have already looked at several different statements: if-statements, while-statements, and for-statements. Other statements exist too, such as assignment statements, which assign values to our variables, and expression statements, which produce a value, for instance 5 + 5 * 2.

Often it takes more than a single statement to get something done, and that's where compound statements, also known as blocks, come into play. If-statements, while-statements, and for-statements are all examples of compound statements; that is, they comprise more than a single statement. In most programming languages, we define a block of code using a set of curly braces, so an if statement would look like the following:

if (condition)
{
    // statements in here
}

The above example shows an if statement with curly braces; the instructions within the braces are the ones that are executed if the condition evaluates to true.

There are two main benefits to using code blocks. They allow us to define scope, something I won't be touching on in this article, and, as you have already seen, they allow us to group a set of statements. One thing we can't do with a compound statement is use it multiple times throughout our program, which finally brings us nicely onto functions.

Functions

A function is like a named compound statement that can be referenced by its name throughout our code, and thus used multiple times. However, unlike a compound statement, our functions have additional properties: they can accept and return data in the form of variables or data structures.

A function needs a name so we can identify it. Just as a variable's name lets us access the memory location our data resides in, a function's name is an identifier for the address where our group of statements is stored. As a function can accept and return data, we also must define this when creating our function. The name, return type, and list of accepted data form the signature of a function. Below is an example of a function:

int AddNumbers(int a, int b)
{
    int c = a + b;
    return c;
}

In the example shown we have created a function called AddNumbers, which accepts two variables, also called parameters, named a and b, defined within a set of parentheses; the return type, placed before the name of the function, is defined as int. The idea behind this function is that it accepts two integer numbers, adds them together within the function, and returns the result.

There are no restrictions on the type of data our function returns; it can be a primitive type or user-defined. Additionally, we can pass in any number of variables of any type, and they don't have to be of the same type; we also don't have to pass in any variables at all. In most languages, we are also allowed to return nothing, which is typically done by specifying the return type as void.

void PrintHello()
{
    print("Hello");
}

To use the functions we create, we must call them in the parts of our program where we want them.

Calling the function is done by using its name followed by a set of parentheses. Following on from the example above, we would call the function AddNumbers in the following way: AddNumbers(5, 3). The values we pass in are stored in the variables a and b respectively and are then added together, returning the variable c, which will equal 8. In the example just given, though, we are calling the function but not doing anything with the value returned. To make use of the return value c we need to store that data somewhere, like in a variable. Calling the function and storing the value would look like the following:

int d = AddNumbers(5, 3);

Functions can be called from anywhere in our program; that is, we can call them within loops, if statements, or even within other functions, as sketched below. Functions essentially point to a block of instructions that we want to execute, so when we call a function you can think of it as adding that block of instructions into our program at that point.
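
For example, here is a small sketch of calling a function inside a loop, reusing AddNumbers from above (Java syntax, with System.out.println standing in for the print used earlier):

for (int i = 0; i < 3; i++)
{
    int result = AddNumbers(i, 10);
    System.out.println(result); // prints 10, then 11, then 12
}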

As you can start to see, functions are a powerful concept. They allow us to reuse a set of statements as many times as we want in our program, reducing the number of instructions we need to write. They also allow us to better organise our program, making it easier to maintain; well, that is, if we design them properly.

Designing Functions

When deciding whether to write a function, there are a few things worth considering. The first step is to decide whether the instructions you want to put into a function are going to be used more than once; if not, then you might not need to put them in one.

Secondly, you must decide on the return type. Should your function return anything, and if so, what? Finally, we need to figure out what parameters, if any, we need to pass into the function. The specifics all depend on the problem you are trying to solve, or the program you are trying to write.

If we wanted to perform a mathematical operation such as adding or subtracting numbers, we can assume that we would want to pass in the numbers to operate on, either as variables or as a data structure. We would also probably want to use the result of the function, and therefore should return it.

If we wanted to output something to the screen and write that into a function, we would most likely have a parameter for the thing we want to print, a number or a word, but we would most likely not want to return anything, as we simply want to output to the screen.

I think the most important thing to remember when designing functions is that a function should only do one thing. If we want to write a program that adds two numbers together and then prints them out, we can see that there are two things we want to do: add numbers, and print them. These tasks are separate from one another, and therefore should end up in two separate functions, as sketched below. If we were to write them into a single function, we would never be able to reuse our code as effectively as possible; we wouldn't be able to add two numbers together without printing them, nor could we print a number without first adding it to another.
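
A quick sketch of that separation, in the same style as the earlier examples (PrintNumber is a name invented here for illustration, and System.out.println stands in for print):

int AddNumbers(int a, int b)
{
    return a + b;
}

void PrintNumber(int number)
{
    System.out.println(number);
}

// Each function does one job, so we can use them together or apart:
int sum = AddNumbers(5, 3); // add without printing
PrintNumber(sum);           // print without adding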

On a final note, for functions to help us organise and improve the readability of our programs, it is essential that they are given meaningful names; this extends to our variables as well. We need to know what is being stored in a variable, or what a function we call does, and this is best conveyed by the names we select for them.

Conclusion

In this article, we have learnt about functions. A function is a named grouping of statements that can accept and return data. Using the name of a function, we can call it multiple times in different parts of our program. This results in cleaner, more organised programs, and saves us from having to write duplicate code when we want to perform a similar task in which only the data has changed.

After reading this article, and assuming you have been reading the rest of the series, you should have a good understanding of the major concepts that most languages are built around. In the final article in this series, we will look at combining all the concepts we have learnt about so far, by introducing algorithms.

Programming Fundamentals: Repetition

In the last article we looked at conditional statements, and how they allow for branching: the ability for a computer to decide which instructions to execute based on a set of conditions. Evaluating conditions to dictate which instructions to execute or ignore is only one application of conditions. Another is repetition, the ability for a program to repeat instructions based on some condition.

A condition used in repetition determines how many times a computer should execute some instructions. In some cases, we know exactly how many times we want an instruction to repeat, and if it is only a few times, it is not difficult to type out the same instruction several times. But when we want to repeat an instruction 10,000 times, typing it out would take a while, and this is an ideal situation for repetition: we tell our program to execute the instructions under the condition that it executes them 10,000 times, which obviously saves us a whole lot of time!

In other cases, we may not know how many times we want instructions to repeat, only that we need to execute them more than once. This may be because we want to repeat them based on some user input, or based on the results of some other instructions, like a mathematical expression.

A good example program is a video game. Games are very complicated programs, but at a basic level they work using repetition. We start up our game and run through the same instructions: asking for user input, updating the things that happen in the game, such as characters and enemies moving or shooting, and finally displaying the game to the screen. We can play games for minutes or hours, and this straightforward process is repeated throughout that time, until something occurs to cause game over. The exact condition under which game over occurs depends on the type of game; it could be losing all your lives, having no health left, or running out of time. Either way, the game works by repeating a set of instructions until a game-over condition evaluates to true, resulting in the game ending.

In terms of implementing repetition in our programs, as with conditional statements, all programming languages support repetition, using loops. There are several different types of loops, but the two most common are while and for loops. A while loop looks like the following:

While (condition)

    Execute instructions

This looks a lot like an if statement, except we replace the word if with while. Our program executes the instructions based on the condition. The important thing to remember about a while loop is that the instructions won't be executed a single time if the condition is not met. While loops are best used when we don't know how many times we need to iterate (pass) through a loop, for example:

While (gameOver does not equal true)

    Execute instructions

In the above example the instructions in the loop will be executed until the variable gameOver is changed to true, which could occur at any time. A for loop, on the other hand, looks like the following:

For (initialisation; condition; increment/decrement)

    Execute instructions

A for loop seems slightly more complicated than a while loop, but it's not too difficult to understand. For loops are split into three parts: initialisation, condition, and increment/decrement. The first part is where we initialise any variables we would like to use in the loop, usually a variable that stores an integer value. Second is our condition, which decides how many times we iterate through the loop, repeating the instructions; typically this will be some comparison such as x < 10, or x equals 10. The final part is how we alter the variable we initialised, whether we add or subtract some value, dependent on our needs. Below is an example of a for loop.

For (integer i = 0; i < 10; i = i + 2)

    Execute instructions

In this example, we will execute our instructions 5 times. We start by initialising our i variable with the value 0, then we check if the value is less than 10; if it is, we execute the instructions and then add 2 to the variable. This is repeated until i is no longer less than 10: i takes the values 0, 2, 4, 6, and 8, so the loop runs for 5 iterations. It is best to use a for loop when we know exactly how many times we want our instructions to be repeated.

So far, we have looked at using loops as a way of allowing a computer to repeatedly execute instructions, but what types of instructions do we want to execute in a loop? Some examples include: adding or multiplying numbers, initialising variables, or even using loops to traverse through data structures.

The exact way we traverse a data structure depends on the programming language and the type of data structure, but the general idea is that we can use a loop to visit all the data values stored within it. If we had a data structure that contained 100 different integer values, we could use a loop to traverse it, getting access to each value in turn, which we could then do something with, like change it, or output it to the screen, as sketched below.
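
A small sketch of that idea in Java syntax, using an array as the data structure:

int[] values = { 4, 8, 15, 16, 23, 42 };
for (int i = 0; i < values.length; i++)
{
    System.out.println(values[i]); // do something with each value in turn
}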

To conclude, repetition further enhances the capabilities of the programs we write by allowing us to repeatedly execute instructions. Repeating instructions saves us from having to repeatedly type them in cases where we know the number of times we want an instruction to be executed. Additionally, loops allow the creation of interactive applications such as games, by allowing instructions to be executed a number of times unknown to the programmer. While and for loops are common implementations of repetition within programming languages, each tailored to a specific situation: while loops are best suited to situations where we do not know the number of times the instructions will be executed, and for loops are reserved for times when we do know this information.

After reading this article you will be well on your way to understanding the fundamental principles required to write programs; however, there are still a few things left to learn. The next article will look at functions: what they are, and how they can be used to help us write better programs.

Programming Fundamentals: Conditions

The first article in this series of posts talked about how programs consist of instructions written in programming languages. When we write instructions, we are telling the computer what to do, and the computer executes these instructions in the order that we write them. Our programs will be rather basic, though, if we don't allow for branching: the ability to execute certain instructions based on some condition.

A conditional statement is an instruction that allows a computer to decide what to do based on a condition. A condition is anything that can be resolved to either true or false. Many programming languages include a data type known as a Boolean, used to store a true or false value. Usually, any data type can be resolved to either true or false, with a value of 0 representing false, and anything else equating to true.

It is not just values that are evaluated; we can also use mathematical expressions as conditions, such as x > 5 or a + b = 6. Nor are we restricted to evaluating a single condition: with the use of Boolean logic we can create complex conditions using AND and OR, and use NOT to check that something is not true. AND and OR allow us to combine multiple conditions. Using the examples above, we could write a conditional statement requiring that x > 5 AND a + b = 6, which means that x must be greater than 5 and the variables a and b must sum to 6 for the whole condition to be true. OR, on the other hand, requires only one of the conditions, x > 5 OR a + b = 6, to be true for the whole condition to be considered true, as sketched below.
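
Here is a small sketch of those combinations in Java-style syntax, where && is AND, || is OR, and ! is NOT (x, a, and b are assumed to be integer variables declared elsewhere):

if (x > 5 && a + b == 6)
{
    // runs only when BOTH conditions are true
}

if (x > 5 || a + b == 6)
{
    // runs when EITHER condition is true (or both)
}

if (!(x > 5))
{
    // runs when x > 5 is NOT true, i.e. when x <= 5
}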

A programming language can be looked at in the same way as any spoken language: we must learn the words and rules that govern the language, its syntax, in order to use it. Each programming language has a set of reserved words and defines a structure we must follow for the computer to be able to understand the instructions we write. The conditional statement is such an important concept that it is implemented in all languages; otherwise there would be no branching. Most languages implement a conditional statement using the word if. A conditional statement will look something like the following:

If (condition) then

    Instructions to do something

In the above case, we would test a condition and then proceed to execute some code. Additionally, another reserved word, else, is associated with conditional statements.

If (condition) then

    Instructions to do something

Else

    Instructions to do something else

The inclusion of the word else means that we have the option to branch out and execute instructions dependent on the evaluation of the condition. In the first example, instructions would be executed if and only if the condition evaluated to true. In this second example, the computer gains the ability to execute instructions when the condition does not evaluate to true. A third example, shown below, makes use of the words else if.

If (condition) then

    Instructions to do something

Else if (condition) then

    Instructions to do something else

The key difference between the second and third examples is that the instructions in the third are again dependent upon some condition evaluating to true. In the second example, the instructions after the else are executed every time the if condition evaluates to false. In the third example, though, there is a chance that neither the if nor the else if instructions are executed, if both conditions evaluate to false. Also, there are no restrictions on the number of else if conditions you can use. By this I mean we could have code that looked like the following:

If (condition) then

    Instructions to do something

Else if (condition) then

    Instructions to do something else

Else if (condition) then

    Instructions to do something else

Else if (condition) then

    Instructions to do something else


Although we can use as many else if statements as we want, we cannot use else more than once, as the computer would be unable to determine which else block of instructions to execute.

Conditions, therefore, give us incredible opportunities to write more complex code, by allowing us to execute instructions based on conditions we define, whether that's checking the value of a variable or evaluating a mathematical expression. We can control the flow of execution of our program, and ensure instructions are only executed when we want them to be. Conditions are not strictly limited to use in conditional statements such as if statements; they also play a big role in the ability to repeat the execution of an instruction, or set of instructions. The next article will expand on the use of conditions by describing their use in relation to repetition.