In this course you learn 15 or so new how-to-calculate instruction lists (called *algorithms*) and how to use them. The main thing you need for the exam is to be confident and fluent with those algorithms.

In that respect this course is like the rest of school maths. It is focused on learning how-to-calculate instruction lists. Real maths is about much more, but see below.

You may find it easier to get fluent with the algorithms, and you will certainly learn more of value for future study and for life, if you can also learn something about the purposes of these algorithms, and roughly in what directions the study of algorithms goes forward from here.

There are three, possibly four, things which make this course’s how-to-calculate instruction lists a bit different.

- They are mostly for problems where in everyday life you might think of yourself as just *deciding* an answer rather than *calculating* it (e.g. sorting a list; finding whether an item is or is not in a list).
- The instruction lists (algorithms) are typically built up from very simple and clear-cut steps (comparing two numbers to see which is bigger; adding one number to another; choosing one or another branch in a tree…), even though you may have to do those steps many times over.
- Often the data are whole-number data, or just yes/no data (e.g. is this vertex connected to that one, yes or no?)
- When we deal with algorithms in everyday life in the form of *instruction manuals*, we tend to judge a good algorithm or a bad one by how easy it is to understand. Often in other branches of maths we have a similar criterion: a method which is longer, but easier to understand, is better than a quicker but harder-to-understand method. In computer science, however, by far the most important factor is how *fast* the algorithm is, i.e. how few basic calculations it needs.
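To make that idea concrete, here is one of the simplest possible examples, written out in Python: searching a list by comparing the target against each item in turn. (This is my own illustration, not part of the course; the function name and the step-counting are made up for the sake of the example.)

```python
def linear_search(items, target):
    """Find whether target is in items, using only one elementary
    step: comparing two values for equality.  Also count how many
    of those basic steps were needed."""
    comparisons = 0
    for item in items:
        comparisons += 1          # one basic, clear-cut step
        if item == target:
            return True, comparisons
    return False, comparisons

found, steps = linear_search([7, 3, 9, 4], 9)
# found is True; it took 3 comparisons to reach the 9
```

Each individual step is trivial; the work lies in repeating it, perhaps thousands of times — exactly the kind of job suited to a computer rather than a human.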

You would never use one of these algorithms for calculating by hand in everyday life. Sorting algorithms will not help you tidy your room. These algorithms are designed for use on computers, i.e. for devices which can do lots of calculations very fast but need to have those calculations broken down into simple steps.

The real maths and computer science in this field starts with:

- Mathematically *proving* that the algorithms work (if they are approximate algorithms, like the bin-packing ones, proving bounds on how bad the approximation is)
- Mathematically proving bounds on how many operations the algorithms take to carry out for a given amount of data
- Coding the algorithms into a programming language for computers
- Learning how to design new algorithms.

Instead of “decision mathematics”, this course could be called “computer science”, or “algorithmics”, or “discrete mathematics” (see note).

Discrete mathematics, algorithmics, and computer science are not different names for the same thing, but overlapping fields, with blurred edges. This course gives an introduction to them all, by teaching you about a few easy algorithms of the type that are developed for computers but not for calculation by hand.

If you go on to study more discrete mathematics, algorithmics, and computer science, you will learn more theory *about* algorithms (how to *prove* that an algorithm works, for example, and how to work out how many calculation steps it will need), as well as directly learning more algorithms.

Graph theory dates back to the 18th century. It was vigorously developed in the late 19th century by mathematicians like James Joseph Sylvester, who coined the term “graph” in this context (meaning something quite different from the graph of a function you draw with axes, etc.). He adapted the term from the “graphical formulae” used by the chemists Alexander Crum Brown and Edward Frankland to draw the structures of molecules. Scientists studying the structure of crystals also made advances in graph theory.

The basic theory of how-to-calculate instruction lists built up from clear-cut elementary steps is at the heart of computer science, but its most important ideas were developed in the 1930s and 40s, mostly before the invention of the electronic computer, by mathematicians and philosophers such as Kurt Gödel, Alonzo Church, Emil Post, and Alan Turing.

Turing also worked at Bletchley Park, the British code-breaking centre during World War 2, where the world’s first programmable electronic computer, Colossus, was built (by the engineer Tommy Flowers) to crack German military codes. Colossus was broken up at the end of the war, but a rebuilt version can now be seen at Bletchley Park, not far from London.

Tony Hoare’s Quicksort algorithm, Kruskal’s and Prim’s minimum spanning tree algorithms, Dijkstra’s shortest path algorithm, and the Chinese postman algorithm were all developed between the mid-1950s and early 1960s. Critical path analysis came a bit earlier, and Gantt charts even earlier (before World War 1).

Linear programming was developed by the Russian mathematician Leonid Kantorovich in the 1930s, became widely used in World War 2, and was further developed after the war by George Dantzig and John von Neumann.

Bin-packing algorithms are more recent (1970s and 80s), and so (I think, but I may be wrong) is the maximum-matching algorithm.

Theory about algorithms is in large part a separate field of study from the design of computer hardware. Roughly speaking, an algorithm which works on one computer can generally be rewritten to work on another. The *order of magnitude* of the number of calculation steps the algorithm needs will be the same on different machines, though of course one machine may do basic steps faster than another.
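To see why the order of magnitude is what matters, here is a rough sketch (again my own illustration, with made-up function names) counting worst-case comparison steps for two ways of searching a list of n items — checking every item in turn, versus repeatedly halving a sorted list:

```python
def linear_search_steps(n):
    # worst case: compare the target against every one of the n items
    return n

def binary_search_steps(n):
    # worst case: halve the list until a single item remains,
    # then make one final comparison
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps + 1

for n in [10, 1000, 1_000_000]:
    print(n, linear_search_steps(n), binary_search_steps(n))
    # the second column grows like n, the third only like log2(n)
```

A faster machine shrinks the time each basic step takes, but these step counts stay the same — which is why, for a million items, halving beats checking-every-item on any computer you run it on.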

Thus, Edsger Dijkstra, one of the great figures in computer science from the 1950s to his death in 2002, rarely used computers himself. In his old age he was persuaded by colleagues to get an Apple Mac, but used it only for email and web browsing. He always wrote his articles on computer science by hand, with a fountain pen.

Note:

That’s “discrete” mathematics as opposed to “continuous” mathematics (all maths using calculus), not “discreet” as opposed to “blabbermouth” or “tactless”. Discrete-vs-continuous is about the same as digital-vs-analogue. The twist is that, in principle, continuous representation is more exact than discrete, because discrete representation involves rounding off. Yet we have become used to reckoning digital technologies more precise than analogue technologies (which transmit information by way of continuous functions), because with digital technologies exactness (after the initial rounding-off) can be conserved by error-correcting codes. In turn, the theory of error-correcting codes is part of discrete mathematics.

*One of Dijkstra’s handwritten articles*

*Edsger Dijkstra*

*Emil Post*

*Joseph Kruskal*

*Alonzo Church*

*Tony Hoare announcing the invention of Quicksort in 1960*

*Alan Turing*