2 Programming

Throughout history, humanity has accumulated enormous amounts of knowledge. That knowledge is of extreme importance for our survival and, therefore, needs to be passed on to the following generations. To this end, a series of mechanisms was invented. The first was oral transmission, from one person to a small group of people. The second was written transmission, which consists of documenting knowledge. This approach has the great advantage of reaching a much larger group of people and of significantly reducing the risk of knowledge loss. In fact, the written word allows us to preserve knowledge for long periods of time, safe from the inevitable corruption that occurs along a long chain of oral transmissions.

It is thanks to the written word that mankind can access vast amounts of knowledge, some of it dating back thousands of years. Unfortunately, the written word has not always been able to accurately transmit what the author had in mind. Natural language is ambiguous and evolves over time, making the interpretation of written texts a subjective task: there are omissions, imprecisions, errors, and ambiguities, both when we write a text and when we read it, which can turn knowledge transmission into a fallible endeavor.

When rigor is needed in the transmission of knowledge, relying on the receiver's ability to understand it can have disastrous outcomes. In fact, throughout history we can find many catastrophic events caused solely by insufficient or incorrect transmission of knowledge. To avoid these problems, more accurate languages were invented. Mathematics, in particular, has obsessively sought, over the past millennia, to construct a language of absolute rigor. This allows for a much more accurate transmission of knowledge than in other areas, but it still requires a human to understand it.

To better understand this problem, let us consider a concrete example: the calculation of the factorial of a number. If we assume that the person to whom we want to transmit that knowledge already knows the meaning of numbers and arithmetic operations, we can say that, to calculate the factorial of a number, one must multiply every number from one up to that number. Unfortunately, that description is inaccurate, because it does not state that only integer numbers are to be multiplied. To avoid such imprecisions and, at the same time, make the information more compact, mathematics developed a set of symbols and concepts that should be understood by everyone. For example, to define the sequence of integers between \(1\) and \(9\), mathematics allows us to write \(1,2,3,\ldots,9\). In the same manner, instead of referring to a specific number, mathematics invented the concept of a variable: a name that refers to some “thing” and that can be used in several parts of a mathematical statement, always representing the same “thing”. In this way, mathematics allows us to express the factorial computation more accurately as follows:

\[n! = 1\times 2\times 3\times \cdots{} \times n\]

But is this definition rigorous enough? Is it possible to interpret it without guessing the author’s intention? For a human being, maybe. However, if we want to be as rigorous as possible, then we must admit that there is one detail in this definition that requires some imagination: the ellipsis. The ellipsis indicates that the reader must imagine what should be in its place. Although most readers will correctly understand that the author meant the multiplication of the subsequent numbers, some might think of replacing the ellipsis with something else.

Even if we exclude this last group of people from our target audience, there are still other problems with the previous definition. Let us consider, for example, the factorial of \(2\). What will its value be? If we use \(n = 2\) in the formula, we get:

\[2! = 1\times 2\times 3\times \cdots{} \times 2\]

In this case, the computation makes no sense, which shows that, in fact, imagination is needed for more than just understanding the ellipsis: the number of terms to consider depends on the number whose factorial we want to calculate.

Assuming that our readers have enough imagination to figure out this particular detail, they would easily calculate that \(2!=1\times 2=2\). But even so, there will be cases where it is not that clear. For example, what is the factorial of zero? The answer does not appear to be obvious. What about the factorial of \(-1\)? Again, it is not clear. And the factorial of \(4.5\)? Once again, the formula says nothing about these situations, and our imagination cannot guess the correct procedure.
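To make these gaps concrete, consider a direct translation of the product formula into executable form. The following sketch is in Python, used here purely for illustration, and the function name is ours; it resolves the ellipsis with a loop, but notice how it behaves for exactly the problematic inputs discussed above:

```python
def factorial_naive(n):
    # Multiply every integer from 1 up to n, as the formula suggests.
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

print(factorial_naive(2))   # 2, as expected
print(factorial_naive(0))   # 1: the loop body never runs -- is that what was meant?
print(factorial_naive(-1))  # 1: a meaningless input silently "succeeds"
# factorial_naive(4.5)      # TypeError: range expects integer arguments
```

The loop fills in what the ellipsis left to the imagination, but the formula's silence about zero, negative, and non-integer inputs remains: some of them produce arbitrary answers instead of being rejected.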

Would it be possible to find a way of transmitting the knowledge required to compute the factorial function that minimizes imprecisions, gaps, and ambiguities? Let us try the following variation of the factorial function definition:

\[n!= \begin{cases} 1, & \text{if $n=0$}\\ n \cdot (n-1)!, & \text{if $n\in \mathbb{N}$.} \end{cases}\]

Have we reached the necessary rigor, one that dispenses with imagination on the reader’s part? One way to find out is to reanalyze the cases that previously caused problems. First of all, there is no ellipsis, which is positive. Secondly, for the factorial of the number \(2\), we have:

\[2!=2\times 1!=2\times (1\times 0!)=2\times (1\times 1)=2\times 1=2\]

which means that there is no ambiguity. Finally, we can see that it makes no sense to try to determine the factorial of \(-1\) or \(4.5\), because this function can only be applied to members of \(\mathbb{N}_0\).
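That precision carries over directly to executable form. The following sketch (again in Python, purely for illustration) mirrors the piecewise definition, including its restriction to \(\mathbb{N}_0\): inputs such as \(-1\) or \(4.5\) are rejected rather than silently accepted:

```python
def factorial(n):
    # The definition applies only to non-negative integers.
    if not isinstance(n, int) or n < 0:
        raise ValueError("factorial is defined only for non-negative integers")
    if n == 0:
        return 1                     # base case: 0! = 1
    return n * factorial(n - 1)      # recursive case: n! = n * (n-1)!

print(factorial(2))  # 2! = 2 * 1! = 2 * (1 * 0!) = 2 * (1 * 1) = 2
# factorial(-1) and factorial(4.5) both raise ValueError,
# just as the mathematical definition leaves these cases undefined.
```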

This example shows that, even in mathematics, there are different levels of rigor in the different ways of expressing knowledge. Some require more imagination than others but, in general, they have been enough for mankind to preserve knowledge throughout history.

This state of affairs has changed in recent decades: mankind now has a partner that has been contributing enormously to its progress: the computer. This machine has the extraordinary capability of being instructed on how to execute a complex set of tasks. Programming is, essentially, the activity of transmitting to a computer the knowledge needed to solve a specific problem. This knowledge is called a program. Because they are programmable, computers have been used for the most diverse ends, and they have radically changed the way we work. Unfortunately, the computer’s extraordinary capability to learn comes with an equally extraordinary lack of imagination. A computer does not assume or imagine; it just rigorously interprets the knowledge transmitted to it in the form of a program.

Since it has no imagination, the computer depends critically on the way we present the knowledge that we wish to transmit to it, which must be described in a language that allows for no ambiguity, gaps, or imprecision. A language with these characteristics is generally called a programming language.