History of algorithms
The concept of an algorithm dates back to antiquity. It was made explicit in mathematics through the use of variables. The algorithm in the computing sense appeared with the invention of the first machines equipped with automatic control.
Evolution
- Origin of the word
- Algebra at the origin of variables
- Ancient algorithms
- Symbols, rules and paradoxes
- First formalization
- Post and the symbol space
- Turing and the human computer
- Rosser's effective method
- Kleene's algorithmic theory
Origin of the word
The word algorithm comes from the name of the 9th-century Persian mathematician Abu Abdullah Muhammad ibn Musa al-Khwarizmi. The word algorism originally referred only to the rules of arithmetic using Hindu-Arabic numerals, but via the European Latin rendering of al-Khwarizmi's name it had evolved into algorithm by the 18th century. The use of the word has since broadened to cover any well-defined procedure for solving a problem or performing a task.
Algebra at the origin of variables
The work of ancient Greek geometers, of the Persian mathematician al-Khwarizmi (often regarded as "the father of algebra"), and of Chinese and European mathematicians culminated in Leibniz's notion of the calculus ratiocinator, an algebra of logic.
Ancient algorithms
Euclid formalized the algorithm that now bears his name. This algorithm, which computes the greatest common divisor (gcd) of two numbers a and b, is as follows:
- divide a by b, obtaining the remainder r - replace a by b - replace b by r - repeat as long as the remainder is nonzero; when it reaches zero, the last divisor is the gcd.
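The steps above translate directly into a short program; a minimal sketch in Python:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b != 0:
        r = a % b      # divide a by b, obtaining the remainder r
        a, b = b, r    # replace a by b, and b by r
    return a           # when the remainder is zero, a holds the gcd
```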
Archimedes' algorithm gives an approximation of Pi.
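Archimedes approximated Pi by computing the perimeters of regular polygons inscribed in a circle, doubling the number of sides at each step. A minimal sketch of the side-doubling recurrence (the function name and the starting hexagon are illustrative choices, not Archimedes' own presentation):

```python
import math

def archimedes_pi(doublings=10):
    # Start from a regular hexagon inscribed in a unit circle:
    # 6 sides, each of length 1.
    n, s = 6, 1.0
    for _ in range(doublings):
        # Side length of the inscribed polygon with twice as many sides.
        s = math.sqrt(2 - math.sqrt(4 - s * s))
        n *= 2
    # The half-perimeter n*s/2 approaches pi from below.
    return n * s / 2
```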
Eratosthenes defined an algorithm for finding primes.
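The sieve of Eratosthenes finds primes by crossing out, in turn, every multiple of each prime found; a short sketch:

```python
def eratosthenes(limit):
    """Return all primes up to `limit` using the sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Cross out every multiple of p, starting at p*p
            # (smaller multiples were crossed out by smaller primes).
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]
```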
Averroes (1126-1198) used an algorithmic method for his calculations.
In the 12th century, Adelard of Bath introduced the Latin term algorismus, derived from the name al-Khwarizmi.
Symbols, rules and paradoxes
Between the 1800s and mid-1900s:
- George Boole (1847) invented binary algebra (Boolean algebra), the basis of computers. In effect, he combined logic and calculation in a common symbolism.
- Gottlob Frege (1879) created his formula language, a lingua characterica, a language written with special symbols, "for pure thought," free of rhetorical embellishment, manipulated according to well-defined rules.
- Giuseppe Peano (1888) wrote "The Principles of Arithmetic, Presented by a New Method," the first attempt at an axiomatization of mathematics in a symbolic language.
- Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910-1913) in turn simplified and improved Frege's work.
- Kurt Gödel (1931), drawing on the liar paradox, reduced the rules of recursion entirely to numbers (Gödel numbering).
First formalization
The concept was formalized in 1936 with Alan Turing's Turing machines and Alonzo Church's lambda calculus, which in turn laid the foundations of computer science.
Post and the symbol space
Emil Post (1936) described the actions of a "computer" (in the sense of a human being performing a computation) as follows:
- "...two concepts are involved: that of a symbol space in which the work leading from problem to answer is to be carried out, and a fixed, unalterable set of directions."
His symbol space would be
- "a sequence of infinite spaces or rectangles with two states... The solver or performer of the problem moves and works in this character space; they can be and act in one box and only one at a time... the box may allow two possible conditions, be empty or unmarked, or have one mark, say, a vertical bar."
- "The box stands separately and is considered a starting point. (...) the given task is provided in character form by a finite number of boxes - inputs - marked with a line. Similarly, the answer - the conclusion - will be given in symbolic form by a configuration from boxes marked..."
- "The set of directions that apply to a general problem defines a deterministic process when it is applied to each specific problem. This process ends only when the stop direction of the type is reached.
Turing and the human computer
The work of Alan Turing (1936-1937) preceded that of Stibitz (1937); it is not known whether Stibitz was aware of Turing's work. Turing's biographer believed that Turing's use of a typewriter-like model stemmed from his youth: as a boy, Turing dreamed of inventing typewriters; Mrs. Turing owned a typewriter, and he may well have begun by asking himself whether a typewriter could be made to work "mechanically" by itself. Given the prevalence at the time of Morse code, telegraphy, ticker-tape machines and teleprinters, it is plausible that all of these were influences.
Turing, whose model of computation is now called the Turing machine, began, like Post, by analysing a human computer, whom he reduced to a set of elementary operations and "states of mind." He then goes further and constructs a model of a machine capable of computing numbers:
- "Calculation is usually done by writing certain characters on paper. We can imagine the paper divided into squares as a children's arithmetic book... I then believe that the calculation shifts from one-dimensional paper to tape divided into squares. I will also assume that the number of characters that can be printed is infinite..."
- "The behavior of a computer at any given moment is determined by the symbols that are presented to it and its "state of thought" at the moment. It can be assumed that there is a limit B to the number of characters or squares that a computer can see at any given time. If he wants to see more, he has to make consistent observations. We will also assume that the number of thought states under consideration is finite...
- "Imagine that the operations performed by a computer are divided into simple operations that are so elementary that it is not easy to imagine that they could be divided even further."
Turing's reductions lead to the following:
- "The simple operations must therefore include:
1) Changes of the symbol on one of the observed squares.
2) Changes of one of the observed squares to another square within N squares of one of the previously observed squares."
- "It may be that some of these changes necessarily involve a change of state of mind. The most general single operation must therefore be taken to be one of the following:
- 1) A possible change (a) of symbol together with a possible change of state of mind.
- 2) A possible change (b) of observed squares, together with a possible change of state of mind."
- "We may now construct a machine to do the work of this computer."
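The machine Turing describes can be sketched as a table of rules mapping (state, observed symbol) to (written symbol, move, new state). The rule set below, which appends one mark to a unary number, is an illustrative example, not one of Turing's own machines:

```python
def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
    """Simulate a one-tape Turing machine.

    `rules` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right); " " is the blank symbol.
    """
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, " ")
        new_symbol, move, state = rules[(state, symbol)]
        cells[pos] = new_symbol
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip()

# Hypothetical rule table: scan right over the marks of a unary number,
# then write one more mark and halt.
rules = {
    ("start", "|"): ("|", +1, "start"),
    ("start", " "): ("|", +1, "halt"),
}
```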
Rosser's effective method
J. Barkley Rosser (1939) boldly defined an effective mathematical method as follows:
- "" An effective method is used here in the specific sense of a method in which each step is precisely defined and which will surely give an answer in a finite number of steps. In this sense, three exact definitions have been given to date. (1) The simplest of them (due to Post and Turing) says, in essence, that an effective method for solving certain sets of problems exists if you can build a machine that will solve any problem in the set without human intervention, except to ask a question and (later) read the answer. All three definitions are equivalent, so it doesn't matter which one to use. Moreover, the fact that all three are equivalent is a strong argument for the validity of each."
(1) Rosser refers to the work of Church and Kleene and their definition of λ-definability; to Herbrand and Gödel and their use of recursion; and to Post and Turing and their mechanistic model of computation.
Kleene's algorithmic theory
Stephen C. Kleene (1943) defined what is now known as the "Church-Turing thesis." In this context:
- "Algorithmic theories... Developing a complete algorithmic theory, we describe a procedure applicable to each set of values of independent variables, which ends necessarily and in such a way that the result constitutes a certain answer "yes" or "no" to the question: "Is the predicate value correct?"
See also
Algorithms