What do Gödel's theorems say? Interesting facts and useful tips

One of the most famous theorems of mathematical logic is lucky and unlucky at the same time. In this it resembles Einstein's special theory of relativity: almost everyone has heard something about both. On the other hand, in the popular retelling, Einstein's theory, as you know, "says that everything in the world is relative", while Gödel's incompleteness theorem (hereinafter simply TGN), in an approximately equally free folk formulation, "proves that there are things incomprehensible to the human mind". And so some try to adapt it as an argument against materialism, while others, on the contrary, use it to prove that there is no god. The funny thing is not only that both sides cannot be right at the same time, but also that neither bothers to figure out what this theorem actually says.

What does it say, then? Below I will try to explain it "on my fingers". My exposition will, of course, be intuitive rather than rigorous, so I ask mathematicians not to judge me strictly. It is possible that non-mathematicians (to whom, in fact, I also belong) will find something new and useful in what follows.

Mathematical logic is indeed a rather complicated science and, most importantly, not a very familiar one. It requires careful and rigorous maneuvers, in which it is important not to confuse what has really been proved with what "is clear anyway". However, I hope that to understand the following "outline of the proof of TGN" the reader will need only school mathematics and computer science, logical thinking skills, and 15-20 minutes of time.

Simplifying somewhat, TGN asserts that in sufficiently complex languages there are true but unprovable statements. But in this phrase almost every word needs an explanation.

Let's start by trying to figure out what a proof is. Take some school problem in arithmetic. Suppose, for example, that we are required to prove the correctness of the following uncomplicated formula: "∀x: (x + 1)² = x² + 2x + 1" (I remind you that the symbol ∀ is read "for any" and is called the "universal quantifier"). It can be proved by identical transformations, say, like this:

∀x: (x + 1)² = x² + 2x + 1
(x + 1)(x + 1) = x² + 2x + 1
x² + x + x + 1 = x² + 2x + 1
x² + 2x + 1 = x² + 2x + 1
TRUE

The transition from one formula to another occurs according to certain well-known rules. The transition from the 4th formula to the 5th occurred, say, because every number is equal to itself: such is an axiom of arithmetic. The whole procedure of proving thus translates the formula into the boolean value TRUE. The result could also have been FALSE, had we refuted some formula; in that case we would have proved its negation. One can imagine a program (and such programs have actually been written) that would prove such statements, and more complex ones, without human intervention.

Let us state the same thing a little more formally. Suppose we have a set of strings of characters of some alphabet, and rules by which a subset of so-called statements is singled out from it: grammatically meaningful phrases, each of which is either true or false. We can say that there is a function A that assigns to each statement one of two values, TRUE or FALSE (that is, maps the set of statements into a boolean set of two elements).

Let us call such a pair, a set of statements S together with a function A from S to {TRUE, FALSE}, a "language of statements". Note that in the everyday sense the concept of a language is somewhat broader: for example, the Russian phrase "Well, come here!" is neither true nor false, that is, from the point of view of mathematical logic it is not a statement.

For what follows, we need the notion of an algorithm. I will not give its formal definition here, as that would lead us rather far aside. I will limit myself to an informal one: an "algorithm" is a sequence of unambiguous instructions (a "program") that converts input data into output data in a finite number of steps. The italicized part is fundamentally important: if the program hangs on some input data, it does not describe an algorithm. For simplicity, and as applied to our case, the reader may consider an algorithm to be a program written in any programming language known to him that, for any input data from a given class, is guaranteed to complete its work with a boolean result.

Let us ask ourselves: does every function A have a "proving algorithm" (or, for short, a "deductive") equivalent to it, that is, one translating each statement into exactly the same boolean value as A does? The same question can be formulated more concisely: is every function over a set of statements computable? As you can already guess, it follows from the validity of TGN that no, not every one: there exist non-computable functions of this type. In other words, not every true statement can be proved.

It may very well be that this assertion provokes an internal protest in you. This is due to several circumstances. Firstly, when we are taught school mathematics, there sometimes arises the false impression that the phrases "the theorem is true" and "the theorem can be proved or verified" are almost identical. But if you think about it, this is not at all obvious. Some theorems are proved quite simply (for example, by enumerating a small number of options), and some with great difficulty. Consider, for example, Fermat's famous Last Theorem:

∀x, y, z, n ∈ ℕ: n > 2 ⇒ xⁿ + yⁿ ≠ zⁿ

the proof of which was found only three and a half centuries after its first formulation (and it is far from elementary). It is necessary to distinguish between the truth of a statement and its provability. It does not follow from anywhere that there are no true but unprovable (and not fully verifiable) statements.
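The difference between truth and verification can be felt even with a naive computer search: a program can check Fermat's equation over any finite range, but no finite search settles the statement for all numbers. A minimal sketch (the range limits are, of course, arbitrary, and this is an illustration, not a proof):

```python
# Brute-force search for counterexamples to Fermat's Last Theorem
# in a tiny range. An illustration only: a finite search can refute
# the theorem, but can never prove it.

def fermat_counterexamples(max_base, max_power):
    """Return all (x, y, z, n) with x^n + y^n == z^n, n > 2, in range."""
    hits = []
    for n in range(3, max_power + 1):
        # precompute n-th powers so lookup is O(1)
        powers = {z ** n: z for z in range(1, 2 * max_base)}
        for x in range(1, max_base + 1):
            for y in range(x, max_base + 1):
                if x ** n + y ** n in powers:
                    hits.append((x, y, powers[x ** n + y ** n], n))
    return hits

print(fermat_counterexamples(20, 5))  # []
```

The empty result for this range tells us nothing about all natural numbers, which is exactly the gap between verification and proof.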

The second intuitive argument against TGN is more subtle. Suppose we have some statement that is unprovable (within this deductive). What prevents us from accepting it as a new axiom? This would complicate our system of proofs slightly, but that is not terrible. This argument would be perfectly correct if there were finitely many unprovable statements. In practice, the following may happen: after postulating a new axiom, you stumble upon a new unprovable statement. Accept it as another axiom, and you stumble upon a third. And so on ad infinitum. In this case the deductive is said to remain incomplete. We can also take forceful measures so that the proving algorithm terminates after finitely many steps with some answer for every statement of the language. But then it will begin to lie: it will arrive at TRUE for incorrect statements, or at FALSE for correct ones. In such cases the deductive is said to be inconsistent. Thus, one more formulation of TGN sounds like this: "There are propositional languages for which a complete consistent deductive is impossible", whence the name of the theorem.

The statement that any theory contains problems that cannot be solved within the theory itself and require generalizing it is also sometimes called "Gödel's theorem". In a sense this is true, although such a formulation obscures the issue rather than clarifying it.

I also note that if we were talking about ordinary functions mapping the set of real numbers into itself, the "non-computability" of a function would surprise no one (just do not confuse "computable functions" with "computable numbers": these are different things). Any schoolchild knows that, say, in the case of the function sin x you have to be very lucky with the argument for the process of calculating the exact decimal representation of the value to end in a finite number of steps. Most likely you will calculate it using an infinite series, and the calculation will never lead to an exact result, although it can come as close to it as you like, simply because the value of the sine at most arguments is irrational. TGN simply tells us that even among functions whose arguments are strings and whose values are zero or one, non-computable functions, although arranged quite differently, also exist.
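The remark about the sine can be seen directly: partial sums of the Taylor series approach sin x step by step but, for a typical argument, never reach the exact (irrational) value. A small sketch:

```python
import math

# Partial sums of the Taylor series sin x = x - x^3/3! + x^5/5! - ...
# Each extra term improves the approximation; no finite number of
# terms gives the exact (generally irrational) value.
def taylor_sin(x, terms):
    total, term = 0.0, x
    for k in range(terms):
        total += term
        # next term: multiply by -x^2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

for n in (1, 2, 3, 10):
    print(n, taylor_sin(1.0, n))  # successive approximations of sin 1
```

With ten terms the partial sum already agrees with `math.sin(1.0)` to within floating-point accuracy, yet as an exact real number sin 1 is never reached by any finite sum.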

For what follows, we will describe the "language of formal arithmetic". Consider the class of text strings of finite length consisting of Arabic numerals, variables (letters of the Latin alphabet) taking natural values, spaces, signs of arithmetic operations, equality and inequality, the quantifiers ∃ ("there exists") and ∀ ("for any"), and perhaps some other symbols (their exact number and composition are unimportant for us). It is clear that not all such strings are meaningful (for example, a random jumble of signs like "x + = )" is nonsense). The subset of meaningful expressions of this class (that is, strings that are true or false from the point of view of ordinary arithmetic) will be our set of statements.

Examples of formal arithmetic statements:

2 + 2 = 4
∀y ∃x: x > y

etc. Now let us call a "formula with a free parameter" (FSP) a string that becomes a statement if a natural number is substituted into it as this parameter. Examples of FSPs (with parameter x):

x = 2 + 2
∃y: y × y = x

etc. In other words, FSPs are equivalent to functions of a natural argument with a boolean value.

Denote the set of all FSPs by the letter F. It is clear that it can be ordered (for example, first write out the one-letter formulas, ordered alphabetically, then the two-letter ones, and so on; it does not matter for us over which alphabet the ordering takes place). Thus any FSP corresponds to its number k in the ordered list, and we will denote it F_k.
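The ordering described here (shorter strings first, alphabetical within one length) is easy to make concrete. In the sketch below the two-letter alphabet is, of course, just a stand-in for the real alphabet of formal arithmetic:

```python
from itertools import count, product

# Enumerate all strings over an alphabet in "shortlex" order:
# by length first, alphabetically within each length. Every string
# thereby receives a well-defined index in the list.
def shortlex(alphabet):
    for n in count(1):
        for chars in product(sorted(alphabet), repeat=n):
            yield "".join(chars)

gen = shortlex("ab")
first = [next(gen) for _ in range(6)]
print(first)  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Filtering this stream down to the meaningful formulas would give exactly the numbered list F_1, F_2, … used in the argument.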

Let us now turn to a sketch of the proof of TGN in the following formulation:

  • For the propositional language of formal arithmetic, there is no complete consistent deductive.

We will prove by contradiction.

So, suppose such a deductive exists. Let us describe the following auxiliary algorithm A, which assigns a boolean value to a natural number k as follows:

  • take the k-th formula F_k in the ordered list;
  • substitute the number k into it as the parameter, obtaining a statement;
  • apply the proving algorithm of our deductive to that statement;
  • if it returns FALSE, output TRUE; if it returns TRUE, output FALSE.

Simply put, algorithm A yields TRUE if and only if the result of substituting into an FSP its own number in our list gives a false statement.
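The construction of this auxiliary algorithm is a classic diagonal trick, and its effect can be shown on a toy model. In the sketch below the list `formulas` is a hypothetical stand-in for the enumerated FSPs; the point is only that the diagonal function disagrees with every listed function at its own index, so it cannot occur anywhere in the list:

```python
# A toy model of the diagonal construction: `formulas` stands in for
# the enumerated formulas-with-a-parameter (here just a few
# hand-picked boolean functions of a natural argument).
formulas = [
    lambda n: n % 2 == 0,   # "n is even"
    lambda n: n > 10,       # "n is greater than 10"
    lambda n: n == 1,       # "n equals 1"
]

# The auxiliary algorithm: TRUE iff formula number n, applied
# to its own index n, yields FALSE.
def diagonal(n):
    return not formulas[n](n)

# By construction it disagrees with every listed formula at that
# formula's own index.
print([diagonal(i) != formulas[i](i) for i in range(len(formulas))])
# [True, True, True]
```

In the real proof the list is infinite and the lemma below says the diagonal function would nevertheless have to appear in it, which is the contradiction.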

Here we come to the only place where I will ask the reader to take my word for it.

Obviously, under the above assumption, any FSP from F can be matched with an algorithm that receives a natural number as input and yields a boolean value. The converse is less obvious:

  • Lemma: any algorithm that converts a natural number into a boolean value corresponds to some FSP from F that is equivalent to it, that is, defines the same function.

The proof of this lemma would require at least a formal, rather than intuitive, definition of the notion of an algorithm. However, if you think about it a little, it is quite plausible. Indeed, algorithms are written in algorithmic languages, among which there are such exotic ones as, for example, Brainfuck, consisting of eight one-character words, in which any algorithm can nevertheless be implemented. It would be strange if the richer language of formal arithmetic formulas we have described turned out to be poorer, although it is, no doubt, not very suitable for ordinary programming.

After passing this slippery place, we quickly get to the end.

So, we have described algorithm A above. According to the lemma I asked you to take on faith, there exists an equivalent FSP. It has some number in the list; call it k. Let us ask: what is F_k(k)? Suppose it is TRUE. Then, by the construction of algorithm A (and hence of the function equivalent to it), this means that the result of substituting the number k into the formula F_k is FALSE. The converse is checked in the same way: FALSE implies TRUE. We have arrived at a contradiction, which means the original assumption is wrong. Thus, for formal arithmetic there is no complete consistent deductive. Q.E.D.

Here it is appropriate to recall Epimenides, who, as you know, declared that all Cretans are liars, being a Cretan himself. In a more concise form his statement (known as the "liar paradox") can be formulated as "I am lying". It is precisely such a statement, proclaiming its own falsity, that we used for the proof.

In conclusion, I want to note that TGN does not claim anything particularly surprising. After all, everyone has long been accustomed to the fact that not all numbers can be represented as a ratio of two integers (remember, this statement has a very elegant proof more than two thousand years old?). Nor is every number a root of a polynomial with rational coefficients. And now it turns out that not all functions of a natural argument are computable.

The sketch of the proof given here was for formal arithmetic, but it is easy to see that TGN applies to many other propositional languages as well. Of course, not all languages are like that. For example, let us define a language as follows:

  • "Any phrase in the Chinese language is a true statement if it is contained in Comrade Mao Tse Tung's quotation book, and false if it is not."

Then the corresponding complete and consistent proving algorithm (it can be called the "dogmatic deductive") looks something like this:

  • "Leaf through Comrade Mao Tse Tung's quotation book until you find the statement in question. If it is found, then it is true; if the quotation book ends and the statement is not found, then it is false."

Here we are saved by the fact that any quotation book is obviously finite, so the process of "proving" will inevitably end. Thus, TGN is inapplicable to the language of dogmatic statements. But we were talking about complex languages, right?
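The "dogmatic deductive" really is an algorithm in the sense defined earlier: a lookup in a finite list always terminates. A sketch (the quotation book contents are merely illustrative):

```python
# The "dogmatic deductive": truth is decided by lookup in a finite
# list, so the procedure is guaranteed to terminate on any input.
QUOTE_BOOK = [
    "A revolution is not a dinner party.",
    "Serve the people.",
]

def dogmatic_prove(statement):
    for quote in QUOTE_BOOK:       # leaf through the book
        if quote == statement:
            return True            # found: the statement is "true"
    return False                   # book exhausted: it is "false"

print(dogmatic_prove("Serve the people."))  # True
print(dogmatic_prove("E = mc^2"))           # False
```

The termination guarantee is exactly what a rich language like formal arithmetic refuses to give us.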

The idea of the proof is to construct an expression that testifies to its own unprovability. This construction can be done in three steps:

The first stage is the establishment of a correspondence between formal arithmetic and the set of natural numbers (Gödelization);

The second stage is the construction of a special formula, about which it is not known whether it is a theorem of formal arithmetic or not;

The third stage is the substitution into that formula, in place of its free variable x, of a certain integer associated with the formula itself.

First stage. Gödelization of formal arithmetic

Formal arithmetic can be arithmetized (that is, Gödelized) in the following way: each of its theorems is assigned a certain number. However, since numbers are themselves the subject matter of arithmetic, any such statement can be considered, on the one hand, as a theorem of formal arithmetic and, on the other hand, as a statement about the set of theorems of formal arithmetic, that is, as a metatheorem corresponding to the proof of a certain theorem.

Thus, we can conclude that the system of formal arithmetic also contains its own metasystem.

We now present our results more concretely and in detail.

First, we can associate with each symbol of formal arithmetic a special code designation, called in this case its Gödel number.

Secondly, we assign a Gödel number to each sequence of symbols as well, using some composition function: g(S) = c(g(s₁), …, g(sₙ)), where s₁, …, sₙ are the symbols that form S.

Thirdly (and this is essential), each proof, that is, each sequence of formulas obtained by applying the axioms and substitution rules, is also assigned a number T = c(g₁, …, g_m), where g₁, …, g_m denote the Gödel numbers of the theorems used in the proof.

Thus, to any proof in formal arithmetic there corresponds a certain number: its Gödel number. Any reasoning of formal arithmetic is transformed into calculations on the set of natural numbers.

So, instead of manipulating symbols, theorems and proofs, one can use calculations on the set of integers. Any expression, such as, for example, "the formula A is provable in formal arithmetic", now corresponds to a certain number, which we will denote g(A).

Let us formulate the following proposition.

Formal metaarithmetic is contained in the set of natural numbers, which is itself contained in the interpretation of formal arithmetic.

This situation with formal arithmetic resembles the situation with natural language: after all, nothing prevents us from formulating its basic concepts and rules in the language itself.

A proper choice of the composition function allows one to make the transition from a proof to its number unambiguous, that is, to assign two different Gödel numbers to two different proofs. For example, one can choose the Gödel numbers in such a way that each symbol of the alphabet of formal arithmetic has its own prime number, as shown, for example, in Table 3.2.

Table 3.2

Each formula, consisting of symbols s₁, …, sₙ taken in order, is encoded by a number built on the sequence of the first n prime numbers, namely the number

g = 2^c₁ · 3^c₂ · … · pₙ^cₙ,

where pₙ is the n-th prime number and cᵢ is the code of the symbol sᵢ.

In turn, a proof, that is, a sequence of formulas with Gödel numbers g₁, …, g_m, is encoded in a similar way by the number

T = 2^g₁ · 3^g₂ · … · p_m^g_m.

And vice versa: thanks to this method of constructing numbers it becomes possible, starting from a given number, to decompose it into prime factors (the decomposition of a natural number into a product of powers of primes is unique) and thus to return, in two steps, to the exponents, that is, to the primitive symbols of formal arithmetic. Of course, this is mostly of theoretical importance, as the numbers quickly become too big to manipulate. Nevertheless, the fundamental possibility of this operation is what matters.
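The two-step decoding described above (factor the number, read off the exponents) can be sketched directly. The symbol codes below are invented for the demonstration; any assignment of positive codes to symbols would do:

```python
# A minimal sketch of prime-power Goedel numbering: symbol codes
# c1..cn are packed into 2^c1 * 3^c2 * ... * pn^cn and recovered
# by factorization. Codes must be positive integers.
def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for a demo)."""
    found = []
    candidate = 2
    while True:
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

def encode(codes):
    number, gen = 1, primes()
    for c in codes:
        number *= next(gen) ** c
    return number

def decode(number):
    codes, gen = [], primes()
    while number > 1:
        p, exp = next(gen), 0
        while number % p == 0:
            number //= p
            exp += 1
        codes.append(exp)
    return codes

codes = [3, 1, 4, 1, 5]          # hypothetical symbol codes
n = encode(codes)
print(n, decode(n) == codes)
```

The numbers do indeed explode quickly (a modest formula already yields a many-digit number), which is exactly the practical limitation noted above.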

Example. Let a number T be given that corresponds to some proof and is the product of two prime powers. Such a decomposition means that the proof of the theorem contains two stages, one corresponding to the first exponent and the other to the second. Decomposing each of these exponents into prime factors in its turn, we find from the coding table of the formal arithmetic alphabet (Table 3.2) the Gödel numbers of the individual symbols, and the two numbers turn out to correspond to a proof of the form: the second formula follows from the first. Thus the metaarithmetical meaning of the original number is recovered from formal arithmetic.

Second stage. Gödel's Lemma

Every number T associated with a proof corresponds to a theorem provable in formal arithmetic. "Gödelized" formal arithmetic is called arithmetized formal arithmetic. Since each axiom and each rule of arithmetized formal arithmetic corresponds to some arithmetic operation, a systematic check makes it possible to determine whether a given number T corresponds to the proof of the theorem with a given number N; in that case T and N form a pair of conjugate numbers. The statement "T and N are conjugate" is representable within arithmetized formal arithmetic itself. This means that there is a Gödel number that encodes this statement.

We have come to the critical point of Gödel's proof. Let A be an expression of arithmetized formal arithmetic that contains a free variable. In place of it one can substitute some term; in particular, one can substitute into A the expression A itself. In this case the number of the expression A plays two different roles at once (compare the constructions of Cantor and Richard mentioned above): it is both the expression into which the substitution is made and the term being substituted. This special substitution will be denoted sub(A): the result of sub(A) is the Gödel number of the expression obtained by substituting (the number of) A into the expression A itself.

Gödel then constructs an expression (of which it is not known whether it is a theorem or not) into which he introduces this substitution.

Third stage. Final substitution

In arithmetized formal arithmetic this expression is represented in numerical form. Let E be its Gödel number. Since the expression contains a free variable, we have the right to perform the substitution described above, putting the number E itself in place of the variable.

We denote this second expression by α and its Gödel number by E′. Let us give interpretations of the expression α.

First interpretation. There is no pair (T, N) for which the following would simultaneously hold: on the one hand, T is the number of the arithmetized proof of the theorem with number N, and on the other hand, N is the result of the substitution over E. But since the substitution is a transformation like any other, it is representable in terms of Gödel numbers, and therefore such a number N exists. Then it can only be the number T that does not exist.

Second interpretation. There is no arithmetized proof T of the theorem whose number is the substitution over E. So if there is no proof, it is because the expression in question is not a theorem. This leads to a third interpretation.

Third interpretation. The expression whose Gödel number is the substitution over E is not a theorem of arithmetized formal arithmetic. But this is exactly where the contradiction lies, since by construction α is itself the substitution over E, and its number is by construction nothing other than that very number. In other words, α asserts its own unprovability.


Any system of mathematical axioms, starting from a certain level of complexity, is either internally inconsistent or incomplete.

In 1900, the World Congress of Mathematicians was held in Paris, where David Hilbert (1862-1943) outlined in the form of theses the 23 most important problems, in his opinion, that the theorists of the coming twentieth century had to solve. Number two on his list was one of those simple problems that seem obvious until you dig a little deeper. In modern terms, it was the question: is mathematics sufficient unto itself? Hilbert's second problem was the demand to prove rigorously that the system of axioms - the basic statements taken in mathematics as a basis without proof - is perfect and complete, that is, allows a mathematical description of everything that exists. It was necessary to prove that one can set up a system of axioms such that, firstly, they are mutually consistent and, secondly, a conclusion can be drawn from them regarding the truth or falsity of any statement.

Let's take an example from school geometry. In standard Euclidean planimetry (geometry on a plane), it can be unconditionally proved that the statement "the sum of the angles of a triangle is 180°" is true, and the statement "the sum of the angles of a triangle is 137°" is false. Essentially speaking, in Euclidean geometry any statement is either false or true, with no third option. And at the beginning of the twentieth century, mathematicians naively believed that the same situation must hold in any logically consistent system.

And then in 1931 a certain bespectacled Viennese mathematician, Kurt Gödel, published a short article that simply overturned the whole world of so-called "mathematical logic". After long and complex mathematical and theoretical preambles, he established literally the following. Let's take any statement like "Assumption #247 is logically unprovable in this system of axioms" and call it "statement A". So, Gödel simply proved the following amazing property of any system of axioms:

"If a statement A can be proved, then a non-A statement can be proved."

In other words, if it is possible to prove the validity of the statement "Assumption 247 is not provable", then it is also possible to prove the validity of the statement "Assumption 247 is provable". That is, returning to the formulation of the second Hilbert problem, if the system of axioms is complete (that is, any statement in it can be proved), then it is inconsistent.

The only way out of this situation is to accept an incomplete system of axioms. That is, we have to put up with the fact that in the context of any logical system we will still have “type A” statements that are obviously true or false - and we can judge their truth only outside the framework of the axiomatics we have adopted. If there are no such statements, then our axiomatics is contradictory, and within its framework there will inevitably be formulations that can be both proved and refuted.

So the formulation of Gödel's first, or weak, incompleteness theorem is: "Any formal system of axioms contains undecidable assumptions." But Gödel did not stop there, formulating and proving the second, or strong, incompleteness theorem: "The logical completeness (or incompleteness) of any system of axioms cannot be proved within the framework of that system. To prove or disprove it, additional axioms (a strengthening of the system) are required."

It would be comforting to think that Gödel's theorems are abstract and concern not us but only areas of sublime mathematical logic, but in fact it turned out that they are directly related to the structure of the human brain. The English mathematician and physicist Roger Penrose (born 1931) showed that Gödel's theorems can be used to prove the existence of fundamental differences between the human brain and a computer. The point of his reasoning is simple. The computer operates strictly logically and is not able to determine whether statement A is true or false if it goes beyond the scope of the axiomatics, and such statements, according to Gödel's theorem, inevitably exist. A person, faced with such a logically unprovable and irrefutable statement A, is always able to determine its truth or falsity, drawing on everyday experience. At least in this, the human brain is superior to a computer shackled by pure logical circuits. The human brain is able to grasp the full depth of the truth contained in Gödel's theorems, but a computer never can. Therefore the human brain is anything but a computer: it is capable of making decisions, and it would pass the Turing test.


I wonder if Hilbert had any idea how far his questions would take us?

Kurt Gödel, 1906-1978

An Austrian, later American, mathematician. Born in Brünn (now Brno, Czech Republic). He graduated from the University of Vienna, where he remained a lecturer in the department of mathematics (from 1930, a professor). In 1931 he published the theorem that later received his name. A deeply apolitical man, he was shattered by the murder of his friend and colleague by a Nazi student and fell into a deep depression, relapses of which haunted him to the end of his life. In the 1930s he emigrated to the USA, but returned to his native Austria and married. In 1940, at the height of the war, he was forced to flee to America in transit through the USSR and Japan. For some time he worked at the Princeton Institute for Advanced Study. Sadly, the scientist's psyche could not withstand the strain, and he died of starvation in a psychiatric clinic, refusing food out of the conviction that someone intended to poison him.

Uspensky V.A. Gödel's incompleteness theorem. Theoretical Computer Science, vol. 130 (1994), pp. 239-273.

Perhaps Gödel's incompleteness theorem is truly unique. Unique in that people refer to it whenever they want to prove "anything at all": from the existence of gods to the absence of reason. I have always been more interested in a "prior" question: who among those who refer to the incompleteness theorem could not only formulate it but also prove it? I publish this article because it presents a very accessible formulation of Gödel's theorem. I recommend first reading Tullio Regge's article on Kurt Gödel and his famous theorem.

The conclusion about the impossibility of a universal criterion of truth is a direct consequence of the result obtained by Tarski by combining Gödel's undecidability theorem with his own theory of truth, according to which there can be no universal criterion of truth even for the relatively narrow area of number theory, and hence for any science that uses arithmetic. Naturally, this result applies a fortiori to the concept of truth in any non-mathematical field of knowledge in which arithmetic is widely used.

Karl Popper

Uspensky, Vladimir Andreevich, was born on November 27, 1930 in Moscow. He graduated from the Faculty of Mechanics and Mathematics of Moscow State University (1952). Doctor of Physical and Mathematical Sciences (1964). Professor, Head of the Department of Mathematical Logic and Theory of Algorithms of the Faculty of Mechanics and Mathematics (1966). He lectures on "Introduction to Mathematical Logic", "Computable Functions", and "Gödel's Completeness Theorem", and has supervised 25 candidates and 2 doctors of sciences.

1. Statement of the problem

The incompleteness theorem, whose exact formulation we will give at the end of this chapter (and perhaps later, if the reader is interested, the proof as well), states approximately the following: under certain conditions, in any language there exist true but unprovable statements.

When we formulate a theorem in this way, almost every word requires some explanation. Therefore, we will begin by explaining the meaning of the words we use in this formulation.

We will not give the most general possible definition of a language, preferring to confine ourselves to those language concepts that we will need later. There are two such concepts: "the alphabet of the language" and "the set of true statements of the language".

1.1.1. Alphabet

By an alphabet we mean a finite set of elementary signs (that is, signs that cannot be broken down into component parts). These signs are called the letters of the alphabet. By a word of an alphabet we mean a finite sequence of letters. For example, ordinary words in English (including proper names) are words of a 54-letter alphabet (26 lowercase letters, 26 uppercase letters, a hyphen and an apostrophe). Another example: natural numbers in decimal notation are words of a 10-letter alphabet whose letters are the signs 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.

We will use ordinary capital letters to denote alphabets. If L is an alphabet, then L* will denote the set of all words of the alphabet L, that is, of all words formed from its letters. We will assume that any language has its own alphabet, so that all expressions of this language (names of various objects, statements about these objects, etc.) are words of this alphabet. For example, any sentence of the English language, as well as any text written in English, can be considered a word of an extended 54-letter alphabet that also includes punctuation marks, the interword space, the paragraph-break character, and possibly some other useful characters.

By assuming that language expressions are words of some alphabet, we exclude from consideration "multilayer" expressions such as an integral ∫ f(x) dx with its limits written above and below the integral sign. However, this limitation is not too significant, since any such expression can, with suitable conventions, be "stretched" into a linear form. Any set M contained in L* is called a word set of the alphabet L. If we simply say that M is a word set, we mean that it is a word set of some alphabet. Now the assumption about languages made above can be rephrased as follows: in any language, any set of expressions is a word set.
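These definitions are easy to model directly. The following toy sketch (my own illustration, not from the text) treats an alphabet as a finite set of letters; since the set L* of all words is infinite, we can only test membership in it, not enumerate it:

```python
# The 10-letter alphabet of decimal notation from the example above.
DECIMAL = set("0123456789")

def is_word(s: str, alphabet: set) -> bool:
    """Return True iff s is a word of the given alphabet, i.e. an element of L*."""
    return all(ch in alphabet for ch in s)

print(is_word("2024", DECIMAL))   # True: a natural number in decimal notation
print(is_word("20x4", DECIMAL))   # False: 'x' is not a letter of this alphabet
print(is_word("", DECIMAL))       # True: the empty word is also a word
```

Note that a "word set" in the sense above is then simply any Python set whose elements all satisfy `is_word`.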

1.1.2. The set of true statements

We assume that we are given a subset T of the set L* (where L is the alphabet of the language under consideration), which is called the set of "true statements" (or simply "truths"). In passing directly to the subset T, we skip the following intermediate steps of reasoning: first, which words of the alphabet L are well-formed expressions of the language, that is, have a definite meaning under our interpretation of this language (for example, 2+3, x+3, x=y, x=3, 2=3, 2=2 are well-formed expressions, while expressions like +=x are not); second, which expressions are formulas, i.e., may depend on a parameter (for example, x=3, x=y, 2=3, 2=2); third, which formulas are closed formulas, i.e., statements that do not depend on parameters (for example, 2=3, 2=2); and finally, which closed formulas are true statements (for example, 2=2).
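The intermediate steps just listed can be made concrete on a tiny language. In this sketch (an illustrative assumption of mine) the language consists of equalities between decimal numerals, and a closed formula belongs to T exactly when both sides denote the same number:

```python
# Alphabet of the toy language: decimal digits plus the equality sign.
ALPHABET = set("0123456789=")

def is_closed_formula(w: str) -> bool:
    """Well-formed closed formulas here are exactly the words 'm=n' with numeric sides."""
    parts = w.split("=")
    return len(parts) == 2 and all(p.isdigit() for p in parts)

def is_true(w: str) -> bool:
    """Membership in T: a closed formula is true iff both sides denote the same number."""
    if not is_closed_formula(w):
        return False
    left, right = w.split("=")
    return int(left) == int(right)

print(is_true("2=2"))    # True:  "2=2" belongs to T
print(is_true("2=3"))    # False: well-formed, but not a truth
print(is_true("+=x"))    # False: not even a well-formed expression
```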

1.1.3. The fundamental pair of a language

The pair (L, T), consisting of an alphabet L and a subset T of L* singled out as the set of true statements, is what we will call the fundamental pair of the language: it is all we need to know about a language in what follows.

1.2. "Unprovable"

"Unprovable" means having no evidence.

1.3. Proof

Although the term "proof" is perhaps one of the most important in mathematics (Bourbaki begin their book "Fundamentals of Mathematics" with the words: "From the time of the ancient Greeks, to say 'mathematics' has meant the same as to say 'proof'"), it has no precise definition. As a whole, the concept of proof, with all its semantic branches, belongs to psychology rather than to mathematics. But be that as it may, a proof is simply an argument that we ourselves find convincing enough to use it to convince everyone else.

When written down, a proof becomes a word of some alphabet P, just as any English text is a word of the alphabet L exemplified above. The set of all proofs forms a subset (and a rather large one) of the set P*. We will not attempt to give a precise definition of this both "naive" and "absolute" concept of proof, or, equivalently, to define the corresponding subset of P*. Instead we will consider a formal analogue of this vague concept, for which we will still use the term "proof" in what follows. This analogue has two very important features that distinguish it from the intuitive concept (although the intuitive idea of proof still reflects them to some extent). First, we will allow different conceptions of proof, that is, different subsets of proofs in P*, and more than that: we will in fact allow the proof alphabet P itself to vary. Second, we will require that for each such conception of proof there be an effective method, in other words an algorithm, that always determines whether a given word of the alphabet P is a proof or not. We also assume that there is an algorithm that always determines which statement a given proof proves. (In many situations, the statement being proved is simply the last statement in the sequence of steps that make up the proof.)

Thus, our final wording of the definition is as follows:

(1) We have the alphabet L (the alphabet of the language) and the alphabet P (the alphabet of the proof).

(2) We are given a set P which is a subset of P* and whose elements are called "proofs". In what follows we will assume that we also have an algorithm that allows us to determine whether an arbitrary word of the alphabet P is an element of the set P, that is, a proof, or not.

(3) We also have a function δ (for finding what exactly has been proved), whose domain Δ satisfies the condition P ⊆ Δ ⊆ P*, and whose range lies in L*. We assume that we have an algorithm that computes this function (the exact meaning of the words "an algorithm computes a function" is this: the values of the function are obtained by means of this algorithm, a set of special transformation rules). We will say that an element p of P is a proof of the word δ(p) of the alphabet L.

A triple that satisfies conditions (1)-(3) is called a deductive system over the alphabet L.
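Conditions (1)-(3) can be made concrete in a few lines of code. The sketch below (my own minimal toy construction, not from the text) builds a deductive system over the equality language used earlier: the proof alphabet coincides with the language alphabet, the proofs are exactly the words "n=n" with the same numeral on both sides, and the function δ maps each proof to the statement it proves (here, itself):

```python
# Condition (1): the proof alphabet (here it coincides with the language alphabet).
PROOF_ALPHABET = set("0123456789=")

def is_proof(p: str) -> bool:
    """Condition (2): an algorithm deciding whether a given word is a proof."""
    parts = p.split("=")
    return len(parts) == 2 and parts[0].isdigit() and parts[0] == parts[1]

def delta(p: str) -> str:
    """Condition (3): an algorithm computing which statement a given proof proves."""
    assert is_proof(p), "delta is only defined on proofs"
    return p  # in this toy system every proof proves itself

print(is_proof("17=17"))   # True
print(is_proof("2=3"))     # False
print(delta("2=2"))        # "2=2"
```

The triple consisting of the alphabet, the set of proofs and the function δ is then a deductive system in the sense of the definition above.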

For the reader familiar with the usual way of defining "proof" in terms of "axiom" and "rule of inference", we will now explain how that method can be regarded as a special case of the definition given in section 1.3.2. Namely, a proof is usually defined as a sequence of language expressions each of which either is an axiom or is obtained from previously occurring expressions by one of the inference rules. If we add a new word * to the alphabet of our language, we can write such a proof as a single word over the resulting alphabet: a sequence of expressions C1, C2, ..., Cn becomes the word C1*C2*...*Cn. In this case, the function that determines what exactly has been proved takes as its value the part of this word that follows the last letter *. The algorithm whose existence is required by section 1.3.2 of the definition can easily be constructed once we have precisely fixed any of the accepted meanings of the words "axiom" and "rule of inference".
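This special case can also be sketched in code. The axiom and the rule below are my own illustrative assumptions: the only axiom is "0=0", and the single inference rule passes from "m=n" to the equality of successors. A proof is a word C1*C2*...*Cn, and δ returns the part after the last '*':

```python
def axiom(s: str) -> bool:
    """The only axiom of this toy calculus."""
    return s == "0=0"

def follows(prev: str, cur: str) -> bool:
    """The single inference rule: from 'm=n' infer '(m+1)=(n+1)'."""
    try:
        pm, pn = map(int, prev.split("="))
        cm, cn = map(int, cur.split("="))
    except ValueError:
        return False
    return cm == pm + 1 and cn == pn + 1

def is_proof(word: str) -> bool:
    """Each step must be an axiom or follow from an earlier step by the rule."""
    steps = word.split("*")
    return all(
        axiom(s) or any(follows(t, s) for t in steps[:i])
        for i, s in enumerate(steps)
    )

def delta(word: str) -> str:
    """What has been proved: the part of the word after the last letter '*'."""
    return word.rsplit("*", 1)[-1]

p = "0=0*1=1*2=2"
print(is_proof(p))           # True
print(delta(p))              # "2=2"
print(is_proof("0=0*2=2"))   # False: a step was skipped, the rule does not apply
```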

1.4. Attempts to accurately formulate the incompleteness theorem

1.4.1. First try

"Under certain conditions, for the fundamental pair of the language of the alphabet L and the deductive system over L, there always exists a word in T that has no proof." This option still looks vague. In particular, we could easily come up with any number of deductive systems that have very few provable words. For example, in an empty deductive system (where P = ?) there are no words at all that would have proofs.

1.4.2. Second try

There is another, more natural approach. Suppose we are given a language, in the sense that we are given the fundamental pair of this language. Now we will look for a deductive system over L (intuitively, for a proof technique) with which we could prove as many words from T as possible, in the limit all of them. Gödel's theorem describes a situation in which such a deductive system (one in which every word of T would be provable) does not exist. Thus, we would like to formulate the following statement:

"Under certain conditions regarding the fundamental pair, there is no such deductive system in which every word from T would have a proof."

However, such a statement is obviously false, since it suffices to take the deductive system in which P = L, P = P* and δ(p) = p for every p in P*; then every word of L* is trivially provable, being its own proof. Therefore we need to accept some restriction on which deductive systems may be used.

1.5. Consistency

It would be quite natural to require that only "true statements", that is, only words from T, can be proved. We will say that a deductive system is consistent with respect to a fundamental pair if δ(P) ⊆ T. In all subsequent reasoning we will be interested only in such consistent deductive systems. Given a language, it would be extremely tempting to find a consistent deductive system in which every true statement has a proof. The variant of Gödel's theorem that interests us states precisely that, under certain conditions on the fundamental pair, it is impossible to find such a deductive system.

1.6. Completeness

A deductive system is said to be complete with respect to a fundamental pair provided that δ(P) ⊇ T. Then our formulation of the incompleteness theorem takes the following form:
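On a finite fragment of a language, both conditions (consistency: everything provable is true; completeness: everything true is provable) can be checked mechanically. The sketch below (the toy system and the restriction to one- and two-character numerals are my own assumptions) uses the equality language again, with a system that proves exactly the words "n=n" whose two sides are literally the same numeral; it turns out consistent but incomplete, since a truth like "02=2" has no proof:

```python
def is_true(w: str) -> bool:
    """Membership in T: both sides denote the same number."""
    left, right = w.split("=")
    return int(left) == int(right)

def is_proof(p: str) -> bool:
    """Toy system: only words 'n=n' with literally identical sides are proofs."""
    left, right = p.split("=")
    return left == right

def delta(p: str) -> str:
    """Each proof proves itself."""
    return p

# A finite fragment: numerals '0'..'9' and '00'..'09'.
numerals = [str(n) for n in range(10)] + ["0" + str(n) for n in range(10)]
statements = [a + "=" + b for a in numerals for b in numerals]

truths   = {s for s in statements if is_true(s)}
provable = {delta(s) for s in statements if is_proof(s)}

print(provable <= truths)   # True:  consistent on this fragment
print(provable >= truths)   # False: "02=2" is true but has no proof here
```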

Under certain conditions on the fundamental pair, there is no deductive system over L that is both consistent and complete with respect to that pair.