The Emergence of Experimental Science

The formation of science in the proper sense of the word is associated with the use of the experimental method in research, which became the basis of theoretical natural science. As V.S. Stepin noted, the very idea of experimental research implicitly presupposed the presence in culture of special conceptions of nature, of activity, and of the cognizing subject, conceptions that were not characteristic of ancient culture but began to form in the Renaissance and received full expression in the modern era. In an experimental study the cognizing subject acts as an active principle opposed to natural matter, changing its objects by exerting force upon them. A natural object is known in an experiment because it is placed in artificially created conditions, and only thanks to this does it reveal to the subject its otherwise invisible essential connections.

The socio-cultural prerequisite for the experimental study of nature was a new system of value orientations, already discernible in the culture of the Renaissance. On the one hand, in contrast to the medieval worldview, a new system of humanistic ideas was affirmed, connected with the conception of man as a thinking and active principle opposed to nature. On the other hand, interest in the knowledge of nature was emphasized, nature being regarded as a field for the application of human powers.

Already in the Renaissance a new understanding began to take shape of the relationship between the natural and the artificial, that is, between what exists by nature and what is created by human activity. The traditional Christian teaching about God's creation of the world received a special interpretation: in relation to the divine mind that created the world, nature itself is artificial. Human activity was interpreted as a small-scale likeness of the acts of creation, and its basis was the imitation of nature: the recognition of a rational principle (laws) in nature and the following of its meaningful harmony in the human arts, in science, art, and technical invention. The values of the artificial and the natural were thereby equalized, and a rational alteration of nature in the course of human activity appeared not as something contrary to nature but as consistent with its own structure. It was this new attitude that was enshrined in the category of "nature" and served as a prerequisite for a fundamentally new way of knowing the world: the idea arose that one could put theoretical questions to nature and obtain answers to them by actively transforming natural objects.

New meanings of the category "nature" were associated with the formation of new meanings of the categories "space" and "time", now understood as homogeneous, and this made it possible to assert that an experiment is reproducible at any place in the world and at any time.

The ground for the experimental method began to be prepared by Leonardo da Vinci (1452-1519). But Leonardo lived a hundred years before the era in question, and he had neither the appropriate technical means nor the necessary conditions; the logical structure of the experimental method had not yet been worked out either. Leonardo da Vinci's experiments lacked strictness of definitions and accuracy of measurement.

The beginning of the experimental method of modern times was laid by the invention of two important instruments: the compound microscope (c. 1590) and the telescope (c. 1608). The ancient Greeks were already familiar with the magnifying power of lenses, but the essence of both the microscope and the telescope is the combination of several lenses. Apparently such a combination first came about by chance, not under the influence of any guiding theoretical idea. The first microscope was apparently made by the Dutch glass grinder Zacharias Janssen, and the first spyglass by the Dutch optician Hans Lippershey.

With the advent of the telescope, astronomy rose to a qualitatively new level. The four largest satellites of Jupiter were discovered, along with many new stars invisible to the naked eye; it was reliably established that nebulae and galaxies are huge clusters of stars. In addition, dark spots were found on the Sun.

G. Galilei played a fundamental role in substantiating the experimental method. Galileo and his followers at the Florentine Academy of Experiment, founded after his death, conducted full-scale experiments: a full-scale experiment is carried out on objects in the actual situation under study and, as a rule, involves the experimenter's intervention in the natural course of events. Galileo also introduced the thought experiment into science: a thought experiment involves setting up a conditional situation that exhibits the properties of interest to the researcher, and operating with idealized objects. Galileo actively instilled in the scientists of his time the idea that science is impossible without mental construction, without idealization, without abstractions, without generalizing conclusions based on facts.

Galileo's ideas about the method of experiment were developed most productively by Christiaan Huygens. On the basis of experimental research, Huygens invented the pendulum clock with an escapement mechanism, established the laws of oscillation of the physical pendulum, and laid the foundations of the theory of impact. He improved the telescope by designing a new eyepiece, and with this instrument he discovered the ring of Saturn and Saturn's satellite Titan.

The productivity of the experimental method was demonstrated in the subsequent development of mechanics. The tradition running from Galileo and Huygens to Hooke and Newton was associated with attempts to model, in thought experiments with mechanical devices, the forces of interaction between celestial bodies. Hooke, for example, considered the rotation of the planets by analogy with the rotation of a body attached to a thread, or of a body tied to a rotating wheel; Newton used an analogy between the revolution of the Moon around the Earth and the motion of a ball inside a hollow sphere.

Characteristically, it was along this path that the law of universal gravitation was discovered. Newton arrived at it by comparing Kepler's laws with the mathematical expressions, obtained in a thought experiment on an analogous mechanical model, that characterize the motion of a ball under the action of centrifugal force.
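The logic of that comparison can be reconstructed schematically in modern notation (a reconstruction for illustration, not Newton's own presentation). For a body of mass m moving uniformly in a circle of radius r with period T, the force directed along the radius is

    F = mv²/r, where v = 2πr/T, so that F = 4π²mr/T².

Kepler's third law asserts that T² = kr³, with the same constant k for all planets. Substitution gives

    F = 4π²m/(kr²),

that is, the force holding a planet in its orbit must fall off as the inverse square of its distance from the Sun.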

Experimental Science

The humanists' idea of raising the dignity of man and making him feel a part of Nature brought to life the first attempts to create an experimental science.

Until that moment, science had been a body of theoretical knowledge. The laws of the Universe and of Nature were formulated once and for all, and there was neither an attempt nor a perceived need to test and confirm them.

The Renaissance, on the contrary, elevates man, who feels himself a part of Nature, naturally associates himself with her, and tries to understand, experience, and describe her laws.

Thus, acting with the heart, man comes to a new idea of science. Other interesting factors contributed to this: for example, during the Renaissance the original works of Plato and Pythagoras and the works of the Greek astronomers, geographers, and mathematicians reappeared, since it was very important for the humanists to return to the original texts and thereby move away from the medieval translations, which were dogmatic and tendentious.

The study of these ancient sources astonished their readers, who realized that many centuries earlier there had lived scientists, astronomers, geographers, mathematicians, physicians, and astrologers who interpreted the basic laws of the Universe through numbers and formulas, using the language of mathematics to explain those laws. Examples are the Pythagoreans and Plato, who continued their philosophy.

The revival of science was also associated with magic.

Historians find the prerequisites for the emergence of experimental science in a number of economic, political, and general cultural factors that took shape in Europe in the 14th-15th centuries. These include the decomposition of feudal relations, accompanied by a growing exchange of goods and the transition from a natural to a money economy, which fostered the accumulation of capital and the gradual transition to capitalist relations. The development of trade required expanding spheres of activity and opening up new countries and continents: geographical discoveries widened the medieval European's horizon. It turned out that the world is not limited to the territory of a principality or a single state; it is inhabited by different peoples who speak different languages and have their own traditions and customs. An interest in studying them arose, together with a need for an exchange of ideas (trade relations with the Arab East led Western Europe to the discovery of Arab natural philosophy).

Medieval universities, which later became centers of science, played an important role in the process of secularization (from the Latin saecularis, worldly, secular): the liberation of culture from the authority of the church, the separation of philosophy from theology and of science from scholasticism.

The growth of cities and, consequently, the expansion of crafts, the emergence of manufactories, and the development of trade demanded new tools and instruments that could be created only by a new technology based on experience and science. The demand for new inventions that had passed experimental testing led to the rejection of purely speculative conclusions in science. Experimental science was declared "the mistress of the speculative sciences" (Roger Bacon).

At the same time, the science of the Renaissance could not be free from the influence of Antiquity; but unlike the Middle Ages, which merely transmitted Antiquity's experience of ideal modeling of reality, the Renaissance significantly revised and modified it.

At the origins of experimental (empirical) science stand the figures of N. Copernicus (1473-1543) and Galileo Galilei (1564-1642).

N. Copernicus, relying on astronomical observations and calculations, made the discovery that allows us to speak of the first scientific revolution in natural science: the heliocentric system. The essence of his teaching reduces briefly to the assertion that the Sun, and not the Earth (as Ptolemy believed), is at the center of the universe, and that the Earth rotates on its axis in a day and revolves around the Sun in a year. (In his observations Copernicus relied only on the naked eye, without special instruments, and on mathematical calculations.) This was a blow not only to the Ptolemaic picture of the world but to the religious picture in general. Nevertheless, the Copernican doctrine contained many contradictions and raised many questions that Copernicus himself could not answer. For example, when asked why the rotating Earth does not throw everything off its surface, Copernicus answered in the spirit of Aristotelian logic that bad consequences cannot be caused by natural motion, and that "the rotation of our planet does not cause a constant wind because the atmosphere, containing earth (one of Aristotle's four elements), revolves in harmony with the planet itself." This answer shows that Copernicus's thinking was not free from the Aristotelian tradition and religious faith: he was a son of his time. Copernicus himself believed that his theory did not claim to be a true picture of the structure of the Universe, but was merely a more convenient way of calculating the motions of the planets. As the same source puts it, Copernicus "disputed the complexity of predicting the motion of the planets based on the Ptolemaic heritage, and tried to look at the available data differently."

Herein lies the significance of Copernicus for the philosophy of science: he demonstrated the possibility of interpreting the same facts differently, of putting forward alternative theories, and of choosing from among them the simpler one that allows more accurate conclusions to be drawn.

More than a century passed before another outstanding thinker, Galileo Galilei, was able to answer many of the unresolved questions and contradictions left by Copernicus.

Galileo is considered the founder of the experimental study of nature, and he managed to combine experiment with mathematical description. Having set himself the goal of proving that nature obeys definite mathematical laws, he conducted experiments with various instruments. One of these was the telescope he built on the model of the spyglass, which helped him make a number of discoveries of tremendous importance for science in general and cosmology in particular. With it he discovered that the moving stars (that is, the planets) are unlike the fixed stars: they are spheres that shine by reflected light. He was also able to detect the phases of Venus, which proved its revolution around the Sun (and hence pointed to the revolution of the Earth around the same Sun), confirming the conclusion of Copernicus and refuting Ptolemy. The motions of the planets, the annual movement of sunspots, the ebb and flow of the tides: all this, he argued, proved the actual revolution of the Earth around the Sun.

An example of Galileo's frequent recourse to experiment is the following: trying to prove that bodies of different weights fall at the same rate, he dropped balls of different weights from the Leaning Tower of Pisa and, measuring the time of their fall, refuted Aristotle's assertion that the speed of a falling body increases in proportion to its weight.
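In modern notation, which Galileo himself did not yet have, the point can be stated in two lines. If gravity is the only force acting on a falling body of mass m, then

    ma = mg, so a = g:

the mass cancels, and the time of fall from height h is

    t = √(2h/g) ≈ √(2 × 56 / 9.8) ≈ 3.4 s

for the roughly 56-meter Leaning Tower of Pisa, the same for a light ball as for a heavy one (air resistance aside).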

Let me give one more example of great importance for the establishment of the scientific approach to the study of the world. As is well known, Aristotle believed that at the basis of all things lie four causes: matter (the physical substrate), form (shape, appearance), the efficient cause or motion (what brought the thing about), and purpose (plan, intention). Galileo, investigating the causes of accelerated motion, came to the conclusion that one should ask not why a phenomenon arises but how it proceeds. Thus the Aristotelian principle of causality was subsequently, in the course of the development of science, gradually eliminated from it.

Galileo not only conducted experiments but also subjected them to mental analysis, giving them a logical interpretation. This technique greatly contributed to the ability not only to explain phenomena but also to predict them. It is also known that he made wide use of such methods as abstraction and idealization.

For the first time in the history of science, Galileo proclaimed that in studying nature one may abstract from direct experience, since nature, as he believed, is "written" in the language of mathematics and can be deciphered only by means of mental constructions and theoretical schemes that abstract from sensory data while resting upon them. Experience is material purified by mental assumptions and idealizations, not a mere description of facts. The role and importance of Galileo in the history of science can hardly be overestimated. In the opinion of most scholars he laid the foundation of the science of nature: he introduced the thought experiment into scientific activity and substantiated the possibility of using mathematics to explain natural phenomena, which gave natural science the status of a science. The laws that today are clear and obvious to every schoolchild, such as the law of inertia, were derived by him; he set a certain style of thinking, brought scientific knowledge out of the framework of abstract reasoning into experimental research, liberated thought, and reformed the intellect. With his name are associated the second scientific revolution in natural science and the birth of science proper.

The second scientific revolution culminated in the work of Isaac Newton (1643-1727). J. Bernal called Newton's principal work, the Mathematical Principles of Natural Philosophy, the "bible of science".

Newton is the founder of classical mechanics. Although today, from the standpoint of modern science, Newton's mechanistic picture of the world seems crude and limited, it was this picture that gave impetus to the development of the theoretical and applied sciences for nearly the next 200 years. We owe to Newton such concepts as absolute space and time, mass, force, velocity, and acceleration; he discovered the laws of motion of physical bodies and laid the foundation for the development of physics as a science. Yet none of this could have happened had Galileo, Copernicus, and others not come before him; as he himself said, "I stood on the shoulders of giants."

Newton perfected the language of mathematics by creating the integral and differential calculus; he is also the author of the idea of the corpuscular-wave nature of light. One could list much else that this scientist gave to science and to our understanding of the world.

Let us dwell on the main achievement of Newton's scientific research - the mechanistic picture of the world. It contains the following provisions:

The whole world, the entire Universe, is nothing but a collection of an enormous number of indivisible and unchanging particles moving in space and time and interconnected by gravitational forces transmitted from body to body through the void.

It follows that all events are rigidly predetermined and subject to the laws of classical mechanics, which makes it possible to predict the course of events in advance.

The elementary unit of the world is the atom: all bodies consist of absolutely solid, indivisible, unchanging corpuscles, atoms. In describing mechanical processes Newton used the concepts of "body" and "corpuscle".

The motion of atoms and bodies was presented as the simple displacement of bodies in space and time, while the properties of space and time, in turn, were taken to be unchanging and independent of the bodies themselves.

Nature was presented as a large mechanism (machine), in which each part had its own purpose and strictly obeyed certain laws.

The essence of this picture of the world is the synthesis of natural science knowledge and the laws of mechanics, which reduced (reduced) the whole variety of phenomena and processes to mechanical ones.

One can note both the pros and the cons of such a picture of the world. Among the pros is the fact that it made it possible to explain many phenomena and processes occurring in nature from nature itself, without resorting to myths or religion.

As for the cons, there are many. For example, matter in Newton's mechanistic interpretation appeared as an inert substance doomed to the eternal repetition of things; time as an empty duration; space as a mere "receptacle" of matter, existing independently of both time and matter. The cognizing subject was eliminated from the picture of the world itself: it was assumed a priori that such a picture of the world exists always and by itself and does not depend on the means and methods of the cognizing subject.

We should also note the methods (or principles) of studying nature on which Newton relied. They can be presented in the form of a research program (or plan).

First of all, he proposed resorting to observation and experiment; then, by induction, isolating individual aspects of the observed object or process in order to see how the main regularities and principles manifest themselves in it; then giving these principles mathematical expression, building on this basis an integral theoretical system, and by deduction "coming to laws that have unlimited force in everything."

The mechanistic picture of the world and the methods of scientific explanation of nature developed by Newton gave a powerful impetus to the development of other sciences and to the emergence of new fields of knowledge, such as chemistry and biology. R. Boyle, for example, was able to show how elements combine, and to explain other chemical phenomena, on the basis of ideas about the movement of "small particles of matter" (corpuscles). Lamarck, seeking the source of changes in living organisms and relying on Newton's mechanistic paradigm, concluded that the development of all living things is governed by the principle of the "increasing movement of fluids".

The mechanistic picture of the world also had a huge impact on philosophy: it contributed to the establishment of a materialistic view of the world among philosophers. T. Hobbes (1588-1679), for example, criticized "incorporeal substance", arguing that everything that exists must have a physical form. Everything is matter in motion; he even presented the mind as a kind of mechanism, and thoughts as matter moving in the brain. In general, philosophical disputes about the nature of reality helped create the environment in which the various sciences took shape.

Until the 19th century the mechanistic picture of the world reigned in natural science, and knowledge rested on two methodological principles: mechanism and reductionism.

However, with the development of science and of its various fields (biology, chemistry, geology, physics itself), it became obvious that the mechanistic picture of the world is unsuitable for explaining many phenomena. Thus, studying electric and magnetic fields, Faraday and Maxwell discovered that matter can be represented not only as substance (in accordance with its mechanistic interpretation) but also as an electromagnetic field. Electromagnetic processes could not be reduced to mechanical ones, and the conclusion suggested itself that the fundamental laws of the universe are not the laws of mechanics but the laws of electrodynamics.

In biology, J.B. Lamarck (1744-1829) made the startling discovery of the constant change and growing complexity of all living organisms in nature (and of nature itself), proclaiming the principle of evolution, which also contradicted the mechanistic picture's assertions of the immutability of the particles of the universe and the predetermined character of events. Lamarck's ideas were brought to completion in the evolutionary theory of Charles Darwin, who showed that animal and plant organisms are the result of a long development of the organic world and revealed the causes of this process (which Lamarck before him could not do): heredity and variability, together with the driving factors of natural and artificial selection. Later, many of Darwin's inaccuracies and assumptions were supplemented by genetics, which explained the mechanism of heredity and variability.

The cellular theory of the structure of living organisms is another link in the general chain of discoveries that undermined the foundations of the classical, mechanistic picture of the world. It rests on the idea that all living plants and organisms, from the simplest to the most complex (the human being), share a common structural unit, the cell. All living things have an internal unity and develop according to uniform laws (not in isolation from one another).

Finally, the discovery of the law of conservation of energy in the 1840s (J. Mayer, J. Joule, E. Lenz) showed that such phenomena as heat, light, electricity, and magnetism are likewise not isolated from one another (as had previously been imagined): they interact, pass into one another under certain conditions, and are nothing but different forms of motion in nature.

Thus the mechanistic picture of the world was undermined, with its simplified idea of motion as the mere displacement of bodies in space and time, isolated from one another; of the mechanical form of motion as the only possible one; of space as a "receptacle" of matter; and of time as an unchanging constant independent of bodies themselves.



At the turn of the 16th and 17th centuries, when the foundations of the new mathematics were laid, the foundations of experimental physics were laid as well. The leading role here belongs to Galileo (1564-1642), who not only made numerous epoch-making discoveries but in his books, letters, and conversations taught his contemporaries a new method of obtaining knowledge. Galileo's impact on people's minds was enormous. Another person who played an important role in the development of experimental science was Francis Bacon (1561-1626), who gave a philosophical analysis of scientific knowledge and of the method of induction.

Unlike the ancient Greeks, the European scholars were by no means disdainful of empirical knowledge and practical activity. At the same time they had fully mastered the theoretical heritage of the Greeks and had already embarked on the path of their own discoveries. The combination of these two aspects gave rise to the new method. Bacon writes:

Those who have practiced the sciences have been either empiricists or dogmatists. The empiricists, like the ant, only collect and use what they have collected. The dogmatists, like the spider, spin their fabric out of themselves. The bee chooses the middle way: it extracts its material from the flowers of the garden and the field, but disposes and changes it by its own skill. The true work of philosophy does not differ from this. For it does not rest exclusively or predominantly on the powers of the mind, nor does it deposit in consciousness untouched the material drawn from natural history and from mechanical experiments, but changes and processes it in the mind. So a good hope may be placed on a closer and more indestructible (which has not hitherto existed) union of these two faculties, of experience and of reason.

13.2. The scientific method

The concept of experiment presupposes a theory. Without theory there is no experiment; there is only observation. From the cybernetic (systemic) point of view, an experiment is controlled observation: the controlling system is the scientific method, which, relying on theory, dictates how the experiment is set up. Thus the transition from simple observation to experiment is a metasystem transition in the realm of experience, and this is the first aspect of the emergence of the scientific method. Its second aspect is the recognition of the scientific method as something standing above theory, in other words, the mastery of the general principle of describing reality by means of formalized language, which we discussed in the preceding chapter. Taken as a whole, the emergence of the scientific method is a single metasystem transition that creates a new level of control, including control of observation (the setting up of experiments) and control of language (the development of theory). The new metasystem is science in the modern sense of the word. Within this metasystem close ties, direct and reverse, are established between experiment and theory. Bacon describes them thus:

Our way and our method ... are as follows: we do not extract practice from practice and experience from experience (as the empiricists do), but causes and axioms from practice and experience, and from causes and axioms again practice and experience, as true Interpreters of Nature.

Now we can give a definitive answer to the question of what happened in Europe at the beginning of the 17th century: there was a major metasystem transition that encompassed both linguistic and non-linguistic activity. In the field of non-linguistic activity it appeared in the form of the experimental method. In the field of linguistic activity it gave rise to the new mathematics, which develops through metasystem transitions (the staircase effect) along the line of an ever-deepening self-awareness as a formalized language serving to create models of reality. We described this process in the preceding chapter without going beyond mathematics. Now we can complete the description by pointing out the system within which this process becomes possible. This system is science as a whole, with the scientific method as its control device, that is (to decipher this compressed formula), the totality of all human beings engaged in science who have mastered the scientific method, together with all the objects they use. In Chapter 5, speaking of the staircase effect, we noted that it manifests itself when there is a metasystem Y that remains a metasystem in relation to the systems of the series X, X', X'', ..., where each following system is formed by a metasystem transition from the preceding one, and that, remaining a metasystem, Y is precisely what provides the possibility of the smaller-scale metasystem transitions from X to X', from X' to X'', and so on. Such a system Y has an internal potential for development; we called it an ultrametasystem. In the development of material production, the ultrametasystem Y is the set of human beings who have the ability to turn a tool into an object of labor. In the development of the exact sciences, the ultrametasystem Y is the set of people who have mastered the scientific method, that is, who have the ability to create models of reality using formalized language.

We have seen that in Descartes the scientific method, taken in its linguistic aspect, served as a lever for the reform of mathematics. But Descartes did not only reform mathematics; developing the same aspect of the same scientific method, he created many theoretical models, or hypotheses, to explain physical, cosmic, and biological phenomena. If Galileo can be called the founder of experimental physics and Bacon its ideologist, then Descartes was both the founder and the ideologist of theoretical physics. True, Descartes' models were purely mechanical (there could be no other models then) and imperfect, and most of them soon became outdated; but that is not as important as the fact that Descartes established the very principle of constructing theoretical models. In the 19th century, when the initial stock of physical knowledge had been accumulated and the mathematical apparatus refined, this principle showed its full fruitfulness.

We will not be able here even in a cursory review to touch on the evolution of the ideas of physics and its achievements, as well as the ideas and achievements of other natural sciences. We will focus on two aspects of the scientific method that are of universal importance, namely, the role of general principles in science and the criteria for choosing scientific theories, and then we will consider some consequences of the achievements of modern physics in view of their importance for the whole system of science and worldview in general. We conclude this chapter by discussing some perspectives on the development of the scientific method.

13.3. The role of general principles

Bacon put forward a program for the gradual introduction of theoretical propositions ("causes and axioms") of greater and greater generality, starting from isolated empirical data. He called this process induction (that is, a leading in), as opposed to deduction (a leading out) of theoretical propositions of lesser generality from propositions of greater generality (principles). Bacon was a great opponent of general principles; he said that the mind needs not wings to lift it up but lead to pull it down to the ground. In the period of the "initial accumulation" of experimental facts and of the simplest empirical laws, and as a counterweight to medieval scholasticism, this conception had a certain justification, but later it turned out that the mind needs wings after all, more than it needs lead. In any case, this is so in theoretical physics. In confirmation, let us give the floor to so indisputable an authority in this field as Albert Einstein. In the article "Principles of Theoretical Physics" he writes:

In order to apply his method, the theorist needs as a foundation some general assumptions, so-called principles, from which he can deduce consequences. His work is thus divided into two stages. Firstly, he needs to find principles, and secondly, to develop the consequences arising from these principles. To perform the second task, he is thoroughly armed since school. Consequently, if for a certain area, i.e., a set of interdependencies, the first problem is solved, then the consequences will not be long in coming. The first of these tasks is of a completely different kind, that is, the establishment of principles that can serve as the basis for deduction. There is no method here that can be learned and systematically applied to achieve the goal. Rather, the researcher must elicit from nature well-defined general principles that reflect certain common features of a multitude of experimentally established facts.

In another paper (Physics and Reality), Einstein is very categorical:

Physics is a developing logical system of thinking, the foundations of which cannot be obtained by extracting them from experience by any inductive method, but come only by free invention.

The words about "free invention" mean, of course, not that general principles are completely independent of experience, but that they are not unambiguously determined by experience. An example Einstein often gave is this: Newton's celestial mechanics and Einstein's general theory of relativity are built on the same experimental facts, yet they proceed from completely different, in a certain sense even diametrically opposed, general principles, which shows also in their different mathematical apparatus.

While the "number of storeys" in the edifice of theoretical physics was small and the consequences of general principles were deduced easily and unambiguously, people did not realize that they had a certain freedom in establishing principles. In the trial-and-error method, the distance between trial and error (or success) was so small that they did not notice they were using trial and error at all, but believed they were deriving principles directly from experience (although this was called induction rather than deduction). Einstein writes:

Newton, the creator of the first extensive fruitful system of theoretical physics, still thought that the basic concepts and principles of his theory follow from experience. Obviously, it is in this sense that one should understand his saying “hypotheses non fingo” (I do not make up hypotheses).

But over time, theoretical physics turned into a multi-story construction, and deriving consequences from general principles became a difficult and not always unambiguous matter, because it often turned out to be necessary to make additional assumptions in the process of deduction, most often “non-principled” simplifications, without which it would be impossible to bring the calculation to numbers. Then it became clear that there is a profound difference between the general principles of theory and facts that can be directly verified by experience: the former are free constructions of the human mind, the latter are the source material that the mind receives from nature. True, the depth of this difference should not be overestimated. If we abstract from human affairs and aspirations, then it turns out that the difference between theories and facts disappears - both of them are some reflections or models of reality outside of man. The difference lies in the level at which the model is reified. Facts, if they are completely “de-ideologized”, are determined by the influence of the outside world on the human nervous system, which we are forced to consider (for the time being) as not allowing for alteration, which is why we treat facts as primary reality. Theories are models embodied in language objects that are entirely in our power, so we can discard one theory and replace it with another as easily as replacing an outdated tool with a better one.

The increasing abstractness (constructivity) of the general principles of physical theories, their distance from direct experimental facts leads to the fact that in the trial and error method it becomes increasingly difficult to find a test that has a chance of success. The mind begins to simply need wings to soar, which is what Einstein says. On the other hand, an increase in the distance from general principles to verifiable consequences makes general principles, within certain limits, invulnerable to experiment, which was also often pointed out by the classics of modern physics. Having discovered a discrepancy between the consequences of theory and experiment, the researcher is faced with an alternative: to look for the reasons for the discrepancy in the general principles of the theory, or somewhere on the way from principles to specific consequences. Owing to the high cost of general principles and the great expense required to restructure the theory as a whole, the second way is always tried first. If one succeeds in a sufficiently elegant way in modifying the deduction of consequences from general principles so that they agree with experiment, then everyone calms down and the problem is considered solved. But sometimes the modification looks clearly like a rough patch, and sometimes the patches overlap each other and the theory begins to crack at the seams; nevertheless, its conclusions are consistent with the data of experience and it continues to retain its predictive power. Then questions arise: how should one treat the general principles of such a theory? Should we strive to replace them with some other principles? At what degree of "patching" does it make sense to discard the old theory?

13.4. Theory Selection Criteria

First of all, let us note that a clear understanding of scientific theories as linguistic models of reality significantly reduces the sharpness of the competition between theories, compared with the naive point of view (a kind of Platonism) according to which the linguistic objects of a theory merely express a certain reality, so that each theory is either "actually" true, if this reality "actually" exists, or "actually" false, if this reality is fictitious. This point of view arises from transferring to the language of concept-constructs an attitude appropriate to the language of concrete facts. When we compare two competing statements, "there is pure alcohol in this glass" and "there is pure water in this glass", we know that they admit of experimental verification, and the one that is not confirmed loses all model meaning, every share of truth; it is actually false, and only false. The situation is quite different with statements expressing the general principles of scientific theories. Many verifiable consequences are deduced from them, and if some of these prove false, it is usually said that the original principles (or the ways of deriving consequences from them) are not applicable to the given field of experience; it is usually possible to establish formal criteria of applicability as well. Therefore general principles are in a certain sense "always true": the exact notions of truth and falsity do not apply to them, only the notion of their greater or lesser usefulness for describing actual facts. Like the axioms of mathematics, the general principles of physics are abstract forms into which we try to fit natural phenomena. Competing principles differ in how well they do this.

But what does "well" mean?

If a theory is a model of reality, then obviously the better theory is the one with the wider scope of applicability and the greater number of predictions it can make. These are the first criteria for comparing theories: the criteria of generality and of the predictive power of a theory.

These criteria are fairly obvious. If we regarded scientific theories as something static, not subject to development and improvement, it would perhaps be difficult to put forward any criteria beyond these. But humanity constantly develops and improves its theories, and this gives rise to another criterion, the dynamic one, which proves decisive. This criterion is well described by Philipp Frank in his book Philosophy of Science, and we shall quote his words.

If we look at which theories were actually preferred because of their simplicity, we find that the decisive ground for accepting one theory or another was neither economic nor aesthetic, but rather that which was often called dynamic. This means that a theory was preferred that made science more dynamic, that is, more suitable for expansion into the unknown. This can be seen through an example we have often referred to in this book: the struggle between the Copernican and the Ptolemaic systems. In the period between Copernicus and Newton, a lot of evidence was given in favor of one or the other system. In the end, however, Newton put forward a theory of motion that brilliantly explained all the movements of celestial bodies (for example, comets), while Copernicus, like Ptolemy, explained only the movements in our planetary system ... However, Newton's laws were based on a generalization of the Copernican theory, and we can hardly imagine how they could be formulated if he proceeded from the Ptolemaic system. In this, as in many other respects, the Copernican theory was more "dynamic", that is, it had a greater heuristic value. It can be said that the Copernican theory was mathematically more "simple" and more dynamic than that of Ptolemy.

The aesthetic criterion, or the criterion of the beauty of a theory, which Frank mentions, is difficult to defend as an independent criterion, separate from the others. But it acquires great importance as an intuitive synthesis of all of them: a theory seems beautiful to a scientist when it is sufficiently general and simple and he has a presentiment that it will prove dynamic. Of course, he may be mistaken in this.

13.5. Physics of the microworld

In physics, as in pure mathematics, the understanding of the linguistic character of theories took root as they became more abstract. This process received a decisive impetus when, at the beginning of the 20th century, physics invaded the world of atoms and elementary particles and the theory of relativity and quantum mechanics were created. Quantum mechanics played an especially important role. That theory cannot be understood at all unless one constantly reminds oneself that it is only a linguistic model of the microworld, not a representation of how the microworld would "really" look if it could be seen through a microscope of monstrous magnification; such an image does not and cannot exist. The idea of a theory as a linguistic model of reality has therefore become an integral part of modern physics, necessary for physicists to work successfully. As a result, physicists' inner attitude toward the nature of their work began to change. Where formerly the theoretical physicist felt himself the discoverer of something that existed before him and independently of him, like a navigator discovering new lands, he now feels himself rather the creator of something new, like a master skilled in his craft who creates new buildings, machines, and tools. This change has shown itself even in turns of speech: Newton is traditionally said to have "discovered" the infinitesimal calculus and celestial mechanics, whereas a modern scientist is said to have "created", "proposed", or "developed" a new theory; "discovered" would sound archaic. This, of course, in no way diminishes the dignity of theoreticians, for creation is an occupation no less honorable and inspiring than discovery.

Why, then, did quantum mechanics demand an awareness of the “linguistic nature” of theories?

According to the original atomistic conception, atoms were simply very small particles of matter, little bodies possessing, in particular, a definite shape and color, on which the physical properties and color of large clusters of atoms depend. Atomic physics at the beginning of the 20th century transferred the concept of the atom ("indivisible") to the elementary particles, electrons and protons (soon joined by the neutron), while the word "atom" came to denote a structure consisting of an atomic nucleus (according to the initial hypothesis, a cluster of protons and electrons) around which electrons revolve like planets around the sun. This picture of the structure of matter was considered hypothetical but extremely plausible, and the hypothesis itself was understood in the sense discussed above: the planetary model of the atom must be either true or false. If it is true (and there was almost no doubt of that), then electrons are "really" small particles of matter describing definite trajectories around the nucleus. True, in comparison with the atoms of the ancients, the elementary particles had already begun to lose some of the properties that would seem absolutely necessary for particles of matter. It became clear that the concept of color is completely inapplicable to electrons and protons: it is not that we do not know what color they are; the question itself is meaningless, because color is the result of interaction with light of at least a whole atom, or rather of a cluster of many atoms. Doubts also arose about the concepts of the shape and size of the electron. But the holy of holies of the idea of a material particle, the presence of the particle at each moment of time at a definite point of space, remained undoubted and self-evident.

13.6. Uncertainty relation

Quantum mechanics destroyed this notion. It was forced to do so under the pressure of new experimental data. It turned out that under certain conditions elementary particles behave not as particles but as waves; at the same time they are not "smeared" over a large region of space but retain their small size and their discreteness. What is smeared is only the probability of detecting them at one or another point of space.

Fig. 13.1. Electron diffraction

Let us take Fig. 13.1 as an illustration. It depicts an electron gun that sends electrons of a definite momentum toward a diaphragm, behind which stands a screen. The diaphragm is made of a material opaque to electrons, but it has two holes through which electrons can reach the screen. The screen is coated with a substance that glows under the impact of electrons, so that a flash occurs at the point where an electron strikes. The flow of electrons from the gun is sparse enough that each electron passes through the diaphragm and is registered on the screen independently of the others. The distance between the holes in the diaphragm is many times greater than the electron's size by any estimate of that size, but is comparable in order of magnitude to the quantity h/p, where h is Planck's constant and p is the electron's momentum, that is, the product of its velocity and mass.

These are the conditions of the experiment. Its result is the distribution of flashes on the screen. The first conclusion from analyzing the results is this: electrons strike different points of the screen, and it is impossible to predict which point a given electron will strike. One can only predict the probability of its striking one point or another, that is, the average density of flashes after a very large number of electrons have hit the screen.

But that is only half the trouble. One might imagine that different electrons pass through different parts of the holes in the diaphragm, experience different forces from the edges of the holes, and are therefore deflected differently. The real trouble comes when we examine the average density of flashes on the screen and compare it with the results obtained when one of the holes in the diaphragm is closed. If the electron is a small particle of matter, then on reaching the diaphragm it is either absorbed or passes through one of the two holes. Since the holes are placed symmetrically with respect to the electron gun, on average half the electrons pass through each hole. So if we close one hole and send a million electrons through the diaphragm, then close that hole, open the other, and send another million electrons through, we should obtain the same average flash density as if we had sent two million electrons through the diaphragm with both holes open. But it turns out that this is not so! With two holes open the distribution is different: it contains maxima and minima, as in wave diffraction.

The average density of flashes can be calculated by quantum mechanics by associating with the electrons a so-called wave function, a kind of imaginary field whose intensity is proportional to the probability of the observed events.
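A minimal numerical sketch of this calculation (an idealized two-slit model with invented parameters, not a full quantum-mechanical treatment) shows where the maxima and minima come from: with both holes open, the amplitudes add before being squared, and the cross term produces the interference pattern.

    import numpy as np

    # Each hole contributes a complex amplitude psi ~ exp(i*k*r)/r at a
    # screen point x; the average flash density is |psi|^2.
    h = 6.626e-34                # Planck's constant, J*s
    p = 1.0e-24                  # assumed electron momentum, kg*m/s
    lam = h / p                  # de Broglie wavelength, lambda = h/p
    k = 2 * np.pi / lam          # wave number

    d = 10 * lam                 # distance between the holes (order of h/p)
    L = 1.0                      # diaphragm-to-screen distance, m
    x = np.linspace(-0.25, 0.25, 11)   # sample points on the screen, m

    def amplitude(x, hole_y):
        r = np.sqrt(L**2 + (x - hole_y)**2)   # path length, hole to screen point
        return np.exp(1j * k * r) / r

    psi1 = amplitude(x, +d / 2)
    psi2 = amplitude(x, -d / 2)

    density_one_at_a_time = np.abs(psi1)**2 + np.abs(psi2)**2   # holes opened in turn
    density_both_open = np.abs(psi1 + psi2)**2                  # both holes open

    # The ratio oscillates between about 0 and 2: maxima and minima appear
    # only with both holes open, because amplitudes, not probabilities, add.
    print(np.round(density_both_open / density_one_at_a_time, 2))

Note that nothing in the sketch decides "which hole" an electron went through; the interference term exists only because the description refuses to make that assignment.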

It would take too much space to describe all the attempts to reconcile the idea of the electron as an "ordinary" particle (such particles have come to be called classical, in contrast to quantum ones) with the experimental data on its behavior. An extensive literature, both specialized and popular, is devoted to this question. All such attempts failed. Two things came to light.

First, if one simultaneously measures the coordinate x of a quantum particle (any quantum particle, not only an electron) along some axis and its momentum p along the same axis, then the measurement errors, denoted Δx and Δp respectively, obey the Heisenberg uncertainty relation:

Δx × Δp ≥ h.

There is no way around this relation: the more precisely we try to measure the coordinate x, the greater the spread in the momentum p turns out to be, and vice versa. The uncertainty relation is a universal law of nature, but since Planck's constant h is very small, it plays no role in measurements on bodies of macroscopic size.
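A rough numerical comparison (illustrative numbers only) makes the last point concrete:

    # Order-of-magnitude velocity uncertainty implied by delta_x * delta_p >= h.
    h = 6.626e-34  # Planck's constant, J*s

    def min_velocity_spread(mass_kg, delta_x_m):
        return h / (mass_kg * delta_x_m)

    # An electron (9.1e-31 kg) localized to atomic dimensions, ~1e-10 m:
    print(min_velocity_spread(9.1e-31, 1e-10))  # ~7e6 m/s: enormous

    # A 1-milligram grain (1e-6 kg) localized to within a micrometer:
    print(min_velocity_spread(1e-6, 1e-6))      # ~7e-22 m/s: utterly negligible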

Secondly, the idea that quantum particles "in fact" move along certain well-defined trajectories, that is, that at each moment of time they actually have definite coordinates and velocities (and hence momenta) which we simply cannot measure exactly, runs into insurmountable logical difficulties. On the contrary, the fundamental refusal to ascribe a real trajectory to a quantum particle, and the assumption that the most complete description of the state of a particle is the specification of its wave function, leads to a logically flawless, mathematically simple and elegant theory that agrees brilliantly with the experimental facts; in particular, the uncertainty relation follows from it immediately. This theory is quantum mechanics. In clarifying the physical and logical foundations of quantum mechanics, and in its philosophical interpretation, the main role was played by the work of the great scientist and philosopher of our time Niels Bohr (1885-1962).

13.7. Visual and symbolic models

So the electron has no trajectory. The most that can be said of an electron is to give its wave function, whose square will tell us the probability of finding the electron near a given point of space. At the same time we say that the electron is a material particle of definite (and very small) size. Combining these two ideas, as the experimental facts demanded, proved a very difficult matter, and there are still people who reject the usual interpretation of quantum mechanics (accepted, after the Bohr school, by the overwhelming majority of physicists) and wish at all costs to give quantum particles back their trajectories. Where does such persistence come from? After all, the expropriation of color from the electron was completely painless, and from the logical point of view the recognition that the concept of a trajectory is inapplicable to the electron is no different in principle from the recognition that the concept of color is inapplicable to it. The difference is that in renouncing the concept of color we show a certain amount of hypocrisy: we say that the electron has no color, but we picture it to ourselves as a kind of gray (or shiny, as a matter of taste) little ball. We replace the absence of color with an arbitrary color, and this does not in the least interfere with the use of our model. With position in space this trick does not work. The idea of an electron that is somewhere at every moment interferes with the understanding of quantum mechanics and conflicts with the experimental data. Here we are forced to renounce completely the visual-geometric picture of the particle's motion, and this provokes a painful reaction. We are so accustomed to connecting the space-time picture with true reality, with what exists objectively and independently of us, that it is very hard for us to believe in an objective reality that does not fit within this framework. And we ask ourselves again and again: but if the electron is not "smeared" over space, then surely it must in fact be somewhere?

It takes hard work of thought to recognize and feel the meaninglessness of this question. First of all, we must be aware that all our knowledge and theories are secondary models of reality, that is, models of the primary models, which are the data of sensory experience. These data bear the indelible imprint of the structure of our nervous system, and since spatio-temporal concepts are built into its lowest storeys, all our sensations and representations, all the products of our imagination, cannot go beyond spatio-temporal pictures. These limits can nevertheless be extended to some degree, but not by an illusory movement "downward" to objective reality "as it is, independent of our sense organs"; only by a movement "upward", that is, by constructing secondary symbolic models of reality.

Of course, the signs of a theory retain a continuous spatio-temporal existence, just as the primary data of experience do. But in the relation between the two, that is, in the semantics of the theory, we can allow ourselves considerable freedom if we are guided by the logic of new experimental facts and not by habitual space-time intuition. And we can build a sign system which in its functioning is in no way bound by visual representations, but is subject only to the condition of an adequate description of reality. Quantum mechanics is such a system. A quantum particle in this system is neither a gray or shiny ball nor a geometric point, but a certain concept, i.e., a functional node of the system, which, together with the other nodes, provides a description and prediction of real experimental facts: flashes on a screen, instrument readings, and so on.

Let us return to the question of how the electron "really" moves. We have seen that, because of the uncertainty relation, experiment cannot in principle answer it. Thus, as an "external part" of the physical model of reality, this question is meaningless. It remains to ascribe to it a purely theoretical meaning. But then it loses its direct connection with observed phenomena, and the expression "in fact" becomes pure fraud! Whenever we go beyond the sphere of perception and declare that "in fact" such-and-such takes place, we are moving not down but up: we are building a pyramid of linguistic objects, and only through an optical illusion does it seem to us that we are delving into a region lying below sensory experience. Metaphorically speaking, the plane separating sensory experience from reality is absolutely impenetrable, and in trying to see what is underneath it we see only an inverted reflection of the pyramid of theories. This does not mean that true reality is unknowable and that our theories are not models of it; it is only necessary to remember that all these models lie on this side of sensory experience, and that it is senseless to set ghostly "realities" on the other side in correspondence with individual elements of theories, as Plato, for example, did. The idea of an electron as a little ball moving along a trajectory is just as much a construction as the interconnection of the signs of quantum theory. It differs only in that it includes a spatio-temporal picture, to which we habitually ascribe an illusory reality with the help of the expression "in fact," which in this case is meaningless.

The transition to the conscious construction of symbolic models of reality that do not rest on any visual representations of physical objects is a great philosophical achievement of quantum mechanics. In essence, physics became an iconic model as early as Newton's time, and it owed its success (numerical calculation) precisely to its iconic character; but visual representations were present in it as a necessary element. Now they have become optional, and this has expanded the class of possible models. Those who want to bring back visualizability at all costs, even while seeing that the theory works better without it, are in fact calling for a narrowing of the class of models. It is unlikely that they will succeed. They may be compared with the eccentric who harnessed a horse to a steam locomotive: though he saw that the cart moved without a horse, it was beyond his strength to accept such a situation as normal. Iconic models are a locomotive which has no need at all of the horse of visual representation for each of its concepts.

13.8. The collapse of determinism

The second important result of quantum mechanics with general philosophical significance is the collapse of determinism. Determinism is a philosophical concept: the name given to the view that all events occurring in the world have well-defined causes and occur of necessity, that is, that they cannot fail to occur. Attempts to make this definition precise reveal logical defects in it which prevent the view from being given an exact formulation as a scientific proposition without introducing additional ideas about objective reality. Indeed, what does "events have causes" mean? Can one point to some "finite" number of causes of a given event and say that there are no others? And what does it mean that an event "could not fail to occur"? If it means only that the event did occur, then the statement turns into a tautology.

However, philosophical determinism can be given a more precise interpretation within the framework of a scientific theory claiming to be a universal description of reality. Indeed, it received such an interpretation within the framework of mechanism, a scientific and philosophical conception which arose from the successes of classical mechanics as applied to the motions of celestial bodies. According to the mechanistic conception, the world is a three-dimensional Euclidean space filled with a multitude of elementary particles which move along certain trajectories. Forces act between the particles, depending on their positions relative to one another, and the motion of the particles obeys the laws of Newtonian mechanics. With such a representation of the world, its exact state (i.e., the coordinates and velocities of all particles) at some fixed moment of time uniquely determines its exact state at any other moment. The famous French mathematician and astronomer P. Laplace (1749–1827) expressed this position in the following words:

A mind which knew, for any given moment, all the forces that animate nature and the mutual positions of all its constituent parts, and which was also extensive enough to subject these data to analysis, would embrace in a single formula the movements of the greatest bodies of the universe and those of the smallest atoms: nothing would remain uncertain for it, and the future, like the past, would stand open before its eyes.

This conception has been called Laplacian determinism. It is a legitimate and inevitable consequence of the mechanistic conception of the world. True, from a modern point of view Laplace's formulation needs some refinement, since we cannot accept as legitimate the concepts of an omniscient mind and of absolutely accurate measurement. But it is easy to modernize it practically without changing its meaning. We say: if the coordinates and momenta of all the particles in a sufficiently large volume of space are known with sufficient accuracy, then the behavior of any system over any given interval of time can be computed with any given accuracy. From this formulation, as from Laplace's original one, it follows that all future states of the universe are predetermined. By increasing the accuracy and coverage of measurements without limit, we lengthen the range of our predictions without limit. Since there are no fundamental restrictions on the accuracy and scope of measurements, that is, restrictions which would follow not from the limitations of human capabilities but from the nature of the objects measured, we can pass to the limiting case and declare that in fact the entire future of the world is already determined and absolutely unambiguous. Here the expression "in fact" acquires a quite distinct meaning; our intuition readily recognizes the legitimacy of this "in fact" and resists its being discredited.
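In modern mathematical language (a gloss we add, not Laplace's), this determinism is simply the uniqueness of solutions of the equations of motion: under broad smoothness assumptions on the forces, Newton's equations for N particles,

\[ m_i \frac{d^2 \mathbf{x}_i}{dt^2} = \mathbf{F}_i(\mathbf{x}_1, \dots, \mathbf{x}_N), \qquad i = 1, \dots, N, \]

together with the coordinates and velocities at one moment t₀, have exactly one solution, so that the state at t₀ fixes the state at every other moment, past or future.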

So, the mechanistic conception of the world leads to the idea of the complete determinism of phenomena. But this contradicts our subjective sense of freedom of choice. There are two ways out: either recognize the feeling of freedom of choice as "illusory," or recognize the mechanistic conception as unsuitable as a universal picture of the world. It is now difficult to say in what proportions the thinking people of the "pre-quantum" era divided between these two points of view. Approaching the issue from a modern position, even knowing nothing of quantum mechanics, we should resolutely adopt the second. We now understand that the mechanistic conception, like any other, is only a secondary model of the world in relation to the primary data of experience, and therefore the direct data of experience always have priority over any theory. The feeling of freedom of choice is a primary experimental fact, like the other primary facts of spiritual and sensory experience. A theory cannot reject this fact; it can only correlate some new facts with it, a procedure which, under certain conditions, we call explaining the fact. To declare freedom of choice "illusory" is as meaningless as to declare to a person with a toothache that his sensation is "illusory." The tooth may be perfectly healthy and the sensation of pain may result from irritation of a certain part of the brain, but this does not make it "illusory."

Quantum mechanics destroyed determinism. First of all, the idea of elementary particles as tiny bodies moving along definite trajectories proved false, and consequently the whole mechanistic picture of the world collapsed, a picture so clear, so familiar, and, it had seemed, so completely beyond doubt. Physicists of the twentieth century can no longer tell people, as clearly and convincingly as physicists of the nineteenth century could, what the world they live in actually is. But determinism collapsed not only as a part of the mechanistic conception; it collapsed as a part of any possible picture of the world. One could, in principle, imagine a complete description (picture) of the world which includes only actually observed phenomena but gives unambiguous predictions of all phenomena that will ever be observed. Now we know that this is impossible. We know that there are situations in which it is fundamentally impossible to predict which of many conceivable phenomena will actually occur. Moreover, according to quantum mechanics these situations are not the exception but the general rule; strictly deterministic outcomes are the exception. The quantum mechanical description of reality is an essentially probabilistic description, and it includes unambiguous predictions only as a limiting case.

As an example, consider the experiment with electron diffraction depicted in the figure. The conditions of the experiment are completely determined once all the geometric parameters of the setup and the initial momentum of the electrons emitted by the gun are given. All the electrons emitted from the gun and striking the screen are in identical conditions and are described by one and the same wave function. Nevertheless they are absorbed (produce flashes) at different points of the screen, and it is impossible to predict in advance at which point a given electron will flash; one cannot even predict whether it will deviate upward or downward in the figure; one can only indicate the probability of its hitting various parts of the screen.
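The situation can be caricatured in a few lines of code (a toy sketch of ours, not the author's: the distribution below is an arbitrary interference-like pattern, not the solution of a real diffraction problem). Identically prepared electrons share one probability distribution, yet land at different points:

```python
# Identically prepared "electrons": one distribution, different outcomes.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 2001)        # screen coordinate, arbitrary units
psi_sq = np.cos(12 * np.pi * x) ** 2    # toy |psi|^2 interference pattern
p = psi_sq / psi_sq.sum()               # normalize to a probability mass

flashes = rng.choice(x, size=10, p=p)   # ten electrons, same wave function
print(flashes)                          # ten different flash points
```

Only the histogram of many flashes, not any individual flash point, is predictable in advance.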

It is permissible, however, to ask: why are we certain that if quantum mechanics cannot predict the point at which an electron will strike, then no future theory will be able to do so either?

To this question we shall give not one but two answers; the issue deserves such attention.

The first answer may be called formal. It is as follows. Quantum mechanics rests on the principle that the description by means of the wave function is the most complete description of the state of a quantum particle. This principle, in the form of the uncertainty relation that follows from it, has been confirmed by a huge number of experiments whose interpretation involves only low-level concepts directly related to the observed quantities. The conclusions of quantum mechanics that involve more complex mathematical calculations are confirmed by a still greater number of experiments. And there is no indication whatever that we should call this principle into question. But it is equivalent to the impossibility of predicting the exact outcome of an experiment: to indicate, for example, the point on the screen which an electron will strike, one would have to know more about the electron than the wave function gives.

We begin the second answer by trying to understand why we are unwilling to accept the impossibility of predicting the point where the electron will land. Centuries of the development of physics have accustomed people to the idea that the motion of inanimate bodies is governed solely by causes external to them, and that with a sufficiently subtle investigation these causes can always be discovered, that one can always peek at them. This conviction was fully justified as long as it was possible to watch a system without affecting it, which was the case in experiments with macroscopic bodies. Imagine that it is not electrons being scattered but cannonballs, and that you are studying their motion. You see that in one case a ball deviates upward and in another downward, and you refuse to believe that this happens by itself; you are convinced that the difference in the behavior of the balls has some real cause. You film the flight of the balls, or take other measures, and in the end you find phenomena A1 and A2 associated with the flight such that when A1 is present the ball deviates upward and when A2 is present it deviates downward. And you say that A1 is the cause of the upward deviation and A2 the cause of the downward deviation. It may happen that your camera is imperfect, or that you simply grow bored with the investigation, and you never find the cause you are looking for. But you remain convinced that the cause really exists, that is, that if you had looked more carefully the phenomena A1 and A2 would have been found.

How do matters stand in the experiment with electrons? Again you see that in some cases the electron deviates upward and in others downward, and in search of a cause you try to follow its motion, to peek at it. But here it turns out that you cannot peek at an electron without affecting its fate in the most catastrophic way. To "see" an electron, you must direct a stream of light at it. But light interacts with matter in portions, quanta, which are subject to the same uncertainty relation as electrons and other particles. Therefore neither with light nor with any other means of investigation is it possible to get beyond the limits set by the uncertainty relation. In trying to refine the electron's position with the help of photons, we either impart to it so large and indefinite a momentum that the whole experiment is spoiled, or we measure its coordinate so crudely that we learn nothing new. So the phenomena A1 and A2, i.e., the causes owing to which the electron deviates upward in some cases and downward in others, do not exist in reality. And the assertion that "in fact" some cause exists loses all scientific meaning.
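The argument sketched here (the "Heisenberg microscope," a standard estimate we add for concreteness) reduces to one line. Light of wavelength λ cannot fix the electron's position better than Δx ~ λ, while a single photon of that light carries momentum h/λ, an uncontrollable part of which is transferred to the electron:

\[ \Delta x \,\Delta p \;\sim\; \lambda \cdot \frac{h}{\lambda} \;=\; h. \]

Shortening the wavelength sharpens the position but increases the momentum kick in exactly compensating measure, so the product cannot be driven below the quantum of action.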

So there are phenomena which have no causes; more precisely, there are sets of possibilities one of which is realized without any cause. This does not mean that the principle of causality should be discarded altogether: in the same experiment, if the electron gun is switched off, the flashes on the screen disappear altogether, and the cause of their disappearance is the switching off of the gun. But it does mean that the principle must be restricted substantially in comparison with how it was understood in classical mechanics and is still understood by everyday consciousness. Some phenomena have no causes; they must be accepted simply as given. Such is the world we live in.

The second answer to the question of why we are confident that unpredictable phenomena exist is that with the help of the uncertainty relation we understand not only a multitude of new facts, but also the nature of the break with causality and predictability that occurs when we invade the microworld. We see that the belief in absolute causality rested on a tacit assumption that infinitely subtle means of investigation exist for "peeking" at the object. But upon reaching the elementary particles, physicists discovered that there is a minimum quantum of action, measured by Planck's constant, and this creates a vicious circle in any attempt to make the description of one particle more detailed with the help of another. And absolute causality collapsed, and determinism with it. From a general philosophical point of view it seems entirely natural that if there is no infinite divisibility of matter, then there is no infinite detail of description; so the collapse of determinism appears more natural than its preservation would have been.

13.9. "Crazy" theories and metascience

The successes of quantum mechanics of which we spoke above relate mainly to the description of nonrelativistic particles, i.e., particles moving at speeds much less than the speed of light, so that the effects associated with the theory of relativity (relativistic effects) can be neglected. It is precisely nonrelativistic quantum mechanics that we had in mind when we spoke of its completeness and logical harmony. Nonrelativistic quantum mechanics is sufficient to describe the phenomena of the atomic level, but the physics of high-energy elementary particles requires a theory combining the ideas of quantum mechanics with those of the theory of relativity. So far only partial successes have been achieved along this path; there is no unified, consistent theory of elementary particles explaining the enormous material accumulated by experimenters. Attempts to build a new theory by unprincipled corrections to the old one do not yield significant results. The creation of a satisfactory theory of elementary particles runs up against the extraordinary peculiarity of this field of phenomena, which occur as if in an entirely different world and require for their description entirely unusual concepts, fundamentally at odds with our habitual scheme of understanding.

In the late 1950s Heisenberg proposed a new theory of elementary particles, upon reading which Bohr said that it was unlikely to be true because it was "not crazy enough." The theory indeed did not win recognition, and Bohr's apt remark became known to all physicists and even found its way into the popular literature. The word "crazy" was naturally associated with the epithet "strange" applied to the world of elementary particles. But does "crazy" mean only "strange," "unusual"? Probably, if Bohr had said "not unusual enough," the aphorism would not have come off. The word "crazy" carries the connotation of "mad," "come from who knows where," and it brilliantly characterizes the current situation in the theory of elementary particles, in which everyone recognizes that the theory needs a deep restructuring but no one knows how to go about it.

The question arises: does the "strangeness" of the world of elementary particles, the inapplicability to it of the intuition we have developed in the macroworld, doom us now and forever to wandering in the dark?

Let us consider the nature of the difficulties that have arisen. The principle of creating formalized linguistic models of reality did not suffer in the transition to the study of the microworld. But while the wheels of these models, the physical concepts, were formerly taken largely from our everyday macroscopic experience and only refined through formalization, for the new "strange" world new "strange" concepts are needed; there is nowhere to take them from, so they will have to be made afresh and, moreover, properly connected into a complete mechanism. At the first stage of the study of the microworld, one of these wheels, the wave function of nonrelativistic quantum mechanics, was made relatively easily, by relying on the already existing mathematical apparatus that had served to describe macroscopic phenomena (the mechanics of the material point, the mechanics of continuous media, matrix theory). Physicists were simply lucky: they found prototypes of the wheel they needed in two (entirely different) wheels of macroscopic physics and made of them a "centaur," the quantum concept of the wave-particle.

However, one cannot rely on luck all the time. The deeper we penetrate into the microworld, the more the necessary concept-constructs differ from the habitual concepts of macroscopic experience, and the less likely it becomes that we can build them on the fly, without any tools, without any theory. Consequently, we must subject the very task of constructing scientific concepts and theories to scientific analysis, i.e., make one more metasystem transition. To construct a particular physical theory in a qualified way, we need a general theory of the construction of physical theories (a metatheory), in the light of which the way to solve our specific problem will become clear. The comparison of the visual models of the old physics with a horse, and of abstract iconic models with a steam locomotive, can be developed as follows. Horses are placed at our disposal by nature. They grow and reproduce by themselves, and to use them one does not need to know their internal structure. But a locomotive we must build ourselves. To do this we must understand the principles of its construction and the physical laws underlying them, and we must also have tools for the work. Trying to build a theory of the "strange" world without having a metatheory of physical theories, we are like a person who sets out to build a steam locomotive with his bare hands, or an airplane with no idea of the laws of aerodynamics.

So, one more metasystem transition has matured. Physics demands... one wants to say "metaphysics," but, fortunately for our terminology, the metatheory we need is a metatheory in relation to any natural-science theory with a high degree of formalization; therefore it is more properly called metascience. This term has the disadvantage of creating the impression that metascience is something fundamentally outside science, whereas in reality the new level of the hierarchy created by this metasystem transition must, of course, be included in the general body of science, thereby expanding that body. The situation here is the same as with the term "metamathematics," for metamathematics too is a part of mathematics. But since the term "metamathematics" was accepted all the same, the term "metascience" may also be considered acceptable. However, since the most important part of metascientific research is the study of the concepts of a theory, one may also propose the term conceptology.

The main task of metascience can be formulated as follows. A certain set, or a certain generator, of facts is given. How does one build a theory that effectively describes these facts and makes correct predictions?

If we want metascience to go beyond general talk, we must build it as a full-fledged mathematical theory, and for this its object, natural-science theory, must appear before it in a formalized (even if simplified: such is the price of formalization) form accessible to mathematics. Presented in this form, a scientific theory is a formalized linguistic model whose mechanism is a hierarchical system of concepts, the point of view we have developed throughout this book. From this standpoint, the creation of a mathematical metascience looks like one more natural metasystem transition, in making which we take as our object of study formalized languages in general, not only with respect to their syntax, but also, and chiefly, with respect to their semantics, their application to the description of reality. The whole course of development of physico-mathematical science leads us to this step.

However, so far we have proceeded in our reasoning from the needs of physics. But what about pure mathematics?

If theoretical physicists know what they need, but can do little, then "pure" mathematicians can rather be reproached for the fact that they can do a lot, but do not know what they need. There is no doubt that many purely mathematical works are needed to impart coherence and harmony to the entire edifice of mathematics, and it would be ridiculous to demand immediate "practical" applications from each work. But all the same, mathematics is created for the cognition of reality, and not for aesthetic or sporting purposes, like chess, and even its highest floors are needed, in the final analysis, only insofar as they contribute to the achievement of this goal.

Probably the upward growth of the edifice of mathematics is always needed and is of unconditional value. But mathematics is also expanding in breadth, and it is becoming ever harder to determine what is needed and what is not, and if needed, to what degree. Mathematical technique is now so developed that constructing a few new mathematical objects within the framework of the axiomatic method and investigating their properties has become almost as routine a matter, though not always an easy one, as calculations with fractions were for the ancient Egyptian scribes. But who knows whether these objects will be needed? A theory of the application of mathematics is required, and this, in essence, is metascience. Consequently, the development of metascience plays a guiding and organizing role in relation to more particular mathematical problems.

The creation of an effective metascience is still a long way off. At present it is hard to picture even its general contours. To make them clear, much preparatory work must be done. Physicists must master "Bourbakism" and get a feeling for the play of mathematical structures which leads to the emergence of rich axiomatic theories suitable for the detailed description of reality. Together with mathematicians they must learn to decompose symbolic models into separate bricks in order to assemble from them the blocks they need. And, of course, the technique of performing formal calculations on arbitrary symbolic expressions (and not only on numbers) with the help of electronic computers must be developed. Just as the transition from arithmetic to algebra takes place only after the technique of arithmetic calculation has been fully mastered, so the transition to a theory of the creation of arbitrary sign systems requires a high technique of operating with symbolic expressions, requires that the problem of performing cumbersome formal calculations be removed in practice. Whether the new methods will help resolve the specific difficulties now facing the theory of elementary particles, or whether those difficulties will be resolved earlier by hand, in the "old-fashioned" way, is not known, and in the end it does not matter, for new difficulties will undoubtedly appear. One way or another, the question of creating a metascience is on the agenda. Sooner or later it will have to be solved, and then people will receive a new weapon for conquering the strangest, most fantastic worlds.
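The "formal calculations on arbitrary symbolic expressions with the help of electronic computers" anticipated here have since become routine in computer algebra systems. A minimal illustration (ours; the expressions are arbitrary examples, written for the freely available SymPy library):

```python
# Machine manipulation of symbols, as opposed to purely numerical calculation.
import sympy as sp

x, t = sp.symbols('x t')

print(sp.expand((x + 1)**3))                 # x**3 + 3*x**2 + 3*x + 1
print(sp.diff(sp.sin(x) * sp.exp(x), x))     # exp(x)*sin(x) + exp(x)*cos(x)
print(sp.integrate(sp.exp(-t**2), (t, -sp.oo, sp.oo)))  # sqrt(pi)
```

Such systems remove precisely the burden of "cumbersome formal calculations" that the text identifies as a precondition.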

Bacon F. Novum Organum, Great Books of the Western World. Encyclopedia Britannica, 1955. Aphorism 95. P. 126.

Bacon F. Op. cit. Aphorism 117. P. 131.

See the collection: Einstein A. Physics and Reality. Moscow: Nauka, 1965. The following quotations are also taken from this collection.

Frank P. Philosophy of Science. Englewood Cliffs (New Jersey): Prentice-Hall, 1957.

Laplace P. An Essay in the Philosophy of Probability Theory. Moscow, 1908. P. 9.

Formation of psychology as an experimental science

The transition from a body of knowledge to a science, which for a number of fields must be dated to the 18th century, and for some (such as mechanics) to the 17th, takes place in psychology by the middle of the 19th century. Only by this time did the diverse psychological knowledge take shape as an independent science, armed with a research methodology specific to its subject and possessing its own system, i.e., its own logic for constructing the knowledge belonging to it.
The methodological prerequisites for the formation of psychology as a science were prepared mainly by the trends connected with empirical philosophy, which proclaimed, with respect to the knowledge of psychological as of all other phenomena, the need for a turn from speculation to the experimental knowledge that natural science had carried out with respect to physical phenomena. A particularly significant role here was played by the materialistic wing of the empirical trend in psychology, which connected mental processes with physiological ones.
However, for the transition of psychology from more or less well-founded knowledge and views to a science actually to take place, an appropriate development of the scientific fields on which psychology should rest, and the elaboration of appropriate research methods, were also necessary. These last prerequisites for the shaping of psychological science were provided by the work of the physiologists of the first half of the 19th century.
Relying on a number of important discoveries in the physiology of the nervous system (C. Bell, who in 1811 demonstrated the distinction between sensory and motor nerves and established the basic laws of conduction; J. Müller; E. Du Bois-Reymond; H. Helmholtz, who measured the conduction of excitation along the nerve), physiologists created a series of major works devoted to the general laws of sensitivity and to the workings of the particular sense organs (the works of J. Müller and E. H. Weber; the works of T. Young, H. Helmholtz, and E. Hering on vision, of H. Helmholtz on hearing, and others). Devoted to the physiology of the sense organs, i.e., of the various kinds of sensitivity, these works passed, by an inner necessity, into the field of the psychophysiology of sensations.
Of particular importance for the development of experimental psychology were the studies of E. H. Weber on the relation between the increase of the stimulus and that of the sensation, which were then continued, generalized, and given mathematical treatment by G. T. Fechner (see below). This work laid the foundations of a new special field of experimental psychophysical research.
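The mathematical treatment mentioned here is worth writing out (in modern notation; the formulas are ours, though their content is the classical result of Weber and Fechner). Weber found that the just-noticeable increment of a stimulus I is a constant fraction of the stimulus itself,

\[ \frac{\Delta I}{I} = \mathrm{const}, \]

and Fechner, integrating this relation, obtained his logarithmic law for the magnitude of sensation S:

\[ S = k \ln \frac{I}{I_0}, \]

where I₀ is the threshold stimulus and k a constant depending on the sense modality.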
W. Wundt combined the results of all these investigations, in part developed them further, and systematized them psychologically in his Principles of Physiological Psychology (1874). He gathered and perfected, for the purposes of psychological research, the methods originally developed by physiologists.
In 1861 W. Wundt invented the first elementary device intended specifically for experimental psychological research. In 1879 he organized in Leipzig a laboratory of physiological psychology, which in the late 1880s was transformed into the Institute of Experimental Psychology. The first experimental works of Wundt and his numerous students were devoted to the psychophysiology of sensations, the speed of simple motor reactions, expressive movements, and so on. All these works were thus focused on elementary psychophysiological processes; they still belonged entirely to what Wundt himself called physiological psychology. But soon the experiment, whose penetration into psychology began with the elementary processes lying, as it were, in the borderland between physiology and psychology, began to be brought step by step into the study of the central psychological problems. Laboratories of experimental psychology were created in all the countries of the world. E. B. Titchener pioneered experimental psychology in the United States, where it soon underwent significant development.
Experimental work began to expand and deepen rapidly. Psychology turned into an independent and largely experimental science, which with ever more rigorous methods began to establish new facts and reveal new laws. In the few decades that have passed since then, the experimental material at psychology's disposal has grown considerably; its methods have become more varied and more precise; the face of the science has changed markedly. The introduction of experiment not only armed psychology with a very powerful special method of scientific research, but also posed in a new way the question of the methodology of psychological research as a whole, putting forward new requirements and criteria of scientific rigor for all kinds of research in psychology. That is why the introduction of the experimental method played such a large, perhaps even decisive, role in the formation of psychology as an independent science.
Along with the penetration of the experimental method, a significant role in the development of psychology was played by the penetration of the principle of evolution into it.
The evolutionary theory of modern biology, extending to psychology, played a double role in it. First, it introduced into the study of mental phenomena a new and very fruitful point of view, linking the study of the psyche and its development not only with physiological mechanisms but with the development of organisms in the process of adaptation to the environment. As early as the middle of the 19th century, G. Spencer built his system of psychology on the principle of biological adaptation. The principles of broad biological analysis were extended to the study of psychic phenomena. In the light of this biological approach, mental functions themselves came to be understood as phenomena of adaptation, in terms of the role they perform in the life of the organism. This biological view of psychic phenomena subsequently gained considerable currency; turning into a general conception not confined to phylogenesis, it soon revealed its Achilles' heel, leading to the biologization of human psychology.
Second, the evolutionary theory that extended into psychology led to the development, first of all, of zoopsychology. At the end of the last century, thanks to a series of outstanding works (J. Loeb, C. Lloyd Morgan, L. Hobhouse, H. Jennings, E. L. Thorndike, and others), zoopsychology, freed from anthropomorphism, set out on the path of objective scientific research. From research in the field of phylogenetic comparative psychology (zoopsychology) there arose new trends in general psychology, and above all behavioral psychology.<…>
The penetration of the principle of development into psychology could not fail to stimulate psychological research in terms of ontogenesis. In the second half of the 19th century the intensive development of this branch of genetic psychology, the psychology of the child, begins. In 1877 Charles Darwin published his "A Biographical Sketch of an Infant." Around the same time similar works by H. Taine, E. Egger, and others appeared. Soon, in 1882, these scientific diary essays devoted to observations of children were followed by W. Preyer's "The Soul of the Child," which continued them on a broader and more systematic plane. Preyer found many followers in various countries. Interest in child psychology became universal and international. Special research institutes were created in many countries, and special journals devoted to child psychology were published. A whole series of works on the psychology of the child appeared. Representatives of every major psychological school began to pay considerable attention to it; all the currents of psychological thought were reflected in the psychology of the child.
Along with the development of experimental psychology and the flourishing of the various branches of genetic psychology, it is necessary to note, as a significant fact in the history of psychology testifying to the importance of its scientific research, the development of the various special fields of so-called applied psychology, which approach the resolution of various questions of life on the basis of the results of scientific, in particular experimental, research. Psychology finds extensive application in education and training, in medical practice, in judicial proceedings, in economic life, in military affairs, and in art.<…>
The crisis of the methodological foundations of psychology
Having taken shape as an independent science in the middle of the 19th century, psychology in its philosophical foundations remained a science of the 18th century. It was not G. T. Fechner and W. Wundt, eclectics and epigones in philosophy, but the great philosophers of the 17th and 18th centuries who determined its methodological foundations. The formation of psychology as an experimental discipline in Wundt's hands took place under the conditions of an already impending crisis of its philosophical foundations.
For this reason the very widespread view that turns the formation of experimental physiological psychology in Fechner and Wundt into the culminating point of the development of psychology, toward which psychology ascended and from which, passing into a state of crisis, it began steadily to descend, must be rejected radically. The introduction of the experimental method into psychology and the singling out of psychology as a special experimental discipline is undeniably a significant stage in the development of psychological science. But the formation of the new psychological science cannot be compressed into a single point. It is a long process, not yet completed, in which three peak points must be distinguished: the first must be assigned to that same 18th century, or to the turn from the 17th to the 18th, which F. Engels singled out for the whole history of science; the second to the time of the formation of experimental physiological psychology in the middle of the 19th century; the third to the time when the system of psychology finally takes shape, combining perfected research methods with a new, genuinely scientific methodology. The first stones of this new edifice were laid by K. Marx in his early works.
The development of psychology in the second period is characterized by the absence of large original systems in any way comparable to those created in the 18th or early 19th century; by the subordination of psychology to such constructions as the eclectic "inductive metaphysics" of W. Wundt, the pragmatic philosophy of W. James, or the empirio-criticism of E. Mach and R. Avenarius; and by the growing struggle, waged from idealistic positions, against the spontaneously materialistic, sensationalist, and mechanistic principles on which experimental physiological psychology was initially built. At the end of this period this struggle brings psychology to an open crisis. Along with this there proceeds the further development of special experimental investigations and the perfection of research technique.
Almost everything in the development of experimental research belongs properly to this period. In the preceding period only the very birth of psychophysics and of psychophysiology, or physiological psychology, had taken place. The development of experimental research beyond the bounds of psychophysiology, beginning with H. Ebbinghaus's work on memory (1885) and G. E. Müller's investigations of memory and attention, belongs mainly to the end of the 19th century (the 1880s and 1890s). The development of zoopsychology dates to the same time (E. L. Thorndike's classic work was published in 1898). The especially significant development of child psychology, beginning with W. Preyer's work (1882), belongs mainly to a still later time (W. Stern's "Psychology of Early Childhood," 1914; the works of K. Groos, K. Bühler, and others in subsequent years).
Physiological, experimental psychology, in its basic and most progressive methodological principles and philosophical traditions, was, as we have seen, at the time of its formation still a science of the 18th century.<…>The struggle against the methodological principles on which the edifice of experimental psychology was originally erected begins as early as the turn of the 20th century. It proceeds along many lines, and throughout it one opposite continues to be counterposed to another. To the sensationalism of various shades that at first dominated physiological psychology is opposed rationalism (the psychology of "pure thought" of the Würzburg school and of A. Binet: once again Descartes against Locke); to mechanistic atomism in psychology, i.e., associationism, are opposed holism of various types (the holistic psychology of the Berlin school, of the Leipzig school, and others) and the principle of activity ("apperception," "creative synthesis": Leibniz against Descartes); to physiological naturalism (in psychophysiology) or biological naturalism (Darwin, Spencer) are opposed various forms of spiritualistic "psychology of the spirit" and of idealistic "social psychology" (the French sociological school in psychology). Further, new oppositions arise: to intellectualism, sensationalist and rationalist alike, various forms of irrationalism begin to be opposed; to the reason which the French Revolution of the 18th century had deified, dark deep drives and instincts. Finally, a struggle begins from various sides against the best, progressive aspects of the Cartesian conception of consciousness with its clear and distinct knowledge. Against it is advanced, on the one hand, the diffuse, feeling-like experience of the psychology of the Leipzig school (J. Böhme and the German mystics against Descartes); it is opposed, on the other hand, by the various varieties of the psychology of the unconscious (psychoanalysis and others). Against it, finally, carrying the crisis to its extreme limits, stands behavioral psychology, which rejects not only the specific concept of consciousness but the psyche as a whole: the "man-machine" of J. O. de La Mettrie tries to overcome all the contradictions of the human spirit by abolishing it altogether (the reflex against consciousness, Descartes against Descartes).
In its main tendencies this struggle is an ideological struggle, but the reference points for the specific forms it assumes in the practice of psychological research are provided by the contradictions between the concrete factual material revealed in the progressive course of scientific psychological research and the methodological foundations from which psychology set out.
The struggle along all these lines, beginning at the turn of the 20th century, continues in foreign psychology to the present day. But in different periods different motifs are dominant. Here one must distinguish, first of all, the period before 1918 (up to the end of the First World War and the victory of the Great Socialist Revolution in Russia) and the subsequent period. In the second of these periods psychology enters a phase of open crisis; in the first, the crisis is only in preparation. Already in the first period many of the trends that would become dominant in the subsequent one begin to take shape: the irrationalist intuitionism of A. Bergson, the psychoanalysis of S. Freud, the "psychology of the spirit" of W. Dilthey, and so on. But characteristic of this period are chiefly the currents waging the struggle against the sensationalism and, in part, the mechanistic atomism of associative psychology, which was at first the dominant trend (G. Spencer and A. Bain in England; H. Taine and T. A. Ribot in France; G. E. Müller and T. Ziehen in Germany; M. M. Troitsky in Russia). In this period the tendency of rationalistic idealism still dominates. In the subsequent period, in the postwar years, which became for psychology years of acute crisis as well, irrationalistic and mystical tendencies increasingly gain the upper hand.
Anti-sensationalist tendencies first declare themselves in connection with the posing of the problem of thought in psychology: in the subtlest form in A. Binet in France and in D. E. Moore and E. Aveling in England, and in the most pointedly idealistic form in Germany, among the representatives of the Würzburg school, directly influenced by the idealistic philosophy of E. Husserl, which resurrected Platonic idealism and the "realism" of scholastic philosophy. The Würzburg school builds the psychology of thought on the basis of "experimental self-observation." Its main aim is to show that thought is at bottom a purely spiritual act, irreducible to sensations and independent of sensory images; its core is the "intention" (directedness) toward an ideal object, its chief content the immediate "grasping" of relations. Thus the Würzburgers revive the ideas of rationalist philosophy within the framework of "experimental psychology," just as their opponents realize in it the principles of the philosophy of empiricism. At the same time, for all their antagonism, the two currents are united by a common metaphysical approach to the relation of thought and sensation. Sensationalist psychology stands on the positions of a vulgar metaphysical empiricism, for which there is no transition from sensation to thought: one must either deny outright the qualitative specificity of thought, reducing thought to sensations, or consider thought in isolation from sensation. On this basis, the posing of the problem of thought in psychological research was bound to lead to a rationalistic opposition of thought to sensation, to sensory visualization in general.
Following the struggle against the sensationalist principle, a struggle also begins against the mechanistic-atomistic principle of associative psychology, against the "psychology of elements" and its tendency, inspired by the ideals of mechanistic natural science, to decompose all the complex formations of consciousness into elements and to treat them as the result of the coupling, the association, of these elements. W. Wundt himself tried to take account of the qualitative distinctiveness of the whole with respect to its elements by introducing the concepts of apperception and creative synthesis, which he contrasted with simple external association. Experimental facts compelled Wundt to this innovation. Thus already the first psychological studies of auditory sensations, namely the investigations of C. Stumpf (1883), showed that tones, in merging, and not merely in associating externally, form diverse integral structures which act as new specific qualities irreducible to the qualities of their constituent elements. Then C. von Ehrenfels (1890) demonstrated the same on visual perceptions and first introduced the term "Gestaltqualität" to designate this specific new quality of the whole. Subsequent studies of the perception of musical tones and a number of other investigations brought out extensive factual material that did not fit within the framework of the psychology of elements and forced psychology to go beyond it.
At first this movement beyond the limits of the mechanistic psychology of elements is accomplished chiefly by opposing to the mechanism of associations various forms of "creative synthesis" as manifestations of spiritual activity (Wundt), "transitional states of consciousness" (James), and the like. In the subsequent, postwar period of the crisis, the same question of integral formations irreducible to a sum of elements is resolved from the substantially different positions of structural formalism (Gestalt psychology) and of irrationalist holism (the Leipzig school).
The struggle against association as the chief explanatory principle of experimental psychology also finds expression in another very symptomatic tendency: the tendency to renounce altogether the explanation of the more complex, meaningful ("spiritual") mental phenomena and to confine oneself to describing the forms in which these spiritual phenomena are given (the "descriptive psychology" of W. Dilthey). But these tendencies too (foreshadowed already in Wundt, who opposed to physiological psychology the historical psychology of peoples, studying the higher spiritual formations: speech, thought, and so on) come to the fore only in the subsequent postwar years, in the period of the crisis.
In the years following the end of the First World War the crisis takes on acute forms. Like the crisis in physics of which V. I. Lenin wrote in "Materialism and Empirio-Criticism," and like the crises in mathematics and elsewhere, it is a crisis bound up with the ideological struggle over the methodological foundations of science. The methodological foundations on which the edifice of experimental psychology was originally erected are crumbling; more and more widespread in psychology becomes the rejection not only of experiment but of the tasks of scientific explanation in general (the "understanding psychology" of E. Spranger); psychology is engulfed by a wave of vitalism, mysticism, and irrationalism. Instinct rising from the depths of the organism (A. Bergson), "horme" (W. McDougall), displaces the intellect. The center of gravity shifts from the higher historical forms of consciousness to its prehistoric, primitive, "deep" foundations, from consciousness to the unconscious and instinctive. Consciousness is reduced to the role of a camouflaging mechanism devoid of real influence on behavior, which is controlled by unconscious drives (S. Freud). At the same time mechanism takes on extreme forms, arriving at a complete denial of the human psyche and consciousness: human activity is reduced to a set of unconscious reflex reactions (behavioral psychology). In the psychology of peoples and in the doctrine of personality, in characterology, reactionary racial-fatalistic theories (E. Kretschmer, E. Jaensch) become dominant in foreign bourgeois psychology; in child psychology pedology spreads widely, and in pedagogical and applied psychology generally, testology.<…>