Quantum mechanics basis. Fundamentals of quantum theory

BASIC PRINCIPLES OF QUANTUM MECHANICS.


In 1900, the German physicist Max Planck suggested that the emission and absorption of light by matter occur in finite portions, quanta, and that the energy of each quantum is proportional to the frequency of the emitted radiation:

E = hν,

where ν is the frequency of the emitted (or absorbed) radiation and h is a universal constant called Planck's constant. According to modern data,

h = (6.62618 ± 0.00004)·10⁻³⁴ J·s.

Planck's hypothesis was the starting point for the emergence of quantum concepts, which formed the basis of a fundamentally new physics, the physics of the microworld, called quantum physics. The deep ideas of the Danish physicist Niels Bohr and his school played a huge role in its formation. At the root of quantum mechanics lies a consistent synthesis of the corpuscular and wave properties of matter. A wave is a process extended over space (recall waves on water), whereas a particle is a far more localized object than a wave. Under certain conditions light behaves not like a wave but like a stream of particles. At the same time, elementary particles sometimes exhibit wave properties. Within the framework of classical theory it is impossible to combine wave and corpuscular properties. For this reason, the creation of a new theory describing the regularities of the microworld has led to the rejection of conventional ideas that are valid for macroscopic objects.

From a quantum point of view, both light and particles are complex objects that exhibit both wave and particle properties (the so-called wave-particle duality). The creation of quantum physics was stimulated by attempts to comprehend the structure of the atom and the regularities of the emission spectra of atoms.

At the end of the 19th century it was discovered that when light falls on the surface of a metal, electrons are emitted from it. This phenomenon was called the photoelectric effect.

In 1905, Einstein explained the photoelectric effect on the basis of quantum theory. He introduced the assumption that the energy in a beam of monochromatic light consists of portions whose size is equal to hν. The physical dimension of h is time·energy = length·momentum = angular momentum. This dimension is possessed by the quantity called action, and in this connection h is called the elementary quantum of action. According to Einstein, an electron in the metal, having absorbed such a portion of energy, performs the work of escaping from the metal and acquires the kinetic energy

E_k = hν − A_out.

This is Einstein's equation for the photoelectric effect.
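As a numerical illustration, here is a minimal sketch of Einstein's relation E_k = hν − A_out; the cesium work function used below is an assumed round figure for illustration, not a value from this text.

```python
# Sketch: photoelectron kinetic energy from Einstein's equation E_k = h*nu - A_out.
H = 6.62618e-34        # Planck's constant, J*s (value quoted in the text)
E_CHARGE = 1.6e-19     # elementary charge, C (used for the J -> eV conversion)

def photoelectron_energy_ev(frequency_hz: float, work_function_ev: float) -> float:
    """Kinetic energy (eV) of an electron ejected by light of a given frequency.
    A negative result means the frequency is below the red boundary: no emission."""
    return H * frequency_hz / E_CHARGE - work_function_ev

# Green light (~5.5e14 Hz) on cesium (work function ~1.9 eV, an assumed figure):
print(photoelectron_energy_ev(5.5e14, 1.9))   # ~0.38 eV
```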

Discrete portions of light were later (in 1926) called photons.

In science, when constructing a mathematical apparatus, one should always proceed from the nature of the observed experimental phenomena. The Austrian physicist Erwin Schrödinger achieved his great results by trying a different strategy of scientific research: first the mathematics, and then an understanding of its physical meaning and, as a result, an interpretation of the nature of quantum phenomena.

It was clear that the equations of quantum mechanics must be wavelike (after all, quantum objects have wave properties). These equations must have discrete solutions (elements of discreteness are inherent in quantum phenomena). Equations of this kind were known in mathematics. Taking them as a starting point, Schrödinger suggested using the concept of the wave function ψ. For a particle moving freely along the x axis, the wave function is ψ = e^(−(i/ℏ)(Et − px)), where p is the momentum, x the coordinate, E the energy, and ℏ Planck's constant. The function ψ is called a wave function because an exponential of this kind describes a wave.

The state of a particle in quantum mechanics is described by a wave function, which makes it possible to determine only the probability of finding a particle at a given point in space. The wave function does not describe the object itself or even its potentialities. Operations with the wave function make it possible to calculate the probabilities of quantum mechanical events.

The fundamental principles of quantum physics are the principles of superposition, uncertainty, complementarity, and identity.

The principle of superposition in classical physics allows one to obtain the resulting effect of the superposition (superimposition) of several independent influences as the sum of the effects caused by each influence separately. It is valid for systems or fields described by linear equations. This principle is very important in mechanics, in the theory of oscillations, and in the wave theory of physical fields. In quantum mechanics the principle of superposition refers to wave functions: if a physical system can be in states described by two or more wave functions ψ₁, ψ₂, …, ψₙ, then it can be in the state described by any linear combination of these functions:

Ψ = c₁ψ₁ + c₂ψ₂ + … + cₙψₙ,

where c₁, c₂, …, cₙ are arbitrary complex numbers.
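A minimal numerical sketch of this principle: a linear combination of two orthonormal states is again a state, and after normalization the squared overlaps with the basis states give probabilities summing to 1 (the coefficients below are arbitrary illustrative values).

```python
import numpy as np

# Sketch: superposition of two orthonormal states psi_1, psi_2 with complex
# coefficients c_1, c_2 (all values illustrative).
psi_1 = np.array([1.0, 0.0], dtype=complex)
psi_2 = np.array([0.0, 1.0], dtype=complex)
c_1, c_2 = 1 + 1j, 2 - 1j

psi = c_1 * psi_1 + c_2 * psi_2          # a linear combination is again a state
psi /= np.linalg.norm(psi)               # normalize so total probability is 1

# |<psi_k|psi>|^2 gives the probability of finding the system in state psi_k.
probs = [abs(np.vdot(b, psi))**2 for b in (psi_1, psi_2)]
print(probs, sum(probs))                 # ~[0.286, 0.714], summing to 1.0
```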

The principle of superposition is a refinement of the corresponding concepts of classical physics. According to the latter, in a medium that does not change its properties under the influence of perturbations, waves propagate independently of one another. Consequently, the resulting perturbation at any point of the medium when several waves propagate in it is equal to the sum of the perturbations corresponding to each of these waves:

S = S₁ + S₂ + … + Sₙ,

where S₁, S₂, …, Sₙ are the perturbations caused by the individual waves. A non-harmonic wave can be represented as a sum of harmonic waves.

The principle of uncertainty states that it is impossible to determine simultaneously two complementary characteristics of a microparticle, for example, velocity and coordinate. It reflects the dual corpuscular-wave nature of elementary particles. The inaccuracies in the simultaneous determination of complementary quantities in an experiment are related by the uncertainty relation established in 1927 by Werner Heisenberg. The uncertainty relation states that the product of the inaccuracies of any pair of complementary quantities (for example, a coordinate and the momentum projection onto the same axis, or energy and time) is determined by Planck's constant h. Uncertainty relations indicate that the more definite the value of one of the parameters entering the relation, the more uncertain the value of the other parameter, and vice versa. This means that the two parameters cannot be measured simultaneously with arbitrary accuracy.

Classical physics has taught that all the parameters of objects and the processes occurring with them can be measured simultaneously with any accuracy. This position is refuted by quantum mechanics.
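The refutation can be made quantitative. The sketch below (in units with ħ = 1 and an arbitrary packet width) numerically evaluates Δx and Δp for a Gaussian wave packet, for which the product Δx·Δp attains the minimum ħ/2 allowed by the uncertainty relation.

```python
import numpy as np

# Numerical check of the uncertainty relation on a Gaussian wave packet
# (units: hbar = 1; packet width SIGMA is arbitrary).
HBAR = 1.0
SIGMA = 0.7

x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * SIGMA**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

prob = np.abs(psi)**2
mean_x = np.sum(x * prob) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob) * dx)

# Momentum-space density via FFT (p = hbar*k); <p> = 0 for this symmetric packet.
phi = np.fft.fftshift(np.fft.fft(psi))
p = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi * HBAR
dp = p[1] - p[0]
prob_p = np.abs(phi)**2
prob_p /= np.sum(prob_p) * dp
delta_p = np.sqrt(np.sum(p**2 * prob_p) * dp)

print(delta_x * delta_p, HBAR / 2)            # ~0.5 vs 0.5
```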

The Danish physicist Niels Bohr came to the conclusion that quantum objects are relative to the means of observation. The parameters of quantum phenomena can be judged only after their interaction with the means of observation, i.e., with instruments. The behavior of atomic objects cannot be sharply separated from their interaction with the measuring instruments that fix the conditions under which these phenomena occur. It must be taken into account that the instruments used to measure the parameters are of different types. The data obtained under different experimental conditions should be considered complementary in the sense that only a combination of different measurements can give a complete picture of the properties of the object. This is the content of the complementarity principle.

In classical physics, a measurement was considered not to perturb the object of study: the measurement leaves the object unchanged. According to quantum mechanics, each individual measurement destroys the state of the micro-object, so that to carry out a new measurement the micro-object has to be prepared anew. This complicates the synthesis of measurements. For this reason Bohr asserts the complementarity of quantum measurements. The data of classical measurements are not complementary; they have independent meaning apart from one another. Complementarity arises where the objects under study are indistinguishable from one another and interconnected.

Bohr related the principle of complementarity not only to the physical sciences: "the integrity of living organisms and the characteristics of people with consciousness, as well as human cultures, represent features of integrity whose display requires a typically complementary mode of description." According to Bohr, the capabilities of living beings are so diverse and so closely interconnected that in studying them one again has to turn to the procedure of complementing observational data. However, this idea of Bohr's did not receive proper development.

The features and specificity of the interactions between the components of complex micro- and macrosystems, as well as the external interactions between such systems, lead to their enormous diversity. Individuality is characteristic of micro- and macrosystems; each system is described by the set of all possible properties inherent only to it. One can name the differences between the nucleus of hydrogen and that of uranium, although both are microsystems. There are no fewer differences between Earth and Mars, although these planets belong to the same solar system.

Nevertheless, it is possible to speak of the identity of elementary particles. Identical particles have the same physical properties: mass, electric charge, and other internal characteristics. For example, all the electrons of the Universe are considered identical. Identical particles obey the principle of identity, a fundamental principle of quantum mechanics, according to which the states of a system of particles obtained from one another by interchanging identical particles cannot be distinguished in any experiment.

This principle is the main difference between classical and quantum mechanics. In quantum mechanics, identical particles are devoid of individuality.

STRUCTURE OF THE ATOM AND THE NUCLEUS. ELEMENTARY PARTICLES.

The first ideas about the structure of matter arose in Ancient Greece in the 6th-4th centuries BC. Aristotle considered matter to be continuous, i.e., divisible into arbitrarily small parts, never reaching a smallest particle that could not be divided further. Democritus believed that everything in the world consists of atoms and emptiness. Atoms, whose name means "indivisible," are the smallest particles of matter; in Democritus's representation they are spheres with a jagged surface.

Such a worldview existed until the end of the 19th century. In 1897, Joseph John Thomson (1856-1940) discovered an elementary particle, which was called the electron. It was found that the electron flies out of atoms and has a negative electric charge. The magnitude of the electron charge is e = 1.6·10⁻¹⁹ C (coulomb), and the electron mass is m = 9.11·10⁻³¹ kg.

After the discovery of the electron, Thomson in 1903 put forward the hypothesis that the atom is a sphere over which positive charge is smeared, with negatively charged electrons interspersed in it like raisins. The positive charge equals the negative one; on the whole, the atom is electrically neutral (its total charge is 0).

In 1911, in the course of an experiment, Ernest Rutherford found that the positive charge is not spread over the volume of the atom but occupies only a small part of it. After that, he put forward a model of the atom that later became known as the planetary model. According to this model, an atom really is a sphere, in the center of which there is a positive charge occupying a small part of this sphere, about 10⁻¹³ cm across. The negative charge is located on the outer, so-called electron shell.

A more refined quantum model of the atom was proposed in 1913 by the Danish physicist N. Bohr, who worked in Rutherford's laboratory. He took Rutherford's model of the atom as a basis and supplemented it with new hypotheses that contradict classical ideas. These hypotheses are known as Bohr's postulates. They reduce to the following.

1. Each electron in an atom can perform stable orbital motion along a certain orbit, with a certain energy value, without emitting or absorbing electromagnetic radiation. In these states atomic systems have energies forming a discrete series: E₁, E₂, …, Eₙ. Any change of energy as a result of emission or absorption of electromagnetic radiation occurs in a jump from one state to another.

2. When an electron moves from one stationary orbit to another, energy is emitted or absorbed. If, during the transition of an electron from one orbit to another, the energy of the atom changes from E_m to E_n, then hν = E_m − E_n, where ν is the frequency of the radiation.

Bohr used these postulates to calculate the simplest atom, that of hydrogen.
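That calculation yields the level series E_n = −13.6 eV/n², from which the second postulate gives the radiation frequencies. A small sketch, using the constants quoted above:

```python
# Sketch of Bohr's result for hydrogen: E_n = -13.6 eV / n^2, and the frequency
# of the photon emitted in a transition m -> n from h*nu = E_m - E_n.
H = 6.62618e-34      # Planck's constant, J*s
E_CHARGE = 1.6e-19   # J per eV
RYDBERG_EV = 13.6    # ground-state binding energy of hydrogen, eV

def energy_level_ev(n: int) -> float:
    return -RYDBERG_EV / n**2

def transition_frequency_hz(m: int, n: int) -> float:
    """Frequency of the radiation for a jump from level m down to level n (m > n)."""
    delta_e = (energy_level_ev(m) - energy_level_ev(n)) * E_CHARGE
    return delta_e / H

# First Balmer line (3 -> 2), the red H-alpha line:
print(transition_frequency_hz(3, 2))   # ~4.6e14 Hz, i.e. ~656 nm
```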

The region in which the positive charge is concentrated is called the nucleus. It was supposed that the nucleus consists of positive elementary particles. These particles, called protons (in Greek, proton means "first"), were discovered by Rutherford in 1919. Their charge equals the electron charge in modulus (but is positive); the proton mass is 1.6724·10⁻²⁷ kg. The existence of the proton was confirmed by an artificial nuclear reaction converting nitrogen into oxygen: nitrogen atoms were irradiated with helium nuclei, and the result was oxygen and a proton. The proton is a stable particle.

In 1932, James Chadwick discovered a particle that has no electric charge and whose mass is almost equal to that of the proton. This particle was called the neutron. The mass of the neutron is 1.675·10⁻²⁷ kg. The neutron was discovered by irradiating a beryllium plate with alpha particles. The free neutron is an unstable particle. Its lack of charge explains its easy ability to penetrate atomic nuclei.

The discovery of the proton and the neutron led to the creation of the proton-neutron model of the atom. It was proposed in 1932 by the Soviet physicists Ivanenko and Gapon and the German physicist Heisenberg. According to this model, the nucleus of an atom consists of protons and neutrons, with the exception of the hydrogen nucleus, which consists of a single proton.

The charge of the nucleus is determined by the number of protons in it and is denoted by the symbol Z. Essentially the entire mass of an atom is contained in the mass of its nucleus and is determined by the protons and neutrons entering it, since the mass of the electron is negligible compared with the masses of the proton and neutron. The atomic number in Mendeleev's periodic table corresponds to the charge of the nucleus of the given chemical element. The mass number A of an atom is equal to the total number of neutrons and protons: A = Z + N, where Z is the number of protons and N the number of neutrons. Conventionally, an element X is written with its mass number and charge number: ᴬX_Z.

There are nuclei that contain the same number of protons but different numbers of neutrons, i.e., have different mass numbers. Such nuclei are called isotopes. For example, ¹H₁ is ordinary hydrogen, ²H₁ deuterium, and ³H₁ tritium. The most stable nuclei are those in which the number of protons equals the number of neutrons, or in which either number equals one of the magic numbers 2, 8, 20, 28, 50, 82, 126.

The dimensions of an atom are approximately 10⁻⁸ cm; the nucleus is about 10⁻¹³ cm in size. Between the nucleus of the atom and the boundary of the atom there is a space that is huge on the scale of the microworld. The density in the nucleus of an atom is enormous, approximately 1.5·10⁸ t/cm³. Chemical elements with mass number A < 50 are called light, and those with A > 50 heavy. In the nuclei of heavy elements it is somewhat crowded, i.e., an energetic precondition for their radioactive decay is created.

The energy required to split a nucleus into its constituent nucleons is called the binding energy. (Nucleons is a collective name for protons and neutrons; the term means "nuclear particles".)

E_bind = Δm·c²,

where Δm is the nuclear mass defect (the difference between the total mass of the nucleons forming the nucleus and the mass of the nucleus).
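A worked example of this formula, assuming approximate nucleon and nuclear masses for helium-4 (the He-4 mass below is an illustrative figure, not a value from this text):

```python
# Sketch: binding energy of helium-4 from the mass defect, E = dm * c^2.
C = 3.0e8                 # speed of light, m/s
M_PROTON = 1.6726e-27     # kg
M_NEUTRON = 1.675e-27     # kg (value quoted in the text)
M_HE4 = 6.6447e-27        # kg, helium-4 nucleus (approximate, assumed)

mass_defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4
binding_energy_j = mass_defect * C**2
print(binding_energy_j / 1.6e-13)   # in MeV: ~28 MeV, ~7 MeV per nucleon
```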

In 1928, the theoretical physicist Dirac proposed a theory of the electron. Elementary particles can behave like a wave; they possess wave-particle duality. Dirac's theory made it possible to determine when an electron behaves like a wave and when like a particle. He concluded that there must exist an elementary particle with the same properties as the electron but with a positive charge. Such a particle was discovered in 1932 and named the positron: the American physicist Anderson found in a photograph of cosmic rays the track of a particle similar to the electron but with a positive charge.

It followed from the theory that an electron and a positron, interacting with each other (the annihilation reaction), form a pair of photons, i.e., quanta of electromagnetic radiation. The reverse process is also possible, when a photon, interacting with a nucleus, turns into an electron-positron pair. Each particle is associated with a wave function whose squared amplitude is equal to the probability of finding the particle in a certain volume.

In the 1950s, the existence of the antiproton and antineutron was proved.

Even 30 years ago it was believed that neutrons and protons are elementary particles, but experiments on the interaction of protons and electrons moving at high speeds showed that protons consist of still smaller particles. These particles were first studied by Gell-Mann, who called them quarks. Several varieties of quarks are known; it is assumed that there are six flavors: the u-quark (up), d-quark (down), s-quark (strange), c-quark (charm), b-quark (beauty), and t-quark (truth).

Each flavor of quark carries one of three colors: red, green, or blue. This is merely a designation: quarks are much smaller than the wavelength of visible light and therefore have no color.

Let us consider some characteristics of elementary particles. In quantum mechanics each particle is assigned a special intrinsic mechanical moment, which is associated neither with its motion in space nor with its rotation. This intrinsic mechanical moment is called spin. Thus, if an electron is rotated by 360°, one would expect it to return to its original state. In fact, the original state is reached only after one more 360° rotation. That is, to return the electron to its original state it must be rotated by 720°; with respect to spin, we perceive, as it were, only half of the world. For example, a bead on a double loop of wire returns to its original position only when turned through 720°. Such particles have half-integer spin ½. The spin tells us what a particle looks like when viewed from different angles. For example, a particle with spin 0 looks like a point: it appears the same from all sides. A particle with spin 1 can be compared to an arrow: it looks different from different sides and returns to its former appearance when rotated through 360°. A particle with spin 2 can be compared to an arrow sharpened at both ends: any of its positions repeats after half a turn (180°). Particles of higher spin return to their original state when rotated through an even smaller fraction of a full revolution.
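The 720° behavior can be checked directly on the standard two-component (spinor) representation of spin ½; the sketch below uses the conventional rotation operator R(θ) = exp(−iθσ_z/2), which is an assumption brought in from standard quantum mechanics rather than derived in this text.

```python
import numpy as np

# Sketch: a spin-1/2 state changes sign under a 360-degree rotation and returns
# to itself only after 720 degrees.
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(theta: float) -> np.ndarray:
    # Rotation of a spinor about the z axis by angle theta (radians):
    # exp(-i*theta*sigma_z/2) = cos(theta/2)*I - i*sin(theta/2)*sigma_z
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma_z

spinor = np.array([1.0, 0.0], dtype=complex)
print(rotation(2 * np.pi) @ spinor)   # [-1, 0]: a 360 deg turn flips the sign
print(rotation(4 * np.pi) @ spinor)   # [ 1, 0]: back to the original after 720 deg
```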

Particles with half-integer spin are called fermions, and particles with integer spin bosons. Until recently it was believed that bosons and fermions are the only possible types of indistinguishable particles. In fact there are a number of intermediate possibilities, with fermions and bosons as the two limiting cases. Such a class of particles is called anyons.

Particles of matter obey the Pauli exclusion principle, discovered in 1925 by the Austrian physicist Wolfgang Pauli. The Pauli principle states that in a system of identical particles with half-integer spin, no two particles can occupy the same quantum state. There are no such restrictions for particles with integer spin. This means that two identical particles cannot have coordinates and velocities that coincide to within the accuracy specified by the uncertainty principle. If particles of matter have very close coordinates, then their velocities must be different, and consequently they cannot remain at points with these coordinates for long.

In quantum mechanics it is assumed that all forces and interactions between particles are carried by particles with integer spin equal to 0, 1, or 2. This happens as follows: a particle of matter emits a carrier particle of the interaction (for example, a photon). As a result of the recoil, the velocity of the particle changes. Next, the carrier particle "bumps" into another particle of matter and is absorbed by it. This collision changes the velocity of the second particle, as if a force were acting between these two particles of matter. The carrier particles exchanged between particles of matter are called virtual because, unlike real ones, they cannot be registered by a particle detector. They exist nonetheless, because they create an effect that can be measured.

Carrier particles can be classified into four types according to the magnitude of the interaction they carry and the particles with which they interact:

1) Gravitational force. Every particle is subject to a gravitational force whose magnitude depends on the mass and energy of the particle. It is a weak force. Gravitational forces act over large distances and are always attractive. Thus, for example, the gravitational interaction keeps the planets in their orbits and us on the Earth.

In the quantum mechanical approach to the gravitational field, it is believed that the force acting between particles of matter is carried by a particle with spin 2, commonly called the graviton. The graviton has no mass of its own, and hence the force it carries is long-range. The gravitational interaction between the Sun and the Earth is explained by the exchange of gravitons between the particles making up the Sun and the Earth. The effect of the exchange of these virtual particles is measurable, for this effect is the revolution of the Earth around the Sun.

2) The next kind of interaction is created by electromagnetic forces, which act between electrically charged particles. The electromagnetic force is much stronger than the gravitational force: the electromagnetic force acting between two electrons is about 10⁴⁰ times greater than the gravitational one. Electromagnetic interaction determines the existence of stable atoms and molecules (the interaction between electrons and protons). The carrier of the electromagnetic interaction is the photon.

3) Weak interaction. It is responsible for radioactivity and exists between all particles of matter with spin ½. The weak interaction ensures the long, even burning of our Sun, which provides the energy for all biological processes on Earth. The carriers of the weak interaction are three particles: the W± and Z⁰ bosons. They were discovered only in 1983. The radius of the weak interaction is extremely small, and so its carriers must have large masses. In accordance with the uncertainty principle, the lifetime of particles with such a large mass must be extremely short, about 10⁻²⁶ s.

4) The strong interaction is the interaction that keeps quarks inside protons and neutrons, and protons and neutrons inside the atomic nucleus. The carrier of the strong interaction is considered to be a particle with spin 1, commonly called the gluon. Gluons interact only with quarks and with other gluons. Quarks, thanks to gluons, are bound in pairs or triplets. The strong force weakens at high energies, and quarks and gluons begin to behave like free particles; this property is called asymptotic freedom. In experiments on powerful accelerators, photographs have been obtained of the tracks of jets of quarks and gluons born in collisions of high-energy protons and antiprotons. The strong interaction ensures the relative stability and existence of atomic nuclei. Strong and weak interactions are characteristic of microworld processes leading to mutual transformations of particles.

Strong and weak interactions became known to man only in the first third of the 20th century, in connection with the study of radioactivity and the interpretation of the results of bombarding atoms of various elements with alpha particles. Alpha particles knock out both protons and neutrons. This line of reasoning led physicists to conclude that protons and neutrons sit in the nuclei of atoms tightly bound to one another; there the strong interactions act. On the other hand, radioactive substances emit alpha, beta, and gamma rays. When in 1934 Fermi created the first theory adequately matching the experimental data, he had to assume the presence in the nuclei of atoms of interactions of negligible intensity, which came to be called weak.

Attempts are now being made to unite the electromagnetic, weak, and strong interactions into a so-called GRAND UNIFIED THEORY. This theory sheds light on our very existence. It is possible that our existence is a consequence of the formation of protons. Such a picture of the beginning of the Universe seems the most natural. Terrestrial matter consists mainly of protons and contains neither antiprotons nor antineutrons. Experiments with cosmic rays have shown that the same is true of all matter in our galaxy.

Characteristics of strong, weak, electromagnetic and gravitational interactions are given in the table.

The order of intensity of each interaction indicated in the table is given relative to the intensity of the strong interaction, taken as 1.

Let us give a classification of the most well-known elementary particles at the present time.

PHOTON. Its rest mass and electric charge are equal to 0. The photon has integer spin and is a boson.

LEPTONS. This class of particles does not participate in the strong interaction but does participate in the electromagnetic, weak, and gravitational interactions. Leptons have half-integer spin and are fermions. The elementary particles of this group are assigned a certain characteristic called lepton charge. The lepton charge, unlike the electric charge, is not the source of any interaction; its role has not yet been fully elucidated. The value of the lepton charge is L = 1 for leptons, L = −1 for antileptons, and L = 0 for all other elementary particles.

MESONS. These are unstable particles characterized by the strong interaction. The name "mesons" means "intermediate" and stems from the fact that the first mesons discovered had masses greater than that of the electron but less than that of the proton. Today mesons are known whose masses exceed the proton mass. All mesons have integer spin and are therefore bosons.

BARYONS. This class comprises heavy elementary particles with half-integer spin (fermions) and mass not less than that of the proton. The only stable baryon is the proton; the neutron is stable only inside the nucleus. Baryons participate in all four types of interaction. In any nuclear reactions and interactions their total number remains unchanged.


Quantum mechanics is the mechanics of the microworld. The phenomena it studies are mostly beyond our sensory perception, so one should not be surprised at the seeming paradox of the laws governing these phenomena.

The basic laws of quantum mechanics cannot be formulated as a logical consequence of the results of a certain set of fundamental physical experiments. In other words, the formulation of quantum mechanics based on a system of axioms verified by experience is still unknown. Moreover, some of the fundamental principles of quantum mechanics do not, in principle, allow experimental verification. Our confidence in the validity of quantum mechanics is based on the fact that all physical results of the theory agree with experiment. Thus, only the consequences of the basic provisions of quantum mechanics, and not its basic laws, are tested experimentally. Apparently, these circumstances are connected with the main difficulties arising in the initial study of quantum mechanics.

Difficulties of the same nature, though obviously much greater, faced the creators of quantum mechanics. Experiments clearly indicated the existence of special quantum regularities in the microworld, but in no way suggested the form of the quantum theory. This explains the truly dramatic history of the creation of quantum mechanics and, in particular, the fact that its original formulations were purely prescriptive in nature: they contained rules allowing one to calculate experimentally measured quantities, while the physical interpretation of the theory appeared only after its mathematical formalism had been basically created.

In constructing quantum mechanics in this course, we will not follow the historical path. We will very briefly describe a number of physical phenomena, attempts to explain which on the basis of the laws of classical physics led to insurmountable difficulties. Next, we will try to find out what features of the scheme of classical mechanics described in the previous paragraphs should be preserved in the mechanics of the microworld and what can and should be abandoned. We will see that the rejection of only one statement of classical mechanics, namely the statement that the observables are functions on the phase space, will allow us to construct a scheme of mechanics that describes systems with behavior significantly different from the classical one. Finally, in the following sections we will see that the constructed theory is more general than classical mechanics and contains the latter as a limiting case.

Historically, the first quantum hypothesis was put forward by Planck in 1900 in connection with the theory of equilibrium radiation. Planck managed to obtain a formula for the spectral distribution of the energy of thermal radiation consistent with experiment by putting forward the assumption that electromagnetic radiation is emitted and absorbed in discrete portions, quanta, whose energy is proportional to the frequency of the radiation:

E = ℏω, (1)

where ω is the frequency of oscillations in the light wave and ℏ is Planck's constant.

Planck's hypothesis of light quanta allowed Einstein to give an extremely simple explanation of the regularities of the photoelectric effect (1905). The photoelectric effect consists in the fact that under the action of a light flux electrons are knocked out of a metal. The main task of the theory of the photoelectric effect is to find the dependence of the energy of the ejected electrons on the characteristics of the light flux. Let V be the work that must be expended to knock an electron out of the metal (the work function). Then the law of conservation of energy leads to the relation

T = ℏω − V,

where T is the kinetic energy of the ejected electron. We see that this energy depends linearly on the frequency and does not depend on the intensity of the light flux. Moreover, at frequencies ω < V/ℏ (the red boundary of the photoelectric effect) the photoelectric effect becomes impossible, since the kinetic energy T cannot be negative. These conclusions, based on the hypothesis of light quanta, are in complete agreement with experiment, while according to the classical theory the energy of the ejected electrons would have to depend on the intensity of the light waves, which contradicts the experimental results.

Einstein supplemented the concept of light quanta by introducing the momentum of a light quantum according to the formula

p = ℏk. (2)

Here k is the so-called wave vector, which has the direction of propagation of the light wave; its length k is related to the wavelength λ, the frequency ω, and the velocity of light c by the relations

k = 2π/λ = ω/c.

For light quanta the formula

E = cp

is valid, which is a special case of the formula of the theory of relativity

E = √(p²c² + m²c⁴)

for a particle with rest mass m = 0.

Note that historically the first quantum hypotheses concerned the laws of radiation and absorption of light waves, i.e., electrodynamics, and not mechanics. However, it soon became clear that discrete values of a number of physical quantities are characteristic not only of electromagnetic radiation but also of atomic systems. The experiments of Franck and Hertz (1913) showed that in collisions of electrons with atoms the energy of the electrons changes in discrete portions. These results can be explained by the fact that the energy of the atoms can take only certain discrete values. Later, in 1922, the experiments of Stern and Gerlach showed that the projection of the angular momentum of atomic systems onto a certain direction has a similar property. It is now well known that the discreteness of the values of a number of observables, though characteristic, is not an obligatory feature of systems of the microworld. For example, the energy of an electron in a hydrogen atom has discrete values, whereas the energy of a freely moving electron can take any positive value. The mathematical apparatus of quantum mechanics must be adapted to the description of observables taking both discrete and continuous values.

In 1911 Rutherford discovered the atomic nucleus and proposed the planetary model of the atom. (Rutherford's experiments on the scattering of alpha particles on samples of various elements showed that the atom has a positively charged nucleus whose charge is Ze, where Z is the number of the element in the periodic table and e is the charge of the electron; the dimensions of the nucleus do not exceed 10⁻¹² cm, while the atoms themselves have linear dimensions of the order of 10⁻⁸ cm.) The planetary model of the atom contradicts the basic principles of classical electrodynamics. Indeed, moving around the nucleus in classical orbits, the electrons, like any rapidly moving charges, must radiate electromagnetic waves. In doing so they must lose their energy and eventually fall into the nucleus. Therefore such an atom cannot be stable, which of course is not true. One of the main tasks of quantum mechanics is to explain the stability and describe the structure of atoms and molecules as systems consisting of positively charged nuclei and electrons.

From the point of view of classical mechanics, the phenomenon of diffraction of microparticles is utterly surprising. This phenomenon was predicted by de Broglie in 1924, who suggested that to a freely moving particle with momentum p and energy E there corresponds, in some sense, a wave with wave vector k and frequency ω, where

p = ℏk, E = ℏω;

i.e., relations (1) and (2) are valid not only for light quanta but also for particles. The physical interpretation of de Broglie waves was given later by Born, and we will not discuss it yet. If a moving particle corresponds to a wave, then, whatever exact meaning is put into these words, it is natural to expect that this will manifest itself in the existence of diffraction phenomena for particles. Electron diffraction was first observed in the experiments of Davisson and Germer in 1927. Subsequently, diffraction phenomena were observed for other particles as well.
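A small numerical sketch of de Broglie's relations: the wavelength λ = h/p = 2π/k of an electron accelerated through a given voltage (non-relativistic formula; the 54 V figure matches the regime of the Davisson-Germer experiment).

```python
import math

# Sketch: de Broglie wavelength lambda = h / p for an electron accelerated
# through a potential difference V (non-relativistic, illustrative).
H = 6.62618e-34        # Planck's constant, J*s
M_E = 9.11e-31         # electron mass, kg
E_CHARGE = 1.6e-19     # elementary charge, C

def de_broglie_wavelength_m(voltage_v: float) -> float:
    p = math.sqrt(2 * M_E * E_CHARGE * voltage_v)   # momentum from eV = p^2 / 2m
    return H / p

# At ~54 V the wavelength is ~1.7e-10 m, comparable to interatomic
# distances in a crystal, which is why crystals act as diffraction gratings:
print(de_broglie_wavelength_m(54.0))
```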

Let us show that diffraction phenomena are incompatible with classical ideas about the motion of particles along trajectories. The reasoning is most conveniently carried out on the example of a thought experiment on the diffraction of an electron beam by two slits, the scheme of which is shown in Fig. 1. Let the electrons from the source A move toward the screen B and, passing through the slits in it, fall on the screen C.

We are interested in the distribution of the electrons falling on the screen C along the y coordinate. The phenomena of diffraction by one and by two slits are well studied, and we can assert that the electron distribution has the form a shown in Fig. 2 if only the first slit is open, the form b if only the second is open, and the form c if both slits are open. If we assume that each electron moved along a certain classical trajectory, then all the electrons hitting the screen C can be divided into two groups according to which slit they passed through. For the electrons of the first group it is completely immaterial whether the second slit is open, and therefore their distribution on the screen should be represented by curve a; similarly, the electrons of the second group should have the distribution b. Therefore, in the case when both slits are open, the screen should show the distribution that is the sum of the distributions a and b. Such a sum of distributions has nothing in common with the interference pattern c. This contradiction shows that a division of the electrons into groups according to the slit they passed through is impossible under the conditions of the described experiment, and hence we are forced to abandon the concept of trajectory.

The question immediately arises whether it is possible to set up the experiment so as to find out which slit the electron passed through. Such a setup is of course possible; it suffices to place a source of light between the screens B and C and observe the scattering of light quanta by the electrons. To achieve sufficient resolution, we must use quanta with wavelength not exceeding, in order of magnitude, the distance between the slits, i.e., with sufficiently large energy and momentum. By observing the quanta scattered by the electrons we can in fact determine which slit an electron passed through. However, the interaction of the quanta with the electrons causes an uncontrollable change of their momenta, and consequently the distribution of the electrons hitting the screen must change. Thus we come to the conclusion that the question of which slit the electron passed through can be answered only at the cost of changing both the conditions and the final result of the experiment.

In this example we encounter the following general feature of the behavior of quantum systems: the experimenter has no opportunity to follow the course of the experiment, since this changes its final result. This feature of quantum behavior is closely related to the features of measurements in the microworld. Any measurement is possible only when the system interacts with the measuring device, and this interaction perturbs the motion of the system. In classical physics it is always assumed that this perturbation can be made arbitrarily small, as can the duration of the measurement process. Therefore the simultaneous measurement of any number of observables is always possible.

A detailed analysis of the process of measuring certain observables for microsystems, which can be found in many textbooks on quantum mechanics, shows that as the precision of measuring one observable increases, the impact on the system grows, and the measurement introduces uncontrollable changes into the numerical values of certain other observables. This makes the simultaneous precise measurement of certain observables fundamentally impossible. For example, if the scattering of light quanta is used to measure the coordinate of a particle, the error of such a measurement is of the order of the wavelength of the light. One can increase the accuracy of the measurement by choosing quanta of shorter wavelength and hence of larger momentum, but then an uncontrollable change of the order of the quantum's momentum is introduced into the numerical value of the particle's momentum. Therefore the errors of measuring the coordinate and the momentum are related by

Δq · Δp ∼ h.

More precise reasoning shows that this relation connects only a coordinate and the momentum projection along the same axis. The relations connecting the fundamentally attainable accuracy of simultaneous measurement of two observables are called the Heisenberg uncertainty relations. They will be obtained in exact formulation in the sections that follow. Observables on which the uncertainty relations impose no restrictions are simultaneously measurable. We shall see later that the Cartesian coordinates of a particle are simultaneously measurable, as are the projections of its momentum, whereas a coordinate and the momentum projection along the same axis, or two Cartesian projections of the angular momentum, are not simultaneously measurable. In constructing quantum mechanics we must keep in mind the possibility of the existence of quantities that are not simultaneously measurable.

Now, after this short physical introduction, we will try to answer the question already posed: what features of classical mechanics should be preserved, and what should naturally be abandoned, in constructing the mechanics of the microworld? The basic concepts of classical mechanics were the concepts of observable and state. The task of a physical theory is to predict the results of experiments, and an experiment is always the measurement of some characteristic of a system, i.e., of an observable, under conditions that determine the state of the system. Therefore the concepts of observable and state must appear in any physical theory. From the experimenter's point of view, to define an observable means to specify a method of measuring it. We will denote observables by the symbols a, b, c, …, and for the time being we make no assumptions about their mathematical nature (recall that in classical mechanics observables are functions on the phase space). The set of observables, as before, will be denoted by 𝔄.

It is reasonable to assume that the experimental conditions determine at least the probability distributions of the results of measurement of all observables, so it is reasonable to keep the definition of a state given in § 2. As before, we will denote states by ω, the probability measure on the real axis corresponding to an observable a in a state ω by ω_a(E), the distribution function of a in the state ω by ω_a(λ), and, finally, the mean value of a in the state ω by ⟨a | ω⟩.

The theory must contain a definition of a function of an observable. For the experimenter, the assertion that the observable b is a function of the observable a, b = f(a), means that to measure b it suffices to measure a: if the measurement of a yields the number λ₀, then the numerical value of b is f(λ₀). For the probability measures corresponding to a and f(a) we have the equality

ω_{f(a)}(E) = ω_a(f⁻¹(E))

for any state ω.

Note that all possible functions of one observable a are simultaneously measurable, since to measure them it suffices to measure a. We will see later that in quantum mechanics this example exhausts the cases of simultaneous measurability of observables; i.e., if observables b₁, b₂, …, bₙ are simultaneously measurable, then there exist an observable a and functions f₁, f₂, …, fₙ such that b₁ = f₁(a), …, bₙ = fₙ(a).

Among the set of functions of an observable a there are obviously defined λa and const, where λ is a real number. The existence of the first of these functions shows that observables can be multiplied by real numbers. The assertion that an observable is a constant means that its numerical value in any state coincides with this constant.

Let us now try to find out what meaning can be attached to the sum and product of observables. These operations would be defined if we had a definition of a function f(a, b) of two observables. Here, however, fundamental difficulties arise, connected with the possibility of the existence of observables that are not simultaneously measurable. If a and b are simultaneously measurable, then the definition of f(a, b) is completely analogous to the definition of f(a): to measure the observable f(a, b) it suffices to measure a and b, and such a measurement leads to the numerical value f(λ₀, μ₀), where λ₀ and μ₀ are the numerical values of a and b, respectively. For observables a and b that are not simultaneously measurable there is no reasonable definition of the function f(a, b). This circumstance forces us to reject the assumption that observables are functions on the phase space, since we have physical grounds for considering q and p not simultaneously measurable, and we shall have to look for observables among mathematical objects of a different nature.

We see that the sum and the product can be defined using the concept of a function of two observables only if these are simultaneously measurable. However, another approach is possible that allows one to introduce the sum in the general case. We know that all information about states and observables is obtained from measurements; it is therefore reasonable to assume that there are enough states to distinguish observables and, similarly, enough observables to distinguish states.

More precisely, we assume that from the equality

⟨a | ω⟩ = ⟨b | ω⟩,

valid for any state ω, it follows that the observables a and b coincide, and from the equality

⟨a | ω₁⟩ = ⟨a | ω₂⟩,

valid for any observable a, it follows that the states ω₁ and ω₂ coincide.

The first of the assumptions made makes it possible to define the sum of observables a + b as the observable for which the equality

⟨a + b | ω⟩ = ⟨a | ω⟩ + ⟨b | ω⟩ (5)

holds in any state ω. We note right away that this equality is an expression of the well-known theorem of probability theory about the mean value of a sum only in the case when the observables a and b have a common distribution function. Such a common distribution function can exist (and in quantum mechanics indeed exists) only for simultaneously measurable quantities. In that case the definition of the sum by formula (5) coincides with the one given earlier. A similar definition of the product is impossible, since the mean of a product is not equal to the product of the means even for simultaneously measurable observables.

The definition (5) of the sum contains no indication of a method for measuring the observable a + b given known methods for measuring the observables a and b, and in this sense it is implicit.

To give an idea of how the concept of the sum of observables can differ from the usual concept of the sum of random variables, we will give an example of an observable that will be studied in detail later. Let

H = p²/2m + (mω²/2) q²

be the observable H (the energy of a one-dimensional harmonic oscillator): the sum of two observables proportional to the squares of the momentum and the coordinate. We will see that these latter observables can take any non-negative numerical values, while the values of the observable H must coincide with the numbers Eₙ = ℏω(n + 1/2), n = 0, 1, 2, …; i.e., the observable H, with discrete numerical values, is the sum of observables with continuous values.
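This claim about the oscillator spectrum can be checked numerically. The sketch below (in units m = ω = ℏ = 1, with an ad hoc finite-difference discretization that is not part of the text's argument) diagonalizes the oscillator Hamiltonian and recovers the discrete values ℏω(n + 1/2).

```python
import numpy as np

# Sketch: H = p^2/2 + q^2/2 is a sum of observables with continuous values,
# yet its own values are discrete. Check by diagonalizing a finite-difference
# Hamiltonian on a grid (units m = omega = hbar = 1).
N = 2000
q = np.linspace(-10, 10, N)
dq = q[1] - q[0]

# Kinetic term -1/2 d^2/dq^2 via the standard three-point stencil
main = np.full(N, 1.0 / dq**2)
off = np.full(N - 1, -0.5 / dq**2)
kinetic = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
potential = np.diag(0.5 * q**2)

energies = np.linalg.eigvalsh(kinetic + potential)
print(energies[:4])   # ~[0.5, 1.5, 2.5, 3.5] = hbar*omega*(n + 1/2)
```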

In fact, all our assumptions come down to this: in constructing quantum mechanics it is reasonable to preserve the structure of the algebra of observables of classical mechanics, but we must abandon the realization of this algebra by functions on the phase space, since we admit the existence of observables that are not simultaneously measurable.

Our immediate task is to verify that there exists a realization of the algebra of observables different from the realization of classical mechanics. In the next section we give an example of such a realization by constructing a finite-dimensional model of quantum mechanics. In this model the algebra of observables is the algebra of self-adjoint operators in an n-dimensional complex space. Studying this simplified model will let us trace the main features of quantum theory. At the same time, after giving a physical interpretation of the constructed model, we will see that it is too poor to correspond to reality, so the finite-dimensional model cannot be regarded as the final version of quantum mechanics. However, improving this model by replacing the n-dimensional space with a complex Hilbert space will seem quite natural.

Quantum mechanics

Quantum mechanics is a branch of theoretical physics describing physical phenomena in which the action is comparable in magnitude to Planck's constant. The predictions of quantum mechanics can differ significantly from those of classical mechanics. Since Planck's constant is extremely small compared with the action of objects in macroscopic motion, quantum effects mostly appear on microscopic scales. If the physical action of a system is much greater than Planck's constant, quantum mechanics passes over into classical mechanics. In turn, quantum mechanics is the non-relativistic approximation (that is, the approximation of energies small compared with the rest energy of the massive particles of the system) of quantum field theory.

Classical mechanics, which describes macroscopic systems well, is not able to describe all phenomena at the level of molecules, atoms, electrons, and photons. Quantum mechanics adequately describes the basic properties and behavior of atoms, ions, molecules, condensed matter, and other systems with an electron-nuclear structure. Quantum mechanics can also describe the behavior of electrons, photons, and other elementary particles, although a more accurate, relativistically invariant description of the transformations of elementary particles is built within the framework of quantum field theory. Experiments confirm the results obtained with the help of quantum mechanics.

The basic concepts of quantum kinematics are the concepts of observable and state.

The basic equations of quantum dynamics are the Schrödinger equation, the von Neumann equation, the Lindblad equation, the Heisenberg equation, and the Pauli equation.

The equations of quantum mechanics are closely related to many branches of mathematics, among which are: operator theory, probability theory, functional analysis, operator algebras, group theory.

History

At a meeting of the German Physical Society, Max Planck read his historic paper "On the Theory of the Law of Energy Distribution in the Normal Spectrum," in which he introduced the universal constant h. It is the date of this event, December 14, 1900, that is often considered the birthday of quantum theory.

To explain the structure of the atom, Niels Bohr proposed in 1913 the existence of stationary states of the electron, in which the energy can only take on discrete values. This approach, developed by Arnold Sommerfeld and other physicists, is often referred to as the old quantum theory (1900-1924). A distinctive feature of the old quantum theory is the combination of classical theory with additional assumptions that contradict it.

Mathematical foundations

The mathematical apparatus of quantum mechanics rests on the following postulates:

  • The pure states of a system are described by non-zero vectors of a complex separable Hilbert space H, where the vectors |ψ₁⟩ and |ψ₂⟩ describe the same state if and only if |ψ₂⟩ = c|ψ₁⟩, where c is an arbitrary complex number.
  • Each observable can be uniquely associated with a linear self-adjoint operator. When the observable Â is measured in a pure state |ψ⟩ of the system, the value obtained on average is

⟨A⟩ = ⟨ψ|Âψ⟩ / ⟨ψ|ψ⟩ = ⟨ψÂ|ψ⟩ / ⟨ψ|ψ⟩,

where ⟨ψ|φ⟩ denotes the scalar product of the vectors |ψ⟩ and |φ⟩.

  • The evolution of a pure state of a Hamiltonian system is determined by the Schrödinger equation

iℏ ∂/∂t |ψ⟩ = Ĥ|ψ⟩,

where Ĥ is the Hamiltonian.

The main consequences of these provisions are:

  • When any quantum observable is measured, only a series of fixed values can be obtained: the eigenvalues of the operator corresponding to the observable.
  • Observables are simultaneously measurable (do not affect each other's measurement results) if and only if the corresponding self-adjoint operators commute.

These provisions make it possible to create a mathematical apparatus suitable for describing a wide range of problems in the quantum mechanics of Hamiltonian systems in pure states. Not all states of quantum mechanical systems, however, are pure. In the general case the state of a system is mixed and is described by a density matrix, for which a generalization of the Schrödinger equation, the von Neumann equation, holds (for Hamiltonian systems). Further generalization of quantum mechanics to the dynamics of open, non-Hamiltonian, and dissipative quantum systems leads to the Lindblad equation.
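A minimal sketch of the first two postulates and the commutativity criterion, on the simplest nontrivial example of 2x2 spin operators (ℏ = 1; the state and operators are illustrative choices, not prescribed by the text):

```python
import numpy as np

# Sketch: mean value <A> = <psi|A|psi> for a normalized state, and the
# commutator test for simultaneous measurability (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2   # spin projection s_x
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2  # spin projection s_z

psi = np.array([1.0, 1.0], dtype=complex)
psi /= np.linalg.norm(psi)            # normalize: <psi|psi> = 1

mean_sx = np.vdot(psi, sx @ psi).real
print(mean_sx)                        # 0.5: psi is an eigenstate of s_x

# s_x and s_z do not commute, hence they are not simultaneously measurable:
commutator = sx @ sz - sz @ sx
print(np.allclose(commutator, 0))     # False
```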

Stationary Schrödinger equation

Let ψ(r⃗) be the amplitude of the probability of finding the particle at the point M. The stationary Schrödinger equation allows us to determine it. The function ψ(r⃗) satisfies the equation

−(ℏ²/2m) ∇²ψ + U(r⃗)ψ = Eψ,

where ∇² is the Laplace operator and U = U(r⃗) is the potential energy of the particle as a function of position.

The solution of this equation is the main problem of quantum mechanics. It is noteworthy that the exact solution of the stationary Schrödinger equation can be obtained only for a few relatively simple systems. Among such systems one can single out the quantum harmonic oscillator and the hydrogen atom. For most real systems, various approximate methods such as perturbation theory can be used to obtain solutions.

Solution of the stationary equation

Let E and U be two constants independent of r⃗. Writing the stationary equation as

∇²ψ(r⃗) + (2m/ℏ²)(E − U)ψ(r⃗) = 0,

we obtain:

  • if E − U > 0, then

ψ(r⃗) = A e^(−i k⃗·r⃗) + B e^(i k⃗·r⃗),

where k = √(2m(E − U))/ℏ is the modulus of the wave vector, and A and B are two constants determined by the boundary conditions;

  • if E − U < 0, then

ψ(r⃗) = C e^(−k⃗·r⃗) + D e^(k⃗·r⃗),

where k = √(2m(U − E))/ℏ is the modulus of the wave vector, and C and D are two constants, also determined by the boundary conditions.
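A small sketch evaluating the two regimes for an electron (SI constants rounded; the energy values are arbitrary examples): for E > U the solution oscillates with wave number k, for E < U it decays with constant κ, and 1/κ sets the depth of penetration into the classically forbidden region.

```python
import math

# Sketch: the two regimes of the stationary equation at constant E and U.
HBAR = 1.054e-34   # J*s
M_E = 9.11e-31     # electron mass, kg
EV = 1.6e-19       # J per eV

def wave_number(e_ev: float, u_ev: float) -> float:
    """k (1/m) for E > U, or the decay constant kappa for E < U."""
    return math.sqrt(2 * M_E * abs(e_ev - u_ev) * EV) / HBAR

k = wave_number(5.0, 1.0)       # allowed region: oscillating exp(+-i k r)
kappa = wave_number(1.0, 5.0)   # forbidden region: damped exp(-kappa r)
print(k, 1 / kappa)             # 1/kappa ~ 1e-10 m: penetration depth
```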

Heisenberg uncertainty principle

The uncertainty relation arises between any quantum observables defined by non-commuting operators.

Uncertainty between position and momentum

Let Δx be the standard deviation of the coordinate of a particle M moving along the x axis, and Δp the standard deviation of its momentum. The quantities Δx and Δp are related by the following inequality:

Δx · Δp ⩾ ℏ/2,

where h is Planck's constant and ℏ = h/2π.

According to the uncertainty relation, it is impossible to absolutely accurately determine both the coordinates and momentum of a particle. With an increase in the accuracy of measuring the coordinate, the maximum accuracy of measuring the momentum decreases and vice versa. Those parameters for which such a statement is true are called canonically conjugate.

This emphasis on measurement, going back to N. Bohr, is very popular. However, the uncertainty relation is derived theoretically from the postulates of Schrödinger and Born and concerns not measurement but the states of the object: it states that for any possible state the corresponding uncertainty relations hold. Naturally, they hold for measurements as well. That is, instead of "with increasing accuracy of measurement of the coordinate the maximum accuracy of measurement of the momentum decreases," one should say: "in states where the uncertainty of the coordinate is smaller, the uncertainty of the momentum is larger."

Uncertainty between energy and time

Let ΔE be the root-mean-square deviation of the energy of a certain state of a quantum system, and Δt the lifetime of this state. Then the following inequality holds:

ΔE Δt ⩾ ℏ/2.

In other words, a state that lives for a short time cannot have a well-defined energy.

At the same time, although the form of these two uncertainty relations is similar, their nature (physics) is completely different.

PLAN

INTRODUCTION

1. HISTORY OF THE CREATION OF QUANTUM MECHANICS

2. THE PLACE OF QUANTUM MECHANICS AMONG OTHER SCIENCES OF MOTION

CONCLUSION

LITERATURE

Introduction

Quantum mechanics is a theory that establishes the method of description and the laws of motion of microparticles (elementary particles, atoms, molecules, atomic nuclei) and of their systems (for example, crystals), as well as the relationship between the quantities characterizing particles and systems and the physical quantities directly measured in macroscopic experiments. The laws of quantum mechanics form the foundation for studying the structure of matter. They made it possible to elucidate the structure of atoms, establish the nature of the chemical bond, explain the periodic system of the elements, understand the structure of atomic nuclei, and study the properties of elementary particles.

Since the properties of macroscopic bodies are determined by the motion and interaction of the particles of which they are composed, the laws of quantum mechanics underlie the understanding of most macroscopic phenomena. Quantum mechanics made it possible, for example, to explain the temperature dependence of the heat capacities of gases and solids and to calculate them, and to determine the structure and understand many properties of solids (metals, dielectrics, and semiconductors). Only on the basis of quantum mechanics was it possible to consistently explain such phenomena as ferromagnetism, superfluidity, and superconductivity, to understand the nature of such astrophysical objects as white dwarfs and neutron stars, and to elucidate the mechanism of thermonuclear reactions in the Sun and stars. There are also phenomena (for example, the Josephson effect) in which the laws of quantum mechanics are directly manifested in the behavior of macroscopic objects.

Thus, quantum mechanical laws underlie the operation of nuclear reactors, determine the possibility of carrying out thermonuclear reactions under terrestrial conditions, manifest themselves in a number of phenomena in metals and semiconductors used in the latest technology, and so on. The foundation of such a rapidly developing field of physics as quantum electronics is the quantum mechanical theory of radiation. The laws of quantum mechanics are used in the purposeful search for and creation of new materials (especially magnetic, semiconductor, and superconducting materials). Quantum mechanics is becoming largely an "engineering" science, the knowledge of which is necessary not only for research physicists, but also for engineers.

1. The history of the creation of quantum mechanics

At the beginning of the 20th century two (seemingly unrelated) groups of phenomena were discovered, indicating the inapplicability of the usual classical theory of the electromagnetic field (classical electrodynamics) to the processes of interaction of light with matter and to the processes occurring in the atom. The first group of phenomena was associated with the establishment by experience of the dual nature of light (dualism of light); the second - with the impossibility of explaining on the basis of classical concepts the stable existence of the atom, as well as the spectral patterns discovered in the study of the emission of light by atoms. The establishment of a connection between these groups of phenomena and attempts to explain them on the basis of a new theory ultimately led to the discovery of the laws of quantum mechanics.

For the first time, quantum representations (including the quantum constant h) were introduced into physics in the work of M. Planck (1900), devoted to the theory of thermal radiation.

The theory of thermal radiation existing by that time, built on the basis of classical electrodynamics and statistical physics, led to a meaningless result: thermal (thermodynamic) equilibrium between radiation and matter cannot be achieved, because all energy must sooner or later turn into radiation. Planck resolved this contradiction and obtained results in perfect agreement with experiment on the basis of an extremely bold hypothesis. In contrast to the classical theory of radiation, which considers the emission of electromagnetic waves as a continuous process, Planck suggested that light is emitted in definite portions of energy, quanta. The value of such an energy quantum depends on the light frequency ν and is equal to E = hν. From this work of Planck two interrelated lines of development can be traced, culminating in the final formulation of quantum mechanics in its two forms (1927).

The first one begins with the work of Einstein (1905), in which the theory of the photoelectric effect was given - the phenomenon of pulling electrons out of matter by light.

In developing Planck's idea, Einstein suggested that light is not only emitted and absorbed in discrete portions, radiation quanta, but also propagates in such quanta, i.e., that discreteness is inherent in light itself: light consists of separate portions, light quanta (later called photons). The photon energy E is related to the oscillation frequency ν of the wave by the Planck relation E = hν.

Further proof of the corpuscular nature of light was obtained in 1922 by A. Compton, who showed experimentally that the scattering of light by free electrons obeys the laws of elastic collision of two particles: a photon and an electron. The kinematics of such a collision is determined by the laws of conservation of energy and momentum, and along with the energy E = hν the photon must be assigned a momentum p = h/λ = hν/c, where λ is the wavelength of the light.

The energy and momentum of a photon are related by E = cp, the relation valid in relativistic mechanics for a particle of zero rest mass. Thus it was proved experimentally that, along with its known wave properties (manifested, for example, in the diffraction of light), light also has corpuscular properties: it consists, as it were, of particles, photons. This manifests the dualism of light, its complex corpuscular-wave nature.
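
A quick numeric illustration (the wavelength of 500 nm is an assumed example value): the relations E = hν, p = h/λ and E = cp can be checked directly.

```python
# Sketch: energy and momentum of a photon with an assumed wavelength of 500 nm.
h = 6.62607015e-34    # Planck's constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # J

lam = 500e-9          # wavelength, m (assumed)
nu = c / lam          # frequency
E = h * nu            # E = h * nu
p = h / lam           # p = h / lambda = h * nu / c

print(f"nu = {nu:.3e} Hz, E = {E:.3e} J ({E/eV:.2f} eV)")
print(f"p = {p:.3e} kg*m/s, E - c*p = {E - c*p:.1e} J  (E = cp holds)")
```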

Dualism is already contained in the formula E = hν, which does not allow choosing either one of the two concepts: on the left-hand side of the equality the energy E refers to a particle, while on the right the frequency ν is a characteristic of a wave. A formal logical contradiction arose: to explain some phenomena it was necessary to assume that light has a wave nature, and to explain others, a corpuscular one. In essence, the resolution of this contradiction led to the creation of the physical foundations of quantum mechanics.

In 1924, L. de Broglie, trying to find an explanation for the conditions of quantization of atomic orbits postulated in 1913 by N. Bohr, put forward a hypothesis about the universality of wave-particle duality. According to de Broglie, with each particle, regardless of its nature, there should be associated a wave whose length λ is related to the particle's momentum p by λ = h/p. According to this hypothesis, not only photons but also all "ordinary particles" (electrons, protons, etc.) have wave properties, which, in particular, should manifest themselves in diffraction.

In 1927, C. Davisson and L. Germer first observed electron diffraction. Later, wave properties were discovered in other particles, and the validity of the de Broglie formula was confirmed experimentally.
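
A numeric sketch of the scale involved (the accelerating voltage of 100 V is an assumed value, and the estimate is non-relativistic): the de Broglie wavelength of such an electron is comparable to interatomic distances in a crystal, which is what made diffraction observable.

```python
# Sketch: de Broglie wavelength of an electron accelerated through U volts
# (non-relativistic; U = 100 V is an assumed example value).
import math

h = 6.62607015e-34      # J*s
m_e = 9.1093837015e-31  # electron mass, kg
e = 1.602176634e-19     # elementary charge, C

U = 100.0                              # accelerating voltage, V
p = math.sqrt(2 * m_e * e * U)         # momentum from e*U = p^2 / (2m)
lam = h / p                            # de Broglie formula: lambda = h / p
print(f"lambda = {lam * 1e9:.3f} nm")  # ~0.12 nm, the scale of a crystal lattice
```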

In 1926, E. Schrödinger proposed an equation describing the behavior of such "waves" in external force fields. This is how wave mechanics was born. The Schrödinger wave equation is the basic equation of nonrelativistic quantum mechanics.

In 1928, P. Dirac formulated a relativistic equation describing the motion of an electron in an external force field; the Dirac equation became one of the fundamental equations of relativistic quantum mechanics.

The second line of development begins with Einstein's work (1907) on the theory of the heat capacity of solids (it is also a generalization of Planck's hypothesis). Electromagnetic radiation, being a set of electromagnetic waves of different frequencies, is dynamically equivalent to a certain set of oscillators (oscillatory systems). The emission or absorption of waves is equivalent to the excitation or damping of the corresponding oscillators. The fact that the emission and absorption of electromagnetic radiation by matter occur in energy quanta hν means that the energy of these oscillators is quantized. Einstein generalized this idea of quantizing the energy of an electromagnetic-field oscillator to an oscillator of arbitrary nature. Since the thermal motion of solids reduces to vibrations of atoms, a solid body is dynamically equivalent to a set of oscillators. The energy of such oscillators is also quantized, i.e., the difference between neighboring energy levels (the energies that an oscillator can have) must equal hν, where ν is the frequency of vibration of the atoms.

Einstein's theory, refined by P. Debye, M. Born, and T. Karman, played an outstanding role in the development of the theory of solids.

In 1913, N. Bohr applied the idea of energy quantization to the theory of the structure of the atom, whose planetary model followed from the results of E. Rutherford's experiments (1911). According to this model, at the center of the atom there is a positively charged nucleus, in which almost the entire mass of the atom is concentrated; negatively charged electrons revolve around the nucleus.

Consideration of such motion on the basis of classical concepts led to a paradoxical result, the impossibility of the stable existence of atoms: according to classical electrodynamics, an electron cannot move stably in an orbit, since a revolving electric charge must radiate electromagnetic waves and therefore lose energy. The radius of its orbit should decrease, and in a time of about 10⁻⁸ s the electron should fall onto the nucleus. This meant that the laws of classical physics are not applicable to the motion of electrons in an atom, since atoms exist and are extremely stable.

To explain the stability of atoms, Bohr suggested that of all the orbits allowed by Newtonian mechanics for the motion of an electron in the electric field of an atomic nucleus, only those that satisfy certain quantization conditions are actually realized. That is, discrete energy levels exist in the atom (as in an oscillator).

These levels obey a certain pattern, deduced by Bohr based on a combination of the laws of Newtonian mechanics with quantization conditions requiring that the magnitude of the action for the classical orbit be an integer multiple of Planck's constant.

Bohr postulated that, being at a certain energy level (i.e., performing the orbital motion allowed by the conditions of quantization), the electron does not emit light waves.

Radiation occurs only when an electron passes from one orbit to another, i.e., from one energy level E_i to another with lower energy E_k; in this case a light quantum is born with an energy equal to the difference of the energies of the levels between which the transition occurs:

hν = E_i − E_k.   (1)

This is how a line spectrum arises, the main feature of atomic spectra. Bohr obtained the correct formula for the frequencies of the spectral lines of the hydrogen atom (and of hydrogen-like atoms), covering a set of previously discovered empirical formulas.
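
As an illustrative check (assuming the standard Bohr levels of hydrogen, E_n = −13.6 eV / n²), formula (1) immediately yields the frequencies and wavelengths of the Balmer series.

```python
# Sketch: Balmer lines of hydrogen from h * nu = E_i - E_k  (formula (1)),
# assuming the standard Bohr levels E_n = -13.6 eV / n^2.
h = 6.62607015e-34    # J*s
c = 2.99792458e8      # m/s
eV = 1.602176634e-19  # J

def E(n):
    return -13.6 * eV / n**2

for i in (3, 4, 5):                # transitions i -> 2, the Balmer series
    nu = (E(i) - E(2)) / h         # formula (1)
    print(f"{i} -> 2: nu = {nu:.3e} Hz, lambda = {c / nu * 1e9:.1f} nm")
# Expected: ~656, 486, 434 nm (the H-alpha, H-beta, H-gamma lines)
```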

The existence of energy levels in atoms was directly confirmed by the Franck-Hertz experiments (1913-14). It was found that electrons bombarding a gas lose only definite portions of energy in collisions with atoms, equal to the differences between the energy levels of the atom.

N. Bohr, using the quantum constant h, which reflects the dualism of light, showed that this quantity also governs the motion of electrons in the atom (and that the laws of this motion differ significantly from those of classical mechanics). This fact was later explained on the basis of the universality of the wave-particle duality contained in the de Broglie hypothesis. The success of Bohr's theory, like the previous successes of quantum theory, was achieved by violating the logical integrity of the theory: on the one hand, Newtonian mechanics was used; on the other, artificial quantization rules alien to it were invoked, which, moreover, contradicted classical electrodynamics. In addition, Bohr's theory was unable to explain the motion of electrons in complex atoms or the formation of molecular bonds.

Bohr's "semi-classical" theory could also not answer the question of how an electron moves during the transition from one energy level to another.

Further intense development of questions of the theory of the atom led to the conviction that, while maintaining the classical picture of the motion of an electron in orbit, it is impossible to construct a logically coherent theory.

The realization that the motion of electrons in the atom cannot be described in the terms (concepts) of classical mechanics (as motion along a definite trajectory) led to the idea that the question of the motion of an electron between levels is incompatible with the nature of the laws governing the behavior of electrons in an atom, and that a new theory was needed, one involving only quantities related to the initial and final stationary states of the atom.

In 1925, W. Heisenberg succeeded in constructing such a formal scheme in which, instead of the coordinates and velocities of an electron, some abstract algebraic quantities - matrices - appeared; the relationship of matrices with observable quantities (energy levels and intensities of quantum transitions) was given by simple consistent rules. Heisenberg's work was developed by M. Born and P. Jordan. This is how matrix mechanics arose. Shortly after the appearance of the Schrödinger equation, the mathematical equivalence of wave (based on the Schrödinger equation) and matrix mechanics was shown. In 1926 M. Born gave a probabilistic interpretation of de Broglie waves (see below).

An important role in the creation of quantum mechanics was played by Dirac's works dating back to the same time. The final formation of quantum mechanics as a consistent physical theory with clear foundations and a coherent mathematical apparatus occurred after the work of Heisenberg (1927), in which the uncertainty relation was formulated - the most important relation that illuminates the physical meaning of the equations of quantum mechanics, its connection with classical mechanics, and other questions of principle as well as qualitative results of quantum mechanics. This work was continued and summarized in the writings of Bohr and Heisenberg.

A detailed analysis of atomic spectra led to the notion (introduced for the first time by G. Uhlenbeck and S. Goudsmit and developed by W. Pauli) that the electron, in addition to charge and mass, must be assigned one more internal characteristic (quantum number): spin.

An important role was played by the so-called exclusion principle discovered by W. Pauli (1925), which is of fundamental importance in the theory of the atom, molecule, nucleus, and solid state.

Within a short time, quantum mechanics was successfully applied to a wide range of phenomena. Theories of atomic spectra, the structure of molecules, chemical bonding, the periodic system of D. I. Mendeleev, metallic conductivity and ferromagnetism were created. These and many other phenomena have become (at least qualitatively) understandable.

The formation of quantum mechanics as a consistent theory with specific physical foundations is largely associated with the work of W. Heisenberg, in which he formulated the uncertainty relation (principle). This fundamental position of quantum mechanics reveals the physical meaning of its equations and also determines its connection with classical mechanics.

The uncertainty principle postulates that an object of the microworld cannot be in states in which the coordinates of its center of inertia and its momentum simultaneously take on quite definite, exact values.

Quantitatively, this principle is formulated as follows. If Δx is the uncertainty of the coordinate x and Δp is the uncertainty of the momentum, then the product of these uncertainties cannot be less than Planck's constant in order of magnitude:

Δx Δp ≳ h.

It follows from the uncertainty principle that the more precisely one of the quantities included in the inequality is determined, the less accurately the value of the other is determined. No experiment can simultaneously accurately measure these dynamic variables, and this is not due to the influence of measuring instruments or their imperfections. The uncertainty relation reflects the objective properties of the microworld, stemming from its corpuscular-wave dualism.

The fact that the same object manifests itself both as a particle and as a wave destroys traditional ideas and deprives the description of processes of its usual intuitive clarity. The concept of a particle implies an object confined to a small region of space, while a wave propagates through extended regions. It is impossible to imagine an object possessing both of these qualities at the same time, and one should not try. It is impossible to build a model, illustrative for human thinking, that would be adequate to the microworld. The equations of quantum mechanics, however, do not set such a goal. Their meaning lies in a mathematically adequate description of the properties of microworld objects and of the processes occurring with them.

If we talk about the connection between quantum mechanics and classical mechanics, then the uncertainty relation is a quantum limitation of the applicability of classical mechanics to objects of the microworld. Strictly speaking, the uncertainty relation applies to any physical system, however, since the wave nature of macroobjects practically does not manifest itself, the coordinates and momentum of such objects can be simultaneously measured with a sufficiently high accuracy. This means that it is quite sufficient to use the laws of classical mechanics to describe their motion. Recall that the situation is similar in relativistic mechanics (special relativity): at velocities much lower than the speed of light, the relativistic corrections become insignificant and the Lorentz transformations turn into Galilean transformations.
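
A back-of-the-envelope comparison makes this quantitative (the masses and position uncertainties below are assumed illustrative values): the minimum velocity uncertainty Δv ~ h/(mΔx) is enormous for an electron in an atom and utterly negligible for a macroscopic grain.

```python
# Sketch: minimum velocity uncertainty dv ~ h / (m * dx) for a micro- and a
# macro-object (masses and dx values are assumed for illustration).
h = 6.62607015e-34                             # J*s

cases = {
    "electron in an atom": (9.11e-31, 1e-10),  # mass kg, dx ~ atom size
    "1-microgram grain":   (1e-9,     1e-6),   # mass kg, dx ~ 1 micrometer
}
for name, (m, dx) in cases.items():
    dv = h / (m * dx)                          # order of magnitude from dx*dp ~ h
    print(f"{name}: dv ~ {dv:.1e} m/s")
# electron: ~7e6 m/s (comparable to orbital speeds); grain: ~7e-19 m/s (negligible)
```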

So, the uncertainty relation for coordinate and momentum reflects the corpuscular-wave dualism of the microworld and is not related to the influence of measuring devices. A somewhat different meaning attaches to the similar uncertainty relation for energy E and time t:

ΔE Δt ≳ h.

It follows that the energy of a system can be measured only with an accuracy not exceeding h/Δt, where Δt is the duration of the measurement. The reason for this uncertainty lies in the very process of interaction of the system (micro-object) with the measuring device. For a stationary situation, the above inequality means that the energy of interaction between the measuring device and the system can be taken into account only to within h/Δt. In the limiting case of an instantaneous measurement, the energy exchange that takes place turns out to be completely indeterminate.

If ΔE is understood as the uncertainty of the energy of a non-stationary state, then Δt is the characteristic time over which the values of physical quantities in the system change significantly. From this, in particular, follows an important conclusion regarding the excited states of atoms and other microsystems: the energy of an excited level cannot be strictly defined, which indicates the presence of a natural width of this level.
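
A short numeric sketch of the natural width (the lifetime of 10⁻⁸ s is a typical assumed value): ΔE ~ ℏ/Δt gives the width of the level and the corresponding linewidth of the emitted radiation.

```python
# Sketch: natural width of an excited level from dE ~ hbar / dt
# (the lifetime tau = 1e-8 s is an assumed typical value).
import math

hbar = 1.054571817e-34  # J*s
eV = 1.602176634e-19    # J

tau = 1e-8                       # lifetime of the excited state, s
dE = hbar / tau                  # natural width of the level
dnu = dE / (2 * math.pi * hbar)  # linewidth, equals 1 / (2*pi*tau)
print(f"dE ~ {dE:.2e} J = {dE / eV:.1e} eV, linewidth ~ {dnu:.1e} Hz")
```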

The objective properties of quantum systems are reflected in another fundamental position of quantum mechanics, Bohr's complementarity principle, according to which obtaining information about some physical quantities describing a micro-object by any experimental means is inevitably associated with the loss of information about certain other quantities complementary to the first.

Mutually complementary are, in particular, the coordinate of a particle and its momentum (see the uncertainty principle above), its kinetic and potential energy, the electric field strength and the number of photons.

The considered fundamental principles of quantum mechanics indicate that, owing to the corpuscular-wave dualism of the microworld it studies, the determinism of classical physics is alien to it. The complete departure from visual modelling of processes lends particular interest to the question of the physical nature of de Broglie waves. In answering it, it is customary to "start" from the behavior of photons. It is known that when a light beam is passed through a semi-transparent plate S, part of the light passes through it and part is reflected (Fig. 4).

Fig. 4

What then happens to individual photons? Experiments with light beams of very low intensity using modern technology (A is a photon detector), which make it possible to follow the behavior of each photon (the so-called photon-counting mode), show that no splitting of an individual photon occurs (otherwise the light would change its frequency). It is reliably established that some photons pass through the plate and some are reflected from it. This means that identical particles under identical conditions may behave differently, i.e., the behavior of an individual photon when it encounters the surface of the plate cannot be predicted unambiguously.

Reflection of a photon from the plate or passage through it are random events, and the quantitative patterns of such events are described with the help of probability theory. A photon can pass through the plate with probability w₁ and be reflected from it with probability w₂. The probability that one of these two alternative events happens to the photon is the sum of the probabilities: w₁ + w₂ = 1.
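
A minimal Monte Carlo sketch of this statement (the transmission probability w₁ = 0.6 is an assumed value): each photon independently passes or is reflected, the individual outcome is unpredictable, and only the relative frequencies converge to w₁ and w₂ = 1 − w₁.

```python
# Sketch: Monte Carlo model of single photons meeting a semi-transparent plate.
# The transmission probability w1 = 0.6 is an assumed illustrative value.
import random

w1 = 0.6                    # probability that a photon passes through the plate
N = 100_000                 # number of photons sent one by one
passed = sum(random.random() < w1 for _ in range(N))

print(f"transmitted fraction: {passed / N:.3f}  (w1 = {w1})")
print(f"reflected fraction:   {(N - passed) / N:.3f}  (w2 = {1 - w1})")
```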

Similar experiments with a beam of electrons or other microparticles also show the probabilistic nature of the behavior of individual particles. Thus, the problem of quantum mechanics can be formulated as the prediction of the probabilities of processes in the microworld, in contrast to the problem of classical mechanics, which is to predict events in the macroworld with certainty.

It is known, however, that a probabilistic description is also used in classical statistical physics. So what is the fundamental difference? To answer this question, let us complicate the experiment on the reflection of light. Using a mirror S₂, we turn the reflected beam and place the detector A in the region where it overlaps the transmitted beam, i.e., we provide the conditions for an interference experiment (Fig. 5).

Fig. 5

As a result of interference, the light intensity in the region where the beams overlap will, depending on the position of the mirror and the detector, vary periodically over the beam cross-section within wide limits (down to zero). How do individual photons behave in this experiment? It turns out that in this case the two optical paths to the detector are no longer alternative (mutually exclusive), so it is impossible to say which path the photon took from the source to the detector. We have to admit that it could reach the detector by two paths simultaneously, producing the interference pattern as a result. Experiments with other microparticles give a similar result: particles passing through one after another create the same pattern as a photon flux.

This is already a cardinal departure from classical ideas: it is impossible to imagine a particle moving along two different paths simultaneously. Quantum mechanics, however, does not pose such a problem. It simply predicts the result: the bright bands correspond to a high probability of a photon appearing there.

Wave optics easily explains the result of the interference experiment with the help of the principle of superposition, according to which light waves add with their phases taken into account. In other words, the waves first add in amplitude, with the phase difference taken into account, forming a periodic amplitude distribution; the detector then registers the corresponding intensity (which corresponds to the mathematical operation of taking the squared modulus, i.e., information about the phase distribution is lost). The resulting intensity distribution is periodic:

I = I₁ + I₂ + 2 A₁A₂ cos(φ₁ − φ₂),

where A, φ, and I = |A|² are the amplitude, phase, and intensity of a wave, respectively, and the indices 1, 2 indicate which of the two waves they belong to. Clearly, when A₁ = A₂ and cos(φ₁ − φ₂) = −1 the intensity I = 0, which corresponds to the mutual damping of the light waves (when they superpose and interact in amplitude).

To interpret wave phenomena from the corpuscular point of view, the principle of superposition is carried over into quantum mechanics, i.e., the concept of a probability amplitude is introduced, by analogy with optical waves: Ψ = A exp(iφ). The probability is then the squared modulus of this quantity: W = |Ψ|². The probability amplitude is called in quantum mechanics the wave function. This concept was introduced in 1926 by the German physicist M. Born, who thereby gave a probabilistic interpretation of de Broglie waves. Satisfying the superposition principle means that if Ψ₁ and Ψ₂ are the probability amplitudes for the particle to pass along the first and the second path, then the probability amplitude when both paths are available should be Ψ = Ψ₁ + Ψ₂. Then, formally, the statement that "the particle went both ways" acquires a wave meaning, and the probability W = |Ψ₁ + Ψ₂|² exhibits an interference distribution.
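
The sketch below (equal path amplitudes are an assumption) contrasts the classical sum of probabilities |Ψ₁|² + |Ψ₂|², which is constant, with the quantum W = |Ψ₁ + Ψ₂|², which oscillates with the phase difference and reproduces the bright and dark fringes.

```python
# Sketch: adding probability amplitudes vs adding probabilities.
# Two paths with equal assumed amplitude and a variable phase difference.
import numpy as np

A = 1 / np.sqrt(2)                    # equal amplitude for each path
dphi = np.linspace(0, 2 * np.pi, 9)   # phase difference between the paths

psi1 = A * np.ones_like(dphi)         # amplitude along path 1 (phase 0)
psi2 = A * np.exp(1j * dphi)          # amplitude along path 2

classical = np.abs(psi1)**2 + np.abs(psi2)**2  # probabilities added: always 1
quantum = np.abs(psi1 + psi2)**2               # amplitudes added: interference

for d, q in zip(dphi, quantum):
    print(f"dphi = {d:4.2f}: W = {q:4.2f}  (classical sum = 1.00)")
# W = 2 A^2 (1 + cos dphi): fringes running from 0 (dark) to 2 (bright).
```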

Thus, the quantity describing the state of a physical system in quantum mechanics is the wave function of the system, under the assumption that the superposition principle is valid. The basic equation of wave mechanics, the Schrödinger equation, is written for the wave function. Hence one of the main problems of quantum mechanics is to find the wave function corresponding to a given state of the system under study.

It is important that the description of the state of a particle by means of the wave function is of a probabilistic nature, since the square of the modulus of the wave function determines the probability of finding the particle at a given time in a certain limited volume. In this respect, quantum theory differs fundamentally from classical physics with its determinism.

At one time, classical mechanics owed its triumphal march to the high accuracy of predicting the behavior of macroobjects. Naturally, among scientists for a long time there was an opinion that the progress of physics and science in general would be inseparably linked with an increase in the accuracy and reliability of such predictions. The principle of uncertainty and the probabilistic nature of the description of microsystems in quantum mechanics radically changed this point of view.

Then other extremes began to appear. Since the uncertainty principle implies the impossibility of simultaneously determining position and momentum, one may conclude that the state of the system at the initial moment of time is not exactly determined and that, therefore, subsequent states cannot be predicted, i.e., that the principle of causality is violated.

However, such a statement is possible only with a classical view of a non-classical reality. In quantum mechanics the state of a particle is completely determined by the wave function; its value, given at a certain moment of time, determines its subsequent values. Since causality is one of the manifestations of determinism, in the case of quantum mechanics it is appropriate to speak of probabilistic determinism based on statistical laws, which provide the higher accuracy the more events of the same type are recorded. Therefore the modern concept of determinism presupposes an organic combination, a dialectical unity, of necessity and chance.

The development of quantum mechanics thus had a marked influence on the progress of philosophical thought. From an epistemological point of view, of particular interest is the already mentioned correspondence principle, formulated by N. Bohr in 1923, according to which any new, more general theory that is a development of the classical one does not reject it completely, but includes the classical theory, indicating the limits of its applicability and passing into it in certain limiting cases.

It is easy to see that the correspondence principle perfectly illustrates the relationship of classical mechanics and electrodynamics with the theory of relativity and quantum mechanics.