The laws of thermodynamics. The zeroth (general) law of thermodynamics

INTRODUCTION

CHAPTER 1

BASIC CONCEPTS AND INITIAL PROVISIONS OF THERMODYNAMICS

1.1. Closed and open thermodynamic systems.

1.2. Zeroth law of thermodynamics.

1.3. First law of thermodynamics.

1.4. The second law of thermodynamics.

1.4.1. Reversible and irreversible processes.

1.4.2. Entropy.

1.5. Third law of thermodynamics.

CHAPTER 2

2.1. General characteristics of open systems.

2.1.1. Dissipative structures.

2.2. Self-organization of various systems and synergetics.

2.3. Examples of self-organization of various systems.

2.3.1. Physical systems.

2.3.2. Chemical systems.

2.3.3. Biological systems.

2.3.4. Social systems.

Formulation of the problem.

CHAPTER 3

ANALYTICAL AND NUMERICAL INVESTIGATIONS OF SELF-ORGANIZATION OF VARIOUS SYSTEMS.

3.1. Benard cells.

3.2. Laser as a self-organized system.

3.3. Biological systems.

3.3.1. Population dynamics. Ecology.

3.3.2. The "Prey-Predator" system.

CONCLUSION.

LITERATURE.

INTRODUCTION

Science originated a very long time ago, in the Ancient East, and then developed intensively in Europe. For a long time the scientific tradition left insufficiently studied the question of the relationship between the whole and its parts. As became clear in the middle of the 20th century, a part can transform the whole in radical and unexpected ways.

It is known from classical thermodynamics that in isolated thermodynamic systems, in accordance with the second law of thermodynamics for irreversible processes, the entropy S of the system increases until it reaches its maximum value in the state of thermodynamic equilibrium. The increase in entropy is accompanied by a loss of information about the system.

With the discovery of the second law of thermodynamics, the question arose of how it is possible to reconcile the increase in entropy with time in closed systems with the processes of self-organization in living and non-living nature. For a long time it seemed that there was a contradiction between the conclusion of the second law of thermodynamics and the conclusions of Darwin's evolutionary theory, according to which in living nature, due to the principle of selection, the process of self-organization is continuously going on.

The contradiction between the second law of thermodynamics and the examples of the highly organized world around us was resolved with the advent, more than fifty years ago, and the subsequent natural development of non-linear non-equilibrium thermodynamics, also called the thermodynamics of open systems. A great contribution to the formation of this new science was made by I.R. Prigogine, P. Glansdorff, and H. Haken. The Belgian physicist of Russian origin Ilya Romanovich Prigogine was awarded the Nobel Prize in 1977 for his work in this field.

As a result of the development of non-linear non-equilibrium thermodynamics, a completely new scientific discipline has emerged: synergetics, the science of the self-organization and stability of the structures of various complex non-equilibrium systems: physical, chemical, biological and social.

In the present work, the self-organization of various systems is studied by analytical and numerical methods.


CHAPTER 1

BASIC CONCEPTS AND INITIAL PROVISIONS OF THERMODYNAMICS.

1.1. CLOSED AND OPEN THERMODYNAMIC SYSTEMS.

Any material object, any body consisting of a large number of particles, is called a macroscopic system. The dimensions of macroscopic systems are much larger than those of atoms and molecules. All macroscopic features that characterize such a system and its relation to the surrounding bodies are called macroscopic parameters. These include, for example, density, volume, elasticity, concentration, polarization, magnetization, etc. Macroscopic parameters are divided into external and internal.

The quantities determined by the position of external bodies not included in our system are called external parameters: for example, the strength of a force field (since it depends on the position of the field sources, the charges and currents, which are not part of the system) or the volume of the system (since it is determined by the location of external bodies). Thus, the external parameters are functions of the coordinates of the external bodies. The quantities determined by the collective motion and spatial distribution of the particles included in the system are called internal parameters: for example, energy, pressure, density, magnetization, polarization, etc. (since their values depend on the motion and positions of the particles of the system and on the charges they carry).

The set of independent macroscopic parameters determines the state of the system, i.e. the form of its existence. Quantities that do not depend on the history of the system and are completely determined by its state at a given moment (i.e., by the set of independent parameters) are called state functions.

A state is called stationary if the parameters of the system do not change over time.

If, in addition, not only are all parameters constant in time, but there are also no stationary flows due to the action of external sources, then such a state of the system is called an equilibrium state (a state of thermodynamic equilibrium). The term thermodynamic system is usually reserved not for all macroscopic systems, but only for those in thermodynamic equilibrium. Similarly, thermodynamic parameters are those parameters that characterize a system in thermodynamic equilibrium.

The internal parameters of a system are divided into intensive and extensive. Parameters that do not depend on the mass or number of particles in the system are called intensive (pressure, temperature, etc.). Parameters proportional to the mass or number of particles in the system are called additive or extensive (energy, entropy, etc.). Extensive parameters characterize the system as a whole, while intensive parameters can take definite values at each point of the system.

According to the method of transferring energy, matter and information between the system under consideration and the environment, thermodynamic systems are classified:

1. An isolated system is a system that exchanges neither energy, nor matter (including radiation), nor information with external bodies.

2. A closed system is a system that exchanges only energy, but not matter.

3. An adiabatically isolated system is a system that exchanges no energy in the form of heat.

4. An open system is a system that exchanges energy, matter, and information.

1.2. THE ZEROTH LAW OF THERMODYNAMICS.

The zeroth law of thermodynamics, formulated only about 50 years ago, is essentially a logical justification, obtained retroactively, for introducing the concept of the temperature of physical bodies. Temperature is one of the most profound concepts of thermodynamics and plays a correspondingly central role in it. For the first time a completely abstract concept took center stage in physics; it replaced the concept of force, introduced in the time of Newton (17th century), which is at first glance more concrete and "tangible" and, moreover, was successfully "mathematized" by Newton.

1.3. THE FIRST LAW OF THERMODYNAMICS.

The first law of thermodynamics establishes that the internal energy of a system is a single-valued function of its state and changes only under the influence of external influences.

In thermodynamics, two types of external interaction are considered: influence associated with a change in the external parameters of the system (the system does work W), and influence not associated with a change in the external parameters and due to a change in the internal parameters or the temperature (a certain amount of heat Q is imparted to the system).

Therefore, according to the first law, the change in the internal energy U_2 - U_1 of the system during its transition under these influences from the first state to the second is equal to the algebraic sum of Q and W, which for a finite process is written as the equation

U_2 - U_1 = Q - W   or   Q = U_2 - U_1 + W (1.1)

The first law is formulated as a postulate and is a generalization of a large amount of experimental data.

For an elementary process, the equation of the first law takes the form:

dQ = dU + dW (1.2)

Here dQ and dW are not total differentials, since Q and W depend on the path of the process.

The dependence of Q and W on the path can be seen in the simplest example, the expansion of a gas. The work done by the system in passing from state 1 to state 2 along path a is represented by the area bounded by the contour A1a2BA:

W_a = ∫ p(V, T) dV (along path a);

while the work done along path b is the area bounded by the contour A1b2BA:

W_b = ∫ p(V, T) dV (along path b).

Fig. 1

Since the pressure depends not only on the volume but also on the temperature, different temperature variations along paths a and b in passing from the same initial state (p_1, V_1) to the same final state (p_2, V_2) give different amounts of work. This shows that in a closed process (cycle) 1a2b1 the system does nonzero work. The operation of all heat engines is based on this fact.
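The path dependence described above can be illustrated numerically. The sketch below is not part of the original text; the pressures and volumes are hypothetical. It compares the work of an ideal gas along two different two-step paths between the same pair of states and checks the first law (1.1) on each.

```python
# Illustrative sketch (assumed numbers): work of an ideal gas between the same
# two states (p1, V1) -> (p2, V2) along two different paths. The state
# function U agrees; the path functions W and Q do not.

def internal_energy(p, V, cv_over_r=1.5):
    # For an ideal gas U = n*Cv*T = (Cv/R)*p*V (monatomic: Cv/R = 3/2)
    return cv_over_r * p * V

p1, V1 = 2.0e5, 1.0e-3   # Pa, m^3 (hypothetical)
p2, V2 = 1.0e5, 3.0e-3

# Path a: isobaric expansion at p1, then isochoric pressure drop to p2
W_a = p1 * (V2 - V1)
# Path b: isochoric pressure drop to p2, then isobaric expansion at p2
W_b = p2 * (V2 - V1)

dU = internal_energy(p2, V2) - internal_energy(p1, V1)  # path-independent
Q_a = dU + W_a   # first law, eq. (1.1), along path a
Q_b = dU + W_b   # along path b
```

The two works differ (here by the area enclosed by the cycle), and the heats differ by exactly the same amount, so that Q - W is the same for both paths.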

It follows from the first law of thermodynamics that work can be done either by changing the internal energy or by supplying heat to the system. If the process is circular, the initial and final states coincide, so U_2 - U_1 = 0 and W = Q; that is, work in a circular process can be performed only because the system receives heat from external bodies.

The first principle can be formulated in several ways:

1. The creation or destruction of energy is impossible.

2. Any form of motion can and must be transformed into any other form of motion.

3. Internal energy is a single-valued function of state.

4. A perpetual motion machine of the first kind is impossible.

5. An infinitesimal change in internal energy is a total differential.

6. The sum of the amount of heat and work does not depend on the path of the process.

The first law of thermodynamics, which postulates the law of conservation of energy for a thermodynamic system, does not indicate the direction of the processes occurring in nature. That direction is established by the second law of thermodynamics.

1.4. THE SECOND LAW OF THERMODYNAMICS.

The second law of thermodynamics establishes the presence of a fundamental asymmetry in nature, i.e. the unidirectionality of all spontaneous processes occurring in it.

The second basic postulate of thermodynamics is also connected with other properties of thermodynamic equilibrium as a special type of thermal motion. Experience shows that if two equilibrium systems A and B are brought into thermal contact, then, regardless of whether their external parameters differ or coincide, they either remain in a state of thermodynamic equilibrium, or their equilibrium is disturbed and, after some time, in the course of heat transfer (energy exchange), both systems arrive at another equilibrium state. In addition, if there are three equilibrium systems A, B, and C, and if systems A and B are each separately in equilibrium with system C, then systems A and B are in thermodynamic equilibrium with each other (the property of transitivity of thermodynamic equilibrium).

Let there be two systems. In order to make sure that they are in a state of thermodynamic equilibrium, it is necessary to measure independently all the internal parameters of these systems and make sure that they are constant in time. This task is extremely difficult.

It turns out, however, that there is a physical quantity that makes it possible to compare the thermodynamic states of two systems, or of two parts of one system, without a detailed study of their internal parameters. This quantity, which expresses the state of internal motion of an equilibrium system, has the same value for all parts of a complex equilibrium system regardless of the number of particles in them, and is determined by the external parameters and the energy; it is called temperature.

Temperature is an intensive parameter and serves as a measure of the intensity of the thermal motion of molecules.

The stated position about the existence of temperature as a special function of the state of an equilibrium system is the second postulate of thermodynamics.

In other words, the state of thermodynamic equilibrium is determined by a combination of external parameters and temperature.

R. Fowler and E. Guggenheim called this the zeroth law: like the first and second laws, each of which establishes the existence of a certain state function, it establishes the existence of temperature for equilibrium systems. This was mentioned above.

Thus, all internal parameters of an equilibrium system are functions of the external parameters and the temperature (the second postulate of thermodynamics).

Expressing the temperature in terms of the external parameters and the energy, the second postulate can be formulated as follows: at thermodynamic equilibrium, all internal parameters are functions of the external parameters and the energy.

The second postulate makes it possible to determine a change in the temperature of a body through a change in any of its parameters, which is the basis for the design of various thermometers.

1.4.1. REVERSIBLE AND IRREVERSIBLE PROCESSES.

The process of transition of the system from state 1 to state 2 is called reversible, if the return of this system to its original state from 2 to 1 can be carried out without any changes in the surrounding external bodies.

The process of transition of the system from state 1 to state 2 is called irreversible, if the reverse transition of the system from 2 to 1 cannot be carried out without a change in the surrounding bodies.

The measure of the irreversibility of a process in a closed system is the change of a new state function, the entropy, whose existence in an equilibrium system is established by the first part of the second law through the impossibility of a perpetual motion machine of the second kind. The single-valuedness of this state function leads to the conclusion that every irreversible process is nonequilibrium.

It follows from the second law that S is a single-valued state function. This means that the integral of dQ/T over any equilibrium circular process is zero. If this were not so, i.e. if entropy were a many-valued state function, it would be possible to build a perpetual motion machine of the second kind.
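As a numerical illustration of this statement (an added sketch, with assumed temperatures and compression ratio), one can evaluate the sum of Q/T over a reversible Carnot cycle of an ideal gas, where it vanishes, and over a hypothetical less efficient cycle, where it is negative:

```python
import math

# Sketch: the Clausius sum of Q/T over a cycle. For a reversible Carnot cycle
# of an ideal gas the sum is zero; an irreversible engine that rejects extra
# heat for the same intake gives a negative sum (the parameters are assumed).
n_R = 8.314              # n*R for one mole, J/K
T_hot, T_cold = 500.0, 300.0
ratio = 2.0              # expansion ratio on each isotherm (adiabats make them equal)

Q_hot = n_R * T_hot * math.log(ratio)     # heat absorbed on the hot isotherm
Q_cold = -n_R * T_cold * math.log(ratio)  # heat rejected on the cold isotherm

clausius_rev = Q_hot / T_hot + Q_cold / T_cold      # = 0 for the reversible cycle

Q_cold_irr = 1.2 * Q_cold                           # hypothetical extra rejection
clausius_irr = Q_hot / T_hot + Q_cold_irr / T_cold  # < 0, cf. inequality (1.5)
```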

The proposition that every thermodynamic system possesses a new single-valued state function, the entropy S, which does not change during adiabatic equilibrium processes, constitutes the content of the second law of thermodynamics for equilibrium processes.

Mathematically, the second law of thermodynamics for equilibrium processes is written by the equation:

dQ/T = dS or dQ = TdS (1.3)

The integral equation of the second law for equilibrium circular processes is the Clausius equality:

∮ dQ/T = 0 (1.4)

For a nonequilibrium circular process, the Clausius inequality holds:

∮ dQ/T < 0 (1.5)

Now we can write down the basic equation of thermodynamics for the simplest system under uniform pressure:

TdS = dU + pdV (1.6)
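Equation (1.6) can be made concrete for one mole of an ideal gas, for which it gives dS = Cv dT/T + R dV/V. The sketch below (end states and step counts are assumptions added here) integrates this expression numerically along two different paths between the same states: unlike the work in Fig. 1, the result is path-independent, since S is a state function.

```python
import math

R_gas, Cv = 8.314, 1.5 * 8.314   # one mole of a monatomic ideal gas
T1, V1 = 300.0, 1.0e-3           # hypothetical initial state
T2, V2 = 600.0, 2.0e-3           # hypothetical final state

def entropy_change(path, steps=100_000):
    """Accumulate dS = Cv*dT/T + R*dV/V along a parametrized path t -> (T, V)."""
    s = 0.0
    for k in range(steps):
        Ta, Va = path(k / steps)
        Tb, Vb = path((k + 1) / steps)
        s += Cv * (Tb - Ta) / (0.5 * (Ta + Tb)) + R_gas * (Vb - Va) / (0.5 * (Va + Vb))
    return s

# Path a: isochoric heating first, then isothermal expansion at T2
path_a = lambda t: (T1 + (T2 - T1) * min(2 * t, 1.0), V1 + (V2 - V1) * max(2 * t - 1.0, 0.0))
# Path b: isothermal expansion at T1 first, then isochoric heating
path_b = lambda t: (T1 + (T2 - T1) * max(2 * t - 1.0, 0.0), V1 + (V2 - V1) * min(2 * t, 1.0))

S_a = entropy_change(path_a)
S_b = entropy_change(path_b)
S_exact = Cv * math.log(T2 / T1) + R_gas * math.log(V2 / V1)  # state-function value
```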

Let us discuss the question of the physical meaning of entropy.

1.4.2. ENTROPY.

The second law of thermodynamics postulates the existence of a state function called "entropy" (from the Greek for "transformation"), which has the following properties:

a) The entropy of a system is an extensive property: if the system consists of several parts, the total entropy of the system is equal to the sum of the entropies of its parts.

b) The change in entropy dS consists of two parts. Denoting by d_eS the flow of entropy due to interaction with the environment, and by d_iS the part of the entropy change due to processes within the system, we have

dS = d_eS + d_iS (1.7)

The entropy increment d_iS due to changes within the system is never negative. The quantity d_iS is zero only when the system undergoes reversible changes; it is always positive if irreversible processes take place in the system.

Thus

d_iS = 0 (reversible processes); (1.8)

d_iS > 0 (irreversible processes). (1.9)

For an isolated system the entropy flow is zero, and expressions (1.8) and (1.9) reduce to

dS = d_iS ≥ 0 (1.10)

(isolated system).

For an isolated system, this relation is equivalent to the classical formulation that entropy can never decrease, so that in this case the properties of the entropy function provide a criterion for detecting the presence of irreversible processes. Similar criteria exist for some other special cases.
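A minimal worked example (the numbers are assumed for illustration): two identical bodies exchange heat inside an isolated enclosure. The hotter body loses entropy, the colder one gains more, so the total entropy of the isolated system increases, in accordance with (1.10).

```python
import math

# Two finite bodies of equal heat capacity C equilibrate inside an isolated
# enclosure. dS of each body is C*ln(T_final/T_initial); the total is positive.
C = 100.0                          # heat capacity of each body, J/K (hypothetical)
T_hot, T_cold = 400.0, 200.0
T_final = 0.5 * (T_hot + T_cold)   # equal capacities -> arithmetic mean

dS_hot = C * math.log(T_final / T_hot)    # negative: the hot body loses entropy
dS_cold = C * math.log(T_final / T_cold)  # positive and larger in magnitude
dS_total = dS_hot + dS_cold               # > 0, cf. eq. (1.10)
```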

Suppose that system 1 is contained inside a larger system 2, and that the overall system, consisting of systems 1 and 2, is isolated.

The classical formulation of the second law of thermodynamics then has the form:

dS = dS_1 + dS_2 ≥ 0 (1.11)

Applying equations (1.8) and (1.9) separately to each part, we must postulate that d_iS_1 ≥ 0 and d_iS_2 ≥ 0.

A situation in which d_iS_1 > 0 and d_iS_2 < 0 while d(S_1 + S_2) > 0 is physically impossible. It can therefore be asserted that a decrease of entropy in one part of the system, compensated by a sufficient increase of entropy in another part, is a forbidden process. From this formulation it follows that in any macroscopic region of the system the entropy increment due to irreversible processes is positive. By a "macroscopic region" of the system is meant any region containing enough molecules for microscopic fluctuations to be negligible. Interaction between irreversible processes is possible only when they occur in the same parts of the system.

Such a formulation of the second law might be called the "local" formulation as opposed to the "global" formulation of classical thermodynamics. The significance of such a new formulation lies in the fact that on its basis a much deeper analysis of irreversible processes is possible.

1.5. THE THIRD LAW OF THERMODYNAMICS.

The discovery of the third law of thermodynamics is associated with the study of chemical affinity, the quantity that characterizes the ability of various substances to react chemically with each other. This quantity is determined by the work W of the chemical forces during the reaction. The first and second laws of thermodynamics allow one to calculate the chemical affinity W only up to some indefinite function. To determine this function, in addition to the two laws of thermodynamics, new experimental data on the properties of bodies are needed; Nernst therefore undertook extensive experimental studies of the behavior of substances at low temperatures.

As a result of these studies, the third law of thermodynamics was formulated: as the temperature approaches 0 K, the entropy of any equilibrium system in isothermal processes ceases to depend on any thermodynamic state parameters and, in the limit (T = 0 K), takes the same universal constant value for all systems, which may be taken equal to zero.

The generality of this statement lies in the fact that, firstly, it refers to any equilibrium system and, secondly, that as T tends to 0 K, the entropy does not depend on the value of any parameter of the system. Thus, according to the third law,

lim [S(T, X_2) - S(T, X_1)] = 0 as T → 0 (1.12)

lim [dS/dX]_T = 0 as T → 0 (1.13)

where X is any thermodynamic parameter (a_i or A_i).
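Equations (1.12)-(1.13) can be made concrete with a low-temperature model. Assuming a Debye-like heat capacity C = aT³ (an assumption added here, not part of the original text), the entropy S(T) = ∫ C/T' dT' from 0 to T equals aT³/3, which vanishes as T → 0; so does the difference of entropies taken at two values of a parameter X on which the coefficient a depends:

```python
# Sketch of the third-law limit with a Debye-like heat capacity C = a*T**3,
# where a depends on some parameter X (e.g. volume). Coefficients are assumed.

def entropy(T, a):
    # Closed-form integral of (a*t**3)/t dt from 0 to T
    return a * T**3 / 3.0

a1, a2 = 2.0e-3, 5.0e-3   # hypothetical Debye coefficients for X1 and X2
diffs = [entropy(T, a2) - entropy(T, a1) for T in (10.0, 1.0, 0.1, 0.01)]
# Each successive difference shrinks by ~1000x, approaching zero with T -> 0.
```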

The limiting value of the entropy, being the same for all systems, has no physical significance and is therefore set equal to zero (Planck's postulate). As a statistical consideration of this question shows, the entropy is inherently defined only up to a constant (like, for example, the electrostatic potential of a system of charges at some point of the field). Thus there is no sense in introducing any kind of "absolute entropy," as Planck and some other scientists did.

CHAPTER 2

BASIC CONCEPTS AND PROVISIONS OF SYNERGETICS.

SELF-ORGANIZATION OF VARIOUS SYSTEMS.

About 50 years ago, as a result of the development of thermodynamics, a new discipline arose: synergetics. Being the science of the self-organization of various systems (physical, chemical, biological and social), synergetics shows the possibility of at least partially removing interdisciplinary barriers, not only within the natural sciences but also between the natural-scientific and humanitarian cultures.

Synergetics studies systems consisting of many subsystems of very different natures, such as electrons, atoms, molecules, cells, neurons, mechanical elements, photons, organs, animals, and even people.

When choosing a mathematical apparatus, it must be borne in mind that it must be applicable to the problems faced by a physicist, chemist, biologist, electrical engineer and mechanical engineer. It should act no less smoothly in the field of economics, ecology and sociology.

In all these cases we have to consider systems consisting of a very large number of subsystems about which we may not have complete information. To describe such systems, approaches based on thermodynamics and information theory are often used.

In all systems of interest to synergetics, dynamics plays a decisive role. How macroscopic states are formed, and which ones, is determined by the growth (or decay) rates of collective "modes". It can be said that in a certain sense we arrive at a kind of generalized Darwinism, whose action extends not only to the organic but also to the inorganic world: macroscopic structures emerge through the birth of collective modes under the influence of fluctuations, their competition and, finally, the selection of the "fittest" mode or combination of modes.

It is clear that the parameter "time" plays a decisive role. Therefore, we must investigate the evolution of systems over time. That is why the equations of interest to us are sometimes called "evolutionary".

2.1. GENERAL CHARACTERISTICS OF OPEN SYSTEMS.

Open systems are thermodynamic systems that exchange matter, energy, and momentum with surrounding bodies (the environment). If the deviation of an open system from equilibrium is small, the nonequilibrium state can be described by the same parameters (temperature, chemical potential, and others) as the equilibrium one. However, the deviation of the parameters from their equilibrium values causes flows of matter and energy in the system. Such transport processes lead to the production of entropy. Examples of open systems are biological systems, including the cell, information-processing systems in cybernetics, energy-supply systems, and others. To maintain life, in systems from the cell up to man, a constant exchange of energy and matter with the environment is necessary; consequently, living organisms, like the other examples given, are open systems. In 1945 Prigogine formulated an extended version of thermodynamics for such systems.

In an open system, the change in entropy can be broken down into the sum of two contributions:

dS = dS_e + dS_i (2.1)

Here dS_e is the entropy flow due to the exchange of energy and matter with the environment, and dS_i is the entropy production within the system (Fig. 2.1).

Fig. 2.1. Schematic representation of an open system: production and flow of entropy. X is a set of characteristics: C, the composition of the system and the environment; P, pressure; T, temperature.

So an open system differs from an isolated one by the presence, in the expression for the entropy change, of a term corresponding to exchange. The sign of the term dS_e can be either positive or negative, in contrast to dS_i.

In a nonequilibrium state the entropy is lower than its equilibrium (maximum) value, so the nonequilibrium state is more highly organized than the equilibrium one. Thus, evolution toward a higher order can be thought of as a process in which the system reaches a state with a lower entropy than the initial one.

The fundamental theorem on entropy production in an open system with time-independent boundary conditions was formulated by Prigogine: in a linear region, the system evolves to a stationary state characterized by the minimum entropy production consistent with the imposed boundary conditions.

Thus the state of any linear open system with time-independent boundary conditions always changes in the direction of decreasing entropy production P = dS/dt until the stationary state is reached, in which the entropy production is minimal:

dP < 0 (condition of evolution);

P = min, dP = 0 (condition of the stationary state);

dP/dt < 0 (2.2)
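Prigogine's theorem can be watched at work in a toy linear model (a sketch under assumed parameters, not taken from the original text): an interior node coupled to two reservoirs held at fixed temperatures relaxes to its steady state, and a quadratic measure of the entropy production decreases monotonically to its minimum along the way.

```python
# Linear chain: a middle node at temperature T1 couples two reservoirs held at
# fixed T0 and T2. In the linear regime the entropy production is proportional
# to the sum of squared temperature differences; it falls as the node relaxes.

T0, T2 = 310.0, 290.0     # fixed boundary temperatures (hypothetical)
T1 = 305.0                # arbitrary initial interior temperature
k, dt = 0.1, 0.01

def production(T1):
    # Entropy production in the linear regime, up to a constant factor
    return (T0 - T1)**2 + (T1 - T2)**2

history = []
for _ in range(5000):
    history.append(production(T1))
    T1 += k * (T0 - 2.0 * T1 + T2) * dt   # relaxation dynamics

P_min = production(0.5 * (T0 + T2))       # steady state: T1 = (T0 + T2)/2
```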

2.1.1. DISSIPATIVE STRUCTURES.

Each system consists of elements (subsystems). These elements are in a certain order and are connected by certain relationships. The structure of the system can be called the organization of elements and the nature of the relationship between them.

Real physical systems have spatial and temporal structures.

Structure formation is the emergence of new properties and relations in the set of elements of a system. The following concepts and principles play an important role in the processes of structure formation:

1. Constant negative entropy flow.

2. The state of the system far from equilibrium.

3. Nonlinearity of equations describing processes.

4. Collective (cooperative) behavior of subsystems.

5. The universal Prigogine-Glansdorff criterion of evolution.

The formation of structures in irreversible processes must be accompanied by a qualitative jump (a phase transition) when critical values of the parameters are reached in the system. In open systems the external contribution to the entropy in (2.1) can, in principle, be chosen arbitrarily by suitably changing the parameters of the system and the properties of the environment. In particular, the entropy can decrease owing to the export of entropy to the environment, i.e. when dS_e < 0. This can happen when the withdrawal of entropy from the system per unit time exceeds the production of entropy within the system, that is,

dS/dt < 0 , if |d_eS/dt| > d_iS/dt > 0 (2.3)

For structure formation to begin, the export of entropy must exceed a certain critical value. In the strongly nonequilibrium region, the variables of the system satisfy nonlinear equations.

Thus, two main classes of irreversible processes can be distinguished:

1. Destruction of the structure near the equilibrium position. This is a universal property of systems under arbitrary conditions.

2. The birth of a structure far from equilibrium in an open system under special critical external conditions and with nonlinear internal dynamics. This property is not universal.

Spatial, temporal or spatio-temporal structures that can arise far from equilibrium in a nonlinear region at critical values ​​of system parameters are called dissipative structures.

Three aspects are interconnected in these structures:

1. State function expressed by equations.

2. Spatio-temporal structure arising due to instability.

3. Fluctuations responsible for instabilities.


Fig. 1. Three aspects of dissipative structures.

The interaction between these aspects leads to unexpected phenomena: the emergence of order through fluctuations, the formation of a highly organized structure out of chaos.

Thus, in dissipative structures becoming arises from being: what arises is formed from what already exists.

2.2. SELF-ORGANIZATION OF VARIOUS SYSTEMS AND SYNERGETICS.

The transition from chaos to order that occurs when parameter values pass from critical to supercritical changes the symmetry of the system. Such a transition is therefore similar to thermodynamic phase transitions; transitions in nonequilibrium processes are called kinetic phase transitions. In the vicinity of nonequilibrium phase transitions there is no consistent macroscopic description: fluctuations are as important as the mean values. For example, macroscopic fluctuations can lead to new types of instabilities.

Thus, far from equilibrium an unexpected relationship arises between the chemical kinetics and the spatiotemporal structure of reacting systems. True, the interactions that determine the rate constants and transfer coefficients are due to short-range forces (valence forces, hydrogen bonds, and van der Waals forces). However, the solutions of the corresponding equations depend also on global characteristics. For dissipative structures to arise, the dimensions of the system usually have to exceed a certain critical value, a complex function of the parameters describing the reaction-diffusion processes. We can therefore assert that chemical instabilities give rise to a long-range order through which the system acts as a whole.

If diffusion is taken into account, then the mathematical formulation of problems involving dissipative structures requires the study of partial differential equations. Indeed, the evolution of the concentrations of the components X_i in time is determined by an equation of the form

∂X_i/∂t = F_i({X_j}) + D_i ∂²X_i/∂r² (2.4)

where the first term gives the contribution of the chemical reactions to the change of the concentration X_i and usually has a simple polynomial form, while the second term describes diffusion along the r axis.
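The role of system size in equation (2.4) can be sketched numerically. In the sketch below the polynomial chemistry term is replaced by its linearization aX (a simplifying assumption made here), with X held at zero on the boundaries: a small perturbation decays when the length L is below the critical value L_c = π√(D/a) and grows when L exceeds it.

```python
import math

# Linearized reaction-diffusion model dX/dt = a*X + D*d2X/dr2 with X = 0 at
# the boundaries. The slowest mode sin(pi*r/L) grows only for L > pi*sqrt(D/a).

def simulate(L, a=1.0, D=1.0, n=50, dt=1e-4, steps=20000):
    """Explicit finite-difference integration; returns the final peak amplitude."""
    dr = L / (n + 1)
    X = [0.01 * math.sin(math.pi * (i + 1) * dr / L) for i in range(n)]  # seed mode
    for _ in range(steps):
        Xn = X[:]
        for i in range(n):
            left = X[i - 1] if i > 0 else 0.0
            right = X[i + 1] if i < n - 1 else 0.0
            Xn[i] = X[i] + dt * (a * X[i] + D * (left - 2 * X[i] + right) / dr**2)
        X = Xn
    return max(abs(x) for x in X)

L_crit = math.pi                     # pi*sqrt(D/a) with a = D = 1
amp_small = simulate(0.5 * L_crit)   # subcritical size: the perturbation decays
amp_large = simulate(1.5 * L_crit)   # supercritical size: the perturbation grows
```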

It is truly remarkable how many different phenomena are described by the reaction-diffusion equation (2.4); it is therefore interesting to consider the "basic solution", corresponding to the thermodynamic branch. Other solutions can be obtained through the successive instabilities that arise as the system moves away from equilibrium. Instabilities of this type are conveniently studied by the methods of bifurcation theory [Nikolis and Prigogine, 1977]. In essence, a bifurcation is nothing other than the appearance of a new solution of the equations at some critical value of a parameter. Suppose, for example, that we have a chemical reaction described by the kinetic equation [McLane and Wallis, 1974]

dX/dt = aX(X - R) (2.5)

It is clear that for R < 0 there is only one time-independent solution, X = 0. At the point R = 0 a bifurcation occurs and a new solution, X = R, appears.

Fig. 2.3. Bifurcation diagram for equation (2.5). The solid line corresponds to the stable branch, the dots to the unstable branch.

A stability analysis in the linear approximation shows that the solution X = 0 becomes unstable on passing through R = 0, while the solution X = R becomes stable. In general, as some characteristic parameter R increases, successive bifurcations occur. Figure 2.4 shows a unique solution for p = p_1, but at p = p_2 uniqueness gives way to multiple solutions.
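This exchange of stability can be reproduced numerically. In the sketch below the coefficient is taken as a = -1 (an assumption about the sign, which the text leaves unspecified), so that X = 0 attracts trajectories for R < 0 while the new branch X = R attracts them for R > 0, as in Fig. 2.3.

```python
# Exchange of stability in eq. (2.5), dX/dt = a*X*(X - R), taken here with
# a = -1 so that X = R is the stable branch beyond the bifurcation point.

def settle(R, X0=0.01, dt=0.01, steps=20000):
    """Integrate dX/dt = -X*(X - R) by Euler steps; return the settled state."""
    X = X0
    for _ in range(steps):
        X += -X * (X - R) * dt
    return X

final_states = {R: settle(R) for R in (-1.0, -0.5, 0.5, 1.0)}
# For R < 0 the trajectory decays to 0; for R > 0 it is attracted to X = R.
```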

It is interesting to note that bifurcation, in a sense, introduces history into physics and chemistry, an element previously considered the prerogative of the sciences dealing with biological, social and cultural phenomena.

Fig. 2.4. Successive bifurcations: A and A_1 are the points of primary bifurcation from the thermodynamic branch; B and B_1 are the points of secondary bifurcation.

It is known that when the control parameters change in the system, various transient phenomena are observed. Let us now single out from these observations certain common features characteristic of a large number of other transitions in physicochemical systems.

To this end, let us plot (Fig. 2.5) the dependence of the vertical component of the fluid flow velocity at a certain point on the external constraint or, more generally, the dependence of the system's state variable X (or x = X - X_s) on the control parameter λ. We thus obtain the graph known as a bifurcation diagram.

Fig. 2.5. Bifurcation diagram: a is the stable part of the thermodynamic branch; a_1 is its unstable part; b_1 and b_2 are dissipative structures born in the supercritical region.

For small values of λ only one solution is possible; it corresponds to the state of rest in the Benard experiment. It is a direct extrapolation of thermodynamic equilibrium and, like equilibrium, is characterized by an important property, asymptotic stability: in this region the system is able to damp internal fluctuations and external perturbations. For this reason we shall call this branch of states the thermodynamic branch. On crossing the critical value of the parameter λ, denoted λ_c in Fig. 2.5, the states on this branch become unstable, since fluctuations and small external perturbations are no longer damped. Acting like an amplifier, the system departs from the stationary state and passes to a new regime, which in the Benard experiment corresponds to the state of stationary convection. The two regimes merge at λ = λ_c and differ for λ > λ_c. This phenomenon is called bifurcation. It is easy to see why it is associated with catastrophic change and conflict: at the decisive moment of the transition the system must make a critical choice (in the vicinity of λ = λ_c), which in the Benard problem is associated with the appearance of right- or left-handed rotating cells in a particular region of space (Fig. 2.5, branches b_1 and b_2).

In the vicinity of the equilibrium state the stationary state is asymptotically stable (by the theorem on minimum entropy production); therefore, by continuity, the thermodynamic branch extends over the entire subcritical region. When the critical value is reached, the thermodynamic branch can become unstable, so that any perturbation, however small, transfers the system from the thermodynamic branch to a new stable state, which can be ordered. Thus at the critical value of the parameter a bifurcation occurs, and a new branch of solutions, and accordingly a new state, arises. In the critical region events therefore develop according to the following scheme:

Fluctuation → Bifurcation → Nonequilibrium phase transition → Birth of an ordered structure.

Bifurcation in a broad sense is the acquisition of a new quality by the motions of a dynamical system under a small change of its parameters (the appearance of a new solution of the equations at a certain critical value of the parameter). Note that at a bifurcation the choice of the next state is purely random, so the transition from one necessary stable state to another passes through the random (the dialectics of the necessary and the random). Any description of a system undergoing bifurcations therefore includes both deterministic and probabilistic elements: between bifurcations the behavior of the system is deterministic, while in the neighborhood of a bifurcation point the choice of the next path is random. Drawing an analogy with biological evolution, one can say that mutations are fluctuations, while the search for a new stability plays the role of natural selection. Bifurcation in a sense introduces an element of historicism into physics and chemistry: the analysis of the state b1, for example, requires knowledge of the history of the system that has passed through the bifurcation.
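The scheme above can be illustrated with the canonical pitchfork normal form dx/dt = λx - x³ (a minimal hypothetical model chosen for illustration, not the Benard equations themselves): below λ = 0 only the thermodynamic branch x = 0 is stable, while above it that branch loses stability and two new stable branches x = ±√λ appear, playing the role of the two supercritical branches in Fig. 2.5.

```python
import math

def stationary_states(lam):
    """Stationary states of dx/dt = lam*x - x**3 and their stability.

    Returns (x_s, stable) pairs; a state x_s is stable when the derivative
    of the right-hand side, lam - 3*x_s**2, is negative there.
    """
    states = [0.0]
    if lam > 0:                      # supercritical region: two new branches
        r = math.sqrt(lam)
        states += [r, -r]
    return [(x, lam - 3 * x * x < 0) for x in states]

# Subcritical (lam < 0): the thermodynamic branch x = 0 is stable.
print(stationary_states(-1.0))   # [(0.0, True)]

# Supercritical (lam > 0): x = 0 unstable, the branches +-sqrt(lam) stable.
print(stationary_states(4.0))    # [(0.0, False), (2.0, True), (-2.0, True)]
```

The random "choice" at the bifurcation corresponds to the system settling on either the positive or the negative branch depending on the sign of an infinitesimal initial fluctuation.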

The general theory of self-organization processes in open, strongly non-equilibrium systems is developed on the basis of the universal Prigogine-Glansdorff evolution criterion. This criterion is a generalization of Prigogine's theorem on minimum entropy production. According to it, the part of the entropy production rate due to changes in the thermodynamic forces X obeys the condition

d_X P/dt ≤ 0 (2.6)

This inequality does not rely on any assumptions about the nature of the connections between flows and forces beyond the condition of local equilibrium, and therefore has a universal character. In the linear region inequality (2.6) turns into Prigogine's theorem on minimum entropy production. Thus a nonequilibrium system evolves in such a way that the entropy production rate decreases as the thermodynamic forces change (and vanishes in the stationary state).

Ordered structures born far from equilibrium in accordance with criterion (2.6) are dissipative structures.

The evolution through bifurcations and the subsequent self-organization are thus determined by the corresponding non-equilibrium constraints.

The evolution of the variables X is described by the system of equations

∂X_i/∂t = F_i({X}, λ) (2.7)

where the functions F_i can depend in an arbitrarily complex way on the variables X themselves and on their spatial derivatives with respect to the coordinates r, as well as on the time t. In addition, these functions depend on the control parameters λ, i.e. on those variable characteristics whose change can strongly alter the system. At first glance it seems obvious that the form of the functions F_i must be strongly determined by the specific type of system under consideration. Nevertheless, some basic universal features can be singled out that do not depend on the type of system.

The solution of equations (2.7) in the absence of external constraints must correspond to equilibrium for any form of the functions F. Since the equilibrium state is stationary,

F_i({X_eq}, λ_eq) = 0 (2.8)

In the more general case of a nonequilibrium stationary state one can similarly write the condition

F_i({X}, λ) = 0 (2.9)

These conditions impose certain restrictions of a universal nature: for example, the laws of evolution of the system must be such that temperatures or chemical concentrations, obtained as solutions of the corresponding equations, remain positive.

Another universal feature is nonlinearity. Let, for example, some single characteristic X of the system satisfy the equation

dX/dt = λ - kX (2.10)

where k is some parameter and λ is the external control constraint. The stationary state is then determined from the algebraic equation

λ - kX_s = 0 (2.11)

so that

X_s = λ/k (2.12)

In the stationary state the characteristic (for example, a concentration) thus varies linearly with the control constraint λ, and for each λ there is a single state X_s. The stationary value of X can be predicted unambiguously for any λ once at least two experimental values of X(λ) are known. The control parameter may, in particular, measure the degree of remoteness of the system from equilibrium. In this linear case the behavior of the system remains very similar to equilibrium behavior even in the presence of strongly nonequilibrium constraints.

Fig. 2.6. An illustration of the universal feature of nonlinearity in the self-organization of structures.

If the stationary value of the characteristic X depends nonlinearly on the control constraint, then for certain values of the constraint several different solutions exist; for example, under the constraints of Fig. 2.6c the system has three stationary solutions. Such a departure from linear behavior occurs when the control parameter reaches a certain critical value λ: a bifurcation appears. In the nonlinear region a small increase of the constraint near the critical value of λ can then produce a disproportionately strong effect: the system can jump to another stable branch (Fig. 2.6c). In addition, transitions from the states on the branch AB1 to the branch A1B (or vice versa) can occur even before the states B or A are reached, if the perturbations imposed on the stationary state exceed the value corresponding to the intermediate branch. Such perturbations can be external impacts or internal fluctuations of the system itself. A system with multiple stationary states thus possesses the universal properties of internal excitability and jump-like variability.

The fulfillment of the theorem on minimum entropy production in the linear region and, as its generalization, the fulfillment of the universal criterion (2.6) in both the linear and nonlinear regions guarantee the stability of stationary nonequilibrium states. In the region where the irreversible processes are linear, entropy production plays the same role as the thermodynamic potentials in equilibrium thermodynamics. In the nonlinear region the quantity dP/dt has no general property, but the quantity d_X P/dt satisfies the general inequality (2.6), which generalizes the minimum entropy production theorem.

2.3. EXAMPLES OF SELF-ORGANIZATION OF VARIOUS SYSTEMS.

As an illustration, let us consider some examples of self-organization of systems in physics, chemistry, biology and society.

2.3.1. PHYSICAL SYSTEMS.

In principle, examples of self-organization as a result of collective behavior can be pointed out even in thermodynamic equilibrium: all phase transitions in physical systems, such as the liquid-gas transition, the ferromagnetic transition, or the onset of superconductivity. Among non-equilibrium examples of high organization one can mention structures in hydrodynamics, in lasers of various types, and in solid-state physics: the Gunn oscillator, tunnel diodes, crystal growth.

In open systems, by changing the flow of matter and energy from outside, one can control the processes and direct the evolution of the system toward states ever farther from equilibrium. In the course of non-equilibrium processes, at a certain critical value of the external flow, disordered and chaotic states can lose their stability and give rise to ordered states and dissipative structures.

2.3.1a. BENARD CELLS.

A classic example of the appearance of structure out of a completely chaotic phase is provided by convective Benard cells. In 1900 an article by H. Benard was published with a photograph of a structure resembling a honeycomb (Fig. 2.7).

Fig. 2.7. Benard cells: (a) general view of the structure; (b) a separate cell.

This structure formed in mercury poured into a flat, wide vessel heated from below, after the temperature gradient exceeded a certain critical value. The entire layer of mercury (or of another viscous liquid) broke up into identical vertical hexagonal prisms with a definite ratio of side to height (Benard cells). In the central region of each prism the liquid rises, while near the vertical faces it descends. Between the lower and upper surfaces there is a temperature difference ΔT = T2 - T1 > 0. For small, subcritical differences ΔT < ΔT_cr the liquid remains at rest and heat is transferred upward by conduction. When the heating temperature reaches the critical value T2 = T_cr (correspondingly ΔT = ΔT_cr), convection begins; at this critical value of the parameter a spatial dissipative structure is thus born. At equilibrium the temperatures are equal, T2 = T1, ΔT = 0. Under a brief heating of the lower plane, that is, a short external perturbation, the temperature quickly becomes uniform again and returns to its original value: the perturbation decays, and the state is asymptotically stable. Under prolonged but subcritical heating (ΔT < ΔT_cr) a simple and unique state is again established in the system, in which heat is carried to the upper surface and transferred to the external medium (heat conduction), Fig. 2.8, section a. This state differs from equilibrium in that the temperature, density, and pressure become inhomogeneous, varying approximately linearly from the warm region to the cold one.

Fig. 2.8. Heat flow in a thin liquid layer.

An increase in the temperature difference ΔT, that is, a further deviation of the system from equilibrium, makes the state of the motionless, heat-conducting fluid unstable (section b in Fig. 2.8). It is replaced by a stable state (section c in Fig. 2.8) characterized by the formation of cells. At large temperature differences a fluid at rest can no longer transfer the required heat flux, and the fluid is "forced" to move, moreover in a cooperative, collective, coordinated manner.

2.3.1b. LASER AS A SELF-ORGANIZING SYSTEM.

As an example of a physical system whose ordering is a consequence of an external influence, let us consider the laser.

In the roughest description, a laser is a kind of glass tube into which light from an incoherent source (an ordinary lamp) enters and out of which a narrowly directed coherent light beam emerges, with a certain amount of heat released in the process.


At low pump power the electromagnetic waves emitted by the laser are uncorrelated, and the radiation is similar to that of an ordinary lamp. Such incoherent radiation is noise, chaos. When the external influence, the pumping, is increased to a critical threshold value, the incoherent noise is converted into a "pure tone": a sinusoidal wave is emitted, and the individual atoms behave in a strictly correlated manner; they self-organize.

Lamp → Laser

Chaos → Order

Noise → Coherent radiation

In the supercritical region the "ordinary lamp" regime is unstable, while the laser regime is stable (Fig. 2.9).

Fig. 2.9. Laser radiation in the subcritical (a) and supercritical (b) regions.

It can be seen that the formation of structure in a liquid and in a laser is described formally in a very similar way. The analogy is related to the presence of the same types of bifurcations in the corresponding dynamical equations.

We will consider this issue in more detail in the practical part, in Chapter 3.

2.3.2. CHEMICAL SYSTEMS.

In this area synergetics focuses its attention on phenomena accompanied by the formation of macroscopic structures. Usually, if the reactants are allowed to interact while the reaction mixture is intensively stirred, the final product is homogeneous. In some reactions, however, temporal, spatial, or mixed (spatio-temporal) structures can arise. The most famous example is the Belousov-Zhabotinsky reaction.

2.3.2a. THE BELOUSOV-ZHABOTINSKY REACTION.

Consider the Belousov-Zhabotinsky reaction. Ce2(SO4)3, KBrO3, CH2(COOH)2 and H2SO4 are poured into a flask in certain proportions, a few drops of the redox indicator ferroin are added, and the mixture is stirred. More specifically, the redox reactions

Ce3+ → Ce4+;   Ce4+ → Ce3+

in a solution of cerium sulfate, potassium bromate, malonic acid and sulfuric acid are studied. The addition of ferroin makes it possible to follow the course of the reaction through the color change (through the spectral absorption). At high concentrations of the reactants, exceeding the critical value of the affinity, unusual phenomena are observed.

With the composition

cerium sulfate - 0.12 mmol/l,

potassium bromate - 0.60 mmol/l,

malonic acid - 48 mmol/l,

3-normal sulfuric acid,

some ferroin,

at 60 °C the change in the concentration of the cerium ions acquires the character of relaxation oscillations: the color of the solution changes periodically with time from red (excess of Ce3+) to blue (excess of Ce4+), Fig. 2.10a.


Fig. 2.10. Temporal (a) and spatial (b) periodic structures in the Belousov-Zhabotinsky reaction.

This system and effect are called a chemical clock. If a perturbation is imposed on the Belousov-Zhabotinsky reaction (a concentration or temperature impulse, say the introduction of a few millimoles of potassium bromate or a several-second touch of the flask), then after a certain transient regime the oscillations reappear with the same amplitude and period as before the perturbation. The dissipative structure of the Belousov-Zhabotinsky reaction is thus asymptotically stable. The birth and existence of undamped oscillations in such a system show that its individual parts act in concert, maintaining definite phase relations. With the composition

cerium sulfate - 4.0 mmol/l,

potassium bromate - 0.35 mmol/l,

malonic acid - 1.20 mol/l,

sulfuric acid - 1.50 mol/l,

some ferroin,

at 20 °C periodic color changes occur in the system with a period of about 4 minutes. After several such oscillations, concentration inhomogeneities arise spontaneously and, if no fresh substances are supplied, stable spatial structures form and persist for some time (about 30 minutes), Fig. 2.10b. If the reagents are supplied continuously and the final products are withdrawn, the structure persists indefinitely.

2.3.3. BIOLOGICAL SYSTEMS.

The animal world exhibits many highly ordered and superbly functioning structures. An organism as a whole continuously receives flows of energy (solar energy, for example, in plants) and of matter (nutrients) and releases waste products into the environment; a living organism is an open system. Living systems, moreover, function far from equilibrium. Self-organizing processes allow biological systems to "transform" energy from the molecular level to the macroscopic one. Such processes are manifested, for example, in muscle contraction, which leads to all kinds of motion, in the formation of charge in electric fish, in the recognition of images and speech, and in other processes in living systems. The most complex biological systems are one of the main objects of research in synergetics. Whether the features of biological systems, for example their evolution, can be fully explained by the concepts of open thermodynamic systems and synergetics is at present entirely unclear. Nevertheless, several examples of a clear connection between the conceptual and mathematical apparatus of open systems and biological order can be pointed out.

We shall consider biological systems more concretely in Chapter 3, examining the population dynamics of a single species and the prey-predator system.

2.3.4. SOCIAL SYSTEMS.

A social system is a certain holistic formation whose main elements are people and their norms and connections. As a whole the system forms a new quality that cannot be reduced to the sum of the qualities of its elements. There is some analogy here with the change of properties in the transition from a small to a very large number of particles in statistical physics: the transition from dynamical to statistical laws. At the same time it is quite obvious that any analogies with physicochemical and biological systems are very conditional; to draw an analogy between a person and a molecule, or anything of the kind, would be an unacceptable delusion. Nevertheless, the conceptual and mathematical apparatus of nonlinear non-equilibrium thermodynamics and synergetics proves useful in describing and analyzing elements of self-organization in human society.

Social self-organization is one of the manifestations of spontaneous or forced processes in society aimed at ordering the life of the social system, at greater self-regulation. A social system is an open system, capable of, and indeed compelled to, exchange information, matter, and energy with the outside world. Social self-organization arises as a result of the purposeful individual actions of its constituents.

Let us consider self-organization in a social system, taking as an example an urbanization zone. In analyzing the urbanization of a geographical area one may assume that the growth of the local population of the territory is driven by the presence of jobs in the area. There is, however, a feedback: the state of the market determines the need for goods and services and hence the employment. This gives rise to a nonlinear feedback mechanism in the growth of the population density. The problem is solved on the basis of a logistic equation in which the zone is characterized by its capacity N and by new economic functions S_ik (the k-th function in the local area i of the city). The logistic equation describing the evolution of the population n_i can then be represented as

dn_i/dt = K n_i (N + Σ_k R_k S_ik - n_i) - d n_i (2.13)

where R_k is the weight of the k-th function, its significance. An economic function changes with the growth of the population: it is determined by the demand for the k-th product in the i-th area, which depends on the growth of the population and on the competition of enterprises in other areas of the city. The emergence of a new economic function plays the role of a socio-economic fluctuation and disrupts the uniform distribution of the population density. Numerical calculations with such logistic equations can be useful in forecasting many problems.
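A toy Euler integration of (2.13) can make this concrete. All numbers below are invented for illustration, and the sum over economic functions Σ_k R_k S_ik is collapsed into a single precomputed contribution:

```python
def urban_density(n0, K_rate, N, d, contributions, t_end=50.0, dt=1e-3):
    """Euler integration of eq. (2.13) for a single area:
    dn/dt = K*n*(N + contributions - n) - d*n,
    where 'contributions' stands for the sum over economic functions."""
    n = n0
    cap = N + contributions            # effective capacity of the area
    for _ in range(int(t_end / dt)):
        n += dt * (K_rate * n * (cap - n) - d * n)
    return n

# Hypothetical area: base capacity 100; two economic functions add 30 jobs.
n_no_jobs = urban_density(n0=1.0, K_rate=0.05, N=100.0, d=0.5, contributions=0.0)
n_jobs = urban_density(n0=1.0, K_rate=0.05, N=100.0, d=0.5, contributions=30.0)
print(n_jobs > n_no_jobs)   # True: new economic functions raise the density
```

The emergence of a new economic function shifts the stationary density upward, which is exactly the "socio-economic fluctuation" disrupting the previously uniform distribution.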

FORMULATION OF THE PROBLEM.

In the examples considered in the literature only general conclusions are given; specific analytical or numerical calculations are not presented.

The purpose of this thesis is the analytical and numerical study of the self-organization of various systems.

CHAPTER 3

ANALYTICAL AND NUMERICAL STUDIES OF SELF-ORGANIZATION OF VARIOUS SYSTEMS.

3.1. BENARD CELLS.

In order to study these structures experimentally, it is enough to have a frying pan, some oil, and some fine powder to make the motion of the liquid visible. Pour the oil with the powder mixed into it into the pan and heat it from below (Fig. 3.1).

Fig. 3.1. Convective Benard cells.

If the bottom of the pan is flat and heated evenly, we may assume that constant temperatures are maintained at the bottom and at the surface: T1 below, T2 above. As long as the temperature difference ΔT = T1 - T2 is small, the powder particles are motionless, and hence the liquid is motionless as well.

We now gradually increase the temperature T1. Up to the critical difference ΔT_c the same picture is observed, but when ΔT > ΔT_c the entire medium divides into regular hexagonal cells (see Fig. 3.1), in the center of each of which the liquid moves upward and along the edges downward. If we take another frying pan, we find that the size of the resulting cells is practically independent of its shape and size. This remarkable experiment was first carried out by Benard at the beginning of the twentieth century, and the cells were named Benard cells.

An elementary qualitative explanation of the cause of the fluid motion is as follows. Owing to thermal expansion the liquid becomes stratified, and in the lower layer the density ρ1 is less than the density ρ2 in the upper layer. An inverse density gradient arises, directed opposite to the force of gravity. If we single out a small volume V that is displaced slightly upward by a perturbation, then in the neighboring layer the Archimedean (buoyancy) force becomes greater than the force of gravity, since ρ2 > ρ1, and the volume continues to rise. In the upper part, a small volume displaced downward falls into a region of lower density, the Archimedean force becomes less than gravity, F_A < F_T, and a downward motion of the liquid arises. The direction of the descending and ascending flows in a given cell is random; once the directions in a given cell are chosen, the motion of the flows in the neighboring cells is deterministic. The total entropy flow through the boundaries of the system is negative, that is, the system gives off entropy; in the stationary state it gives off exactly as much entropy as is produced inside the system (through friction losses):

dS_e/dt = q/T1 - q/T2 = -q (T1 - T2)/(T1 T2) < 0 (3.1)

The formation of the honeycomb cellular structure is explained by the minimum energy expenditure of the system in creating precisely this form of spatial structure. In the central part of each cell the liquid moves upward, and on its periphery downward.
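The sign of the boundary entropy flow (3.1) can be checked directly. The function below evaluates dS_e/dt = q/T1 - q/T2 for a stationary heat flux q entering at the bottom temperature T1 and leaving at the top temperature T2; the numbers are arbitrary illustrative values:

```python
def entropy_flow_rate(q, t_bottom, t_top):
    """Entropy flow dSe/dt = q/T1 - q/T2 through the layer boundaries (eq. 3.1).

    q is the stationary heat flux entering at temperature t_bottom (T1)
    and leaving at t_top (T2); with T1 > T2 the result is negative,
    i.e. the layer exports entropy to its surroundings.
    """
    return q / t_bottom - q / t_top

rate = entropy_flow_rate(q=10.0, t_bottom=350.0, t_top=300.0)
print(rate < 0)   # True: the layer gives off entropy
```

In the stationary state this exported entropy exactly balances the entropy produced internally by friction, so the ordered cellular structure can persist.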

Further supercritical heating of the liquid leads to the destruction of the spatial structure - a chaotic turbulent regime arises.


Fig. 3.2. Illustration of the onset of thermal convection in a liquid.

3.2 LASER AS A SELF-ORGANIZING SYSTEM.

We have already touched on this question in Chapter 2; here we consider a simple laser model.

A laser is a device in which photons are generated in the process of stimulated emission.

The change in time of the number of photons n, or in other words the photon generation rate, is determined by an equation of the form:

dn/dt = Growth - Losses (3.2)

The growth is due to so-called stimulated emission: it is proportional to the number of photons already present and to the number N of excited atoms. Thus:

Growth = G N n (3.3)

Here G is the gain coefficient, which can be derived from microscopic theory. The loss term is due to the escape of photons through the ends of the laser. The only assumption we make is that the escape rate is proportional to the number of available photons. Consequently,

Losses = 2cn (3.4)

where 2c = 1/t0, and t0 is the lifetime of a photon in the laser.

One important circumstance must now be taken into account, which makes (3.2) a nonlinear equation. Substituting the gain (3.3) and the losses (3.4) into (3.2) first gives

dn/dt = GNn - 2cn (3.5)

The number of excited atoms decreases through the emission of photons. This decrease ΔN is proportional to the number of photons present in the laser, since the photons constantly force the atoms back into the ground state:

ΔN = αn (3.6)

where α is a proportionality constant. Thus the number of excited atoms is

N = N0 - ΔN (3.7)

where N0 is the number of excited atoms maintained by the external pumping in the absence of laser generation.

Substituting (3.3)-(3.7) into (3.2), we obtain the basic equation of our simplified laser model:

dn/dt = -kn - k1 n² (3.8)

where the constants k and k1 are given by

k = 2c - GN0,   k1 = Gα (3.9)

and k can be of either sign.

If the number of excited atoms N0 (produced by the pumping) is small, then k is positive, while for sufficiently large N0 the constant k can become negative. The sign change occurs when

GN 0 = 2c (3.10)

This condition is the lasing threshold condition.

It follows from bifurcation theory that for k > 0 there is no laser generation, while for k < 0 the laser emits photons.

Below or above the threshold, the laser operates in completely different modes.

We now solve equation (3.8), the equation of a single-mode laser, and analyze the solution. Divide the original equation by n²:

(1/n²) dn/dt = -k/n - k1

and introduce a new function Z = 1/n = n⁻¹, so that Z′ = -n⁻² dn/dt. The equation then takes the form

-Z′ = -kZ - k1

Dividing both sides by -1, we get

Z′ - kZ = k1 (3.11)

Equation (3.11) is of Bernoulli type, so we make the substitution Z = U·V, where U and V are as yet unknown functions of t; then Z′ = U′V + UV′.

Equation (3.11), after the change of variables, takes the form

U′V + UV′ - kUV = k1

which we transform to

U′V + U(V′ - kV) = k1 (3.12)

We solve equation (3.12) by first requiring

V′ - kV = 0, i.e. dV/dt = kV

Separating the variables, dV/V = k dt, so that ln V = kt and

V = e^(kt) (3.13)

Equation (3.12) then reduces to

U′ e^(kt) = k1

which is the same as dU/dt = k1 e^(-kt), or dU = k1 e^(-kt) dt; integrating, we obtain

U = -(k1/k) e^(-kt) + C (3.14)

where C is an integration constant.

Recalling the substitution Z = U·V and inserting (3.13) and (3.14), we obtain Z = C e^(kt) - k1/k. Since we earlier introduced Z = n⁻¹, it follows that

n(t) = 1 / (C e^(kt) - k1/k) (3.15)

The initial condition n(0) = n0 = 1/(C - k1/k) determines the constant C:

C = 1/n0 + k1/k

Substituting this constant into (3.15), we get

n(t) = 1 / ((1/n0) e^(kt) + (k1/k)(e^(kt) - 1)) (3.16)

Let us study the function (3.16) for k = 0, k < 0, and k > 0.

As k → 0 the term (e^(kt) - 1) k1/k becomes an indeterminate form of the type 0·∞. Reducing it to the form 0/0 and applying L'Hopital's rule with respect to k (or simply expanding e^(kt) ≈ 1 + kt), we obtain (e^(kt) - 1) k1/k → k1 t, so that in the limit

n(t) = 1 / (1/n0 + k1 t)

and n(t) → 0 as t → ∞.

For small n the quadratic term in (3.8) may be neglected. Linearizing the equation in this way, we get

ln n = -kt + const

Let us plot n(t) for these cases (Fig. 3.3).

Fig. 3.3. Self-organization in a single-mode laser:

curve 1: k < 0, laser generation mode;

curve 2: k = 0, bifurcation point (threshold);

curve 3: k > 0, lamp mode.

For k = 0 equation (3.8) takes the form dn/dt = -k1 n²; solving it, we get

n(t) = n0 / (1 + k1 n0 t)

As t → ∞, n(t) approaches a stationary value regardless of the initial value n0, but depending on the signs of k and k1 (see Fig. 3.3). Thus the function n(t) takes the stationary values n = 0 for k ≥ 0 and n = -k/k1 = |k|/k1 > 0 for k < 0.
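The three regimes can be cross-checked numerically. The sketch below integrates (3.8), dn/dt = -kn - k1 n², by the Euler method; the values of k, k1, and n0 are illustrative only:

```python
def integrate_laser(n0, k, k1, t_end=200.0, dt=1e-3):
    """Euler integration of the single-mode laser equation
    dn/dt = -k*n - k1*n**2  (eq. 3.8)."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += dt * (-k * n - k1 * n * n)
    return n

k1 = 0.5
# Below threshold (k > 0): the photon number decays to zero (lamp mode).
print(round(integrate_laser(n0=1.0, k=0.2, k1=k1), 6))   # 0.0

# Above threshold (k < 0): n approaches the stationary value -k/k1 = 0.4.
n_inf = integrate_laser(n0=0.01, k=-0.2, k1=k1)
print(abs(n_inf - 0.4) < 1e-6)                           # True
```

The k < 0 run reproduces the stationary lasing intensity |k|/k1 independently of the small initial photon number, in agreement with the analytic solution (3.16).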

3.3. BIOLOGICAL SYSTEMS.

3.3.1. POPULATION DYNAMICS.

Extensive information has been collected on the distribution and abundance of species. A macroscopic characteristic describing a population is the number of individuals in it; this number plays the role of an order parameter. If different species are supported by a common food resource, interspecific struggle begins, and Darwin's principle applies: the fittest species survives. (One cannot fail to note the strong analogy between the competition of laser modes and interspecific struggle.) If several kinds of food resources are available, coexistence of species becomes possible. The number of individuals of a species may be subject to temporal fluctuations.

ONE SPECIES.

Consider first a single population with n individuals. In the presence of a food resource A, individuals reproduce at the rate

kAn

and die at the rate

dn

Here k and d are birth and death coefficients, in general depending on the parameters of the environment. If the amount of food were unlimited, the evolution equation would be

dn/dt = (kA - d)n = an,   where we introduce the notation a = kA - d

This equation is linear and would describe unlimited exponential growth (for kA > d) or exponential extinction (for kA < d) of the population.

Fig. 3.4. Curve 1: exponential growth, a > 0, kA > d; curve 2: exponential extinction, a < 0, kA < d.

In general, however, food resources are limited: food is consumed at the rate kAn. The food resource may in general also be restored at some rate; here we consider the limiting case in which the total amount of organic matter is conserved:

A + n = N = const

where N is the capacity of the habitat to support the population.

Then, taking A = N - n into account, we obtain the following equation for the evolution of a single-species population (the Verhulst logistic equation):

dn/dt = kn(N - n) - dn (3.17)

We solve equation (3.17) analytically, rewriting it as

dn/dt = n(kN - d - kn) = n(k1 - kn),   where we denote kN - d = k1

Separating the variables and using the table integral

∫ dn / (n(k1 - kn)) = (1/k1) ln( n / (k1 - kn) )

we obtain (1/k1) ln( n / (k1 - kn) ) = t + const, i.e.

n / (k1 - kn) = c e^(k1 t)

where c is an integration constant. Solving this for n gives

n(t) = k1 c e^(k1 t) / (1 + k c e^(k1 t))

so that n(t) → k1/k as t → ∞. The initial condition n(t=0) = n0 gives c = n0 / (k1 - k n0). Substituting c into the solution, using k1 = kN - d, and cancelling k (the birth coefficient), we finally obtain the solution of equation (3.17):

n(t) = n1 n0 e^(k1 t) / ( n1 + n0 (e^(k1 t) - 1) ),   where n1 = k1/k

So an analytical solution of the logistic equation has been obtained; it shows that population growth stops at a finite stationary level

n1 = k1/k = (kN - d)/k = N - d/k

that is, the parameter n1 gives the height of the saturation plateau to which n(t) tends with time. The parameter n0 gives the initial population size, n0 = n(t0). Indeed, n(t) → n1 as t → ∞, so n1 is the maximum number of individuals of the species in the given habitat; in other words, n1 characterizes the capacity of the environment with respect to the given population. Finally, the parameter kN - d sets the steepness of the initial growth.

Note that for a small initial number of individuals n0 the initial growth of the population is almost exponential, n(t) ≈ n0 e^((kN - d) t).

Fig. 3.5. The logistic curve (evolution of a single-species population).

The solution of equation (3.17) is represented by the logistic curve (Fig. 3.5). The evolution is completely deterministic: the population stops growing when the resource of the environment is exhausted.
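The saturation behaviour can be verified numerically. The sketch below Euler-integrates the Verhulst equation (3.17), dn/dt = kn(N - n) - dn, with hypothetical ecological parameters; the trajectory must saturate at the level n1 = N - d/k:

```python
def verhulst(n0, k, capacity, d, t_end=100.0, dt=1e-3):
    """Euler integration of the logistic equation
    dn/dt = k*n*(N - n) - d*n  (eq. 3.17), with N = capacity."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += dt * (k * n * (capacity - n) - d * n)
    return n

k, N, d = 0.02, 100.0, 0.4           # hypothetical birth/capacity/death values
n1 = N - d / k                        # saturation plateau, here 80.0
print(abs(verhulst(1.0, k, N, d) - n1) < 1e-6)   # True: growth stops at n1
```

Starting from a small n0 the trajectory first grows almost exponentially at the rate kN - d and then flattens onto the plateau n1, reproducing the logistic curve of Fig. 3.5.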

Self-organization appears here through the limited food resource: the system organizes itself, and the explosive growth of the population (curve 1 in Fig. 3.4) is replaced by a saturation curve.

We emphasize that the description of this biological system uses the conceptual and physico-mathematical apparatus of nonlinear non-equilibrium thermodynamics.

It may happen, however, that after events not controlled within the framework of the model, new species (characterized by different ecological parameters k, N, and d) appear in the same environment, initially in small numbers. Such an ecological fluctuation raises the question of structural stability: the new species can either disappear or displace the original inhabitants. Using linear stability analysis, it is not difficult to show that the new species (with parameters k′, N′, d′) displace the old ones only if their stationary level is higher:

N′ - d′/k′ > N - d/k

The sequence in which species fill an ecological niche is shown in Figure 3.6.

Fig. 3.6. Successive filling of an ecological niche by various species.

This model makes it possible to give a precise quantitative meaning to the statement "survival of the fittest" within the framework of the problem of filling a given ecological niche.

3.3.2. THE "PREY - PREDATOR" SYSTEM.

Consider a system consisting of two species, a "prey" and a "predator" (for example, rabbits and foxes); the evolution and self-organization of such a system look different from the previous case.

Let the biological system contain two populations - the "prey", rabbits (K), and the "predators", foxes (L) - with numbers K and L respectively.

Let us now carry out an argument that will allow us to explain the existence of dissipative structures.

Rabbits (K) eat grass (T). Assume that the supply of grass is constant and inexhaustible. Then, the simultaneous presence of grass and rabbits contributes to the unlimited growth of the rabbit population. This process can be symbolically depicted as follows:

Rabbits + Grass → More Rabbits

The fact that in the rabbits' country there is always plenty of grass is quite analogous to the continuous supply of thermal energy in the problem of Benard cells. We shall soon see that the process as a whole is dissipative (much like the Benard process).

The "rabbits - grass" reaction proceeds spontaneously in the direction of increasing the rabbit population, which is a direct consequence of the second law of thermodynamics.

But into our picture, where rabbits frolic peacefully, predatory foxes (L) have crept, for whom the rabbits are prey. Just as the rabbit population grows by eating grass, the fox population grows by eating rabbits:

Foxes + Rabbits → More Foxes

In their turn, the foxes, like the rabbits, fall victim - this time to man. The process taking place is

Foxes → Furs

The final product - furs - plays no direct role in the further course of the process. It can, however, be regarded as a carrier of the energy withdrawn from the system, to which that energy was originally supplied (for example, in the form of grass).

Thus, in an ecological system there is also a flow of energy - similar to how it takes place in a chemical test tube or a biological cell.

In reality, there are periodic oscillations of the rabbit and fox populations: a rise in the number of rabbits is followed by a rise in the number of foxes, which is replaced by a decline in the number of rabbits, accompanied by an equally sharp decline in the number of foxes, then by a renewed rise in the number of rabbits, and so on (Fig. 3.7).

Fig. 3.7. Change of the rabbit and fox populations with time. The presence of periodicity means the emergence of an ecological structure.

Over time, the numbers of both populations change in accordance with the successive passage of the points on the graph. After some time (its specific value depends on the rate at which the foxes eat the rabbits, as well as on the reproduction rates of both species) the whole cycle begins anew.

The behavior of the populations at different degrees of fecundity, and with different abilities to avoid extermination, can be studied quantitatively with the POPULATION program (see the Appendix).

This program implements the solution of the equations of the dissipative structure "rabbits - foxes" and displays the result graphically. The system of differential equations being solved is

dK/dt = k1·T·K - k2·K·L,

dL/dt = k2·K·L - k3·L.

Here K, L, T denote, respectively, the numbers of rabbits and foxes and the amount of grass; the coefficients k1, k2, k3 denote, respectively, the birth rate of rabbits, the rate at which foxes eat rabbits, and the death rate of foxes.
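These dynamics can be sketched in a few lines. The sketch below assumes the standard Lotka-Volterra form implied by the coefficients k1, k2, k3 just described; a simple Runge-Kutta integrator stands in for the actual POPULATION program, and the parameter values follow the typical settings quoted in the text (coefficients and grass close to 1, initial populations 0.4):

```python
# Sketch of the "rabbits - foxes" dissipative structure:
# dK/dt = k1*T*K - k2*K*L ;  dL/dt = k2*K*L - k3*L
# The 4th-order Runge-Kutta integrator is an assumption of this sketch.

def rk4_step(K, L, dt, k1=1.0, k2=1.0, k3=1.0, T=1.0):
    """Advance the populations (K, L) by one time step dt."""
    def f(K, L):
        return (k1 * T * K - k2 * K * L, k2 * K * L - k3 * L)
    a = f(K, L)
    b = f(K + dt / 2 * a[0], L + dt / 2 * a[1])
    c = f(K + dt / 2 * b[0], L + dt / 2 * b[1])
    d = f(K + dt * c[0], L + dt * c[1])
    return (K + dt / 6 * (a[0] + 2 * b[0] + 2 * c[0] + d[0]),
            L + dt / 6 * (a[1] + 2 * b[1] + 2 * c[1] + d[1]))

K, L = 0.4, 0.4                  # initial populations, as in the text
rabbits = [K]
for _ in range(7000):            # total time 700 with step 0.1
    K, L = rk4_step(K, L, 0.1)
    rabbits.append(K)

# The rabbit population oscillates periodically instead of growing
# without limit: maxima well above and minima below the start value.
print(min(rabbits), max(rabbits))
```

The periodic rise and fall of K reproduces qualitatively the ecological structure of Fig. 3.7.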

The program asks for the ratio of the coefficients (approximately equal to 1), the constant amount of grass (likewise usually taken equal to 1), the initial values of the rabbit and fox populations (usually 0.4), the cycle time (typical value 700) and the time step (usually equal to 1).

The POPULATION program outputs a graph. It shows the behavior of the populations at different degrees of fertility and with different abilities to avoid extermination.

As already noted, such periodic oscillations of the rabbit and fox populations show that the system is self-organizing.

The program is attached.

CONCLUSION.

We have seen that the irreversibility of time is closely related to instabilities in open systems. I.R. Prigogine defines two times. One is dynamic, which allows you to specify a description of the motion of a point in classical mechanics or a change in the wave function in quantum mechanics. Another time is the new internal time, which exists only for unstable dynamical systems. It characterizes the state of the system associated with entropy.

The processes of biological or social development do not have an end state. These processes are unlimited. Here, on the one hand, as we have seen, there is no contradiction with the second law of thermodynamics, and on the other hand, the progressive nature of development (progress) in an open system is clearly visible. Development is associated, generally speaking, with the deepening of disequilibrium, and therefore, in principle, with the improvement of the structure. However, as the structure becomes more complex, the number and depth of instabilities and the probability of bifurcation increase.

Successes in solving many problems made it possible to single out general patterns in them, introduce new concepts and, on this basis, formulate a new system of views - synergetics. It studies the issues of self-organization and therefore should give a picture of the development and principles of self-organization of complex systems in order to apply them in management. This task is of great importance, and, in our opinion, progress in its study will mean progress in solving global problems: problems of controlled thermonuclear fusion, environmental problems, control problems, and others.

We understand that all the examples given in the work refer to model problems, and for many professionals working in the relevant fields of science, they may seem too simple. They are right about one thing: the use of the ideas and concepts of synergetics should not replace a deep analysis of a specific situation. Finding out what the path from model tasks and general principles to a real problem can be is up to specialists. Briefly, we can say this: if one of the most important processes (or a small number of them) can be distinguished in the system under study, then synergetics will help to analyze it. It indicates the direction in which to move. And, apparently, this is already a lot.

The study of most real non-linear problems was impossible without a computational experiment, without the construction of approximate and qualitative models of the processes under study (synergetics plays an important role in their creation). Both approaches complement each other. The effectiveness of one is often determined by the success of the other. Therefore, the future of synergetics is closely connected with the development and wide use of computational experiments.

The simplest nonlinear media studied in recent years have complex and interesting properties. Structures in such media can develop independently and be localized, can multiply and interact. These models can be useful in studying a wide range of phenomena.

It is known that there is some disunity between natural science and humanitarian cultures. Rapprochement, and in the future, perhaps, harmonious mutual enrichment of these cultures can be carried out on the basis of a new dialogue with nature in the language of thermodynamics of open systems and synergetics.

LITERATURE:

1. Bazarov I.P. Thermodynamics. - M.: Higher school, 1991

2. Glansdorff P., Prigogine I. Thermodynamic theory of structure, stability and fluctuations. - M.: Mir, 1973

3. Carey D. Order and disorder in the structure of matter. - M.: Mir, 1995

4. Kurdyumov S.P., Malinetsky G.G. Synergetics - the theory of self-organization. Ideas, methods, prospects. - M.: Knowledge, 1983

5. Nicolis G., Prigogine I. Self-organization in non-equilibrium systems. - M.: Mir, 1979

6. Nicolis G., Prigogine I. Knowledge of the complex. - M.: Mir, 1990

7. Petrovsky I.G. Lectures on the theory of differential equations. - M.: MGU, 1980

8. Popov D.E. Interdisciplinary communications and synergetics. - KSPU, 1996

9. Prigogine I. Introduction to thermodynamics of irreversible processes. - M.: Foreign Literature, 1960

10. Prigogine I. From existing to emerging. - M.: Nauka, 1985

11. Synergetics, collection of articles. - M.: Mir, 1984

12. Haken G. Synergetics. - M.: Mir, 1980

13. Haken G. Synergetics. Hierarchy of instabilities in self-organizing systems and devices. - M.: Mir, 1985

14. Shelepin L.A. Far from balance. - M.: Knowledge, 1987

15. Eigen M., Schuster P. Hypercycle. Principles of self-organization of macromolecules. - M.: Mir, 1982

16. Atkins P. Order and disorder in nature. - M.: Mir, 1987

    Subject, content and objectives of the course "Physical and Colloidal Chemistry". The importance of physical and colloidal chemistry for the technology of food processes.

Subject and content: Physical chemistry is the main theoretical foundation of modern chemistry, using the theoretical methods of such important sections of physics as quantum mechanics, statistical physics and thermodynamics, nonlinear dynamics, field theory, etc. It includes the study of the structure of matter, including: the structure of molecules, chemical thermodynamics, chemical kinetics and catalysis. Electrochemistry, photochemistry, the physical chemistry of surface phenomena (including adsorption), radiation chemistry, the theory of metal corrosion, the physicochemistry of macromolecular compounds, etc., are also distinguished as separate sections in physical chemistry.

Tasks:

    Can a reaction proceed spontaneously?

    If the reaction proceeds, how deep does it proceed (what are the equilibrium concentrations of the reaction products?)

    If the reaction proceeds, at what rate?

The importance of physical and colloidal chemistry for the technology of food processes.

Food process technology is the science of the chemical, physico-chemical and physical methods of processing food raw materials into finished products (commercial products). Physical and colloidal chemistry is of great importance for food technology. The raw materials used in the food industry and in public catering, and the resulting products, are in most cases colloidal or high-molecular systems. Such food-production processes as boiling, separation, distillation, extraction, crystallization, dissolution and hydration can only be substantiated by the laws of physical chemistry. All biochemical processes underlying food production are also subject to the laws of physical chemistry.

The quality control of food products is also based on the methods of physical chemistry - the determination of acidity, the content of sugars, fats, water, vitamins and proteins.

Thus, for a rational construction of the technological process of processing raw materials and an objective assessment of the quality of the resulting products, a food production specialist needs to know and be able to put into practice the laws of physical and colloidal chemistry.

    Elements of the doctrine of the structure of matter: polarization, refraction and intermolecular interactions.

Polarization is the displacement of electrons and atoms, and also the orientation of molecules, under the action of an external electric field. Polarization can be:

    Orientational - expresses the orientation of molecules in an electric field; it depends on the temperature.

    Deformational - characteristic of both polar and non-polar molecules; it is practically independent of temperature.

One must distinguish the polarizability of molecules from the polarization of a substance. The latter is characteristic only of dielectrics.

Refraction is the same as the refraction of light, i.e., a change in the direction of light waves upon a change in the refractive index of the medium through which the rays pass.

Intermolecular interaction is a relatively weak bonding of molecules with one another that does not break existing chemical bonds or form new ones (van der Waals forces). It is based on the interaction of the electrons and nuclei of one molecule with the nuclei and electrons of another. The total energy of intermolecular interaction consists of the following components: E = E_el + E_pol + E_disp, where the terms are the energies of electrostatic, polarization and dispersion interaction.

Electrostatic - determined by the Coulomb forces of attraction between the dipoles of polar molecules; it acts in crystals, in gases and in liquids.

Polarization - the bond of a dipole with another, induced dipole, arising from the deformation of the electron shell of one molecule under the influence of the electric field of another, which leads to attraction of the molecules.

Dispersion - possible between any molecules, both polar and non-polar.

    Chemical thermodynamics and its features, thermodynamic systems and its parameters, thermodynamic processes.

Chemical thermodynamics - a branch of physical chemistry that studies the processes of interaction of substances by the methods of thermodynamics.

The main areas of chemical thermodynamics are:

    Classical chemical thermodynamics, studying thermodynamic equilibrium in general.

    Thermochemistry, which studies the thermal effects that accompany chemical reactions.

    The theory of solutions that models the thermodynamic properties of a substance based on the concept of molecular structure and data on intermolecular interaction.

Chemical thermodynamics is closely related to such sections of chemistry as:

    analytical chemistry;

    electrochemistry;

    colloid chemistry;

    adsorption and chromatography.

A system is a part of the surrounding world; everything else is the environment. (For example, the contents of a flask are the system, and the flask itself belongs to the environment.)

Systems are classified according to several criteria:

    by the nature of the exchange with the environment of energy and matter:

    Isolated - can exchange neither matter nor energy (example: a thermos).

    Closed - can exchange energy, but not matter (a sealed ampoule with a substance).

    Open - can exchange both matter and energy (a pot of food).

    According to the number of phases, the systems are divided into:

    Homogeneous - consist of a single phase (true solutions).

    Heterogeneous - contain several phases separated from each other by interfaces (drinks with ice).

A phase is a set of homogeneous parts of a system having the same composition and thermodynamic properties, separated from the other parts by interfaces.

    By the number of components - the individual chemical substances that make up the system, which can be isolated from it and exist independently - systems are divided into:

    One-component

    Two-component

    Multicomponent - such systems can be either homogeneous or heterogeneous (tea and jelly).

A thermodynamic process is the transition of a thermodynamic system from one state to another, which is always associated with a disturbance of the equilibrium of the system. Usually during a process some one parameter of state is held constant:

    Isothermal - at a constant temperature (T=const).

    Isobaric - at constant pressure (P=const).

    Isochoric - at constant volume (V=const).

    Adiabatic - in the absence of heat exchange (δQ = 0).

During a process in non-isolated systems, heat can be either absorbed or released:

    Exothermic (release of heat)

    Endothermic (absorption of heat)

There is a separate type of processes that occur on their own and do not require energy from outside - spontaneous processes (a gas filling the entire volume of a vessel). A non-spontaneous process requires energy to be drawn from the environment.

4. Equilibrium state - the zeroth law of thermodynamics.

Thermodynamic equilibrium is the state of a system in which its macroscopic quantities (temperature, pressure, volume, entropy) remain unchanged in time under conditions of isolation from the environment. Strictly speaking, these quantities are not constant; they merely fluctuate around their average values.

Four laws of thermodynamics are known: the zeroth, the first, the second and the third.

    Zeroth - determines the condition of the equilibrium state of a system (often the zeroth law is not stated explicitly, and only the conditions of the equilibrium state of the system are considered). It rests on postulates that are not refuted by practice:

    Under constant external conditions, the system in the equilibrium state does not change in time;

    If a system is in equilibrium, then all parts of the system are in equilibrium.

The equality of temperature in all parts of a system in equilibrium is called the zeroth law of thermodynamics.

Zeroth law of thermodynamics: if each of two systems is in thermal equilibrium with a third system, then these two systems are also in thermal equilibrium with each other.

If the temperature is the same throughout the system, the system is in equilibrium. Another conclusion follows from the zeroth law - additivity: any quantity characterizing a property of the system as a whole is equal to the sum of the corresponding quantities of its individual parts, no matter how the system is divided into parts.

    State functions, first law of thermodynamics, heat capacity.

A state function in thermodynamics is a function of the independent parameters that determine the equilibrium state of a thermodynamic system; it does not depend on the path (the nature of the process) by which the system arrived at the equilibrium state under consideration (i.e., it does not depend on the history of the system). The state functions include, in particular, the characteristic functions of the system:

    internal energy;

    entropy;

    enthalpy, etc.

Thermodynamic work and the amount of heat are not state functions, since their value is determined by the type of process, as a result of which the system changed its state.

First law of thermodynamics - one of the three basic laws of thermodynamics, is the law of conservation of energy for thermodynamic systems.

The first law of thermodynamics was formulated in the middle of the 19th century as a result of the work of the German scientist J.R. Mayer, the English physicist J.P. Joule and the German physicist G. Helmholtz. According to the first law of thermodynamics, a thermodynamic system can do work only due to its internal energy or any external energy sources. The first law of thermodynamics is often formulated as the impossibility of the existence of a perpetual motion machine of the first kind, which would do work without drawing energy from any source.

Formulation:

The amount of heat received by the system goes to changing its internal energy and doing work against external forces.

HEAT CAPACITY is the amount of heat required to change the temperature by 1 °C. By a more rigorous definition, the heat capacity is a thermodynamic quantity defined by the expression

C = δQ/dT,

where δQ is the amount of heat imparted to the system that causes a change of its temperature by dT. The ratio of the finite differences ΔQ/ΔT is called the average heat capacity; the ratio of the infinitesimal quantities δQ/dT is the true heat capacity. Since δQ is not a total differential of a state function, the heat capacity depends on the path of transition between the two states of the system. One distinguishes the heat capacity of the system as a whole (J/K), the specific heat capacity [J/(g·K)] and the molar heat capacity [J/(mol·K)]. In all formulas below, molar heat capacities are used.
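The difference between the average and the true heat capacity can be illustrated numerically. The sketch below assumes one mole of a monatomic ideal gas heated at constant volume, for which the heat supplied is (3/2)R·ΔT; for such a gas the two capacities coincide:

```python
# Average vs. true heat capacity, assuming 1 mol of a monatomic
# ideal gas heated at constant volume: Q(T1 -> T2) = (3/2)*R*(T2 - T1).
R = 8.314  # J/(mol*K)

def heat_supplied(T1, T2):
    """Heat needed to warm the gas from T1 to T2 at V = const."""
    return 1.5 * R * (T2 - T1)

# Average heat capacity over a finite interval (ratio dQ/dT)
C_avg = heat_supplied(300.0, 400.0) / (400.0 - 300.0)

# "True" heat capacity from a near-infinitesimal temperature step
dT = 1e-6
C_true = heat_supplied(300.0, 300.0 + dT) / dT

print(C_avg, C_true)  # both equal (3/2)R for the ideal gas
```

For a real substance, where Q(T) is not linear, the two values would differ, which is exactly the point of the definition above.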

    Thermal effects of chemical reactions, determination of thermal effects.

As is known, every physicochemical transformation of matter is accompanied by a transformation of energy. To compare the energy changes in different reactions, thermodynamics uses the concept of the thermal effect, i.e., the amount of heat released or absorbed in a chemical process, provided that the initial and final temperatures are equal. The thermal effect is usually referred to one mole of the reactant and expressed in joules.

Thermal effects differ from each other if the processes take place in a closed vessel (at a constant volume V=const) or in an open vessel (at a constant pressure P=const).

The thermal effect at constant volume is equal to the loss of internal energy, Q(V,T) = -ΔU(T), and at constant pressure to the loss of enthalpy, Q(P,T) = -ΔH(T).

The thermal effect does not depend on intermediate stages, but is determined only by the initial and final state of the system, provided that the only work done by the system is work against external pressure and that the pressure or volume remains unchanged throughout the process (Hess's law). With the help of Hess's law, various thermochemical calculations are made.
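An example of such a thermochemical calculation: because enthalpy is a state function, the heat of a reaction follows from the formation enthalpies of the participants. The reaction chosen below (CO + 1/2 O2 → CO2) and the handbook formation enthalpies are an assumed illustration, not data from the text:

```python
# Hess's law sketch: the thermal effect depends only on the initial
# and final states.  Standard formation enthalpies (kJ/mol) are
# common handbook values; the reaction is an illustrative assumption.
dHf = {"C": 0.0, "O2": 0.0, "CO": -110.5, "CO2": -393.5}

def reaction_enthalpy(products, reactants):
    """dH = sum(n * dHf, products) - sum(n * dHf, reactants)."""
    total = lambda side: sum(n * dHf[s] for s, n in side.items())
    return total(products) - total(reactants)

# CO + 1/2 O2 -> CO2: hard to measure cleanly in a calorimeter,
# but its heat follows from the formation enthalpies alone.
dH = reaction_enthalpy({"CO2": 1}, {"CO": 1, "O2": 0.5})
print(dH)  # -283.0 kJ/mol: exothermic
```

This is precisely the practical use of Hess's law: combining known heats to obtain one that is inconvenient to measure directly.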

    The second law of thermodynamics, entropy.

Second law of thermodynamics - a physical principle that imposes a restriction on the direction of the processes of heat transfer between bodies.

The second law of thermodynamics forbids the so-called perpetual motion machines of the second kind, showing that it is impossible to turn all the internal energy of a body into useful work.

The second law of thermodynamics is a postulate that cannot be proved within the framework of thermodynamics. It was created on the basis of a generalization of experimental facts and received numerous experimental confirmations.

There are several equivalent formulations of the second law of thermodynamics:

Postulate of Clausius : "A process is impossible, the only result of which would be the transfer of heat from a colder body to a hotter one" (such a process is called the Clausius process).

Thomson's postulate: “There is no circular process, the only result of which would be the production of work by cooling the heat reservoir” (such a process is called the Thomson process).

Thermodynamic entropy (S), often simply called entropy, is in chemistry and thermodynamics a function of the state of a thermodynamic system; its existence is postulated by the second law of thermodynamics.

The concept of entropy was first introduced in 1865 by Rudolf Clausius. He defined the change of entropy of a thermodynamic system in a reversible process as the ratio of the heat increment to the absolute temperature T:

dS = δQ/T,

where dS is the increment (differential) of entropy and δQ an infinitesimal increment of the amount of heat.
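As a numerical illustration of the Clausius definition, assume reversible heating of a body with constant heat capacity C: integrating dS = δQ/T = C·dT/T gives ΔS = C·ln(T2/T1). The sketch below verifies this by summing small steps; the heat capacity and temperatures are assumed values:

```python
import math

# Entropy change from the Clausius definition dS = dQ/T for
# reversible heating at constant heat capacity C.  The value of C
# (roughly liquid water) and the temperatures are assumptions.
C = 75.3           # J/(mol*K)
T1, T2 = 300.0, 350.0

# Numerical integration of dS = C*dT/T in many small steps
steps = 100000
dT = (T2 - T1) / steps
S_numeric = sum(C * dT / (T1 + (i + 0.5) * dT) for i in range(steps))

# Closed form obtained by integrating C*dT/T from T1 to T2
S_exact = C * math.log(T2 / T1)
print(S_numeric, S_exact)  # both about 11.6 J/(mol*K)
```

The agreement of the stepwise sum with C·ln(T2/T1) shows how the differential definition accumulates into a finite entropy change.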

    Processes in non-isolated systems: Gibbs energy and Helmholtz energy, criteria for equilibrium and spontaneous processes, maximum Gibbs-Helmholtz equations.

Gibbs free energy (also simply Gibbs energy, Gibbs potential, or thermodynamic potential in the narrow sense) is a quantity showing the change of energy in the course of a chemical reaction and thus answering the question of whether the reaction is possible in principle; it is the thermodynamic potential of the form

G = U + PV - TS,

where U is the internal energy, P the pressure, V the volume, T the absolute temperature and S the entropy.

Helmholtz free energy (or simply free energy) is the thermodynamic potential whose loss in a quasi-static isothermal process equals the work done by the system on external bodies.

The Helmholtz free energy for a system with a constant number of particles is defined as

A = U - TS,

where U is the internal energy, T the absolute temperature and S the entropy.

In accordance with the second law of thermodynamics, the criterion of a spontaneous process is the growth of entropy. If the entropy factor, which determines the possibility of spontaneous processes, relates to the enthalpy factor as TdS ≥ dU (and for an isobaric process TdS ≥ dH), then from the equations

dA = dU - TdS, or ΔA = ΔU - TΔS,

dG = dH - TdS, or ΔG = ΔH - TΔS,

it follows that dA ≤ 0, or ΔA ≤ 0; dG ≤ 0, or ΔG ≤ 0.

The equality corresponds to an equilibrium process, the "less than" sign to a spontaneous one. These relations are fundamental for calculating and determining the conditions of equilibrium and of spontaneous processes in non-isolated systems.
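A small sketch of the criterion ΔG = ΔH - TΔS for a non-isolated system at constant T and P. The melting of ice serves as an assumed example (handbook values ΔH ≈ +6010 J/mol, ΔS ≈ +22.0 J/(mol·K)):

```python
# Spontaneity criterion dG = dH - T*dS at constant T and P.
# The numbers are handbook values for melting ice, used here
# only as an illustrative example.

def gibbs_change(dH, dS, T):
    """dG = dH - T*dS; dG < 0 means a spontaneous process."""
    return dH - T * dS

dH, dS = 6010.0, 22.0        # J/mol and J/(mol*K)
for T in (263.0, 283.0):
    dG = gibbs_change(dH, dS, T)
    verdict = "spontaneous" if dG < 0 else "non-spontaneous"
    print(T, round(dG, 1), verdict)
# Below about 273 K melting is non-spontaneous (dG > 0); above it
# the entropy term T*dS outweighs dH and dG becomes negative.
```

The example shows how the enthalpy and entropy factors compete, with temperature deciding which one wins.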

The Gibbs-Helmholtz equations are equations of maximum work.

They make it possible to establish a relationship between the maximum work of an equilibrium process and the heat of a nonequilibrium process:

Helmholtz equation: ΔA = ΔU + T(∂ΔA/∂T)V. Gibbs equation: ΔG = ΔH + T(∂ΔG/∂T)P. These equations relate the functions A and G to their temperature derivatives and make it possible to calculate the maximum work through the temperature coefficient of the Helmholtz function or of the Gibbs function.

    Third law of thermodynamics.

Formulation: as the temperature approaches 0 K, the entropy of any equilibrium system in isothermal processes ceases to depend on any thermodynamic parameters and in the limit (at T = 0 K) takes one and the same universal constant value for all systems, which may be taken equal to zero.

"The entropy increment as the absolute zero of temperature is approached tends to a finite limit that does not depend on the equilibrium state of the system":

lim (T→0) (∂S/∂x)T = 0,

where x is any thermodynamic parameter.

The third law of thermodynamics applies only to equilibrium states.

The third law of thermodynamics allows you to find the absolute value of entropy, which cannot be done within the framework of classical thermodynamics (based on the first and second laws of thermodynamics).

Consequences:

    Unattainability of absolute zero temperature. It follows from the third law of thermodynamics that absolute zero cannot be reached in any finite process associated with a change of entropy; it can only be approached asymptotically. The third law is therefore sometimes formulated as the principle of the unattainability of absolute zero temperature.

    Behavior of thermodynamic coefficients. A number of consequences follow from the third law: as T → 0, the heat capacities at constant pressure and at constant volume, the coefficients of thermal expansion, and some similar quantities must tend to zero. The validity of the third law was at one time questioned, but it was later found that all apparent contradictions (a non-zero entropy of a number of substances at T = 0) are associated with metastable states of matter, which cannot be considered thermodynamically equilibrium ones.

    Violations of the third law of thermodynamics in models. The third law is often violated in model systems. Thus, as T → 0, the entropy of a classical ideal gas tends to minus infinity. This shows that at low temperatures the Mendeleev-Clapeyron equation no longer describes the behavior of real gases adequately.

Thus, the third law of thermodynamics indicates the insufficiency of classical mechanics and statistics and is a macroscopic manifestation of the quantum properties of real systems.

In quantum mechanics, however, in model systems the third law can also be violated. These are all cases where the Gibbs distribution applies and the ground state is degenerate.

The non-observance of the third law in the model, however, does not exclude the possibility that this model can be quite adequate in some range of changes in physical quantities.

    Chemical potential is a factor of intensity of physical and chemical processes.

The chemical potential is the partial derivative of one of the characteristic functions (G, A, U, H), most often the Gibbs energy, with respect to the number of moles of one component, the numbers of moles of the remaining components and the corresponding state parameters being held constant:

dG ≤ -SdT + Vdp + (∂G/∂n1)T,p,n2,...,nk dn1 + ... + (∂G/∂ni)T,p,n1,...,ni-1,ni+1,...,nk dni + ... + (∂G/∂nk)T,p,n1,...,nk-1 dnk.

The "less than" sign refers to a spontaneous process, the equality sign to an equilibrium one. The partial derivative of the Gibbs energy with respect to one of the varying components is the chemical potential μ.

How is the value of the chemical potential determined? For a one-component system one can write μ = G/n = GM, where GM is the molar Gibbs energy, or molar thermodynamic potential, of the component (individual substance). Thus, the chemical potential of a component is identical with its molar Gibbs energy.

    Equilibrium constants, equilibrium constants taking into account real conditions.

According to the second law of thermodynamics, a spontaneous process is accompanied by a decrease in the free energy of the system and ends with the establishment of an equilibrium state - chemical equilibrium.

Chemical equilibrium is expressed by equilibrium constants.

The equilibrium constant is equal to the ratio of the product of the equilibrium concentrations of the reaction products to the product of the concentrations of the starting substances, each raised to the power of its stoichiometric coefficient: Kc = Π ci^vi (products) / Π cj^vj (starting substances).

In the general case of a reaction v1A1 + ... + vqAq ⇌ v'1A'1 + ... + v'rA'r, where vi and v'j are the stoichiometric coefficients of the starting substances Ai (i = 1, 2, ..., q) and of the reaction products A'j (j = 1, 2, ..., r), the equilibrium constant is expressed through their respective activities.
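As an illustration, the equilibrium constant can be computed directly from equilibrium concentrations. The reaction chosen below (ammonia synthesis, N2 + 3 H2 ⇌ 2 NH3) and all the concentration values are hypothetical:

```python
# Equilibrium constant Kc as the ratio of products of equilibrium
# concentrations raised to their stoichiometric powers.  The reaction
# N2 + 3 H2 = 2 NH3 and the concentrations are assumed illustrations.

def equilibrium_constant(conc, products, reactants):
    """Kc = prod(c_i^v_i, products) / prod(c_j^v_j, reactants)."""
    numerator = 1.0
    for species, nu in products.items():
        numerator *= conc[species] ** nu
    denominator = 1.0
    for species, nu in reactants.items():
        denominator *= conc[species] ** nu
    return numerator / denominator

conc = {"N2": 0.5, "H2": 1.5, "NH3": 0.2}   # mol/L, assumed values
Kc = equilibrium_constant(conc, {"NH3": 2}, {"N2": 1, "H2": 3})
print(Kc)  # 0.2**2 / (0.5 * 1.5**3)
```

For real gases and real solutions the concentrations in this expression are replaced by fugacities or activities, as discussed next.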

Fugacity is the pressure of a real gas at which the gas behaves like an ideal one. The use of fugacity instead of partial pressures and the use of ratios is practiced at relatively high pressures and low temperatures, when there is a significant deviation of real gases from ideal ones. Fugacity is the value that must be substituted into the expression for the chemical potential of an ideal gas in order to obtain the value of the chemical potential of a real gas.

For the transition from ideal solutions to real solutions, the concept of activity is introduced - this is the concentration at which real solutions acquire the thermodynamic properties of ideal solutions. The activity of a component in a solution is the value that should be substituted into the expression for the chemical potential of a component in an ideal solution in order to obtain a real expression for the chemical potential of a real solution.

    Isotherm, isochore and isobar of a chemical reaction; the chemical variable and chemical affinity.

1. Isothermal (T = const): the internal energy of an ideal gas does not change, so

δQ = δA = pdV.

2. Isochoric (V = const):

δA = pdV = 0, δQ = dU.

3. Isobaric (P = const):

A = p(V2 - V1).

4. Adiabatic (δQ = 0):

pdV = -CVdT, so A = -CV(T2 - T1).

The chemical variable is the ratio of the change in the number of moles of a component in a chemical reaction to its stoichiometric coefficient; it is the same for all components and characterizes the completeness of the reaction. Chemical affinity characterizes the deviation of the system from the state of chemical equilibrium.

CHEMICAL AFFINITY (affinity of a reaction) is a thermodynamic parameter of the system characterizing its deviation from the state of chemical equilibrium. If the reaction is written as the equation

v1L1 + ... + vkLk → vk+1Lk+1 + ... + vk+mLk+m,

where L1, ..., Lk are the initial reagents, Lk+1, ..., Lk+m the reaction products, and v1, ..., vk, vk+1, ..., vk+m the stoichiometric coefficients, then the chemical affinity is expressed through the chemical potentials of the participants of the reaction.

    Le Chatelier-Brown principle.

Statement: if a system in stable equilibrium is acted upon from outside by changing any of the conditions that determine the equilibrium position, then processes weakening the external influence are intensified in the system, and the equilibrium position shifts in the corresponding direction.

The isochore and isobar equations make it possible to determine the shift of chemical equilibrium as a function of temperature. If the direct reaction proceeds with the release of heat (ΔU < 0 or ΔH < 0), then a rise of temperature for this exothermic reaction leads to a decrease of the equilibrium constant Kc or Kp. The shift of equilibrium with temperature when ΔU < 0 or ΔH < 0 can be represented as follows:

an increase in temperature shifts the equilibrium toward the starting substances;

a decrease in temperature shifts the equilibrium toward the reaction products.

An increase in the temperature of an exothermic reaction increases the amount of the starting substances, while an increase in the temperature of an endothermic reaction enriches the mixture in reaction products compared with the equilibrium state before the temperature change.

According to this principle, an increase in the concentration of any component shifts the equilibrium toward the reaction that consumes that component.

    Phase equilibrium, Gibbs phase rule.

Phase equilibrium is the simultaneous existence of thermodynamically equilibrium phases in a heterogeneous system, for example a liquid with its saturated vapor (a liquid-gas system), water and ice at the melting point (a liquid-solid system), or two immiscible liquids (a liquid-liquid system). Phase equilibrium as a function of the composition and parameters of the system is determined by the Gibbs phase rule.

A system or part of a system that is homogeneous in physical state and chemical composition is called a phase, and a component is a chemically homogeneous substance that can be isolated and able to exist independently.

The Gibbs phase rule states: the number of degrees of freedom S of an equilibrium thermodynamic system acted upon by n external factors (usually temperature alone, or temperature and pressure) equals the number of independent components K minus the number of phases F plus n (one or two):

S = K - F + n, S = K - F + 1, S = K - F + 2

The Gibbs phase rule applies to systems with a limited number of phases and components; one distinguishes one-, two- and three-phase and one-, two- and three-component systems. The number of degrees of freedom determines the variance of the system: systems can be mono-, di- or trivariant. If S = 0, the system is called invariant.
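As a minimal sketch (the helper function is ours, not from the text), the phase rule can be checked against familiar cases, e.g. the triple point of water (K = 1, F = 3, n = 2 gives S = 0, an invariant system):

```python
def degrees_of_freedom(components, phases, external_factors=2):
    """Gibbs phase rule S = K - F + n; n = 2 when both temperature and pressure vary."""
    s = components - phases + external_factors
    if s < 0:
        raise ValueError("more coexisting phases than the phase rule permits")
    return s

# triple point of water: one component, three coexisting phases -> invariant (S = 0)
# boiling water (liquid + vapor): one degree of freedom (S = 1)
```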

The phase state of a system, depending on the external conditions and on the composition of the system, is determined using state diagrams, or phase diagrams.

    Thermal effects of phase transitions.

The thermal effect is the heat released or absorbed as a result of a chemical reaction under the following conditions: no useful work is done, the pressure or the volume of the system is unchanged, and the temperature is the same before and after the reaction; in other words, the reaction must proceed under isobaric-isothermal or isochoric-isothermal conditions. The thermal effect of a reaction can be determined from Hess's law: at constant pressure or volume, the thermal effect of a chemical reaction depends only on the kind and state of the starting materials and reaction products, and not on the path of the process. If the reaction occurs at constant pressure, the observed change in heat is expressed through the change in enthalpy. For reactions proceeding at constant volume, the thermal effect is identified with the change in internal energy.

There are substances in which, during phase transformations - melting, evaporation and crystallization - the so-called latent heat of the phase transition is released, and the amount of heat released is quite large. As can be seen from the very name "latent heat", in the process of phase transformation of a substance, its temperature does not change, i.e. the whole process takes place at a certain temperature.

Currently, two kinds of substances are used in practice in accumulators of this heat: calcium chloride hexahydrate (CaCl2·6H2O) and sodium sulfate decahydrate (Glauber's salt, Na2SO4·10H2O). Calcium chloride hexahydrate has a melting point of 29 °C; the thermal effect of its solid-to-liquid phase transition is 42 kcal/kg (at a density of 1622 kg/m3). At best, a substance undergoing such a phase transition accumulates, in 1/7 of the volume, the same amount of heat as water heated by 10 °C.

Accumulators that use the latent heat of phase transitions are, like water, subject to supercooling, and when using such accumulators it is especially important to prevent it.

    Non-equilibrium thermodynamics.

Non-equilibrium thermodynamics (the thermodynamics of irreversible processes) studies the general regularities of thermodynamically irreversible processes (heat transfer, diffusion, chemical reactions, transport of electric current).

The difference between non-equilibrium thermodynamics and equilibrium:

    The thermodynamic parameters of the system change with time;

    These parameters have different values at different points of the system, i.e., they depend on the coordinates;

Non-equilibrium systems are classified as:

A) linear - valid for small deviations of the real process from equilibrium;

B) non-linear - for larger deviations.

General information about non-equilibrium thermodynamics

As mentioned above, classical thermodynamics (its three "beginnings") studies thermodynamic equilibrium, reversible processes. For non-equilibrium processes, it establishes only inequalities that indicate the possible direction of these processes. The fundamental works of I.R. Prigogine established that all thermodynamics is divided into three large areas: equilibrium, in which the production of entropy, flows and forces are equal to zero, weakly non-equilibrium, in which thermodynamic forces are “weak”, and energy flows linearly depend on forces, and strongly non-equilibrium, or non-linear, where energy flows are non-linear, and all thermodynamic processes are irreversible. The main task of non-equilibrium thermodynamics is the quantitative study of non-equilibrium processes, in particular, the determination of their rates depending on external conditions. In non-equilibrium thermodynamics, systems in which non-equilibrium processes occur are considered as continuous media, and their state parameters are considered as field variables, that is, continuous functions of coordinates and time.

Weakly nonequilibrium (linear) thermodynamics considers thermodynamic processes occurring in systems in states close to equilibrium. Thus, linear thermodynamics describes the stable, predictable behavior of systems tending to a minimum level of activity. The first works in this area belong to Lars Onsager, who in 1931 first discovered the general relations of non-equilibrium thermodynamics in the linear, weakly non-equilibrium region - the "reciprocity relations". Their essence, purely qualitatively, reduces to the following: if force "one" (for example, a temperature gradient) affects flow "two" (for example, diffusion) in weakly nonequilibrium situations, then force "two" (a concentration gradient) likewise acts on flow "one" (the heat flux).

Thus, in the weakly nonequilibrium region the laws of equilibrium thermodynamics still essentially apply: the system tends to a stationary state with minimal entropy production, and its behavior is in most cases quite predictable.

Strongly nonequilibrium thermodynamics considers processes occurring in systems whose state is far from equilibrium.

When the thermodynamic forces acting on the system become large enough and take it out of the linear region into the nonlinear region, the stability of the system state and its independence from fluctuations are significantly reduced.

In such states, certain fluctuations intensify their impact on the system, forcing it, upon reaching the bifurcation point - loss of stability, to evolve to a new state, which may be qualitatively different from the original one. The system is self-organizing. Moreover, it is believed that the development of such systems proceeds through the formation of increasing orderliness. On this basis, the idea of ​​self-organization of material systems arose.

All material systems, from the smallest to the largest, are considered open, exchanging energy and matter with the environment and, as a rule, in a state far from thermodynamic equilibrium.

This property of material systems, in turn, made it possible to determine a number of new properties of matter.

Here are some of them.

All processes are irreversible, since they are always accompanied by energy losses;

Entropy S in open systems has two components: deS, characterizing the exchange of entropy with the outside world, and diS, characterizing the irreversible processes inside the system;

Matter has the property of self-organization.

I. Prigogine's studies of living matter as open material systems were mainly focused on a comparative analysis of the organization of the structures of living and non-living matter, a thermodynamic analysis of glycolysis reactions, and a number of other works.

    Elements of statistical thermodynamics

Within the framework of statistical thermodynamics, the state of a system is determined not by the values of physical quantities themselves but by the probabilistic laws of their distribution. The starting point for determining the sum over states is the Boltzmann distribution law. This law reflects the unequal energies of different molecules and characterizes the distribution of molecules (particles) over energies. The exponential factor exp(-E/kT), which weights molecules according to their energy levels, is called the Boltzmann factor.

In the case of non-interacting particles of an ideal gas, the Gibbs canonical distribution turns into a Boltzmann distribution.
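A small numerical sketch of the Boltzmann distribution (function name and numbers are illustrative, not from the text): the relative population of a level lying an energy delta_E above the ground state is given by the Boltzmann factor exp(-delta_E/kT):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_ratio(delta_E, T):
    """Population ratio N2/N1 of two levels separated by delta_E (in J) at temperature T (in K)."""
    return math.exp(-delta_E / (k_B * T))

# the higher level is always less populated at positive T,
# and the ratio approaches 1 as T increases (levels equalize)
```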

Physical chemistry

Sections of physical chemistry

Physical chemistry - a science that studies chemical phenomena and establishes their general patterns based on physical approaches and using physical experimental methods.

Sections of physical chemistry:

· The structure of matter

· Chemical thermodynamics and thermochemistry

· Chemical and phase equilibrium

· Solutions and electrochemistry

· Chemical kinetics. Catalysis. Photochemistry

· Radiation chemistry

Basic concepts and quantities

Temperature T is the degree of heating of a body, determined by the distribution of molecules and other particles over the velocities of their thermal motion and by the degree of population of the higher energy levels of the molecules. In thermodynamics it is customary to use the absolute temperature, measured from absolute zero, which is always positive. The SI unit of absolute temperature is the kelvin (K), equal in magnitude to the degree Celsius.

The amount of work w and the amount of heat Q are, in the general case, not state functions, since their values are determined by the path of the process by which the system changed its state. The exceptions are the work of expansion and the thermal effect of a chemical reaction.

The term "thermodynamics" itself comes from the Greek words therme (heat) and dynamis (force), since this science grew out of studying the balance of heat and work in systems during various processes.

Heat capacity C is the ratio of the amount of heat absorbed by a body on heating to the temperature change caused by this absorption. One distinguishes true and average, molar and specific, isobaric and isochoric heat capacities.

True heat capacity is the ratio of an infinitesimal amount of heat to the infinitesimal temperature change it causes:

C_true = δQ/dT

Average heat capacity is the ratio of a macroscopic amount of heat to the temperature change in a macroscopic process:

C_avg = ΔQ/ΔT.

In physical terms, the average heat capacity is the amount of heat required to heat a body by one degree (1 °C or 1 K).

The heat capacity of a unit mass of a substance is the specific heat capacity (SI unit: J/(kg·K)). The heat capacity of one mole of a substance is the molar heat capacity (SI unit: J/(mol·K)). The heat capacity measured at constant volume is the isochoric heat capacity CV; the heat capacity at constant pressure is the isobaric heat capacity CP. Between CP and CV there is the relation (for one mole of an ideal gas):

C P = C V + R

where R is the universal gas constant.
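Mayer's relation above is easy to check numerically; a tiny sketch (assuming, for the check, a monatomic ideal gas, for which CV = 3/2 R):

```python
R = 8.314  # universal gas constant, J/(mol*K)

def cp_from_cv(cv):
    """Mayer's relation for one mole of an ideal gas: Cp = Cv + R."""
    return cv + R

cv_monatomic = 1.5 * R                    # 3/2 R for a monatomic ideal gas
cp_monatomic = cp_from_cv(cv_monatomic)   # comes out to 5/2 R
```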

Thermodynamic systems

Thermodynamic system- a specific object of thermodynamic research, mentally isolated from the environment. This is a set of macroscopic bodies that can interact with each other and with the external environment - exchange energy and matter with them. A thermodynamic system consists of such a large number of structural particles that its state can be characterized by macroscopic parameters: density, pressure, concentration of substances, temperature, etc.

Thermodynamic systems (or simply systems for short) can be classified according to various criteria:

- by state: equilibrium and non-equilibrium;

- by interaction with the environment (or with other systems): open (can exchange both energy and matter with the environment), closed (can exchange only energy) and isolated (can exchange neither matter nor energy);

- by number of phases: single-phase (homogeneous, homogeneous) and multi-phase (heterogeneous, heterogeneous);

- by number of components(chemical substances included in their composition): single-component and multi-component.

The internal energy of the system under consideration, U, is the sum of all kinds of energy of motion and interaction of the particles (molecules, atoms, ions, radicals, etc.) that make up the system: the kinetic energy of the chaotic motion of molecules relative to the system's center of mass and the potential energy of their interaction with one another. The components of internal energy are translational U_trans (energy of translational motion of particles, such as molecules of gases and liquids), rotational U_rot (energy of rotational motion of particles, for example rotation of molecules of gases and liquids and of groups of atoms around single chemical bonds), vibrational U_vib (energy of intramolecular vibrational motion of atoms and of the vibrational motion of particles at the sites of a crystal lattice), electronic U_el (energy of motion of electrons in atoms and molecules), nuclear U_nuc, and others. The concept of internal energy does not include the kinetic and potential energy of the system as a whole. The SI unit of internal energy is J/mol or J/kg.

The absolute value of the internal energy cannot be calculated using the equations of thermodynamics. One can only measure its change in a particular process. However, for a thermodynamic consideration this turns out to be sufficient.

State parameters

The state of a system is the totality of physical and chemical properties characterizing the given system. It is described by state parameters - temperature T, pressure p, volume V, concentration C, and others. Besides a definite value of the parameters, each state of the system also corresponds to definite values of certain quantities that depend on the parameters and are called thermodynamic functions. If the change of a thermodynamic function does not depend on the path of the process but is determined only by the initial and final states, such a function is called a state function. For example, internal energy is a state function, since its change in any process can be calculated as the difference between the final and initial values:

ΔU = U2 - U1.

State functions include characteristic functions, the totality of which can sufficiently fully characterize the state of the system (internal energy, enthalpy, entropy, Gibbs energy, etc.).

A thermodynamic process is any change in the system accompanied by a change of parameters. Processes are caused by driving factors - the non-uniformity of certain parameters (for example, the temperature factor due to different temperatures in different parts of the system). A process occurring at constant pressure is called isobaric, at constant volume isochoric, at constant temperature isothermal, and one occurring without heat exchange with the environment adiabatic.

Heat is a form of random ("thermal") motion of the particles (molecules, atoms, etc.) that make up a body. The quantitative measure of the energy transferred during heat exchange is the quantity of heat Q. The SI unit of the quantity of heat is the joule (J). Along with the joule, an off-system unit of heat is often used, the calorie (cal); 1 cal = 4.184 J. Often, instead of the term "quantity of heat", the word "heat" is used as a synonym.

Work is a form of energy transfer from one system to another, associated with action against external forces and carried out through the ordered, directed motion of the system or its individual components. The quantitative measure of the energy transferred as work is the amount of work w. The SI unit of work is the joule (J). Instead of the term "amount of work", the word "work" is often used as a synonym.

Thermochemistry.

Thermochemistry is a branch of chemical thermodynamics that deals with determining the thermal effects of chemical reactions and establishing their dependence on various conditions. The tasks of thermochemistry also include measuring the heat capacities of substances and the heats of phase transitions (including the formation and dilution of solutions).

Calorimetric measurements

The main experimental method of thermochemistry is calorimetry. The amount of heat released or absorbed in a chemical reaction is measured with an instrument called a calorimeter.

Calorimetric measurements make it possible to determine extremely important quantities: the thermal effects of chemical reactions, heats of dissolution, and the energies of chemical bonds. Bond energies determine the reactivity of chemical compounds and, in some cases, the pharmacological activity of medicinal substances. However, not all chemical reactions and physicochemical processes are amenable to calorimetric measurement, but only those satisfying two conditions: 1) the process must be irreversible, and 2) it must proceed quickly enough that the released heat has no time to dissipate into the environment.

Enthalpy

Most chemical processes, in nature, in the laboratory, and in industry, proceed not at constant volume but at constant pressure. In such cases often only one of the various kinds of work is done - expansion work, equal to the product of the pressure and the change in the volume of the system:

w = pΔV.

In this case the equation of the first law of thermodynamics can be written as

ΔU = Qp - pΔV

Qp = ΔU + pΔV

(the index p shows that the amount of heat is measured at constant pressure). Replacing the changes of the quantities by the corresponding differences, we obtain:

Qp = U2 - U1 + p(V2 - V1)

Qp = (U2 + pV2) - (U1 + pV1)

Qp = (U + pV)2 - (U + pV)1 = H2 - H1

Since p and V are state parameters and U is a state function, the sum U + pV = H is also a state function. This function is called enthalpy. Thus, the heat absorbed or released by the system in a process occurring at constant pressure is equal to the change in enthalpy:

Qp = ΔH.

There is a relationship between the change in enthalpy and the change in the internal energy of the system, expressed by the equations

ΔH = ΔU + ΔnRT or ΔU = ΔH - ΔnRT,

which can be obtained using the Mendeleev-Clapeyron equation

pV = nRT, whence pΔV = ΔnRT.

The ΔH values of various processes are relatively easy to measure with calorimeters operating at constant pressure. As a result, the change in enthalpy is widely used in thermodynamic and thermochemical studies. The SI unit of enthalpy is J/mol.
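The relation ΔH = ΔU + ΔnRT is straightforward to apply numerically; a sketch (the function name is ours), with Δn the change in the number of moles of gaseous substances in the reaction:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def delta_h_from_delta_u(delta_u, delta_n_gas, T):
    """Delta_H = Delta_U + Delta_n * R * T (energies in J/mol, T in K)."""
    return delta_u + delta_n_gas * R * T

# a reaction consuming one mole of gas (delta_n = -1) at 298 K
# has Delta_H below Delta_U by about R*T, roughly 2.48 kJ/mol
```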

Hess' law

In the 1840s G. I. Hess formulated the fundamental law of thermochemistry, which he called "the law of constancy of heat sums":

When any chemical compound is formed, the same amount of heat is always released, regardless of whether the formation of this compound occurs directly or indirectly and in several steps.

In modern interpretations, the law reads as follows:

1. If it is possible to obtain the given final products from these initial substances in different ways, then the total heat of the process on any one path is equal to the total heat of the process on any other path.

2. The thermal effect of a chemical reaction does not depend on the path of the process, but depends only on the type and properties of the starting substances and products .

3. The thermal effect of a series of successive reactions is equal to the thermal effect of any other series of reactions with the same initial substances and final products .

For example, an aqueous solution of ammonium chloride (NH4Cl·aq) can be obtained from gaseous ammonia, gaseous hydrogen chloride and liquid water (aq) in the following two ways:

I. 1) NH3 (g) + aq = NH3·aq + ΔH1 (ΔH1 = -34.936 kJ/mol);

2) HCl (g) + aq = HCl·aq + ΔH2 (ΔH2 = -72.457 kJ/mol);

3) NH3·aq + HCl·aq = NH4Cl·aq + ΔH3 (ΔH3 = -51.338 kJ/mol);

ΔH = ΔH1 + ΔH2 + ΔH3 = -34.936 - 72.457 - 51.338 = -158.731 kJ/mol.

II. 1) NH3 (g) + HCl (g) = NH4Cl (s) + ΔH4 (ΔH4 = -175.100 kJ/mol);

2) NH4Cl (s) + aq = NH4Cl·aq + ΔH5 (ΔH5 = +16.393 kJ/mol);

ΔH = ΔH4 + ΔH5 = -175.100 + 16.393 = -158.707 kJ/mol.

As can be seen, the thermal effect of the process carried out along path I equals that of the process carried out along path II (the difference of a few hundredths of a kJ/mol is well within the experimental error).
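Using the step values quoted above, the two paths can be summed directly; a minimal check (the lists simply restate the figures from the text):

```python
# step enthalpies in kJ/mol, taken from the two paths above
path_I = [-34.936, -72.457, -51.338]   # via separate dissolution of NH3 and HCl
path_II = [-175.100, 16.393]           # via solid NH4Cl

dh_I = sum(path_I)
dh_II = sum(path_II)

# Hess's law: both paths must give (almost) the same total thermal effect
assert abs(dh_I - dh_II) < 0.1
```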

One more example: the combustion of graphite to CO2 can be carried out in two ways:

I. C (s) + O2 (g) = CO2 (g) + ΔH1 (ΔH1 = -393.505 kJ/mol);

II. C (s) + 1/2 O2 (g) = CO (g) + ΔH2 (ΔH2 = -110.541 kJ/mol);

CO (g) + 1/2 O2 (g) = CO2 (g) + ΔH3 (ΔH3 = -282.964 kJ/mol);

And in this case

ΔH = ΔH2 + ΔH3 = -110.541 + (-282.964) = -393.505 kJ/mol.

Hess's law makes it possible to calculate the thermal effects of many reactions from a relatively small amount of reference data on the heats of combustion and formation of chemical substances; in addition, it allows one to calculate the thermal effects of reactions that are not amenable to direct calorimetry at all, for example C (s) + 1/2 O2 (g) = CO (g). This is achieved by applying the consequences of Hess's law.

1st consequence (the Lavoisier-Laplace law): the thermal effect of the decomposition of a complex substance into simpler ones is numerically equal, but opposite in sign, to the thermal effect of the formation of the given complex substance from those same simpler ones.

For example, the heat of decomposition of calcium carbonate (calcite) into calcium oxide and carbon dioxide

CaCO3 (s) = CO2 (g) + CaO (s) + ΔH1

equals +178.23 kJ/mol. This means that the formation of one mole of CaCO3 from CaO and CO2 releases the same amount of energy:

CaO (s) + CO2 (g) = CaCO3 (s) + ΔH2 (ΔH2 = -178.23 kJ/mol).

2 consequence: If two reactions occur, leading from different initial states to the same final ones, then the difference between their thermal effects is equal to the thermal effect of the transition reaction from one initial state to another initial state.

For example, if the thermal effects of the combustion reactions of graphite and diamond are known:

C (gr) + O2 = CO2 - 393.51 kJ/mol

C (diam) + O2 = CO2 - 395.39 kJ/mol

one can calculate the thermal effect of the transition from one allotropic modification to the other:

C (gr) → C (diam) + ΔH_allotr

ΔH_allotr = -393.51 - (-395.39) = +1.88 kJ/mol

3rd consequence: If two reactions occur, leading from the same initial states to different final states, then the difference between their thermal effects is equal to the thermal effect of the transition reaction from one final state to another final state.

For example, using this consequence, one can calculate the thermal effect of the combustion reaction of carbon to CO:

C (gr) + O2 → CO2 - 393.505 kJ/mol

CO + 1/2 O2 → CO2 - 282.964 kJ/mol

C (gr) + 1/2 O2 → CO + ΔHr

ΔHr = -393.505 - (-282.964) = -110.541 kJ/mol.

4 consequence: The thermal effect of any chemical reaction is equal to the difference between the sums of the heats of formation of the reaction products and the starting materials (taking into account the stoichiometric coefficients in the reaction equation):

ΔHr = Σ (ni ΔHf,i)prod - Σ (ni ΔHf,i)init

For example, the thermal effect of the esterification reaction

CH3COOH (l) + C2H5OH (l) = CH3COOC2H5 (l) + H2O (l) + ΔHr

is

ΔHr = (ΔHf CH3COOC2H5 + ΔHf H2O) - (ΔHf CH3COOH + ΔHf C2H5OH) =

= (-479.03 - 285.83) - (-484.09 - 276.98) = -3.79 kJ.
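The fourth consequence lends itself to a one-line helper; a sketch (the function name is ours) reproducing the esterification example, with each substance given as a pair (stoichiometric coefficient, heat of formation in kJ/mol):

```python
def reaction_enthalpy(products, reactants):
    """Hess's law, 4th consequence: dHr = sum of n*dHf over products minus the same sum over reactants."""
    return sum(n * dh for n, dh in products) - sum(n * dh for n, dh in reactants)

# esterification example from the text (heats of formation, kJ/mol)
dhr = reaction_enthalpy(
    products=[(1, -479.03), (1, -285.83)],   # CH3COOC2H5, H2O
    reactants=[(1, -484.09), (1, -276.98)],  # CH3COOH, C2H5OH
)
```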

5 consequence: The thermal effect of any chemical reaction is equal to the difference between the sums of the heats of combustion of the starting materials and reaction products (taking into account the stoichiometric coefficients in the reaction equation):

ΔHr = Σ (ni ΔHc,i)init - Σ (ni ΔHc,i)prod

For example, the thermal effect of the esterification reaction given in the previous example is

ΔHr = (ΔHc CH3COOH + ΔHc C2H5OH) - (ΔHc CH3COOC2H5 + ΔHc H2O) =

= (-874.58 - 1370.68) - (-2246.39 - 0) = -1.13 kJ.

(The discrepancy between the results is explained by the different accuracy of the thermochemical data given in reference books).

Heat of dissolution

The heat of dissolution, ΔHsoln or ΔHs (from solution), is the thermal effect of dissolving a substance at constant pressure.

There are integral and differential heats of dissolution. The heat of dissolution of 1 mole of a substance with formation of a so-called infinitely dilute solution is called the integral heat of dissolution. The integral heat of dissolution depends on the ratio of the amounts of the dissolved substance and the solvent and, consequently, on the concentration of the resulting solution. The thermal effect of dissolving 1 mole of a substance in a very large amount of an already existing solution of the same substance of a certain concentration (leading to an infinitesimal increase in concentration) is called the differential heat of solution:


In physical terms, the differential heat of dissolution shows how the heat effect of the dissolution of a substance changes with an increase in its concentration in a solution. The SI unit of the heat of dissolution is J/mol.

The integral heat of dissolution of crystalline substances (for example, inorganic salts, bases, etc.) is made up of two quantities: the enthalpy of conversion of the crystal lattice of the substance into an ionic gas (destruction of the lattice), ΔHlatt, and the enthalpy of solvation (in the case of aqueous solutions, hydration) of the molecules and of the ions formed from them upon dissociation, ΔHsolv (ΔHhydr):

ΔHsoln = ΔHlatt + ΔHsolv; ΔHsoln = ΔHlatt + ΔHhydr

The quantities ΔHlatt and ΔHsolv are opposite in sign (solvation and hydration are always accompanied by the release of heat, while the destruction of the crystal lattice is accompanied by its absorption). Thus, the dissolution of substances with a not very strong crystal lattice (for example, the alkali metal hydroxides NaOH, KOH, etc.) is accompanied by strong heating of the resulting solution, and well-hydrated liquid substances that have no crystal lattice at all (for example, sulfuric acid) cause even stronger heating, up to boiling. Conversely, the dissolution of substances with a strong crystal lattice, such as the alkali and alkaline-earth metal halides KCl, NaCl, CaCl2, proceeds with the absorption of heat and leads to cooling. (This effect is used in laboratory practice to prepare cooling mixtures.)

Therefore, the sign of the total thermal effect during dissolution depends on which of its terms is larger in absolute value.

If the enthalpy of destruction of the salt crystal lattice is known, then by measuring the heat of dissolution, it is possible to calculate the enthalpy of its solvation. On the other hand, by measuring the heat of dissolution of a crystalline hydrate (i.e., a hydrated salt), it is possible to calculate with sufficient accuracy the enthalpy of destruction (strength) of the crystal lattice.

The heat of dissolution of potassium chloride, equal to +17.577 kJ/mol at a concentration of 0.278 mol/l and 25 °C, has been proposed as a thermochemical standard for checking the performance of calorimeters.

The temperature dependence of the heats of dissolution, as well as the thermal effects of chemical reactions, obeys the Kirchhoff equation.

When the solute and solvent are chemically similar and there are no complications associated with ionization or solvation during dissolution, the heat of solution can be considered approximately equal to the heat of fusion of the solute. This mainly refers to the dissolution of organic substances in non-polar solvents.

Entropy

Entropy is a measure of the disorder of a system related to thermodynamic probability.

The dissipation of energy among the components of a system can be calculated by the methods of statistical thermodynamics. This leads to the statistical definition of entropy. Recalling the proposition mentioned earlier, that the direction of spontaneous change corresponds to the direction of increasing thermodynamic probability, we can conclude that the dissipation of energy, and hence the entropy, is connected with the thermodynamic probability. This connection was proved in 1872 by L. Boltzmann. It is expressed by the Boltzmann equation

S = k ln W , (3.1)

where k is the Boltzmann constant.

According to the statistical point of view, entropy is a measure of disorder in a system. This is due to the fact that the more areas in the system in which there is a spatial ordering in the arrangement of particles or an uneven distribution of energy (which is also considered an ordering of energy), the lower the thermodynamic probability. With chaotic mixing of particles, as well as with a uniform distribution of energy, when particles cannot be distinguished by their energy state, the thermodynamic probability, and, consequently, the entropy, increase.
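The Boltzmann equation (3.1) itself is a one-liner; a minimal sketch (using the modern SI value of the Boltzmann constant):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(W):
    """S = k ln W, where W is the thermodynamic probability (number of microstates)."""
    return k_B * math.log(W)

# a perfectly ordered state (W = 1) has zero entropy;
# the more microstates, the larger the entropy
```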

Second law of thermodynamics

The second law can be expressed in several different formulations, each of which complements the others:

1. Heat cannot spontaneously pass from a colder body to a hotter one.

2. Energy of various kinds tends to turn into heat, and heat tends to dissipate.

3. No set of processes can amount merely to the transfer of heat from a cold body to a hot one, whereas the transfer of heat from a hot body to a cold one can be the sole result of a process (R. Clausius).

4. No set of processes can amount merely to the conversion of heat into work, whereas the conversion of work into heat can be the sole result of a process (W. Thomson).

5. It is impossible to build a cyclic machine that would convert heat into work without producing any other changes in the surrounding bodies (a so-called perpetual motion machine of the second kind) (W. Ostwald).

For the Carnot cycle one can write

η = (Q1 - Q2)/Q1 ≤ (T1 - T2)/T1,

where Q1 is the heat received by the system from the hot reservoir, Q2 the heat given up to the cold one, T1 and T2 the temperatures of the hot and cold reservoirs, and η the efficiency of the process; the equality holds for a reversible cycle, the inequality for an irreversible one.

This relation is a mathematical expression of the second law of thermodynamics.
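The right-hand side of this relation is the familiar Carnot efficiency; a minimal sketch (the function name is ours, temperatures in kelvin):

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum (reversible) efficiency eta = (T1 - T2)/T1; any irreversible cycle achieves less."""
    if not (T_hot > T_cold > 0):
        raise ValueError("require T_hot > T_cold > 0 (absolute temperatures)")
    return (T_hot - T_cold) / T_hot

# e.g. a reversible engine between 600 K and 300 K converts at most half the heat to work
```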

Third law of thermodynamics. Planck's postulate.

Absolute entropy

Since entropy is an extensive quantity, its value for a substance at any given temperature T is the sum of the contributions accumulated over the whole range from 0 K to T. If in equation (3.5) the lower limit of integration is taken equal to absolute zero, then

S_T = S_0 + ∫_0^T (C_p / T) dT.

Therefore, knowing the value of the entropy at absolute zero, one could use this equation to obtain the value of the entropy at any temperature.
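Numerically, this integral can be evaluated once Cp(T) is known down to low temperatures; a sketch (our own helper, not from the text) using the trapezoidal rule and, for the check, a Debye-like low-temperature law Cp = aT^3, for which the exact answer is aT^3/3:

```python
def absolute_entropy(cp, T, steps=20000, eps=1e-8):
    """S(T) = integral of Cp(T')/T' from ~0 to T (trapezoidal rule),
    assuming S(0) = 0 by Planck's postulate. Cp must vanish fast enough
    as T' -> 0 (e.g. Debye's Cp ~ T'^3) for the integrand to stay finite."""
    h = (T - eps) / steps
    total = 0.5 * (cp(eps) / eps + cp(T) / T)
    for i in range(1, steps):
        t = eps + i * h
        total += cp(t) / t
    return total * h
```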

Careful measurements carried out at the end of the 19th century showed that as the temperature approaches absolute zero, the heat capacity Cp of any substance goes to zero:

lim Cp = 0 as T → 0.

This means that the quantity Cp/T remains finite or tends to zero and, therefore, the difference ST - S0 is always positive or zero. Based on these considerations, M. Planck (1912) proposed the postulate:

At absolute zero temperature, the entropy of any substance in the form of an ideal crystal is zero.

This postulate of Planck is one of the formulations of the 3rd law of thermodynamics. It can be explained on the basis of the concepts of statistical physics: for a perfectly ordered crystal at absolute zero temperature, when there is no thermal motion of particles, the thermodynamic probability W is equal to 1. Hence, in accordance with the Boltzmann equation (3.1), its entropy is equal to zero:

S_0 = k ln 1 = 0

From Planck's postulate, we can conclude that the entropy of any substance at temperatures other than absolute zero is finite and positive. Accordingly, entropy is the only thermodynamic state function for which an absolute value can be determined, and not just a change in some process, as in the case of other state functions (for example, internal energy and enthalpy).

From the above equations it also follows that at temperatures approaching absolute zero it becomes impossible to withdraw even a very small amount of heat from a cooled body, because of its vanishingly small heat capacity. In other words,

using a finite number of operations it is impossible to lower the body temperature to absolute zero.

This statement is called the principle of unattainability of absolute zero temperature and, along with Planck's postulate, is one of the formulations of the third law of thermodynamics. (Note that experiments have by now succeeded in lowering the temperature to about 0.00001 K.)

The principle of unattainability of absolute zero temperature is also associated with the heat theorem of W. Nernst (1906), according to which, on approaching absolute zero, the values of ΔH and ΔG = ΔH - TΔS (G is the Gibbs energy, which will be discussed below) approach each other; that is, at T = 0 the equality

ΔG = ΔH

must hold.

The entropy change in a chemical reaction, ΔS°r, can be calculated as the difference between the sums of the entropies of the products and of the starting materials, taken with the corresponding stoichiometric coefficients. For standard conditions:

ΔS°r = Σ(ν_i S°_i)prod - Σ(ν_i S°_i)react
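As an illustration of this formula, the sketch below computes ΔS°r for the ammonia synthesis reaction from rounded tabulated standard entropies; the numerical values are approximate reference data, not taken from the text.

```python
# Sketch of dS_r = sum(nu*S0)_products - sum(nu*S0)_reactants for
# N2 + 3H2 -> 2NH3, using rounded tabulated standard entropies, J/(mol*K).

S0 = {"N2": 191.6, "H2": 130.7, "NH3": 192.5}  # approximate tabulated values

def reaction_entropy(products, reactants):
    """Standard entropy of reaction from {species: stoichiometric coefficient} maps."""
    return (sum(nu * S0[sp] for sp, nu in products.items())
            - sum(nu * S0[sp] for sp, nu in reactants.items()))

dS = reaction_entropy(products={"NH3": 2}, reactants={"N2": 1, "H2": 3})
# dS is negative: the number of gas moles falls from 4 to 2, so disorder decreases
```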

(For these calculations the absolute values of the entropies of the individual substances are used, not their changes, as is done in calculating the other thermodynamic functions. The reasons for this are explained in the discussion of the third law of thermodynamics.)

Chemical equilibrium

Chemical equilibrium is a thermodynamic equilibrium in a system in which both the forward and the reverse chemical reactions are possible.

Under certain conditions the activities of the reactants can be replaced by concentrations or partial pressures. In these cases the equilibrium constant, expressed in terms of the equilibrium concentrations (K_c) or in terms of the partial pressures (K_p), takes the form (for the reaction aA + bB ⇌ dD + eE considered below)

K_c = (C_D^d · C_E^e) / (C_A^a · C_B^b),   (4.11)

K_p = (p_D^d · p_E^e) / (p_A^a · p_B^b).   (4.12)

Equations (4.11) and (4.12) are forms of the law of mass action (LMA) for reversible reactions at equilibrium: at constant temperature, the ratio of the equilibrium concentrations (partial pressures) of the final products to the equilibrium concentrations (partial pressures) of the initial reagents, each raised to a power equal to its stoichiometric coefficient, is a constant.

For gaseous substances K_p and K_c are related by K_p = K_c (RT)^Δn, where Δn is the difference between the numbers of moles of the final and the initial gaseous reagents.
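This conversion can be sketched in a few lines; the K_c value, temperature and reaction below are illustrative, not taken from the text.

```python
# Sketch of the Kp-Kc relation. With concentrations in mol/m^3 and pressures
# in Pa, the gas constant R = 8.314 J/(mol*K) makes the units consistent.

R = 8.314  # J/(mol*K)

def kp_from_kc(kc, t, delta_n):
    """Kp = Kc * (RT)**delta_n; delta_n = gaseous product moles - reactant moles."""
    return kc * (R * t)**delta_n

# For N2 + 3H2 <=> 2NH3, delta_n = 2 - 4 = -2 (illustrative Kc and T):
kp = kp_from_kc(kc=0.5, t=500.0, delta_n=-2)
```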

The equilibrium constant is determined from the known equilibrium concentrations of the reactants or from the known ΔG° of the chemical reaction.

An arbitrary reversible chemical reaction can be described by an equation of the form:

aA + bB ⇌ dD + eE

In accordance with the law of mass action, in the simplest case the rate of the forward reaction is related to the concentrations of the starting substances by the equation

v_f = k_f · C_A^a · C_B^b,

and the rate of the reverse reaction to the concentrations of the products by the equation

v_r = k_r · C_D^d · C_E^e.

When equilibrium is reached, these rates become equal to each other:

v_f = v_r

The ratio of the rate constants of the forward and reverse reactions is the equilibrium constant:

K = k_f / k_r = (C_D^d · C_E^e) / (C_A^a · C_B^b).

Since this expression takes into account the amounts of the reactants and reaction products, it is a mathematical statement of the law of mass action for reversible reactions.
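The kinetic argument above can be checked numerically: the sketch below integrates the rate equations for the simplest reversible reaction A ⇌ D and verifies that the long-time concentration ratio equals the ratio of the rate constants. The rate constants, initial concentrations and time step are illustrative.

```python
# Kinetic check of the law of mass action for A <=> D: after the system
# relaxes, C_D/C_A should equal k_f/k_r.

k_f, k_r = 2.0, 0.5      # forward and reverse rate constants, 1/s (illustrative)
c_a, c_d = 1.0, 0.0      # initial concentrations, mol/L (illustrative)
dt = 1e-4                # explicit-Euler time step, s

for _ in range(200_000):             # integrate to t = 20 s, many relaxation times
    rate = k_f * c_a - k_r * c_d     # net forward rate
    c_a -= rate * dt
    c_d += rate * dt

K = c_d / c_a                        # equilibrium constant; approaches k_f/k_r = 4
```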

The equilibrium constant expressed in terms of the concentrations of the reactants is called the concentration constant and is denoted K_c. For a more rigorous treatment, the thermodynamic activities of the substances, a = fC (where f is the activity coefficient), should be used instead of concentrations. In this case one speaks of the so-called thermodynamic equilibrium constant

K_a = (a_D^d · a_E^e) / (a_A^a · a_B^b).

At low concentrations, when the activity coefficients of the starting substances and products are close to unity, K_c and K_a are practically equal to each other.

The equilibrium constant of a reaction occurring in the gas phase can be expressed in terms of the partial pressures p of the substances involved in the reaction:

K_p = (p_D^d · p_E^e) / (p_A^a · p_B^b).
K_p and K_c are connected by a relation that can be derived as follows. Express the partial pressures of the substances in terms of their concentrations using the Mendeleev-Clapeyron equation:

pV = nRT,

whence p = (n/V)RT = CRT.

The dimension of an equilibrium constant depends on the way the concentrations (pressures) are expressed and on the stoichiometry of the reaction. This can cause confusion (for example, [mol⁻¹·m³] for K_c and [Pa⁻¹] for K_p in the example considered), but there is nothing wrong with it. If the sums of the stoichiometric coefficients of the products and of the starting materials are equal, the equilibrium constant is dimensionless.

Phase equilibrium.

Phase equilibrium is the coexistence of thermodynamically equilibrium phases that form a heterogeneous system.

A phase F is a set of parts of the system that are identical in chemical composition and physical properties, are in thermodynamic equilibrium with one another, and are separated from the other parts by interfaces. Any homogeneous system is single-phase, that is, it has no internal boundaries. A heterogeneous system contains several phases (at least two), separated by internal boundaries (sometimes called interfaces).

A component is an individual chemical substance that is part of the system. Only a substance that can, at least in principle, be isolated from the system and exist independently for a sufficiently long time counts as a component.

The number of independent components K of a system is the number of components needed to express the complete composition of the system. It is equal to the total number of components minus the number of chemical reactions occurring between them.

Phase transitions are transitions of a substance from one phase state to another, accompanied by a change in the parameters characterizing the thermodynamic equilibrium.

The variance C of a system is the number of external conditions (temperature, pressure, concentration, etc.) that the experimenter can change without changing the number of phases in the system.

The phase rule, a consequence of the second law of thermodynamics, relates the number of phases in equilibrium, the number of independent components, and the number of parameters necessary for a complete description of the system:

The number of degrees of freedom (variance) of a thermodynamic system in equilibrium, on which the only external factors acting are pressure and temperature, is equal to the number of independent components minus the number of phases plus two:

C = K - F + 2
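As a minimal illustration, the phase rule can be transcribed directly into code and checked against the familiar case of pure water:

```python
# Direct transcription of the Gibbs phase rule C = K - F + 2 (pressure and
# temperature as the only external factors).

def degrees_of_freedom(components, phases):
    """Variance C = K - F + 2; negative values are physically impossible."""
    c = components - phases + 2
    if c < 0:
        raise ValueError("more phases than the phase rule allows")
    return c

# Pure water (K = 1) at the triple point (ice + liquid + vapour, F = 3):
# the system is invariant, C = 0.
assert degrees_of_freedom(1, 3) == 0
# A single liquid phase of pure water (F = 1): T and p vary independently, C = 2.
assert degrees_of_freedom(1, 1) == 2
```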

Phase diagrams.

Exploring property dependencies

So, any economic theory that is not based on physics is a utopia!

To understand what wealth is, you should not read economics books but study the basics of thermodynamics, which was born at about the same time as Marx's Capital.

Thermodynamics was born because people wanted to subjugate the "driving force of fire", for which an efficient steam engine had to be created. That is why at first thermodynamics was occupied with the study of heat.

However, over time, thermodynamics has expanded significantly and has become a theory about the transformations of all forms of energy. In this form, thermodynamics exists to this day.

The value of thermodynamics turned out to be so great that the English writer, physicist and statesman Charles Percy Snow proposed introducing a test for general culture, according to which ignorance of the second law of thermodynamics would be equated with ignorance of the works of Shakespeare.

Thermodynamics is based on a small number of statements that, in a condensed form, have absorbed the vast experience of people in the study of energy.

These statements are called the laws, or principles, of thermodynamics.

There are four laws (principles) of thermodynamics.

The second law was the first to be formulated, and the zeroth law was the last; the first and third laws were established in between.

The zeroth law of thermodynamics was formulated about a hundred years ago.

For progressors and for business, the zeroth law is perhaps even more important than the famous second law, and here is why.

First, it states the following: regardless of the initial state of an isolated system, thermodynamic equilibrium will eventually be established in it.

It is this statement that opens the way to a scientific understanding of the nature of wealth.

Second, the zeroth law introduces the concept of temperature into the language of science.

And strange as it may sound, it is this very deep concept (temperature) that allows us to describe the conditions necessary for the emergence of new wealth.

Although, if we forget about internal combustion engines and remember the incubator instead, there is nothing strange about it.

The zeroth law is formulated as follows:

If system A is in thermodynamic equilibrium with system B, and B, in turn, is in equilibrium with system C, then system A is in equilibrium with C. Their temperatures are equal.

The first law of thermodynamics was formulated in the middle of the 19th century. Kelvin stated it briefly as follows: in any isolated system the supply of energy remains constant.

Kelvin gave this formulation because it corresponded to his religious views. He believed that the Creator at the time of the creation of the Universe endowed it with a reserve of energy, and this divine gift would exist forever.

The irony of the situation lies in the following. According to the theory of the expanding Universe, the total energy of the Universe is indeed constant, but equal to zero. The positive part of the energy of the Universe, which is equivalent to the mass of particles existing in the Universe, can be exactly compensated by the negative part of the energy due to the gravitational potential of the attraction field.

The second law of thermodynamics states that the spontaneous transfer of heat from a less heated body to a hotter one is impossible.

If we compare the first and second laws of thermodynamics with each other, then we can say this: the first law of thermodynamics prohibits the creation of a perpetual motion machine of the first kind, and the second law of thermodynamics prohibits the creation of a perpetual motion machine of the second kind.

A perpetual motion machine of the first kind is an engine that does work without drawing energy from any source. A perpetual motion machine of the second kind is an engine that has a coefficient of efficiency equal to one. This is an engine that converts all 100% of heat into work.

But according to Marx's theory, a hired worker is a mechanism whose efficiency is greater than one, and Marx sees no problem in inventing this super-perpetual motion machine. Well done, Marx! Modern PhD economists see no problem with it either, as if physics did not exist for them at all!

The third law of thermodynamics states that it is impossible to cool matter to absolute zero in a finite number of steps.

In conclusion, one piece of advice: search the Internet for information about the perpetual motion machine of the third kind. First, it is interesting. Second, a progressor must understand that all economists are people who are building a perpetual motion machine of the third kind.

The postulate of the existence of a state of thermodynamic equilibrium. Postulation of the existence of a special intensive state parameter - temperature. Thermodynamic meaning of temperature. Temperature in statistical mechanics. The equation of state of a thermodynamic system. Mendeleev-Clapeyron equation. van der Waals equation.

First law of thermodynamics

First law of thermodynamics (δQ = dU + δA). Internal energy is a function of the state of the system. Heat and work are ways of energy transfer (transition functions). Application of the first law to the characteristics of ideal thermodynamic processes. Adiabatic equation.

Heat capacity and forms of its expression. Heat capacity of an ideal gas at constant volume c V and constant pressure c p , Mayer formula: c p - c V = R. Energy of translational and rotational motion of the molecule as a whole and vibrations of atoms inside the molecule. The number of degrees of freedom of a molecule. Energy distribution over degrees of freedom. Temperature dependence of heat capacity.

Heat capacity of solids. Internal energy and heat capacity of a solid body. Dulong-Petit law. Neumann-Kopp rule. Temperature dependence of heat capacity.

Difficulties of the classical theory of heat capacity.

23. Quantum theory of heat capacity

Crystal as a collection of quantum harmonic oscillators. Phonons. The average value of the oscillator energy. Distribution function of the number of normal vibrations in frequency.

Einstein's quantum theory of heat capacity.

Debye model. Characteristic Debye temperature Θ_D.

Contribution to the heat capacity of conduction electrons.

Magnetic component of heat capacity.

Application of the first law to chemical processes

Thermochemistry is a branch of thermodynamics. Thermal effects of reactions. Exo- and endothermic transformations. Thermal effects of chemical reactions at constant volume (Q V) and pressure (Q p). Hess' law. Standard state. "Standard" thermodynamic quantities.

Consequences from the law of Hess. Thermochemical equations. Heats of formation, melting, evaporation. The role of thermal processes in technology.



The dependence of the thermal effect of a chemical reaction on temperature; the Kirchhoff equation: dQ_V/dT = -(c_V(final) - c_V(initial)).

Second law of thermodynamics

Heat engine efficiency. Carnot cycle. Carnot's theorems (1. η_K = 1 - T₂/T₁; 2. η_K = η_max). Refrigerator operation. Coefficient of performance β.

Absolute thermodynamic temperature scale.

Thermodynamic definition of entropy, its properties

The equality Q₁/T₁ + Q₂/T₂ = 0 for a reversible Carnot cycle. The equality ∮δQ/T = 0 for any closed reversible process; reduced heat. Definition of entropy as a state function (dS = δQ/T).

The inequalities dS > δQ_irrev/T and ∮δQ/T ≤ 0 for an irreversible cycle. The direction of processes in isolated systems and the conditions of thermodynamic equilibrium. The law of entropy increase. Formulation of the second law of thermodynamics in terms of entropy.

Calculation of entropy for isothermal, isobaric and isochoric processes of an ideal gas.
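The entropy calculations listed above rest on the standard ideal-gas formulas ΔS = nR ln(V₂/V₁) (isothermal), ΔS = nC_V ln(T₂/T₁) (isochoric) and ΔS = nC_p ln(T₂/T₁) (isobaric); a minimal sketch, with illustrative amounts and state values:

```python
# Standard entropy-change formulas for n moles of ideal gas.
import math

R = 8.314  # J/(mol*K)

def dS_isothermal(n, v1, v2):
    return n * R * math.log(v2 / v1)      # dS = nR ln(V2/V1)

def dS_isochoric(n, cv, t1, t2):
    return n * cv * math.log(t2 / t1)     # dS = nCv ln(T2/T1)

def dS_isobaric(n, cp, t1, t2):
    return n * cp * math.log(t2 / t1)     # dS = nCp ln(T2/T1)

# Doubling the volume of 1 mol isothermally: dS = R ln 2, about 5.76 J/K.
dS = dS_isothermal(1.0, 1.0, 2.0)
```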

Gibbs energy and Helmholtz energy

The generalized form of the first and second laws of thermodynamics: TdS = dU + pdV. Thermodynamic potentials (dU(S,V) = TdS - pdV, dH(S,p) = TdS + Vdp, dF(T,V) = -SdT - pdV, dG(T,p) = -SdT + Vdp). Helmholtz energy F (free energy). Gibbs energy G. The direction of processes in non-isolated systems and the conditions of thermodynamic equilibrium. The Gibbs-Helmholtz equation. Maxwell's relations.

Application of the second law of thermodynamics to phase transitions. Clausius-Clapeyron equation.

Probabilistic (statistical) interpretation of the concept of entropy

Thermodynamic probability W. Connection of entropy with thermodynamic probability; statistical interpretation of the concept of entropy. Boltzmann's principle as formulated by Planck. Justification of the formula S = k B lnW.

Refined formulation of the second law of thermodynamics. Limits of applicability of the second law of thermodynamics. Criticism of the theory of the "heat death of the Universe".

Third law of thermodynamics

The problem of determining the constants of integration in the determination of thermodynamic quantities. Insufficiency of I and II laws of thermodynamics for calculating chemical affinity.

Third law of thermodynamics (Nernst's theorem: lim(T→0) (∂S/∂x)_T = 0, lim(T→0) S(T,x) = S₀). Planck's formulation of the third law (Planck's postulate: S₀ = 0). Absolute entropy.

Consequences of the Nernst heat theorem. Behavior of the thermodynamic coefficients as T → 0. Unattainability of absolute zero temperature. Violations of the third law of thermodynamics in model systems.

Calculation of the absolute values of the entropy of solid, liquid and gaseous substances. Use of tables of thermodynamic functions for equilibrium calculations.

EDUCATIONAL AND METHODOLOGICAL MATERIALS ON THE DISCIPLINE

a) basic literature:

Kireev V. A. Physical chemistry course. M.: Chemistry. 1975. 776 p.

b) additional literature:

· Course of physical chemistry. In 2 volumes. Gerasimov Ya.I., Dreving V.P., Eremin E.N., Kiselev A.V., Lebedev V.P., Panchenkov G.M., Shlygin A.I. Ed. Ya.I. Gerasimov. M.-L.: Chemistry. 1973. V.1, 626 p.; V.2, 625 p.

· Fundamentals of physical chemistry. V.M. Glazov. M.: Higher school. 1981. 456 p.

· Physical chemistry. A.A. Zhukhovitsky, L.A. Shvartsman. Moscow: Metallurgy. 1987. 688 p.

· Physical chemistry. Theoretical and practical guidance. Ed. B.P. Nikolsky. L.: Chemistry. 1987. 880 p.

· Physical research methods in inorganic chemistry. I.M. Zharsky, G.I. Novikov. M.: Higher school. 1988. 271 p.

· Collection of examples and problems in physical chemistry. I.V. Kudryashov, G.S. Karetnikov. M.: Higher school. 1991. 527 p.

· Physical chemistry. Stromberg A.G., Semchenko D.P. M.: Higher school. 2001. 527 p.

· Physical chemistry. In 2 books. Ed. K.S. Krasnov. M.: Higher school. 2001. V.1. The structure of matter. Thermodynamics. 512 p. V.2. Electrochemistry. Chemical kinetics and catalysis. 319 p.

· Nanotechnology. Poole C., Owens F. M.: Technosphere. 2004. 328 p.

· Fundamentals of physical chemistry. Theory and tasks. Eremin V.V., Kargov S.I., Uspenskaya I.A., Kuzmenko N.E., Lunin V.V. M.: Exam. 2005. 480 p.

· Workshop on physical chemistry. Roshchina T.M., Zhiryakova M.V., Tiflova L.A., Ermilov A.Yu. M.V. Lomonosov. 2010. 91 p.

· Brief reference book of physical and chemical quantities. Ed. A.A. Ravdelya, A.M. Ponomareva. L .: Chemistry, 1983 or St. Petersburg: Chemistry, 1999.

Bush A.A. Technology of ceramic materials, features of obtaining ceramics of the HTSC phase YBa2Cu3O7-δ. Tutorial. M.: MIREA. 2000. 79 p.

Bush A.A. Methods of derivatographic and X-ray phase analysis. Guidelines and control tasks for the implementation of laboratory work on the course "Physical chemistry of materials and processes in electronic technology." MIREA. 2010. 40 p. (No. 0968).

Bush A.A. Methods for growing single crystals, obtaining Al2O3 crystals by crucibleless zone melting. Guidelines and control tasks for the implementation of laboratory work on the course "Physical chemistry of materials and processes in electronic technology." MIREA. 2011. 40 p.

Complex compounds: Guidelines / Comp. V.P. Kuzmicheva, G.N. Olisova, N.I. Ulyanov. Veliky Novgorod: NovGU, 2006. 15 p.

Modern crystallography. V. 1-4. M.: Nauka. 1980. 407 p.

Walter Steurer. What is a crystal? Introductory remarks to an ongoing discussion. Z. Crystallogr. 222 (2007) 308–309 / DOI 10.1524/zkri.2007.222.6.308

Kaurova I.A., Melnikova T.I. Modulated crystals: from theory to practice. Tutorial. Moscow: MITHT im. M.V. Lomonosov, 2011. 76 p.: ill.

The textbook contains basic information about the structural features of aperiodic structures and about the methods used to study them. Using modulated crystals as an example, structure calculation with the Jana2006 and Superflip programs is demonstrated. Intended for master's students studying the disciplines "Methods for the study of real crystal structure", "Diffraction methods for the study of crystalline materials", "Diffraction methods for the study of rare elements and materials based on them" and "Methods for the study of crystal structure"; for students studying the discipline "Methods for the research of phase composition and structure"; and for graduate students, researchers and faculty wishing to improve their skills.

Stromberg A.G., Semchenko D.P. Physical chemistry. Ed. Prof. A.G. Stromberg. 4th edition, corrected. Moscow: Higher school. 2001.

Semiokhin I.A. Physical chemistry: Textbook. Moscow: Publishing House of Moscow State University, 2001. 272 p. ISBN 5-211-03516-X

This textbook aims to give an idea of the theoretical foundations, current state and practical applications of physical chemistry in geology and soil science. The book outlines the basic laws and relations of thermodynamics; the theory of phase, adsorption and chemical equilibria; the foundations of the theory of solutions; the thermodynamics of non-equilibrium processes and chemical kinetics; ideas about the equilibrium and non-equilibrium properties of electrolyte solutions; and the concepts of electrochemical circuits and their electromotive forces (EMF), with the application of the EMF method in chemistry and geology.

For students, graduate students and researchers working in the field of engineering geology, hydrogeology, geocryology and protection of the geological environment, as well as soil scientists.

N. Kobayashi "Introduction to nanotechnology", M., "Binom", 2005

2. Additional literature:

Ed. Ya.I. Gerasimova "Course of physical chemistry", M., 19xx

L.I. Antropov "Theoretical electrochemistry", M., 1975.

E.A. Efimov, I.G. Yerusalimchik "Electrochemistry of germanium and silicon", M., 1963.

Yu.A. Karpov, A.P. Savostin, V.D. Salnikov "Analytical control of metallurgical production", M., 1995.

Poole, F. Owens "Nanotechnologies", M., 2005

Bush A.A. Pyroelectric effect and its applications. Textbook allowance. – M.: MIREA, 2005. – 212 p.

Bush A.A. Study of the pyroelectric effect by the quasi-static method. Guidelines for the implementation of laboratory work. MIREA, 2006, 31 p. (No. 0512).

Bush A.A. Study of the piezoelectric effect by the oscillating load method. Guidelines and control tasks for the implementation of laboratory work. MIREA, 2008, 31 p. (No. 0745).

Bush A.A. Physical and chemical bases and methods of growth of single crystals, growth of Al2O3 crystals by crucibleless zone melting. Guidelines for the implementation of laboratory work on the course "Physical Chemistry of Materials and Processes of Electronic Engineering" for students studying in specialties 210104 and 210106. Electronic edition on CD-R, 2011, MSTU MIREA. State registration number of the obligatory copy of the electronic publication: 0321200637. Moscow State Technical University of Radio Engineering, Electronics and Automation. 2011.