Elements of the theory of random processes and the theory of queuing. Theory of random processes

Abstract of lectures on the discipline "Theory of random processes"

TOPIC 1. BASIC CONCEPTS OF THE THEORY OF RANDOM PROCESSES
1.1. Definition of a random process. Basic approaches to specifying random processes. The concept of realization and section. Elementary random processes
1.2. Some classes and types of random processes
TOPIC 2. ELEMENTS OF THE CORRELATION THEORY OF RANDOM PROCESSES
2.1. The concept of the correlation theory of random processes
2.2. Mathematical expectation and variance of a random process. Standard deviation
2.3. Correlation function of a random process and its properties. Normalized correlation function
2.4. Cross-correlation function and normalized cross-correlation function of two random processes
2.5. Probabilistic characteristics of the sum of two random variables
TOPIC 3. ELEMENTS OF RANDOM ANALYSIS
3.1. Convergence and continuity
3.2. Derivative of a random process and its properties
3.3. The integral of a random process and its properties
TOPIC 4. CANONICAL EXPANSIONS OF RANDOM PROCESSES
4.1. The concept of canonical decomposition of a random process
4.2. The concept of a generalized function. The Dirac delta function. Integral canonical representation of random processes
4.3. Linear and non-linear transformations of random processes
TOPIC 5. STATIONARY RANDOM PROCESSES
5.1. The concept of a stationary random process. Stationarity in the narrow and broad senses
5.2. Properties of probabilistic characteristics of a stationary random process
5.3. Stationarily coupled random processes. Derivative and integral of a stationary random process
5.4. Ergodic stationary random processes and their characteristics
5.5. Event streams
TOPIC 6. MARKOV CHAINS
6.1. Markov chains

TOPIC 1. BASIC CONCEPTS OF THE THEORY OF RANDOM PROCESSES

1.1. Definition of a random process. Basic approaches to specifying random processes. The concept of realization and section. Elementary random processes.

A random (stochastic, probabilistic) process is a function X(t) of a real variable t whose values are random variables.
In the theory of random processes, t is interpreted as time, taking values from some subset T of the set of real numbers (t ∈ T, T ⊆ R).
Within classical mathematical analysis, the function y = f(t) is understood as a type of dependence between the variables t and y in which a specific numerical value of the argument t corresponds to a unique numerical value of the function y. For random processes the situation is fundamentally different: fixing a specific argument t produces a random variable X(t) with a known distribution law (if it is a discrete random variable) or with a given distribution density (if it is a continuous random variable). In other words, the characteristic under study is random at each moment of time, with a non-random distribution.
The values that an ordinary function y = f(t) takes at each moment of time completely determine its structure and properties. For random processes the situation is different: here it is not at all enough to know the distribution of the random variable X(t) for each value of t; information is also needed about the expected changes and their probabilities, that is, about the degree to which the upcoming values of the random process depend on its history.

The most general approach to describing a random process is to specify all of its multivariate distributions, i.e., the probabilities of the simultaneous occurrence of the following events:

∀ t1, t2,…,tn ∈ T, ∀ n ∈ N: X(ti) ≤ xi, i = 1, 2,…, n;

F(t1; t2;…; tn; x1; x2;…; xn) = P(X(t1) ≤ x1; X(t2) ≤ x2;…; X(tn) ≤ xn).

This way of describing random processes is universal but very cumbersome. To obtain significant results, the most important special cases are singled out, which admit a more advanced analytical apparatus. In particular, it is convenient to consider the random process X(t, ω) as a function of two variables, t ∈ T and ω ∈ Ω, which for any fixed value of t ∈ T becomes a random variable defined on the probability space (Ω, A, P), where Ω is a non-empty set of elementary events ω; A is the σ-algebra of subsets of the set Ω, that is, the set of events; P is a probability measure defined on A.

A non-random numerical function x(t) = X(t, ω0) is called a realization (trajectory) of the random process X(t, ω).
A section of the random process X(t, ω) is the random variable X(t0, ω) corresponding to a fixed value t = t0.

If the argument t takes all real values or all values from some interval T of the real axis, then one speaks of a random process with continuous time. If t takes only fixed values, one speaks of a random process with discrete time.
If every section of a random process is a discrete random variable, the process is called a process with discrete states. If every section is a continuous random variable, the random process is called a process with continuous states.
In the general case a random process cannot be specified analytically. The exception is the so-called elementary random processes, whose functional form is known and in which random variables enter as parameters:
X(t) = X(t, A1,…, An), where Ai, i = 1,…, n, are random variables with specified distributions.
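To make the notion concrete, here is a minimal simulation sketch in Python (the particular process X(t) = A1·sin t + A2 and the distributions of A1, A2 are assumptions chosen for illustration, not taken from the text above): it draws several realizations of an elementary random process and extracts one section.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0 * np.pi, 200)        # time grid

    # Elementary random process X(t) = A1*sin(t) + A2: the functional form
    # is fixed and non-random; randomness enters only through A1 and A2.
    n_real = 5
    A1 = rng.normal(1.0, 0.3, size=n_real)        # random amplitude
    A2 = rng.uniform(-0.5, 0.5, size=n_real)      # random offset
    realizations = A1[:, None] * np.sin(t)[None, :] + A2[:, None]

    # Each row is a realization: an ordinary non-random function of t.
    # A section is the random variable X(t0) taken across realizations:
    t0_index = 50
    section = realizations[:, t0_index]
    print("values of the section X(t0):", section)

Fixing a row gives a realization; fixing a column gives a section.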

1.2. Some classes and types of random processes

1.2.1. Gaussian random processes

A random process X(t) is called Gaussian if all its finite-dimensional distributions are normal, i.e., for any
t1, t2,…, tn ∈ T
the random vector
(X(t1); X(t2);…; X(tn))
has the following distribution density:

f(x1; x2;…; xn) = (2π)^(−n/2) Δ^(−1/2) exp( −(1/(2Δ)) Σ_{i,j=1}^{n} Δij (xi − ai)(xj − aj) ),

where ai = M X(ti); σi² = M(X(ti) − ai)²; cij = M((X(ti) − ai)(X(tj) − aj)); Δ = det(cij);
Δij is the algebraic complement (cofactor) of the element cij.

1.1.2. Random processes with independent increments

A random process X(t) is called a process with independent increments if its increments on non-overlapping time intervals do not depend on each other: for any
t1, t2,…, tn ∈ T: t1 ≤ t2 ≤ … ≤ tn,
the random variables
X(t2) − X(t1); X(t3) − X(t2); …; X(tn) − X(tn−1)
are independent.

1.1.3. Random processes with uncorrelated increments

A random process X(t) is called a process with uncorrelated increments if the following conditions hold:
1) ∀ t ∈ T: M X²(t) < ∞;
2) ∀ t1, t2, t3, t4 ∈ T: t1 ≤ t2 ≤ t3 ≤ t4: M((X(t2) − X(t1))(X(t4) − X(t3))) = 0.

1.2.4. Stationary random processes (see Topic 5)

1.2.5. Markov random processes

We confine ourselves to the definition of a Markov random process with discrete states and discrete time (a Markov chain).

Let a system A be in one of the incompatible states A1; A2;…; An, and let the probability Pij(s) that in the s-th trial the system passes from the state Ai to the state Aj not depend on the states of the system in the trials preceding the (s−1)-th. A random process of this type is called a Markov chain.

1.2.6. Poisson random processes

A random process X(t) is called a Poisson process with parameter a (a > 0) if it has the following properties:
1) ∀ t ∈ T, where T = [0; ∞), and X(0) = 0;
2) X(t) is a process with independent increments;
3) for any 0 ≤ t1 < t2 the increment X(t2) − X(t1) has a Poisson distribution with parameter a(t2 − t1), i.e., P(X(t2) − X(t1) = k) = (a(t2 − t1))^k e^(−a(t2−t1)) / k!, k = 0, 1, 2,…

TOPIC 3. ELEMENTS OF RANDOM ANALYSIS

3.3. The integral of a random process and its properties

The integral of a random process X(t) over [0; t] is the random variable Y(t) defined as the limit in root mean square, as λ → 0 (n → ∞), of the integral sums

Y(t) = l.i.m. Σ_{i=0}^{n−1} X(si)(t_{i+1} − ti), where si ∈ (ti; t_{i+1}); λ = max(t_{i+1} − ti), i = 0,…, n−1.

Theorem 4. The mathematical expectation of the integral of a random process equals the integral of its mathematical expectation: if Y(t) = ∫₀ᵗ X(s) ds, then m_Y(t) = ∫₀ᵗ m_X(s) ds.
Theorem 5. The correlation function of the integral of a random process X(t) equals the double integral of its correlation function: K_Y(t1; t2) = ∫₀^{t1} ∫₀^{t2} K_X(s1; s2) ds2 ds1.
Theorem 6. The cross-correlation function of the random process X(t) and its integral equals the integral of the correlation function of X(t): R_XY(t1; t2) = ∫₀^{t2} K_X(t1; s) ds.
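A small Monte Carlo sketch of Theorem 4 (the process X(t) = V·t with V ~ N(1, 1) is an assumption made purely for the example): the mean of the integral of X over [0; 1] should match the integral of m_X(s) = s, which equals 1/2.

    import numpy as np

    rng = np.random.default_rng(1)

    # X(t) = V*t with M V = 1, so m_X(s) = s and the integral of m_X over
    # [0, 1] equals 0.5; Theorem 4 says M[integral of X] equals the same.
    n_paths, n_grid = 100_000, 201
    s = np.linspace(0.0, 1.0, n_grid)
    X = rng.normal(1.0, 1.0, size=n_paths)[:, None] * s[None, :]

    # trapezoidal rule along each realization
    ds = s[1] - s[0]
    integrals = (X[:, :-1] + X[:, 1:]).sum(axis=1) * ds / 2.0

    print("M[integral of X] ~", integrals.mean())   # close to 0.5
    print("integral of m_X  =", 0.5)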

TOPIC 4. CANONICAL EXPANSIONS OF RANDOM PROCESSES

4.1. The concept of canonical decomposition of a random process

A random variable V is called centered if its mathematical expectation equals 0. An elementary centered random process is the product of a centered random variable V and a non-random function φ(t): X(t) = V φ(t). An elementary centered random process has the following characteristics:

m_X(t) = 0; K_X(t1; t2) = D_V φ(t1) φ(t2); D_X(t) = D_V φ²(t).

An expression of the form

X(t) = m_X(t) + Σ_k V_k φ_k(t),

where φ_k(t), k = 1; 2;… are non-random functions and V_k, k = 1; 2;… are uncorrelated centered random variables, is called a canonical expansion of the random process X(t); the random variables V_k are called the coefficients of the canonical expansion, and the non-random functions φ_k(t) its coordinate functions.

Consider the characteristics of such a random process. Its mathematical expectation is M X(t) = m_X(t), since by the condition M V_k = 0 for all k. Since, moreover, the V_k are uncorrelated (M(V_k V_j) = 0 for k ≠ j), the correlation function is

K_X(t1; t2) = Σ_k D_k φ_k(t1) φ_k(t2), where D_k = D V_k,

and the variance is D_X(t) = K_X(t; t) = Σ_k D_k φ_k²(t).

Obviously, the same random process has canonical expansions of different forms depending on the choice of coordinate functions. Moreover, even when the coordinate functions have been chosen, there remains arbitrariness in the distributions of the random variables V_k. In practice, estimates m̂_X(t) and K̂_X(t1; t2) of the mathematical expectation and the correlation function are obtained from the results of experiments. After expanding K̂_X(t1; t2) into a double Fourier series in the coordinate functions φ_k(t),

K̂_X(t1; t2) = Σ_k D_k φ_k(t1) φ_k(t2),

one obtains the values of the variances D_k of the random variables V_k.
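The following sketch illustrates the canonical expansion numerically (the coordinate functions cos(kπt) and the variances D_k = 1/k² are illustrative assumptions): it simulates X(t) = m(t) + Σ V_k φ_k(t) and compares the empirical correlation function with Σ D_k φ_k(t1) φ_k(t2).

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 50)

    # assumed expansion: m(t) = t, phi_k(t) = cos(k*pi*t), D_k = 1/k^2
    k = np.arange(1, 6)
    D = 1.0 / k**2
    phi = np.cos(np.pi * k[:, None] * t[None, :])            # (5, 50)

    n_paths = 100_000
    V = rng.normal(0.0, np.sqrt(D), size=(n_paths, k.size))  # uncorrelated V_k
    X = t[None, :] + V @ phi                                 # realizations

    Xc = X - X.mean(axis=0)                                  # centered
    K_emp = (Xc.T @ Xc) / n_paths                            # empirical K_X
    K_th = (D[:, None, None] * phi[:, :, None] * phi[:, None, :]).sum(axis=0)
    print("max |K_emp - K_theory| =", np.abs(K_emp - K_th).max())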
4.2. The concept of a generalized function. The Dirac delta function. Integral canonical representation of random processes.

A generalized function is the limit of a sequence of a one-parameter family of continuous functions.
The Dirac delta function δ(t) is the generalized function obtained by passing to the limit as ε → 0 in a family of functions f_ε(t), such as, for example, f_ε(t) = 1/ε for |t| ≤ ε/2 and f_ε(t) = 0 otherwise.

Among the properties of the δ-function, we note the following:
1. δ(t) = 0 for t ≠ 0;
2. ∫_{−∞}^{+∞} δ(t) dt = 1;
3. if f(t) is a continuous function, then ∫_{−∞}^{+∞} f(t) δ(t − t0) dt = f(t0).
A random process X(t) whose correlation function has the form K_X(t1; t2) = W(t1) δ(t2 − t1) is called non-stationary "white noise". If W(t1) = W = const, then X(t) is stationary "white noise".

As follows from the definition, no two, even arbitrarily close, sections of "white noise" are correlated. The function W(t) is called the intensity of the "white noise".

An integral canonical representation of a random process X(t) is an expression of the form

X(t) = m_X(t) + ∫_Λ V(λ) φ(t; λ) dλ,

where V(λ) is a random centered function of the continuous argument λ, and φ(t; λ) is a non-random function of the continuous arguments t and λ.

The correlation function of such a random process has the form

K_X(t1; t2) = ∫_Λ ∫_Λ M(V(λ1) V(λ2)) φ(t1; λ1) φ(t2; λ2) dλ1 dλ2.

It can be shown that there exists a non-random function G(λ) such that

M(V(λ1) V(λ2)) = G(λ1) δ(λ2 − λ1),

where G(λ1) is the dispersion density and δ(x) is the Dirac delta function. We get

K_X(t1; t2) = ∫_Λ G(λ) φ(t1; λ) φ(t2; λ) dλ.

Therefore, the variance of the random process X(t) is

D_X(t) = K_X(t; t) = ∫_Λ G(λ) φ²(t; λ) dλ.

4.3. Linear and non-linear transformations of random processes

The following problem is considered: an "input signal" X(t), which has the character of a random process, is fed to the input of a system (device, converter) S. The system converts it into an "output signal" Y(t):

X(t) → S → Y(t).

Formally, the transformation of the random process X(t) into Y(t) can be described by means of the so-called operator At of the system:
Y(t) = At(X(t)).
The index t indicates that this operator performs a transformation in time. The following formulations of the problem of transformation of a random process are possible.
1. The distribution laws or general characteristics of the random process X(t) at the input of the system S are known and the operator At of the system S is given; it is required to determine the distribution law or the general characteristics of the random process Y(t) at the output of the system S.
2. The distribution laws (general characteristics) of the random process X(t) and the requirements for the random process Y(t) are known; it is required to determine the form of the operator At of the system S that best satisfies the given requirements for Y(t).
3. The distribution laws (general characteristics) of the random process Y(t) are known and the operator At of the system S is given; it is required to determine the distribution laws or general characteristics of the random process X(t).
The following classification of the operators At of the system S is accepted: system operators are divided into linear (L) and nonlinear (N); linear operators, in turn, are divided into linear homogeneous (L0) and linear inhomogeneous (Ln).

1. Consider the action of a linear inhomogeneous operator
Ln(...) = L0(…) + φ(t)
on a random process X(t) having the canonical expansion

X(t) = m_X(t) + Σ_k V_k φ_k(t).

We get:

Y(t) = L0(m_X(t) + Σ_k V_k φ_k(t)) + φ(t) = L0(m_X(t)) + φ(t) + Σ_k V_k L0(φ_k(t)).

Let us introduce the notation

m_Y(t) = L0(m_X(t)) + φ(t); ψ_k(t) = L0(φ_k(t)).

Then the canonical expansion of Y(t) takes the form

Y(t) = m_Y(t) + Σ_k V_k ψ_k(t).

The mathematical expectation of the random process Y(t):

M Y(t) = m_Y(t) = L0(m_X(t)) + φ(t).

The correlation function of the random process Y(t):

K_Y(t1; t2) = Σ_k D_k ψ_k(t1) ψ_k(t2).

Consequently,

K_Y(t1; t2) = Σ_k D_k L0^(t1)(φ_k(t1)) L0^(t2)(φ_k(t2)).

On the other hand, this means that the correlation function of Y(t) is obtained by applying the operator L0 to the correlation function K_X(t1; t2), first with respect to the argument t1 and then with respect to t2.

The variance of the random process Y(t):

D_Y(t) = K_Y(t; t) = Σ_k D_k ψ_k²(t).
In conclusion of this section, we note that the operators of differentiation and integration of random processes are linear homogeneous.
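A short continuation of the numerical sketch above (the choice of L0 as multiplication by a non-random g(t) = 1 + 0.5t and of the inhomogeneous term φ(t) = sin 2πt are assumptions for illustration): a linear homogeneous operator acts on the expansion simply by transforming m_X(t) and the coordinate functions.

    import numpy as np

    t = np.linspace(0.0, 1.0, 50)
    k = np.arange(1, 6)
    D = 1.0 / k**2
    phi_k = np.cos(np.pi * k[:, None] * t[None, :])  # coordinate functions of X
    m_X = t                                          # expectation of X

    g = 1.0 + 0.5 * t                # L0: multiplication by a non-random g(t)
    phi = np.sin(2.0 * np.pi * t)    # inhomogeneous term phi(t)

    m_Y = g * m_X + phi              # m_Y = L0(m_X) + phi
    psi_k = g[None, :] * phi_k       # psi_k = L0(phi_k)

    K_Y = (D[:, None, None] * psi_k[:, :, None] * psi_k[:, None, :]).sum(axis=0)
    D_Y = np.diag(K_Y)               # D_Y(t) = K_Y(t, t)
    print("D_Y at t = 0, ~0.5, 1:", D_Y[[0, 25, -1]])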
2. A quadratic transformation is considered:
Y(t) = (X(t))², X(t) = Σ_k V_k φ_k(t),
where the V_k are centered random variables having distributions symmetric about zero, any four of which are collectively independent. Then

Y(t) = Σ_k Σ_j V_k V_j φ_k(t) φ_j(t), M Y(t) = Σ_k D_k φ_k²(t).

We introduce the non-random functions

ψ_kj(t) = φ_k(t) φ_j(t)

and the random variables

W_kj = V_k V_j − M(V_k V_j),

which are centered and, by virtue of the assumed symmetry and four-wise independence, uncorrelated. Then the random process Y(t) takes the form

Y(t) = M Y(t) + Σ_k W_kk ψ_kk(t) + 2 Σ_{k<j} W_kj ψ_kj(t).

A canonical decomposition of the random process Y(t) is obtained. The correlation function of Y(t):

K_Y(t1; t2) = Σ_k (M V_k⁴ − D_k²) ψ_kk(t1) ψ_kk(t2) + 4 Σ_{k<j} D_k D_j ψ_kj(t1) ψ_kj(t2).

The variance:

D_Y(t) = K_Y(t; t).
TOPIC 5. STATIONARY RANDOM PROCESSES

5.1. The concept of a stationary random process. Stationarity in the narrow and broad senses

A stationary (homogeneous in time) random process is a random process whose statistical characteristics do not change over time, that is, are invariant with respect to time shifts.
One distinguishes random processes stationary in the broad and in the narrow sense.

A random process X(t) is stationary in the narrow sense if all its probabilistic characteristics do not change with time, that is, if for any τ the condition
F(t1; t2;…; tn; x1; x2;…; xn) = F(t1+τ; t2+τ;…; tn+τ; x1; x2;…; xn)
holds, and hence all n-dimensional distributions depend not on the time points t1; t2;…; tn themselves but on the durations of the time intervals τi = t_{i+1} − ti.

In particular, the one-dimensional distribution density does not depend on the time t at all:

f(t; x) = f(x).

The two-dimensional density of the sections at times t1 and t2 depends only on τ = t2 − t1:

f(t1; t2; x1; x2) = f(τ; x1; x2).

The n-dimensional density of the sections at times t1; t2;…; tn depends only on τ1,…, τ_{n−1}:

f(t1;…; tn; x1;…; xn) = f(τ1,…, τ_{n−1}; x1;…; xn).

A random process X(t) is called stationary in the broad sense if its first- and second-order moments are invariant with respect to a time shift, that is, its mathematical expectation does not depend on time t and is a constant, m_X(t) = m_X = const, and its correlation function depends only on the length of the time interval between the sections: K_X(t1; t2) = k_X(τ), τ = t2 − t1.
Obviously, a random process stationary in the narrow sense is also stationary in the broad sense; the converse is not true.

5.2. Properties of probabilistic characteristics of a stationary random process

1. The mathematical expectation is constant: m_X(t) = m_X = const.

2. The variance is constant: D_X(t) = D_X = const.

3. The correlation function of a stationary random process is even:

k_X(−τ) = k_X(τ).

4. The variance of a stationary random process is a constant equal to
the value of its correlation function at the point τ = 0:

D_X = k_X(0).

5. |k_X(τ)| ≤ k_X(0).

6. The correlation function of a stationary random process is
positive definite, that is,

Σ_{i=1}^{n} Σ_{j=1}^{n} k_X(ti − tj) zi zj ≥ 0 for any n, any t1,…, tn and any real z1,…, zn.

The normalized correlation function ρ_X(τ) = k_X(τ)/k_X(0) of a stationary random process is also even and positive definite, and, moreover, |ρ_X(τ)| ≤ 1, ρ_X(0) = 1.
5.3. Stationary coupled random processes. Derivative and integral of a stationary random process

Random processes X(t) and Y(t) are called stationarily coupled if their cross-correlation function depends only on the difference of the arguments τ = t2 − t1: R_XY(t1; t2) = r_XY(τ).

The stationarity of the random processes X(t) and Y(t) themselves does not imply that they are stationarily coupled.
We note the main properties of stationarily coupled random processes and of the derivative and integral of a stationary random process:

1) r_XY(τ) = r_YX(−τ);

2) the derivative X′(t) of a stationary random process is stationary, with m_X′ = 0 and k_X′(τ) = −k″_X(τ);

3) the process X(t) and its derivative are stationarily coupled: r_XX′(τ) = k′_X(τ), r_X′X(τ) = −k′_X(τ),

where the prime denotes differentiation with respect to τ;

4) the integral Y(t) = ∫₀ᵗ X(s) ds of a stationary random process is, in general, no longer stationary; its correlation function is

K_Y(t1; t2) = ∫₀^{t1} ∫₀^{t2} k_X(s2 − s1) ds2 ds1.

5.4. Ergodic Stationary Stochastic Processes and Their Characteristics

Among stationary random processes there is a special class of processes, called ergodic, which have the following property: their characteristics obtained by averaging over the set of all realizations coincide with the corresponding characteristics obtained by averaging over time a single realization observed on an interval (0, T) of sufficiently long duration. That is, on a sufficiently large time interval any realization passes through any state, regardless of the initial state of the system at t = 0; in this sense, any single realization fully represents the entire set of realizations.

The Birkhoff-Khinchin ergodic theorem
For any random process X(t) stationary in the narrow sense and having a finite mathematical expectation, with probability 1 there exists the limit

lim_{T→∞} (1/T) ∫₀ᵀ X(t) dt = M(X(t) | J_X) for an SSP with continuous time,

lim_{n→∞} (1/n) Σ_{i=1}^{n} X(ti) = M(X(t) | J_X) for an SSP with discrete time.

If, in addition, X(t) is an ergodic stationary random process, then this limit equals M X(t) = m_X.
In the statement of the theorem, M(X(t) | J_X) is the conditional mathematical expectation of the random process X(t) with respect to J_X; J_X is the σ-algebra of events invariant with respect to X(t); an event A is said to be invariant with respect to X(t) if there exists a Borel set B such that A = {ω: X(ω, t) ∈ B}.

Sufficient conditions for ergodicity
Theorem 1. A stationary random process X(t) is ergodic with respect to the mathematical expectation if its correlation function k_X(τ) tends to zero as τ → ∞; in this case

m_X ≈ (1/T) ∫₀ᵀ x(t) dt.

Theorem 2. A stationary random process X(t) is ergodic with respect to the variance if the correlation function of the stationary random process Y(t) = X²(t) tends to zero as τ → ∞; in this case

D_X ≈ (1/T) ∫₀ᵀ (x(t) − m_X)² dt.

Theorem 3. A stationary random process X(t) is ergodic with respect to the correlation function if the correlation function of the stationary random process

Z(t, τ) = (X(t + τ) − m_X)(X(t) − m_X)

tends to zero as τ → ∞; in this case

k_X(τ) ≈ (1/(T − τ)) ∫₀^{T−τ} (x(t + τ) − m_X)(x(t) − m_X) dt.

In practical calculations, the interval (0; T) is divided into n equal parts and a point ti is selected in each subinterval (for example, its midpoint). If we confine ourselves to the formula of rectangles, we get

m_X ≈ (1/n) Σ_{i=1}^{n} x(ti),

and similarly for the estimates of the variance and the correlation function.
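A sketch of these time-average estimates on a single long realization (the discrete AR(1) process used here is an assumption, chosen because its correlation function ρ^|τ| tends to zero, so the sufficient conditions for ergodicity hold):

    import numpy as np

    rng = np.random.default_rng(3)

    # stationary ergodic test process: x[i] = rho*x[i-1] + e[i],
    # m_X = 0, D_X = 1, k_X(tau) = rho**|tau| -> 0 as tau -> infinity
    rho, n = 0.9, 500_000
    e = rng.normal(0.0, np.sqrt(1.0 - rho**2), size=n)
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = rho * x[i - 1] + e[i]

    m_hat = x.mean()                          # rectangle-formula time average
    d_hat = ((x - m_hat) ** 2).mean()         # variance estimate
    tau = 5
    k_hat = ((x[tau:] - m_hat) * (x[:-tau] - m_hat)).mean()

    print("m_hat ~ 0:", m_hat)
    print("d_hat ~ 1:", d_hat)
    print("k_hat(5) ~", rho**tau, ":", k_hat)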

5.5. Event streams
A stream of events is a sequence of events occurring at random moments in time.

Event stream properties:
1) Stationary flow.
A flow is called stationary if the probability of occurrence of m events in any time interval depends only on the number of events m and on the length τ of the interval and does not depend on the moment of time at which the interval begins.
2) No aftereffect.
A stream of events is said to have the property of no aftereffect if the probability of occurrence of m events in any period of time does not depend on whether or not events appeared at the moments of time immediately preceding this period.
The history of the flow does not affect the occurrence of events in the near future. If the flow has the property of no aftereffect, then the numbers of events occurring on non-overlapping intervals are mutually independent random variables.
3) Ordinariness.
A flow is said to have the property of ordinariness if no more than one event can occur in an infinitesimally small time interval, i.e., the occurrence of two or more events in a short period of time is practically impossible.
4) Poisson flow
If the flow simultaneously possesses the properties of stationarity, absence of aftereffect and ordinariness, then it is called the simplest (Poisson) flow.

Theorem. If a flow is the sum of a large number of independent stationary flows, the influence of each of which is negligibly small, then the total flow, provided it is ordinary, is close to the simplest.
The flow intensity λ is the average number of events occurring per unit of time.
If the flow has a constant intensity λ, then the probability of occurrence of m events during a time interval of duration τ is calculated using the Poisson formula

P_m(τ) = ((λτ)^m / m!) e^(−λτ) — the Poisson flow.
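A minimal helper for this formula (the intensity and interval below are arbitrary illustrative numbers):

    from math import exp, factorial

    def p_m(m: int, lam: float, tau: float) -> float:
        """Probability of exactly m events of a simplest (Poisson) flow
        with intensity lam on an interval of length tau."""
        a = lam * tau
        return a**m / factorial(m) * exp(-a)

    lam, tau = 2.0, 1.5     # e.g. 2 events per unit time, interval of 1.5
    probs = [p_m(m, lam, tau) for m in range(10)]
    print(probs)
    print("sum over m = 0..9:", sum(probs))   # close to 1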
The problem of a simple telegraph wave.
There is a device to which signals arrive; the signals form the simplest flow with intensity λ. The random process X(t) takes the values a and −a at the moments the signals arrive, each value with probability 1/2:

X(t): a, −a, with probabilities P: 1/2, 1/2.

We investigate the characteristics of this r.p. X(t), which takes the values ±a at random times; it is a discrete r.p. with continuous time. Its mathematical expectation is M(X(t)) = 0.

The product X(t1)X(t2) equals a² if an even number of signals arrive between t1 and t2, and −a² if an odd number arrive:

X(t1)X(t2): a², −a², with probabilities P_even, P_odd.

Let t1 < t2, so that τ = t2 − t1 > 0. Then, by the Poisson formula,

M(X(t1)X(t2)) = a²(P_even − P_odd) = a² Σ_{m=0}^{∞} (−1)^m ((λτ)^m / m!) e^(−λτ) = a² e^(−2λτ),

so the telegraph wave is stationary with k_X(τ) = a² e^(−2λ|τ|), and k_X(τ) → 0 as τ → ∞.

Consequently, the telegraph wave is an ergodic stationary random process.
Justification: the flow of sign changes must possess the following properties.
1) Stationarity: the probabilities depend only on the length of the time interval, not on its position.
2) Absence of aftereffect: individual moments of time do not appear in the formula, only the interval length τ.
3) Ordinariness: the probability of more than one event in a small interval τ is negligible. Indeed, the probability of no event is P₀(τ) = e^(−λτ) and the probability of exactly one event is P₁(τ) = λτ e^(−λτ), so the probability of more than one event is

P_{>1}(τ) = 1 − e^(−λτ) − λτ e^(−λτ) ≈ (λτ)²/2,

which for small τ tends to zero at a rate not less than the square of τ.
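The telegraph-wave computation above can be checked by simulation (the amplitude a = 1 and intensity λ = 2 are assumed for the example); the empirical product moment should follow a²e^(−2λτ):

    import numpy as np

    rng = np.random.default_rng(4)

    a, lam = 1.0, 2.0                 # amplitude and flow intensity (assumed)
    dt, n, paths = 0.01, 200, 20_000

    # Poisson counts of sign changes on each step; the parity of the
    # accumulated count flips the sign of the initial value +-a.
    flips = rng.poisson(lam * dt, size=(paths, n))
    signs = (-1.0) ** np.cumsum(flips, axis=1)
    x0 = rng.choice([-a, a], size=(paths, 1))     # P(+a) = P(-a) = 1/2
    X = x0 * signs

    for j in (0, 25, 50, 100):
        emp = (X[:, 0] * X[:, j]).mean()
        th = a**2 * np.exp(-2.0 * lam * j * dt)
        print(f"tau={j*dt:4.2f}  empirical={emp:+.4f}  theory={th:+.4f}")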

TOPIC 6. MARKOV CHAINS

6.1. Markov chains.

A Markov chain is a sequence of trials in each of which exactly one of k incompatible events A1, A2,…, Ak occurs, such that the conditional probability pij(s) that in the s-th trial the event Aj occurs, given that in the (s−1)-th trial the event Ai occurred, does not depend on the results of the earlier trials.
A discrete-time Markov chain is a chain whose states change at fixed moments of time.
A continuous-time Markov chain is a chain whose state changes occur at arbitrary (random) moments of time.
A Markov chain is called homogeneous if the conditional probability pij of the transition from the state Ai to the state Aj does not depend on the trial number s.
The probabilities pij of the transition of the system from the state Ai to the state Aj in one trial are called the transition probabilities of a homogeneous Markov chain.
The transition probabilities form the matrix of transition probabilities P = (pij), i, j = 1;…; k, each row of which sums to 1.
Markov equality
Let Pij(n) denote the probability of the transition of the system from the state Ai to the state Aj in n trials. Then for any 0 < m < n

Pij(n) = Σ_{r=1}^{k} Pir(m) Prj(n − m).

Consequences
1) n = 2, m = 1: Pij(2) = Σ_r pir prj; Pij(1) = pij; P(2) = (Pij(2)) = P·P = P²;
2) n = 3, m = 2: P(3) = P(2)·P = P³;
3) in general, P(n) = Pⁿ.
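A quick numerical illustration of the Markov equality (the 3-state matrix is an arbitrary assumed example): P(n) is simply the n-th matrix power of P.

    import numpy as np

    # transition matrix of a homogeneous chain with k = 3 states (assumed)
    P = np.array([
        [0.7, 0.2, 0.1],
        [0.3, 0.5, 0.2],
        [0.2, 0.3, 0.5],
    ])

    P2 = P @ P                              # Markov equality with n=2, m=1
    P3 = np.linalg.matrix_power(P, 3)       # P(3) = P^3

    print("P_ij(2):\n", P2)
    print("P_ij(3):\n", P3)
    print("rows of P^3 sum to 1:", P3.sum(axis=1))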

1. The concept of a random function, stochastic processes

When studying many phenomena, one systematically has to deal with random variables that change over the course of a trial. We have already met examples of such phenomena in Sections 6.2 and 9.2 in connection with the Poisson distribution law.

Examples of such random variables are: the decay of a radioactive substance, the course of a chemical reaction, the signal at the output of a radio receiver under the influence of interference, the length of the queue for tickets to a football match, price fluctuations in the market for essential goods, the workload of students during the academic semester, the trajectories of particles in Brownian motion, the ratings of candidates in electoral processes, the number of calls arriving at a telephone exchange, etc.

Such random variables, which change in the course of an experiment (observation, trial), are called random processes (random functions). A number of branches of science and technology (statistical physics, diffusion processes, chemical reaction processes, etc.) have posed new problems for probability theory that do not fit into the framework of its classical part. Many branches of human activity became interested in the study of processes, that is, phenomena occurring in time, and demanded from probability theory the development of a general theory of so-called random processes, in other words, a theory that would study random variables depending on one or more continuously changing time parameters. Let us give examples of problems illustrating the necessity of constructing a theory of random processes.

Imagine that we want to follow the movement of a molecule of a gas or liquid. This molecule collides at random times with other molecules and changes its speed and position. Obviously, the state of the molecule is subject to random changes at every moment of time. Many phenomena of nature require, for their study, the ability to calculate the probabilities with which a certain number of objects (molecules, price changes, arriving radio signals, etc.) change one or another situation. All these and many other questions are answered by the statistical theory of random processes or, as it is commonly called, the "theory of stochastic processes". Obviously, similar problems arise in physics, chemistry, astronomy, economics, genetics, etc. For example, when studying the course of a chemical reaction, the following legitimate questions arise:

What fraction of the molecules has already reacted?

How does the reaction proceed over time?

When is the reaction practically over?

A large number of phenomena proceed according to the principle of radioactive decay. The essence of this phenomenon is that an atom of a radioactive substance decays instantly, turning into an atom of another chemical element. The decay of each atom occurs abruptly, like an explosion, with the release of a certain amount of energy. Numerous observations show that, for the observer, the decays of different atoms occur at randomly taken moments of time, and the locations of these moments are mutually independent in the sense of probability theory. To study the process of radioactive decay, it is essential to determine the probability that a certain number of atoms will decay in a given period of time. If one asks only to elucidate the mathematical picture of such phenomena, one can find a simple solution to the mathematical problems these phenomena lead to.

Let us briefly describe how, by considering the problem of a particle wandering along a straight line, Fokker and Planck obtained a differential equation of diffusion theory.

Let the particle at the moment of time t = 0 be at the point x = 0, and at the moments Δt, 2Δt, 3Δt,… let it experience random shocks, as a result of which it moves each time with probability p by the amount Δx to the right and with probability q = 1 − p by the same amount Δx to the left.

Denote by P(x, t) the probability that the particle, as a result of the shocks, occupies at the time t = nΔt the position x = mΔx (it is clear that for an even number of shocks the displacement can only be an even number of steps, and for an odd number only an odd number of steps). If k denotes the number of steps taken by the particle to the right (then n − k is the number of steps made to the left), then according to the Bernoulli formula this probability equals

P(mΔx, nΔt) = C(n, k) p^k q^(n−k).

It is clear that these quantities are related by the equality m = k − (n − k) = 2k − n.

One can directly verify that the function P(x, t) satisfies the difference equation

P(x, t + Δt) = p P(x − Δx, t) + q P(x + Δx, t)    (1)

with the initial conditions P(0, 0) = 1 and P(x, 0) = 0 at x ≠ 0.

The physical nature of the problem forces us to impose certain natural restrictions on the ratios of the parameters Δx, Δt, p, q. Failure to comply with some necessary conditions, discussed below, can lead to the situation where, over a finite period of time, the particle goes off to infinity with probability equal to one. In order to exclude this possibility, we impose the following conditions on the parameters as Δx → 0, Δt → 0:

(p − q) Δx/Δt → c, Δx²/Δt → 2D,    (2)

where the quantity c expresses the speed of the flow and D the diffusion coefficient.

Subtracting from both parts of equality (1) the quantity P(x, t), we get

P(x, t + Δt) − P(x, t) = p (P(x − Δx, t) − P(x, t)) + q (P(x + Δx, t) − P(x, t)).    (3)

Let us assume that the function P(x, t) is differentiable twice with respect to x and once with respect to t. Then we have

P(x, t + Δt) − P(x, t) = (∂P/∂t) Δt + o(Δt),
P(x ± Δx, t) − P(x, t) = ±(∂P/∂x) Δx + (1/2)(∂²P/∂x²) Δx² + o(Δx²).

After substituting the obtained equalities into (3), dividing by Δt and passing to the limit on the basis of conditions (2), we finally obtain

∂P/∂t = −c ∂P/∂x + D ∂²P/∂x².    (4)

Thus, we have obtained the well-known equation, which in diffusion theory is called the Fokker-Planck equation.
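A simulation sketch of this limit passage (the step sizes and the bias p are assumptions chosen so that conditions (2) give c = 1 and D = 1.25): a biased random walk should reproduce the mean c·T and variance 2D·T of the Gaussian solution of equation (4) for a point source.

    import numpy as np

    rng = np.random.default_rng(5)

    dx, dt = 0.05, 0.001             # small steps (illustrative)
    p, q = 0.51, 0.49                # slightly biased walk
    c = (p - q) * dx / dt            # flow speed from conditions (2): 1.0
    D = dx**2 / (2.0 * dt)           # diffusion coefficient: 1.25

    T, walkers = 1.0, 20_000
    x = np.zeros(walkers)
    for _ in range(int(T / dt)):     # accumulate the random shocks
        x += np.where(rng.random(walkers) < p, dx, -dx)

    # equation (4) with a point source gives mean c*T and variance 2*D*T
    print("empirical mean:", x.mean(), "  c*T   =", c * T)
    print("empirical var :", x.var(),  "  2*D*T =", 2.0 * D * T)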

The beginnings of the general theory of stochastic processes were laid in the fundamental works of A.N. Kolmogorov and A.Ya. Khinchin in the early 1930s. A.N. Kolmogorov's article "On analytical methods in probability theory" gave a systematic and rigorous construction of the foundations of the theory of stochastic processes without aftereffect or, as is often said, processes of Markov type. In a number of Khinchin's works, the theory of so-called stationary processes was created.

Thus, the branch of mathematics that studies random phenomena in the dynamics of their development is called the theory of random processes (random functions). Its methods are widely used: in the theory of automatic control, in the analysis and planning of the financial activities of enterprises, in the processing and transmission of information (signals in radio devices, satellite communications, etc.), in economics, and in queuing theory.

Let us briefly consider the basic concepts of the theory of random processes (r.p.).

If each value t ∈ T, where T denotes some set of real numbers, is put in correspondence with a r.v. X(t), then we say that on the set T a random function (r.f.) X(t) is given. Random functions for which T ⊆ R are especially important in applications. In the case when the parameter t is interpreted as a time parameter, the random function is called a random process; that is, a random process is a family of r.v.'s X(t) depending on the parameter t ∈ T and given on the same space of elementary events Ω. It is denoted X(t), t ∈ T.

A random process can be specified in the form of a formula (analytical notation) if the form of the random function is known. For example, the r.f. X(t) = V·sin t, where the random variable V has a uniform distribution, is a r.p. For a fixed value t = t0, the r.p. turns into the r.v. X(t0), which is called a cross section of the random process.

A realization (or trajectory) of a random process is the non-random function of time x(t) into which the r.p. turns at a fixed ω, i.e., the specific form taken by the r.p. as a result of a trial. Realizations of a r.p. are denoted x1(t), x2(t),…,
where the indices indicate the trial number.

Figure 59 shows three realizations of the random process at V = 1; 2; 0.5. They resemble three sinusoidal oscillations in some mechanical process, and each such realization (trajectory) is an ordinary non-random function of t.

Fig. 59.

In this example, the r.v. V took three values in three experiments, respectively 1, 2, 0.5; that is, three realizations of the r.p. were obtained, and all three functions are non-random. If in this example we fix a moment of time, say t = t0, we get a cross section X(t0), a random variable; the sections at several moments t1, t2,… are likewise random variables. Note that the so-called one-dimensional distribution law of a random process is not an exhaustive characteristic of the r.p. A random process is the set of all its cross sections for different values of t; therefore, to describe it fully, one should consider the joint distribution function of the process sections:

F(t1,…, tn; x1,…, xn) = P(X(t1) < x1; …; X(tn) < xn),

the so-called finite-dimensional distribution law of the r.p. at the moments t1,…, tn. In other words, multidimensional random variables arise.

Thus, the concept of a r.p. is a direct generalization of the concept of a system of random variables to the case when these variables form an infinite set.

The theory of random processes is the mathematical science that studies the patterns of random phenomena in the dynamics of their development.

The theory of random processes (in another terminology, the theory of random functions) is a relatively new branch of probability theory, which has been developing especially rapidly in recent decades in connection with the ever-expanding range of its practical applications.

When studying the phenomena of the world around us, we often encounter processes, the course of which cannot be precisely predicted in advance. This uncertainty (unpredictability) is caused by the influence of random factors affecting the course of the process. Let us give some examples of such processes.

1. The voltage in the electrical network, nominally constant and equal to 220 V, actually changes over time, fluctuates around the nominal value under the influence of such random factors as the number and type of devices connected to the network, the moments of their switching on and off, etc.

2. The population of a city (or region) changes over time in a random (unpredictable) way under the influence of factors such as births, deaths, migration, etc.

3. The water level in a river (or in a reservoir) changes randomly over time depending on the weather, rainfall, snowmelt, irrigation intensity, etc.

4. A particle performing Brownian motion in the field of view of a microscope changes its position randomly as a result of collisions with liquid molecules.

5. A space rocket is flying, which must be launched at a given moment to a given point in space with a given direction and absolute value of the velocity vector. The actual movement of the rocket does not coincide with the calculated one due to such random factors as atmospheric turbulence, fuel heterogeneity, errors in command processing, etc.

6. The computer in the course of work can randomly move from state to state, for example:

S1- works properly;

S2- there is a malfunction, but it is not detected;

S3- a malfunction has been detected, its source is being searched;

S4- being repaired, etc.

Transitions from state to state occur under the influence of random factors, such as voltage fluctuations in the computer power supply network, failure of individual elements, the moment of detection of faults, the time for their elimination, etc.

Strictly speaking, in nature there are no completely non-random, exactly deterministic processes, but there are processes in whose course random factors play so weak a role that they can be neglected when studying the phenomenon (example: the revolution of the planets around the Sun). However, there are also processes in which randomness plays the main role (example: the Brownian motion of a particle considered above). Between these two extremes lies a spectrum of processes in which chance plays a greater or lesser role. Whether or not to take the randomness of a process into account also depends on what practical problem we are solving. For example, when scheduling the movement of aircraft between two points, their trajectories can be considered rectilinear and the movement uniform; the same assumptions will not do if the problem being solved is the design of an autopilot to control the flight of an aircraft.



There are two main types of problems, the solution of which requires the use of the theory of random functions (random processes).

Direct problem (analysis): the parameters of a certain device are given, together with the probabilistic characteristics (mathematical expectations, correlation functions, distribution laws) of the function (signal, process) arriving at its "input"; it is required to determine the characteristics at the "output" of the device (by which the "quality" of the device is judged).

Inverse problem (synthesis): the probabilistic characteristics of the "input" and "output" functions are given; it is required to design an optimal device (find its parameters) that converts the given input function into an output function with the given characteristics. The solution of this problem requires, in addition to the apparatus of random functions, the involvement of other disciplines.

Introduction


The theory of random processes (random functions) is a branch of mathematical science that studies the patterns of random phenomena in the dynamics of their development.

At present, a large amount of literature has appeared that is directly devoted to the theory of queuing, the development of its mathematical aspects, as well as various areas of its application - military, medical, transport, trade, aviation, etc.

Queuing theory is based on probability theory and mathematical statistics. The initial development of the theory of queuing is associated with the name of the Danish scientist A.K. Erlang (1878-1929), with his writings on the design and operation of telephone exchanges.

Queuing theory is a field of applied mathematics that deals with the analysis of processes in production, service, and control systems in which homogeneous events are repeated many times, for example, in consumer services enterprises; in systems for receiving, processing and transmitting information; automatic production lines, etc. A great contribution to the development of this theory was made by Russian mathematicians A.Ya. Khinchin, B.V. Gnedenko, A.N. Kolmogorov, E.S. Wentzel and others.

The subject of queuing theory is to establish relationships between the nature of the flow of applications, the number of service channels, the performance of an individual channel and efficient service in order to find the best ways to control these processes. The tasks of the queuing theory are of an optimization nature and ultimately include the economic aspect of determining such a variant of the system, which will provide a minimum of total costs from waiting for service, loss of time and resources for service, and from downtime of service channels.

In commercial activity, the application of queuing theory has not yet become as widespread as desired.

This is mainly due to the difficulty of setting the problems, the need for a deep understanding of the content of commercial activity, and the lack of reliable and accurate tools for calculating the various consequences of managerial decisions in commercial activity.


1. Definition of a random process and its characteristics


A random process X(t) is a process whose value for any value of the argument t is a random variable.

In other words, a random process is a function that, as a result of a trial, can take one or another specific form, unknown in advance. For a fixed t = t0, X(t0) is an ordinary random variable, i.e., a cross section of the random process at the moment t0.

A realization of a random process X(t, w) is the non-random function x(t) into which the random process X(t) turns as a result of a trial (for a fixed w), i.e., the specific form taken by the random process X(t), its trajectory.

Thus, the random process X(t, w) combines the features of a random variable and a function. If we fix the value of the argument t, the random process turns into an ordinary random variable; if we fix w, then as a result of each trial it turns into an ordinary non-random function.

Like a random variable, a random process can be described by numerical characteristics.

The mathematical expectation of a random process X(t) is the non-random function a_x(t) which, for any value of the variable t, equals the mathematical expectation of the corresponding section of the random process X(t), i.e., a_x(t) = M[X(t)].

The variance of a random process X(t) is the non-random function D_x(t) which, for any value of the variable t, equals the variance of the corresponding section of the random process X(t), i.e., D_x(t) = D[X(t)].

The standard deviation of a random process X(t) is the arithmetic value of the square root of its variance, i.e., σ_x(t) = √(D_x(t)).

The mathematical expectation of a random process characterizes the average trajectory of all its possible realizations, and its variance or standard deviation characterizes the spread of the realizations relative to the average trajectory.

The correlation function of a random process X(t) is the non-random function K_x(t1, t2) of two variables t1 and t2, which for each pair of values t1 and t2 equals the covariance of the corresponding sections X(t1) and X(t2) of the random process:

K_x(t1, t2) = M[(X(t1) − a_x(t1))(X(t2) − a_x(t2))].

The normalized correlation function of a random process X(t) is the function ρ_x(t1, t2) = K_x(t1, t2) / (σ_x(t1) σ_x(t2)).
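These definitions translate directly into ensemble averages over simulated realizations; a hedged sketch (the process X(t) = V·sin t + W and the distributions of V, W are invented for the illustration):

    import numpy as np

    rng = np.random.default_rng(6)

    t = np.linspace(0.0, 2.0 * np.pi, 100)
    n = 50_000
    V = rng.normal(1.0, 0.5, size=(n, 1))      # assumed random amplitude
    W = rng.normal(0.0, 0.2, size=(n, 1))      # assumed random offset
    X = V * np.sin(t)[None, :] + W             # n realizations

    a_x = X.mean(axis=0)                       # estimate of a_x(t)
    D_x = X.var(axis=0)                        # estimate of D_x(t)
    sigma_x = np.sqrt(D_x)                     # standard deviation

    Xc = X - a_x[None, :]
    K = (Xc.T @ Xc) / n                        # correlation function K_x(t1, t2)
    rho = K / np.outer(sigma_x, sigma_x)       # normalized correlation function
    print("max |rho| =", np.abs(rho).max())    # equals 1 on the diagonal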

Random processes can be classified according to whether the states of the system in which they occur change smoothly or abruptly, whether the set of these states is finite (countable) or infinite, and so on. Among random processes, a special place belongs to the Markov random process. But first let us get acquainted with the basic concepts of queuing theory.


2. Basic concepts of queuing theory


In practice, one often encounters systems designed for repeated use in solving the same type of problem. The processes that arise in this case are called service processes, and the systems are called queuing systems (QS). Examples of such systems are telephone systems, repair shops, computer complexes, ticket offices, shops, hairdressers, and the like.

Each QS consists of a certain number of service units (instruments, devices, points, stations), which we will call service channels. Channels can be communication lines, working points, computers, sellers, etc. According to the number of channels, QS are divided into single-channel and multi-channel.

Applications usually arrive at a QS not regularly but randomly, forming a so-called random flow of applications (requirements). The servicing of a request, generally speaking, also continues for some random time. The random nature of the flow of applications and of the service times leads to the QS being loaded unevenly: in some periods a very large number of applications accumulate (they either join the queue or leave the QS unserved), while in other periods the QS operates under-loaded or stands idle.

The subject of queuing theory is the construction of mathematical models that relate the given operating conditions of the QS (the number of channels, their performance, the nature of the request flow, etc.) with the QS efficiency indicators that describe its ability to cope with the flow of requests.

The following are used as performance indicators of the QS: the average number of applications served per unit of time; the average number of applications in the queue; average waiting time for service; probability of denial of service without waiting; the probability that the number of requests in the queue will exceed a certain value, etc.

QS are divided into two main types (classes): QS with failures and QS with waiting (queue). In a QS with failures, a request that arrives at a moment when all channels are busy is refused, leaves the QS, and does not participate in the further service process (for example, a request for a telephone conversation at a moment when all channels are busy is refused and leaves the QS unserved). In a QS with waiting, a request that arrives at a moment when all channels are busy does not leave but joins the queue for service.

QS with waiting are divided into different types depending on how the queue is organized: with a limited or unlimited queue length, with a limited waiting time, etc.


3. The concept of a Markov random process


The process of operation of a QS is a random process.

A process is called a process with discrete states if its possible states S1, S2, S3,… can be listed in advance and the transitions of the system from state to state occur instantly (by a jump). A process is called a process with continuous time if the moments of the possible transitions of the system from state to state are not fixed in advance but are random.

The process of operation of a QS is a random process with discrete states and continuous time. This means that the state of the QS changes abruptly at the random moments when certain events occur (for example, the arrival of a new request or the completion of a service).

Mathematical analysis of the operation of a QS is greatly simplified if the process of this operation is Markov. A random process is called a Markov process, or a random process without aftereffect, if for any moment of time t0 the probabilistic characteristics of the process in the future depend only on its state at the moment t0 and do not depend on when and how the system came to this state.

An example of a Markov process: the system S is a taxi meter. The state of the system at the moment t is characterized by the number of kilometers (tenths of kilometers) traveled by the car up to that moment. Let the meter show S0 at the moment t0. The probability that at a moment t > t0 the meter will show one or another number of kilometers (more precisely, the corresponding number of rubles) S1 depends on S0 but does not depend on when and how the meter readings changed before the moment t0.

Many processes can be considered approximately Markov. For example, the process of playing chess: the system S is a group of chess pieces, and the state of the system is characterized by the number of the opponent's pieces remaining on the board at the moment t0. The probability that at a moment t > t0 the material advantage will be on the side of one of the opponents depends primarily on the state of the system at the moment t0, and not on when and in what sequence pieces disappeared from the board before the moment t0.

In some cases, the prehistory of the processes under consideration can simply be neglected and Markov models can be used to study them.

When analyzing random processes with discrete states, it is convenient to use a geometric scheme - the so-called state graph. Usually, system states are represented by rectangles (circles), and possible transitions from state to state - by arrows (oriented arcs), connecting states.

For a mathematical description of a Markov random process with discrete states and continuous time occurring in a QS, let us get acquainted with one of the important concepts of probability theory: the concept of a stream of events.


4. Event streams


The flow of events is understood as a sequence of homogeneous events following one after another at random moments of time (for example, a flow of calls at a telephone exchange, a flow of computer failures, a flow of customers, etc.).

The flow is characterized by its intensity λ, that is, the frequency of occurrence of events, or the average number of events entering the QS per unit time.

A stream of events is called regular if events follow one after another at regular intervals. For example, the flow of products on an assembly line (at a constant speed) is regular.

A stream of events is called stationary if its probabilistic characteristics do not depend on time. In particular, the intensity of a stationary flow is constant: λ(t) = λ = const. For example, the flow of cars on a city avenue is not stationary over the day as a whole, but it can be considered stationary at a certain time of day, say during rush hour. In this case, the actual number of cars passing per unit time (for example, every minute) may vary markedly, but their average number is constant and does not depend on time.

A stream of events is called a stream without aftereffect if, for any two non-overlapping time intervals T1 and T2, the number of events falling on one of them does not depend on the number of events falling on the other. For example, the flow of passengers entering the subway has practically no aftereffect. And, say, the flow of customers leaving a counter with their purchases does have an aftereffect (if only because the time interval between successive customers cannot be less than the minimum service time for each of them).

A stream of events is called ordinary if the probability of two or more events falling on a small (elementary) time interval Δt is negligible compared with the probability of exactly one event falling on it. In other words, a stream of events is ordinary if events appear in it one by one rather than in groups. For example, the flow of trains approaching a station is ordinary, but the flow of cars (wagons) is not.

A stream of events is called the simplest (or stationary Poisson) stream if it is simultaneously stationary, ordinary, and without aftereffect. The name "simplest" is explained by the fact that a QS with the simplest flows has the simplest mathematical description. A regular stream is not the simplest, since it possesses aftereffect: the moments of occurrence of events in such a stream are rigidly fixed.

The simplest flow arises as a limiting flow in the theory of random processes just as naturally as the normal distribution arises in probability theory as the limit for a sum of random variables: when a sufficiently large number n of independent, stationary, and ordinary flows comparable in intensity λi (i = 1, 2,…, n) are superimposed (superposed), the resulting flow is close to the simplest, with intensity λ equal to the sum of the intensities of the incoming flows:

λ = Σ_{i=1}^{n} λi.

Consider the binomial distribution law

P_n(m) = C(n, m) p^m (1 − p)^(n−m)

with parameters n and p. As n → ∞ and p → 0 with np = a = const, the binomial distribution tends to the Poisson distribution with the parameter a = λt:

P_m(t) = ((λt)^m / m!) e^(−λt),

for which the mathematical expectation of the random variable equals its variance: M = D = λt.

In particular, the probability that no event will occur during the time t (m = 0) equals

P_0(t) = e^(−λt).

The distribution given by the probability density f(t) = λe^(−λt) (t ≥ 0), or by the distribution function F(t) = 1 − e^(−λt), is exponential. Thus, the time interval T between two adjacent events of the simplest flow has an exponential distribution, for which the mathematical expectation equals the standard deviation of the random variable:

M(T) = σ(T) = 1/λ,

and, conversely, the intensity of the flow is λ = 1/M(T).

The most important property of the exponential distribution (inherent only in the exponential distribution) is the following: if a time interval distributed according to the exponential law has already lasted for some time t, this does not affect the distribution law of the remaining part of the interval (T − t): it will be the same as the distribution law of the entire interval T.

In other words, for the time interval T between two successive neighboring events of a flow having an exponential distribution, any information about how long this interval has already lasted does not affect the distribution law of the remaining part. This property of the exponential law is, in essence, another formulation of the "absence of aftereffect", the basic property of the simplest flow.

For the simplest flow with intensity λ, the probability that at least one event of the flow falls on an elementary (small) time interval Δt equals

P ≈ 1 − e^(−λΔt) ≈ λΔt.

(This approximate formula, obtained by replacing the function e^(−λΔt) with the first two terms of its series expansion in powers of Δt, is the more accurate, the smaller Δt.)
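A numerical check of these properties of the simplest flow (λ = 3 is an arbitrary assumed intensity):

    import numpy as np

    rng = np.random.default_rng(7)

    lam = 3.0                                     # flow intensity (assumed)
    T = rng.exponential(1.0 / lam, size=1_000_000)

    print("M[T] =", T.mean(), "  1/lam =", 1.0 / lam)
    print("sigma[T] =", T.std(), " (equals M[T] for the exponential law)")

    # no aftereffect: P(T > t + s | T > t) = P(T > s)
    t, s = 0.4, 0.25
    print("P(T > t+s | T > t) =", (T[T > t] > t + s).mean())
    print("P(T > s)           =", (T > s).mean())

    # P(at least one event in a small dt) ~ lam*dt
    dt = 0.01
    print("1 - exp(-lam*dt) =", 1.0 - np.exp(-lam * dt), "  lam*dt =", lam * dt)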


5. Kolmogorov's equations. Limit probabilities of states


The corresponding state graph of the process is shown in the figure for the problem. We will assume that all transitions of the system from the state Si to Sj occur under the influence of the simplest flows of events with intensities λij (i, j = 0, 1, 2, 3); thus, the transition of the system from the state S0 to S1 occurs under the influence of the flow of failures of the first node, and the reverse transition from the state S1 to S0 under the influence of the flow of "repair completions" of the first node, etc.

The state graph of a system with the intensities marked on the arrows will be called labeled (see the figure above). The system S under consideration has four possible states: S0, S1, S2, S3. The probability of the i-th state is the probability pi(t) that at the moment t the system is in the state Si. Obviously, for any moment t the sum of the probabilities of all states equals one:

p0(t) + p1(t) + p2(t) + p3(t) = 1.

Let us consider the system at the moment t and, choosing a small interval Δt, find the probability p0(t + Δt) that the system at the moment t + Δt will be in the state S0. This can happen in different ways.

1. The system at the moment t was in the state S0 with probability p0(t) and did not leave it during the time Δt.

The system can be brought out of this state (see the graph in the figure for the problem) by the total simplest flow with intensity λ01 + λ02, i.e., with probability approximately equal to (λ01 + λ02)Δt.

The probability that the system will not leave the state S0 equals 1 − (λ01 + λ02)Δt. The probability that the system will be in the state S0 and will not leave it during the time Δt is, by the probability multiplication theorem, p0(t)(1 − (λ01 + λ02)Δt).

2. At the moment t the system was in the state S1 or S2 with probability p1(t) (or p2(t)) and during the time Δt passed into the state S0.

Under the flow of intensity λ10 (respectively λ20) the system passes into the state S0 with probability approximately equal to λ10Δt (respectively λ20Δt). The probability that the system will be in the state S0 by this path equals p1(t)λ10Δt (or p2(t)λ20Δt).

Applying the probability addition theorem, we get

p0(t + Δt) = p0(t)(1 − (λ01 + λ02)Δt) + p1(t)λ10Δt + p2(t)λ20Δt.

Passing to the limit as Δt → 0 (the approximate equalities turn into exact ones), we obtain on the left side of the equation the derivative p0′(t):

p0′(t) = λ10 p1(t) + λ20 p2(t) − (λ01 + λ02) p0(t).

A first-order differential equation is obtained, i.e., an equation containing both the unknown function itself and its first-order derivative.

Arguing similarly for the other states of the system S, we obtain the system of Kolmogorov differential equations for the state probabilities:

p0′ = λ10 p1 + λ20 p2 − (λ01 + λ02) p0;
p1′ = λ01 p0 + λ31 p3 − (λ10 + λ13) p1;
p2′ = λ02 p0 + λ32 p3 − (λ20 + λ23) p2;
p3′ = λ13 p1 + λ23 p2 − (λ31 + λ32) p3.

Let us formulate the rule for composing the Kolmogorov equations. On the left side of each equation is the derivative of the probability of the i-th state. On the right side is the sum of the products of the probabilities of all states from which arrows lead into the given state by the intensities of the corresponding flows of events, minus the total intensity of all flows leading the system out of the given state, multiplied by the probability of the given (i-th) state.

In the system indicated above, the number of independent equations is one less than the total number of equations. Therefore, to solve the system it is necessary to add the normalization equation p0 + p1 + p2 + p3 = 1.

A feature of solving differential equations in general is that one must specify the so-called initial conditions, in this case the probabilities of the states of the system at the initial moment t = 0. If, for example, at t = 0 the system was in the state S0, the initial conditions are p0(0) = 1, p1(0) = p2(0) = p3(0) = 0.

The Kolmogorov equations make it possible to find all the probabilities of the states as functions of time. Of particular interest are the probabilities of the system pi(t) in the limiting stationary regime, i.e., as t → ∞, which are called the limiting (final) state probabilities.

In the theory of random processes, it is proved that if the number of states of the system is finite and from each of them it is possible (in a finite number of steps) to go to any other state, then there are limiting probabilities.

The limiting probability of the state Si has a clear meaning: it shows the average relative time the system spends in this state. For example, if the limiting probability of the state S0 is p0 = 0.5, this means that on average the system spends half the time in the state S0.

Since the limiting probabilities are constant, replacing their derivatives in the Kolmogorov equations by zero, we obtain a system of linear algebraic equations describing the stationary regime.
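For the four-state example above, the stationary system can be solved directly (the numerical intensities are assumptions for the illustration); one Kolmogorov equation is replaced by the normalization condition:

    import numpy as np

    # intensities of the labeled graph (illustrative values)
    lam01, lam02, lam13, lam23 = 1.0, 2.0, 2.0, 1.0   # failure flows
    lam10, lam20, lam31, lam32 = 2.0, 3.0, 3.0, 2.0   # repair flows

    # stationary Kolmogorov equations (derivatives set to zero); the last
    # row is the normalization p0 + p1 + p2 + p3 = 1
    A = np.array([
        [-(lam01 + lam02), lam10,             lam20,             0.0  ],
        [lam01,            -(lam10 + lam13),  0.0,               lam31],
        [lam02,            0.0,               -(lam20 + lam23),  lam32],
        [1.0,              1.0,               1.0,               1.0  ],
    ])
    b = np.array([0.0, 0.0, 0.0, 1.0])

    p = np.linalg.solve(A, b)
    print("limiting probabilities:", p)   # average share of time per state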

6. The processes of death and reproduction

In queuing theory, a special class of random processes is widespread: the so-called processes of death and reproduction. This name is associated with a number of biological problems where such a process serves as a mathematical model of changes in the size of biological populations.

Consider an ordered set of states of the system S0, S1, S2,…, Sk. Transitions can occur from any state only to states with neighboring numbers, i.e., from the state Sk transitions are possible only to the state Sk−1 or to the state Sk+1.

In accordance with the rule for composing such equations (the Kolmogorov equations), we obtain for the state S0:

p0′ = λ10 p1 − λ01 p0.


Conclusion


This abstract has set out the concepts that make up the elements of the theory of random processes and queuing: the random process, the Markov process, streams of events, and queuing systems.



