Stochastic model in economics. Deterministic and stochastic models

Modeling is one of the most important tools of modern life for anticipating the future, and this is not surprising: a well-built model can be remarkably accurate. In this article we look at what a deterministic model is.

General information

Deterministic system models have the feature that, if they are simple enough, they can be analyzed analytically. Otherwise, when a significant number of equations and variables is involved, computers can be used, and computer assistance, as a rule, comes down solely to solving the equations and finding answers. This may require changing the systems of equations and using a different discretization, which increases the risk of computational errors. All types of deterministic models share one property: knowledge of the parameters on a certain interval allows the dynamics of development beyond the known indicators to be determined completely.
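To make that defining property concrete, here is a minimal sketch (the names and numbers are invented for illustration): a deterministic recurrence whose whole trajectory is fixed by the initial state and the parameters, so repeated runs coincide exactly.

```python
def simulate(x0, growth_rate, steps):
    """Deterministic dynamics: the trajectory is fully fixed by x0 and growth_rate."""
    trajectory = [x0]
    for _ in range(steps):
        trajectory.append(trajectory[-1] * growth_rate)
    return trajectory

# Knowing the parameters on any interval determines the dynamics beyond it:
# two runs from the same state are identical.
run_a = simulate(100.0, 1.05, 10)
run_b = simulate(100.0, 1.05, 10)
```

Here the model is a simple 5% growth recurrence; any re-run reproduces the same numbers, which is exactly what a stochastic model would not do.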

Peculiarities

Factor modeling

References to factor modeling appear throughout the article, but we have not yet discussed what it is. Factor modeling means that the main quantities are singled out for quantitative comparison, and the model under study is then transformed accordingly.

If a rigidly deterministic model has more than two factors, it is called multifactorial. Its analysis can be carried out by various methods; one of them considers the tasks at hand from the point of view of pre-established, a priori models, the choice among which is made according to their content.

For a high-quality model, theoretical and experimental studies of the essence of the technological process and of its cause-and-effect relationships are needed. This is precisely the main advantage of the models under consideration: deterministic models allow accurate forecasting in many areas of our lives, and it is thanks to this accuracy and versatility that they have become so widespread.

Cybernetic deterministic models

They are of interest because of the analysis of transient processes that occur with any, even the most insignificant, change in the properties of the external environment. For simplicity and speed of calculation, the actual state of affairs is replaced by a simplified model; it is important that the model satisfy all the basic requirements.

The efficiency of an automatic control system and the effectiveness of its decisions depend on the unity of all the necessary parameters. At the same time, the following problem must be solved: the more information is collected, the higher the probability of error and the longer the processing time; but if data collection is limited, a less reliable result must be expected. It is therefore necessary to find a middle ground that yields information of sufficient accuracy without being complicated by unnecessary elements.

Multiplicative deterministic model

It is built by decomposing factors into a set of factors. As an example, consider the process of forming the volume of manufactured products (PP). It requires labor (RS), materials (M), and energy (E), so the factor PP can be divided into the set (RS; M; E). This reflects the multiplicative form of the factor system and the possibility of its decomposition. The following transformation methods can be used: expansion, formal decomposition, and lengthening. The first has found wide application in analysis; it can be used, for instance, to calculate the performance of an employee.

Lengthening replaces one factor with a combination of others while leaving the end result the same. An example of expansion was considered above, so only formal decomposition remains: it lengthens the denominator of the original factor model by replacing one or more parameters. Consider this example: we calculate the profitability of production by dividing the amount of profit by the amount of costs; instead of a single denominator, we then divide by the summed expenses for materials, personnel, taxes, and so on.
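The profitability example can be sketched in code. The cost items and figures below are hypothetical; the point is that the lengthened denominator leaves the value unchanged while exposing the influence of each expense.

```python
# Original factor model: profitability = profit / costs.
def profitability(profit, costs):
    return profit / costs

profit = 120.0
material, personnel, taxes = 300.0, 250.0, 50.0  # hypothetical cost items

# "Lengthening": the single denominator is replaced by a sum of cost factors.
r_short = profitability(profit, material + personnel + taxes)
r_long = profit / (material + personnel + taxes)
```

Both forms give the same number, but the lengthened form lets the analyst vary each cost item separately.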

Probabilities

Oh, if only everything always went exactly as planned! But it rarely does, so in practice deterministic and stochastic models are often used together. What can be said about the latter? Their peculiarity is that they also take various probabilities into account. Take, for example, the following: there are two states whose relations are very bad, and a third party must decide whether to invest in the enterprises of one of them; after all, if a war breaks out, profits will suffer greatly. Or consider building a plant in an area with high seismic activity: here there are natural factors that cannot be taken into account exactly, only approximately.
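The investment example can be given a minimal Monte Carlo sketch. All numbers (a 10% chance of conflict, hypothetical profit levels) and the name `expected_profit` are assumptions for illustration, not from the original text.

```python
import random

def expected_profit(n_trials, p_conflict, profit_peace, profit_war, seed=0):
    """Average simulated profit when a conflict wipes out gains with probability p_conflict."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        total += profit_war if rng.random() < p_conflict else profit_peace
    return total / n_trials

estimate = expected_profit(100_000, p_conflict=0.1,
                           profit_peace=100.0, profit_war=-50.0)
exact = 0.9 * 100.0 + 0.1 * (-50.0)  # analytic expectation of the same model
```

The simulated average approaches the analytic expectation as the number of trials grows, which is the averaging effect discussed later in the text.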

Conclusion

We have considered what models of deterministic analysis are. Alas, to understand them fully and put them into practice, one should study hard. The theoretical foundations are already in place, and simple examples were presented within this article; further on, it is better to follow a path of gradually more complex working material. You can simplify the task a little and start by learning software that can perform the appropriate simulation. But whatever the choice, it is still necessary to understand the basics and to be able to answer what, how, and why. Learn to start by choosing the right input data and the right actions; then the programs will be able to perform their tasks successfully.

The system models that we have talked about so far have been deterministic (definite): specifying the input action determined the output of the system unambiguously. However, this rarely happens in practice: the description of real systems is usually characterized by uncertainty. For example, for a static model, the uncertainty can be taken into account by writing, in place of relation (2.1), the relation

    y = F(x) + ε,

where ε is the error reduced to the system output.

The reasons for uncertainty are varied:

– errors and interference in measurements of system inputs and outputs (natural errors);

– the inaccuracy of the system model itself, which makes it necessary to artificially introduce an error into the model;

– incomplete information about system parameters, etc.

Among the various ways of clarifying and formalizing uncertainty, the most widespread is the stochastic (probabilistic) approach, in which uncertain quantities are considered random. The developed conceptual and computational apparatus of probability theory and mathematical statistics makes it possible to give specific recommendations on choosing the structure of a system and estimating its parameters. The classification of stochastic models of systems and of the methods for studying them is presented in Table 1.4. Conclusions and recommendations are based on the averaging effect: random deviations of the measurement results of a certain quantity from its expected value cancel each other out when summed, and the arithmetic mean of a large number of measurements turns out to be close to the expected value. Mathematical formulations of this effect are given by the law of large numbers and the central limit theorem. The law of large numbers says that if x1, ..., xN are random variables with mathematical expectation (mean) m_x and variance σ_x², then

    (x1 + ... + xN) / N ≈ m_x    (2.32)

for large enough N. This indicates the fundamental possibility of an arbitrarily accurate estimate of m_x from measurements. The central limit theorem, which refines (2.32), states that

    (x1 + ... + xN) / N − m_x ≈ (σ_x / √N) ζ,    (2.33)

where ζ is a standard normally distributed random variable.

Since the distribution of the quantity ζ is well known and tabulated (for example, it is known that P(|ζ| < 2) ≈ 0.95), relation (2.33) allows the estimation error to be calculated. Suppose, for example, we need to find the number of measurements for which the error in estimating the mathematical expectation is, with probability 0.95, less than 0.01, if the variance of each measurement is 0.25. From (2.33) we find that the inequality 2·√(0.25 / N) < 0.01 must hold, whence N > 10 000.
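The arithmetic of this example can be checked numerically. The sketch below assumes measurements that are normally distributed with the stated variance 0.25; it recomputes the required N from the two-sigma bound and then illustrates the resulting accuracy empirically.

```python
import math
import random

sigma = math.sqrt(0.25)                          # standard deviation of one measurement
required_n = math.ceil((2 * sigma / 0.01) ** 2)  # from 2 * sigma / sqrt(N) < 0.01

# Empirical illustration: the mean of required_n simulated measurements
# (true expectation 0) lands very close to the true value.
rng = random.Random(1)
mean_estimate = sum(rng.gauss(0.0, sigma) for _ in range(required_n)) / required_n
```

With N = 10 000 measurements the standard error of the mean is 0.005, so deviations beyond 0.01 occur only in roughly 5% of runs.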

Of course, formulations (2.32) and (2.33) can be given a more rigorous form, and this is easily done using the concepts of probabilistic convergence. Difficulties arise when one tries to verify the conditions of these rigorous assertions. For example, the law of large numbers and the central limit theorem require independence of the individual measurements (realizations) of the random variable and finiteness of its variance; if these conditions are violated, the conclusions may be violated as well. Thus, if all the measurements coincide, x1 = x2 = ... = xN, then, although all the other conditions are satisfied, there can be no question of averaging. Another example: the law of large numbers does not hold if the random variables are distributed according to the Cauchy law, with distribution density p(x) = 1/(π(1 + x²)), which has neither a finite mathematical expectation nor a finite variance. Yet such a law does occur in life: it describes, for example, the illumination of points on a straight shore by a lighthouse that is located at sea (on a ship) and switched on at random moments of time.
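The Cauchy caveat can be illustrated with a short simulation; drawing a standard Cauchy variable as the tangent of a uniform angle is a standard trick. The sample median still concentrates near the center, while the sample mean is itself Cauchy-distributed and therefore unreliable.

```python
import math
import random
import statistics

def cauchy_sample(rng):
    # Standard Cauchy draw: tangent of a uniformly distributed angle.
    return math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(42)
samples = [cauchy_sample(rng) for _ in range(10_000)]

sample_median = statistics.median(samples)  # stable: clusters near 0
sample_mean = statistics.fmean(samples)     # unstable: no law of large numbers
```

No matter how many Cauchy samples are averaged, the mean does not settle down, which is precisely the failure of the law of large numbers described above.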

But it is even more difficult to verify the validity of the very use of the term "random". What is a random variable, a random event, and so on? It is often said that an event A is random if, as a result of the experiment, it may occur (with probability p) or not occur (with probability 1 − p). Everything, however, is not so simple. The concept of probability can be connected with the results of experiments only through the frequency of occurrence of the event in a certain row (series) of experiments: ν_A = N_A / N, where N_A is the number of experiments in which the event occurred and N is the total number of experiments. If, for sufficiently large N, the numbers ν_A approach some constant number p_A:

    ν_A → p_A,    (2.34)

then the event A can be called random, and the number p_A its probability. The frequencies observed in different series of experiments should be close to one another (this property is called statistical stability, or homogeneity). The same applies to the concept of a random variable: a quantity ξ is random if the events {a < ξ < b} are random for any numbers a and b. The frequencies of occurrence of such events in long series of experiments should cluster around some constant values.
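The requirement of statistical stability can be sketched as follows: frequencies of one and the same event in several long series of experiments should cluster around its probability. The event (a coin-like trial with assumed probability 0.3) is invented for illustration.

```python
import random

def frequency(n_experiments, p, seed):
    """Frequency nu_A = N_A / N of an event with probability p in one series."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_experiments) if rng.random() < p)
    return hits / n_experiments

p = 0.3
series = [frequency(10_000, p, seed) for seed in range(5)]  # five long series
spread = max(series) - min(series)  # small spread = statistical stability
```

If the spread between series stayed large as N grew, the stochastic approach described in the text would not be applicable to the experiment.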

So, for the applicability of the stochastic approach, the following requirements must be met:

1) the mass character of the experiments, i.e. a sufficiently large number of them;

2) the repeatability of the conditions of the experiments, justifying the comparison of the results of different experiments;

3) statistical stability.

The stochastic approach obviously cannot be applied to single experiments: expressions like "the probability that it will rain tomorrow" or "Zenit will win the cup with probability 0.8" are then meaningless. But even when experiments are numerous and repeatable, statistical stability may still be absent, and checking for it is not an easy task. The known estimates of the deviation of frequency from probability are based on the central limit theorem or on Chebyshev's inequality and require additional hypotheses about the independence or weak dependence of the measurements. Experimental verification of the independence condition is more difficult still, since it requires additional experiments.

The methodology and practical recipes for applying probability theory are described in more detail in the instructive book by V. N. Tutubalin, an idea of which is given by the following quotes:

“It is extremely important to eradicate the delusion, sometimes found among engineers and natural scientists insufficiently familiar with probability theory, that the result of any experiment can be considered a random variable. In especially severe cases this is accompanied by belief in the normal distribution law, and if the random variables themselves are not normal, then it is believed that their logarithms are normal.”

“According to modern concepts, the scope of probabilistic methods is limited to phenomena characterized by statistical stability. However, testing for statistical stability is difficult and always incomplete, and moreover it often gives a negative conclusion. As a result, in entire fields of knowledge (for example, in geology) an approach has become the norm in which statistical stability is not checked at all, which inevitably leads to serious errors. In addition, the propaganda of cybernetics undertaken by our leading scientists has given (in some cases!) a somewhat unexpected result: it is now believed that only a machine (and not a person) is capable of obtaining objective scientific results.”

In such circumstances, it is the duty of every teacher to propagate, again and again, the old truth that Peter I tried (unsuccessfully) to instill in Russian merchants: that one must trade honestly, without deceit, since in the end that is more profitable for oneself.

How to build a system model if there is uncertainty in the problem, but the stochastic approach is not applicable? One of the alternative approaches based on fuzzy set theory is briefly outlined below.


We recall that a relation R (a relation between x and y) is a subset of the set X × Y, i.e. some set of pairs R = {(x, y)}, where x ∈ X, y ∈ Y. For example, a functional dependence y = f(x) can be represented as a relation between the sets X and Y that includes the pairs (x, y) for which y = f(x).

In the simplest case, R is, for example, the identity relation: (x, y) ∈ R if x = y.

Examples 12-15 in Table 1.1 were invented in 1988 by M. Koroteev, a student of class 86 of school 292.

A mathematician will of course notice here that the minimum in (1.4), strictly speaking, may not be attained, and that in the formulation of (1.4) min should be replaced by inf ("infimum", the greatest lower bound of the set). However, the situation will not change because of this: formalization in this case does not reflect the essence of the problem, i.e. it is carried out incorrectly. In what follows, in order not to "frighten" the engineer, we use the notation min, max, bearing in mind that, where necessary, they should be replaced by the more general inf, sup.

Here the term "structure" is used in a somewhat narrower sense than in Sec. 1.1 and means the composition of the subsystems in the system and the types of connections between them.

A graph is a pair (G, R), where G = {g1, ..., gn} is a finite set of vertices and R is a binary relation on G. If (gi, gj) ∈ R holds if and only if (gj, gi) ∈ R, the graph is said to be undirected; otherwise, directed. The pairs (gi, gj) are called arcs (edges), and the elements of the set G the vertices of the graph.
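This footnote's definition translates directly into code: a graph (G, R) is undirected exactly when the relation R is symmetric. The tiny vertex set and relations below are illustrative.

```python
def is_undirected(relation):
    """A graph (G, R) is undirected iff the relation R is symmetric."""
    return all((b, a) in relation for (a, b) in relation)

G = {"g1", "g2", "g3"}
R_directed = {("g1", "g2"), ("g2", "g3")}    # arcs: symmetry fails
R_undirected = {("g1", "g2"), ("g2", "g1")}  # edges: symmetric relation
```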

That is, algebraic or transcendental.

Strictly speaking, a countable set is an idealization that cannot be implemented in practice because of the finite size of technical systems and the limits of human perception. Such idealized models (for example, the set of natural numbers N = {1, 2, ...}) make sense for sets that are finite but with a previously unlimited (or unknown) number of elements.

Formally, the concept of an operation is a special case of the concept of a relation between elements of sets. For example, the operation of adding two numbers defines a 3-place (ternary) relation R: a triple of numbers (x, y, z) belongs to the relation R (we write (x, y, z) ∈ R) if z = x + y.

A complex number, the argument of the polynomials A(·), B(·).

This assumption is often fulfilled in practice.

If the variance σ_x² is unknown, then in (2.33) it should be replaced by its estimate s² = (1/(N − 1)) Σ (xj − x̄)², where x̄ is the arithmetic mean of the measurements. In this case the quantity will be distributed not normally but according to Student's law, which for large N is practically indistinguishable from the normal distribution.

It is easy to see that (2.34) is a special case of (2.32) if we take xj = 1 when the event A occurred in the j-th experiment and xj = 0 otherwise. In this case m_x = p_A.

And today you can add "... and computer science" (author's note).

Mathematical models in economics and programming

1. Deterministic and probabilistic mathematical models in economics. Advantages and disadvantages

Methods for studying economic processes are based on the use of mathematical (deterministic and probabilistic) models representing the process, system, or activity being studied. Such models give a quantitative description of the problem and serve as the basis for managerial decisions when searching for the best option. How justified are these decisions? Are they the best possible? Have all the factors determining the optimal solution been taken into account and weighed? What criterion establishes that a given solution is really the best? Such is the range of questions of great importance to production managers, and answers to them can be found by the methods of operations research [Chesnokov S. V. Deterministic analysis of socio-economic data. Moscow: Nauka, 1982, p. 45].

One of the principles of forming a control system is the method of cybernetic (mathematical) models. Mathematical modeling occupies an intermediate position between experiment and theory: there is no need to build a real physical model of the system, since it is replaced by a mathematical model. The peculiarity of forming a control system lies in the probabilistic, statistical approach to control processes. In cybernetics it is accepted that any control process is subject to random, perturbing influences. Thus the production process is influenced by a large number of factors that cannot be taken into account in a deterministic way, so it is considered that the production process is affected by random signals. Because of this, the planning of an enterprise's work can only be probabilistic.

For these reasons, when talking about mathematical modeling of economic processes, it is often probabilistic models that are meant.

Let us describe each of the types of mathematical models.

Deterministic mathematical models are characterized by the fact that they describe the relationship of certain factors to a performance indicator as a functional dependence, i.e. in deterministic models the performance indicator is presented as a product, quotient, algebraic sum of factors, or any other function. This type of mathematical model is the most common because, being quite simple to use (compared with probabilistic models), it allows one to understand the logic of the action of the main factors in the development of the economic process, to quantify their influence, and to understand which factors, and in what proportion, it is possible and expedient to change in order to increase production efficiency.

Probabilistic mathematical models fundamentally differ from deterministic ones in that in probabilistic models the relationship between factors and the resulting feature is probabilistic (stochastic): with a functional dependence (deterministic models), the same state of the factors corresponds to the only state of the resulting feature, while in probabilistic models one and the same state of factors corresponds to a whole set of states of the resulting attribute [Tolstova Yu. N. Logic of mathematical analysis of economic processes. - M.: Nauka, 2001, p. 32-33].

The advantage of deterministic models is their ease of use. The main drawback is the low adequacy of reality, since, as noted above, most economic processes are probabilistic in nature.

The advantage of probabilistic models is that, as a rule, they are more consistent with reality (more adequate) than deterministic ones. However, the disadvantage of probabilistic models is the complexity and laboriousness of their application, so in many situations it is sufficient to limit ourselves to deterministic models.
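The contrast between the two model types can be sketched in a few lines; the linear dependence and the noise level below are assumptions for illustration only. A deterministic model maps one factor state to one outcome; a probabilistic model maps the same state to a spread of outcomes.

```python
import random

def deterministic_output(factor):
    return 2.0 * factor + 1.0  # functional dependence: a single outcome

def stochastic_output(factor, rng):
    return 2.0 * factor + 1.0 + rng.gauss(0.0, 0.5)  # same state, many outcomes

rng = random.Random(7)
det_outcomes = {deterministic_output(3.0) for _ in range(100)}
stoch_outcomes = {stochastic_output(3.0, rng) for _ in range(100)}
```

Repeating the deterministic evaluation a hundred times yields a single outcome; the stochastic evaluation yields a whole set of outcomes for the same factor state, exactly as the definitions above describe.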

2. Statement of the linear programming problem on the example of the problem of food ration

The formulation of a linear programming problem, in the form of a proposal for drawing up an optimal transportation plan that minimizes the total mileage, was first given in the work of the Soviet economist A. N. Tolstoy in 1930.

Systematic studies of linear programming problems and the development of general methods for solving them were carried further in the works of the Russian mathematicians L. V. Kantorovich and V. S. Nemchinov and of other mathematicians and economists. Many works of foreign, above all American, scientists are also devoted to the methods of linear programming.

The task of linear programming is to maximize (minimize) a linear function

    f(x) = c1 x1 + c2 x2 + ... + cn xn

under the restrictions

    a11 x1 + a12 x2 + ... + a1n xn ≤ b1,
    ...
    am1 x1 + am2 x2 + ... + amn xn ≤ bm,    (*)

and all xj ≥ 0, j = 1, ..., n.

Comment. The inequalities can also have the opposite sense. By multiplying the corresponding inequalities by (−1), one can always obtain a system of the form (*).

If the number of variables in the constraint system and in the objective function of the problem's mathematical model is 2, the problem can be solved graphically.

So, we need to maximize the function f = c1 x1 + c2 x2 subject to the system of constraints.

Let us turn to one of the inequalities of the constraint system, say ai1 x1 + ai2 x2 ≤ bi. From a geometric point of view, all points satisfying this inequality either lie on the line ai1 x1 + ai2 x2 = bi or belong to one of the half-planes into which this line divides the plane. To find out which one, it suffices to check which half-plane contains some test point.

Remark 2. If the line does not pass through the origin, it is easiest to take the point (0; 0).

The non-negativity conditions x1 ≥ 0, x2 ≥ 0 also define half-planes, with the coordinate axes as boundary lines. We assume that the system of inequalities is consistent; then the half-planes, intersecting, form a common part, which is a convex set: the collection of points whose coordinates satisfy the system. This is the set of feasible solutions, and the set of these points is called the solution polygon. It can be a point, a ray, a polygon, or an unbounded polygonal region. The task of linear programming is thus to find the point of the solution polygon at which the objective function takes its maximum (minimum) value. This point exists when the solution polygon is non-empty and the objective function is bounded on it from above (from below); under these conditions the objective function attains its extreme value at one of the vertices of the solution polygon. To determine this vertex, we construct the straight line c1 x1 + c2 x2 = h (where h is some constant); most often the line c1 x1 + c2 x2 = 0 is taken. It remains to find the direction of motion of this line, which is determined by the gradient (anti-gradient) of the objective function: the gradient vector (c1; c2) is perpendicular to the lines c1 x1 + c2 x2 = h at every point, so the value of f increases as the line moves in the direction of the gradient (and decreases in the direction of the anti-gradient). Accordingly, we draw lines parallel to c1 x1 + c2 x2 = 0, moving in the direction of the gradient (anti-gradient).

We will continue these constructions until the line passes through the last vertex of the solution polygon. This point determines the optimal value.

So, finding a solution to a linear programming problem by a geometric method includes the following steps:

Construct the lines whose equations are obtained by replacing the inequality signs in the constraints with exact equality signs.

Find the half-planes defined by each of the constraints of the problem.

Find the solution polygon.

Construct the vector c = (c1; c2).

Construct the straight line c1 x1 + c2 x2 = 0.

Construct parallel lines in the direction of the gradient or anti-gradient, thereby finding the point at which the function takes its maximum or minimum value, or establishing that the function is unbounded from above (from below) on the admissible set.

Determine the coordinates of the maximum (minimum) point of the function and calculate the value of the objective function at this point.
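The geometric fact underlying these steps, namely that the optimum of a bounded two-variable problem is attained at a vertex of the solution polygon, can be sketched in code by enumerating the intersections of the boundary lines. The example problem (maximize x + 2y subject to x + y ≤ 4, x ≤ 3, x ≥ 0, y ≥ 0) is invented for illustration.

```python
from itertools import combinations

def solve_2var_lp(c, constraints):
    """constraints: list of (a1, a2, b) meaning a1*x + a2*y <= b.
    Returns (best_value, (x, y)) over feasible vertices, or None."""
    eps = 1e-9
    best = None
    for (a1, a2, b1), (a3, a4, b2) in combinations(constraints, 2):
        det = a1 * a4 - a2 * a3
        if abs(det) < eps:
            continue  # parallel boundary lines: no vertex here
        x = (b1 * a4 - b2 * a2) / det
        y = (a1 * b2 - a3 * b1) / det
        if all(a * x + b * y <= rhs + eps for a, b, rhs in constraints):
            value = c[0] * x + c[1] * y
            if best is None or value > best[0]:
                best = (value, (x, y))
    return best

# x + y <= 4, x <= 3, -x <= 0 (x >= 0), -y <= 0 (y >= 0)
constraints = [(1, 1, 4), (1, 0, 3), (-1, 0, 0), (0, -1, 0)]
best_value, best_point = solve_2var_lp((1, 2), constraints)
```

For this invented problem the optimum is attained at the vertex (0; 4) with objective value 8, which matches what the graphical construction would give.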

The problem of rational nutrition (the problem of diet)

Formulation of the problem

The farm fattens livestock for commercial purposes. For simplicity, let us assume that there are only four types of products: P1, P2, P3, P4; the unit cost of each product is C1, C2, C3, C4, respectively. From these products a diet must be composed which should contain proteins (at least b1 units), carbohydrates (at least b2 units), and fats (at least b3 units). For the products P1, P2, P3, P4, the content of proteins, carbohydrates, and fats (in units per unit of product) is known and given in the table, where aij (i = 1, 2, 3, 4; j = 1, 2, 3) are specific numbers; the first index indicates the number of the product, the second the number of the element (proteins, carbohydrates, fats).
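In the notation just introduced, the diet problem takes the standard linear-programming form sketched below, where xi denotes the amount of product Pi in the diet and ci its unit cost (written here as a generic formulation consistent with the table, not taken verbatim from the source):

```latex
\min_{x_1,\dots,x_4} \; C = \sum_{i=1}^{4} c_i x_i
\quad \text{subject to} \quad
\sum_{i=1}^{4} a_{ij} x_i \ge b_j, \;\; j = 1, 2, 3,
\qquad x_i \ge 0, \;\; i = 1, \dots, 4.
```

The three inequality constraints express the minimum required amounts of proteins, carbohydrates, and fats, and the objective is the total cost of the diet.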

Mathematical models in economics and programming

1. Deterministic and probabilistic mathematical models in economics. Advantages and disadvantages

Methods for studying economic processes are based on the use of mathematical - deterministic and probabilistic - models representing the process, system or type of activity being studied. Such models give a quantitative description of the problem and serve as the basis for making managerial decisions when looking for the best option. How justified are these decisions, are they the best possible, have all the factors that determine the optimal solution been taken into account and weighed, what is the criterion that allows you to determine that this solution is really the best - these are the range of questions that are of great importance for production managers, and the answer to which can be found using operations research methods [Chesnokov S. V. Deterministic analysis of socio-economic data. - M.: Nauka, 1982, p. 45].

One of the principles of formation of the control system is the method of cybernetic (mathematical) models. Mathematical modeling occupies an intermediate position between experiment and theory: there is no need to build a real physical model of the system, it will be replaced by a mathematical model. The peculiarity of the formation of the control system lies in the probabilistic, statistical approach to control processes. In cybernetics, it is accepted that any control process is subject to random, perturbing influences. So, the production process is influenced by a large number of factors, which cannot be taken into account in a deterministic way. Therefore, it is considered that the production process is affected by random signals. Because of this, planning the work of an enterprise can only be probabilistic.

For these reasons, when talking about mathematical modeling of economic processes, it is often probabilistic models that are meant.

Let us describe each of the types of mathematical models.

Deterministic mathematical models are characterized by the fact that they describe the relationship of certain factors with the performance indicator as a functional dependence, i.e. in deterministic models, the performance indicator of the model is presented as a product, quotient, algebraic sum of factors, or as any other function. This type of mathematical models is the most common, because, being quite simple to use (compared to probabilistic models), it allows you to understand the logic of the action of the main factors in the development of the economic process, quantify their influence, understand which factors and in what proportion it is possible and expedient to change to increase production efficiency.

Probabilistic mathematical models fundamentally differ from deterministic ones in that in probabilistic models the relationship between factors and the resulting feature is probabilistic (stochastic): with a functional dependence (deterministic models), the same state of the factors corresponds to the only state of the resulting feature, while in probabilistic models one and the same state of factors corresponds to a whole set of states of the resulting attribute [Tolstova Yu. N. Logic of mathematical analysis of economic processes. - M.: Nauka, 2001, p. 32-33].

The advantage of deterministic models is their ease of use. The main drawback is the low adequacy of reality, since, as noted above, most economic processes are probabilistic in nature.

The advantage of probabilistic models is that, as a rule, they are more consistent with reality (more adequate) than deterministic ones. However, the disadvantage of probabilistic models is the complexity and laboriousness of their application, so in many situations it is sufficient to limit ourselves to deterministic models.

2. Statement of the linear programming problem on the example of the problem of food ration

For the first time, the formulation of a linear programming problem in the form of a proposal for the preparation of an optimal transportation plan; allowing to minimize the total mileage, was given in the work of the Soviet economist A. N. Tolstoy in 1930.

Systematic studies of linear programming problems and the development of general methods for solving them were further developed in the works of Russian mathematicians L. V. Kantorovich, V. S. Nemchinov and other mathematicians and economists. Also, many works of foreign and, above all, American scientists are devoted to the methods of linear programming.

The task of linear programming is to maximize (minimize) a linear function.

under restrictions

and all

Comment. Inequalities can also have the opposite meaning. By multiplying the corresponding inequalities by (-1), one can always obtain a system of the form (*).

If the number of variables of the constraint system and the objective function in the mathematical model of the problem is 2, then it can be solved graphically.

So, it is necessary to maximize the function to a satisfying system of constraints.

Let us turn to one of the inequalities of the system of constraints.

From a geometric point of view, all points satisfying this inequality must either lie on the line or belong to one of the half-planes into which the plane of this line is divided. In order to find out, you need to check which of them contains a dot ().

Remark 2. If , then it is easier to take the point (0;0).

The non-negativity conditions also define half-planes, respectively, with boundary lines. We assume that the system of inequalities is consistent, then the half-planes, intersecting, form a common part, which is a convex set and is a collection of points whose coordinates are the solution to this system - this is the set of feasible solutions. The set of these points (solutions) is called the solution polygon. It can be a point, a ray, a polygon, an unbounded polygonal area. Thus, the task of linear programming is to find such a point of the solution polygon at which the objective function takes the maximum (minimum) value. This point exists when the solution polygon is not empty and the objective function on it is bounded from above (from below). Under these conditions, at one of the vertices of the decision polygon, the objective function takes the maximum value. To determine this vertex, we construct a straight line (where h is some constant). Most often, a straight line is taken. It remains to find out the direction of motion of this straight line. This direction is determined by the gradient (anti-gradient) of the objective function.

The vector at each point is perpendicular to the line, so the value of f will increase as the line moves in the direction of the gradient (decrease in the direction of the anti-gradient). To do this, we draw straight lines parallel to the straight line, moving in the direction of the gradient (anti-gradient).

We will continue these constructions until the line passes through the last vertex of the solution polygon. This point determines the optimal value.

So, solving a linear programming problem by the geometric method consists of the following steps:

  • Construct the lines whose equations are obtained by replacing the inequality signs in the constraints with exact equalities.
  • Find the half-planes defined by each constraint of the problem.
  • Find the solution polygon.
  • Construct the gradient vector.
  • Construct the level line.
  • Move lines parallel to the level line in the direction of the gradient or anti-gradient, either finding the point at which the function takes its maximum or minimum value, or establishing that the function is unbounded from above (from below) on the feasible set.
  • Determine the coordinates of the maximum (minimum) point and calculate the value of the objective function at that point.
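As a minimal sketch of these steps, the pure-Python code below enumerates the candidate vertices of the solution polygon for a small made-up problem (maximize f = 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x ≥ 0, y ≥ 0; all numbers are assumptions for illustration, not taken from the text). It intersects boundary lines pairwise and keeps the feasible intersection points:

```python
from itertools import combinations

# Constraints in the form a*x + b*y <= c, non-negativity included.
# These numbers are a hypothetical example problem.
constraints = [
    (1.0, 1.0, 4.0),    # x + y <= 4
    (1.0, 3.0, 6.0),    # x + 3y <= 6
    (-1.0, 0.0, 0.0),   # -x <= 0, i.e. x >= 0
    (0.0, -1.0, 0.0),   # -y <= 0, i.e. y >= 0
]

def feasible(x, y, eps=1e-9):
    """A point is feasible if it satisfies every constraint."""
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

# Each vertex of the solution polygon lies on the intersection of two
# boundary lines a1*x + b1*y = c1 and a2*x + b2*y = c2 (Cramer's rule).
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:          # parallel boundary lines, no vertex
        continue
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

# A bounded linear objective attains its extremum at a vertex,
# so checking the finitely many vertices suffices.
best = max(vertices, key=lambda v: 3 * v[0] + 2 * v[1])
print("optimum:", best, "value:", 3 * best[0] + 2 * best[1])
```

Checking the finitely many vertices replaces the graphical sweep of parallel level lines: the last vertex the moving line touches is exactly the vertex with the largest objective value.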

The problem of rational nutrition (the problem of diet)

Formulation of the problem

The farm raises livestock for fattening and sale. For simplicity, let's assume that there are only four types of products: P1, P2, P3, P4, with unit costs C1, C2, C3, C4, respectively. From these products a diet must be composed that contains: proteins, at least b1 units; carbohydrates, at least b2 units; fats, at least b3 units. For products P1, P2, P3, P4, the content of proteins, carbohydrates, and fats (in units per unit of product) is known and given in the table as aij (i = 1, 2, 3, 4; j = 1, 2, 3), where the first index indicates the number of the product and the second the number of the element (proteins, carbohydrates, fats).
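A sketch of how this formulation can be encoded, with hypothetical numbers standing in for the table's costs Ci, contents aij, and minimums bj (none of these values come from the text):

```python
# Hypothetical data for the diet problem; the real table would supply
# the actual unit costs, nutrient contents, and nutrient minimums.
costs = [4.0, 6.0, 1.5, 3.0]      # C1..C4: cost per unit of P1..P4
content = [                        # aij: row i = product, col j = element
    [2.0, 1.0, 0.0],               # P1: proteins, carbohydrates, fats
    [0.5, 3.0, 1.0],               # P2
    [1.0, 1.0, 0.5],               # P3
    [0.0, 2.0, 2.0],               # P4
]
minimums = [8.0, 10.0, 3.0]       # b1..b3: required units of each element

def total_cost(x):
    """Cost of a diet x = (x1..x4), units bought of each product."""
    return sum(c * xi for c, xi in zip(costs, x))

def is_admissible(x):
    """Check x >= 0 and that every nutrient minimum is met."""
    if any(xi < 0 for xi in x):
        return False
    for j, b in enumerate(minimums):
        if sum(content[i][j] * x[i] for i in range(len(x))) < b:
            return False
    return True
```

Any x for which is_admissible(x) is true is a feasible diet; the linear programming problem is then to find the admissible x that minimizes total_cost(x).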

January 23, 2017

The stochastic model describes situations of uncertainty; in other words, the process is characterized by some degree of randomness. The adjective "stochastic" itself comes from a Greek word meaning "to guess". Since uncertainty is a key characteristic of everyday life, such a model can describe anything.

However, each time we apply it, the result will be different, which is why deterministic models are used more often. Although they are not as close to the real state of affairs, they always give the same result and make the situation easier to understand, simplifying it by reducing it to a set of mathematical equations.

Main features

A stochastic model always includes one or more random variables. It seeks to reflect real life in all its manifestations. Unlike a deterministic model, a stochastic one does not aim to simplify everything and reduce it to known values. Uncertainty is therefore its key characteristic. Stochastic models are suitable for describing anything, but they all share the following common features:

  • Any stochastic model reflects all aspects of the problem for which it was created.
  • The outcome of each of the phenomena is uncertain. Therefore, the model includes probabilities. The correctness of the overall results depends on the accuracy of their calculation.
  • These probabilities can be used to predict or describe the processes themselves.

Deterministic and stochastic models

For some, life appears as a series of random events, for others - processes in which the cause determines the effect. In fact, it is characterized by uncertainty, but not always and not in everything. Therefore, it is sometimes difficult to find clear differences between stochastic and deterministic models. Probabilities are quite subjective.

For example, consider a coin toss. At first glance, tails comes up with a probability of 50%, so a stochastic model seems the obvious choice. In reality, however, much depends on the dexterity of the thrower's hands and how well the coin is balanced: given all the physics, the outcome is determined. But there are always parameters we do not know, and that is why a stochastic model is used in practice. In real life, the cause always determines the effect, yet some degree of uncertainty remains. The choice between deterministic and stochastic models depends on what we are willing to give up: simplicity of analysis or realism.


In chaos theory

Recently, the notion of which models count as stochastic has become even more blurred. This is due to the development of so-called chaos theory, which describes deterministic models that can give very different results after a slight change in the initial parameters. In effect, this builds uncertainty into the calculation, and many scientists have even admitted that such a model is already stochastic.

Lothar Breuer elegantly explained everything with the help of poetic images. He wrote: “A mountain stream, a beating heart, an epidemic of smallpox, a column of rising smoke - all this is an example of a dynamic phenomenon, which, as it seems, is sometimes characterized by chance. In reality, such processes are always subject to a certain order, which scientists and engineers are only just beginning to understand. This is the so-called deterministic chaos.” The new theory sounds very plausible, which is why many modern scientists are its supporters. However, it still remains little developed, and it is rather difficult to apply it in statistical calculations. Therefore, stochastic or deterministic models are often used.
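The sensitivity that makes deterministic chaos look random can be demonstrated with the logistic map, a standard textbook example (not taken from the article itself): a purely deterministic rule whose trajectories from almost identical starting points drift far apart.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): a deterministic rule that,
# for r = 4, behaves chaotically. The starting values below are arbitrary.
def trajectory(x0, r=4.0, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000)
b = trajectory(0.200001)          # initial condition shifted by only 1e-6
gap = max(abs(x - y) for x, y in zip(a, b))
print(f"largest gap over 30 steps: {gap:.3f}")
```

Although every step is computed by the same exact formula, the tiny initial difference is amplified until the two trajectories are effectively unrelated, which is why such deterministic systems are so hard to distinguish from random ones.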

Building

The construction of a stochastic mathematical model begins with the choice of the space of elementary outcomes: this is what statisticians call the list of possible results of the process or event being studied. The researcher then assigns a probability to each elementary outcome, usually on the basis of some established technique.

However, the probabilities are still a rather subjective parameter. Next, the researcher determines which events are most relevant to the problem and computes their probabilities.

Example

Consider the process of building the simplest stochastic model. Suppose we roll a die; if a "six" or a "one" comes up, we win ten dollars. Building a stochastic model in this case looks like this:

  • Define the space of elementary outcomes. The die has six faces, so a one, two, three, four, five, or six can come up.
  • The probability of each outcome is 1/6, no matter how many times we roll the die.
  • Now determine the outcomes of interest to us: rolling a face showing a "six" or a "one".
  • Finally, determine the probability of the event of interest. It is 1/3: summing the probabilities of the two elementary events of interest gives 1/6 + 1/6 = 2/6 = 1/3.
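The model above can be checked with a small Monte Carlo simulation. The sketch below mirrors the four steps; the trial count and random seed are arbitrary choices for the illustration:

```python
import random

random.seed(42)                    # fixed seed for reproducible runs

TRIALS = 100_000
PAYOFF = 10                        # dollars won on a "one" or a "six"

wins = 0
for _ in range(TRIALS):
    face = random.randint(1, 6)    # elementary outcome: 1..6, equally likely
    if face in (1, 6):             # the two outcomes of interest
        wins += 1

p_est = wins / TRIALS              # estimate of the probability, near 1/3
expected_payoff = p_est * PAYOFF   # average winnings per roll, near 10/3
print(f"estimated probability: {p_est:.3f}")
```

Repeating the random experiment many times recovers the probability 1/3 computed analytically above, which is exactly what makes a stochastic model useful: individual outcomes are uncertain, but their frequencies are predictable.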

Concept and result

Stochastic simulation is often used in gambling, but it is also indispensable in economic forecasting, since it gives a deeper understanding of the situation than deterministic models do. Stochastic models in economics are often used in making investment decisions: they allow assumptions to be made about the profitability of investments in particular assets or groups of assets.

Modeling makes financial planning more efficient. With its help, investors and traders optimize the distribution of their assets. Using stochastic modeling always has advantages in the long run. In some industries, refusal or inability to apply it can even lead to the bankruptcy of the enterprise. This is due to the fact that in real life new important parameters appear daily, and if they are not taken into account, this can have disastrous consequences.