
Chapter I. RANDOM EVENTS. PROBABILITY

1.1. Regularity and randomness, random variability in the exact sciences, in biology and medicine

Probability theory is a branch of mathematics that studies patterns in random phenomena. A random phenomenon is a phenomenon that, with repeated reproduction of the same experience, can proceed in a slightly different way each time.

Obviously, there is not a single phenomenon in nature that is entirely free of elements of chance, but in different situations we take them into account in different ways. In a number of practical problems they can be neglected, and instead of the real phenomenon we consider its simplified scheme, a "model", assuming that under the given experimental conditions the phenomenon proceeds in a completely definite way. In doing so, the most important, decisive factors characterizing the phenomenon are singled out. It is this scheme for studying phenomena that is most often used in physics, technology and mechanics; this is how the main regularity characteristic of a given phenomenon is revealed, making it possible to predict the result of an experiment from given initial conditions. The influence of random, secondary factors on the result of the experiment is accounted for here by random measurement errors (we will consider the method of calculating them below).

However, the described classical scheme of the so-called exact sciences is poorly adapted to solving many problems in which numerous, closely intertwined random factors play a noticeable (often decisive) role. Here, the random nature of the phenomenon comes to the fore, which can no longer be neglected. This phenomenon must be studied precisely from the point of view of the laws inherent in it as a random phenomenon. In physics, examples of such phenomena are Brownian motion, radioactive decay, a number of quantum mechanical processes, etc.

The subject of study of biologists and physicians is a living organism, the origin, development and existence of which is determined by very many and diverse, often random external and internal factors. That is why the phenomena and events of the living world are also largely random in nature.

The elements of uncertainty, complexity and multi-causality inherent in random phenomena necessitate the creation of special mathematical methods for studying them. The development of such methods and the establishment of the specific regularities inherent in random phenomena are the main tasks of probability theory. It is characteristic that these regularities hold only when random phenomena occur in large numbers: the individual features of particular cases effectively cancel each other out, and the average result for a mass of random phenomena turns out to be not random but quite regular. To a large extent, this circumstance accounts for the widespread use of probabilistic research methods in biology and medicine.

Consider the basic concepts of probability theory.

1.2. Probability of a random event

Each science that develops a general theory of a certain range of phenomena is based on a number of basic concepts. For example, in geometry, these are the concepts of a point, a straight line; in mechanics - the concepts of force, mass, speed, etc. Basic concepts exist in probability theory, one of them is a random event.

A random event is any phenomenon (fact) that, as a result of experience (testing), may or may not occur.

Random events are denoted by the letters A, B, C, etc. Here are some examples of random events:

A – the appearance of heads (the coat of arms) when tossing a standard coin;

B – the birth of a girl in a given family;

C – the birth of a child with a predetermined body weight;

D – the occurrence of an epidemic disease in a given region in a certain period of time, etc.

The main quantitative characteristic of a random event is its probability. Let A be some random event. The probability of the random event A is a mathematical quantity that characterizes the possibility of its occurrence. It is denoted P(A).

Consider two main methods for determining this value.

The classical definition of the probability of a random event is usually based on an analysis of conceptual experiments (trials) whose essence is determined by the statement of the problem. In this case, the probability of the random event A is equal to:

P(A) = m / n,   (1)

where m is the number of cases favorable to the occurrence of the event A, and n is the total number of equally likely cases.

Example 1. A lab rat is placed in a maze in which only one of four possible paths leads to a food reward. Determine the probability of the rat choosing this path.

Solution: by the condition of the problem, of the four equally possible cases (n = 4) the event A (the rat finds food)
is favored by only one, i.e. m = 1. Then P(A) = P(rat finds food) = 1/4 = 0.25 = 25%.

Example 2. There are 20 black and 80 white balls in an urn. One ball is drawn at random from it. Determine the probability that this ball is black.

Solution: the number of all balls in the urn is the total number of equally likely cases n, i.e. n = 20 + 80 = 100, of which the event A (drawing a black ball) is favored by only 20, i.e. m = 20. Then P(A) = P(black ball) = 20/100 = 0.2 = 20%.
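For readers who want to check such computations numerically, here is a minimal Python sketch of the classical definition (1); the function name and the printed examples are illustrative only, not part of the original text.

```python
# Sketch: classical probability P(A) = m / n from formula (1).
def classical_probability(favorable: int, total: int) -> float:
    """Ratio of favorable cases to the total number of equally likely cases."""
    if total <= 0 or not 0 <= favorable <= total:
        raise ValueError("need 0 <= favorable <= total and total > 0")
    return favorable / total

print(classical_probability(1, 4))     # Example 1: rat finds food, 0.25
print(classical_probability(20, 100))  # Example 2: black ball, 0.2
```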

We list the properties of probability following from its classical definition - formula (1):

1. The probability of a random event is a dimensionless quantity.

2. The probability of a random event is always positive and less than one, i.e., 0 < P(A) < 1.

3. The probability of a certain event, i.e., an event that will certainly happen as a result of the trial (m = n), is equal to one.

4. The probability of an impossible event (m = 0) is equal to zero.

5. The probability of any event is non-negative and does not exceed one:
0 ≤ P(A) ≤ 1.

The statistical determination of the probability of a random event is used when it is not possible to apply the classical definition (1). This is often the case in biology and medicine. In that case, the probability P(A) is determined by summarizing the results of actually conducted series of trials (experiments).

Let us introduce the concept of the relative frequency of occurrence of a random event. Suppose that in a series of N trials (the number N may be chosen in advance) the event of interest A occurred in M of them (M < N). The ratio of the number of trials M in which this event occurred to the total number of trials performed N is called the relative frequency of occurrence of the random event A in this series of trials, P*(A):

P*(A) = M / N.

It has been established experimentally that if series of trials (experiments) are carried out under identical conditions and in each series the number N is large enough, then the relative frequency exhibits the property of stability: it does not change much from series to series and, as the number of trials increases, approaches a certain constant value. This value is taken as the statistical probability of the random event A:

P(A) = lim M/N, as N → ∞,   (2)

Thus, the statistical probability P(A) of a random event A is the limit to which the relative frequency of occurrence of this event tends as the number of trials increases without bound (N → ∞).

Approximately, the statistical probability of a random event is equal to the relative frequency of occurrence of this event with a large number of trials:

P(A) ≈ P*(A) = M/N (for large N).   (3)

For example, in experiments on coin tossing, the relative frequency of heads (the coat of arms) turned out to be 0.5016 in 12,000 tosses and 0.5005 in 24,000 tosses. According to formula (1):

P(heads) = 1/2 = 0.5 = 50%.

Example. During a medical examination of 500 people, a tumor in the lungs (o.l.) was found in 5 of them. Determine the relative frequency and the probability of this disease.

Solution: by the condition of the problem M = 5, N = 500, so the relative frequency P*(o.l.) = M/N = 5/500 = 0.01; since N is large enough, we may assume with good accuracy that the probability of a tumor in the lungs is equal to the relative frequency of this event:

P(o.l.) = P*(o.l.) = 0.01 = 1%.
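The stabilization of the relative frequency described by formulas (2) and (3) can be illustrated with a short simulation; this is a sketch only, and the counts it prints will differ from run to run.

```python
# Sketch: relative frequency P*(A) = M / N approaching the probability 0.5
import random

def relative_frequency(n_trials: int, p_true: float = 0.5) -> float:
    hits = sum(1 for _ in range(n_trials) if random.random() < p_true)
    return hits / n_trials

for n in (100, 12_000, 24_000):
    print(n, relative_frequency(n))  # tends toward 0.5 as n grows
```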

The properties of the probability of a random event listed earlier are also preserved in the statistical determination of this quantity.

1.3. Types of random events. Basic theorems of probability theory

All random events can be divided into:

– incompatible;

– independent;

– dependent.

Each type of event has its own characteristics and theorems of probability theory.

1.3.1. Incompatible random events. Addition theorem

Random events (A, B, C, D, …) are called incompatible if the occurrence of one of them excludes the occurrence of the other events in the same trial.

Example 1. A coin is tossed. The appearance of heads (the coat of arms) excludes the appearance of tails (the side bearing the coin's denomination). The events "heads came up" and "tails came up" are incompatible.

Example 2. A student receiving a grade of "2", "3", "4" or "5" on a single exam is a set of incompatible events, since receiving one of these grades excludes the others on the same exam.

For incompatible random events there is the addition theorem: the probability of occurrence of one (no matter which) of several incompatible events A1, A2, A3, …, Ak is equal to the sum of their probabilities:

P(A1 or A2 … or Ak) = P(A1) + P(A2) + … + P(Ak).   (4)

Example 3. There are 50 balls in an urn: 20 white, 20 black and 10 red. Find the probability of drawing a white ball (event A) or a red ball (event B) when one ball is drawn at random from the urn.

Solution: P(A or B) = P(A) + P(B);

P(A) = 20/50 = 0.4;

P(B) = 10/50 = 0.2;

P(A or B) = P(white or red) = 0.4 + 0.2 = 0.6 = 60%.

Example 4. There are 40 children in a class. Of these, 8 boys (event A) and 10 girls (event B) are aged 7 to 7.5 years. Find the probability that a child chosen at random from the class is of this age.

Solution: P(A) = 8/40 = 0.2; P(B) = 10/40 = 0.25.

P(A or B) = 0.2 + 0.25 = 0.45 = 45%.
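A minimal sketch of the addition theorem (4) applied to Example 3; the variable names are illustrative.

```python
# Incompatible events: "white ball" and "red ball" cannot occur in one draw,
# so their probabilities add (formula (4)).
p_white = 20 / 50
p_red = 10 / 50
print(p_white + p_red)  # 0.6, i.e. 60%
```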

The next important concept is the complete group of events: several incompatible events form a complete group if each trial must result in one and only one of the events of this group and no other.

Example 5. A shooter fires at a target. One of the following events will definitely occur: hitting the "ten", the "nine", the "eight", …, the "one", or a miss. These 11 incompatible events form a complete group.

Example 6. On a university exam, a student can receive one of the following four grades: 2, 3, 4 or 5. These four incompatible events also form a complete group.

If incompatible events A1, A2, …, Ak form a complete group, then the sum of the probabilities of these events is always equal to one:

P(A1) + P(A2) + … + P(Ak) = 1.   (5)

This statement is often used in solving many applied problems.

If two incompatible events are such that one of them must necessarily occur, they are called opposite and are denoted A and Ā. Such events make up a complete group, so the sum of their probabilities is always equal to one:

P(A) + P(Ā) = 1.   (6)

Example 7. Let P(A) be the probability of a lethal outcome in a certain disease; it is known and equal to 2%. Then the probability of a favorable outcome of this disease is 98% (P(Ā) = 1 − P(A) = 0.98), since P(A) + P(Ā) = 1.

1.3.2. Independent random events. Probability multiplication theorem

Random events are called independent if the occurrence of one of them does not affect the probability of occurrence of other events.

Example 1 . If there are two or more urns with colored balls, then drawing any ball from one urn does not affect the probability of drawing other balls from the remaining urns.

For independent events there is the probability multiplication theorem: the probability of the joint (simultaneous) occurrence of several independent random events is equal to the product of their probabilities:

P(A1 and A2 and A3 … and Ak) = P(A1) ∙ P(A2) ∙ … ∙ P(Ak).   (7)

The joint (simultaneous) occurrence of events means that the events A1, and A2, and A3, …, and Ak all occur.

Example 2. There are two urns. One contains 2 black and 8 white balls, the other contains 6 black and 4 white. Let event A be the random drawing of a white ball from the first urn, and B from the second. What is the probability of drawing a white ball at random from both urns, i.e., what is P(A and B)?

Solution: the probability of drawing a white ball from the first urn is
P(A) = 8/10 = 0.8, and from the second P(B) = 4/10 = 0.4. The probability of drawing a white ball from both urns at the same time is
P(A and B) = P(A) ∙ P(B) = 0.8 ∙ 0.4 = 0.32 = 32%.

Example 3. A diet with reduced iodine causes enlargement of the thyroid gland in 60% of the animals in a large population. For the experiment, 4 enlarged glands are needed. Find the probability that 4 randomly selected animals will all have an enlarged thyroid gland.

Solution: the random event A is the random selection of an animal with an enlarged thyroid gland. By the condition of the problem, the probability of this event is P(A) = 0.6 = 60%. Then the probability of the joint occurrence of four independent events (the selection at random of 4 animals with an enlarged thyroid gland) is:

P(A1 and A2 and A3 and A4) = 0.6 ∙ 0.6 ∙ 0.6 ∙ 0.6 = (0.6)^4 ≈ 0.13 = 13%.
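A sketch of the multiplication theorem (7) for Example 3, assuming (as the text does) that the four selections are independent.

```python
# Independent events: probabilities multiply (formula (7)).
p_single = 0.6              # one animal has an enlarged thyroid gland
p_all_four = p_single ** 4  # four animals in a row
print(round(p_all_four, 2)) # about 0.13, i.e. 13%
```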

1.3.3. Dependent events. Probability multiplication theorem for dependent events

Random events A and B are called dependent if the occurrence of one of them, say A, changes the probability of occurrence of the other event, B. For dependent events, two probability values are therefore used: the unconditional and the conditional probability.

If A and B are dependent events, then the probability of the event B occurring first (i.e., before the event A) is called the unconditional probability of this event and is denoted P(B). The probability of the event B given that the event A has already occurred is called the conditional probability of the event B and is denoted P(B/A) or P_A(B).

The unconditional probability P(A) and conditional probability P(A/B) are defined similarly for the event A.

The probability multiplication theorem for two dependent events: the probability of the simultaneous occurrence of two dependent events A and B is equal to the product of the unconditional probability of the first event and the conditional probability of the second:

P(A and B) = P(A) ∙ P(B/A),   (8)

if the event A occurs first, or

P(A and B) = P(B) ∙ P(A/B),   (9)

if the event B occurs first.

Example 1. There are 3 black balls and 7 white balls in an urn. Find the probability that 2 white balls will be taken out of this urn one by one (and the first ball is not returned to the urn).

Solution: the probability of drawing the first white ball (event A) is 7/10. After it is taken out, 9 balls remain in the urn, 6 of which are white. Then the probability of drawing a second white ball (event B) is P(B/A) = 6/9, and the probability of drawing two white balls in a row is

P(A and B) = P(A) ∙ P(B/A) = (7/10) ∙ (6/9) = 7/15 ≈ 0.47 = 47%.
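A sketch of formula (8) for Example 1, with an optional Monte Carlo check; the simulated counts vary between runs and are only an illustration.

```python
# Dependent events: drawing two white balls without replacement
# from an urn with 7 white and 3 black balls (formula (8)).
import random
from fractions import Fraction

p_exact = Fraction(7, 10) * Fraction(6, 9)  # P(A) * P(B/A) = 7/15
print(p_exact, float(p_exact))              # 7/15 ≈ 0.467

def draw_two_white(trials: int = 100_000) -> float:
    hits = 0
    for _ in range(trials):
        urn = ["w"] * 7 + ["b"] * 3
        random.shuffle(urn)
        hits += urn[0] == "w" and urn[1] == "w"
    return hits / trials

print(draw_two_white())  # close to 0.467
```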

The probability multiplication theorem for dependent events can be generalized to any number of events. In particular, for three mutually dependent events:

P(A and B and C) = P(A) ∙ P(B/A) ∙ P(C/AB).   (10)

Example 2. In two kindergartens, each attended by 100 children, there was an outbreak of an infectious disease. The proportion of cases is 1/5 and 1/4, respectively, and in the first institution 70%, and in the second - 60% of cases are children under 3 years of age. One child is randomly selected. Determine the probability that:

1) the selected child attends the first kindergarten (event A) and is sick (event B);

2) the selected child is from the second kindergarten (event C), is sick (event D) and is older than 3 years (event E).

Solution. 1) The desired probability is

P(A and B) = P(A) ∙ P(B/A) = (100/200) ∙ (1/5) = 0.1 = 10%.

2) The desired probability is

P(C and D and E) = P(C) ∙ P(D/C) ∙ P(E/CD) = (100/200) ∙ (1/4) ∙ 0.4 = 0.05 = 5%.

1.4. Bayes formula

If the probability of the joint occurrence of the dependent events A and B does not depend on the order in which they occur, then P(A and B) = P(A) ∙ P(B/A) = P(B) ∙ P(A/B). In this case, the conditional probability of one of the events can be found from the probabilities of both events and the conditional probability of the other:

P(B/A) = P(B) ∙ P(A/B) / P(A).   (11)

The generalization of this formula for the case of many events is the Bayes formula.

Let n incompatible random events H1, H2, …, Hn form a complete group of events. The probabilities of these events P(H1), P(H2), …, P(Hn) are known, and since they form a complete group, P(H1) + P(H2) + … + P(Hn) = 1.

Let some random event A be associated with the events H1, H2, …, Hn, and let the conditional probabilities of the occurrence of the event A with each event Hi be known, i.e., let P(A/H1), P(A/H2), …, P(A/Hn) be known. In this case, the sum of the conditional probabilities P(A/Hi) need not be equal to one, i.e., P(A/H1) + P(A/H2) + … + P(A/Hn) ≠ 1 in general.

Then the conditional probability of the event Hi when the event A has been realized (i.e., given that the event A has occurred) is determined by the Bayes formula:

P(Hi/A) = P(Hi) ∙ P(A/Hi) / [P(H1) ∙ P(A/H1) + P(H2) ∙ P(A/H2) + … + P(Hn) ∙ P(A/Hn)],   (12)

and for these conditional probabilities P(H1/A) + P(H2/A) + … + P(Hn/A) = 1.

Bayes' formula has found wide application not only in mathematics but also in medicine. For example, it is used to calculate the probabilities of particular diseases. If H1, …, Hn are the presumptive diagnoses for a given patient, A is some sign related to them (a symptom, a particular indicator of a blood or urine test, a detail of a radiograph, etc.), and the conditional probabilities P(A/Hi) of this sign appearing with each diagnosis Hi (i = 1, 2, 3, …, n) are known in advance, then the Bayes formula (12) makes it possible to calculate the conditional probabilities of the diseases (diagnoses) P(Hi/A) once it has been established that the characteristic sign A is present in the patient.

Example 1. During the initial examination of a patient, 3 diagnoses H1, H2, H3 are assumed. Their probabilities, in the doctor's opinion, are distributed as follows: P(H1) = 0.5; P(H2) = 0.17; P(H3) = 0.33. The first diagnosis therefore seems, provisionally, the most likely. To refine it, a blood test is ordered, for example, in which an increase in ESR is looked for (event A). It is known in advance (from research results) that the probabilities of an increased ESR in the suspected diseases are:

P(A/H1) = 0.1; P(A/H2) = 0.2; P(A/H3) = 0.9.

In the analysis obtained, an increase in ESR was recorded (event A occurred). Then calculation by the Bayes formula (12) gives the probabilities of the presumed diseases given the increased ESR value: P(H1/A) = 0.13; P(H2/A) = 0.09;
P(H3/A) = 0.78. These figures show that, taking the laboratory data into account, the most realistic diagnosis is not the first but the third, whose probability has now turned out to be quite high.
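The calculation in Example 1 can be reproduced with a few lines of Python; this is a sketch of formula (12) using the numbers stated above, not a diagnostic tool.

```python
# Bayes formula (12): posterior probabilities of the diagnoses given event A.
priors = {"H1": 0.5, "H2": 0.17, "H3": 0.33}     # P(Hi)
likelihoods = {"H1": 0.1, "H2": 0.2, "H3": 0.9}  # P(A/Hi), A = increased ESR

evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

for h, p in posteriors.items():
    print(h, round(p, 2))  # roughly H1: 0.13, H2: 0.09, H3: 0.78
```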

The above example is the simplest illustration of how, using the Bayes formula, one can formalize the doctor's logic when making a diagnosis and, thanks to this, create computer diagnostic methods.

Example 2. Determine the probability assessing the degree of risk of perinatal death of a child in women with an anatomically narrow pelvis.

Solution: let the event H1 be a safe delivery. According to clinical reports, P(H1) = 0.975 = 97.5%; if H2 is the fact of perinatal mortality, then P(H2) = 1 − 0.975 = 0.025 = 2.5%.

Denote by A the fact of the presence of a narrow pelvis in a woman in labor. From the studies carried out it is known that: a) P(A/H1) is the probability of a narrow pelvis given a favorable delivery, P(A/H1) = 0.029; b) P(A/H2) is the probability of a narrow pelvis given perinatal mortality,
P(A/H2) = 0.051. Then the desired probability of perinatal mortality given a narrow pelvis in the woman in labor is calculated by the Bayes formula (12) and equals:

P(H2/A) = P(H2) ∙ P(A/H2) / [P(H1) ∙ P(A/H1) + P(H2) ∙ P(A/H2)] = (0.025 ∙ 0.051) / (0.975 ∙ 0.029 + 0.025 ∙ 0.051) ≈ 0.043 = 4.3%.
Thus, the risk of perinatal mortality with an anatomically narrow pelvis is significantly higher (almost twice) than the average risk (about 4.3% vs. 2.5%).

Such calculations, usually performed using a computer, form the basis of methods for forming groups of patients at increased risk associated with the presence of one or another aggravating factor.

The Bayes formula is very useful for evaluating many other biomedical situations, which will become apparent when solving the tasks given in the manual.

1.5. About random events with probabilities close to 0 or 1

When solving many practical problems, one has to deal with events whose probability is very small, that is, close to zero. Based on experience with such events, the following principle has been adopted. If a random event has a very small probability, then in practice we can assume that it will not occur in a single trial, in other words, the possibility of its occurrence can be neglected. The answer to the question of how small this probability should be is determined by the essence of the problems being solved, by how important the result of the prediction is for us. For example, if the probability that a parachute will not open during a jump is 0.01, then the use of such parachutes is unacceptable. However, the same 0.01 probability that a long-distance train will arrive late makes us almost certain that it will arrive on time.

A sufficiently small probability at which (in a given specific problem) an event can be considered practically impossible is called significance level. In practice, the significance level is usually taken to be 0.01 (one percent significance level) or 0.05 (five percent significance level), much less often it is taken to be 0.001.

The introduction of a significance level allows us to state that if some event A is practically impossible, then the opposite event Ā is practically certain, i.e., for it P(Ā) ≈ 1.

Chapter II. RANDOM VARIABLES

2.1. Random variables, their types

In mathematics, a quantity is a common name for various quantitative characteristics of objects and phenomena. Length, area, temperature, pressure, etc. are examples of different quantities.

A quantity that takes various numerical values under the influence of random circumstances is called a random variable. Examples of random variables: the number of patients at a doctor's office; the exact dimensions of people's internal organs, etc.

One distinguishes between discrete and continuous random variables.

A random variable is called discrete if it takes only particular values, separated from one another, which can be specified and enumerated.

Examples of a discrete random variable are:

– the number of students in a lecture room: it can only be a non-negative integer: 0, 1, 2, 3, 4, …, 20, …;

– the number of points on the upper face when a die is thrown: it can take only integer values from 1 to 6;

– the relative frequency of hitting a target in 10 shots: its values are 0; 0.1; 0.2; 0.3; …; 1;

– the number of events occurring in identical time intervals: the pulse rate, the number of ambulance calls per hour, the number of operations per month with a fatal outcome, etc.

A random variable is called continuous if it can take any value within a certain interval, whose boundaries are sometimes sharply defined and sometimes not. Continuous random variables include, for example, the body weight and height of adults, the weight and volume of the brain, the quantitative content of enzymes in healthy people, the size of blood cells, the pH of the blood, etc.

The concept of a random variable plays a decisive role in modern probability theory, which has developed special techniques for the transition from random events to random variables.

If a random variable depends on time, then we can talk about a random process.

2.2. Distribution law of a discrete random variable

To give a complete description of a discrete random variable, it is necessary to indicate all its possible values and their probabilities.

The correspondence between the possible values ​​of a discrete random variable and their probabilities is called the law of distribution of this variable.

Denote the possible values of the random variable X by xi, and the corresponding probabilities by pi. Then the distribution law of a discrete random variable can be specified in three ways: in the form of a table, a graph, or a formula.

In a table, called a distribution series, all possible values of the discrete random variable X and the probabilities P(X) corresponding to these values are listed:

X:     x1    x2    …    xn

P(X):  p1    p2    …    pn

In this case, the sum of all the probabilities pi must be equal to one (normalization condition):

Σ pi = p1 + p2 + … + pn = 1.   (13)

Graphically, the law is represented by a broken line, usually called the distribution polygon (Fig. 1). Here all possible values xi of the random variable are plotted along the horizontal axis, and the corresponding probabilities pi along the vertical axis.

Analytically, the law is expressed by a formula. For example, if the probability of hitting the target with one shot is p, then the probability of hitting the target exactly once in n shots is given by the formula P(n) = n ∙ q^(n−1) ∙ p, where q = 1 − p is the probability of a miss with one shot.
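A minimal sketch of a distribution series and the normalization condition (13); the fair-die values used here are illustrative and not tied to a particular example in the text.

```python
# Distribution series of a discrete random variable and check of condition (13).
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
assert abs(sum(probs) - 1.0) < 1e-12   # p1 + p2 + ... + pn = 1

# Analytic law from the text: exactly one hit in n shots, P(n) = n * q**(n-1) * p.
p, n = 0.3, 5
q = 1 - p
print(n * q ** (n - 1) * p)
```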

2.3. The law of distribution of a continuous random variable. Probability density

For continuous random variables, it is impossible to specify the distribution law in the forms given above, since such a variable has an uncountable set of possible values that completely fill a certain interval. It is therefore impossible to compile a table listing all its possible values or to build a distribution polygon. In addition, the probability of any particular value is vanishingly small (close to 0). At the same time, different regions (intervals) of possible values of a continuous random variable are not equally probable. Thus, in this case too a certain distribution law operates, although not in the former sense.

Consider a continuous random variable X whose possible values completely fill a certain interval (a, b). The probability distribution law of such a variable should allow us to find the probability that its value falls into any given interval (x1, x2) lying inside (a, b) (Fig. 2).

This probability is denoted P(x1 < X < x2), or P(x1 ≤ X ≤ x2).

Consider first a very small interval of values of X, from x to (x + dx); see Fig. 2. The small probability dP that the random variable X takes a value from the interval (x, x + dx) is proportional to the width dx of this interval: dP ~ dx. Introducing a proportionality factor f, which may itself depend on x, we get:

dP = f(x) ∙ dx.   (14)

The function f(x) introduced here is called the probability density of the random variable X or, in short, the probability density (distribution density). Relation (14) is a differential relation, whose integration gives the probability that the value of X falls into the interval (x1, x2):

P(x1 < X < x2) = ∫_(x1)^(x2) f(x) dx.   (15)

Graphically, the probability P(x1 < X < x2) is equal to the area of the curvilinear trapezoid bounded by the abscissa axis, the curve f(x) and the straight lines x = x1 and x = x2 (Fig. 3). This follows from the geometric meaning of the definite integral (15). The curve f(x) is called the distribution curve.

From (15) it follows that if the function f(x) is known, then, by changing the limits of integration, we can find the probability for any interval of interest. Therefore, specifying the function f(x) completely determines the distribution law of continuous random variables.

For the probability density f(x) the normalization condition must be satisfied in the form:

∫_a^b f(x) dx = 1,   (16)

if it is known that all values of X lie in the interval (a, b), or in the form:

∫_(−∞)^(+∞) f(x) dx = 1,   (17)

if the limits of the interval of values of X are not exactly defined. The normalization conditions for the probability density, (16) and (17), are a consequence of the fact that the values of the random variable X lie with certainty within (a, b) or (−∞, +∞). From (16) and (17) it follows that the area of the figure bounded by the distribution curve and the x-axis is always equal to 1.
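A sketch of formulas (15)-(17): a uniform density on (a, b) is integrated numerically. The choice of density and the midpoint-rule integrator are assumptions made for illustration only.

```python
# Probability density of a continuous random variable and its normalization.
def f(x: float, a: float = 0.0, b: float = 2.0) -> float:
    return 1.0 / (b - a) if a <= x <= b else 0.0

def integrate(func, lo: float, hi: float, steps: int = 10_000) -> float:
    h = (hi - lo) / steps
    return sum(func(lo + (i + 0.5) * h) for i in range(steps)) * h

print(integrate(f, 0.0, 2.0))  # normalization (16): ~1.0
print(integrate(f, 0.5, 1.0))  # P(0.5 < X < 1.0): ~0.25
```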

2.4. Basic numerical characteristics of random variables

The results presented in Sections 2.2 and 2.3 show that a complete characterization of discrete and continuous random variables can be obtained by knowing their distribution laws. However, in many practically significant situations the so-called numerical characteristics of random variables are used; their main purpose is to express in compressed form the most significant features of the distribution of a random variable. It is important that these parameters are specific (constant) values that can be estimated from data obtained in experiments. Such estimates are the subject of descriptive statistics.

In probability theory and mathematical statistics quite a few different characteristics are used, but we will consider only the most common ones. Only for some of them will we give the formulas by which their values are calculated; in other cases we will leave the calculations to the computer.

Consider the position characteristics: the mathematical expectation, the mode, and the median.

They characterize the position of a random variable on the number axis, i.e., they indicate some approximate value around which all possible values of the random variable are grouped. Among them, the mathematical expectation M(X) plays the most important role.

RUSSIAN ACADEMY OF THE NATIONAL ECONOMY AND PUBLIC SERVICE UNDER THE PRESIDENT OF THE RUSSIAN FEDERATION

OREL BRANCH

Department of Sociology and Information Technology

Typical calculation No. 1

in the discipline "Probability Theory and Mathematical Statistics"

on the topic "Fundamentals of Probability Theory"

Orel - 2016.

Goal of the work: consolidation of theoretical knowledge of the foundations of probability theory by solving typical problems; mastering the concepts of the main types of random events and developing skills in algebraic operations on events.

Job submission requirements: the work is done in handwritten form, the work must contain all the necessary explanations and conclusions, the formulas must contain a decoding of the accepted designations, the pages must be numbered.

Variant number corresponds to the student's serial number in the group list.

Basic theoretical information

Probability theory is a branch of mathematics that studies the patterns of random phenomena.

The concept of an event. Event classification.

One of the basic concepts of probability theory is the concept of an event. Events are denoted by capital Latin letters A, B, C, …

An event is a possible result (outcome) of a test or experiment.

Testing is understood as any purposeful action.

Example : The shooter shoots at the target. A shot is a test, hitting a target is an event.

An event is called random if, under the conditions of a given experiment, it can either occur or not occur.

Example: A shot from a gun is a test.

Event A – hitting the target;

event B – a miss; both are random events.

An event is called certain if, as a result of the test, it must necessarily occur.

Example: Rolling no more than 6 points when throwing a die.

The event is called impossible if, under the conditions of the given experiment, it cannot occur at all.

Example: Rolling more than 6 points when throwing a die.

The events are called incompatible if the occurrence of one of them precludes the occurrence of any other. Otherwise, the events are called joint.

Example: A die is thrown. A roll of 5 excludes a roll of 6 on the same throw; these are incompatible events. A student receiving a "good" and an "excellent" grade in exams in two different disciplines is an example of joint events.

Two incompatible events, one of which must necessarily occur, are called opposite. The event opposite to the event A is denoted Ā.

Example : The appearance of the "coat of arms" and the appearance of "tails" when tossing a coin are opposite events.

Several events in a given experiment are called equally possible if there is reason to believe that none of them is more possible than the others.

Example: drawing an ace, a ten, or a queen from a deck of cards – these events are equally possible.

Several events form a complete group if, as a result of the test, one and only one of these events must necessarily occur.

Example: Rolling 1, 2, 3, 4, 5 or 6 points when throwing a die.

The classic definition of the probability of an event. Probability Properties

For practical activities, it is important to be able to compare events according to the degree of possibility of their occurrence.

The probability of an event is a numerical measure of the degree of objective possibility of its occurrence.

Each of the equally likely test results will be called an elementary outcome.

An outcome is called favorable to the event A if its occurrence entails the occurrence of the event A.

Classical definition: the probability of the event A is equal to the ratio of the number of outcomes favorable to this event to the total number of possible outcomes:

P(A) = m / n,   (1)

where P(A) is the probability of the event A,

m is the number of favorable outcomes,

n is the number of all possible outcomes.

Example: There are 1000 tickets in a lottery, of which 700 are non-winning. What is the probability of winning with one purchased ticket?

Event A – the purchased ticket is a winning one.

The number of possible outcomes n = 1000 is the total number of lottery tickets.

The number of outcomes favoring the event A is the number of winning tickets, i.e., m = 1000 − 700 = 300.

According to the classical definition of probability:

P(A) = m / n = 300 / 1000 = 0.3.

Answer: P(A) = 0.3.

Let us note the properties of the probability of an event:

1) The probability of any event is between zero and one, i.e., 0 ≤ P(A) ≤ 1.

2) The probability of a certain event is 1.

3) The probability of an impossible event is 0.

In addition to the classical, there are also geometric and statistical definitions of probability.

Elements of combinatorics.

Combinatorics formulas are widely used to calculate the number of outcomes favorable to the event in question or the total number of outcomes.

Let there be a set N from n various elements.

Definition 1: Selections, each of which includes all n elements and which differ from each other only in the order of the elements, are called permutations of n elements.

Pn = n!   (2)

where n! (n factorial) is the product of the first n natural numbers, i.e.

n! = 1 ∙ 2 ∙ 3 ∙ … ∙ (n − 1) ∙ n

So, for example, 5!=1∙2∙3∙4∙5 = 120

Definition 2: Selections, each containing m elements (m ≤ n) and differing from each other either in the composition of the elements or in their order, are called placements (arrangements) of n elements taken m at a time.

A_n^m = n ∙ (n − 1) ∙ … ∙ (n − m + 1) = n! / (n − m)!   (3)
Definition 3: Selections, each containing m elements (m ≤ n) and differing from each other only in the composition of the elements, are called combinations of n elements taken m at a time.

C_n^m = n! / (m! ∙ (n − m)!)   (4)
Comment: changing the order of elements within the same combination does not result in a new combination.
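Formulas (2)-(4) are available directly in Python's standard library; the sketch below only assumes Python 3.8 or newer, where math.perm and math.comb exist.

```python
import math

n, m = 5, 3
print(math.factorial(n))  # permutations P_n = n! = 120
print(math.perm(n, m))    # placements A_n^m = n!/(n-m)! = 60
print(math.comb(n, m))    # combinations C_n^m = n!/(m!(n-m)!) = 10
```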

We formulate two important rules that are often used in solving combinatorial problems:

Sum rule: if an object A can be chosen in m ways, and an object B in n ways, then the choice of either A or B can be made in m + n ways.

Product rule: if an object A can be chosen in m ways, and after each such choice an object B can be chosen in n ways, then the pair of objects A and B, in that order, can be chosen in m ∙ n ways.

It is unlikely that many people think about whether it is possible to calculate events that are more or less random. In simple terms, is it realistic to know which side of the die will fall next. It was this question that two great scientists asked, who laid the foundation for such a science as the theory of probability, in which the probability of an event is studied quite extensively.

Origin

If you try to define such a concept as probability theory, you get the following: this is one of the branches of mathematics that studies the constancy of random events. Of course, this concept does not really reveal the whole essence, so it is necessary to consider it in more detail.

I would like to start with the creators of the theory. As mentioned above, there were two of them, and it was they who were among the first who tried to calculate the outcome of an event using formulas and mathematical calculations. On the whole, the beginnings of this science appeared in the Middle Ages. At that time, various thinkers and scientists tried to analyze gambling, such as roulette, dice, and so on, thereby establishing a pattern and percentage of a particular number falling out. The foundation was laid in the seventeenth century by the aforementioned scientists.

At first, their work could not be attributed to the great achievements in this field, because everything they did was simply empirical facts, and the experiments were made visually, without the use of formulas. Over time, it turned out to achieve great results, which appeared as a result of observing the throwing of dice. It was this tool that helped to derive the first intelligible formulas.

Like-minded people

It is impossible not to mention such a person as Christian Huygens when studying the topic called "probability theory" (the probability of an event is covered precisely in this science). This person is very interesting. He, like the scientists presented above, tried to derive the regularity of random events in the form of mathematical formulas. It is noteworthy that he did not do this together with Pascal and Fermat, that is, his works did not in any way intersect with those minds. Huygens derived a number of basic concepts of the theory on his own.

An interesting fact is that his work came out long before the results of the work of the discoverers, or rather, twenty years earlier. Among the designated concepts, the most famous are:

  • the concept of probability as a magnitude of chance;
  • mathematical expectation for discrete cases;
  • theorems of multiplication and addition of probabilities.

It is also impossible not to mention Jacob Bernoulli, who also made a significant contribution to the study of the problem. Conducting his own investigations, independently of anyone, he managed to present a proof of the law of large numbers. In turn, the scientists Poisson and Laplace, who worked at the beginning of the nineteenth century, were able to prove the first fundamental theorems. It was from this moment that probability theory began to be used to analyze errors in observations. Russian scientists, namely Markov, Chebyshev and Lyapunov, could not bypass this science either. Building on the work done by the great geniuses, they established this subject as a branch of mathematics. These figures worked at the end of the nineteenth century, and thanks to their contribution results such as the following appeared:

  • law of large numbers;
  • theory of Markov chains;
  • central limit theorem.

So, with the history of the birth of science and with the main people who influenced it, everything is more or less clear. Now it's time to concretize all the facts.

Basic concepts

Before touching on laws and theorems, it is worth studying the basic concepts of probability theory. The event takes the leading role in it. This topic is quite voluminous, but without it it will not be possible to understand everything else.

An event in probability theory is any set of outcomes of an experiment. There are not so many concepts of this phenomenon. So, the scientist Lotman, who works in this area, said that in this case we are talking about what "happened, although it might not have happened."

Random events (probability theory pays special attention to them) is a concept that implies absolutely any phenomenon that has the ability to occur. Or, conversely, this scenario may not happen when many conditions are met. It is also worth knowing that it is random events that capture the entire volume of phenomena that have occurred. Probability theory indicates that all conditions can be repeated constantly. It was their conduct that was called "experiment" or "test".

A certain event is one that will 100% occur in a given test. Accordingly, an impossible event is one that will not happen.

The combination of a pair of actions (conditionally case A and case B) is a phenomenon that occurs simultaneously. They are designated as AB.

The sum of a pair of events A and B is the event C; in other words, if at least one of them happens (A or B), then C is obtained. The formula of the described phenomenon is written as follows: C = A + B.

Disjoint events in probability theory imply that the two cases are mutually exclusive. They can never happen at the same time. Joint events in probability theory are their antipode. This implies that if A happened, then it does not prevent B in any way.

Opposite events (probability theory deals with them in great detail) are easy to understand. It is best to deal with them in comparison. They are almost the same as incompatible events in probability theory. But their difference lies in the fact that one of the many phenomena in any case must occur.

Equally probable events are those whose chances of occurring are equal. To make this clearer, imagine the tossing of a coin: the appearance of one of its sides is just as likely as the appearance of the other.

A favorable event is easier to see with an example. Suppose there are an event B and an event A. The first is a roll of the die producing an odd number, and the second is the appearance of the number five on the die. Then it turns out that A favors B.

Independent events in the theory of probability are projected only on two or more cases and imply the independence of any action from another. For example, A - dropping tails when throwing a coin, and B - getting a jack from the deck. They are independent events in probability theory. At this point, it became clearer.

Dependent events in probability theory are also admissible only for their set. They imply the dependence of one on the other, that is, the phenomenon B can occur only if A has already happened or, on the contrary, has not happened when this is the main condition for B.

The outcome of a random experiment consisting of one component is elementary events. Probability theory explains that this is a phenomenon that happened only once.

Basic formulas

So, the concepts of "event", "probability theory" were considered above, the definition of the main terms of this science was also given. Now it's time to get acquainted directly with the important formulas. These expressions mathematically confirm all the main concepts in such a difficult subject as probability theory. The probability of an event plays a huge role here too.

It is better to start with the main ones. And before proceeding to them, it is worth considering what combinatorics is.

Combinatorics is primarily a branch of mathematics, it deals with the study of a huge number of integers, as well as various permutations of both the numbers themselves and their elements, various data, etc., leading to the appearance of a number of combinations. In addition to probability theory, this branch is important for statistics, computer science, and cryptography.

So, now you can move on to the presentation of the formulas themselves and their definition.

The first of these will be an expression for the number of permutations, it looks like this:

P_n = n ⋅ (n - 1) ⋅ (n - 2)…3 ⋅ 2 ⋅ 1 = n!

The equation applies only if the elements differ only in their order.

Now the placement formula will be considered, it looks like this:

A_n^m = n ⋅ (n - 1) ⋅ (n-2) ⋅ ... ⋅ (n - m + 1) = n! : (n - m)!

This expression is applicable not only to the order of the element, but also to its composition.

The third equation from combinatorics, and it is also the last one, is called the formula for the number of combinations:

C_n^m = n! : ((n - m)! ⋅ m!)

A combination is called a selection that is not ordered, respectively, and this rule applies to them.

It turned out to be easy to figure out the formulas of combinatorics; now we can move on to the classical definition of probability. This expression looks like this:

P(A) = m : n

In this formula, m is the number of outcomes favorable to the event A, and n is the number of absolutely all equally possible elementary outcomes.

There are a large number of expressions, the article will not cover all of them, but the most important of them will be touched upon, such as, for example, the probability of the sum of events:

P(A + B) = P(A) + P(B) - this theorem is for adding only incompatible events;

P(A + B) = P(A) + P(B) - P(AB) - and this one is for adding only compatible ones.

Probability of producing events:

P(A ⋅ B) = P(A) ⋅ P(B) - this theorem is for independent events;

P(A ⋅ B) = P(A) ⋅ P(B∣A); P(A ⋅ B) = P(B) ⋅ P(A∣B) - and these are for dependent events.

The event formula will end the list. Probability theory tells us about Bayes' theorem, which looks like this:

P(H_m∣A) = (P(H_m)P(A∣H_m)) : (∑_(k=1)^n P(H_k)P(A∣H_k)),m = 1,..., n

In this formula, H 1 , H 2 , …, H n is the full group of hypotheses.

Examples

If you carefully study any branch of mathematics, it is not complete without exercises and sample solutions. So is the theory of probability: events, examples here are an integral component that confirms scientific calculations.

Formula for number of permutations

Let's say there are thirty cards in a deck of cards, starting with face value one. Next question. How many ways are there to stack the deck so that cards with a face value of one and two are not next to each other?

The task is set, now let's move on to solving it. First you need to determine the number of permutations of thirty elements, for this we take the above formula, it turns out P_30 = 30!.

Based on this rule, we will find out how many options there are to stack the deck in different ways, but we need to subtract from them those in which the first and second cards are next to each other. To do this, let us start with the option when the first card is above the second. It turns out that the first card can occupy twenty-nine places, from the first to the twenty-ninth, and the second card from the second to the thirtieth, which gives exactly twenty-nine positions for the pair of cards. In turn, the remaining cards can occupy twenty-eight places, in any order. That is, for a permutation of the remaining twenty-eight cards there are P_28 = 28! options.

As a result, it turns out that if we consider the arrangements where the first card is above the second, there are 29 ⋅ 28! = 29! extra possibilities.

Using the same method, you need to calculate the number of redundant options for the case when the first card is under the second. It also turns out 29 ⋅ 28! = 29!

From this it follows that there are 2 ⋅ 29! extra options, so there remain 30! - 2 ⋅ 29! suitable ways to stack the deck. It remains only to count.

30! = 29! ⋅ 30; 30!- 2 ⋅ 29! = 29! ⋅ (30 - 2) = 29! ⋅ 28

Now you need to multiply all the numbers from one to twenty-nine among themselves, and then at the end multiply everything by 28. The answer is approximately 2.4757 ⋅ 10^32.
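The count above is easy to verify with exact integer arithmetic; this sketch simply recomputes 30! − 2 ⋅ 29! and compares it with 28 ⋅ 29!.

```python
import math

total = math.factorial(30)
adjacent = 2 * math.factorial(29)        # "1 above 2" plus "2 above 1"
result = total - adjacent

print(result == 28 * math.factorial(29)) # True
print(f"{result:.4e}")                   # about 2.4757e+32
```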

Example solution. Formula for Placement Number

In this problem, you need to find out how many ways there are to put fifteen volumes on one shelf, but on the condition that there are thirty volumes in total.

In this problem, the solution is slightly simpler than in the previous one. Using the already known formula, it is necessary to calculate the total number of arrangements from thirty volumes of fifteen.

A_30^15 = 30 ⋅ 29 ⋅ 28⋅... ⋅ (30 - 15 + 1) = 30 ⋅ 29 ⋅ 28 ⋅ ... ⋅ 16 = 202 843 204 931 727 360 000

The answer, respectively, will be equal to 202,843,204,931,727,360,000.

Now let's take the task a little more difficult. You need to find out how many ways there are to arrange thirty books on two bookshelves, provided that only fifteen volumes can be on one shelf.

Before starting the solution, I would like to clarify that some problems are solved in several ways, so there are two ways in this one, but the same formula is used in both.

In this problem, you can take the answer from the previous one, because there we calculated how many times you can fill a shelf with fifteen books in different ways. It turned out A_30^15 = 30 ⋅ 29 ⋅ 28 ⋅ ... ⋅ (30 - 15 + 1) = 30 ⋅ 29 ⋅ 28 ⋅ ...⋅ 16.

We calculate the second shelf according to the permutation formula, because fifteen books are placed in it, while only fifteen remain. We use the formula P_15 = 15!.

It turns out that in total there will be A_30^15 ⋅ P_15 ways, but, in addition, the product of all numbers from thirty to sixteen will need to be multiplied by the product of numbers from one to fifteen, as a result, the product of all numbers from one to thirty will be obtained, that is, the answer equals 30!

But this problem can be solved in a different, easier way. To do this, imagine that there is one shelf for thirty books. All of them are placed on this one shelf, but since the condition requires that there be two shelves, we cut the long one in half, obtaining two shelves of fifteen each. From this it follows that the number of placement options is P_30 = 30!.

Example solution. Formula for combination number

Now we will consider a variant of the third problem from combinatorics. You need to find out how many ways there are to arrange fifteen books, provided that you need to choose from thirty absolutely identical ones.

For the solution, of course, the formula for the number of combinations will be applied. From the condition it becomes clear that the order of the identical fifteen books is not important. Therefore, initially you need to find out the total number of combinations of thirty books of fifteen.

C_30^15 = 30! : ((30 - 15)! ⋅ 15!) = 155 117 520

That's all. Using this formula, in the shortest possible time it was possible to solve such a problem, the answer, respectively, is 155 117 520.

Example solution. The classical definition of probability

Using the formula above, you can find the answer in a simple problem. But it will help to visually see and trace the course of actions.

The problem is given that there are ten absolutely identical balls in the urn. Of these, four are yellow and six are blue. One ball is taken from the urn. You need to find out the probability of getting blue.

To solve the problem, it is necessary to designate getting the blue ball as event A. This experience can have ten outcomes, which, in turn, are elementary and equally probable. At the same time, six out of ten are favorable for event A. We solve using the formula:

P(A) = 6: 10 = 0.6

By applying this formula, we found out that the probability of getting a blue ball is 0.6.

Example solution. Probability of the sum of events

Now a variant will be presented which is solved using the formula for the probability of the sum of events. So, the condition states that there are two boxes: the first contains one gray and five white balls, and the second contains eight gray and four white balls. One ball is taken from the first box and one from the second. It is necessary to find out what the chance is that the balls taken out will be one gray and one white.

To solve this problem, it is necessary to designate events.

  • A – a gray ball is taken from the first box: P(A) = 1/6.
  • A' – a white ball is taken from the first box: P(A') = 5/6.
  • B – a gray ball is taken from the second box: P(B) = 2/3.
  • B' – a white ball is taken from the second box: P(B') = 1/3.

According to the condition of the problem, one of the following must occur: AB' or A'B. Using the multiplication formula, we get: P(AB') = 1/18, P(A'B) = 10/18.

Now the formula for multiplying the probability has been used. Next, to find out the answer, you need to apply the equation for their addition:

P = P(AB' + A'B) = P(AB') + P(A'B) = 11/18.

So, using the formula, you can solve similar problems.
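A sketch of the same two-box calculation with exact fractions; the variable names are illustrative.

```python
# One gray and one white ball from the two boxes: multiply within each case,
# then add the two incompatible cases.
from fractions import Fraction

p_gray_1, p_white_1 = Fraction(1, 6), Fraction(5, 6)    # first box
p_gray_2, p_white_2 = Fraction(8, 12), Fraction(4, 12)  # second box

p = p_gray_1 * p_white_2 + p_white_1 * p_gray_2
print(p)  # 11/18
```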

Outcome

The article provided information on the topic "Probability Theory", in which the probability of an event plays a crucial role. Of course, not everything was taken into account, but, based on the text presented, one can theoretically get acquainted with this section of mathematics. The science in question can be useful not only in professional work, but also in everyday life. With its help, you can calculate any possibility of any event.

The text also touched upon significant dates in the history of the formation of the theory of probability as a science, and the names of people whose works were invested in it. This is how human curiosity led to the fact that people learned to calculate even random events. Once they were just interested in it, but today everyone already knows about it. And no one will say what awaits us in the future, what other brilliant discoveries related to the theory under consideration will be made. But one thing is for sure - research does not stand still!

The probability of an event is understood as some numerical characteristic of the possibility of the occurrence of this event. There are several approaches to determining probability.

The probability of an event A is the ratio of the number of outcomes favorable to this event to the total number of all equally possible incompatible elementary outcomes that form a complete group. Thus, the probability of an event A is determined by the formula

P(A) = m / n,

where m is the number of elementary outcomes favoring A, and n is the number of all possible elementary outcomes of the test.

Example 3.1. In the experiment of throwing a die, the number of all outcomes n is 6 and they are all equally possible. Let the event A mean the appearance of an even number. Then the favorable outcomes for this event are the numbers 2, 4, 6, and their number is 3. Therefore, the probability of the event A is equal to P(A) = 3/6 = 1/2.

Example 3.2. What is the probability that the digits in a randomly chosen two-digit number are the same?

Two-digit numbers are the numbers from 10 to 99; there are 90 such numbers in total. Nine of them have identical digits (these are the numbers 11, 22, …, 99). Since in this case m = 9 and n = 90,

P(A) = m / n = 9 / 90 = 0.1,

where A is the event "a number with identical digits is chosen".

Example 3.3. There are 7 standard parts in a lot of 10 parts. Find the probability that there are 4 standard parts among six randomly selected parts.

The total number of possible elementary outcomes of the test is equal to the number of ways in which 6 parts can be taken from 10, i.e., the number of combinations of 10 elements taken 6 at a time, C_10^6 = 210. Determine the number of outcomes favoring the event of interest A (among the six parts taken, 4 are standard). Four standard parts can be taken from the seven standard parts in C_7^4 = 35 ways; at the same time, the remaining 6 − 4 = 2 parts must be non-standard, and two non-standard parts can be taken from the 10 − 7 = 3 non-standard parts in C_3^2 = 3 ways. Therefore, the number of favorable outcomes is C_7^4 ∙ C_3^2 = 105.

Then the desired probability is equal to P(A) = C_7^4 ∙ C_3^2 / C_10^6 = 105 / 210 = 0.5.
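Example 3.3 can be checked with math.comb; this is a sketch only, assuming Python 3.8+.

```python
import math

favorable = math.comb(7, 4) * math.comb(3, 2)  # 35 * 3 = 105
total = math.comb(10, 6)                       # 210
print(favorable / total)                       # 0.5
```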

The following properties follow from the definition of probability:

1. The probability of a certain event is equal to one.

Indeed, if the event is certain, then every elementary outcome of the test favors the event. In this case m = n, hence P(A) = m/n = n/n = 1.

2. The probability of an impossible event is zero.

Indeed, if the event is impossible, then none of the elementary outcomes of the trial favors the event. In this case m = 0, which means P(A) = m/n = 0/n = 0.

3. The probability of a random event is a positive number between zero and one.

Indeed, a random event is favored by only a part of the total number of elementary outcomes of the test. In this case 0 < m < n, which means 0 < m/n < 1, i.e., 0 < P(A) < 1. Thus, the probability of any event satisfies the double inequality 0 ≤ P(A) ≤ 1.


The construction of a logically complete probability theory is based on the axiomatic definition of a random event and its probability. In the system of axioms proposed by A. N. Kolmogorov, undefined concepts are an elementary event and probability. Here are the axioms that define the probability:

1. Every event A is assigned a non-negative real number P(A). This number is called the probability of the event A.

2. The probability of a certain event is equal to one.

3. The probability of occurrence of at least one of the pairwise incompatible events is equal to the sum of the probabilities of these events.

Based on these axioms, the properties of probabilities and the relationships between them are derived as theorems.

Questions for self-examination

1. What is the name of the numerical characteristic of the possibility of an event?

2. What is called the probability of an event?

3. What is the probability of a certain event?

4. What is the probability of an impossible event?

5. What are the limits of the probability of a random event?

6. What are the limits of the probability of any event?

7. What definition of probability is called classical?

In order to quantitatively compare events with each other according to their degree of possibility, it is obviously necessary to associate a certain number with each event, which is the greater, the more possible the event is. We call this number the probability of the event. Thus, event probability is a numerical measure of the degree of objective possibility of this event.

The classical definition of probability, which arose from the analysis of gambling and was initially applied intuitively, should be considered the first definition of probability.

The classical way of determining probability is based on the concept of equally probable and incompatible events, which are the outcomes of a given experience and form a complete group of incompatible events.

The simplest example of equally possible and incompatible events that form a complete group is the appearance of one or another ball from an urn containing several balls of the same size, weight and other tangible features, differing only in color, thoroughly mixed before being taken out.

Therefore, a test, the outcomes of which form a complete group of incompatible and equally probable events, is said to be reduced to a scheme of urns, or a scheme of cases, or fit into the classical scheme.

Equally possible and incompatible events that make up a complete group will be called simply cases or chances. Moreover, in each experiment, along with cases, more complex events can occur.

Example: When a die is tossed, along with the cases A i (i points falling on the upper face), one can consider events such as B (an even number of points falls) and C (a number of points that is a multiple of three falls).

In relation to each event that can occur in the experiment, the cases are divided into favorable, in which this event occurs, and unfavorable, in which it does not. In the previous example, the event B is favored by the cases A 2, A 4, A 6; the event C by the cases A 3, A 6.

The classical probability of the occurrence of some event is the ratio of the number of cases favoring the appearance of this event to the total number of equally possible, incompatible cases constituting a complete group in a given experiment:

P(A) = m / n,

where P(A) is the probability of occurrence of the event A; m is the number of cases favorable to the event A; n is the total number of cases.

Examples:

1) (see the example above) P(B) = 3/6 = 1/2, P(C) = 2/6 = 1/3.

2) An urn contains 9 red and 6 blue balls. Find the probability that one or two balls drawn at random will be red.

A – one ball drawn at random is red:

m = 9, n = 9 + 6 = 15, P(A) = 9/15 = 3/5.

B – two balls drawn at random are both red: P(B) = C_9^2 / C_15^2 = 36/105 = 12/35.

The following properties follow from the classical definition of probability (verify them yourself):


1) The probability of an impossible event is 0;

2) The probability of a certain event is 1;

3) The probability of any event lies between 0 and 1;

4) The probability of the event opposite to the event A is P(Ā) = 1 − P(A).

The classical definition of probability assumes that the number of outcomes of a trial is finite. In practice, however, trials with an infinite number of possible cases occur very often. In addition, a weakness of the classical definition is that it is very often impossible to represent the result of a test as a set of elementary events. It is even more difficult to indicate grounds for considering the elementary outcomes of the test as equally probable. Usually, the equal likelihood of the elementary outcomes of a test is concluded from considerations of symmetry; however, such problems are very rare in practice. For these reasons, along with the classical definition of probability, other definitions of probability are also used.

The statistical probability of an event A is the relative frequency of occurrence of this event in the trials performed:

P*(A) = W(A) = m / n,

where P*(A) is the statistical probability of occurrence of the event A;

W(A) is the relative frequency of occurrence of the event A;

m is the number of trials in which the event A appeared;

n is the total number of trials.

Unlike the classical probability, the statistical probability is an experimental characteristic.

Example: To control the quality of products, 100 items were randomly selected from a batch, among which 3 turned out to be defective. Determine the probability of a defect. Here the relative frequency gives P*(A) = 3/100 = 0.03.

The statistical method of determining the probability is applicable only to those events that have the following properties:

The events under consideration should be the outcomes of only those trials that can be reproduced an unlimited number of times under the same set of conditions.

Events must have statistical stability (or stability of relative frequencies). This means that in different series of tests, the relative frequency of the event does not change significantly.

The number of trials that result in event A must be large enough.

It is easy to verify that the properties of probability, which follow from the classical definition, are also preserved in the statistical definition of probability.