Bernoulli scheme. Examples of problem solving

The Bernoulli formula is a formula in probability theory that gives the probability of an event $A$ occurring a given number of times in a series of independent trials. When the number of trials is large, it spares one a great many additions and multiplications of probabilities. It is named after the outstanding Swiss mathematician Jacob Bernoulli, who derived it.

Statement

Theorem. If the probability $p$ of event $A$ is constant in each trial, then the probability $P_{k,n}$ that event $A$ occurs exactly $k$ times in $n$ independent trials is $P_{k,n}=C_n^k\cdot p^k\cdot q^{n-k}$, where $q=1-p$.

Proof

Suppose $n$ independent trials are carried out, and it is known that in each trial event $A$ occurs with probability $P(A)=p$ and therefore does not occur with probability $P(\bar{A})=1-p=q$. Suppose also that the probabilities $p$ and $q$ remain unchanged throughout the trials. What is the probability that in these $n$ independent trials event $A$ occurs exactly $k$ times?

It turns out that the number of "successful" combinations of trial outcomes (those in which event $A$ occurs exactly $k$ times in the $n$ independent trials) can be counted exactly: it is the number of combinations of $n$ elements taken $k$ at a time:

$C_n^k=\frac{n!}{k!\,(n-k)!}$.

At the same time, since all trials are independent and in each trial the two outcomes are mutually exclusive (event $A$ either occurs or does not), the probability of obtaining any one "successful" combination is exactly $p^k\cdot q^{n-k}$.

Finally, to find the probability that event $A$ occurs exactly $k$ times in $n$ independent trials, we add up the probabilities of all the "successful" combinations. These probabilities are all the same and equal to $p^k\cdot q^{n-k}$, and the number of "successful" combinations is $C_n^k$, so we finally obtain:

$P_{k,n}=C_n^k\cdot p^k\cdot q^{n-k}=C_n^k\cdot p^k\cdot (1-p)^{n-k}$.

The last expression is precisely the Bernoulli formula. It is also useful to note that, due to the completeness of the group of events, the following identity holds:

$\sum_{k=0}^{n} P_{k,n}=1$.
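As an informal numerical check (a minimal Python sketch, not part of the original derivation; the function name bernoulli_probability is chosen here purely for illustration), the formula and the completeness identity can be verified directly:

```python
from math import comb

def bernoulli_probability(n: int, k: int, p: float) -> float:
    """P_{k,n} = C(n, k) * p^k * q^(n-k), with q = 1 - p."""
    q = 1.0 - p
    return comb(n, k) * p**k * q**(n - k)

# For any n and p the probabilities over k = 0..n form a complete group and sum to 1.
n, p = 10, 0.3
probs = [bernoulli_probability(n, k, p) for k in range(n + 1)]
print(sum(probs))  # 1.0 up to floating-point rounding
```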

In practical applications of probability theory one often encounters problems in which the same experiment, or similar experiments, are repeated many times. In each experiment an event $A$ may or may not occur, and we are interested not in the outcome of each individual experiment but in the total number of occurrences of $A$ over the whole series. For example, if a group of shots is fired at the same target, we are not interested in the result of each shot but in the total number of hits. Such problems are solved quite simply when the experiments are independent.

Definition. Trials are called independent with respect to event A if the probability of event A in each trial does not depend on the outcomes of the other trials.

Example. Several successive draws of a card from a deck are independent experiments, provided the drawn card is returned to the deck and the deck is shuffled before each draw; otherwise they are dependent experiments.

Example. Several shots are independent experiments only if the weapon is aimed anew before each shot; when aiming is carried out once before the whole series, or is adjusted continuously during it (firing in a burst, bombing in a series), the shots are dependent experiments.

Independent trials may be carried out under the same or under different conditions. In the first case the probability of event A is the same in all trials; in the second case it varies from trial to trial. The first case arises in many problems of reliability theory and of the theory of firing, and it leads to the so-called Bernoulli scheme, which is as follows:

1) a sequence of n independent trials is carried out, in each of which event A may or may not occur;

2) the probability that event A occurs in each trial is constant and equal to p, as is the probability q = 1 - p that it does not occur.

Bernoulli's formula gives the probability that event A occurs exactly k times in n independent trials, in each of which A appears with probability p:

$P_n(k)=C_n^k\,p^k\,q^{n-k}$, where $q=1-p$. (1)

Remark 1. As n and k increase, applying the Bernoulli formula directly becomes computationally laborious, so formula (1) is used mainly when k does not exceed 5 and n is not large.

Remark 2. Since the probabilities $P_n(k)$ are the terms of the binomial expansion of $(p+q)^n$, the probability distribution of the form (1) is called the binomial distribution.

Example. The probability of hitting the target with one shot is 0.8. Find the probability of five hits with six shots.


Solution. Here n = 6, k = 5, p = 0.8 and q = 1 - p = 0.2. Using the Bernoulli formula, we get $P_6(5)=C_6^5\cdot 0.8^5\cdot 0.2=6\cdot 0.32768\cdot 0.2\approx 0.393$.

Example. Four independent shots are fired at the same target from different distances; the hit probabilities for these shots are, respectively, $p_1$, $p_2$, $p_3$, $p_4$.

Find the probabilities of none, one, two, three, and four hits:

Solution. We compose the generating function $\varphi(z)=\prod_{i=1}^{4}(q_i+p_i z)$, where $q_i=1-p_i$; the coefficient of $z^k$ in this polynomial is the probability of exactly $k$ hits (see the sketch below).
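Since the numerical hit probabilities are not reproduced in the text above, the following Python sketch uses hypothetical values p = [0.1, 0.2, 0.3, 0.4] purely for illustration; it expands the generating function by multiplying the per-shot factors $(q_i + p_i z)$ and reads off the probabilities of 0, 1, 2, 3 and 4 hits from the coefficients:

```python
# Hypothetical hit probabilities for the four shots (assumed values, for illustration only).
p = [0.1, 0.2, 0.3, 0.4]

# Coefficients of the generating function, starting from the constant term;
# initially the polynomial is 1 (no shots fired, zero hits with probability 1).
coeffs = [1.0]
for pi in p:
    qi = 1.0 - pi
    new_coeffs = [0.0] * (len(coeffs) + 1)
    for k, c in enumerate(coeffs):
        new_coeffs[k] += c * qi      # this shot misses: the number of hits is unchanged
        new_coeffs[k + 1] += c * pi  # this shot hits: one more hit
    coeffs = new_coeffs

for k, prob in enumerate(coeffs):
    print(f"P({k} hits) = {prob:.4f}")
```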

Example. Five independent shots are fired at a target with a hit probability of 0.2. Three hits are enough to destroy the target. Find the probability that the target will be destroyed.

Solution. The target is destroyed by at least three hits, so the probability of its destruction is $P=P_5(3)+P_5(4)+P_5(5)=C_5^3\cdot 0.2^3\cdot 0.8^2+C_5^4\cdot 0.2^4\cdot 0.8+0.2^5\approx 0.0512+0.0064+0.0003\approx 0.058$.

Example. Ten independent shots are fired at the target, the probability of hitting it with one shot is 0.1. One hit is enough to hit the target. Find the probability of hitting the target.

Solution. The probability of at least one hit is $P=1-q^{10}=1-0.9^{10}\approx 0.651$.

3. Local de Moivre–Laplace theorem

In applications one often has to calculate the probabilities of various events related to the number of occurrences of an event in n trials of the Bernoulli scheme for large values of n. In this case calculations by formula (1) become difficult, and the difficulties grow when these probabilities have to be added up. Difficulties in the calculations also arise for small values of p or q.

Laplace obtained an important approximate formula for the probability that event A occurs exactly m times when the number of trials n is sufficiently large, that is, as n grows without bound.

Local de Moivre–Laplace theorem. If the probability p of the occurrence of event A in each trial is constant and distinct from zero and one, and the quantity $x_m=\frac{m-np}{\sqrt{npq}}$ is bounded uniformly in m and n, then the probability that event A occurs exactly m times in n independent trials is approximately equal to

$P_n(m)\approx \frac{1}{\sqrt{npq}}\,\varphi(x_m)$, where $\varphi(x)=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$.
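The quality of this approximation is easy to inspect numerically; below is a minimal Python sketch (not from the original text) that compares the exact Bernoulli probability with the local de Moivre–Laplace estimate for a few values of m:

```python
from math import comb, exp, pi, sqrt

def exact(n: int, m: int, p: float) -> float:
    return comb(n, m) * p**m * (1 - p)**(n - m)

def local_laplace(n: int, m: int, p: float) -> float:
    q = 1 - p
    x = (m - n * p) / sqrt(n * p * q)
    phi = exp(-x * x / 2) / sqrt(2 * pi)   # standard normal density
    return phi / sqrt(n * p * q)

n, p = 100, 0.3
for m in (25, 30, 35):
    print(m, round(exact(n, m, p), 5), round(local_laplace(n, m, p), 5))
```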

Let n trials be carried out with respect to event A. Introduce the following events: $A_k$, meaning that event A occurred in the k-th trial, $k=1,2,\dots,n$. Then $\bar{A}_k$ is the opposite event (event A did not occur in the k-th trial, $k=1,2,\dots,n$).

Trials of the same type and independent trials

Definition

Trials are called of the same type with respect to event A if the probabilities of the events $A_1, A_2, \dots, A_n$ are the same: $P(A_1)=P(A_2)=\dots=P(A_n)$ (i.e., the probability of the occurrence of event A in a single trial is the same in all trials).

Obviously, in this case the probabilities of the opposite events also coincide: $P(\bar{A}_1)=P(\bar{A}_2)=\dots=P(\bar{A}_n)$.

Definition

Trials are called independent with respect to event A if the events $A_1, A_2, \dots, A_n$ are independent.

In this case

$P(A_1 A_2 \dots A_n)=P(A_1)\,P(A_2)\cdots P(A_n).$

This equality is preserved when any of the events $A_k$ is replaced by $\bar{A}_k$.

Let a series of n independent trials of the same type be conducted with respect to event A. Introduce the notation: p is the probability of event A in a single trial; q is the probability of the opposite event. Thus $P(A_k)=p$, $P(\bar{A}_k)=q$ for any k, and p+q=1.

The probability that in a series of n trials event A will occur exactly k times (0 ≤ k ≤ n) is calculated by the formula:

$P_n(k)=C_n^k p^k q^{n-k}$ (1)

Equality (1) is called the Bernoulli formula.

The probability that in a series of n independent trials of the same type event A occurs at least $k_1$ times and at most $k_2$ times is calculated by the formula:

$P_n(k_1 \le k\le k_2)=\sum\limits_{k=k_1}^{k_2} C_n^k p^k q^{n-k}$ (2)
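A minimal Python sketch of formula (2); the helper name prob_between and the numbers in the usage line are chosen here purely for illustration:

```python
from math import comb

def prob_between(n: int, k1: int, k2: int, p: float) -> float:
    """P_n(k1 <= k <= k2) = sum of C(n, k) p^k q^(n-k) over k = k1..k2."""
    q = 1.0 - p
    return sum(comb(n, k) * p**k * q**(n - k) for k in range(k1, k2 + 1))

# For example, between 2 and 4 successes in 10 trials with p = 0.3:
print(round(prob_between(10, 2, 4, 0.3), 4))  # approximately 0.7004
```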

Applying the Bernoulli formula for large values of n leads to cumbersome calculations, so in these cases it is better to use other, asymptotic, formulas.

Generalization of the Bernoulli scheme

Consider a generalization of the Bernoulli scheme. Suppose that in a series of n independent trials each trial has m pairwise incompatible possible outcomes $A_1,\dots,A_m$ with corresponding probabilities $p_k=P(A_k)$, $k=1,\dots,m$. Then the polynomial (multinomial) distribution formula holds:

$P_n(n_1,\dots,n_m)=\frac{n!}{n_1!\,n_2!\cdots n_m!}\,p_1^{n_1}p_2^{n_2}\cdots p_m^{n_m}$, where $n_1+n_2+\dots+n_m=n$.
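A minimal Python sketch of the polynomial (multinomial) formula; the helper name multinomial_probability and the dice example are illustrative, not from the original text:

```python
from math import factorial

def multinomial_probability(counts, probs):
    """n! / (n1! * ... * nm!) * p1^n1 * ... * pm^nm for the generalized Bernoulli scheme."""
    n = sum(counts)
    coeff = factorial(n)
    for c in counts:
        coeff //= factorial(c)   # stays an exact integer at every step
    prob = 1.0
    for c, p in zip(counts, probs):
        prob *= p**c
    return coeff * prob

# Example: a fair die is rolled 6 times; probability that each face appears exactly once.
print(round(multinomial_probability([1] * 6, [1 / 6] * 6), 4))  # approximately 0.0154
```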

Example 1

The probability of catching the flu during an epidemic is 0.4. Find the probability that, out of 6 employees of the company, the following fall ill:

  1. exactly 4 employees;
  2. no more than 4 employees.

Solution. 1) The Bernoulli formula clearly applies here with n=6, k=4, p=0.4, q=1-p=0.6. Applying formula (1), we get $P_6(4)=C_6^4 \cdot 0.4^4 \cdot 0.6^2 \approx 0.138$.

2) To solve the second part, formula (2) applies with $k_1=0$ and $k_2=4$. We have:

\[P_6(0\le k\le 4)=\sum \limits _{k=0}^{4}C_6^k p^k q^{6-k}=C_6^0\cdot 0.4^0\cdot 0.6^6+C_6^1\cdot 0.4^1\cdot 0.6^5+C_6^2\cdot 0.4^2\cdot 0.6^4+C_6^3\cdot 0.4^3\cdot 0.6^3+C_6^4\cdot 0.4^4\cdot 0.6^2\approx 0.959.\]

Note that this part is easier to solve using the opposite event (more than 4 employees fall ill). Then, taking into account formula (7) on the probabilities of opposite events, we obtain $P_6(0\le k\le 4)=1-P_6(5)-P_6(6)=1-6\cdot 0.4^5\cdot 0.6-0.4^6\approx 0.959$ (see the sketch below).

Answer: 0.959.
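A quick Python check of both computations in this example (direct summation and the opposite-event shortcut mentioned above):

```python
from math import comb

def bernoulli(n, k, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 6, 0.4
direct = sum(bernoulli(n, k, p) for k in range(0, 5))          # P(0 <= k <= 4)
via_complement = 1 - bernoulli(n, 5, p) - bernoulli(n, 6, p)   # 1 - P(5) - P(6)
print(round(direct, 3), round(via_complement, 3))              # 0.959 0.959
```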

Example 2

An urn contains 20 white and 10 black balls. Four balls are drawn; each drawn ball is returned to the urn before the next one is drawn, and the balls in the urn are mixed. Find the probability that exactly 2 of the four drawn balls are white.

Solution. Let event A be that a white ball is drawn. Then the probabilities are $P(A)=\frac{2}{3}$, $P(\bar{A})=1-\frac{2}{3}=\frac{1}{3}$.

According to the Bernoulli formula, the required probability is $P_4(2)=C_4^2 \left(\frac{2}{3}\right)^2 \left(\frac{1}{3}\right)^2 =\frac{8}{27}$.

Answer: $\frac{8}{27}$.

Example 3

Determine the probability that a family with 5 children will have no more than 3 girls. The probabilities of having a boy and a girl are assumed to be the same.

Solution. Let $p=\frac{1}{2}$ be the probability of having a girl and $q=\frac{1}{2}$ the probability of having a boy. "No more than three girls" means that either no girls, or one, two, or three girls were born.

Find the probabilities that there are no girls in the family, or that one, two or three girls were born: $P_5(0)=q^5=\frac{1}{32}$,

$P_5(1)=C_5^1 p q^4=\frac{5}{32}$, $P_5(2)=C_5^2 p^2 q^3=\frac{10}{32}$, $P_5(3)=C_5^3 p^3 q^2=\frac{10}{32}$.

Therefore, the required probability is $P=P_5(0)+P_5(1)+P_5(2)+P_5(3)=\frac{1}{32}+\frac{5}{32}+\frac{10}{32}+\frac{10}{32}=\frac{13}{16}$.

Answer: $\frac{13}{16}$.

Example 4

With one shot the shooter hits the ten with probability 0.6, the nine with probability 0.3, and the eight with probability 0.1. What is the probability that, in 10 shots, he hits the ten six times, the nine three times, and the eight once?
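A worked solution is not given in the text; under the reading that the shooter hits the eight once (so that the counts 6 + 3 + 1 add up to 10 shots), the polynomial distribution formula from the previous section gives the following minimal Python sketch:

```python
from math import factorial

counts = [6, 3, 1]        # six tens, three nines, one eight
probs = [0.6, 0.3, 0.1]   # per-shot probabilities of a ten, a nine, an eight

n = sum(counts)
coeff = factorial(n)
for c in counts:
    coeff //= factorial(c)   # multinomial coefficient 10! / (6! * 3! * 1!) = 840

prob = float(coeff)
for c, p in zip(counts, probs):
    prob *= p**c

print(round(prob, 4))  # approximately 0.1058
```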

Suppose n experiments are performed according to the Bernoulli scheme with success probability p. Let X be the number of successes. The random variable X takes the values 0, 1, 2, ..., n, and the probabilities of these values are found by the formula $P_n(m)=C_n^m p^m (1-p)^{n-m}$, where $C_n^m$ is the number of combinations of n elements taken m at a time.
The distribution series has the form:

x:   0          1                ...   m                         ...   n
p:   $(1-p)^n$  $np(1-p)^{n-1}$  ...   $C_n^m p^m (1-p)^{n-m}$   ...   $p^n$

This distribution law is called binomial.

When the probability p is small and the number of trials n is large (with the product np remaining moderate), the Poisson formula is used instead of formula (1).

Numerical characteristics of a random variable distributed according to the binomial law

The mathematical expectation of a random variable X, distributed according to the binomial law.
M[X]=np

Dispersion of a random variable X, distributed according to the binomial law.
D[X]=npq

Example #1. Each product may be defective with probability p = 0.3. Three items are selected from a batch; X is the number of defective items among those selected. Find (give all answers as decimal fractions): a) the distribution series of X; b) the distribution function F(x).
Solution. The random variable X takes the values 0, 1, 2, 3.
Let us find the distribution series of X.
P_3(0) = (1-p)^3 = 0.7^3 = 0.343
P_3(1) = C_3^1 p(1-p)^2 = 3·0.3·0.7^2 = 0.441
P_3(2) = C_3^2 p^2(1-p) = 3·0.3^2·0.7 = 0.189
P_3(3) = p^3 = 0.3^3 = 0.027

x_i:  0      1      2      3
p_i:  0.343  0.441  0.189  0.027

The mathematical expectation is found by the formula M[X] = np = 3·0.3 = 0.9.
Check: m = ∑ x_i p_i.
Mathematical expectation M[X]:
M[X] = 0·0.343 + 1·0.441 + 2·0.189 + 3·0.027 = 0.9
The variance is found by the formula D[X] = npq = 3·0.3·(1-0.3) = 0.63.
Check: d = ∑ x_i² p_i − M[X]².
Variance D[X]:
D[X] = 0²·0.343 + 1²·0.441 + 2²·0.189 + 3²·0.027 − 0.9² = 0.63
Standard deviation σ(X) = √D[X] = √0.63 ≈ 0.794.

Distribution function F(x):
F(x) = 0 for x ≤ 0;  F(x) = 0.343 for 0 < x ≤ 1;  F(x) = 0.784 for 1 < x ≤ 2;  F(x) = 0.973 for 2 < x ≤ 3;  F(x) = 1 for x > 3.
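A short Python sketch that reproduces the distribution series and the numerical characteristics from Example #1 (values n = 3, p = 0.3 as in the example):

```python
from math import comb, sqrt

n, p = 3, 0.3
series = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}
print(series)  # approximately {0: 0.343, 1: 0.441, 2: 0.189, 3: 0.027}

m = sum(k * pk for k, pk in series.items())             # expectation, equals n*p
d = sum(k**2 * pk for k, pk in series.items()) - m**2   # variance, equals n*p*q
print(round(m, 2), round(d, 2), round(sqrt(d), 3))      # 0.9 0.63 0.794
```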
  1. The probability of an event occurring in one trial is 0.6 . 5 tests are made. Compose the law of distribution of a random variable X - the number of occurrences of an event.
  2. Compose the law of distribution of the random variable X of the number of hits with four shots, if the probability of hitting the target with one shot is 0.8.
  3. A coin is tossed 7 times. Find the mathematical expectation and variance of the number of appearances of the coat of arms. Note: here the probability of the appearance of the coat of arms is p = 1/2 (because the coin has two sides).

Example #2. The probability of an event occurring in a single trial is 0.6. Applying Bernoulli's theorem, determine the number of independent trials starting from which the probability that the relative frequency of the event deviates from its probability by less than 0.1 in absolute value exceeds 0.97. (Answer: 801)
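One way to arrive at this answer is sketched below, under the assumption that the intended tool is the Chebyshev-type estimate P(|m/n − p| < ε) ≥ 1 − pq/(nε²) that accompanies Bernoulli's theorem; exact fractions are used to avoid a rounding error at the boundary n = 800:

```python
from fractions import Fraction

p = Fraction(6, 10)
q = 1 - p
eps = Fraction(1, 10)
alpha = Fraction(97, 100)

# Find the smallest n with 1 - p*q/(n*eps^2) > alpha.
n = 1
while 1 - p * q / (n * eps**2) <= alpha:
    n += 1
print(n)  # 801
```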

Example #3. Students perform tests in the computer science class. The work consists of three tasks. To get a good grade, you need to find the correct answers to at least two problems. Each problem has 5 answers, of which only one is correct. The student chooses an answer at random. What is the probability that he will get a good grade?
Solution. The probability of answering a question correctly is p = 1/5 = 0.2; n = 3.
The required probability is P(2) + P(3) = C_3^2 · 0.2^2 · 0.8 + 0.2^3 = 0.096 + 0.008 = 0.104.

Example #4. The probability of the shooter hitting the target with one shot is (m+n)/(m+n+2) . n + 4 shots are fired. Find the probability that he misses no more than two times.

Note. The probability that he will miss no more than two times includes the following events: never misses P(4), misses once P(3), misses twice P(2).

Example #5. Determine the probability distribution of the number of failed aircraft if 4 aircraft fly. The probability of failure-free operation of an aircraft is P = 0.99. The number of aircraft that fail in each sortie is distributed according to the binomial law.

If several trials are performed and the probability of event A in each trial does not depend on the outcomes of the other trials, then such trials are called independent with respect to event A.

In different independent trials, event A may have either different probabilities or the same probability. We will further consider only such independent trials in which the event A has the same probability.

Below we use the notion of a compound event, meaning by it a combination of several separate events, which are called simple.

Suppose n independent trials are performed, in each of which event A may or may not occur. Let us agree to assume that the probability of event A is the same in every trial, namely equal to p. Consequently, the probability that event A does not occur in each trial is also constant and equal to q = 1 - p.

Let us set ourselves the task of computing the probability that in these n trials event A occurs exactly k times and, consequently, does not occur n - k times. It is important to emphasize that event A is not required to occur exactly k times in any particular order.

For example, if we speak of event A occurring three times in four trials, the following compound events are possible: $AAA\bar{A}$, $AA\bar{A}A$, $A\bar{A}AA$, $\bar{A}AAA$. The notation $AAA\bar{A}$ means that in the first, second and third trials event A occurred, while in the fourth trial it did not occur, i.e. the opposite event $\bar{A}$ happened; the other notations have the corresponding meaning.

Denote the desired probability by $P_n(k)$. For example, the symbol $P_5(3)$ denotes the probability that in five trials the event occurs exactly 3 times and, consequently, does not occur 2 times.

The problem can be solved using the so-called Bernoulli formula.

Derivation of the Bernoulli formula. The probability of one compound event, namely that in n trials event A occurs k times and does not occur n - k times, is, by the multiplication theorem for probabilities of independent events, equal to $p^k q^{n-k}$. The number of such compound events equals the number of combinations of n elements taken k at a time, i.e. $C_n^k$.

Since these compound events are incompatible, by the addition theorem for probabilities of incompatible events the desired probability equals the sum of the probabilities of all possible compound events. As the probabilities of all these compound events are the same, the desired probability (that event A occurs k times in n trials) equals the probability of one compound event multiplied by their number:

$P_n(k)=C_n^k p^k q^{n-k}$.

The resulting formula is called the Bernoulli formula.

Example 1. The probability that the electricity consumption during one day will not exceed the established norm is p = 0.75. Find the probability that in the next 6 days the electricity consumption will not exceed the norm on exactly 4 of the days.


Solution. The probability of normal electricity consumption on each of the 6 days is constant and equal to p = 0.75. Hence the probability of overconsumption on any given day is also constant and equal to q = 1 - p = 1 - 0.75 = 0.25.

The desired probability according to the Bernoulli formula is P_6(4) = C_6^4 · 0.75^4 · 0.25^2 = 15 · 0.31640625 · 0.0625 ≈ 0.297.
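A one-line numerical check of this last example (n = 6, k = 4, p = 0.75):

```python
from math import comb

n, k, p = 6, 4, 0.75
print(round(comb(n, k) * p**k * (1 - p)**(n - k), 3))  # 0.297
```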