The law of accumulation of errors. Accumulation of error

INTRODUCTION

Any measurements, no matter how carefully they are performed, are accompanied by errors, i.e., deviations of the measured values from their true values. This is explained by the fact that during measurement the conditions constantly change: the state of the environment, of the measuring instrument and of the measured object, as well as the attention of the observer. Therefore, when a quantity is measured, only its approximate value is obtained, and the accuracy of that value must be estimated. Another problem also arises: choosing an instrument, conditions and technique so that the measurements achieve a given accuracy. These problems are addressed by the theory of errors, which studies the laws of error distribution, establishes criteria and tolerances for measurement accuracy, methods for determining the most probable value of the quantity being determined, and rules for predicting the expected accuracy.

12.1. MEASUREMENTS AND THEIR CLASSIFICATION

Measurement is the process of comparing a measured quantity with another known quantity taken as the unit of measurement.
All quantities we deal with are divided into measured and computed. The measured value of a quantity is its approximate value found by comparison with a homogeneous unit of measure. Thus, by successively laying a survey tape along a given direction and counting the number of layings, one finds the approximate value of the length of a section.
The computed value of a quantity is its value determined from other, measured quantities that are functionally related to it. For example, the area of a rectangular plot is the product of its measured length and width.
To detect blunders (gross errors) and to improve the accuracy of the results, the same quantity is measured several times. By accuracy, such measurements are divided into equal-accuracy and unequal-accuracy measurements. Equal-accuracy measurements are homogeneous repeated measurements of the same quantity performed with the same instrument (or with different instruments of the same accuracy class), by the same method, in the same number of steps and under identical conditions. Unequal-accuracy measurements are those made when the conditions of equal accuracy are not met.
In the mathematical processing of measurement results, the number of measured values is of great importance. For example, to obtain the value of every angle of a triangle it is sufficient to measure only two of them; this is the necessary number of values. In general, to solve any topographic-geodetic problem it is necessary to measure a certain minimum number of quantities that ensures the solution of the problem. These are called the necessary quantities or measurements. But in order to judge the quality of the measurements, to check their correctness and to improve the accuracy of the result, the third angle of the triangle is also measured; it is redundant. The number of redundant values (k) is the difference between the number of all measured quantities (n) and the number of necessary quantities (t):

k = n - t

In topographic and geodetic practice, redundant measured values are indispensable. They make it possible to detect mistakes (blunders) in measurements and calculations and to increase the accuracy of the determined values.

By the method of physical execution, measurements can be direct, indirect and remote.
Direct measurements are the simplest and historically the earliest type of measurement, for example, measuring line lengths with a survey tape or a tape measure.
Indirect measurements are based on certain mathematical relationships between the sought quantities and the directly measured ones. For example, the area of a rectangle on the ground is determined by measuring the lengths of its sides.
Remote measurements are based on a number of physical processes and phenomena and, as a rule, involve modern technical means: light range finders, electronic total stations, phototheodolites, etc.

Measuring instruments used in topographic and geodetic production can be divided into three main classes:

  • high-precision (precision);
  • accurate;
  • technical.

12.2. MEASUREMENT ERRORS

With repeated measurements of the same quantity, slightly different results are obtained each time, both in absolute value and in sign, no matter how experienced the observer is and no matter how precise the instruments used.
Errors are distinguished: gross, systematic and random.
The appearance of gross errors (blunders) is associated with serious mistakes in carrying out the measurement work. Such errors are easily identified and eliminated by measurement control.
Systematic errors enter each measurement result according to a strictly defined law. They are caused by the design of the measuring instruments, errors in the calibration of their scales, wear, etc. (instrumental errors), or arise from underestimating the measurement conditions and the patterns of their change, the approximate nature of some formulas, etc. (methodological errors). Systematic errors are divided into constant (unchanging in sign and magnitude) and variable (changing their value from one measurement to another according to a definite law).
Such errors can be determined in advance and reduced to the required minimum by introducing appropriate corrections.
For example, the influence of the Earth's curvature on the accuracy of determining vertical distances, the influence of air temperature and atmospheric pressure when determining line lengths with light range finders or electronic total stations, and the influence of atmospheric refraction can all be taken into account in advance.
If gross errors are avoided and systematic errors are eliminated, then the quality of the measurements is determined only by random errors. These errors are unavoidable, but their behavior obeys the law of large numbers. They can be analyzed, controlled and reduced to the necessary minimum.
To reduce the influence of random errors on the measurement results, measurements are repeated, working conditions are improved, more advanced instruments and measurement methods are chosen, and the measurements themselves are carried out carefully.
Comparing series of random errors of equal-accuracy measurements, one finds that they have the following properties:
a) for a given type and measurement conditions, random errors cannot exceed a certain limit in absolute value;
b) errors that are small in absolute value appear more often than large ones;
c) positive errors appear as often as negative ones equal in absolute value;
d) the arithmetic mean of random errors of the same value tends to zero with an unlimited increase in the number of measurements.
The distribution of errors corresponding to the specified properties is called normal (Fig. 12.1).

Fig. 12.1. Curve of the normal (Gaussian) distribution of random errors

The difference between the result of measuring some quantity (l) and its true value (X) is called the absolute (true) error:

Δ = l - X

The true (absolutely exact) value of a measured quantity cannot be obtained, even with instruments of the highest accuracy and the most advanced measurement technique. Only in some cases is the theoretical value of a quantity known. The accumulation of errors leads to discrepancies between the measurement results and the actual values.
The difference between the sum of practically measured (or computed) values and its theoretical value is called the misclosure (residual). For example, the theoretical sum of the angles of a plane triangle is 180°, while the sum of the measured angles turned out to be 180°02′; the error of the sum of the measured angles is then +0°02′. This error is the angular misclosure of the triangle.
The absolute error alone is not a complete indicator of the accuracy of the work performed. For example, suppose a line whose actual length is 1000 m is measured with a survey tape with an error of 0.5 m, and a segment 200 m long is measured with an error of 0.2 m. Although the absolute error of the first measurement is larger than that of the second, the first measurement was nevertheless performed with twice the accuracy. For this reason the concept of relative error is introduced:

The ratio of the absolute error Δ of a measured quantity to the measured value l is called the relative error.

Relative errors are always expressed as a fraction with a numerator equal to one (an aliquot fraction). Thus, in the example above, the relative error of the first measurement is

0.5 m / 1000 m = 1/2000,

and that of the second is

0.2 m / 200 m = 1/1000.

12.3. MATHEMATICAL PROCESSING OF THE RESULTS OF EQUAL-ACCURACY MEASUREMENTS OF A SINGLE QUANTITY

Let some quantity with true value X be measured with equal accuracy n times, with results l1, l2, l3, …, li (i = 1, 2, 3, …, n); such a set is often called a series of measurements. It is required to find the most reliable value of the measured quantity, called the most probable value, and to evaluate the accuracy of the result.
In the theory of errors, the most probable value for a series of equal-accuracy measurement results is the arithmetic mean, i.e.

µ = (l1 + l2 + … + ln)/n = [l]/n   (12.1)

In the absence of systematic errors, the arithmetic mean with an unlimited increase in the number of measurements tends to the true value of the measured quantity.
To enhance the influence of larger errors on the accuracy estimate of a series of measurements, the root mean square (RMS) error is used. If the true value of the measured quantity is known, and the systematic error is negligible, then the root mean square error (m) of a single result of equal-accuracy measurements is determined by the Gauss formula:

m = √([Δ²]/n) = √((Δ1² + Δ2² + … + Δn²)/n),   (12.2)

where Δi are the true errors.

In geodetic practice, the true value of the measured quantity is in most cases not known in advance. The root mean square error of a single measurement result is then calculated from the most probable errors (δi) of the individual measurement results (li) using the Bessel formula:

m = √([δ²]/(n − 1)),   (12.3)

where the most probable errors (δi) are defined as the deviations of the measurement results from the arithmetic mean:

δi = li − µ

Often the root mean square error (m) is written next to the most probable value of a quantity, e.g. 70°05′ ± 1′. This means that the exact value of the angle may be larger or smaller than the stated value by 1′. However, that minute can be neither added to the angle nor subtracted from it; it only characterizes the accuracy with which the result was obtained under the given measurement conditions.
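
As an illustration of formulas (12.1)-(12.3), the short sketch below processes a series of equal-accuracy readings; the readings themselves are hypothetical values invented for this example and are not taken from the text.

```python
import math

# Hypothetical equal-accuracy measurements of one line, metres
readings = [150.11, 150.18, 150.09, 150.15, 150.12]

n = len(readings)
mu = sum(readings) / n                               # arithmetic mean, formula (12.1)

deltas = [l - mu for l in readings]                  # most probable errors, delta_i = l_i - mu
assert abs(sum(deltas)) < 1e-6                       # control: [delta] must be close to zero

# Bessel formula (12.3): RMS error of a single measurement
m = math.sqrt(sum(d * d for d in deltas) / (n - 1))

print(f"most probable value = {mu:.3f} m, m = +/-{m:.3f} m")
```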

An analysis of the Gaussian normal distribution curve shows that with a sufficiently large number of measurements of the same value, the random measurement error can be:

  • greater than rms m in 32 cases out of 100;
  • greater than twice the root mean square 2m in 5 cases out of 100;
  • more than three times the root mean square 3m in 3 cases out of 1000.

A random measurement error greater than three times the root mean square error is unlikely, so the tripled root mean square error is taken as the limiting error:

Δlim = 3m

The limiting error is a value of the random error whose occurrence under the given measurement conditions is improbable.

A limiting error equal to

Δlim = 2.5m,

corresponding to an error probability of about 1%, is also used.

RMS error of the sum of the measured values

The square of the root mean square error of an algebraic sum of measured quantities is equal to the sum of the squares of the root mean square errors of its terms:

m_S² = m1² + m2² + m3² + … + mn²

In the particular case when m1 = m2 = m3 = … = mn = m, the root mean square error of the algebraic sum is found from the formula

m_S = m·√n

Thus the root mean square error of the algebraic sum of equal-accuracy measurements is √n times greater than the root mean square error of one term.

Example.
If 9 angles are measured with a 30-second theodolite, the root mean square error of the sum of the measured angles will be

m_Σ = 30″·√9 = 90″ = ±1.5′
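
A minimal sketch of the same calculation, assuming the 30-second value is the RMS error of a single angle:

```python
import math

m_single = 30.0                          # RMS error of one angle, seconds of arc
n_angles = 9

m_sum = m_single * math.sqrt(n_angles)   # m_S = m * sqrt(n) = 90" = 1.5'
print(f"RMS error of the sum: {m_sum:.0f}\" = {m_sum / 60:.1f}'")
```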

RMS error of the arithmetic mean
(accuracy of determining the arithmetic mean)

The root mean square error of the arithmetic mean (m_µ) is √n times smaller than the root mean square error of a single measurement:

m_µ = m/√n

This property of the root mean square error of the arithmetic mean makes it possible to improve the accuracy of measurements by increasing the number of measurements.

For example, suppose an angle must be determined with an accuracy of ±15″ using a 30-second theodolite.

If the angle is measured n = 4 times and the arithmetic mean is taken, the root mean square error of the arithmetic mean (m_µ) will be 30″/√4 = ±15″.
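
The sketch below repeats this reasoning and also solves the inverse problem: how many repetitions a given instrument needs in order to reach a required accuracy of the mean, n = (m/m_µ)². The numbers are those of the worked example.

```python
import math

m_single = 30.0        # RMS error of one measurement, seconds
target = 15.0          # required RMS error of the arithmetic mean, seconds

n_required = math.ceil((m_single / target) ** 2)     # n = (m / m_mu)^2 = 4
m_mean = m_single / math.sqrt(n_required)            # m_mu = m / sqrt(n)

print(f"n = {n_required}, m_mu = +/-{m_mean:.1f}\"")
```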

The root mean square error of the arithmetic mean ( m µ ) shows to what extent the influence of random errors is reduced during repeated measurements.

Example
The length of one line was measured 5 times.
From the measurement results, calculate: the most probable value of its length L (the arithmetic mean); the most probable errors (deviations from the arithmetic mean); the root mean square error of a single measurement m; the accuracy of the arithmetic mean m_µ; and the most probable value of the line length together with the root mean square error of the arithmetic mean (L ± m_µ).

Processing distance measurements (example)

Table 12.1. Processing of distance measurements

Columns: measurement number; measurement result, m; most probable error di, cm; square of the most probable error di², cm².

Sums over the five measurements: [d] = Σdi = 0; [d²] = Σdi² = 1446 cm².

Accuracy characteristics:
m = ±√(1446/(5 − 1)) ≈ ±19 cm
m_µ = 19 cm/√5 ≈ ±8 cm

L = (980.65 ± 0.08) m

12.4. WEIGHTS OF THE RESULTS OF UNEQUAL MEASUREMENTS

With unequal-accuracy measurements, when the results of individual measurements cannot be considered equally reliable, a simple arithmetic mean is no longer sufficient. In such cases, the merit (or reliability) of each measurement result is taken into account.
The merit of a measurement result is expressed by a number called the weight of that measurement. Obviously, the arithmetic mean carries more weight than a single measurement, and measurements made with a more advanced and accurate instrument deserve more confidence than the same measurements made with a less accurate one.
Since the measurement conditions determine different values of the root mean square error, it is customary to take the latter as the basis for estimating weights. The weights of measurement results are taken inversely proportional to the squares of their corresponding root mean square errors.
Thus, if p and P denote the weights of measurements having root mean square errors m and µ respectively, we can write the proportion

p : P = 1/m² : 1/µ².

For example, if µ is the root mean square error of the arithmetic mean and m that of a single measurement, then, since µ = m/√n, it follows that

P = n·p,

i.e. the weight of the arithmetic mean is n times the weight of a single measurement.

Similarly, it can be found that the weight of an angle measurement made with a 15-second theodolite is four times the weight of an angle measurement made with a 30-second instrument.

In practical calculations, the weight of some one quantity is usually taken as unity, and the weights of the remaining measurements are computed relative to it. Thus, in the last example, if the weight of the result of an angle measurement with the 30-second theodolite is taken as p = 1, then the weight of the measurement made with the 15-second theodolite will be p = 4.
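
A small sketch of this convention: weights are taken inversely proportional to the squares of the RMS errors, and the weight of one of the results is set to unity. The instrument names below are only labels for this example.

```python
# RMS errors of a single angle measurement, seconds of arc
rms_errors = {"30-second theodolite": 30.0, "15-second theodolite": 15.0}

reference = rms_errors["30-second theodolite"]               # its weight is taken as p = 1
weights = {name: (reference / m) ** 2 for name, m in rms_errors.items()}

print(weights)   # {'30-second theodolite': 1.0, '15-second theodolite': 4.0}
```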

12.5. REQUIREMENTS FOR FORMATTING THE RESULTS OF FIELD MEASUREMENTS AND THEIR PROCESSING

All materials of geodetic measurements consist of field documentation, as well as documentation of computational and graphic works. Many years of experience in the production of geodetic measurements and their processing allowed us to develop the rules for maintaining this documentation.

Registration of field documents

Field documents include materials for checking geodetic instruments, measurement logs and special forms, outlines and picket logs. All field documentation is considered valid only in the original. It is compiled in a single copy and, if lost, can be restored only by repeating the measurements, which is not always possible in practice.

The rules for keeping field logs are as follows.

1. Field journals should be filled out carefully, all numbers and letters should be written clearly and legibly.
2. Corrections of digits and erasures, as well as writing one digit over another, are not allowed.
3. Erroneous readings are crossed out with a single line, "erroneous" or "misprint" is noted to the right, and the correct results are written above.
4. All entries in the journals are made with a medium-hard graphite pencil, in ink, or with a ballpoint pen; chemical or colored pencils are not recommended for this purpose.
5. When each type of geodetic survey is performed, the measurement results are recorded in journals of the established form. Before work begins, the pages of the journals are numbered and their number is certified by the head of the work.
6. During field work, pages with rejected measurement results are crossed out diagonally with a single line, and the reason for rejection and the number of the page containing the repeated measurements are indicated.
7. In each journal, on the title page, fill in information about the geodetic instrument (brand, number, standard error of measurement), record the date and time of observations, weather conditions (weather, visibility, etc.), names of performers, provide the necessary diagrams, formulas and notes.
8. The journal must be filled in in such a way that another performer who is not involved in field work can accurately perform the subsequent processing of the measurement results. When filling out field journals, the following entry forms should be followed:
a) the numbers in the columns are written so that the digits of the same place value are located one below the other, without offset;
b) all results of measurements performed with the same accuracy are recorded with the same number of decimal places.

Example
356.24 and 205.60 m - correct,
356.24 and 205.6 m - wrong;
c) the values of minutes and seconds in angular measurements and calculations are always written as two-digit numbers.

Example
127°07′05″, not 127°7′5″;

d) in the numerical values of measurement results, record as many digits as the reading device of the corresponding measuring instrument provides. For example, if the length of a line is measured with a tape graduated in millimetres and the reading is taken to 1 mm, the result should be recorded as 27.400 m, not 27.4 m. If the goniometer allows only whole minutes to be read, the reading is written as 47°00′, not 47° and not 47°00′00″.

12.5.1. The concept of the rules of geodetic calculations

The processing of the measurement results is started after checking all field materials. At the same time, one should adhere to the rules and techniques developed by practice, the observance of which facilitates the work of the calculator and allows him to rationally use computer technology and auxiliary means.
1. Before processing the results of geodetic measurements, a detailed computational scheme should be developed, which indicates the sequence of actions that allows obtaining the desired result in the simplest and fastest way.
2. Taking into account the amount of computational work, choose the most optimal means and methods of calculations that require the least cost while ensuring the required accuracy.
3. The accuracy of the calculation results cannot be higher than the measurement accuracy. Therefore, sufficient, but not excessive, accuracy of computational operations should be specified in advance.
4. When calculating, one should not use drafts, since rewriting digital material takes a lot of time and is often accompanied by errors.
5. To record the results of calculations, it is recommended to use special schemes, forms and statements that determine the procedure for calculations and provide intermediate and general control.
6. Without control, the calculation cannot be considered complete. Control can be performed using a different move (method) for solving the problem or by performing repeated calculations by another performer (in "two hands").
7. Calculations always end with the determination of errors and their mandatory comparison with the tolerances provided for by the relevant instructions.
8. Special requirements for computational work are imposed on the accuracy and clarity of recording numbers in computational forms, since carelessness in entries leads to errors.
As in field journals, when columns of numbers are written in computational schemes, digits of the same place value should be placed one under the other. The fractional part of a number is separated by the decimal mark; it is desirable to write multi-digit numbers with spaces between groups of digits, for example: 2 560 129.13. Calculation records should be kept only in ink, in upright figures; erroneous results are carefully crossed out and the corrected values written above.
When processing measurement materials, one should know in advance the accuracy with which the results of the calculations are to be obtained, so as not to operate with an excessive number of digits; if the final result of a calculation contains more digits than necessary, the number is rounded.

12.5.2. Rounding numbers

To round a number to n digits means to keep its first n significant digits.
The significant digits of a number are all of its digits from the first non-zero digit on the left to the last recorded digit on the right. Zeros on the right are not considered significant if they merely replace unknown digits or stand in place of digits dropped in rounding.
For example, the number 0.027 has two significant digits, and the number 139.030 has six significant digits.

When rounding numbers, the following rules should be followed.
1. If the first of the discarded digits (counting from left to right) is less than 5, then the last remaining digit is retained unchanged.
For example, the number 145.873, after rounding to five significant digits, would be 145.87.
2. If the first of the discarded digits is greater than 5, then the last remaining digit is increased by one.
For example, the number 73.5672, after rounding it to four significant digits, will be 73.57.
3. If the last digit of the rounded number is the number 5 and it must be discarded, then the preceding digit in the number is increased by one only if it is odd (even number rule).
For example, the numbers 45.175 and 81.325, after rounding to 0.01, will be 45.18 and 81.32, respectively.
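
A sketch of the even-number rule using Python's decimal module (binary floating point cannot represent 45.175 or 81.325 exactly, so exact decimal arithmetic is used here); the same rounding mode also covers rules 1 and 2.

```python
from decimal import Decimal, ROUND_HALF_EVEN

# "Round half to even" reproduces rules 1-3 of the text
for value in ("145.873", "73.5672", "45.175", "81.325"):
    rounded = Decimal(value).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
    print(value, "->", rounded)
# 145.873 -> 145.87, 73.5672 -> 73.57, 45.175 -> 45.18, 81.325 -> 81.32
```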

12.5.3. Graphic works

The value of graphic materials (plans, maps and profiles), which are the final result of geodetic surveys, is largely determined not only by the accuracy of field measurements and the correctness of their computational processing, but also by the quality of graphic execution. Graphic work should be carried out using carefully checked drawing tools: rulers, triangles, geodetic protractors, measuring compasses, sharpened pencils (T and TM), etc. The organization of the workplace has a great influence on the quality and productivity of drawing work. Drawing work should be carried out on sheets of high-quality drawing paper, fixed on a flat table or on a special drawing board. The drawn pencil original of the graphic document, after careful checking and correction, is drawn up in ink in accordance with the established conventional signs.

Questions and tasks for self-control

  1. What does the expression "measure something" mean?
  2. How are measurements classified?
  3. How are measuring devices classified?
  4. How are measurement results classified by accuracy?
  5. What measurements are called equal?
6. What do the concepts of the necessary and the redundant number of measurements mean?
  7. How are measurement errors classified?
  8. What causes systematic errors?
  9. What are the properties of random errors?
  10. What is called absolute (true) error?
  11. What is referred to as relative error?
  12. What is called the arithmetic mean in the theory of errors?
  13. What is called the mean square error in the theory of errors?
14. What is the limiting (maximum) error?
  15. How is the root mean square error of the algebraic sum of equally accurate measurements and the root mean square error of one term related?
  16. What is the relationship between the root mean square error of the arithmetic mean and the root mean square error of one measurement?
  17. What does the root mean square error of the arithmetic mean show?
  18. What parameter is taken as the basis for estimating the weight values?
  19. What is the relationship between the weight of the arithmetic mean and the weight of a single measurement?
  20. What are the rules adopted in geodesy for keeping field logs?
  21. List the basic rules of geodetic calculations.
  22. Round to 0.01 the numbers 31.185 and 46.575.
  23. List the basic rules for performing graphic work.

ACCUMULATION OF ERROR


Accumulation of error in the numerical solution of algebraic equations is the total effect of the roundings made at individual steps of a computational process on the accuracy of the resulting solution of a system of linear algebraic equations. The most common technique for an a priori estimate of the total influence of rounding errors in numerical methods of linear algebra is the so-called backward analysis scheme. As applied to the solution of a system of linear algebraic equations

A x = b,   (1)

the backward analysis scheme is as follows. The solution x_M computed by a direct method M does not satisfy (1), but can be represented as the exact solution of a perturbed system

(A + F_M) x_M = b + k_M.   (2)

The quality of the direct method is judged by the best a priori estimates that can be given for the norms of the matrix F_M and the vector k_M. Such "best" F_M and k_M are called, respectively, the matrix and the vector of the equivalent perturbation for the method M.

If estimates for ‖F_M‖ and ‖k_M‖ are available, the error of the approximate solution can theoretically be estimated by an inequality of the form

‖x_M − x‖/‖x‖ ≤ cond(A)·(‖F_M‖/‖A‖ + ‖k_M‖/‖b‖).   (3)

Here cond(A) = ‖A‖·‖A⁻¹‖ is the condition number of the matrix A, and the matrix norm in (3) is assumed to be subordinate to the vector norm.

In reality, an estimate for cond(A) is rarely known, and the main significance of (2) is the possibility of comparing the quality of different methods. Typical estimates for the equivalent-perturbation matrix take the following form. For methods using orthogonal transformations and floating-point arithmetic (in system (1), A and b are taken to be real):

‖F_M‖_E ≤ ε·f(n)·‖A‖_E.   (4)

In this estimate, ε is the relative accuracy of arithmetic operations in the computer, ‖·‖_E is the Euclidean matrix norm, and f(n) is a function of the form C·n^k, where n is the order of the system. The exact values of the constant C and of the exponent k are determined by such details of the computational process as the rounding method, the use of accumulation of scalar products, and so on; most often k = 1 or 3/2.

In the case of methods of Gaussian type, the right-hand side of estimate (4) also contains a factor that reflects the possible growth of the elements of the matrix A at intermediate steps of the method compared with their initial level (such growth is absent in orthogonal methods). To keep this factor small, various strategies for choosing the pivot element are used, which prevent the growth of the matrix elements.

For the square-root (Cholesky) method, which is usually applied when the matrix A is positive definite, the strongest estimate is obtained.

There are direct methods (the Jordan method, the bordering method, the method of conjugate gradients) for which a direct application of the backward analysis scheme does not lead to effective estimates. In these cases, other considerations are applied in studying the accumulation of error (see the literature below).
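
A small numpy sketch (not part of the original text) illustrating the idea behind (2)-(3): after solving a random system, the normalized residual plays the role of the equivalent perturbation, and multiplying it by the condition number bounds the error of the computed solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true

x = np.linalg.solve(A, b)                     # direct method (LU with partial pivoting)

# Normalized residual ~ size of the equivalent perturbation of A
backward = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
forward = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
cond = np.linalg.cond(A)

print(f"backward error     ~ {backward:.1e}")
print(f"cond(A) * backward ~ {cond * backward:.1e}   (bound in the spirit of (3))")
print(f"actual forward err ~ {forward:.1e}")
```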

Lit.: Givens W., "U.S. Atomic Energy Commiss. Repts. Ser. ORNL", 1954, No. 1574; Wilkinson J. H., Rounding Errors in Algebraic Processes, L., 1963; Voevodin V. V., Rounding errors and stability in direct methods of linear algebra, M., 1969; his own, Computational foundations of linear algebra, M., 1977; Peters G., Wilkinson J. H., "Communs Assoc. Comput. Mach.", 1975, v. 18, No. 1, p. 20-24; Broyden C. G., "J. Inst. Math. and Appl.", 1974, v. 14, No. 2, p. 131-40; Reid J. K., in: Large Sparse Sets of Linear Equations, L.-N.Y., 1971, p. 231-254; Ikramov Kh. D., "J. Comput. Math. and Math. Physics", 1978, v. 18, No. 3, p. 531-45.

Kh. D. Ikramov.

Accumulation of errors of rounding or of the method arises in solving problems in which the solution is the result of a large number of successively performed arithmetic operations.

A significant portion of such problems involves the solution of algebraic problems, linear or nonlinear (see above). Among the algebraic problems, in turn, the most common are those that arise in the approximation of differential equations. These problems have certain specific features.

The accumulation of the error of the method of solving a problem follows the same or simpler laws as the accumulation of the computational error; the accumulation of the method error is examined when a method of solving the problem is evaluated.

Two approaches are distinguished in studying the accumulation of computational errors. In the first, the computational errors at each step are assumed to enter in the most unfavorable way, and a majorant error estimate is obtained. In the second, the errors are considered random, with a certain distribution law.

The character of the error accumulation depends on the problem being solved, on the method of solution, and on a number of other factors that at first sight may seem insignificant: among them are the form in which numbers are represented in the computer (fixed point or floating point), the order in which the arithmetic operations are performed, and so on. For example, in the problem of computing the sum of N numbers

a1 + a2 + … + aN,

the order in which the operations are performed is essential. Suppose the calculations are performed on a floating-point machine with t binary digits and that all the numbers lie within the admissible range. When the sum is computed directly, by adding one term at a time to a running total, the majorant error estimate is of order 2^(−t)·N. One may proceed differently: first compute the pairwise sums of adjacent terms (if N = 2l + 1 is odd, the last term is carried over unchanged), then the pairwise sums of those, and so on.

After the log2 N stages of forming pairwise sums, a majorant error estimate of order 2^(−t)·log2 N is obtained.

In typical problems, the quantities ai are computed by formulas, in particular recurrent ones, or arrive sequentially in the working memory of the computer; in such cases the technique just described increases the load on the computer memory. The sequence of computations can, however, be organized so that the working-memory load never exceeds about log2 N cells.
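
The sketch below (hypothetical data, single precision to make the effect visible) compares the two orders of summation; the direct sum loses accuracy roughly in proportion to N, the pairwise sum roughly in proportion to log2 N.

```python
import numpy as np

def pairwise_sum(values):
    """Sum a sequence by repeatedly adding adjacent pairs."""
    a = list(values)
    while len(a) > 1:
        if len(a) % 2:                       # odd length: carry the last element over
            a.append(np.float32(0.0))
        a = [a[i] + a[i + 1] for i in range(0, len(a), 2)]
    return a[0]

rng = np.random.default_rng(1)
terms = rng.uniform(0.0, 1.0, size=1_000_000).astype(np.float32)
exact = float(np.sum(terms.astype(np.float64)))      # reference sum in double precision

direct = np.float32(0.0)
for v in terms:                                      # one term at a time
    direct += v

print("direct   error:", abs(float(direct) - exact))
print("pairwise error:", abs(float(pairwise_sum(terms)) - exact))
```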

In the numerical solution of differential equations the following cases are possible. As the grid step h tends to zero, the error may grow without bound; methods of this kind are classified as unstable, and they are used only occasionally.

Stable methods are characterized by an error growth of the form A(h)·h^(−q). The error of such methods is usually estimated as follows: an equation is constructed for the perturbation introduced either by the roundings or by the errors of the method, and the solution of this equation is then investigated.

In more complex cases the method of equivalent perturbations is used, developed in connection with the problem of studying the accumulation of computational errors in the solution of differential equations. Calculations by some computational scheme with roundings are treated as calculations without roundings, but for an equation with perturbed coefficients. By comparing the solution of the original grid equation with the solution of the equation with perturbed coefficients, an error estimate is obtained.

Considerable attention is paid to choosing a method with the smallest possible values of q and A(h). With a fixed method of solving the problem, the computational formulas can usually be transformed to a more favorable form in this respect. This is especially important in the case of ordinary differential equations, where the number of steps in some cases turns out to be very large.

The value of A(h) can grow strongly as the interval of integration increases, so methods with a smaller value of A(h) are preferred whenever possible. In the case of the Cauchy problem, the rounding error made at a particular step can, with respect to the subsequent steps, be regarded as an error in the initial condition. The lower bound for A(h) therefore depends on the characteristic of divergence of close solutions of the differential equation, which is defined by the equation in variations.

In the case of the numerical solution of an ordinary differential equation y′ = f(x, y), the equation in variations has the form

z′ = f_y(x, y(x))·z,

and therefore, when the problem is solved on the interval (x0, X), one cannot count on the constant A(h) in the majorant estimate of the computational error being substantially better than the growth factor of the solutions of this equation in variations.

For this reason, in solving this problem one most often uses one-step methods of the Runge–Kutta type or methods of the Adams type, for which the error accumulation is determined mainly by the solution of the equation in variations.
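
The sketch below (explicit Euler for y′ = −y, a deliberately simple stand-in for the methods discussed here) separates the two components: the method error, which decreases with h, and the accumulated rounding error, emulated by repeating the computation in single precision, which grows as the step is refined.

```python
import numpy as np

def euler(h, dtype):
    """Integrate y' = -y, y(0) = 1 on [0, 5] with explicit Euler in the given precision."""
    y = dtype(1.0)
    factor = dtype(1.0) - dtype(h)                 # one step: y_{k+1} = y_k * (1 - h)
    for _ in range(int(round(5.0 / h))):
        y = y * factor
    return float(y)

exact = float(np.exp(-5.0))
for h in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    y64 = euler(h, np.float64)
    y32 = euler(h, np.float32)
    method_err = abs(y64 - exact)                  # behaves like A(h) * h
    rounding_err = abs(y32 - y64)                  # grows with the number of steps
    print(f"h = {h:.0e}:  method error = {method_err:.1e}, rounding error = {rounding_err:.1e}")
```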

For a number of methods, the leading term of the method error accumulates according to a similar law, while the computational error accumulates much faster. The range of practical applicability of such methods turns out to be considerably narrower.

The accumulation of the computational error depends essentially on the method used to solve the grid problem. For example, when grid boundary value problems corresponding to ordinary differential equations are solved by the shooting method or by the sweep method, the error accumulation has the character A(h)·h^(−q), with the same q, but the values of A(h) for the two methods may differ so much that in a certain situation one of the methods becomes inapplicable. When the grid boundary value problem for the Laplace equation is solved by the shooting method, the error accumulation has the character s^(1/h), s > 1, whereas for the sweep method it is A·h^(−q). With a probabilistic approach to the study of error accumulation, in some cases a law of error distribution is assumed a priori, while in other cases a measure is introduced on the space of the problems under consideration and a distribution law of the rounding errors is derived from this measure.

With moderate accuracy requirements, the majorant and probabilistic approaches to estimating the accumulation of computational errors usually give qualitatively the same result: either the accumulated error remains within acceptable limits in both cases, or in both cases it exceeds those limits.

Lit.: Voevodin V. V., Computational foundations of linear algebra, M., 1977; Shura-Bura M. R., "Applied Mathematics and Mechanics", 1952, v. 16, No. 5, p. 575-88; Bakhvalov N. S., Numerical methods, 2nd ed., M., 1975; Wilkinson J. H., The Algebraic Eigenvalue Problem, trans. from English, M., 1970; Bakhvalov N. S., in: Computational Methods and Programming, issue 1, M., 1962, p. 69-79; Godunov S. K., Ryaben'kii V. S., Difference Schemes, 2nd ed., M., 1977; Bakhvalov N. S., "Reports of the Academy of Sciences of the USSR", 1955, v. 104, No. 5, p. 683-86; his own, "J. Comput. Math. and Math. Physics", 1964, v. 4, No. 3, p. 399-404; Lapshin E. A., ibid., 1971, v. 11, No. 6, p. 1425-36.


"ACCUMULATION OF ERROR" in books

Technical errors

From the book Stars and a little nervous author

Technical errors

From the book Vain Perfections and Other Vignettes author Zholkovsky Alexander Konstantinovich

Technical inaccuracies Tales of successfully resisting force are not as far-fetched as we implicitly fear. Hitting usually assumes the passivity of the victim, and therefore it is thought out only one step forward and does not withstand a counterattack. Dad told me about one

Sins and errors

From the book How NASA Showed America the Moon author Rene Ralph

Sins and inaccuracies Despite the fictitious nature of their space navigation, NASA boasted of amazing accuracy in everything it did. Nine times in a row, the Apollo capsules landed perfectly in lunar orbit without the need for major course corrections. Lunar module,

initial accumulation of capital. Forced dispossession of peasants. Accumulation of wealth.

author

initial accumulation of capital. Forced dispossession of peasants. Accumulation of wealth. Capitalist production presupposes two basic conditions: 1) the presence of a mass of poor people, personally free and at the same time deprived of the means of production, and

Socialist accumulation. Accumulation and consumption in a socialist society.

From the book Political Economy author Ostrovityanov Konstantin Vasilievich

Socialist accumulation. Accumulation and consumption in a socialist society. The source of expanded socialist reproduction is socialist accumulation. Socialist accumulation is the use of a part of the net income of society,

Measurement errors

TSB

Errors of measuring instruments

From the book Great Soviet Encyclopedia (PO) of the author TSB

Ultrasound errors

From the book Thyroid Recovery A Guide for Patients author Ushakov Andrey Valerievich

Ultrasound errors When a patient came to me from St. Petersburg for a consultation, I saw three protocols of ultrasound examination at once. All of them were made by different specialists. Described differently. At the same time, the dates of the studies differed from each other by almost

Annex 13 Speech errors

From the book The Art of Getting Your Own author Stepanov Sergey Sergeevich

Appendix 13 Speech errors Even seemingly harmless phrases can often become a serious barrier to promotion. The famous American marketing specialist John R. Graham compiled a list of expressions, the use of which, according to his observations,

Speech errors

From the book How Much Are You Worth [Technology for a Successful Career] author Stepanov Sergey Sergeevich

Speech errors Even seemingly harmless phrases can often become a serious barrier to promotion. The famous American marketing specialist John R. Graham compiled a list of expressions, the use of which, according to his observations, did not allow

fatal errors

From the book The Black Swan [Under the sign of unpredictability] author Taleb Nassim Nicholas

Deadly errors Errors have such a destructive property: the more significant they are, the greater their masking effect. Nobody sees dead rats, and therefore the more deadly the risk, the less obvious it is, because the victims are excluded from the number of witnesses. How

Orientation errors

From the book The ABC of Tourism author Bardin Kirill Vasilievich

Orientation Errors So, a common orientation problem that a tourist has to solve is to get from one point to another using only a compass and a map. The area is unfamiliar and, moreover, closed, that is, devoid of any

Errors: Philosophy

From the author's book

Errors: philosophy On an intuitive level, we understand that our knowledge in many cases is not accurate. We can cautiously assume that our knowledge in general can be accurate only on a discrete scale. You can know exactly how many balls are in the bag, but you can’t know what their weight is,

Uncertainties: Models

From the author's book

Errors: Models When we measure something, it is convenient to represent the information (both conscious and unconscious) available at the time the measurements began in the form of models of an object or phenomenon. The "zero level" model is the model of having a quantity. We believe that she is -

Errors: what and how to control

From the author's book

Errors: what and how to control The choice of controlled parameters, measurement scheme, method and scope of control is made taking into account the output parameters of the product, its design and technology, the requirements and needs of the one who uses the controlled products. Yet again,

1.2.10. Processing indirect measurements.

In indirect measurements, the sought value of a physical quantity Y is found from the results X1, X2, …, Xi, …, Xn of direct measurements of other physical quantities that are related to the sought quantity by a known functional dependence φ:

Y = φ(X1, X2, …, Xi, …, Xn).   (1.43)

Assuming that X1, X2, …, Xi, …, Xn are the corrected results of the direct measurements and that the methodological errors of the indirect measurement can be neglected, the result of the indirect measurement is found directly from formula (1.43).

If ΔX1, ΔX2, …, ΔXi, …, ΔXn are the errors of the results of the direct measurements of the quantities X1, X2, …, Xi, …, Xn, then the error Δ of the result Y of the indirect measurement can, in the linear approximation, be found from the formula

Δ = (∂φ/∂X1)·ΔX1 + (∂φ/∂X2)·ΔX2 + … + (∂φ/∂Xn)·ΔXn.   (1.44)

The term

(∂φ/∂Xi)·ΔXi   (1.45)

is the component of the error of the indirect measurement result caused by the error ΔXi of the result Xi of a direct measurement; it is called a partial error, and the approximate formula (1.44) is called the law of accumulation of partial errors. (1K22)
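
A short sketch of (1.44)-(1.45) for a hypothetical indirect measurement, the area of a rectangular plot S = a·b; the side lengths and their errors are invented for illustration.

```python
a, b = 25.00, 16.00          # measured sides, m (hypothetical values)
da, db = 0.02, 0.02          # errors of the direct measurements, m (hypothetical)

partial_a = b * da           # (dS/da) * da - partial error from the first argument
partial_b = a * db           # (dS/db) * db - partial error from the second argument

dS = partial_a + partial_b   # law of accumulation of partial errors, linear approximation (1.44)
print(f"S = {a * b:.1f} m^2, partial errors = {partial_a:.2f} and {partial_b:.2f} m^2, dS = {dS:.2f} m^2")
```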

To estimate the error Δ of the result of an indirect measurement, some information about the errors ΔX1, ΔX2, …, ΔXi, …, ΔXn of the results of the direct measurements is required.

Usually the limit values of the components of the errors of the direct measurements are known. For example, for the error ΔXi the following are known: the limit of the basic error, the limits of the additional errors, the limit of the non-excluded residual systematic error, etc. The error ΔXi is equal to the sum of these components:

ΔXi = Σj ΔXij,

and the limit value ΔXi,p of this error is the sum of the limits of the components:

ΔXi,p = Σj ΔXij,p.   (1.46)

Then the limit value Δ_p of the error of the result of the indirect measurement for the confidence level P = 1 can be found from the formula

Δ_p = Σi |∂φ/∂Xi|·ΔXi,p.   (1.47)

The boundary value Δ_g of the error of the result of the indirect measurement for the confidence level P = 0.95 can be found using the approximate formula (1.41). Taking (1.44) and (1.46) into account, we obtain

Δ_g ≈ 1.1·√(Σi (∂φ/∂Xi)²·ΔXi,p²).   (1.48)

After Δ_p or Δ_g has been calculated, the result of the indirect measurement should be written in the standard form ((1.40) or (1.42), respectively). (1P3)
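
Continuing the same hypothetical example, the sketch below evaluates (1.47) and (1.48); the coefficient 1.1 for P = 0.95 is assumed here, since formula (1.41) itself is not reproduced in this excerpt.

```python
import math

# |dS/dXi| * dXi,p for each argument of the hypothetical area measurement S = a * b
partial_limits = [16.00 * 0.02, 25.00 * 0.02]

delta_p = sum(partial_limits)                                   # (1.47), confidence level P = 1
delta_g = 1.1 * math.sqrt(sum(p * p for p in partial_limits))   # (1.48), P = 0.95 (assumed coefficient)

print(f"delta_p = {delta_p:.2f} m^2, delta_g = {delta_g:.2f} m^2")
```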

QUESTIONS:

1. For what tasks are measuring instruments used? What metrological characteristics of measuring equipment do you know?

2. By what criteria are the metrological characteristics of measuring instruments classified?

3. What component of the error of a measuring instrument is called basic?

4. What component of the error of a measuring instrument is called additional?

5. Define the absolute, relative and reduced errors of measuring instruments.

6. Define the absolute error of a measuring transducer at the input and at the output.

7. How would you experimentally determine the errors of a measuring transducer at the input and at the output?

8. How are the absolute errors of a measuring transducer at the input and at the output related?

9. Define the additive, multiplicative and nonlinear error components of measuring equipment.

10. Why is the nonlinear component of the error of measuring equipment sometimes called the linearity error? For which transducer conversion functions does this make sense?

11. What information about the error of a measuring instrument does its accuracy class give?

12. Formulate the law of accumulation of partial errors.

13. Formulate the error summation problem.

15. What is the corrected value of a measurement result?

16. What is the purpose of processing measurement results?

17. How are the limit value Δ_p of the error of a direct measurement result for the confidence level P = 1 and its boundary value Δ_g for P = 0.95 calculated?

18. What measurement is called indirect? How is the result of an indirect measurement found?

19. How are the limit value Δ_p of the error of an indirect measurement result for the confidence level P = 1 and its boundary value Δ_g for P = 0.95 calculated?

20. Give examples of methodological errors of direct and indirect measurements.

Test assignments on subsection 1.2 are given in (1KR1).

REFERENCES for section 1.

2. METHODS FOR MEASURING ELECTRIC QUANTITIES

2.1. Measurement of voltages and currents.

2.1.1. General information.

When choosing a means of measuring electrical voltages and currents, it is necessary, first of all, to take into account:

Kind of measured physical quantity (voltage or current);

The presence and nature of the dependence of the measured value on time over the observation interval (depends or not, the dependence is a periodic or non-periodic function, etc.);

The range of possible values of the measured quantity;

Measured parameter (average value, effective value, maximum value in the observation interval, set of instantaneous values ​​in the observation interval, etc.);

Frequency range;

Required measurement accuracy;

The maximum observation time interval.

In addition, it is necessary to take into account the ranges of values ​​of the influencing quantities (ambient air temperature, supply voltage of the measuring instrument, output impedance of the signal source, electromagnetic interference, vibration, humidity, etc.), depending on the conditions of the measurement experiment.

The ranges of possible values of voltages and currents are very wide. For example, currents can be of the order of 10⁻¹⁶ A when measured in space and of the order of 10⁵ A in the circuits of large power plants. This section deals mainly with voltage and current measurements in the ranges most common in practice: from 10⁻⁶ to 10³ V and from 10⁻⁶ to 10⁴ A.

To measure voltages, analog (electromechanical and electronic) and digital voltmeters (2K1), DC and AC compensators (potentiometers), analog and digital oscilloscopes, and measuring systems are used.

To measure currents, electromechanical ammeters (2K2) are used, as well as multimeters and measuring systems in which the measured current is first converted into a proportional voltage. In addition, currents can be determined indirectly, by measuring the voltage produced by the current across a resistor of known resistance.

2.1.2. Measurement of constant voltages by electromechanical devices.

To create voltmeters, the following measuring mechanisms (2K3) are used: magnetoelectric (2K4), electromagnetic (2K5), electrodynamic (2K6), ferrodynamic (2K7) and electrostatic (2K8).

In a magnetoelectric measuring mechanism, the torque is proportional to the current in the moving coil. To build a voltmeter, an additional resistance is connected in series with the coil winding. The measured voltage applied to this series connection is proportional to the current in the winding; therefore, the scale of the instrument can be graduated in units of voltage. The direction of the torque depends on the direction of the current, so attention must be paid to the polarity of the voltage applied to the voltmeter.

The input resistance R_in of a magnetoelectric voltmeter is determined by the upper limit U_k of the measuring range and by the full-deflection current I_fd, i.e. the current in the coil winding at which the pointer of the instrument deflects to the end of the scale (to the mark U_k). It is obvious that

R_in = U_k / I_fd. (2.1)

In multi-range instruments, it is often not R_in that is normalized but the current I_fd. Knowing the voltage U_k for the measuring range used in a given experiment, the value of R_in can be calculated by formula (2.1). For example, for a voltmeter with U_k = 100 V and I_fd = 1 mA, R_in = 10⁵ Ohm.
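A minimal sketch of this calculation for a multi-range voltmeter; the full-deflection current and the range limits are illustrative assumptions, not data from any particular instrument:

```python
# Input resistance of a magnetoelectric voltmeter, formula (2.1): R_in = U_k / I_fd.
# Hypothetical example values: full-deflection current 1 mA, several range limits U_k.
I_FD = 1e-3  # full-deflection current, A

for u_k in (1.0, 10.0, 100.0, 1000.0):  # upper limits of the measuring ranges, V
    r_in = u_k / I_FD                   # input resistance on this range, Ohm
    print(f"U_k = {u_k:7.1f} V  ->  R_in = {r_in:10.0f} Ohm")
```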

To build electromagnetic, electrodynamic and ferrodynamic voltmeters, a similar circuit is used; the difference is that the additional resistance is connected in series with the winding of the fixed coil of the electromagnetic measuring mechanism, or with the series-connected windings of the moving and fixed coils of the electrodynamic or ferrodynamic measuring mechanisms. The full-deflection currents of these measuring mechanisms are usually considerably higher than that of the magnetoelectric one, so the input resistances of such voltmeters are lower.

Electrostatic voltmeters use an electrostatic measuring mechanism. The measured voltage is applied between fixed and movable plates insulated from each other. The input resistance is determined by the insulation resistance (about 10⁹ Ohm).

The most common electromechanical voltmeters, with accuracy classes 0.2, 0.5, 1.0 and 1.5, allow DC voltages from 0.1 to 10⁴ V to be measured. To measure higher voltages (usually more than 10³ V), voltage dividers (2K9) are used. To measure voltages below 0.1 V, magnetoelectric galvanometers (2K10) and devices based on them (for example, photogalvanometric devices) can be used, but it is more expedient to use digital voltmeters.

2.1.3. Measurement of direct currents by electromechanical devices.

To create ammeters, the following measuring mechanisms (2K3) are used: magnetoelectric (2K4), electromagnetic (2K5), electrodynamic (2K6) and ferrodynamic (2K7).

In the simplest single-range ammeters, the measured-current circuit consists of the moving-coil winding (for a magnetoelectric measuring mechanism), the fixed-coil winding (for an electromagnetic measuring mechanism), or the series-connected windings of the moving and fixed coils (for electrodynamic and ferrodynamic measuring mechanisms). Thus, unlike voltmeter circuits, they contain no additional resistances.

Multi-range ammeters are built on the basis of single-range ones, using various techniques to reduce the sensitivity, for example, passing the measured current through only part of the coil winding or connecting the coil windings in parallel. Shunts are also used: resistors with relatively low resistance connected in parallel with the windings.

The most common electromechanical ammeters, with accuracy classes 0.2, 0.5, 1.0 and 1.5, allow direct currents from 10⁻⁶ to 10⁴ A to be measured. To measure currents below 10⁻⁶ A, magnetoelectric galvanometers (2K10) and devices based on them (for example, photogalvanometric devices) can be used.

2.1.4. Measurement of alternating currents and voltages by electromechanical devices.

Electromechanical ammeters and voltmeters are used to measure the effective (rms) values of periodic currents and voltages. They are based on electromagnetic, electrodynamic, ferrodynamic and (for voltmeters only) electrostatic measuring mechanisms. In addition, electromechanical ammeters and voltmeters include devices based on a magnetoelectric measuring mechanism with converters of alternating current or voltage into direct current (rectifier and thermoelectric devices).

The measuring circuits of electromagnetic, electrodynamic and ferrodynamic ammeters and AC voltmeters practically do not differ from the circuits of similar DC devices. All these devices can be used to measure both direct and alternating currents and voltages.

The instantaneous value of the torque in these devices is determined by the square of the instantaneous current in the coil windings, and the position of the pointer depends on the average value of the torque. Therefore, the device measures the effective (rms) value of the measured periodic current or voltage, regardless of the waveform. The most common ammeters and voltmeters, with accuracy classes 0.2, 0.5, 1.0 and 1.5, allow alternating currents from 10⁻⁴ to 10² A and voltages from 0.1 to 600 V to be measured in the frequency range from 45 Hz to 5 kHz.

Electrostatic voltmeters can also be used to measure both constant voltages and the effective values of alternating voltages, regardless of the waveform, since the instantaneous value of the torque in these devices is determined by the square of the instantaneous value of the measured voltage. The most common voltmeters, with accuracy classes 0.5, 1.0 and 1.5, allow alternating voltages from 1 to 10⁵ V to be measured in the frequency range from 20 Hz to 10 MHz.

Magnetoelectric ammeters and voltmeters designed for operation in DC circuits cannot measure the effective values of alternating currents and voltages. Indeed, the instantaneous value of the torque in these devices is proportional to the instantaneous current in the coil. With a sinusoidal current, the average value of the torque, and hence the instrument reading, is zero. If the current in the coil has a constant component, the reading of the device is proportional to the average value of the current in the coil.

To create AC ammeters and voltmeters based on a magnetoelectric measuring mechanism, AC-to-DC converters based on semiconductor diodes or thermal converters are used. Fig. 2.1 shows one possible circuit of a rectifier-system ammeter, and Fig. 2.2 shows a thermoelectric one.

In a rectifier-system ammeter, the measured current i(t) is rectified and passes through the coil winding of the magnetoelectric measuring mechanism IM. The reading of the device is proportional to the value of the current averaged modulo over the period T:

I_av = (1/T) ∫₀ᵀ |i(t)| dt. (2.2)

The value I_av is proportional to the effective value of the current; however, the proportionality factor depends on the form of the function i(t). All devices of the rectifier system are calibrated in effective values of currents (or voltages) of sinusoidal form and are not intended for measurements in circuits with currents of arbitrary shape.
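A short numerical illustration of this point, with assumed example waveforms (not tied to any particular instrument):

```python
# Illustration of formula (2.2): a rectifier-system instrument responds to the
# mean of |i(t)|, but its scale is drawn in rms values of a SINUSOIDAL current,
# i.e. readings correspond to I_av multiplied by the sine-wave form factor ~1.11.
import numpy as np

T = 1.0                                   # period, s (arbitrary)
t = np.linspace(0.0, T, 100_000, endpoint=False)

def report(name, i):
    i_av = np.mean(np.abs(i))             # average-modulo value, Eq. (2.2)
    i_rms = np.sqrt(np.mean(i ** 2))      # true effective (rms) value
    reading = 1.1107 * i_av               # what a sine-calibrated scale shows
    print(f"{name:10s} I_av={i_av:.3f}  I_rms={i_rms:.3f}  scale reading={reading:.3f}")

report("sine", np.sin(2 * np.pi * t / T))                 # reading equals the rms value
report("square", np.sign(np.sin(2 * np.pi * t / T)))      # reading overestimates the rms value
report("triangle", 2 * np.abs(2 * (t / T % 1) - 1) - 1)   # reading underestimates the rms value
```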

In a thermoelectric-system ammeter, the measured current i(t) passes through the heater of the thermal converter TP. When it is heated, a thermo-EMF arises at the free ends of the thermocouple, producing a direct current through the coil winding of the magnetoelectric measuring mechanism IM. The value of this current depends nonlinearly on the effective value I of the measured current i(t) and depends little on its shape and spectrum.

Voltmeter circuits of the rectifier and thermoelectric systems differ from the ammeter circuits by the presence of an additional resistance connected in series into the circuit of the measured current i(t); it acts as a converter of the measured voltage into current.

The most common rectifier-system ammeters and voltmeters, with accuracy classes 1.0 and 1.5, allow alternating currents from 10⁻³ to 10 A and voltages from 1 to 600 V to be measured in the frequency range from 45 Hz to 10 kHz.

The most common thermoelectric-system ammeters and voltmeters, with accuracy classes 1.0 and 1.5, allow alternating currents from 10⁻⁴ to 10² A and voltages from 0.1 to 600 V to be measured in the frequency range from 1 Hz to 50 MHz.

Usually, devices of rectifier and thermoelectric systems are made multi-range and combined, which allows them to be used to measure both alternating and direct currents and voltages.

2.1.5. Measurement of direct voltages by analog electronic voltmeters.

Unlike electromechanical analog voltmeters (2K11), electronic voltmeters incorporate voltage amplifiers. The informative parameter of the measured voltage is converted in these devices into a direct current in the coil winding of the magnetoelectric measuring mechanism (2K4), whose scale is calibrated in units of voltage.

The amplifier of an electronic voltmeter must have a stable gain in a certain frequency range, from some lower frequency f_l to an upper frequency f_u. If f_l = 0, such an amplifier is usually called a DC amplifier; if f_l > 0 and the gain is zero at f = 0, it is an AC amplifier.

A simplified circuit of an electronic DC voltmeter consists of three main components: an input voltage divider (2K9), a DC amplifier connected to its output, and a magnetoelectric voltmeter. The high-resistance voltage divider and the DC amplifier provide a high input resistance of the electronic voltmeter (of the order of 1 MΩ). The division and gain factors can be adjusted in steps, which makes it possible to build multi-range voltmeters. Owing to the high gain, electronic voltmeters provide higher sensitivity than electromechanical ones.

A feature of electronic DC voltmeters is drift: slow changes in the voltmeter readings at a constant measured voltage (1Q14), caused by changes in the parameters of the DC amplifier circuit elements. The drift of the readings is most significant when measuring low voltages. Therefore, before starting measurements, special adjusting elements must be used to set the voltmeter reading to zero with the input short-circuited.

If an alternating periodic voltage is applied to the voltmeter in question, then, owing to the properties of the magnetoelectric measuring mechanism, it will measure the constant component of this voltage, provided that the alternating component is not too large and the voltmeter amplifier operates in linear mode.

The most common analog electronic DC voltmeters allow voltages from 10⁻⁶ to 10³ V to be measured. The limits of the basic reduced error depend on the measurement range and are usually ±(0.5 - 5.0)%.

2.1.6. Measurement of alternating voltages by analog electronic voltmeters.

Analog electronic voltmeters are mainly used to measure the effective values ​​of periodic voltages in a wide frequency range.

The main difference between the circuit of an electronic AC voltmeter and the DC voltmeter circuit considered above is the presence of an additional unit: a converter of the informative parameter of the alternating voltage into a direct voltage. Such converters are often referred to as "detectors".

There are detectors of the amplitude, of the modulo-average value and of the effective value of the voltage. The constant voltage at the output of the first is proportional to the amplitude of the input voltage, that at the output of the second is proportional to the modulo-average value of the input voltage, and that of the third to the effective value.

Each of the three groups of detectors can, in turn, be divided into two subgroups: detectors with an open input and detectors with a closed input. For detectors with an open input, the output voltage depends on the DC component of the input voltage; for detectors with a closed input, it does not. Obviously, if the circuit of an electronic voltmeter contains a detector with a closed input or an AC amplifier, the readings of such a voltmeter do not depend on the constant component of the measured voltage. Such a voltmeter is advantageous in cases where only the variable component of the measured voltage carries useful information.

Simplified diagrams of amplitude detectors with open and closed inputs are shown in Figs. 2.3 and 2.4.


When a voltage u(t) = U_m·sin ωt is applied to the input of an amplitude detector with an open input, the capacitor is charged to the voltage U_m, which turns the diode off. A constant voltage U_m is thus maintained at the output of the detector. If an arbitrary voltage is applied to the input, the capacitor is charged to the maximum positive value of this voltage.

When a voltage u(t) = U_m·sin ωt is applied to the input of an amplitude detector with a closed input, the capacitor is likewise charged to the voltage U_m, and the output voltage is u_out(t) = U_m + U_m·sin ωt. If this voltage, or a current proportional to it, is applied to the coil winding of a magnetoelectric measuring mechanism, the reading of the device depends on the constant component of this voltage, equal to U_m (2K4). When a voltage u(t) = U_av + U_m·sin ωt is applied to the input, where U_av is the average value of u(t), the capacitor is charged to the voltage U_m + U_av, and the output voltage again becomes u_out(t) = U_m + U_m·sin ωt, independent of U_av.
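A numerical check of this behaviour with an idealized detector model (instant charging, no discharge); the signal parameters are illustrative assumptions:

```python
# Idealized amplitude-detector model for the two cases described above.
# "Open input": output equals the positive peak of u(t).
# "Closed input": the series capacitor clamps the negative peak of u(t) to zero,
# and the magnetoelectric mechanism then reads the average (DC component) of the output.
import numpy as np

t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
U_m, U_av = 2.0, 0.7                      # amplitude and DC offset of the test signal, V
u = U_av + U_m * np.sin(2 * np.pi * 5 * t)

open_input_reading = u.max()              # capacitor charged to the maximum positive value
u_clamped = u - u.min()                   # closed input: waveform shifted so its minimum is zero
closed_input_reading = u_clamped.mean()   # DC component seen by the measuring mechanism

print(f"open input reading   = {open_input_reading:.3f} V  (U_av + U_m = {U_av + U_m:.3f})")
print(f"closed input reading = {closed_input_reading:.3f} V  (U_m = {U_m:.3f}, independent of U_av)")
```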

Examples of modulo average and effective voltage detectors were considered in subsection 2.1.4 (Fig. 2.1 and 2.2, respectively).

Amplitude and modulo-average detectors are simpler than rms detectors, but voltmeters based on them can be used only to measure sinusoidal voltages. The reason is that their readings, depending on the type of detector, are proportional to the modulo-average or amplitude value of the measured voltage. Therefore, such analog electronic voltmeters can be calibrated in effective values only for a particular form of the measured voltage; this is done for the most common one, the sinusoidal voltage.

The most common analog electronic voltmeters allow voltages from 10⁻⁶ to 10³ V to be measured in the frequency range from 10 to 10⁹ Hz. The limits of the basic reduced error depend on the measurement range and on the frequency of the measured voltage and are usually ±(0.5 - 5.0)%.

The measurement technique with electronic voltmeters differs from that with electromechanical voltmeters. This is due to the presence of electronic amplifiers with DC power supplies, usually fed from the AC mains.


If, however, terminal 6 is connected to the input terminal 1 of the voltmeter and, for example, the voltage U_65 is measured, then the measurement result will be distorted by the interference voltage, whose value depends on the parameters of the equivalent circuits in Fig. 2.5 and 2.6.

With direct measurement of the voltage U_54, interference will distort the measurement result regardless of how the voltmeter is connected. This can be avoided by an indirect measurement: the voltages U_64 and U_65 are measured and U_54 = U_64 - U_65 is calculated. However, the accuracy of such a measurement may not be high enough, especially if U_64 ≈ U_65. (2K12)
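A sketch of why the indirect approach degrades when U_64 ≈ U_65, assuming a hypothetical voltmeter error of 0.5 % of the reading on each measurement and combining the two errors in quadrature as independent random errors:

```python
# Error of the indirect measurement U_54 = U_64 - U_65 for an assumed 0.5 % reading error.
import math

def difference_error(u64: float, u65: float, rel_err: float = 0.005):
    d64, d65 = rel_err * u64, rel_err * u65          # absolute errors of each reading, V
    u54 = u64 - u65
    d54 = math.hypot(d64, d65)                       # error of the difference, V
    return u54, d54, d54 / abs(u54)                  # value, absolute and relative error

for u64, u65 in [(10.0, 2.0), (10.0, 9.0), (10.0, 9.9)]:
    u54, d54, rel = difference_error(u64, u65)
    print(f"U_64={u64:5.1f} V  U_65={u65:4.1f} V  ->  U_54={u54:5.2f} V  +/-{d54:.3f} V  ({100*rel:.1f} %)")
```

The closer the two measured voltages are to each other, the larger the relative error of their difference, which is exactly the limitation noted above.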

Analytical chemistry

UDC 543.08+543.422.7

PREDICTION OF PHOTOMETRY ERRORS USING THE LAW OF ERRORS ACCUMULATION AND THE MONTE CARLO METHOD

V.I. Golovanov, E.I. Danilina

In a computational experiment combining the law of error accumulation and the Monte Carlo method, the influence of errors in the preparation of solutions, errors of the blank experiment, and transmission measurement errors on the metrological characteristics of photometric analysis was studied. It was found that the results of error prediction by the analytical and statistical methods are mutually consistent. It is shown that a feature of the Monte Carlo method is the possibility of predicting the distribution law of the errors in photometry. Using a routine-analysis scenario as an example, the influence of the heteroscedasticity of the spread along the calibration curve on the quality of the analysis is considered.

Keywords: photometric analysis, error accumulation law, calibration graph, metrological characteristics, Monte Carlo method, stochastic simulation.

Introduction

Prediction of photometric analysis errors is based mainly on the use of the error accumulation law (EAL). For the case of the linear form of the law of light absorption, -lg T = A = εlc, the EAL is usually written as:

s_A / A = s_c / c = (0.434·10^A / A)·s_T. (1)

In this case, the standard deviation of the transmittance measurement is assumed to be constant over the entire dynamic range of the photometer. At the same time, as noted in the literature, in addition to instrumental errors, the accuracy of the analysis is affected by the error of the blank experiment, the error in setting the instrument scale limits, the cuvette error, chemical factors, and the error in setting the analytical wavelength. These factors are considered the main sources of error in the analysis result. Contributions to the accumulated error from the accuracy of preparation of the calibration solutions are usually neglected.

From this it is clear that equation (1) does not have significant predictive power, since it takes into account the influence of only one factor. In addition, equation (1) is a consequence of an approximate expansion of the law of light absorption in a Taylor series, which raises the question of its accuracy owing to the neglect of expansion terms above the first order. Mathematical analysis of the expansion residuals involves computational difficulties and is not used in the practice of chemical analysis.
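A minimal numerical illustration of equation (1); the value s_T = 0.005 (0.5 % T) is assumed purely for illustration:

```python
# Relative concentration error predicted by equation (1) at constant s_T:
# s_c/c = (0.434 * 10**A / A) * s_T.
import numpy as np

s_T = 0.005                               # assumed standard deviation of transmittance
A = np.linspace(0.05, 2.0, 400)           # optical density range
rel_err = 0.434 * 10.0 ** A / A * s_T     # relative error of concentration, Eq. (1)

i_min = rel_err.argmin()
print(f"minimum relative error {100 * rel_err[i_min]:.2f} % at A = {A[i_min]:.2f}")
# The minimum falls near A = 0.434 (T ~ 0.37), the classic photometry result.
```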

The purpose of this work is to study the possibility of using the Monte Carlo method (the method of statistical tests) as an independent tool for studying and predicting the accumulation of errors in photometric analysis, which complements and deepens the capabilities of the EAL.

Theoretical part

In this work we will assume that the final random error of the calibration function is due not only to the instrumental error of the optical density measurement, but also to the error of setting the instrument scale to 0 and 100 % transmission (the error of the blank experiment), as well as to the errors of preparation of the calibration solutions. We neglect the other sources of error mentioned above. We then rewrite the equation of the Bouguer-Lambert-Beer law in a form convenient for the further construction:

A = k·c′ + A_bl. (2)

In this equation, c_st is the concentration of the stock standard solution of the colored substance, aliquots (V_a) of which are diluted in flasks of nominal volume V_fl to obtain the calibration series of solutions; A_bl is the optical density of the blank-experiment solution. Since, during photometry, the optical densities of the solutions under study are measured relative to the blank solution, i.e. A_bl is taken as the conditional zero, A_bl = 0. (Note that the optical density measured in this way can be called the conditional extinction.) In equation (2), the dimensionless quantity c′ has the meaning of the concentration of the working solution expressed in units of the concentration of the stock standard (for the dilution described, c′ = V_a/V_fl). We call the coefficient k the extinction of the standard, since A_st = εlc_st at c′ = 1.

Let us apply to expression (2) the operator of the law of accumulation of random errors, considering V_a, V_fl and A_bl to be random variables. We obtain:

s²(A) = (k·c′)²·[s_r²(V_a) + s²(V_fl)/V_fl²] + s²(A_bl). (3)

Another independent random variable that affects the spread of the A values is the transmittance, since

A = -lg T, (4)

therefore one more term is added to the sum of variances in Eq. (3):

s²(A) = (0.434·10^A)²·s²(T) + s²(A_bl) + (k·c′)²·[s_r²(V_a) + s²(V_fl)/V_fl²]. (5)

In this final form of the law of error accumulation, the absolute standard deviations of T, A_bl and V_fl are constant, while for V_a the relative standard error is constant.
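A minimal sketch of the analytical (EAL) prediction by equation (5), using the parameter values adopted later in the computational experiment; the variable names are ours:

```python
# Analytical prediction of the calibration spread by equation (5).
# Parameters as adopted below: s(T) = 0.12 %, s(A_bl) = 0.007,
# s_r(V_a) = 1.1 %, s(V_fl) = 0.05 ml for 50-ml flasks, k = 5.
import numpy as np

k, V_fl = 5.0, 50.0                     # extinction of the standard; flask volume, ml
s_T, s_Abl = 0.0012, 0.007              # s.d. of transmittance and of the blank experiment
s_r_Va, s_Vfl = 0.011, 0.05             # relative s.d. of the aliquot; s.d. of the flask, ml

c_prime = np.linspace(0.02, 0.34, 17)   # working concentrations in units of the stock standard
A = k * c_prime                         # nominal optical densities, 0.1 ... 1.7

var_A = ((0.434 * 10.0 ** A) ** 2 * s_T ** 2                           # transmittance term
         + s_Abl ** 2                                                  # blank-experiment term
         + (k * c_prime) ** 2 * (s_r_Va ** 2 + (s_Vfl / V_fl) ** 2))   # volumetric terms
s_A = np.sqrt(var_A)

for a, s in zip(A, s_A):
    print(f"A = {a:4.2f}   s_A = {s:.4f}")
```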

When constructing a stochastic model of the calibration function based on the Monte Carlo method, we consider that the possible values x_i* of the random variables T, A_bl, V_a and V_fl are distributed according to the normal law. Following the Monte Carlo principle, the possible values are played out by the inverse-function method:

x_i* = M(x_i) + Φ⁻¹(r_j)·s_x, (6)

where M(x_i) is the expectation (actual value) of the variable, Φ⁻¹ is the inverse of the Laplace-Gauss (normal) distribution function, r_j are possible values of a random variable R uniformly distributed over the interval (0, 1), i.e. random numbers, s_x is the standard deviation of the corresponding variable, and i = 1…m is the ordinal number of the independent random variable. Substituting expression (6) into equations (4) and (2), we obtain:

A* = -lg T* = -lg[10^(-A′) + Φ⁻¹(r_j)·s_T], (7)

where A′ = k·(V_a*/V_fl*) + A_bl* is calculated from the played values of V_a, V_fl and A_bl obtained by (6).

Calculations according to equation (7) return a single realization of the calibration function, i.e. the dependence of A* on the mathematical expectation M(c′) (the nominal value of c′). Expression (7) is therefore an analytical expression of a random function. Cross-sections of this function are obtained by repeatedly playing out the random numbers at each point of the calibration dependence. The sample set of realizations is processed by the methods of mathematical statistics in order to estimate the general parameters of the calibration and to test hypotheses about the properties of the general population.

Obviously, the two approaches considered here to the problem of predicting metrological characteristics in photometry (one based on the EAL, the other on the Monte Carlo method) should complement each other. In particular, equation (5) yields a result with a much smaller amount of computation than (7), and also allows the random variables to be ranked by the significance of their contributions to the resulting error. Ranking makes it possible to dispense with a screening experiment in the statistical tests and to exclude insignificant variables from consideration a priori. Equation (5) is easy to analyze mathematically in order to judge the nature of the contributions of the factors to the total variance. The partial contributions of the factors can be subdivided into those independent of A and those increasing with increasing optical density; therefore s_A as a function of A must be a monotonically increasing dependence without a minimum. When the experimental data are approximated by equation (5), partial contributions of the same nature will be mixed; for example, one such error can be mixed with the error of the blank experiment. On the other hand, statistical testing of the model by the Monte Carlo method makes it possible to reveal such important properties of the calibration graph as the distribution law (laws) of the errors, and also to estimate the rate of convergence of the sample estimates to the general ones. On the basis of the EAL such an analysis is impossible.

Description of the computational experiment

When constructing the simulation model of the calibration, we assume that the calibration series of solutions is prepared in volumetric flasks with a nominal capacity of 50 ml and a maximum error of ±0.05 ml. From 1 to 17 ml of the stock standard solution are added to the series of flasks with a pipetting error of about 1 %. The volume-measurement errors were evaluated according to the reference book. Aliquots are added in 1-ml increments, so that the series contains 17 solutions in total, whose optical densities cover the range from 0.1 to 1.7. Accordingly, in equation (2) the coefficient k = 5. The error of the blank experiment is taken at the level of 0.01 optical density units. The errors of the transmittance measurement, according to published data, depend only on the class of the instrument and lie in the range from 0.1 to 0.5 % T.

To tie the conditions of the computational experiment more closely to a laboratory experiment, we used data on the reproducibility of measurements of the optical densities of K2Cr2O7 solutions in the presence of 0.05 M H2SO4 on an SF-26 spectrophotometer. The authors approximate the experimental data on the interval A = 0.1...1.5 by a parabola:

s_repr·10³ = 7.9 - 3.53·A + 10.3·A². (8)

We were able to fit the calculations according to the theoretical equation (5) to the calculations according to the empirical equation (8) using Newton's optimization method. We found that equation (5) describes the experiment satisfactorily at s(T) = 0.12 %, s(A_bl) = 0.007 and s_r(V_a) = 1.1 %.

The independent error estimates given in the previous paragraph agree well with those found during the fitting. For the calculations according to equation (7), a program was created as an MS Excel spreadsheet. The most significant feature of our Excel program is the use of NORMINV(RAND()) to generate normally distributed errors, see equation (6). The special literature on statistical calculations in Excel describes in detail the Random Number Generation utility, which in many cases it is preferable to replace with functions of the NORMINV(RAND()) type; such a replacement is especially convenient when creating one's own Monte Carlo simulation programs.
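For readers who prefer a scripting environment, the following Python sketch is equivalent in spirit to the Excel computation described above; it implements equations (6) and (7) under the parameter values stated in this section, and the function names are ours:

```python
# Monte Carlo model of Eqs. (6)-(7): 50-ml flasks +/-0.05 ml, 1...17-ml aliquots with
# 1.1 % pipetting error, blank error 0.007, s(T) = 0.12 %, k = 5.
import numpy as np

rng = np.random.default_rng(1)

k, V_fl_nom = 5.0, 50.0
s_T, s_Abl, s_r_Va, s_Vfl = 0.0012, 0.007, 0.011, 0.05
V_a_nom = np.arange(1.0, 18.0)            # aliquots 1 ... 17 ml

def one_realization():
    """Play one realization of the calibration function, Eq. (7)."""
    V_a = V_a_nom * (1.0 + s_r_Va * rng.standard_normal(V_a_nom.size))   # Eq. (6)
    V_fl = V_fl_nom + s_Vfl * rng.standard_normal(V_a_nom.size)
    A_bl = s_Abl * rng.standard_normal(V_a_nom.size)                     # blank, M = 0
    A_true = k * V_a / V_fl + A_bl
    T = 10.0 ** (-A_true) + s_T * rng.standard_normal(V_a_nom.size)      # noisy transmittance
    return -np.log10(T)                                                  # A*, Eq. (7)

realizations = np.array([one_realization() for _ in range(100)])
s_A_mc = realizations.std(axis=0, ddof=1)         # statistical estimate of the spread
for c, s in zip(V_a_nom / V_fl_nom, s_A_mc):
    print(f"c' = {c:5.3f}   A = {k * c:4.2f}   s_A(MC) = {s:.4f}")
```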

Results and discussion

Before proceeding to the statistical tests, let us estimate the contributions of the terms of Eq. (5) to the total variance of the optical density. To do this, each term is normalized to the total variance. The calculations were performed at s(T) = 0.12 %, s(A_bl) = 0.007, s_r(V_a) = 1.1 % and s(V_fl) = 0.05 ml. The calculation results are shown in Fig. 1. We see that the contributions of the measurement errors of V_fl to the total variance can be neglected, whereas the contributions of the other volumetric quantity, V_a, dominate in the range of optical densities 0.8...1.2. This conclusion is not, however, of a general nature, since when measuring on a photometer with s(T) = 0.5 % the calibration errors, according to the calculation, are determined mainly by the scatter of A_bl and the scatter of T. Fig. 2 compares the relative errors of the optical density measurements predicted on the basis of the EAL (solid line) and by the Monte Carlo method (symbols). In the statistical tests, the error curve was reconstructed from 100 realizations of the calibration dependence (1700 values of optical density). We see that the two predictions are mutually consistent: the points are grouped uniformly around the theoretical curve. However, even with such rather impressive statistical material, complete convergence is not observed. In any case, the scatter does not allow the approximate nature of the EAL to be revealed (see the Introduction).

Fig. 1. Weighted contributions of the terms of equation (5) to the variance of A: 1 - for A_bl; 2 - for V_a; 3 - for T; 4 - for V_fl

Fig. 2. Error curve of the calibration graph

It is known from the theory of mathematical statistics that with interval estimation of the mathematical expectation of a random variable, the reliability of the estimation increases if the distribution law of this variable is known. Moreover, in the case of a normal distribution the estimate is the most efficient. Therefore, the study of the distribution law of the errors of the calibration graph is an important task. In such a study, first of all the hypothesis of normality of the spread of optical densities at the individual points of the graph is tested.

A simple way to test the main hypothesis is to calculate the skewness coefficients (a) and kurtosis coefficients (e) of the empirical distributions and to compare them with the critical values. The reliability of the statistical inference increases with the volume of the sample data. Fig. 3 shows the sequences of the coefficients for the 17 cross-sections of the calibration function. The coefficients were calculated from the results of 100 tests at each point. The critical values of the coefficients for our example are |a| = 0.72 and |e| = 0.23.
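A sketch of the mechanics of this check; the 100 values per point are simulated normal data, purely for illustration, and the resulting coefficients are to be compared with the critical values quoted above:

```python
# Skewness and kurtosis coefficients of the spread at individual calibration points.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(7)

for a_nominal in (0.1, 0.9, 1.7):                 # three example calibration points
    sample = a_nominal + 0.01 * rng.standard_normal(100)
    a = skew(sample)                              # skewness coefficient
    e = kurtosis(sample)                          # excess kurtosis, zero for a normal law
    print(f"A = {a_nominal:3.1f}  skewness = {a:+.3f}  kurtosis = {e:+.3f}")
```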

From Fig. 3 we can conclude that the scatter of values at the points of the graph does not, on the whole, contradict the normality hypothesis, since the sequences of coefficients show almost no preferred direction. The coefficients are randomly localized near the zero line (shown by the dotted line). For a normal distribution, as is known, the expectations of the skewness and kurtosis coefficients are zero. Judging by the fact that for all cross-sections the skewness coefficients are considerably lower than the critical value, we can speak with confidence of the symmetry of the distribution of the calibration errors. It is possible that the error distributions are slightly more peaked than the normal curve; this conclusion follows from the small positive shift of the central line of scatter of the kurtosis coefficients observed in Fig. 3. Thus, from the study of the model of the generalized calibration function of photometric analysis (2) by the Monte Carlo method, we can conclude that the distribution of the calibration errors is close to normal. Therefore, the calculation of confidence intervals for the results of photometric analysis using Student's coefficients can be considered quite justified.

Fig. 3. Kurtosis coefficients (1) and skewness coefficients (2) at the points of the calibration graph

In the stochastic modelling, the rate of convergence of the sample error curves (see Fig. 2) to the mathematical expectation of the curve was also estimated; for the mathematical expectation of the error curve we take the curve calculated from the EAL. The closeness of the results of statistical tests with different numbers n of calibration realizations to the theoretical curve is characterized by the coefficient of uncertainty 1 - R², i.e. the proportion of the variation in the sample that could not be described theoretically. We established that the dependence of the uncertainty coefficient on the number of realizations of the calibration function can be described by the empirical equation 1 - R² = -2.3/n + 1.6/√n - 0.1. From this equation it follows that at n = 213 one should expect almost complete coincidence of the theoretical and empirical error curves. Thus, a consistent estimate of the errors of photometric analysis can be obtained only on the basis of rather large statistical material.

Let us now consider the possibilities of the statistical-test method for predicting the results of regression analysis of the calibration curve and of the use of the curve to determine the concentrations of the photometered solutions. As a scenario we choose the measurement situation of routine analysis: the graph is constructed from single measurements of the optical densities of a series of standard solutions, and the concentration of the analyzed solution is found from the graph from 3-4 results of parallel measurements. When choosing the regression model, one should take into account that the spread of the optical densities at different points of the calibration curve is not the same, see equation (8). In the case of heteroscedastic scatter it is recommended to use the weighted least-squares (WLS) scheme. However, we did not find in the literature clear indications of the reasons why the classical least-squares (OLS) scheme, one of the conditions of applicability of which is the requirement of homoscedastic scatter, is less preferable. These reasons can be established by processing the same statistical material, obtained by the Monte Carlo method according to the routine-analysis scenario, with the two versions of least squares, classical and weighted.

As a result of the regression analysis of a single realization of the calibration function, the following OLS estimates were obtained: k = 4.979 with s_k = 0.023; the WLS estimates of the same characteristics are k = 5.000 with s_k = 0.016. The regressions were reconstructed from 17 standard solutions; the concentrations in the calibration series increased in arithmetic progression, and the optical densities varied just as uniformly in the range from 0.1 to 1.7. For WLS, the statistical weights of the points of the calibration curve were found from the variances calculated by equation (5).
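A sketch of how such OLS and WLS estimates of k can be compared, assuming a through-the-origin model A = k·c′ with weights w = 1/s²(A) from equation (5); the synthetic heteroscedastic data are generated here only to make the example runnable:

```python
# OLS vs WLS estimation of the calibration coefficient k for heteroscedastic data.
import numpy as np

rng = np.random.default_rng(3)
k_true = 5.0
c = np.arange(1.0, 18.0) / 50.0                    # c' = 0.02 ... 0.34
A_nom = k_true * c
s_A = np.sqrt((0.434 * 10.0 ** A_nom) ** 2 * 0.0012 ** 2
              + 0.007 ** 2 + A_nom ** 2 * 0.011 ** 2)   # spread from Eq. (5), simplified
A_obs = A_nom + s_A * rng.standard_normal(c.size)       # one "observed" calibration

k_ols = np.sum(c * A_obs) / np.sum(c ** 2)              # ordinary least squares
w = 1.0 / s_A ** 2
k_wls = np.sum(w * c * A_obs) / np.sum(w * c ** 2)      # weighted least squares

print(f"k_OLS = {k_ols:.3f}   k_WLS = {k_wls:.3f}   (true k = {k_true})")
```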

The variances of the estimates for the two methods are statistically indistinguishable by Fisher's test at the 1 % significance level. However, at the same significance level, the WLS estimate of k differs from the OLS estimate according to the t-criterion. The OLS estimate of the calibration-curve coefficient is biased relative to the actual value M(k) = 5.000, judging by the t-test at the 5 % significance level, whereas weighted least squares gives an estimate that does not contain a systematic error.

Let us now find out how neglect of the heteroscedasticity can affect the quality of the chemical analysis. The table shows the results of a simulation experiment on the analysis of 17 control samples of the colored substance with different concentrations; each analytical series included four solutions, i.e. four parallel determinations were made for each sample. Two different calibration dependences were used to process the results: one reconstructed by ordinary least squares and the other by weighted least squares. The control solutions were assumed to be prepared for analysis in exactly the same way as the calibration solutions.

From the table we see that the actual values of the concentrations of the control solutions, both in the case of WLS and in the case of OLS, do not go beyond the confidence intervals, i.e. the analysis results do not contain significant systematic errors. The marginal errors of the two methods do not differ statistically; in other words, both estimates have the same efficiency. From this we can conclude that in routine analyses the use of the simple unweighted least-squares scheme is fully justified. The use of WLS is preferable when the aim of the study is only the determination of the molar extinction. On the other hand, it should be borne in mind that our conclusions are of a statistical nature; it is likely that with an increase in the number of parallel determinations the hypothesis of unbiasedness of the OLS concentration estimates would not be confirmed, even though the systematic errors are insignificant from the practical point of view.

The rather high quality of the analysis based on the simple classical least-squares scheme that we have found seems especially unexpected if one takes into account that very strong heteroscedasticity is observed over the optical-density range 0.1 to 1.7. The degree of heterogeneity of the data can be judged from the weight function, which is well approximated by the polynomial w = 0.057A² - 0.193A + 0.173; it follows from this equation that at the extreme points of the calibration the statistical weights differ by more than a factor of 20. Note, however, that the calibration functions were reconstructed from 17 points of the graph, while only 4 parallel determinations were made in the analysis. Therefore, the significant difference between the OLS and WLS calibration functions found here, together with the insignificant difference between the results of analysis obtained with these functions, can be explained by the considerably different numbers of degrees of freedom available when constructing the statistical conclusions.

Conclusion

1. A new approach to stochastic modelling in photometric analysis is proposed, based on the Monte Carlo method and the law of error accumulation and implemented in an Excel spreadsheet.

2. On the basis of 100 realizations of the calibration dependence, it is shown that the predictions of errors by the analytical and statistical methods are mutually consistent.

3. The skewness and kurtosis coefficients along the calibration curve were studied. It is found that the calibration errors follow a distribution law close to normal.

4. The effect of the heteroscedasticity of the spread of optical densities during calibration on the quality of the analysis is considered. It is found that in routine analyses the use of the simple unweighted least-squares scheme does not lead to a noticeable loss of accuracy of the analysis results.

Table. Comparison of the results of determination of the concentrations of control solutions by the two methods

No. | c′, nominal | c′ found by OLS (P = 95%) | c′ found by WLS (P = 95%)
1 | 0.020 | 0.021 ± 0.002 | 0.021 ± 0.002
2 | 0.040 | 0.041 ± 0.001 | 0.041 ± 0.001
3 | 0.060 | 0.061 ± 0.003 | 0.061 ± 0.003
4 | 0.080 | 0.080 ± 0.004 | 0.080 ± 0.004
5 | 0.100 | 0.098 ± 0.004 | 0.098 ± 0.004
6 | 0.120 | 0.122 ± 0.006 | 0.121 ± 0.006
7 | 0.140 | 0.140 ± 0.006 | 0.139 ± 0.006
8 | 0.160 | 0.163 ± 0.003 | 0.162 ± 0.003
9 | 0.180 | 0.181 ± 0.006 | 0.180 ± 0.006
10 | 0.200 | 0.201 ± 0.002 | 0.200 ± 0.002
11 | 0.220 | 0.219 ± 0.008 | 0.218 ± 0.008
12 | 0.240 | 0.242 ± 0.002 | 0.241 ± 0.002
13 | 0.260 | 0.262 ± 0.008 | 0.261 ± 0.008
14 | 0.280 | 0.281 ± 0.010 | 0.280 ± 0.010
15 | 0.300 | 0.307 ± 0.015 | 0.306 ± 0.015
16 | 0.320 | 0.325 ± 0.013 | 0.323 ± 0.013
17 | 0.340 | 0.340 ± 0.026 | 0.339 ± 0.026

Literature

1. Bernstein, I.Ya. Spectrophotometric analysis in organic chemistry / I.Ya. Bernstein, Yu.L. Kaminsky. - L.: Chemistry, 1986. - 200 p.

2. Bulatov, M.I. A practical guide to photometric methods of analysis / M.I. Bulatov, I.P. Kalinkin. - L.: Chemistry, 1986. - 432 p.

3. Gmurman, V.E. Probability theory and mathematical statistics / V.E. Gmurman. - M.: Higher school, 1977. - 470 p.

4. Pravdin, P.V. Laboratory instruments and equipment made of glass / P.V. Pravdin. - M.: Chemistry, 1988. - 336 p.

5. Makarova, N.V. Statistics in Excel / N.V. Makarova, V.Ya. Trofimets. - M.: Finance and statistics, 2002. - 368 p.

PREDICTION OF ERRORS IN PHOTOMETRY WITH THE USE OF ACCUMULATION OF ERRORS LAW AND MONTE CARLO METHOD

In a computational experiment combining the error accumulation law and the Monte Carlo method, the influence of solution-preparation errors, blank-experiment errors and optical-transmission measurement errors on the metrological performance of photometric analysis has been studied. It has been shown that the results of prediction by the analytical and statistical methods are mutually consistent. A feature of the Monte Carlo method is the possibility of predicting the distribution law of the errors in photometry. For a routine-analysis scenario, the influence of the heteroscedasticity of the spread along the calibration curve on the quality of the analysis has been studied.

Keywords: photometric analysis, accumulation of errors law, calibration curve, metrological performance, Monte Carlo method, stochastic modeling.

Golovanov Vladimir Ivanovich - Dr. Sc. (Chemistry), Professor, Head of the Analytical Chemistry Subdepartment, South Ural State University.


Email: [email protected]

Danilina Elena Ivanovna - PhD (Chemistry), Associate Professor, Analytical Chemistry Subdepartment, South Ural State University.
