Absolute and relative measurement errors. Measurement error

Physical quantities are characterized by the related concepts of accuracy and error. There is a saying that measurement is the path to knowledge: by measuring we can find out the height of a house, the length of a street, and much else.

Introduction

Let's clarify what it means to "measure a quantity." To measure a quantity is to compare it with a homogeneous quantity that has been taken as the unit.

Liters are used to measure volume, grams to measure mass. To make calculations more convenient, the SI system, the international system of units, was introduced.

In SI, length is measured in meters, mass in kilograms, volume in cubic meters, time in seconds, and speed in meters per second.

A physical quantity does not always have to be measured directly; it can often be calculated from a formula. For example, to find the average speed, divide the distance traveled by the time spent on the journey.

Units that are ten, one hundred, one thousand or more times larger than the accepted base units are called multiple units.

The name of each prefix corresponds to its multiplier number:

  1. Deca.
  2. Hecto.
  3. Kilo.
  4. Mega.
  5. Giga.
  6. Tera.

In physical science, a power of 10 is used to write such factors. For example, a million is denoted as 10⁶.

An ordinary ruler is graduated in centimeters. A centimeter is 100 times smaller than a meter, so a 15 cm ruler is 0.15 m long.

A ruler is the simplest instrument for measuring length. More complex devices include a thermometer for measuring temperature, a hygrometer for determining humidity, and an ammeter for measuring the strength of an electric current.

How accurate will the measurements be?

Take a ruler and an ordinary pencil. Our task is to measure the length of the pencil.

First determine the division value of the instrument's scale. Find two neighbouring labelled marks on the scale, for example the strokes marked "1" and "2".

Count how many divisions lie between these marks; counted correctly, there are 10. Subtract the smaller number from the larger one and divide by the number of divisions between them:

(2-1)/10 = 0.1 (cm)

So the division value of the ruler is 0.1 cm, or 1 mm. The division value of any measuring device is determined in the same way.
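As a quick check, the division value can be computed in a couple of lines; a minimal sketch using the numbers from the ruler example above:

```python
# Division value: difference of two neighbouring labelled marks
# divided by the number of divisions between them.
upper_mark = 2.0          # cm
lower_mark = 1.0          # cm
divisions_between = 10

division_value = (upper_mark - lower_mark) / divisions_between
print(division_value)     # 0.1 cm, i.e. 1 mm
```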

Now let's apply this to a pencil whose length is slightly less than 10 cm. If the ruler had no small divisions, we would have to conclude that the pencil is about 10 cm long. The inaccuracy of such an approximate value is the measurement error: it indicates how much inexactness has to be tolerated in the measurement.

To determine the pencil's length more precisely, a scale with a smaller division value is needed: the smaller the division value, the greater the measurement accuracy and the smaller the error.

Absolutely exact measurements cannot be made in any case, but the error should not exceed the division value.

It is customary to take the measurement error equal to half the division value of the instrument used.

Having measured the pencil as 9.7 cm, we can state its error bounds: the true length lies in the interval 9.65–9.75 cm.
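The same bookkeeping in code, assuming the 0.1 cm division value found above (an illustrative sketch, not a metrological procedure):

```python
reading = 9.7                 # cm, value read from the ruler
division_value = 0.1          # cm
error = division_value / 2    # half the division value

lower = reading - error
upper = reading + error
print(f"{reading} cm lies in [{lower:.2f}, {upper:.2f}] cm")  # [9.65, 9.75] cm
```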

The result of such a measurement is written as:

A = a ± Δa

A is the quantity being measured;

a is the value obtained as the measurement result;

Δa is the absolute error.

When quantities with errors are added or subtracted, the absolute error of the result equals the sum of the absolute errors of the individual quantities.
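A short sketch of this rule for two measured quantities (the numbers are invented for illustration):

```python
# When adding or subtracting measured values, the absolute error limits add up.
a, da = 12.4, 0.05   # e.g. a length in cm and its absolute error
b, db = 3.1, 0.05

sum_value, sum_error = a + b, da + db
diff_value, diff_error = a - b, da + db
print(f"sum  = {sum_value:.1f} ± {sum_error:.2f} cm")
print(f"diff = {diff_value:.1f} ± {diff_error:.2f} cm")
```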

Introduction to the concept

Depending on the way it is expressed, the following kinds of error are distinguished:

  • Absolute.
  • Relative.
  • Given.

The absolute measurement error is denoted by the capital Greek letter Δ (delta). It is defined as the difference between the measured and the actual value of the physical quantity being measured.

The absolute measurement error is expressed in the units of the quantity being measured.

When measuring mass, for example, it is expressed in kilograms. By itself, the absolute error is not a measure of measurement accuracy.

How to calculate the error of direct measurements?

There are standard ways to represent and calculate errors. To determine a physical quantity with the required accuracy, one has to understand that the absolute measurement error itself can never be found exactly; only its boundary (limiting) value can be calculated.

So even when the term "absolute error" is used loosely, it refers to this boundary value. Absolute and relative measurement errors are denoted by the same letter in different forms: Δ for the absolute error and δ for the relative one.

When measuring length, the absolute error is expressed in the same units as the length itself. The relative error is dimensionless, since it is the ratio of the absolute error to the measurement result; it is usually expressed as a percentage or as a fraction.

Absolute and relative measurement errors are calculated in different ways, depending on the physical quantity and the method of measurement.

The concept of direct measurement

The absolute and relative error of direct measurements depend on the accuracy class of the device and the ability to determine the weighing error.

Before talking about how the error is calculated, it is necessary to clarify the definitions. A direct measurement is a measurement in which the result is directly read from the instrument scale.

When we use a thermometer, ruler, voltmeter or ammeter, we always carry out direct measurements, since we use a device with a scale directly.

Two factors contribute to this error:

  • The instrument error.
  • The reading error.

The absolute error limit for direct measurements will be equal to the sum of the error that the device shows and the error that occurs during the reading process.

Δ = Δ(instr.) + Δ(reading)
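In code this error budget is just a sum; a minimal sketch, using the medical-thermometer numbers from the example that follows:

```python
def direct_measurement_error(instrument_error, division_value):
    """Upper bound on the error of a direct reading:
    instrument error plus half the scale division (reading error)."""
    return instrument_error + division_value / 2

# Medical thermometer: instrument error 0.1 °C, division value 0.1 °C
print(f"{direct_measurement_error(0.1, 0.1):.2f} °C")  # 0.15 °C
```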

Medical thermometer example

The instrument error is indicated on the instrument itself; for a medical thermometer it is 0.1 degrees Celsius. The reading error is half the division value:

Δ(reading) = C/2

If the division value is 0.1 degrees, then for a medical thermometer, calculations can be made:

Δ = 0.1 °C + 0.1 °C / 2 = 0.15 °C

On the back of the scale of another thermometer, the technical specification states that for correct measurements the thermometer must be immersed up to its back end. The instrument accuracy is not specified, so the only remaining error is the reading error.

If the division value of this thermometer's scale is 2 °C, the temperature can be measured with an accuracy of 1 °C. This is how the limit of the permissible absolute measurement error is obtained.

A special system for calculating accuracy is used in electrical measuring instruments.

Accuracy of electrical measuring instruments

The accuracy of such devices is specified by a value called the accuracy class, denoted by the Greek letter γ (gamma). To determine the absolute and relative measurement errors, you need to know the accuracy class of the device, which is indicated on its scale.

Take, for example, an ammeter whose scale shows an accuracy class of 0.5. It is suitable for measurements of both direct and alternating current and belongs to the electromagnetic type of instruments.

This is a fairly accurate device. A school voltmeter, by comparison, has an accuracy class of 4. This value will be needed for further calculations.

Application of knowledge

Thus, Δc = c(max) × γ / 100

This formula will be used for specific examples. Let's use a voltmeter and find the error in measuring the voltage that the battery gives.

Connect the battery directly to the voltmeter, having first checked that the needle is at zero. When the device was connected, the needle settled at a reading of 4.2 V. The situation can be described as follows:

  1. The full-scale (maximum) value of this voltmeter is U(max) = 6 V.
  2. Accuracy class γ = 4.
  3. Measured reading U(o) = 4.2 V.
  4. Division value C = 0.2 V.

Using these data, the absolute measurement error is calculated as follows:

ΔU = ΔU(instr.) + C/2

ΔU(instr.) = U(max) × γ / 100

ΔU(instr.) = 6 V × 4/100 = 0.24 V

This is the error of the device.

The calculation of the absolute measurement error in this case will be performed as follows:

ΔU = 0.24 V + 0.1 V = 0.34 V
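The same voltmeter calculation as a short script (the values are taken from the example above; the variable names are mine):

```python
u_max = 6.0            # V, full-scale value of the voltmeter
gamma = 4              # accuracy class, %
division_value = 0.2   # V
reading = 4.2          # V, value read from the scale

instrument_error = u_max * gamma / 100        # 0.24 V
reading_error = division_value / 2            # 0.10 V
total_error = instrument_error + reading_error

print(f"U = {reading} V ± {total_error:.2f} V")  # U = 4.2 V ± 0.34 V
```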

Using the considered formula, you can easily find out how to calculate the absolute measurement error.

There is also a rule for rounding errors; it is applied both to the absolute error limit and to the relative one.

Learning to determine the weighing error

Weighing is a special case of direct measurement, since a lever balance has no scale of its own. Let's learn how to determine the error of such a measurement. The accuracy of mass measurement is affected by the accuracy of the weights and by the quality of the balance itself.

We use a lever balance with a set of weights, which are placed on the right-hand pan. A ruler will serve as the object to be weighed.

Before starting the experiment, the balance must be zeroed. The ruler is placed on the left pan.

The mass will be equal to the sum of the installed weights. Let us determine the measurement error of this quantity.

Δm = Δm(scales) + Δm(weights)

The mass measurement error thus consists of two terms, one associated with the balance and one with the weights. To determine each of them, manufacturers supply balances and weights with documentation (accuracy tables) from which the errors can be read.

Application of tables

Let's use a standard table. The error of the balance depends on the mass placed on it: the larger the mass, the larger the error.

Even if you put a very light body, there will be an error. This is due to the process of friction occurring in the axles.

The second table refers to the set of weights and shows that each weight has its own mass error: the 10 g weight has an error of 1 mg, as does the 20 g weight. The total is the sum of the errors of the weights actually used, taken from the table.

It is convenient to write the masses and their errors in two rows, one under the other: the smaller the weight, the smaller its error.
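A sketch of that bookkeeping; only the 10 g and 20 g error values come from the text above, the rest of the table is a hypothetical placeholder:

```python
# Each weight contributes its own error; the total weight error is their sum.
# Values in milligrams; the 10 g and 20 g entries follow the text above,
# the 100 g entry is a placeholder standing in for a real factory table.
weight_errors_mg = {100: 2, 20: 1, 10: 1}    # weight in g -> error in mg

weights_used = [20, 10]                      # weights placed on the pan, g
mass_g = sum(weights_used)
error_mg = sum(weight_errors_mg[w] for w in weights_used)
print(f"m = {mass_g} g ± {error_mg} mg (weights only, scale error not included)")
```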

Results

To summarize: the absolute error cannot be determined exactly; only its boundary value can be established, using the formulas described above. This material is studied at school in grades 8-9. With this knowledge it is possible to solve problems on determining absolute and relative errors.


Let some quantity a be measured n times under the same conditions. The measurements give a set of n different numbers: a₁, a₂, …, aₙ.

The absolute error is a dimensional quantity. Among the n values of the absolute errors, both positive and negative ones necessarily occur.

For the most probable value of the quantity a, one usually takes the mean value of the measurement results:

⟨a⟩ = (a₁ + a₂ + … + aₙ) / n.

The larger the number of measurements, the closer the mean value is to the true value.

The absolute error of the i-th measurement is the quantity

Δaᵢ = ⟨a⟩ − aᵢ.

The relative error of the i-th measurement is the quantity εᵢ = Δaᵢ / ⟨a⟩.

The relative error is a dimensionless quantity. It is usually expressed as a percentage, by multiplying εᵢ by 100%. The relative error characterizes the measurement accuracy.

The average absolute error is defined as

⟨Δa⟩ = (|Δa₁| + |Δa₂| + … + |Δaₙ|) / n.

We emphasize the need to sum the absolute values (moduli) of the quantities Δaᵢ; otherwise the result would be identically zero.

The average relative error is the quantity

⟨ε⟩ = ⟨Δa⟩ / ⟨a⟩.

For a large number of measurements these averages become reliable estimates of the result and of its error.
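A minimal sketch of these estimates for a set of repeated measurements (the numbers are invented for illustration):

```python
measurements = [9.68, 9.72, 9.70, 9.74, 9.66]      # e.g. pencil length in cm

n = len(measurements)
mean = sum(measurements) / n                        # most probable value <a>
abs_errors = [abs(x - mean) for x in measurements]  # |Δa_i|
mean_abs_error = sum(abs_errors) / n                # <Δa>
mean_rel_error = mean_abs_error / mean              # <ε>

print(f"<a>  = {mean:.3f} cm")
print(f"<Δa> = {mean_abs_error:.3f} cm")
print(f"<ε>  = {mean_rel_error * 100:.2f} %")
```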

Relative error can be considered as the value of the error per unit of the measured quantity.

The accuracy of measurements is judged on the basis of a comparison of the errors of the measurement results. Therefore, the measurement errors are expressed in such a form that, in order to assess the accuracy, it would be sufficient to compare only the errors of the results, without comparing the sizes of the measured objects or knowing these sizes very approximately. It is known from practice that the absolute error of measuring the angle does not depend on the value of the angle, and the absolute error of measuring the length depends on the value of the length. The larger the length value, the greater the absolute error for this method and measurement conditions. Therefore, according to the absolute error of the result, it is possible to judge the accuracy of the angle measurement, but it is impossible to judge the accuracy of the length measurement. The expression of the error in relative form makes it possible to compare, in certain cases, the accuracy of angular and linear measurements.


Basic concepts of probability theory. Random error.

A random error is the component of the measurement error that changes randomly with repeated measurements of the same quantity.

When repeated measurements of the same constant, unchanging quantity are carried out with the same care and under the same conditions, we get measurement results - some of them differ from each other, and some of them coincide. Such discrepancies in the measurement results indicate the presence of random error components in them.

Random error arises from the simultaneous action of many sources, each of which in itself has an imperceptible effect on the measurement result, but the total effect of all sources can be quite strong.

Random errors are an inevitable consequence of any measurement and are due to:

a) inaccurate readings on the scale of instruments and tools;

b) not identical conditions for repeated measurements;

c) random changes in external conditions (temperature, pressure, force field, etc.) that cannot be controlled;

d) all other influences on the measurements whose causes are unknown to us. The magnitude of the random error can be reduced by repeating the experiment many times and by appropriate mathematical processing of the results.

A random error can take on different absolute values, which cannot be predicted for a given measurement act. This error can equally be both positive and negative. Random errors are always present in an experiment. In the absence of systematic errors, they cause repeated measurements to scatter about the true value.

Let us assume that with the help of a stopwatch we measure the period of oscillation of the pendulum, and the measurement is repeated many times. Errors in starting and stopping the stopwatch, an error in the value of the reference, a small uneven movement of the pendulum - all this causes a scatter in the results of repeated measurements and therefore can be classified as random errors.

If there are no other errors, then some results will be somewhat overestimated, while others will be slightly underestimated. But if, in addition to this, the clock is also behind, then all the results will be underestimated. This is already a systematic error.

Some factors can cause both systematic and random errors at the same time. So, by turning the stopwatch on and off, we can create a small irregular spread in the moments of starting and stopping the clock relative to the movement of the pendulum and thereby introduce a random error. But if, in addition, every time we rush to turn on the stopwatch and are somewhat late turning it off, then this will lead to a systematic error.

Random errors are caused by a parallax error when reading the divisions of the instrument scale, shaking of the building foundation, the influence of slight air movement, etc.

Although it is impossible to exclude random errors of individual measurements, the mathematical theory of random phenomena makes it possible to reduce the influence of these errors on the final measurement result. It will be shown below that for this it is necessary to make not one, but several measurements, and the smaller the error value we want to obtain, the more measurements need to be taken.

Due to the fact that the occurrence of random errors is inevitable and unavoidable, the main task of any measurement process is to bring the errors to a minimum.

The theory of errors is based on two main assumptions, confirmed by experience:

1. With a large number of measurements, random errors of the same magnitude but opposite sign, i.e. errors that increase and that decrease the result, occur equally often.

2. Large absolute errors are less common than small ones, so the probability of an error decreases as its value increases.

The behavior of random variables is described by statistical regularities, which are the subject of probability theory. The statistical definition of the probability wᵢ of event i is the ratio

wᵢ = nᵢ / n,

where n is the total number of experiments and nᵢ is the number of experiments in which event i occurred. Here the total number of experiments should be very large (n → ∞). With a large number of measurements, random errors obey a normal distribution (the Gaussian distribution), whose main features are the following:

1. The greater the deviation of the value of the measured value from the true value, the less the probability of such a result.

2. Deviations in both directions from the true value are equally probable.

From the above assumptions it follows that, to reduce the influence of random errors, the quantity must be measured several times. Suppose we are measuring some quantity x and that n measurements x₁, x₂, …, xₙ have been made by the same method and with the same care. It can be expected that the number dn of results lying in a fairly narrow interval from x to x + dx is proportional to:

  • the width dx of the interval;
  • the total number of measurements n.

The probability dw(x) that a value lies in the interval from x to x + dx is then defined as

dw(x) = dn/n = f(x) dx

(in the limit of the number of measurements n → ∞).

The function f(x) is called the distribution function, or probability density.

As a postulate of the theory of errors, it is assumed that the results of direct measurements and their random errors, with a large number of them, obey the law of normal distribution.

The distribution function of a continuous random variable x, found by Gauss, has the following form:

f(x) = 1/(σ√(2π)) · exp(−(x − μ)² / (2σ²)), where μ and σ are the distribution parameters.

The parameter μ of the normal distribution is equal to the mean value ⟨x⟩ of the random variable, which, for an arbitrary known distribution function, is determined by the integral

⟨x⟩ = ∫ x f(x) dx (taken from −∞ to +∞).

Thus the value μ is the most probable value of the measured quantity x, i.e. its best estimate.

The parameter σ² of the normal distribution is equal to the variance D of the random variable, which in general is determined by the integral

D = σ² = ∫ (x − ⟨x⟩)² f(x) dx (taken from −∞ to +∞).

The square root of the variance is called the standard deviation of the random variable.

The mean deviation (error) ⟨σ⟩ of the random variable is determined from the distribution function as

⟨σ⟩ = ∫ |x − μ| f(x) dx (taken from −∞ to +∞).

The average measurement error ⟨σ⟩ calculated from the Gaussian distribution function is related to the standard deviation σ as follows:

⟨σ⟩ ≈ 0.8 σ.
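The relation ⟨σ⟩ ≈ 0.8σ (exactly √(2/π)·σ ≈ 0.798σ for a normal distribution) can be checked numerically; a minimal sketch using only the standard library:

```python
import random

# Draw many normally distributed "measurement errors" with a known sigma
# and compare the mean absolute deviation with 0.8 * sigma.
random.seed(0)
sigma = 2.0
samples = [random.gauss(0.0, sigma) for _ in range(100_000)]

mean_abs_dev = sum(abs(x) for x in samples) / len(samples)
print(f"mean |deviation| = {mean_abs_dev:.3f}")   # close to 0.8 * sigma
print(f"0.8 * sigma      = {0.8 * sigma:.3f}")
```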

The parameter σ is related to the height of the distribution curve at its maximum as follows:

f(μ) = 1/(σ√(2π)).

This expression allows the standard deviation σ to be found if the normal distribution curve is available.

The graph of the Gaussian function is shown in the figures. The function f(x) is symmetric about the ordinate drawn at the point x = μ; it has its maximum at x = μ and inflection points at x = μ ± σ. The variance thus characterizes the width of the distribution function, i.e. how widely the values of the random variable are scattered about the true value. The more accurate the measurements, the closer the individual results lie to the true value, i.e. the smaller σ is. Figure A shows the function f(x) for three values of σ.

The area of the figure bounded by the curve f(x) and vertical lines drawn at the points x₁ and x₂ (Fig. B) is numerically equal to the probability that the measurement result falls within the interval Δx = x₂ − x₁; this probability is called the confidence level. The area under the entire curve f(x) is equal to the probability of the random variable falling anywhere in the interval from −∞ to +∞, i.e.

∫ f(x) dx = 1 (taken from −∞ to +∞),

since the probability of a certain event is equal to one.

Using the normal distribution, error theory poses and solves two main problems. The first is the assessment of the accuracy of individual measurements. The second is the assessment of the accuracy of the arithmetic mean of the measurement results.

Confidence interval. Student's coefficient

Probability theory allows one to determine the size of the interval within which the results of individual measurements lie with a known probability w. This probability is called the confidence level, and the corresponding interval (⟨x⟩ ± Δx)_w is called the confidence interval. The confidence level is also equal to the relative proportion of results that fall within the confidence interval.

If the number of measurements n is large enough, the confidence probability expresses the proportion of the total number n of measurements in which the measured value fell within the confidence interval. Each confidence level w corresponds to its own confidence interval: the wider the confidence interval, the more likely it is that a result falls within it. Probability theory establishes a quantitative relationship between the width of the confidence interval, the confidence probability, and the number of measurements.

If we choose as the confidence interval the one corresponding to the average error, that is Δa = ⟨Δa⟩, then for a sufficiently large number of measurements it corresponds to a confidence probability of about 60%. As the number of measurements decreases, the confidence probability corresponding to such a confidence interval (⟨a⟩ ± ⟨Δa⟩) also decreases.

Thus, to estimate the confidence interval of a random variable, one can use the value of the average error ⟨Δa⟩.

To characterize the magnitude of a random error, it is necessary to state two numbers: the magnitude of the confidence interval and the magnitude of the confidence probability. Specifying only the magnitude of the error without the corresponding confidence probability is largely meaningless.

If the average measurement error ⟨σ⟩ is known, the confidence interval is written as (⟨x⟩ ± ⟨σ⟩)_w and is determined with confidence probability w = 0.57.

If the standard deviation σ of the distribution of measurement results is known, the interval has the form (⟨x⟩ ± t_w σ)_w, where t_w is a coefficient that depends on the confidence probability and is calculated from the Gaussian distribution.

The most commonly used values of Δx are shown in Table 1.
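For a large number of measurements the coefficient t_w can be read directly from the normal distribution; a sketch using the standard library (for a small number of measurements, Student's coefficients from a table would be used instead):

```python
from statistics import NormalDist

def gaussian_coefficient(confidence):
    """Two-sided coefficient t_w such that the interval <x> ± t_w * sigma
    contains the result with the given confidence probability."""
    return NormalDist().inv_cdf((1 + confidence) / 2)

for w in (0.68, 0.90, 0.95, 0.997):
    print(f"w = {w:5.3f}  ->  t_w ≈ {gaussian_coefficient(w):.2f}")
```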

In physics and other sciences it is very often necessary to measure various quantities (for example, length, mass, time, temperature, electrical resistance, etc.).

Measurement is the process of finding the value of a physical quantity with the help of special technical means, measuring instruments.

A measuring instrument is a device by which a measured quantity is compared with a physical quantity of the same kind taken as the unit of measurement.

There are direct and indirect measurement methods.

Direct measurement methods - methods in which the values ​​of the quantities being determined are found by direct comparison of the measured object with the unit of measurement (standard). For example, the length of a body measured by a ruler is compared with a unit of length - a meter, the mass of a body measured by scales is compared with a unit of mass - a kilogram, etc. Thus, as a result of direct measurement, the determined value is obtained immediately, directly.

Indirect measurement methods - methods in which the values of the determined quantities are calculated from the results of direct measurements of other quantities with which they are connected by a known functional dependence. For example, determining the circumference of a circle based on the results of measuring the diameter or determining the volume of a body based on the results of measuring its linear dimensions.

Because of the imperfection of measuring instruments and of our sense organs, the influence of external factors on the measuring equipment and on the object of measurement, and other causes, all measurements can be made only to a certain degree of accuracy; the measurement results therefore give not the true value of the measured quantity but only an approximate one. If, for example, body weight is determined with an accuracy of 0.1 mg, this means that the found weight differs from the true weight by less than 0.1 mg.

Accuracy of measurements - a characteristic of the quality of measurements, reflecting the proximity of the measurement results to the true value of the measured quantity.

The smaller the measurement errors, the greater the measurement accuracy. The measurement accuracy depends on the instruments used in the measurements and on the general measurement methods. It is absolutely useless to try to go beyond this limit of accuracy when making measurements under given conditions. It is possible to minimize the impact of causes that reduce the accuracy of measurements, but it is impossible to completely get rid of them, that is, more or less significant errors (errors) are always made during measurements. To increase the accuracy of the final result, any physical measurement must be made not once, but several times under the same experimental conditions.

As a result of the i-th measurement (i is the measurement number) of the quantity X, an approximate number Xᵢ is obtained, which differs from the true value X_true by some amount ΔXᵢ = |Xᵢ − X_true|, called the error. The true error is not known to us, since we do not know the true value of the measured quantity. The true value of the measured physical quantity lies in the interval

Xᵢ − ΔX < X_true < Xᵢ + ΔX

where Xᵢ is the value of X obtained in the measurement (the measured value) and ΔX is the absolute error in determining X.

The absolute measurement error ΔX is the absolute value of the difference between the true value of the measured quantity X_true and the measurement result Xᵢ: ΔX = |X_true − Xᵢ|.

The relative measurement error δ (which characterizes the measurement accuracy) is numerically equal to the ratio of the absolute measurement error ΔX to the true value of the measured quantity X_true, often expressed as a percentage: δ = (ΔX / X_true) · 100%.

Measurement errors can be divided into three classes: systematic, random, and gross (blunders).

A systematic error is one that remains constant or changes regularly (according to some functional dependence) with repeated measurements of the same quantity. Such errors arise from the design features of the measuring instruments, shortcomings of the chosen measurement method, oversights by the experimenter, the influence of external conditions, or a defect in the object being measured.

Every measuring device has some inherent systematic error that cannot be eliminated, though its order of magnitude can be taken into account. Systematic errors either consistently increase or consistently decrease the measurement results, i.e. they have a constant sign. For example, if during weighing one of the weights has a mass 0.01 g larger than marked on it, the measured body mass will be overestimated by this amount, no matter how many measurements are made. Sometimes systematic errors can be accounted for or eliminated, sometimes they cannot. Irreducible errors include, for example, instrument errors, of which we can only say that they do not exceed a certain value.

Random errors are errors that change in magnitude and sign in an unpredictable way from one trial to the next. The appearance of random errors is due to the action of many diverse and uncontrollable causes.

For example, when weighing with a balance, such causes may be air currents, settled dust particles, differences in friction in the left and right suspensions of the pans, and so on. Repeated measurements of the same quantity therefore give different values: X₁, X₂, X₃, …, Xᵢ, …, Xₙ, where Xᵢ is the result of the i-th measurement. No regularity can be established between the results, so the result of the i-th measurement is treated as a random variable. Random errors can noticeably affect a single measurement, but with repeated measurements they obey statistical laws and their influence on the results can be taken into account or significantly reduced.

Misses and blunders are excessively large errors that clearly distort the measurement result. They are most often caused by incorrect actions of the experimenter (for example, through inattention the instrument reading "212" is written down as a completely different number, "221"). Measurements containing misses and gross errors should be discarded.

According to the accuracy required, measurements can be made by technical or by laboratory methods.

When using technical methods, the measurement is carried out once. In this case, they are satisfied with such an accuracy at which the error does not exceed some specific, predetermined value, determined by the error of the measuring equipment used.

With laboratory measurement methods, it is required to indicate the value of the measured quantity more accurately than its single measurement by the technical method allows. In this case, several measurements are made and the arithmetic mean of the obtained values ​​is calculated, which is taken as the most reliable (true) value of the measured value. Then, the accuracy of the measurement result is assessed (accounting for random errors).

From the possibility of carrying out measurements by two methods, the existence of two methods for assessing the accuracy of measurements follows: technical and laboratory.

Measurement error

Measurement error is an estimate of the deviation of the measured value of a quantity from its true value. The measurement error is a characteristic (measure) of measurement accuracy.

  • Reduced error - relative error expressed as the ratio of the absolute error of the measuring instrument to a conventionally accepted normalizing value, which is constant over the entire measurement range or over part of it. It is calculated by the formula

γ = (ΔX / Xₙ) · 100%,

where Xₙ is the normalizing value, which depends on the type of measuring instrument scale and is determined by its graduation:

  • if the scale of the device is one-sided, i.e. the lower measurement limit is zero, then Xₙ is taken equal to the upper measurement limit;
  • if the scale of the device is double-sided, then the normalizing value equals the width of the instrument's measurement range.

The reduced error is a dimensionless quantity (it is usually expressed as a percentage).
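A hedged sketch of the reduced-error calculation, with the normalizing value Xₙ chosen as described above (the function name and example numbers are mine):

```python
def reduced_error(abs_error, scale_min, scale_max):
    """Reduced error in percent: absolute error divided by the normalizing
    value X_n (upper limit for a one-sided scale, full range width for a
    double-sided scale)."""
    if scale_min == 0:                      # one-sided scale
        x_n = scale_max
    else:                                   # double-sided scale
        x_n = scale_max - scale_min
    return abs_error / x_n * 100

# Example: a 0..6 V voltmeter with an absolute error of 0.24 V
print(f"{reduced_error(0.24, 0, 6):.1f} %")   # 4.0 %
```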

By cause of occurrence

  • Instrumental errors - errors determined by the errors of the measuring instruments used, caused by imperfections in the operating principle, inaccurate scale graduation, or poor readability of the device.
  • Methodological errors- errors due to the imperfection of the method, as well as simplifications underlying the methodology.
  • Subjective / operator / personal errors- errors due to the degree of attentiveness, concentration, preparedness and other qualities of the operator.

In engineering, instruments are used to measure only with a certain predetermined accuracy - the basic error allowed under normal operating conditions for the given device.

If the device is operated under conditions other than normal, an additional error arises, increasing the overall error of the device. Additional errors include the temperature error, caused by deviation of the ambient temperature from normal, the installation error, due to deviation of the device from its normal operating position, and so on. A temperature of 20 °C is taken as the normal ambient temperature, and 101.325 kPa as normal atmospheric pressure.

A generalized characteristic of measuring instruments is the accuracy class, determined by the limit values of the permissible basic and additional errors, as well as other parameters that affect the accuracy of measuring instruments; the values of these parameters are established by the standards for particular types of measuring instruments. The accuracy class of measuring instruments characterizes their accuracy properties, but it is not a direct indicator of the accuracy of measurements performed with these instruments, since the accuracy also depends on the measurement method and the conditions under which it is carried out. Measuring instruments whose limits of permissible basic error are given in the form of reduced basic (relative) errors are assigned accuracy classes selected from the following series of numbers: (1; 1.5; 2.0; 2.5; 3.0; 4.0; 5.0; 6.0)·10ⁿ, where n = 1; 0; −1; −2, etc.

According to the nature of the manifestation

  • random error- error, changing (in magnitude and in sign) from measurement to measurement. Random errors can be associated with the imperfection of devices (friction in mechanical devices, etc.), shaking in urban conditions, with the imperfection of the object of measurement (for example, when measuring the diameter of a thin wire, which may not have a completely round cross section as a result of the imperfection of the manufacturing process ), with the features of the measured quantity itself (for example, when measuring the number of elementary particles passing per minute through a Geiger counter).
  • Systematic error- an error that changes over time according to a certain law (a special case is a constant error that does not change over time). Systematic errors can be associated with instrument errors (incorrect scale, calibration, etc.) not taken into account by the experimenter.
  • Progressive (drift) error is an unpredictable error that changes slowly over time. It is a non-stationary random process.
  • Gross error (miss)- an error resulting from an oversight of the experimenter or a malfunction of the equipment (for example, if the experimenter incorrectly read the division number on the scale of the device, if there was a short circuit in the electrical circuit).

According to the method of measurement

  • Error of direct measurements
  • Error of indirect measurements - the error of a calculated (not directly measured) quantity:

If F = F(x₁, x₂, …, xₙ), where xᵢ are directly measured independent quantities with errors Δxᵢ, then:

ΔF = √( Σᵢ (∂F/∂xᵢ)² · (Δxᵢ)² ).
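Assuming the quadrature-sum propagation rule written above, here is a sketch that estimates ΔF numerically, with the partial derivatives approximated by central differences (the function names and the cylinder example are mine):

```python
import math

def indirect_error(f, values, errors, step=1e-6):
    """Error of F(x1..xn): quadrature sum of (dF/dxi * Δxi), with the
    partial derivatives estimated by central finite differences."""
    total = 0.0
    for i, (x, dx) in enumerate(zip(values, errors)):
        up = list(values); up[i] = x + step
        dn = list(values); dn[i] = x - step
        dF_dxi = (f(*up) - f(*dn)) / (2 * step)
        total += (dF_dxi * dx) ** 2
    return math.sqrt(total)

# Example: volume of a cylinder V = pi * r^2 * h from measured r and h
def volume(r, h):
    return math.pi * r ** 2 * h

print(f"ΔV ≈ {indirect_error(volume, [2.0, 10.0], [0.05, 0.1]):.2f}")  # about 6.4
```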



The absolute measurement error is the quantity defined as the difference between the measurement result x and the true value of the measured quantity x₀:

Δx = |x − x₀|.

The value δ, equal to the ratio of the absolute measurement error to the measurement result, is called the relative error:

δ = Δx / x.

Example 2.1. The approximate value of the number π is 3.14. Then its error is 0.00159. The absolute error can be considered equal to 0.0016, and the relative error equal to 0.0016/3.14 = 0.00051 = 0.051%.

Significant digits. If the absolute error of a value a does not exceed one unit of the last digit of the number a, the number is said to have all its digits correct. Approximate numbers should be written down keeping only the correct digits. If, for example, the absolute error of the number 52400 is 100, the number should be written as 524·10² or 0.524·10⁵. The error of an approximate number can be characterized by stating how many correct significant digits it contains. When counting significant digits, leading zeros are not counted.

For example, the number 0.0283 has three correct significant digits, and 2.5400 has five.

Number rounding rules. If an approximate number contains extra (or incorrect) digits, it should be rounded. Rounding introduces an additional error not exceeding half a unit of the last retained significant digit of the rounded number. When rounding, only the correct digits are kept; the extra digits are discarded, and if the first discarded digit is 5 or greater, the last retained digit is increased by one (with the refinements described below).

Extra digits in integers are replaced by zeros, while in decimal fractions they are simply discarded (as are extra zeros). For example, if the measurement error is 0.001 mm, the result 1.07005 is rounded to 1.070. If the first of the digits to be replaced by zeros or discarded is less than 5, the remaining digits are not changed; for example, the number 148935, measured with a precision of 50, is rounded to 148900. If the first digit to be replaced by zeros or discarded is 5 and it is followed by no digits or only by zeros, rounding is performed to the nearest even digit; for example, the number 123.50 is rounded to 124. If the first digit to be replaced by zeros or discarded is greater than 5, or equal to 5 but followed by a significant digit, the last remaining digit is increased by one; for example, the number 6783.6 is rounded to 6784.
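Python's built-in round() happens to follow the same "round half to even" convention, so the rule can be tried directly (a small sketch; values ending in an exact .5 behave cleanly, other decimals are subject to binary representation effects):

```python
# Round half to even: an exact trailing 5 with nothing after it goes to the
# nearest even digit; otherwise ordinary rounding applies.
print(round(123.5))        # 124  (124 is even)
print(round(122.5))        # 122  (122 is even)
print(round(6783.6))       # 6784
print(round(1.070049, 3))  # 1.07
```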

Example 2.2. When rounding the number 1284 to 1300, the absolute error is 1300 − 1284 = 16; when rounding it to 1280, the absolute error is 1284 − 1280 = 4.


Example 2.3. When rounding the number 197 to 200, the absolute error is 200 - 197 = 3. The relative error is 3/197 ≈ 0.01523 or approximately 3/200 ≈ 1.5%.

Example 2.4. The seller weighs the watermelon on a scale. In the set of weights, the smallest is 50 g. Weighing gave 3600 g. This number is approximate. The exact weight of the watermelon is unknown. But the absolute error does not exceed 50 g. The relative error does not exceed 50/3600 = 1.4%.

Errors in solving the problem on PC

Three types of errors are usually considered as the main sources of error. These are the so-called truncation errors, rounding errors, and propagation errors. For example, when using iterative methods for finding the roots of nonlinear equations, the results are approximate, in contrast to direct methods that give an exact solution.

Truncation errors

This type of error is associated with the error inherent in the problem itself. It may be due to inaccuracy in the definition of the initial data. For example, if any dimensions are specified in the condition of the problem, then in practice for real objects these dimensions are always known with some accuracy. The same goes for any other physical parameters. This also includes the inaccuracy of the calculation formulas and the numerical coefficients included in them.

Propagation errors

This type of error is associated with the use of one or another method of solving the problem. In the course of calculations, an accumulation or, in other words, error propagation inevitably occurs. In addition to the fact that the original data themselves are not accurate, a new error arises when they are multiplied, added, etc. The accumulation of the error depends on the nature and number of arithmetic operations used in the calculation.

Rounding errors

This type of error is due to the fact that the true value of a number is not always accurately stored by the computer. When a real number is stored in the computer's memory, it is written as a mantissa and exponent in much the same way as a number is displayed on a calculator.
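A classic illustration of this in Python (a sketch; the exact digits depend on the binary representation of the decimal values involved):

```python
# 0.1 and 0.2 have no exact binary representation, so their stored values
# are already rounded, and the rounding shows up in simple arithmetic.
print(0.1 + 0.2)            # 0.30000000000000004
print(0.1 + 0.2 == 0.3)     # False

from decimal import Decimal
print(Decimal(0.1))         # the value actually stored for 0.1
```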