What does it mean to find a solution to a nonlinear equation? Solving Nonlinear Equations by the Method of Simple Iterations - abstract

Mathematics as a science arose in connection with the need to solve practical problems: measurements on the ground, navigation, etc. As a result, mathematics was numerical mathematics, and its goal was to obtain a solution in the form of a number. The numerical solution of applied problems has always interested mathematicians. The greatest scientists of the past combined in their work the study of natural phenomena, the construction of their mathematical description, i.e. a mathematical model, and the investigation of that model. The analysis of complicated models required the creation of special, usually numerical, methods for solving problems. The names of some of these methods indicate that they were developed by the greatest scientists of their time: the methods of Newton, Euler, Lobachevsky, Gauss, Chebyshev, Hermite.

The present time is characterized by a sharp expansion of the applications of mathematics, largely associated with the creation and development of computer technology. As a result of the emergence of computers, in less than 40 years the speed of computation has increased from about 0.1 operations per second with manual counting to many billions of operations per second on modern computers.

The widespread opinion about the omnipotence of modern computers gives rise to the impression that mathematicians have got rid of all the troubles associated with the numerical solution of problems, and the development of new methods for solving them is no longer so significant. In reality, the situation is different, since the needs of evolution, as a rule, set before science tasks that are on the verge of its capabilities. The expansion of the application of mathematics led to the mathematization of various branches of science: chemistry, economics, biology, geology, geography, psychology, medicine, technology, etc.

There are two circumstances that initially led to the desire for the mathematization of sciences:

firstly, only the use of mathematical methods makes it possible to give a quantitative character to the study of one or another phenomenon of the material world;

secondly, and this is the main point, only the mathematical way of thinking makes the study of an object fully objective; research of this kind, carried out on a mathematical model, is called a computational experiment.

Recently, another factor has appeared that has a strong impact on the processes of knowledge mathematization. This is the rapid development of computer technology. The use of computers for solving scientific, engineering, and applied problems in general is entirely based on their mathematization.

Mathematical models.

Modern technology for the study of complex problems is based on the construction and analysis, usually with the help of a computer, of mathematical models of the problem being studied. Usually, a computational experiment, as we have already seen, consists of a number of stages: setting a problem, building a mathematical model (mathematical formulation of the problem), developing a numerical method, developing an algorithm for implementing a numerical method, developing a program, debugging a program, performing calculations, analyzing results.

So, the use of computers for solving any scientific or engineering problem is inevitably associated with the transition from a real process or phenomenon to its mathematical model. Thus, the application of models in scientific research and engineering practice is the art of mathematical modeling.

A model is usually understood as a mentally represented or materially realized system that reproduces the main, most significant features of a given phenomenon.

The main requirements for a mathematical model are adequacy to the phenomenon under consideration, i.e. it should reflect the characteristic features of the phenomenon sufficiently well, and, at the same time, comparative simplicity and accessibility for investigation.

The mathematical model reflects the dependence between the conditions for the occurrence of the phenomenon under study and its results in certain mathematical constructions. Most often, the following mathematical concepts are used as such constructions: function, functional, operator, numerical equation, ordinary differential equation, partial differential equation.

Mathematical models can be classified according to different criteria: static and dynamic, concentrated and distributed; deterministic and probabilistic.

Consider the problem of finding the roots of the nonlinear equation

f(x) = 0. (1)

The roots of equation (1) are those values of x that, when substituted, turn it into an identity. Only for the simplest equations is it possible to find a solution in the form of formulas, i.e. in analytical form. More often one has to solve equations by approximate methods, the most widespread of which, since the advent of computers, are numerical methods.

The algorithm for finding roots by approximate methods can be divided into two stages. At the first stage, the location of the roots is studied and their separation is carried out: an interval containing a root of the equation is found, or an initial approximation to the root x0 is chosen. The simplest way to solve this problem is to study the graph of the function f(x). In the general case, all the means of mathematical analysis may have to be involved.

The existence on the found interval of at least one root of equation (1) follows from the Bolzano condition:

f(a)*f(b)<0 (2)

It is also assumed that the function f(x) is continuous on the given segment. However, this condition does not answer the question about the number of roots of the equation on a given interval. If the requirement of continuity of the function is supplemented with the requirement of its monotonicity, and this follows from the sign-constancy of the first derivative, then we can assert the existence of a unique root on a given segment.

When localizing roots, it is also important to know the basic properties of the given type of equation. For example, recall some properties of algebraic equations

a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 = 0, (3)

where a_0, a_1, ..., a_n are real coefficients.

  • a) An equation of degree n has n roots, among which there can be both real and complex ones. Complex roots form complex conjugate pairs and, therefore, the equation has an even number of such roots. For an odd value of n, there is at least one real root.
  • b) The number of positive real roots is less than or equal to the number of sign changes in the sequence of coefficients. Replacing x with -x in equation (3) allows the number of negative roots to be estimated in the same way.

At the second stage of solving equation (1), using the obtained initial approximation, an iterative process is constructed that makes it possible to refine the value of the root with some predetermined accuracy. The iterative process consists in the successive refinement of the initial approximation; each such step is called an iteration. As a result of the iterations, a sequence of approximate values of the root of the equation is found. If this sequence approaches the true value of the root x* as n grows, then the iterative process converges. An iterative process is said to converge with order at least m if the following condition is satisfied:

|x_{n+1} - x*| <= C |x_n - x*|^m,

where C > 0 is some constant. If m = 1, then one speaks of first-order convergence; m = 2 corresponds to quadratic and m = 3 to cubic convergence.

Iterative cycles end when, for a given permissible error ε, the criteria for the absolute or relative deviations are met,

|x_{n+1} - x_n| <= ε or |x_{n+1} - x_n| / |x_{n+1}| <= ε,

or the residual is small:

|f(x_n)| <= ε.

This work is devoted to the study of an algorithm for solving nonlinear equations using Newton's method.

There are many different methods for solving nonlinear equations, some of them are presented below:

  • 1) Iteration method. When solving a nonlinear equation by iteration, the equation is used in the form x = f(x). The initial value of the argument x0 and the accuracy ε are set. The first approximation of the solution x1 is found from the expression x1 = f(x0), the second from x2 = f(x1), and so on. In the general case, the (i+1)-th approximation is found by the formula x_{i+1} = f(x_i). The procedure is repeated while |f(x_i)| > ε. The condition for the convergence of the iteration method is |f'(x)| < 1 in a neighborhood of the root.
  • 2) Newton's method. When solving a nonlinear equation by Newton's method, the initial value of the argument x0 and the accuracy ε are set. Then, at the point (x0, F(x0)) we draw a tangent to the graph of F(x) and determine the point x1 where the tangent intersects the abscissa axis. At the point (x1, F(x1)) we again build a tangent, find the next approximation x2 of the desired solution, and so on. The procedure is repeated while |F(x_i)| > ε. To determine the point of intersection of the (i+1)-th tangent with the abscissa axis, the following formula is used:

x_{i+1} = x_i - F(x_i) / F'(x_i).

A sufficient condition for convergence of the tangent method is F(x0)·F''(x) > 0 on the segment.

3) Dichotomy method. The solution technique reduces to the gradual division of the initial uncertainty interval in half according to the formula

c_k = (a_k + b_k) / 2.

In order to choose the necessary one of the two resulting segments, it is necessary to find the values of the function at the ends of the resulting segments and keep the one on which the function changes its sign, that is, on which the condition f(a_k)·f(b_k) < 0 is satisfied.

The process of dividing the segment is carried out until the length of the current uncertainty interval becomes less than the specified accuracy, that is, until b_k - a_k < ε. Then the point corresponding to the middle of the uncertainty interval is taken as the approximate solution of the equation.

4) Chord method. The idea of the method is that a chord is constructed on the segment connecting the ends of the arc of the graph of the function y = f(x), and the point c at which the chord intersects the abscissa axis is taken as an approximate value of the root:

c = a - (f(a)·(a - b)) / (f(a) - f(b)),

or, equivalently,

c = b - (f(b)·(a - b)) / (f(a) - f(b)).

The next approximation is sought on the interval [a, c] or [c, b], depending on the signs of the function values at the points a, b, c:

x* ∈ [c, b] if f(c)·f(a) > 0;

x* ∈ [a, c] if f(c)·f(a) < 0.

If f''(x) does not change sign on [a, b], then, denoting c = x1 and taking a or b as the initial approximation, we obtain the iterative formulas of the chord method with a fixed right or left point:

x0 = a, x_{i+1} = x_i - f(x_i)(b - x_i) / (f(b) - f(x_i)), when f(b)·f''(x) > 0;

x0 = b, x_{i+1} = x_i - f(x_i)(x_i - a) / (f(x_i) - f(a)), when f(b)·f''(x) < 0.

The convergence of the chord method is linear.

Algebraic and transcendental equations. Root localization methods.

The most general form of the nonlinear equation:

f(x)=0 (2.1)

where the function f(x) is defined and continuous on a finite or infinite interval [a, b].

Definition 2.1. Any number ξ that turns the function f(x) into zero, i.e. f(ξ) = 0, is called a root of equation (2.1).

Definition 2.2. A number ξ is called a root of multiplicity k if, together with the function f(x), its derivatives up to the (k-1)-th order inclusive vanish at ξ:

f(ξ) = f'(ξ) = ... = f^(k-1)(ξ) = 0, while f^(k)(ξ) ≠ 0.

Definition 2.3. A root of multiplicity one is called a simple root.

Nonlinear equations with one variable are subdivided into algebraic and transcendental.

Definition 2.4. Equation (2.1) is called algebraic if the function f(x) is algebraic.

By algebraic transformations, any algebraic equation can be brought to the canonical form

a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 = 0,

where a_0, a_1, ..., a_n are the real coefficients of the equation and x is the unknown.

It is known from algebra that every algebraic equation has at least one real root or a pair of complex conjugate roots.

Definition 2.5. Equation (2.1) is called transcendental if the function f(x) is not algebraic.

Solving equation (2.1) means:

  • 1. Determine whether the equation has roots.
  • 2. Determine the number of roots of the equation.
  • 3. Find the values ​​of the roots of the equation with a given accuracy.

The equations encountered in practice often cannot be solved by analytical methods. Numerical methods are used to solve such equations.

The algorithm for finding the root of an equation using a numerical method consists of two stages:

  • 1) separation, or localization, of the root, i.e. finding an interval that contains exactly one root;
  • 2) refinement of the root value by the method of successive approximations.

Root localization methods. The theoretical basis of the root separation algorithm is the Cauchy theorem on intermediate values ​​of a continuous function.

Theorem 2.1. If the function y = f(x) is continuous on the segment [a, b] and f(a) = A, f(b) = B, then for any point C lying between A and B there is a point ξ ∈ [a, b] such that f(ξ) = C.

Consequence. If the function y = f(x) is continuous on the segment [a, b] and takes values of different signs at its ends, then this segment contains at least one root of the equation f(x) = 0.

Let the domain of definition and continuity of the function be a finite segment [a, b]. Divide the segment into n parts by the points a = x_0 < x_1 < ... < x_n = b.

Calculating sequentially the values of the function at these points, we find the segments [x_i, x_{i+1}] for which the condition

f(x_i)·f(x_{i+1}) < 0

is satisfied, i.e. the function takes values of different signs at their ends. Each such segment contains at least one root.

Theorem 2.2. If the function y = f(x) is continuous on the segment [a, b], f(a)·f(b) < 0, and f'(x) preserves its sign on the interval (a, b), then inside the segment [a, b] there exists a unique root of the equation f(x) = 0.

To separate the roots, one can also use the graph of the function y = f(x). The roots of equation (2.1) are those values of x at which the graph of the function y = f(x) crosses the x-axis. Plotting the function, even with low accuracy, usually gives an idea of the location of the roots of equation (2.1). If plotting the function y = f(x) causes difficulty, then the original equation (2.1) should be converted to the form φ1(x) = φ2(x) so that the graphs of the functions y = φ1(x) and y = φ2(x) are quite simple. The abscissas of the intersection points of these graphs will be the roots of equation (2.1).

Example 1 Separate the roots of the equation x 2 -2cosx=0.

Solution. Let's consider two ways of separating the roots.

  • a) Graphical way. Let's rewrite the equation in the form x^2 = 2cos x and build the graphs of the functions y = x^2 and y = 2cos x in the same coordinate system (Figure 5). Since these graphs intersect at two points, the equation has two roots located symmetrically about the origin on the intervals (-π/2; 0) and (0; π/2).
  • b) Analytical method. Let f(x) = x^2 - 2cos x. Because f(x) is an even function, it suffices to consider only non-negative values of x. Since 2cos x <= 2, any positive root must satisfy x^2 <= 2, i.e. it lies to the left of √2 < π/2.

The derivative f'(x) = 2(x + sin x). On the interval (0; π/2), f'(x) > 0; therefore f(x) increases monotonically there and its graph can cross the x-axis at no more than one point. Note that f(0) = -2 < 0 and f(π/2) = (π/2)^2 > 0. This means that the equation has one positive root lying on the interval (0; π/2). Since the function is even, the equation also has one negative root, symmetric to the positive one. Now let's move on to the refinement of the root. To apply the combined root refinement method, one needs to make sure that f''(x) preserves its sign on (0; π/2) and choose the initial approximation of the root for applying the tangent method; it must satisfy the condition f(x)·f''(x) > 0. Because f''(x) = 2(1 + cos x) is positive on [0; π/2], π/2 can be taken as the initial approximation of the root in the tangent method. Therefore, one can put x = π/2 ≈ 1.570796, x1 = 0 (see the algorithm diagram). In our case, the chord method will give an approximate value of the root with a deficiency, and the tangent method with an excess.

Consider one iterative step of root refinement. Calculate the values f(0), f(π/2), f'(π/2). The new values x1 and x are found, respectively, by the chord and tangent formulas; after the first step

|x - x1| = 0.38768 ≈ 0.4 > 10^-4 = ε.

The specified accuracy is not achieved, and the calculations must be continued.

Table of iterations: iteration number, x1, f(x1), |x - x1|.

Therefore, the approximate value of the root with the required accuracy was found as a result of three iterations and is approximately equal to 1.0217.

Due to the symmetry of the graph of the function f(x) the value of the second root is approximately equal to -1.0217.

Root clarification.

Formulation of the problem. Let us assume that the desired root of equation (2.1) is separated, i.e. a segment [a; b] is found that contains one and only one root of the equation. Any point of this segment can be taken as an approximate value of the root; the error of this approximation does not exceed the length of [a; b]. Consequently, the problem of finding an approximate value of the root with a given accuracy ε is reduced to finding a segment [a; b] (with b - a < ε) containing only one root of equation (2.1). This problem is usually called the problem of root refinement.

Description of numerical methods. Numerical methods make it possible to find solutions to certain problems, knowing in advance that the results obtained will be calculated with a certain error, therefore, for many numerical methods, it is necessary to know in advance the “level of accuracy” that the resulting solution will correspond to.

In this regard, the problem of finding the roots of a polynomial of the form

P(x) = a_n x^n + ... + a_1 x + a_0 = 0 (3.1)

is of particular interest, because the formulas for finding the roots of even a cubic equation are rather complicated. If it is necessary to find the roots of a polynomial whose degree is, for example, 5, then one cannot do without the help of numerical methods, especially since the probability that such a polynomial has natural roots (or integer roots, or exact roots with a "short" fractional part) is quite small, and there are no general formulas for the roots of an equation of degree greater than 4. De facto, all further operations reduce to the refinement of roots whose intervals are approximately known in advance. The easiest way to find these "approximate" roots is to use graphical methods.

To find the roots of a polynomial, there are several numerical methods: the iteration method, the method of chords and tangents, the half division method, the secant method.

The bisection method (also known as the "method of dividing a segment in half") is recursive, i.e. it provides for repetition taking into account the results obtained.

The essence of the method of half division is as follows:

  • - the function F(x) is given;
  • - the permissible error Q is determined;
  • - some interval [ a , b ] is defined, which exactly contains the solution of the equation.

1) We calculate the coordinate E of the middle of the segment, i.e.

E = (a + b) / 2 (3.2)

  • 2) We calculate the values of F(a), F(b), F(E) and perform the following check: if |F(E)| < Q, then the root is found with the specified accuracy; otherwise, the half of the segment at whose ends F(x) has different signs is taken as the new [a, b].
  • 3) Go to step 1.

Method of simple iterations (method of successive approximations). We replace equation (2.1) with an equivalent equation

x = φ(x), (3.3)

which can be done in various ways, for example

x = x + c·f(x), c ≠ 0. (3.4)

Let us assume that some initial approximation x0 of the root of equation (3.3) is chosen. We define a numerical sequence by the formulas

x_{n+1} = φ(x_n), n = 0, 1, 2, … (3.5)

Such a sequence is called iterative.

If, on the segment containing x0 and all subsequent approximations x_n, n ∈ N, the function φ(x) has a continuous derivative φ'(x) and |φ'(x)| <= q < 1, then the iterative sequence (3.5) converges to the unique root of equation (3.3) on this segment. The rate of convergence is characterized by the inequality

|x_n - x*| <= q^n |x_0 - x*|.

From this inequality, in particular, it follows that the rate of convergence of the method of simple iteration depends on the value of q: the smaller q, the faster the convergence.

Therefore, in practice, when finding roots by the method of simple iteration, it is desirable to represent equation (2.1) in the form (3.3) in such a way that the derivative φ'(x) in a neighborhood of the root is as small as possible in absolute value. For this purpose the parameter c from formula (3.4) is sometimes used.

Newton's method (tangent method). If a sufficiently good initial approximation x0 is known, for which the inequality

f(x0)·f''(x0) > 0

holds, then the root of the equation can be computed using Newton's formula

x_{n+1} = x_n - f(x_n) / f'(x_n).

As the initial approximation one can use one of the boundaries of the interval [a, b]:

x0 = a if f(a)·f''(x) > 0 on [a, b], and x0 = b if f(b)·f''(x) > 0 on [a, b].

At each iteration of this method, the amount of computation is greater than in the bisection and iteration methods, since it is necessary to find not only the value of the function but also its derivative. However, the rate of convergence of Newton's method is much higher.

Theorem. Let x* be a simple root of the equation f(x) = 0, i.e. f(x*) = 0 and f'(x*) ≠ 0, and let f''(x) be continuous. Then there is a neighborhood of the root x* such that if the initial approximation x0 belongs to this neighborhood, then for Newton's method the sequence of values x_n converges to x* as n → ∞. The error of the n-th approximation of the root can be estimated by the formula

|x* - x_n| <= (M2 / (2 m1))·(x_n - x_{n-1})^2,

where M2 is the largest value of the modulus of the second derivative on the segment and m1 is the smallest value of the modulus of the first derivative on the segment.

Stop rule: the iterations are terminated when |x_{n+1} - x_n| <= ε or when |f(x_{n+1})| <= ε.
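As an illustration of this scheme, here is a minimal Python sketch of Newton's method; the function f, its derivative df, the initial approximation x0 and the tolerance eps are assumed to be supplied by the caller, and the usage line below applies it to the equation x^2 - 2cos x = 0 from Example 1.

```python
import math

def newton(f, df, x0, eps=1e-6, max_iter=50):
    """Refine a root of f(x) = 0 by Newton's (tangent) method.

    f, df -- the function and its derivative (assumed supplied by the caller)
    x0    -- initial approximation, ideally satisfying f(x0)*f''(x0) > 0
    eps   -- required accuracy; stop when |x_{n+1} - x_n| <= eps
    """
    x = x0
    for n in range(1, max_iter + 1):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; choose another x0")
        x_new = x - f(x) / dfx          # tangent step: x_{n+1} = x_n - f(x_n)/f'(x_n)
        if abs(x_new - x) <= eps:       # stop rule from the text
            return x_new, n
        x = x_new
    raise RuntimeError("Newton's method did not converge in max_iter steps")


# usage sketch: the equation x**2 - 2*cos(x) = 0 from Example 1
root, steps = newton(lambda x: x * x - 2 * math.cos(x),
                     lambda x: 2 * (x + math.sin(x)),
                     x0=math.pi / 2, eps=1e-4)
print(root, steps)   # root close to 1.0217
```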

Method of chords and tangents (combined). This method is based on constructing a schematic graph of a function, determining the intervals of its intersection with the abscissa axis, and then “compressing” this interval using constructed chords and tangents to the graph of this function.

It should be noted that there are also separately the method of chords (gives the value of the root with a deficiency) and the method of tangents (with an excess). However, the advantage of the combined method lies in the “two-sided compression” of the considered segment.

Consider the following case:

  • - the function F(x) is given and its graph is built;
  • - the permissible error Q is determined
  • - on the basis of the graph, a segment is defined on which the graph of the function intersects the abscissa axis, therefore, on this segment there is a root of the polynomial under consideration (we denote it by A)

The further algorithm is reduced to the following actions:

  • 1) we build a tangent to the graph of the function at the point (b, F(b));
  • 2) we calculate the x-coordinate of the intersection of the tangent with the abscissa axis by the tangent formula and denote it by b';
  • 3) we construct a chord to the graph of the function passing through the points (a, F(a)) and (b, F(b));
  • 4) we calculate the point of intersection of the chord with the abscissa axis by the chord formula and denote it by a'.

Thus, we get a new segment [a', b'], which (by the definitions of a chord and a tangent) still contains the solution A of the equation.

Now we take [a', b'] as the new segment [a, b] and repeat steps 1-4 until the difference F(b) - F(a) becomes less than the initially specified error Q. We also note that after this it is recommended to take the arithmetic mean of the final a and b as the desired solution.

Thus, if the chord (tangent) gives the value of the root with an excess, then this root is taken as the new right boundary, and if with a deficiency, then the left one. In both cases, the exact root lies between the points of intersection of the chord and the tangent with the abscissa axis.

Remarks on the method of chords and tangents. Since the solution of the problem requires finding the derivative of the function F(x), the method of chords and tangents is quite difficult to implement at the program level, because the rules for calculating derivatives in a general form are rather cumbersome for the "understanding" of a computer; when directly specifying the derivative for each degree of the polynomial, the computer memory is seriously loaded, which greatly slows down the work, and setting the function and, accordingly, its derivative directly in the program code is unacceptable. However, using this method, the convergence of the interval to the root occurs most quickly, especially if the method of chords and tangents is combined with the bisection method, because the middle of the new segment often gives a completely satisfactory solution.
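Despite these remarks, the combined scheme is short when the derivative is supplied explicitly. Below is a minimal Python sketch of it, written under the assumptions of the text: f(a)·f(b) < 0, f'' keeps its sign on [a, b], and f(b)·f''(x) > 0, so the tangent is drawn at the right end b and the chord moves the left end a; f and df are supplied by the caller.

```python
def chords_and_tangents(f, df, a, b, eps=1e-6, max_iter=100):
    """Two-sided refinement: a chord step moves a, a tangent step moves b.

    Assumes f(a)*f(b) < 0 and f(b)*f''(x) > 0 on [a, b] (tangent end = b).
    Stops when the bracket [a, b] is shorter than eps.
    """
    for _ in range(max_iter):
        # chord through (a, f(a)) and (b, f(b)) moves the left end toward the root
        a = a - f(a) * (b - a) / (f(b) - f(a))
        # tangent at the right end moves b toward the root from the other side
        b = b - f(b) / df(b)
        if b - a < eps:
            return (a + b) / 2          # midpoint of the final two-sided bracket
    raise RuntimeError("combined method did not converge")
```

Because the two endpoints approach the root from opposite sides, the midpoint of the final bracket is guaranteed to be within eps/2 of the root, which is the "two-sided compression" advantage mentioned above.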

The secant method. The secant method can be obtained from Newton's method by replacing the derivative with an approximate expression, the difference formula

f'(x_n) ≈ (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1}),

which gives

x_{n+1} = x_n - f(x_n)·(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})). (3.8)

Formula (3.8) uses the two previous approximations x_n and x_{n-1}. Therefore, for a given initial value x_0, it is necessary to calculate the next approximation x_1, for example, by Newton's method with an approximate replacement of the derivative by a difference quotient.

Algorithm of the secant method:

1) the initial value x_0 and the error ε are given; compute x_1;

2) for n = 1, 2, …, while the condition |x_{n+1} - x_n| > ε is satisfied, we calculate x_{n+1} by formula (3.8).
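A minimal Python sketch of this algorithm; the function f, the two starting values x0 and x1, and the tolerance eps are assumptions supplied by the caller.

```python
def secant(f, x0, x1, eps=1e-6, max_iter=100):
    """Secant method: x_{n+1} = x_n - f(x_n)*(x_n - x_{n-1})/(f(x_n) - f(x_{n-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("flat secant: f(x_n) == f(x_{n-1})")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # formula (3.8)
        if abs(x2 - x1) <= eps:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("secant method did not converge")
```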


Iterative refinement of roots.

At the stage of separation of the roots, the problem is to find the narrowest possible segment [a, b] that contains one and only one root of the equation.

The root refinement stage aims to calculate an approximate value of the root with a given accuracy. In this case, iterative methods are used that calculate successive approximations to the root x_0, x_1, ..., x_n, ..., in which each subsequent approximation x_{n+1} is calculated on the basis of the previous one, x_n. Each such step is called an iteration. If the sequence x_0, x_1, ..., x_n, … has, as n → ∞, a limit equal to the value of the root x*, then the iterative process is said to converge.

There are various ways to separate and refine the roots, which we will discuss below.

Root separation

The root of the equation f(x) = 0 is considered separated (localized) on a segment [a, b] if this equation has no other roots on this segment. To separate the roots of the equation, it is necessary to divide the range of admissible values of the function f(x) into fairly narrow segments, each of which contains only one root. There exist graphical and analytical root separation methods.

Root Refinement

The task of refining a root of the equation, separated on the segment [a, b], to an accuracy ε is to find an approximate value x of the root for which the inequality |x - x*| <= ε holds. If the equation has not one but several roots, then the refinement stage is carried out for each separated root.

Half division method

Let the root of the equation f(x) = 0 be separated on the segment [a, b], that is, there is a single root on this segment, and the function is continuous on it.

The bisection method allows one to obtain a sequence of nested segments [a_1, b_1], [a_2, b_2], …, [a_i, b_i], …, such that f(a_i)·f(b_i) < 0, where i = 1, 2, …, n, and the length of each subsequent segment is half the length of the previous one.

Successive narrowing of the segment around the unknown value of the root ensures that at some step n the inequality |b_n - a_n| < ε holds. Since in this case the inequality |x* - x| < ε is satisfied for any x ∈ [a_n, b_n], any point of this segment can be taken, with accuracy ε, as an approximate value of the root,

for example, its midpoint x_n = (a_n + b_n)/2.

In the bisection method, from iteration to iteration, the length of the initial segment is successively halved (Fig. 1.2.3-1). Therefore, at the n-th step the following estimate of the error of the result is valid:

|x* - x_n| <= (b - a) / 2^n, (1.2.3-1)

where x* is the exact value of the root and x_n is the approximate value of the root at the n-th step.

Comparing the resulting error estimate with the given accuracy ε, we can estimate the required number of steps:

n >= log2((b - a) / ε). (1.2.3-2)

It can be seen from the formula that a decrease in the value of ε (an increase in accuracy) leads to a significant increase in the amount of computation; therefore, in practice, the half-division method is used for a relatively rough localization of the root, and its further refinement is carried out using other, more efficient methods.

Fig. 1.2.3-2. Scheme of the bisection method algorithm

The scheme of the bisection algorithm is shown in fig. 1.2.3-2. The above algorithm assumes that the left side of the equation f(x) is designed as a software module.

Example 1.2.3-1. Refine the root of the equation x^3 + x - 1 = 0, localized on the segment [0, 1], with an accuracy of ε = 0.1.

The results are conveniently presented using Table 1.2.3-3.

Table 1.2.3-3

| k | a     | b    | f(a)   | f(b)  | (a+b)/2 | f((a+b)/2) | a_k   | b_k   |
|---|-------|------|--------|-------|---------|------------|-------|-------|
| 1 | 0     | 1    | -1     | 1     | 0.5     | -0.375     | 0.5   | 1     |
| 2 | 0.5   | 1    | -0.375 | 1     | 0.75    | 0.172      | 0.5   | 0.75  |
| 3 | 0.5   | 0.75 | -0.375 | 0.172 | 0.625   | -0.131     | 0.625 | 0.75  |
| 4 | 0.625 | 0.75 | -0.131 | 0.172 | 0.688   | 0.0136     | 0.625 | 0.688 |

After the fourth iteration, the length of the segment |b_4 - a_4| = |0.688 - 0.625| = 0.063 has become less than the value ε; therefore, as the approximate value of the root we can take the midpoint of this segment: x = (a_4 + b_4)/2 = 0.656.

The value of the function f(x) at the point x = 0.656 is f(0.656) = -0.062 .
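The table above can be reproduced with a short Python sketch of the bisection scheme; the function, the segment [0, 1] and the tolerance are those of Example 1.2.3-1.

```python
def bisection(f, a, b, eps):
    """Halve [a, b] while keeping a sign change, until b - a < eps."""
    assert f(a) * f(b) < 0, "the root must be separated on [a, b]"
    while b - a >= eps:
        c = (a + b) / 2
        if f(a) * f(c) < 0:     # the root stays in the left half
            b = c
        else:                   # the root stays in the right half
            a = c
    return (a + b) / 2

f = lambda x: x**3 + x - 1
x = bisection(f, 0.0, 1.0, eps=0.1)
print(x, f(x))   # about 0.656, with f(x) close to -0.06, matching the example
```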

Iteration Method

The iteration method involves replacing the equation f(x) = 0 with an equivalent equation x = φ(x). If the root of the equation is separated on the segment [a, b], then, starting from the initial approximation x_0 ∈ [a, b], one can obtain a sequence of approximations to the root

x_1 = φ(x_0), x_2 = φ(x_1), ..., x_{n+1} = φ(x_n), ..., (1.2.3-3)

where the function φ(x) is called the iterating function.

The convergence condition for the simple iteration method is determined by the following theorem.

Let the root x* of the equation x = φ(x) be separated on the segment [a, b], and let a sequence of approximations be constructed according to the rule x_n = φ(x_{n-1}). Then, if all members of the sequence x_n = φ(x_{n-1}) belong to [a, b] and there exists a number q (0 < q < 1) such that |φ'(x)| <= q < 1 for all x ∈ [a, b], then this sequence converges and its limit is the value of the root x*, i.e. the iteration process converges to the root of the equation regardless of the initial approximation.

Thus, if the condition of convergence of the iteration method is satisfied, then the sequence x_0, x_1, x_2, …, x_n, …, obtained by the formula x_{n+1} = φ(x_n), converges to the exact value of the root: lim x_n = x* as n → ∞.

The condition φ(x) ∈ [a, b] for x ∈ [a, b] means that all approximations x_1, x_2, …, x_n, …, obtained by the iterative formula, must belong to the segment on which the root is separated.


To estimate the error of the iteration method one can use the inequality

|x* - x_n| <= q / (1 - q)·|x_n - x_{n-1}|.

As the number q one can take the largest value of |φ'(x)| on the segment, and the iteration process should be continued until the inequality

|x_n - x_{n-1}| <= (1 - q) / q · ε (1.2.3-5)

is satisfied.

In practice, a simplified error estimate is often used: for example, if 0 < q <= 0.5, the iterations are continued until

|x_{n-1} - x_n| <= ε.

Using the iterative formula x_{n+1} = φ(x_n) thus allows one to obtain the value of the root of the equation f(x) = 0 with any given accuracy ε.

Geometric illustration of the iteration method. On the XOY plane we plot the graphs of the functions y = x and y = φ(x). The root of the equation x = φ(x) is the abscissa of the point of intersection of the graph of y = φ(x) and the straight line y = x. Let us take some initial approximation x_0 ∈ [a, b]. On the curve y = φ(x) it corresponds to the point A_0 with ordinate φ(x_0). To find the next approximation, we draw a horizontal straight line through the point A_0 to the intersection with the line y = x (point B_1) and drop a perpendicular to the intersection with the curve (point A_1), that is, x_1 = φ(x_0). Continuing the construction in the same way, we obtain a broken line A_0, B_1, A_1, B_2, A_2, ..., whose common abscissas give the successive approximations x_1, x_2, ..., x_n (a "ladder") to the root x*. From Fig. 1.2.3-3a it can be seen that the process converges to the root of the equation.

Consider now another form of the curve y = φ(x) (Fig. 1.2.3-3b). In this case the broken line A_0, B_1, A_1, B_2, A_2, ... has the form of a "spiral". However, in this case convergence is also observed.

It is easy to see that in the first case the derivative satisfies the condition 0 < φ'(x) < 1, while in the second case φ'(x) < 0 and φ'(x) > -1. Thus, it is obvious that if |φ'(x)| < 1, then the iteration process converges to the root.

Now consider the cases where |φ'(x)| > 1. Fig. 1.2.3-4a shows the case φ'(x) > 1, and Fig. 1.2.3-4b the case φ'(x) < -1. In both cases the iteration process diverges, that is, the value of x obtained at each successive iteration moves farther and farther away from the true value of the root.

Ways to improve the convergence of the iteration process. Consider two options for constructing the function φ(x) in the transition from the equation f(x) = 0 to x = φ(x).

1. Let the function φ(x) be differentiable and monotone in a neighborhood of the root, and let there be a number k <= |φ'(x)| with k >= 1 (i.e. the process diverges). Let us replace the equation x = φ(x) with the equivalent equation x = ψ(x), where ψ(x) = φ^(-1)(x) (i.e. we pass to the inverse function). Then

|ψ'(x)| = 1 / |φ'(ψ(x))| <= 1/k,

which means q = 1/k < 1 and the process will converge.

2. We represent the function φ(x) in the form φ(x) = x - λf(x), where λ is a coefficient not equal to zero. For the process to converge, it is necessary that 0 < |φ'(x)| = |1 - λf'(x)| < 1. Let us take λ = 2/(m_1 + M_1), where m_1 and M_1 are the minimum and maximum values of f'(x) (m_1 = min|f'(x)|, M_1 = max|f'(x)|) for x ∈ [a, b], i.e. 0 < m_1 <= f'(x) <= M_1. Then

|φ'(x)| = |1 - λf'(x)| <= (M_1 - m_1) / (M_1 + m_1) < 1,

and the process will converge; the recurrent formula has the form

x_{n+1} = x_n - λ f(x_n).

If f'(x) < 0, then in the recurrent formula f(x) should be multiplied by -1.

The parameter λ can also be chosen by other rules based on the sign of f'(x) and its bounds on the segment.
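A Python sketch of construction 2 above: φ(x) = x - λf(x) with λ = 2/(m1 + M1). The bounds m1 and M1 are estimated crudely on a grid here (an assumption for illustration); f' is assumed to keep its sign on the segment, and the sign of λ is flipped when f' < 0, as the text's remark suggests.

```python
def relaxed_iteration(f, df, a, b, eps=1e-6, max_iter=1000):
    """Simple iteration with phi(x) = x - lam*f(x), lam = 2/(m1 + M1).

    m1 and M1 are rough grid estimates of min|f'| and max|f'| on [a, b];
    f' is assumed to keep its sign on the segment.
    """
    grid = [a + (b - a) * i / 100 for i in range(101)]
    derivs = [abs(df(t)) for t in grid]
    m1, M1 = min(derivs), max(derivs)
    lam = 2.0 / (m1 + M1)
    if df((a + b) / 2) < 0:          # the remark above: flip the sign when f' < 0
        lam = -lam
    x = (a + b) / 2                  # arbitrary starting point inside [a, b]
    for _ in range(max_iter):
        x_new = x - lam * f(x)       # x_{n+1} = x_n - lam*f(x_n)
        if abs(x_new - x) <= eps:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")
```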

The scheme of the algorithm of the iteration method is shown in fig. 1.2.3-5.

The original equation f(x) = 0 is first transformed to a form convenient for iteration, x = φ(x). The left-hand side of the original equation, f(x), and the iterating function φ(x) are implemented in the algorithm as separate software modules.

Fig. 1.2.3-5. Iteration Method Algorithm Diagram

Example 1.2.3-2. Refine the root of the equation 5x - 8·ln(x) - 8 = 0, localized on the given segment, with an accuracy of 0.1.

We bring the equation to a form convenient for iteration, for example x = (8·ln(x) + 8)/5; then φ'(x) = 1.6/x, and |φ'(x)| < 1 in a neighborhood of the root.

Therefore, for the approximate value of the root of the equation, we take the value x 3 =3.6892, which provides the required accuracy of calculations. At this point f(x 3)=0.0027.
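A short Python sketch reproducing this example with the iterating function φ(x) = (8 ln x + 8)/5; the starting point x0 = 3.5 inside the localization segment and the stopping test |x_{n+1} - x_n| <= ε are assumptions for illustration.

```python
import math

def simple_iteration(phi, x0, eps, max_iter=100):
    """x_{n+1} = phi(x_n) until two successive approximations differ by <= eps."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = phi(x)
        if abs(x_new - x) <= eps:
            return x_new, n
        x = x_new
    raise RuntimeError("iteration did not converge")

phi = lambda x: (8 * math.log(x) + 8) / 5      # equivalent form of 5x - 8*ln(x) - 8 = 0
root, n = simple_iteration(phi, x0=3.5, eps=0.1)
print(root, n)   # approaches the root near 3.69 found in the example
```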

chord method

Geometric interpretation of the chord method is as follows
(Fig.1.2.3-8).

Let us draw a straight line through the points A(a, f(a)) and B(b, f(b)). The next approximation x_1 is the abscissa of the point of intersection of this chord with the 0X axis. The equation of the straight line is

(y - f(a)) / (f(b) - f(a)) = (x - a) / (b - a).

Setting y = 0, we find the value x = x_1 (the next approximation):

x_1 = a - f(a)·(b - a) / (f(b) - f(a)).

We repeat the calculation process to obtain the next approximation to the root, x_2:

x_2 = x_1 - f(x_1)·(b - x_1) / (f(b) - f(x_1)).

In our case (Fig. 1.2.11) the calculation formula of the chord method has the form

x_{n+1} = x_n - f(x_n)·(b - x_n) / (f(b) - f(x_n)).

This formula is valid when point b is taken as a fixed point, and point a acts as an initial approximation.

Consider another case (Fig. 1.2.3-9), when the fixed point is the left end a of the segment.

The straight line equation for this case has the form

(y - f(a)) / (f(x_n) - f(a)) = (x - a) / (x_n - a).

The next approximation x_1 at y = 0 is

x_1 = b - f(b)·(b - a) / (f(b) - f(a)).

Then the recursive formula of the chord method for this case has the form

x_{n+1} = x_n - f(x_n)·(x_n - a) / (f(x_n) - f(a)).

It should be noted that for the fixed point in the method of chords the end of the segment is chosen for which the condition f(x)·f''(x) > 0 is satisfied.

Thus, if the point a is taken as a fixed point , then x 0 = b acts as an initial approximation, and vice versa.

Sufficient conditions that ensure the calculation of the root of the equation f(x)=0 using the formula of chords will be the same as for the tangent method (Newton's method), but instead of the initial approximation, a fixed point is chosen. The chord method is a modification of Newton's method. The difference is that the next approximation in the Newton method is the point of intersection of the tangent with the 0X axis, and in the method of chords - the point of intersection of the chord with the 0X axis - the approximations converge to the root from different sides.

The estimate of the error of the chord method is determined by the expression

|x* - x_n| <= (M_1 - m_1) / m_1 · |x_n - x_{n-1}|, (1.2.3-15)

where m_1 = min|f'(x)| and M_1 = max|f'(x)| on the segment.

Termination condition of the iteration process by the method of chords:

|x_n - x_{n-1}| <= m_1·ε / (M_1 - m_1). (1.2.3-16)

If M_1 < 2m_1, then the formula |x_n - x_{n-1}| <= ε can be used to estimate the error of the method.

Example 1.2.3-4. Refine the root of the equation e^x - 3x = 0, separated on the segment [0, 1], with an accuracy of 10^-4.

Let us check the condition for the choice of the fixed point: f''(x) = e^x > 0 on the segment.

Therefore, a = 0 should be chosen as the fixed point, and x_0 = 1 should be taken as the initial approximation, since f(0) = 1 > 0 and f(0)·f''(0) > 0.

Calculation results obtained using formula (1.2.3-14) are presented in Table 1.2.3-4.

Table 1.2.3-4

Fig. 1.2.3-10. Scheme of the chord method algorithm
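A minimal Python sketch of the chord iteration for Example 1.2.3-4: the fixed point a = 0 and the initial approximation x0 = 1 are the ones chosen above, and the simple stopping test |x_{n+1} - x_n| <= ε is used for brevity.

```python
import math

def chord_fixed_left(f, a, x0, eps, max_iter=100):
    """Chord method with fixed left point a:
    x_{n+1} = x_n - f(x_n)*(x_n - a)/(f(x_n) - f(a))."""
    fa = f(a)
    x = x0
    for n in range(1, max_iter + 1):
        x_new = x - f(x) * (x - a) / (f(x) - fa)
        if abs(x_new - x) <= eps:
            return x_new, n
        x = x_new
    raise RuntimeError("chord method did not converge")

f = lambda x: math.exp(x) - 3 * x
root, n = chord_fixed_left(f, a=0.0, x0=1.0, eps=1e-4)
print(root, n)   # the root of e^x - 3x = 0 on [0, 1] is about 0.619
```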

The non-linear equation is

1) algebraic or transcendental equation

2) algebraic equation

3) trigonometric equation

4) transcendental equation

Topic 1.2. Methods for solving nonlinear equations

Formulation of the problem

Root separation

1.2.2.1. Graphic separation of roots

1.2.2.2. Analytical separation of roots

Root Refinement

1.2.3.1. Half division method

1.2.3.2. Iteration Method

1.2.3.3. Newton's method (tangent method)

1.2.3.4. chord method

1.2.3.5. Comparison of methods for solving nonlinear equations

1.2.4. Test tasks on the topic "Methods for solving nonlinear equations"

Formulation of the problem

One of the most important and most common problems of mathematical analysis is the problem of determining the roots of an equation with one unknown, which can be represented in general form as f(x) = 0. Depending on the form of the function f(x), algebraic and transcendental equations are distinguished. Algebraic equations are called equations in which the value of the function f(x) is a polynomial of the nth degree:

f(x) = P_n(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 = 0. (1.2.1-1)

Any non-algebraic equation is called a transcendental equation. The function f(x) in such equations contains at least one of the following functions: exponential, logarithmic, trigonometric, or inverse trigonometric.

The solution of the equation f(x) = 0 is the set of its roots, that is, those values of the independent variable for which the equation turns into an identity. However, the exact values of the roots can be found analytically only for some types of equations. In particular, formulas expressing the solution of an algebraic equation can be obtained only for equations of degree not higher than four. There are even fewer possibilities for obtaining an exact solution of transcendental equations. It should be noted that the problem of finding the exact values of the roots is not always well posed. Thus, if the coefficients of the equation are approximate numbers, the accuracy of the calculated values of the roots certainly cannot exceed the accuracy of the original data. These circumstances force us to consider the possibility of finding the roots of the equation with limited accuracy (approximate roots).

The problem of finding a root of an equation with a given accuracy ε (ε > 0) is considered solved if an approximate value x is computed that differs from the exact value of the root x* by no more than ε:

|x - x*| <= ε. (1.2.1-2)

The process of finding the approximate root of the equation consists of two stages:

1) separation of the roots (localization of the roots);

2) refinement of the roots to the given accuracy.

Equations that contain unknown functions raised to a power greater than one are called non-linear.
For example, y=ax+b is a linear equation, x^3 - 0.2x^2 + 0.5x + 1.5 = 0 is non-linear (generally written as F(x)=0).

A system of nonlinear equations is a set of several nonlinear equations with one or more variables that must be satisfied simultaneously.

There are many methods for solving nonlinear equations and systems of nonlinear equations, and they are usually classified into three groups: numerical, graphical and analytical. Analytical methods make it possible to determine the exact values of the solutions of equations. Graphical methods are the least accurate, but for complex equations they allow one to determine rough approximate values from which the search for more accurate solutions can then be started. The numerical solution of nonlinear equations involves two stages: separation of the root and its refinement to a certain specified accuracy.
The separation of the roots is carried out in various ways: graphically, using various specialized computer programs, etc.

Let's consider several methods for refining roots with a specific accuracy.

Methods for the numerical solution of nonlinear equations

half division method.

The essence of the half-division method is to divide the interval in half (c = (a + b)/2) and discard the part of the interval in which there is no root, i.e. the half on which the sign-change condition F(a)·F(c) < 0 is not satisfied.

Fig.1. Using the method of half division in solving nonlinear equations.

Consider an example: it is necessary to solve the equation x^3 - 0.2x^2 + 0.5x + 1.5 = 0 on the segment [-1; 0] with accuracy ε.

Let us divide the segment [-1; 0] into two parts: (a + b)/2 = (-1 + 0)/2 = -0.5.
If the product F(a)·F(x) > 0, then the beginning of the segment a is moved to x (a = x); otherwise, the end of the segment b is moved to the point x (b = x). We divide the resulting segment in half again, and so on. All calculations are shown in the table below.

Fig.2. Calculation results table

As a result of the calculations we obtain, to within the required accuracy, the value x = -0.946.

chord method.

When using the chord method, a segment [a, b] containing only one root is specified, together with the accuracy ε. A straight line (chord) is drawn through the points (a, F(a)) and (b, F(b)). Then the point of intersection of this line with the abscissa axis (point z) is determined.
If F(a)·F(z) < 0, the root lies on [a, z] and the end b is moved to z; otherwise the root lies on [z, b] and the end a is moved to z. The procedure is repeated until the required accuracy is reached.

Fig.3. Using the method of chords in solving nonlinear equations.

Consider an example. It is necessary to solve the equation x^3 - 0.2x^2 + 0.5x + 1.5 = 0 with accuracy ε.

In general, the equation looks like: F(x)= x^3 - 0.2x^2 + 0.5x + 1.5

Find the values of F(x) at the ends of the segment [-1; 0]:

F(-1) = -0.2 < 0; F(0) = 1.5 > 0.

Let's define the second derivative F''(x) = 6x-0.4.

F''(-1)=-6.4
F''(0)=-0.4

At the ends of the segment the condition F(-1)·F''(-1) > 0 is satisfied; therefore the end a = -1 is fixed, and the root is refined by the chord formula

x_{n+1} = x_n - F(x_n)·(x_n - a) / (F(x_n) - F(a)), with a = -1.


All calculations are shown in the table below.


Fig.4. Calculation results table

As a result of the calculations we obtain, to within the required accuracy, the value x = -0.946.

Tangent Method (Newton)

This method is based on the construction of tangents to the graph, drawn at one of the ends of the interval. At the point of intersection of the tangent with the X-axis (z_1) a new tangent is built. This procedure continues until the obtained value is comparable with the desired accuracy parameter ε, i.e. until |F(z_i)| < ε.

Fig.5. Using the method of tangents (Newton) in solving nonlinear equations.

Consider an example. It is necessary to solve the equation x^3 - 0.2x^2 + 0.5x + 1.5 = 0 with accuracy ε.

In general, the equation looks like: F(x)= x^3 - 0.2x^2 + 0.5x + 1.5

Let's define the first and second derivatives: F'(x)=3x^2-0.4x+0.5, F''(x)=6x-0.4;

F''(-1)=-6-0.4=-6.4
F''(0)=-0.4
The condition F(-1)·F''(-1) > 0 is fulfilled, so the calculations are made according to the formula

x_{i+1} = x_i - F(x_i) / F'(x_i),

where x_0 is the end of the segment at which this condition holds, i.e. x_0 = -1, F(x_0) = F(-1) = -0.2.

All calculations are shown in the table below.


Fig.6. Calculation results table

As a result of the calculations we obtain, to within the required accuracy, the value x = -0.946.
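The three hand computations above (bisection, chords, tangents) can be cross-checked against a library solver. This sketch uses SciPy's brentq and newton routines on the same equation; the tolerance 1e-6 and the SciPy dependency are assumptions for illustration.

```python
from scipy.optimize import brentq, newton

F = lambda x: x**3 - 0.2 * x**2 + 0.5 * x + 1.5
dF = lambda x: 3 * x**2 - 0.4 * x + 0.5

root_bracketed = brentq(F, -1.0, 0.0, xtol=1e-6)       # bracketing solver on [-1, 0]
root_newton = newton(F, x0=-1.0, fprime=dF, tol=1e-6)  # Newton iteration from x0 = -1

print(root_bracketed, root_newton)   # both close to -0.946, the value obtained above
```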


Department: ASOIiU

Laboratory work

On the topic: FINDING THE ROOT OF A NONLINEAR EQUATION. METHODS FOR SOLVING A SYSTEM OF NONLINEAR EQUATIONS

Moscow, 2008


FINDING THE ROOT OF A NONLINEAR EQUATION

1. Statement of the problem

Let a function f(x) be given that is continuous together with several of its derivatives. It is required to find all or some of the real roots of the equation

f(x) = 0. (1)

This task is divided into several subtasks. First, it is necessary to determine the number of roots, to investigate their nature and location. Second, find the approximate values ​​of the roots. Thirdly, choose the roots of interest to us from them and calculate them with the required accuracy e. The first and second tasks are solved, as a rule, by analytical or graphical methods. In the case when only the real roots of equation (1) are sought, it is useful to compile a table of function values. If the function has different signs in two neighboring nodes of the table, then between these nodes lies an odd number of roots of the equation (at least one). If these nodes are close, then most likely there is only one root between them.

The found approximate values ​​of the roots can be refined using various iterative methods. Let's consider three methods: 1) the method of dichotomy (or dividing the segment in half); 2) simple iteration method; and 3) Newton's method.


2. Methods for solving the problem

2.1 Method of dividing a segment in half

The simplest method for finding the root of the nonlinear equation (1) is the half division method.

Let a continuous function f(x) be given on the segment [a, b]. If the values of the function at the ends of the segment have different signs, i.e. f(a)·f(b) < 0, then inside the given segment there is an odd number of roots. Let the segment, for definiteness, contain only one root. The essence of the method is to halve the length of the segment at each iteration. We find the middle of the segment c = (a + b)/2 (see Fig. 1), calculate the value of the function f(c), and select the half-segment on which the function changes its sign. We divide the new segment in half again, and continue this process until the length of the segment becomes smaller than the predetermined error ε of calculating the root. The construction of several successive approximations is shown in Figure 1.

So, the algorithm of the dichotomy method:

1. Set the segment [a, b] and the error ε.

2. If f(a) and f(b) have the same signs, issue a message about the impossibility of finding the root and stop.


Fig.1. The method of dividing a segment in half for solving an equation of the form f(x)=0.

3. Otherwise calculate c=(a+b)/2

4. If f(a) and f(c) have different signs, put b=c, otherwise a=c.

5. If the length of the new segment is less than ε, then calculate the value of the root c = (a + b)/2 and stop; otherwise go to step 3.

Since the length of the segment is reduced by a factor of 2^N in N steps, the given error ε in finding the root will be reached in N ≈ log2((b - a)/ε) iterations.

As can be seen, the rate of convergence is low, but the advantages of the method include simplicity and unconditional convergence of the iterative process. If the segment contains more than one root (but an odd number), then one will always be found.

Comment. To determine the interval in which the root lies, an additional analysis of the function is required, based either on analytical estimates or on the use of a graphical solution method. It is also possible to organize a search of function values at different points until the sign-change condition f(a)·f(b) < 0 is met.

2.2 Simple iteration method

When using this method, the original nonlinear equation (1) must be rewritten in the form

x = g(x). (2)

Let us denote the root of this equation by C*. Let an initial approximation of the root C_0 be known. Substituting this value into the right-hand side of equation (2), we obtain a new approximation C_1 = g(C_0),

and so on. For the (n+1)-th step we obtain the following approximation:

C_{n+1} = g(C_n). (3)

Thus, by formula (3) we obtain a sequence C_0, C_1, …, C_{n+1}, which tends to the root C* as n → ∞. The iterative process stops if the results of two successive iterations are close, i.e. if the condition

|C_{n+1} - C_n| < ε (4)

is satisfied.


Let us study the condition and the rate of convergence of the numerical sequence {C_n} as n → ∞. Recall the definition of the rate of convergence: a sequence {C_n} converging to the limit C* has a rate of convergence of order α if, as n → ∞, the condition

|C_{n+1} - C*| <= const·|C_n - C*|^α (5)

is satisfied.

Let us assume that g has a continuous derivative; then the error at the (n+1)-th iteration step, e_{n+1} = C_{n+1} - C* = g(C_n) - g(C*), can be represented as a series

e_{n+1} ≈ C_{n+1} - C* = g'(C*)(C_n - C*) + … ≈ g'(C*)·e_n + …

Thus, we obtain that under the condition

|g'(C*)| < 1 (6)

sequence (3) converges to the root with a linear rate, α = 1. Condition (6) is the condition for the convergence of the simple iteration method. Obviously, the success of the method depends on how well the function g(x) is chosen.

For example, to extract a square root, i.e. to solve an equation of the form x^2 = a, one can put

x = g_1(x) = a / x, (7a)

or

x = g_2(x) = (x + a/x) / 2. (7b)

It is easy to show that

|g_1'(C*)| = 1,

|g_2'(C*)| < 1.

Thus, the first process (7a) does not converge at all, while the second one (7b) converges for any initial approximation C 0 >0.
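This contrast is easy to check numerically; here is a small Python sketch comparing the two iterating functions (a = 2 and C_0 = 1.5 are arbitrary choices for illustration).

```python
a, c = 2.0, 1.5                      # extract sqrt(2), starting from C0 = 1.5
g1 = lambda x: a / x                 # form (7a): |g1'(C*)| = 1, does not converge
g2 = lambda x: (x + a / x) / 2       # form (7b): |g2'(C*)| < 1, converges

x1, x2 = c, c
for n in range(5):
    x1, x2 = g1(x1), g2(x2)
    print(n + 1, x1, x2)
# x1 oscillates between 1.5 and 1.333..., while x2 reaches 1.41421... in a few steps
```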

Fig. 2. Graphical interpretation of the method of simple iterations for solving an equation of the form x = g(x).

The construction of several successive approximations by formula (3),

C_0, C_1, …, C_n ≈ C*,

is shown in Figure 2.

2.3 Newton's method

In the literature this method is often called the tangent method, as well as the linearization method. We choose an initial approximation C_0. Let us assume that the deviation of C_0 from the true value of the root C* is small; then, expanding f(C*) in a Taylor series at the point C_0, we obtain

f(C*) = f(C_0) + f'(C_0)(C* - C_0) + … (8)

If f'(C_0) ≠ 0, then in (8) we can restrict ourselves to the terms linear in ΔC = C* - C_0. Taking into account that f(C*) = 0, from (8) we can find the following approximation for the root:

C_1 = C_0 - f(C_0) / f'(C_0),

or, for the (n+1)-th approximation,

C_{n+1} = C_n - f(C_n) / f'(C_n). (9)

To terminate the iterative process, one of two conditions can be used:

|C_{n+1} - C_n| < ε

or

|f(C_{n+1})| < ε.

The study of the convergence of Newton's method is carried out similarly to the previous case. One can obtain independently that, under the condition

|f''(C) / (2 f'(C))| < 1,

Newton's method has a quadratic rate of convergence (α = 2).

Fig. 3. Graphical interpretation of Newton's method for solving an equation of the form f(x) = 0.

The construction of several successive approximations by formula (9),

C_0, C_1, …, C_n ≈ C*,

is shown in Figure 3.

1. For a given function f(x):

  • Determine the number of real roots of the equation f(x) = 0, their location and approximate values (build a graph or print a table of values).

  • Calculate one of the found roots (any) with an accuracy of ε = 0.5·10^-3.

For calculations, use the method of dividing the segment in half (determine the number of iterations), and then find the same root using Newton's method (also determining the number of iteration steps).

Compare your results.
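A possible skeleton for the comparison, shown here for an assumed example taken from variant 1, f(x) = x^3 - 3x^2 + 6x - 5, with the bracket [1, 2] (f(1) = -1, f(2) = 3) and ε = 0.5·10^-3; the compact helpers below simply count iterations of the two schemes described earlier.

```python
def bisection_count(f, a, b, eps):
    """Bisection with an iteration counter."""
    n = 0
    while b - a >= eps:
        c = (a + b) / 2
        a, b = (a, c) if f(a) * f(c) < 0 else (c, b)
        n += 1
    return (a + b) / 2, n

def newton_count(f, df, x0, eps, max_iter=50):
    """Newton's method with an iteration counter."""
    x, n = x0, 0
    while n < max_iter:
        x_new = x - f(x) / df(x)
        n += 1
        if abs(x_new - x) <= eps:
            return x_new, n
        x = x_new
    raise RuntimeError("no convergence")

f = lambda x: x**3 - 3 * x**2 + 6 * x - 5          # variant 1; f(1) = -1, f(2) = 3
df = lambda x: 3 * x**2 - 6 * x + 6
eps = 0.5e-3
print(bisection_count(f, 1.0, 2.0, eps))            # root near 1.32 after about 11 bisections
print(newton_count(f, df, 2.0, eps))                # x0 = 2 satisfies f(x0)*f''(x0) > 0; a few steps
```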

Task options

1. x^3 - 3x^2 + 6x - 5 = 0    2. x^3 + sin x - 12x - 1 = 0

3. x^3 - 3x^2 - 14x - 8 = 0    4. 3x + cos x + 1 = 0

5. x^2 + 4 sin x - 1 = 0    6. 4x - ln x = 5

7. x^6 - 3x^2 + x - 1 = 0    8. x^3 - 0.1x^2 + 0.3x - 0.6 = 0

9.    10. (x - 1)^3 + 0.5e^x = 0

11.    12. x^5 - 3x^2 + 1 = 0

13. x^3 - 4x^2 - 10x - 10 = 0    14.

15.    16.

19.    20.

23.    24. x^4 - 2.9x^3 + 0.1x^2 + 5.8x - 4.2 = 0

25. x^4 + 2.83x^3 - 4.5x^2 - 64x - 20 = 0    26.

METHODS FOR SOLVING A SYSTEM OF NONLINEAR EQUATIONS

1. Formulation of the problem

Let it be required to solve a system of n nonlinear equations:

F_1(x_1, x_2, …, x_n) = 0,
F_2(x_1, x_2, …, x_n) = 0,
…
F_n(x_1, x_2, …, x_n) = 0. (1)

There are no direct methods for solving system (1). Only in some cases can this system be solved directly. For example, for the case of two equations, it is sometimes possible to express one unknown variable in terms of another and thus reduce the problem to solving one nonlinear equation with respect to one unknown.

The system of equations (1) can be written briefly in vector form:

F(X) = 0, (2)

where X = (x_1, x_2, …, x_n) is the vector of unknowns and F = (F_1, F_2, …, F_n) is the vector of left-hand sides.

Equation (2) can have one or more roots in the domain D. It is required to establish the existence of the roots of the equation and find the approximate values ​​of these roots. To find the roots, iterative methods are usually used, in which the choice of the initial approximation is of fundamental importance. The initial approximation is sometimes known from physical considerations. In the case of two unknowns, the initial approximation can be found graphically: plot the curves f 1 (x 1 , x 2)=0 and f 2 (x 1 , x 2)=0 on the plane (x 1 , x 2) and find their intersection points. For three or more variables (as well as for complex roots), there are no satisfactory ways to select the initial approximation.

Let us consider two main iterative methods for solving the system of equations (1), (2) - the simple iteration method and Newton's method.

2. Methods for solving a system of nonlinear equations

2.1 Simple iteration method

Let us represent system (1) in the form

x_1 = g_1(x_1, x_2, …, x_n),
x_2 = g_2(x_1, x_2, …, x_n),
…
x_n = g_n(x_1, x_2, …, x_n), (3)

or in vector form:

X = G(X). (4)

The algorithm of the simple iteration method is as follows. We choose some zero approximation

X^(0) = (x_1^(0), x_2^(0), …, x_n^(0)).

The next approximation is found by the formula

X^(k+1) = G(X^(k)),

or in more detail:

x_i^(k+1) = g_i(x_1^(k), x_2^(k), …, x_n^(k)), i = 1, 2, …, n. (5)

The iterative process (5) continues until the changes of all unknowns in two successive iterations become small, i.e.

|x_i^(k+1) - x_i^(k)| <= ε, i = 1, 2, …, n.

In practice, instead of the last condition, the inequality

||X^(k+1) - X^(k)|| <= ε (6)

is often used, where ||Y|| is the root-mean-square norm of an n-dimensional vector Y, i.e.

||Y|| = ( (1/n) Σ_{i=1}^{n} y_i^2 )^(1/2).

When using this method, success is largely determined by a good choice of initial approximation: it should be close enough to the true solution. Otherwise, the iterative process may not converge. If the process converges, then its rate of convergence is linear.
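A minimal Python/NumPy sketch of the scheme (5)-(6); the vector iterating function G, the starting vector x0 and the tolerance eps are assumptions supplied by the caller, and the stopping test uses the RMS norm defined above. The toy system in the usage lines is an illustrative contraction, not taken from the task variants.

```python
import numpy as np

def fixed_point_system(G, x0, eps=1e-6, max_iter=500):
    """Iterate X^(k+1) = G(X^(k)) until the RMS norm of the change is <= eps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.asarray(G(x), dtype=float)
        if np.sqrt(np.mean((x_new - x) ** 2)) <= eps:   # condition (6), RMS norm
            return x_new
        x = x_new
    raise RuntimeError("simple iteration for the system did not converge")


# usage sketch: x = 0.5*cos(y), y = 0.5*sin(x)  (a contracting toy system)
G = lambda v: np.array([0.5 * np.cos(v[1]), 0.5 * np.sin(v[0])])
print(fixed_point_system(G, [0.0, 0.0]))
```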

2.2. Newton's method

In the translated literature, you can find the name Newton-Raphson method. This method converges much faster than the simple iteration method.

Let some approximation X^(k) to the root X* be known, so that X* = X^(k) + ΔX.

Then the original system (2) can be written as follows:

F(X^(k) + ΔX) = 0. (7)

Expanding equation (7) in a Taylor series in the vicinity of the point X^(k) and restricting ourselves to the terms linear in the deviation ΔX, we obtain

F(X^(k)) + F'(X^(k))·ΔX = 0,

or in coordinate form:

F_i(X^(k)) + Σ_{j=1}^{n} (∂F_i/∂x_j)(X^(k))·Δx_j = 0, i = 1, 2, …, n. (8)

System (8) can be rewritten as

Σ_{j=1}^{n} (∂F_i/∂x_j)(X^(k))·Δx_j = -F_i(X^(k)), i = 1, 2, …, n. (9)

The resulting system (9) is a system of linear algebraic equations with respect to the increments Δx_1, Δx_2, …, Δx_n.

The values of the functions F_1, F_2, …, F_n and their derivatives in (9) are calculated at the point

X = X^(k).

The determinant of system (9) is the Jacobian J:

J = det [∂F_i/∂x_j], i, j = 1, 2, …, n. (10)

For a unique solution of the system of equations (9) to exist, it must be different from zero. Having solved system (9), for example by the Gauss method, we find the new approximation

X^(k+1) = X^(k) + ΔX.

We check condition (6). If it is not satisfied, we also find the Jacobian (10) with a new approximation and again solve (9), thus, we find the 2nd approximation, and so on.

Iterations stop as soon as condition (6) is satisfied.
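A minimal Python/NumPy sketch of one way to implement this scheme: the vector function F and its Jacobian matrix J are assumed to be supplied by the caller, and the linear system (9) is solved with numpy.linalg.solve rather than an explicit Gauss elimination. The two-equation usage example (unit circle intersected with y = x^3, initial guess from a rough plot) is an illustration, not one of the task variants.

```python
import numpy as np

def newton_system(F, J, x0, eps=1e-8, max_iter=50):
    """Newton's method for F(X) = 0: solve J(X^(k))*dX = -F(X^(k)), then X^(k+1) = X^(k) + dX."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(np.asarray(J(x)), -np.asarray(F(x)))  # linear system (9)
        x = x + dx
        if np.sqrt(np.mean(dx ** 2)) <= eps:                       # stopping condition (6)
            return x
    raise RuntimeError("Newton's method for the system did not converge")


# usage sketch: x^2 + y^2 = 1, y = x^3
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 1, v[1] - v[0] ** 3])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [-3 * v[0] ** 2, 1.0]])
print(newton_system(F, J, [1.0, 0.5]))
```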

Using Newton's method, find solutions to a system of nonlinear equations with a given accuracy. Examine the convergence of the iterative process.

Task options

1 2

3 4

5 6

7 8

9 10

11 12

13 14.

15. 16.

17. 18.

19. 20.

21. 22.