Finding the extrema of a function using the method of Lagrange multipliers (conditional extrema)

Let us first consider the case of a function of two variables. A conditional extremum of the function $z=f(x,y)$ at the point $M_0(x_0;y_0)$ is an extremum of this function attained under the condition that the variables $x$ and $y$ in a neighborhood of this point satisfy the constraint equation $\varphi(x,y)=0$.

The name "conditional" extremum is due to the fact that the additional condition $\varphi(x,y)=0$ is imposed on the variables. If it is possible to express one variable in terms of another from the connection equation, then the problem of determining the conditional extremum is reduced to the problem of the usual extremum of a function of one variable. For example, if $y=\psi(x)$ follows from the constraint equation, then substituting $y=\psi(x)$ into $z=f(x,y)$, we get a function of one variable $z=f\left (x,\psi(x)\right)$. In the general case, however, this method is of little use, so a new algorithm is required.

Method of Lagrange multipliers for functions of two variables.

The method of Lagrange multipliers consists in the following: to find the conditional extremum, one composes the Lagrange function $F(x,y)=f(x,y)+\lambda\varphi(x,y)$ (the parameter $\lambda$ is called the Lagrange multiplier). The necessary extremum conditions are given by a system of equations from which the stationary points are determined:

$$\left\{\begin{aligned} & \frac{\partial F}{\partial x}=0;\\ & \frac{\partial F}{\partial y}=0;\\ & \varphi(x,y)=0.\end{aligned}\right.$$

The nature of the extremum is determined by the sign of $d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2$. If at a stationary point $d^2F > 0$, then the function $z=f(x,y)$ has a conditional minimum at this point, but if $d^2F < 0$, then a conditional maximum.

There is another way to determine the nature of the extremum. From the constraint equation we get $\varphi_{x}^{'}dx+\varphi_{y}^{'}dy=0$, i.e. $dy=-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx$ (assuming $\varphi_{y}^{'}\neq 0$), so at any stationary point we have:

$$d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=F_{xx}^{''}dx^2+2F_{xy}^{''}dx\left(-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx\right)+F_{yy}^{''}\left(-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx\right)^2=\\ =-\frac{dx^2}{\left(\varphi_{y}^{'}\right)^2}\cdot\left(-\left(\varphi_{y}^{'}\right)^2 F_{xx}^{''}+2\varphi_{x}^{'}\varphi_{y}^{'}F_{xy}^{''}-\left(\varphi_{x}^{'}\right)^2 F_{yy}^{''}\right)$$

The second factor (in parentheses) can be represented in the following form:

$$H=\left|\begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array}\right|$$

The elements of the block $\left|\begin{array}{cc} F_{xx}^{''} & F_{xy}^{''} \\ F_{xy}^{''} & F_{yy}^{''} \end{array}\right|$ form the Hessian of the Lagrange function. With this notation $d^2F=-\frac{dx^2}{\left(\varphi_{y}^{'}\right)^2}\cdot H$, so if $H > 0$ then $d^2F < 0$, which indicates a conditional maximum. Similarly, if $H < 0$ then $d^2F > 0$, i.e. we have a conditional minimum of the function $z=f(x,y)$.

A note on the form of the determinant $H$.

$$H=-\left|\begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array}\right|$$

In this situation, the rule formulated above changes as follows: if $H > 0$, then the function has a conditional minimum, and for $H < 0$ we obtain a conditional maximum of the function $z=f(x,y)$. These nuances should be kept in mind when solving problems.

Algorithm for studying a function of two variables for a conditional extremum

  1. Compose the Lagrange function $F(x,y)=f(x,y)+\lambda\varphi(x,y)$
  2. Solve the system $\left\{\begin{aligned} & \frac{\partial F}{\partial x}=0;\\ & \frac{\partial F}{\partial y}=0;\\ & \varphi(x,y)=0.\end{aligned}\right.$
  3. Determine the nature of the extremum at each of the stationary points found in the previous step (a computational sketch of the whole algorithm follows this list). To do this, use either of the following methods:
    • Compose the determinant $H$ and find out its sign
    • Taking into account the constraint equation, calculate the sign of $d^2F$
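
To illustrate, here is a minimal computational sketch of this algorithm in Python with sympy (my addition); the objective $f=xy$ and the constraint $x+y-1=0$ are hypothetical stand-ins, not taken from the examples below.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

# Hypothetical data: f = x*y subject to phi = x + y - 1 = 0.
f = x*y
phi = x + y - 1

# Step 1: compose the Lagrange function F = f + lambda*phi.
F = f + lam*phi

# Step 2: solve the stationarity system F_x = 0, F_y = 0, phi = 0.
points = sp.solve([sp.diff(F, x), sp.diff(F, y), phi], [x, y, lam], dict=True)

# Step 3: classify each stationary point via the bordered determinant H;
# with this sign convention H > 0 means a conditional maximum, H < 0 a minimum.
H = sp.Matrix([
    [0,               sp.diff(phi, x),  sp.diff(phi, y)],
    [sp.diff(phi, x), sp.diff(F, x, x), sp.diff(F, x, y)],
    [sp.diff(phi, y), sp.diff(F, x, y), sp.diff(F, y, y)],
]).det()

for p in points:
    kind = 'maximum' if H.subs(p) > 0 else 'minimum'
    print(p, '-> conditional', kind, ', z =', f.subs(p))
```

For this stand-in the sketch reports a conditional maximum $z=1/4$ at $x=y=1/2$, which agrees with the hand computation.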

Lagrange multiplier method for functions of n variables

Suppose we have a function of $n$ variables $z=f(x_1,x_2,\ldots,x_n)$ and $m$ constraint equations ($n > m$):

$$\varphi_1(x_1,x_2,\ldots,x_n)=0; \; \varphi_2(x_1,x_2,\ldots,x_n)=0,\ldots,\varphi_m(x_1,x_2,\ldots,x_n)=0.$$

Denoting the Lagrange multipliers as $\lambda_1,\lambda_2,\ldots,\lambda_m$, we compose the Lagrange function:

$$F(x_1,x_2,\ldots,x_n,\lambda_1,\lambda_2,\ldots,\lambda_m)=f+\lambda_1\varphi_1+\lambda_2\varphi_2+\ldots+\lambda_m\varphi_m$$

The necessary conditions for the presence of a conditional extremum are given by a system of equations from which the coordinates of the stationary points and the values of the Lagrange multipliers are found:

$$\left\{\begin{aligned} & \frac{\partial F}{\partial x_i}=0 \; (i=\overline{1,n});\\ & \varphi_j=0 \; (j=\overline{1,m}). \end{aligned}\right.$$

As before, one can find out whether the function has a conditional minimum or a conditional maximum at the found point using the sign of $d^2F$. If at the found point $d^2F > 0$, then the function has a conditional minimum, but if $d^2F < 0$, then a conditional maximum. One can proceed differently by considering the following matrix:

Consider the $(m+n)\times(m+n)$ matrix $L$ obtained by bordering the Hessian of the Lagrange function with the gradients of the constraints:

$$L=\begin{pmatrix} O & J \\ J^T & H_F \end{pmatrix}, \qquad J=\begin{pmatrix}\frac{\partial\varphi_1}{\partial x_1} & \ldots & \frac{\partial\varphi_1}{\partial x_n}\\ \vdots & & \vdots \\ \frac{\partial\varphi_m}{\partial x_1} & \ldots & \frac{\partial\varphi_m}{\partial x_n}\end{pmatrix},$$

where $O$ is the $m\times m$ zero matrix and

$$H_F=\begin{pmatrix} \frac{\partial^2F}{\partial x_{1}^{2}} & \frac{\partial^2F}{\partial x_{1}\partial x_{2}} & \ldots & \frac{\partial^2F}{\partial x_{1}\partial x_{n}}\\ \frac{\partial^2F}{\partial x_{2}\partial x_{1}} & \frac{\partial^2F}{\partial x_{2}^{2}} & \ldots & \frac{\partial^2F}{\partial x_{2}\partial x_{n}}\\ \ldots & \ldots & \ldots & \ldots\\ \frac{\partial^2F}{\partial x_{n}\partial x_{1}} & \frac{\partial^2F}{\partial x_{n}\partial x_{2}} & \ldots & \frac{\partial^2F}{\partial x_{n}^{2}} \end{pmatrix}$$

is the Hessian of the Lagrange function (the block that was highlighted in red in the matrix $L$ in the original layout). We use the following rule:

  • If the signs of the leading principal minors $H_{2m+1},\; H_{2m+2},\ldots,H_{m+n}$ of the matrix $L$ coincide with the sign of $(-1)^m$, then the stationary point under study is a conditional minimum point of the function $z=f(x_1,x_2,x_3,\ldots,x_n)$.
  • If the signs of the leading principal minors $H_{2m+1},\; H_{2m+2},\ldots,H_{m+n}$ alternate, and the sign of the minor $H_{2m+1}$ coincides with the sign of $(-1)^{m+1}$, then the stationary point under study is a conditional maximum point of the function $z=f(x_1,x_2,x_3,\ldots,x_n)$ (a computational check of this rule is sketched after this list).
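
The following sketch (my addition) checks this minor-sign rule with sympy for the simplest case $n=2$, $m=1$, where the only minor to inspect is $H_3=\det L$; the objective and constraint are hypothetical.

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)

# Hypothetical data: f = x1^2 + x2^2 with one constraint x1 + x2 - 2 = 0 (n = 2, m = 1).
f = x1**2 + x2**2
phi = x1 + x2 - 2
F = f + lam*phi

vars_, cons = [x1, x2], [phi]
n, m = len(vars_), len(cons)

# Bordered matrix L: an m-by-m zero block, the constraint Jacobian J, the Hessian of F.
J = sp.Matrix([[sp.diff(g, v) for v in vars_] for g in cons])
H_F = sp.hessian(F, vars_)
L = sp.BlockMatrix([[sp.zeros(m, m), J], [J.T, H_F]]).as_explicit()

point = sp.solve([sp.diff(F, v) for v in vars_] + cons, vars_ + [lam], dict=True)[0]

# Leading principal minors H_{2m+1}, ..., H_{m+n}, evaluated at the stationary point.
minors = [L[:k, :k].det().subs(point) for k in range(2*m + 1, m + n + 1)]
is_min = all(sp.sign(d) == (-1)**m for d in minors)
print('minors:', minors)
print('conditional minimum' if is_min else 'not a minimum by this test')
```

Here $H_3=-4$, whose sign coincides with $(-1)^1$, so the stationary point $(1;1)$ is a conditional minimum, as expected for this convex stand-in problem.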

Example #1

Find the conditional extremum of the function $z(x,y)=x+3y$ under the condition $x^2+y^2=10$.

The geometric interpretation of this problem is as follows: it is required to find the largest and smallest value of the applicate of the plane $z=x+3y$ for the points of its intersection with the cylinder $x^2+y^2=10$.

It is somewhat difficult to express one variable in terms of another from the constraint equation and substitute it into the function $z(x,y)=x+3y$, so we will use the Lagrange method.

Denoting $\varphi(x,y)=x^2+y^2-10$, we compose the Lagrange function:

$$F(x,y)=z(x,y)+\lambda \varphi(x,y)=x+3y+\lambda(x^2+y^2-10);\\ \frac{\partial F}{\partial x}=1+2\lambda x; \; \frac{\partial F}{\partial y}=3+2\lambda y.$$

Let us write down the system of equations for determining the stationary points of the Lagrange function:

$$\left\{\begin{aligned} & 1+2\lambda x=0;\\ & 3+2\lambda y=0;\\ & x^2+y^2-10=0. \end{aligned}\right.$$

If we assume $\lambda=0$, then the first equation becomes $1=0$. The resulting contradiction says that $\lambda\neq 0$. Under the condition $\lambda\neq 0$, from the first and second equations we have $x=-\frac{1}{2\lambda}$, $y=-\frac{3}{2\lambda}$. Substituting the obtained values into the third equation, we get:

$$\left(-\frac{1}{2\lambda}\right)^2+\left(-\frac{3}{2\lambda}\right)^2-10=0;\\ \frac{1}{4\lambda^2}+\frac{9}{4\lambda^2}=10; \; \lambda^2=\frac{1}{4}; \; \left[\begin{aligned} & \lambda_1=-\frac{1}{2};\\ & \lambda_2=\frac{1}{2}. \end{aligned}\right.\\ \begin{aligned} & \lambda_1=-\frac{1}{2}; \; x_1=-\frac{1}{2\lambda_1}=1; \; y_1=-\frac{3}{2\lambda_1}=3;\\ & \lambda_2=\frac{1}{2}; \; x_2=-\frac{1}{2\lambda_2}=-1; \; y_2=-\frac{3}{2\lambda_2}=-3.\end{aligned}$$

So, the system has two solutions: $x_1=1;\; y_1=3;\; \lambda_1=-\frac{1}{2}$ and $x_2=-1;\; y_2=-3;\; \lambda_2=\frac{1}{2}$. Let us find out the nature of the extremum at each stationary point: $M_1(1;3)$ and $M_2(-1;-3)$. To do this, we calculate the determinant $H$ at each of the points.

$$ \varphi_(x)^(")=2x;\; \varphi_(y)^(")=2y;\; F_(xx)^("")=2\lambda;\; F_(xy)^("")=0;\; F_(yy)^("")=2\lambda.\\ H=\left| \begin(array) (ccc) 0 & \varphi_(x)^(") & \varphi_(y)^(")\\ \varphi_(x)^(") & F_(xx)^("") & F_(xy)^("") \\ \varphi_(y)^(") & F_(xy)^("") & F_(yy)^("") \end(array) \right|= \left| \begin(array) (ccc) 0 & 2x & 2y\\ 2x & 2\lambda & 0 \\ 2y & 0 & 2\lambda \end(array) \right|= 8\cdot\left| \begin(array) (ccc) 0 & x & y\\ x & \lambda & 0 \\ y & 0 & \lambda \end(array) \right| $$

At the point $M_1(1;3)$ we get: $H=8\cdot\left|\begin{array}{ccc} 0 & x & y\\ x & \lambda & 0 \\ y & 0 & \lambda \end{array}\right|= 8\cdot\left|\begin{array}{ccc} 0 & 1 & 3\\ 1 & -1/2 & 0 \\ 3 & 0 & -1/2 \end{array}\right|=40 > 0$, so at the point $M_1(1;3)$ the function $z(x,y)=x+3y$ has a conditional maximum, $z_{\max}=z(1;3)=10$.

Similarly, at the point $M_2(-1;-3)$ we find: $H=8\cdot\left|\begin{array}{ccc} 0 & x & y\\ x & \lambda & 0 \\ y & 0 & \lambda \end{array}\right|= 8\cdot\left|\begin{array}{ccc} 0 & -1 & -3\\ -1 & 1/2 & 0 \\ -3 & 0 & 1/2 \end{array}\right|=-40$. Since $H < 0$, at the point $M_2(-1;-3)$ we have a conditional minimum of the function $z(x,y)=x+3y$, namely $z_{\min}=z(-1;-3)=-10$.

I note that instead of calculating the value of the determinant $H$ at each point, it is much more convenient to expand it in general form. In order not to clutter the text with details, I will hide this method in a note.

Expansion of the determinant $H$ in general form.

$$H=8\cdot\left|\begin{array}{ccc} 0 & x & y\\ x & \lambda & 0\\ y & 0 & \lambda\end{array}\right| =8\cdot\left(-\lambda y^2-\lambda x^2\right) =-8\lambda\cdot\left(y^2+x^2\right).$$

In principle, it is already clear what sign $H$ has. Since neither of the points $M_1$, $M_2$ coincides with the origin, $y^2+x^2>0$. Therefore, the sign of $H$ is opposite to the sign of $\lambda$. One can also complete the calculations:

$$\begin{aligned} &H(M_1)=-8\cdot\left(-\frac{1}{2}\right)\cdot\left(3^2+1^2\right)=40;\\ &H(M_2)=-8\cdot\frac{1}{2}\cdot\left((-3)^2+(-1)^2\right)=-40. \end{aligned}$$

The question about the nature of the extremum at the stationary points $M_1(1;3)$ and $M_2(-1;-3)$ can be solved without using the determinant $H$. Find the sign of $d^2F$ at each stationary point:

$$d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=2\lambda \left(dx^2+dy^2\right)$$

I note that the notation $dx^2$ means exactly $dx$ raised to the second power, i.e. $\left(dx\right)^2$. Hence we have $dx^2+dy^2>0$, so for $\lambda_1=-\frac{1}{2}$ we get $d^2F < 0$. Consequently, the function has a conditional maximum at the point $M_1(1;3)$. Similarly, at the point $M_2(-1;-3)$ we obtain a conditional minimum of the function $z(x,y)=x+3y$. Note that to determine the sign of $d^2F$ it was not necessary to take into account the relation between $dx$ and $dy$, because the sign of $d^2F$ is obvious without additional transformations. In the following example, determining the sign of $d^2F$ will already require taking the relation between $dx$ and $dy$ into account.

Answer: at the point $(-1;-3)$ the function has a conditional minimum, $z_{\min}=-10$. At the point $(1;3)$ the function has a conditional maximum, $z_{\max}=10$.
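
As a cross-check of this answer, here is a small numerical sketch (my addition, not part of the original solution) using scipy's constrained minimize; the starting point is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

# Objective z = x + 3y and the constraint x^2 + y^2 = 10 from Example 1.
z = lambda v: v[0] + 3*v[1]
cons = [{'type': 'eq', 'fun': lambda v: v[0]**2 + v[1]**2 - 10}]

x0 = np.array([1.0, 1.0])  # arbitrary starting point
lo = minimize(z, x0, constraints=cons)                # conditional minimum
hi = minimize(lambda v: -z(v), x0, constraints=cons)  # conditional maximum

print(lo.x, z(lo.x))   # expected: approximately (-1, -3), z_min = -10
print(hi.x, z(hi.x))   # expected: approximately ( 1,  3), z_max =  10
```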

Example #2

Find the conditional extremum of the function $z(x,y)=3y^3+4x^2-xy$ under the condition $x+y=0$.

The first way (the method of Lagrange multipliers)

Denoting $\varphi(x,y)=x+y$, we compose the Lagrange function: $F(x,y)=z(x,y)+\lambda \varphi(x,y)=3y^3+4x^2-xy+\lambda(x+y)$.

$$\frac{\partial F}{\partial x}=8x-y+\lambda; \; \frac{\partial F}{\partial y}=9y^2-x+\lambda.\\ \left\{\begin{aligned} & 8x-y+\lambda=0;\\ & 9y^2-x+\lambda=0;\\ & x+y=0.\end{aligned}\right.$$

Solving the system, we get: $x_1=0$, $y_1=0$, $\lambda_1=0$ and $x_2=\frac{10}{9}$, $y_2=-\frac{10}{9}$, $\lambda_2=-10$. We have two stationary points: $M_1(0;0)$ and $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$. Let us find out the nature of the extremum at each stationary point using the determinant $H$.

$$H=\left|\begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array}\right|= \left|\begin{array}{ccc} 0 & 1 & 1\\ 1 & 8 & -1 \\ 1 & -1 & 18y \end{array}\right|=-10-18y$$

At the point $M_1(0;0)$ we have $H=-10-18\cdot 0=-10 < 0$, so $M_1(0;0)$ is a conditional minimum point of the function $z(x,y)=3y^3+4x^2-xy$, with $z_{\min}=0$. At the point $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$ we have $H=10 > 0$, so at this point the function has a conditional maximum, $z_{\max}=\frac{500}{243}$.

We investigate the nature of the extremum at each of the points by a different method, based on the sign of $d^2F$:

$$d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=8dx^2-2dxdy+18ydy^2$$

From the constraint equation $x+y=0$ we have: $d(x+y)=0$, $dx+dy=0$, $dy=-dx$.

$$d^2 F=8dx^2-2dxdy+18ydy^2=8dx^2-2dx(-dx)+18y(-dx)^2=(10+18y)dx^2$$

Since $d^2F \Bigr|_{M_1}=10\,dx^2 > 0$, $M_1(0;0)$ is a conditional minimum point of the function $z(x,y)=3y^3+4x^2-xy$. Similarly, $d^2F \Bigr|_{M_2}=-10\,dx^2 < 0$, i.e. $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$ is a conditional maximum point.

Second way

From the constraint equation $x+y=0$ we get: $y=-x$. Substituting $y=-x$ into the function $z(x,y)=3y^3+4x^2-xy$, we obtain some function of the variable $x$. Let's denote this function as $u(x)$:

$$u(x)=z(x,-x)=3\cdot(-x)^3+4x^2-x\cdot(-x)=-3x^3+5x^2.$$

Thus, we reduced the problem of finding the conditional extremum of a function of two variables to the problem of determining the extremum of a function of one variable.

$$ u_(x)^(")=-9x^2+10x;\\ -9x^2+10x=0; \; x\cdot(-9x+10)=0;\\ x_1=0; \ ;y_1=-x_1=0;\\ x_2=\frac(10)(9);\;y_2=-x_2=-\frac(10)(9).$$

We obtain the points $M_1(0;0)$ and $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$. Further investigation is familiar from the course of differential calculus of functions of one variable. Examining the sign of $u_{xx}^{''}$ at each stationary point, or checking the sign change of $u_{x}^{'}$ at the found points, we arrive at the same conclusions as with the first method. For example, let us check the sign of $u_{xx}^{''}$:

$$u_(xx)^("")=-18x+10;\\ u_(xx)^("")(M_1)=10;\;u_(xx)^("")(M_2)=- 10.$$

Since $u_(xx)^("")(M_1)>0$, then $M_1$ is the minimum point of the function $u(x)$, while $u_(\min)=u(0)=0$ . Since $u_(xx)^("")(M_2)<0$, то $M_2$ - точка максимума функции $u(x)$, причём $u_{\max}=u\left(\frac{10}{9}\right)=\frac{500}{243}$.

The values of the function $u(x)$ under the given constraint coincide with the values of the function $z(x,y)$, i.e. the found extrema of the function $u(x)$ are the sought conditional extrema of the function $z(x,y)$.

Answer: at the point $(0;0)$ the function has a conditional minimum, $z_{\min}=0$. At the point $\left(\frac{10}{9};-\frac{10}{9}\right)$ the function has a conditional maximum, $z_{\max}=\frac{500}{243}$.
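
The substitution of the second way is easy to mechanize; here is a minimal sympy sketch (my addition, using the data of this example).

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Example 2 by substitution: from the constraint x + y = 0 we have y = -x.
z = lambda xx, yy: 3*yy**3 + 4*xx**2 - xx*yy
u = sp.expand(z(x, -x))                  # u(x) = -3x^3 + 5x^2

for c in sp.solve(sp.diff(u, x), x):     # stationary points of u
    kind = 'minimum' if sp.diff(u, x, 2).subs(x, c) > 0 else 'maximum'
    print(f'x = {c}, y = {-c}: conditional {kind}, z = {u.subs(x, c)}')
```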

Let's consider one more example, in which we find out the nature of the extremum by determining the sign of $d^2F$.

Example #3

Find the maximum and minimum values of the function $z=5xy-4$ if the variables $x$ and $y$ are positive and satisfy the constraint equation $\frac{x^2}{8}+\frac{y^2}{2}-1=0$.

Compose the Lagrange function: $F=5xy-4+\lambda \left(\frac{x^2}{8}+\frac{y^2}{2}-1 \right)$. Find the stationary points of the Lagrange function:

$$ F_(x)^(")=5y+\frac(\lambda x)(4); \; F_(y)^(")=5x+\lambda y.\\ \left \( \begin(aligned) & 5y+\frac(\lambda x)(4)=0;\\ & 5x+\lambda y=0;\\ & \frac(x^2)(8)+\frac(y^2)(2)- 1=0;\\ & x > 0; \; y > 0. \end(aligned) \right.$$

All further transformations are carried out taking into account $x > 0$, $y > 0$ (this is stipulated in the condition of the problem). From the second equation we express $\lambda=-\frac{5x}{y}$ and substitute the found value into the first equation: $5y-\frac{5x}{y}\cdot \frac{x}{4}=0$, $4y^2-x^2=0$, $x=2y$. Substituting $x=2y$ into the third equation, we get: $\frac{4y^2}{8}+\frac{y^2}{2}-1=0$, $y^2=1$, $y=1$.

Since $y=1$, then $x=2$, $\lambda=-10$. The nature of the extremum at the point $(2;1)$ is determined from the sign of $d^2F$.

$$ F_(xx)^("")=\frac(\lambda)(4); \; F_(xy)^("")=5; \; F_(yy)^("")=\lambda. $$

Since $\frac{x^2}{8}+\frac{y^2}{2}-1=0$, we have:

$$d\left(\frac{x^2}{8}+\frac{y^2}{2}-1\right)=0; \; d\left(\frac{x^2}{8}\right)+d\left(\frac{y^2}{2}\right)=0; \; \frac{x}{4}dx+ydy=0; \; dy=-\frac{x\,dx}{4y}.$$

In principle, here you can immediately substitute the coordinates of the stationary point $x=2$, $y=1$ and the parameter $\lambda=-10$, thus obtaining:

$$ F_(xx)^("")=\frac(-5)(2); \; F_(xy)^("")=-10; \; dy=-\frac(dx)(2).\\ d^2 F=F_(xx)^("")dx^2+2F_(xy)^("")dxdy+F_(yy)^(" ")dy^2=-\frac(5)(2)dx^2+10dx\cdot \left(-\frac(dx)(2) \right)-10\cdot \left(-\frac(dx) (2) \right)^2=\\ =-\frac(5)(2)dx^2-5dx^2-\frac(5)(2)dx^2=-10dx^2. $$

However, in other problems for a conditional extremum, there may be several stationary points. In such cases, it is better to represent $d^2F$ in a general form, and then substitute the coordinates of each of the found stationary points into the resulting expression:

$$d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=\frac{\lambda}{4}dx^2+10\cdot dx\cdot \frac{-x\,dx}{4y} +\lambda\cdot \left(-\frac{x\,dx}{4y} \right)^2=\\ =\frac{\lambda}{4}dx^2-\frac{5x}{2y}dx^2+\lambda \cdot \frac{x^2dx^2}{16y^2}=\left(\frac{\lambda}{4}-\frac{5x}{2y}+\frac{\lambda \cdot x^2}{16y^2} \right)\cdot dx^2$$

Substituting $x=2$, $y=1$, $\lambda=-10$, we get:

$$d^2 F=\left(\frac{-10}{4}-\frac{10}{2}-\frac{10 \cdot 4}{16} \right)\cdot dx^2=-10dx^2.$$

Since $d^2F=-10\cdot dx^2 < 0$, the point $(2;1)$ is a conditional maximum point of the function $z=5xy-4$, with $z_{\max}=10-4=6$.

Answer: at the point $(2;1)$ the function has a conditional maximum, $z_{\max}=6$.
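
For completeness, a small sympy sketch (my addition) that reproduces this example symbolically; the positivity assumptions on $x$ and $y$ mirror the problem statement.

```python
import sympy as sp

# Positivity of x and y mirrors the statement of Example 3.
x, y = sp.symbols('x y', positive=True)
lam = sp.symbols('lambda', real=True)

f = 5*x*y - 4
phi = x**2/8 + y**2/2 - 1
F = f + lam*phi

sol = sp.solve([sp.diff(F, x), sp.diff(F, y), phi], [x, y, lam], dict=True)
print(sol)             # expected: [{x: 2, y: 1, lambda: -10}]
print(f.subs(sol[0]))  # expected: z_max = 6
```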

In the next part, we will consider the application of the Lagrange method for functions of a larger number of variables.

The method for determining a conditional extremum begins with constructing an auxiliary Lagrange function which, in the region of feasible solutions, reaches a maximum for the same values of the variables $x_1, x_2, \ldots, x_n$ as the objective function $z$. Let the problem of determining the conditional extremum of the function $z=f(X)$ be given under the constraints $\varphi_i(x_1, x_2, \ldots, x_n)=0$, $i=1, 2, \ldots, m$, $m < n$.

We compose the function

$$L(x_1,\ldots,x_n,\lambda_1,\ldots,\lambda_m)=f(x_1,\ldots,x_n)+\sum_{i=1}^{m}\lambda_i\varphi_i(x_1,\ldots,x_n),$$

which is called the Lagrange function; the $\lambda_i$ are constant factors (Lagrange multipliers). Note that the Lagrange multipliers can be given an economic meaning. If $f(x_1, x_2, \ldots, x_n)$ is the income corresponding to the plan $X=(x_1, x_2, \ldots, x_n)$, and the function $\varphi_i(x_1, x_2, \ldots, x_n)$ gives the costs of the $i$-th resource corresponding to this plan, then $\lambda_i$ is the price (valuation) of the $i$-th resource, which characterizes the change in the extreme value of the objective function depending on the change in the amount of the $i$-th resource (a marginal estimate). $L(X)$ is a function of $n+m$ variables $(x_1, x_2, \ldots, x_n, \lambda_1, \lambda_2, \ldots, \lambda_m)$. Determining the stationary points of this function leads to solving the system of equations

$$\left\{\begin{aligned} & \frac{\partial L}{\partial x_i}=\frac{\partial f}{\partial x_i}+\sum_{j=1}^{m}\lambda_j\frac{\partial\varphi_j}{\partial x_i}=0, \quad i=1,2,\ldots,n,\\ & \frac{\partial L}{\partial\lambda_i}=\varphi_i(x_1,\ldots,x_n)=0, \quad i=1,2,\ldots,m. \end{aligned}\right.$$

It is easy to see that for any feasible $X$ (that is, whenever all $\varphi_i(X)=0$) we have $L(X)=f(X)$. Thus, the problem of finding the conditional extremum of the function $z=f(X)$ reduces to finding a local extremum of the function $L(X)$. If a stationary point is found, then the question of the existence of an extremum in the simplest cases is solved on the basis of the sufficient conditions for an extremum: the study of the sign of the second differential $d^2L(X)$ at the stationary point, provided that the variable increments $\Delta x_i$ are related by the relations

$$\sum_{i=1}^{n}\frac{\partial\varphi_j}{\partial x_i}\,\Delta x_i=0, \qquad j=1,2,\ldots,m,$$

obtained by differentiating the constraint equations.

Solving a system of nonlinear equations with two unknowns using the Solver tool

The Solver add-in allows you to find a solution to a system of nonlinear equations with two unknowns:

$$\left\{\begin{aligned} & f_1(x,y)=C_1,\\ & f_2(x,y)=C_2, \end{aligned}\right. \qquad (10)$$

where $f_1(x,y)$, $f_2(x,y)$ are nonlinear functions of the variables $x$ and $y$, and $C_1$, $C_2$ are arbitrary constants.

It is known that a pair $(x, y)$ is a solution of the system of equations (10) if and only if it is a solution of the following equation in two unknowns:

$$\left(f_1(x,y)-C_1\right)^2+\left(f_2(x,y)-C_2\right)^2=0. \qquad (11)$$

On the other hand, the solutions of system (10) are the points of intersection of the two curves $f_1(x,y)=C_1$ and $f_2(x,y)=C_2$ in the $XOY$ plane.

From this follows a method for finding the roots of a system of nonlinear equations:

    1. Determine (at least approximately) the interval in which a solution of the system of equations (10) or of equation (11) exists. Here it is necessary to take into account the type of equations included in the system, the domain of definition of each equation, and so on. Sometimes a suitable initial approximation of the solution is selected;

    2. Tabulate equation (11) in the variables $x$ and $y$ over the selected interval, or plot the graphs of the functions $f_1(x,y)=C_1$ and $f_2(x,y)=C_2$ (system (10)).

    3. Localize the presumed roots of the system of equations: find several minimal values in the tabulation table for equation (11), or determine the intersection points of the curves in system (10).

    4. Find the roots of the system of equations (10) using the Solver add-in (a Python analogue is sketched below).
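
For readers working outside spreadsheets, a comparable sketch in Python (my addition): scipy's fsolve plays the role of the Solver add-in here, and the system $x^2+y^2=5$, $xy=2$ is a hypothetical stand-in for (10).

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical system (10): f1(x, y) = x^2 + y^2 = 5, f2(x, y) = x*y = 2.
def system(v):
    x, y = v
    return [x**2 + y**2 - 5, x*y - 2]

# Step 4 of the procedure: refine a localized root from an initial approximation.
root = fsolve(system, x0=[1.0, 1.5])
print(root)          # expected: approximately (1, 2)
print(system(root))  # residuals near zero confirm the root
```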

The Lagrange interpolation method

Suppose an interpolation polynomial is sought for a table of nodes $(x_i, y_i)$, $i=0,1,\ldots,n$. To find the polynomial means to determine the values of its coefficients. To do this, using the interpolation conditions, one can form a system of linear algebraic equations (SLAE).

The determinant of this SLAE is usually called the Vandermonde determinant. The Vandermonde determinant is not equal to zero when $x_i \neq x_j$ for $i \neq j$, that is, when there are no coincident nodes in the table. Thus, it can be argued that the SLAE has a solution and that this solution is unique. Solving the SLAE and determining the unknown coefficients, one can construct the interpolation polynomial.

When interpolating by the Lagrange method, the polynomial satisfying the interpolation conditions is constructed as a linear combination of polynomials of degree $n$:

$$L_n(x)=\sum_{i=0}^{n} y_i\, l_i(x).$$

The polynomials $l_i(x)$ are called basis polynomials. For the Lagrange polynomial to satisfy the interpolation conditions, its basis polynomials must satisfy the following conditions:

$$l_i(x_j)=\begin{cases}1, & j=i,\\ 0, & j\neq i,\end{cases} \qquad i,j=0,1,\ldots,n.$$

If these conditions are met, then for any $j$ we have:

$$L_n(x_j)=\sum_{i=0}^{n} y_i\, l_i(x_j)=y_j.$$

Thus, the fulfillment of the given conditions for the basis polynomials means that the interpolation conditions are also satisfied.

Let us determine the form of the basis polynomials based on the restrictions imposed on them.

1st condition: $l_i(x_j)=0$ for $j\neq i$; hence each node $x_j$ with $j\neq i$ is a root of $l_i(x)$, so $l_i(x)$ contains the factors $(x-x_j)$, $j\neq i$.

2nd condition: $l_i(x_i)=1$, which fixes the normalizing coefficient.

Finally, for the basis polynomial we can write:

$$l_i(x)=\prod_{j=0,\; j\neq i}^{n}\frac{x-x_j}{x_i-x_j}.$$

Then, substituting the resulting expression for the basis polynomials into the original polynomial, we obtain the final form of the Lagrange polynomial:

$$L_n(x)=\sum_{i=0}^{n} y_i \prod_{j=0,\; j\neq i}^{n}\frac{x-x_j}{x_i-x_j}.$$

A particular form of the Lagrange polynomial at $n=1$ is usually called the linear interpolation formula:

$$L_1(x)=y_0\frac{x-x_1}{x_0-x_1}+y_1\frac{x-x_0}{x_1-x_0}.$$

The Lagrange polynomial taken at $n=2$ is usually called the quadratic interpolation formula:

$$L_2(x)=y_0\frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}+y_1\frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}+y_2\frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}.$$
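
A minimal sketch of this construction in Python (my addition; the nodes are hypothetical):

```python
import numpy as np

def lagrange_poly(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial at x.

    xs, ys -- interpolation nodes and the function values at them;
    the nodes must be pairwise distinct (nonzero Vandermonde determinant).
    """
    total = 0.0
    for i in range(len(xs)):
        # Basis polynomial l_i(x): equals 1 at xs[i] and 0 at every other node.
        li = np.prod([(x - xs[j]) / (xs[i] - xs[j])
                      for j in range(len(xs)) if j != i])
        total += ys[i] * li
    return total

# Quadratic interpolation (n = 2) through three hypothetical nodes of y = x^2.
print(lagrange_poly([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # expected 2.25
```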


Method of Lagrange multipliers

    The Lagrange multiplier method is one of the methods that allow solving non-linear programming problems.

    Nonlinear programming is a branch of mathematical programming that studies methods for solving extremal problems with a nonlinear objective function and a domain of feasible solutions defined by nonlinear constraints. In economics, this corresponds to the fact that results (efficiency) increase or decrease disproportionately to changes in the scale of resource use (or, equivalently, the scale of production): for example, because production costs at enterprises divide into variable and semi-fixed (conditionally constant) components, or because demand for goods saturates, so that each subsequent unit is harder to sell than the previous one, etc.

    The problem of nonlinear programming is posed as the problem of finding the optimum of a certain objective function

    $$F(x_1,\ldots,x_n), \qquad F(x)\to\max$$

    under conditions

    $$g_j(x_1,\ldots,x_n)\ge 0, \qquad g(x)\le b, \qquad x\ge 0,$$

    where $x$ is the vector of required variables, $F(x)$ is the objective function, $g(x)$ is the constraint function (continuously differentiable), and $b$ is the vector of constraint constants.

    The solution of a nonlinear programming problem (global maximum or minimum) can belong either to the boundary or to the interior of the admissible set.

    In contrast to a linear programming problem, in a nonlinear programming problem the optimum does not necessarily lie on the boundary of the region defined by the constraints. In other words, the problem is to choose non-negative values of the variables, subject to a system of constraints in the form of inequalities, for which the maximum (or minimum) of the given function is achieved. In this case, the forms of neither the objective function nor the inequalities are stipulated. There may be different cases: the objective function is nonlinear while the constraints are linear; the objective function is linear while the constraints (at least one of them) are nonlinear; both the objective function and the constraints are nonlinear.

    The problem of non-linear programming is found in the natural sciences, engineering, economics, mathematics, business relations and the science of government.



    Nonlinear programming is connected, for example, with a basic economic problem: in the problem of allocating limited resources, one maximizes either efficiency or, if the consumer is studied, consumption, in the presence of constraints that express the conditions of resource scarcity. In such a general formulation, a mathematical statement of the problem may turn out to be impossible, but in specific applications the quantitative form of all functions can be determined directly. For example, an industrial enterprise produces plastic products. Production efficiency here is measured by profit, and the constraints are interpreted as available labor, production space, equipment productivity, etc.

    The "cost-effectiveness" method also fits into the scheme of non-linear programming. This method was developed for use in decision-making in government. The overall efficiency function is welfare. Two non-linear programming problems arise here: the first is the maximization of the effect with limited costs, the second is the minimization of costs, provided that the effect is above a certain minimum level. This problem is usually well modeled using non-linear programming.

    The results of solving the problem of nonlinear programming are helpful in making government decisions. The resulting solution is, of course, recommended, so it is necessary to investigate the assumptions and accuracy of the formulation of the nonlinear programming problem before making a final decision.

    Nonlinear problems are complex; they are often simplified by reducing them to linear ones. To do this, it is conditionally assumed that in a particular region the objective function increases or decreases in proportion to the change in the independent variables. This approach is called the method of piecewise linear approximation; it is, however, applicable only to certain types of nonlinear problems.

    Nonlinear problems under certain conditions are solved using the Lagrange function: having found its saddle point, one thereby finds the solution of the problem. Gradient methods occupy an important place among the computational algorithms of nonlinear programming. There is no universal method for nonlinear problems and, apparently, there may be none, since they are extremely diverse. Multi-extremal problems are especially difficult to solve.

    One of the methods that allow reducing a nonlinear programming problem to the solution of a system of equations is the Lagrange method of indefinite multipliers.

    Using the method of Lagrange multipliers, in essence, the necessary conditions are established that allow one to identify optimum points in optimization problems with constraints in the form of equalities. In this case, the problem with constraints is transformed into an equivalent problem of unconstrained optimization, in which some unknown parameters appear, called Lagrange multipliers.

    The Lagrange multiplier method consists in reducing problems of conditional extremum to problems of unconditional extremum of an auxiliary function, the so-called Lagrange function.

    For the problem of the extremum of the function $f(x_1, x_2,\ldots, x_n)$ under the conditions (coupling equations) $\varphi_i(x_1, x_2,\ldots, x_n)=0$, $i=1, 2,\ldots, m$, the Lagrange function has the form

    $$L(x_1, x_2,\ldots, x_n,\lambda_1,\lambda_2,\ldots,\lambda_m)=f(x_1, x_2,\ldots, x_n)+\sum_{i=1}^{m}\lambda_i \varphi_i(x_1, x_2,\ldots, x_n).$$

    The multipliers $\lambda_1, \lambda_2,\ldots,\lambda_m$ are called Lagrange multipliers.

    If the quantities $x_1, x_2,\ldots, x_n, \lambda_1, \lambda_2,\ldots,\lambda_m$ are solutions of the equations that determine the stationary points of the Lagrange function, namely, for differentiable functions, solutions of the system of equations

    $$\frac{\partial L}{\partial x_i}=0, \quad i=1,\ldots,n; \qquad \frac{\partial L}{\partial \lambda_j}=\varphi_j=0, \quad j=1,\ldots,m,$$

    then, under sufficiently general assumptions, $x_1, x_2,\ldots, x_n$ deliver an extremum of the function $f$.

    Consider the problem of minimizing a function of n variables, taking into account one constraint in the form of an equality:

    minimize $f(x_1, x_2,\ldots, x_n)$ (1)

    subject to the constraint $h_1(x_1, x_2,\ldots, x_n)=0$ (2)

    In accordance with the Lagrange multiplier method, this problem is transformed into the following unconstrained optimization problem:

    minimize $L(x,\lambda)=f(x)-\lambda h_1(x)$ (3)

    where the function $L(x;\lambda)$ is called the Lagrange function and $\lambda$ is an unknown constant called the Lagrange multiplier. No requirement is imposed on the sign of $\lambda$.

    Let, for a given value $\lambda=\lambda^0$, the unconditional minimum of the function $L(x,\lambda)$ with respect to $x$ be reached at the point $x=x^0$, and let $x^0$ satisfy the equation $h_1(x^0)=0$. Then, as is easy to see, $x^0$ minimizes (1) taking (2) into account, since for all values of $x$ satisfying (2) we have $h_1(x)=0$ and hence $L(x,\lambda)=f(x)$, so that $f(x)=L(x,\lambda)\ge L(x^0,\lambda)=f(x^0)$.

    Of course, the value $\lambda=\lambda^0$ must be chosen so that the coordinate of the unconditional minimum point $x^0$ satisfies equality (2). This can be done if, treating $\lambda$ as a variable, one finds the unconditional minimum of the function (3) as a function of $\lambda$, and then chooses the value of $\lambda$ at which equality (2) is satisfied. Let us illustrate this with a concrete example.

    Minimize $f(x)=x_1^2+x_2^2$

    subject to the constraint $h_1(x)=2x_1+x_2-2=0$.

    The corresponding unconstrained optimization problem is written as follows:

    minimize $L(x,\lambda)=x_1^2+x_2^2-\lambda(2x_1+x_2-2)$

    Solution. Equating the two components of the gradient of $L$ to zero, we obtain

    $$\frac{\partial L}{\partial x_1}=2x_1-2\lambda=0 \;\Rightarrow\; x_1^0=\lambda,$$

    $$\frac{\partial L}{\partial x_2}=2x_2-\lambda=0 \;\Rightarrow\; x_2^0=\frac{\lambda}{2}.$$

    To check whether the stationary point $x^0$ corresponds to a minimum, we calculate the elements of the Hessian matrix of the function $L(x;\lambda)$, considered as a function of $x$:

    $$\nabla_x^2 L=\begin{pmatrix} 2 & 0\\ 0 & 2\end{pmatrix},$$

    which turns out to be positive definite.

    This means that $L(x;\lambda)$ is a convex function of $x$. Therefore, the coordinates $x_1^0=\lambda$, $x_2^0=\lambda/2$ determine the global minimum point. The optimal value of $\lambda$ is found by substituting the values $x_1^0$ and $x_2^0$ into the equation $2x_1+x_2=2$, whence $2\lambda+\lambda/2=2$, or $\lambda^0=4/5$. Thus, the conditional minimum is reached at $x_1^0=4/5$ and $x_2^0=2/5$ and equals $\min f(x)=4/5$.
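
A short sympy cross-check of this example (my addition; the symbol names are arbitrary):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)

L = x1**2 + x2**2 - lam*(2*x1 + x2 - 2)   # Lagrange function (3)

# Stationarity in x plus the constraint recovers lambda0 = 4/5.
sol = sp.solve([sp.diff(L, x1), sp.diff(L, x2), 2*x1 + x2 - 2],
               [x1, x2, lam], dict=True)[0]
print(sol)                        # {x1: 4/5, x2: 2/5, lambda: 4/5}
print((x1**2 + x2**2).subs(sol))  # min f = 4/5
```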

    When solving the problem from the example, we considered $L(x;\lambda)$ as a function of two variables $x_1$ and $x_2$ and, in addition, assumed that the value of the parameter $\lambda$ was chosen so that the constraint was satisfied. If the solution of the system

    $$\frac{\partial L}{\partial x_j}=0, \qquad j=1,2,\ldots,n,$$

    cannot be obtained in the form of explicit functions of $\lambda$, then the values of $x$ and $\lambda$ are found by solving the following system of $n+1$ equations with $n+1$ unknowns:

    $$\frac{\partial L}{\partial x_j}=0, \quad j=1,2,\ldots,n; \qquad h_1(x)=0.$$

    Numerical search methods (for example, Newton's method) can be used to find all possible solutions of this system. For each of the solutions, one should calculate the elements of the Hessian matrix of the function $L$, considered as a function of $x$, and find out whether this matrix is positive definite (a local minimum) or negative definite (a local maximum).

    The method of Lagrange multipliers can be extended to the case when the problem has several constraints in the form of equalities. Consider a general problem that requires

    Minimize $f(x)$

    subject to the constraints $h_k(x)=0$, $k=1, 2,\ldots, K$.

    The Lagrange function takes the following form:

    $$L(x,\lambda)=f(x)-\sum_{k=1}^{K}\lambda_k h_k(x).$$

    Here $\lambda_1, \lambda_2,\ldots,\lambda_K$ are the Lagrange multipliers, i.e. unknown parameters whose values need to be determined. Equating the partial derivatives of $L$ with respect to $x$ to zero, we obtain the following system of $n$ equations with $n$ unknowns:

    $$\frac{\partial L}{\partial x_j}=0, \qquad j=1,2,\ldots,n.$$

    If it turns out to be difficult to find a solution of this system in the form of functions of the vector $\lambda$, then the system can be extended by including the constraints in the form of equalities:

    $$h_k(x)=0, \qquad k=1,2,\ldots,K.$$

    The solution of the extended system of $n+K$ equations with $n+K$ unknowns determines a stationary point of the function $L$. Then a check for a minimum or maximum is carried out on the basis of computing the elements of the Hessian matrix of $L$, considered as a function of $x$, just as was done in the case of the problem with one constraint. For some problems the extended system of $n+K$ equations with $n+K$ unknowns may have no solutions, and the method of Lagrange multipliers turns out to be inapplicable. However, it should be noted that such problems are quite rare in practice.

    Let us consider a special case of the general nonlinear programming problem, assuming that the system of constraints contains only equations, that there are no non-negativity conditions on the variables, and that the functions involved are continuous together with their partial derivatives. Then, having solved the system of equations (7), we obtain all the points at which function (6) can have extreme values.

    Algorithm of the method of Lagrange multipliers

    1. We compose the Lagrange function.

    2. We find the partial derivatives of the Lagrange function with respect to the variables $x_j$, $\lambda_i$ and equate them to zero.

    3. We solve the system of equations (7), find the points at which the objective function of the problem can have an extremum.

    4. Among the points suspected of being extremum points, we find those at which the extremum is actually attained, and calculate the values of function (6) at these points.

    Example.

    Initial data: according to the production plan, the enterprise needs to produce 180 items. These items can be manufactured in two technological ways. In producing $x_1$ items by method 1, the costs are $4x_1+x_1^2$ rubles, and in producing $x_2$ items by method 2 they are $8x_2+x_2^2$ rubles. Determine how many items should be made by each method so that the total production cost is minimal.

    The objective function for the problem has the form

    $$f(x_1,x_2)=4x_1+x_1^2+8x_2+x_2^2 \to \min$$

    under the conditions $x_1+x_2=180$, $x_1\ge 0$, $x_2\ge 0$.
    1. Compose the Lagrange function

    $$L(x_1,x_2,\lambda)=4x_1+x_1^2+8x_2+x_2^2+\lambda(x_1+x_2-180).$$
    2. We calculate the partial derivatives with respect to $x_1$, $x_2$, $\lambda$ and equate them to zero:

    $$\left\{\begin{aligned} & \frac{\partial L}{\partial x_1}=4+2x_1+\lambda=0;\\ & \frac{\partial L}{\partial x_2}=8+2x_2+\lambda=0;\\ & \frac{\partial L}{\partial \lambda}=x_1+x_2-180=0. \end{aligned}\right.$$

    3. Solving the resulting system of equations, we find $x_1=91$, $x_2=89$.

    4. Substituting $x_2=180-x_1$ into the objective function, we obtain a function of one variable, namely $f_1(x_1)=4x_1+x_1^2+8(180-x_1)+(180-x_1)^2$.

    We calculate $f_1^{'}(x_1)=4+2x_1-8-2(180-x_1)=4x_1-364$; setting $4x_1-364=0$,

    whence we have $x_1^{*}=91$, $x_2^{*}=89$.

    Answer: the number of items manufactured by the first method is $x_1=91$, by the second method $x_2=89$; the value of the objective function is 17278 rubles.
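
A quick sympy check of this example (my addition):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)

cost = 4*x1 + x1**2 + 8*x2 + x2**2
L = cost + lam*(x1 + x2 - 180)

sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)[0]
print(sol)             # {x1: 91, x2: 89, lambda: -186}
print(cost.subs(sol))  # 17278 rubles
```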

    The essence of the Lagrange method is to reduce the conditional extremum problem to the solution of an unconditional extremum problem. Consider the nonlinear programming model:

    $$Z=f(x_1,x_2,\ldots,x_n) \to \max\,(\min), \qquad (5.1)$$

    $$\varphi_j(x_1,x_2,\ldots,x_n)=b_j, \quad j=1,2,\ldots,m, \qquad (5.2)$$

    where $f$ and $\varphi_j$ are known functions, and $b_j$ are given coefficients.

    Note that in this formulation of the problem, the constraints are given by equalities, and there is no condition for the variables to be non-negative. In addition, we assume that the functions $f$ and $\varphi_j$ are continuous together with their first partial derivatives.

    Let us transform conditions (5.2) in such a way that the left or right parts of the equalities contain zero:

    $$b_j-\varphi_j(x_1,x_2,\ldots,x_n)=0, \quad j=1,2,\ldots,m. \qquad (5.3)$$

    Let us compose the Lagrange function. It includes the objective function (5.1) and the right-hand sides of the constraints (5.3), taken with the coefficients $\lambda_1,\lambda_2,\ldots,\lambda_m$ respectively:

    $$L(x_1,\ldots,x_n,\lambda_1,\ldots,\lambda_m)=f(x_1,\ldots,x_n)+\sum_{j=1}^{m}\lambda_j\left(b_j-\varphi_j(x_1,\ldots,x_n)\right). \qquad (5.4)$$

    There will be as many Lagrange multipliers as there are constraints in the problem.

    The extremum points of the function (5.4) are the extremum points of the original problem and vice versa: the optimal plan of the problem (5.1)-(5.2) is the global extremum point of the Lagrange function.

    Indeed, let a solution $X^*$ of problem (5.1)-(5.2) be found; then conditions (5.3) are satisfied. Substituting the plan $X^*$ into function (5.4) and noting that each bracket $b_j-\varphi_j(X^*)$ vanishes, we verify the validity of the equality (5.5), $L(X^*)=f(X^*)$.

    Thus, in order to find the optimal plan of the original problem, it is necessary to investigate the Lagrange function for an extremum. The function has extreme values at the points where its partial derivatives are equal to zero. Such points are called stationary.

    We define the partial derivatives of the function (5.4):

    $$\frac{\partial L}{\partial x_i}=\frac{\partial f}{\partial x_i}-\sum_{j=1}^{m}\lambda_j\frac{\partial \varphi_j}{\partial x_i}, \qquad \frac{\partial L}{\partial \lambda_j}=b_j-\varphi_j(x_1,\ldots,x_n).$$

    After equating the derivatives to zero, we obtain a system of $m+n$ equations with $m+n$ unknowns:

    $$\frac{\partial f}{\partial x_i}-\sum_{j=1}^{m}\lambda_j\frac{\partial \varphi_j}{\partial x_i}=0, \quad i=1,2,\ldots,n, \qquad (5.6)$$

    $$b_j-\varphi_j(x_1,\ldots,x_n)=0, \quad j=1,2,\ldots,m. \qquad (5.7)$$

    In the general case, the system (5.6)-(5.7) will have several solutions, which include all the maxima and minima of the Lagrange function. In order to single out the global maximum or minimum, the values of the objective function are calculated at all found points. The largest of these values will be the global maximum, and the smallest the global minimum. In some cases it is possible to use the sufficient conditions for a strict extremum of continuous functions (see Problem 5.2 below):

    let the function $f(x)$ be continuous and twice differentiable in some neighborhood of its stationary point $x^*$ (i.e. $f^{'}(x^*)=0$). Then:

    a) if

    $$f^{'}(x^*)=0, \quad f^{''}(x^*)<0, \qquad (5.8)$$

    then $x^*$ is a strict maximum point of the function $f(x)$;

    b) if

    $$f^{'}(x^*)=0, \quad f^{''}(x^*)>0, \qquad (5.9)$$

    then $x^*$ is a strict minimum point of the function $f(x)$;

    c) if $f^{'}(x^*)=0$ and $f^{''}(x^*)=0$, then the question of the presence of an extremum remains open.

    Moreover, some solutions of the system (5.6)-(5.7) may be negative, which is inconsistent with the economic meaning of the variables. In this case, the possibility of replacing negative values with zero should be analyzed.

    Economic meaning of the Lagrange multipliers: the optimal value of the multiplier $\lambda_j$ shows by how much the value of the criterion $Z$ will change when resource $j$ is increased or decreased by one unit, since

    $$\lambda_j=\frac{\partial Z}{\partial b_j}.$$
    The Lagrange method can also be applied when the constraints are inequalities. In this case, finding the extremum of the function $f(x_1,\ldots,x_n)$ under the conditions

    $$\varphi_j(x_1,\ldots,x_n)\le b_j, \quad j=1,2,\ldots,m,$$

    is performed in several stages:

    1. Determine the stationary points of the objective function, for which one solves the system of equations

    $$\frac{\partial f}{\partial x_i}=0, \quad i=1,2,\ldots,n.$$

    2. From the stationary points, those are selected whose coordinates satisfy the constraints of the problem.

    3. The Lagrange method is used to solve the problem with equality constraints (5.1)-(5.2).

    4. The points found at the second and third stages are examined for a global maximum: the values of the objective function at these points are compared; the largest value corresponds to the optimal plan.

    Task 5.1 Let us solve Problem 1.3, considered in the first section, by the Lagrange method. The optimal distribution of water resources is described by a mathematical model


    Compose the Lagrange function

    Find the unconditional maximum of this function. To do this, we calculate the partial derivatives and equate them to zero


    Thus, we have obtained a system of linear equations of the form

    The solution of the system of equations is the optimal plan for the distribution of water resources over irrigated areas


    The quantities are measured in hundreds of thousands of cubic meters. The Lagrange multiplier here is the amount of net income per one hundred thousand cubic meters of irrigation water, from which the marginal price of 1 m$^3$ of irrigation water follows in den. units.

    The maximum additional net income from irrigation will be

    $$160\cdot 12.26^2+7600\cdot 12.26-130\cdot 8.55^2+5900\cdot 8.55-10\cdot 16.19^2+4000\cdot 16.19= 172391.02 \text{ (den. units)}$$

    Task 5.2 Solve a non-linear programming problem

    We represent the constraint as:


    Compose the Lagrange function and determine its partial derivatives


    To determine the stationary points of the Lagrange function, one should equate its partial derivatives to zero. As a result, we obtain a system of equations


    From the first equation it follows that

    . (5.10)

    We substitute this expression into the second equation,


    from which two solutions follow:

    (5.11)

    Substituting these solutions into the third equation, we obtain


    We calculate the values of the Lagrange multiplier and of the unknown by expressions (5.10)-(5.11):


    Thus, we obtained two extremum points:


    To find out whether these points are maximum or minimum points, we use the sufficient conditions for a strict extremum (5.8)-(5.9). First, the expression obtained from the constraint of the mathematical model is substituted into the objective function:

    (5.12)

    To check the conditions for a strict extremum, we should determine the sign of the second derivative of the function (5.12) at the extremum points we have found.


    Thus, the first of the found points is the minimum point of the original problem, and the second is the maximum point.

    Optimal Plan:
