Finding the conditional extremum of a function using the Lagrange method. Local extrema


Necessary and sufficient conditions for an extremum of a function of two variables. A point $M_0(x_0, y_0)$ is called a minimum (maximum) point of a function $z = f(x, y)$ if the function is defined in some neighborhood of the point and satisfies there the inequality $f(x, y) \ge f(x_0, y_0)$ (respectively $f(x, y) \le f(x_0, y_0)$). The maximum and minimum points are called extremum points of the function.

A necessary condition for an extremum. If a function has first partial derivatives at an extremum point, then they vanish at that point. It follows that to find the extremum points of such a function one must solve the system of equations $f_x(x, y) = 0$, $f_y(x, y) = 0$. Points whose coordinates satisfy this system are called critical points of the function. Among them there may be maximum points, minimum points, and also points that are not extremum points.

Sufficient extremum conditions are used to identify extremum points from a set of critical points and are listed below.

Let the function have continuous second partial derivatives at a critical point $(x_0, y_0)$, and denote $A = f_{xx}(x_0, y_0)$, $B = f_{xy}(x_0, y_0)$, $C = f_{yy}(x_0, y_0)$. If at this point the condition $AC - B^2 > 0$ holds, then it is a minimum point when $A > 0$ and a maximum point when $A < 0$. If at the critical point $AC - B^2 < 0$, then it is not an extremum point. In the case $AC - B^2 = 0$ a more subtle study of the nature of the critical point is required, which in this case may or may not be an extremum point.

Extrema of functions of three variables. In the case of a function of three variables, the definitions of extremum points repeat verbatim the corresponding definitions for a function of two variables. We limit ourselves to the procedure for studying a function for an extremum: solving the system of equations $f_x = 0$, $f_y = 0$, $f_z = 0$, one finds the critical points of the function, and then at each critical point one calculates the leading principal minors of the matrix of second partial derivatives,

$$\Delta_1 = f_{xx}, \qquad \Delta_2 = \left|\begin{array}{cc} f_{xx} & f_{xy}\\ f_{xy} & f_{yy} \end{array}\right|, \qquad \Delta_3 = \left|\begin{array}{ccc} f_{xx} & f_{xy} & f_{xz}\\ f_{xy} & f_{yy} & f_{yz}\\ f_{xz} & f_{yz} & f_{zz} \end{array}\right|.$$

If all three quantities are positive, then the critical point in question is a minimum point; if their signs alternate starting with minus ($\Delta_1 < 0$, $\Delta_2 > 0$, $\Delta_3 < 0$), then this critical point is a maximum point.
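
As an illustration of this test, here is a minimal sympy sketch that finds the critical point of the sample function $u = x^2 + y^2 + z^2 - xy + x$ (an arbitrary illustrative choice, not taken from the text) and checks the signs of the leading principal minors of its Hessian.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
# Illustrative function only:
u = x**2 + y**2 + z**2 - x*y + x

grad = [sp.diff(u, v) for v in (x, y, z)]
critical = sp.solve(grad, (x, y, z), dict=True)   # [{x: -2/3, y: -1/3, z: 0}]

H = sp.hessian(u, (x, y, z))                      # matrix of second partial derivatives
for p in critical:
    minors = [H.subs(p)[:k, :k].det() for k in (1, 2, 3)]
    print(p, minors)                              # [2, 3, 6] -> all positive
    if all(m > 0 for m in minors):
        print("all three minors positive -> minimum point")
    elif minors[0] < 0 and minors[1] > 0 and minors[2] < 0:
        print("signs alternate starting with minus -> maximum point")
    else:
        print("this test gives no conclusion")
```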

Conditional extremum of a function of two variables. A point $(x_0, y_0)$ is called a conditional minimum (maximum) point of a function $z = f(x, y)$ if there is a neighborhood of this point in which the function is defined and in which $f(x, y) \ge f(x_0, y_0)$ (respectively $f(x, y) \le f(x_0, y_0)$) for all points $(x, y)$ whose coordinates satisfy the equation $\varphi(x, y) = 0$.

To find conditional extremum points, one uses the Lagrange function

$$L(x, y) = f(x, y) + \lambda\,\varphi(x, y),$$

where the number $\lambda$ is called the Lagrange multiplier. Solving the system of three equations

$$\frac{\partial L}{\partial x} = 0, \qquad \frac{\partial L}{\partial y} = 0, \qquad \varphi(x, y) = 0,$$

one finds the critical points of the Lagrange function (as well as the value of the auxiliary multiplier $\lambda$). At these critical points there may be a conditional extremum. The above system provides only necessary conditions for an extremum, but not sufficient ones: it can be satisfied by the coordinates of points that are not conditional extremum points. However, based on the nature of the problem, it is often possible to establish the character of a critical point.

Conditional extremum of a function of several variables. Let us consider a function of $n$ variables $u = f(x_1, \ldots, x_n)$ under the condition that its arguments are related by $m$ equations ($m < n$): $\varphi_1(x_1, \ldots, x_n) = 0, \;\ldots,\; \varphi_m(x_1, \ldots, x_n) = 0$.

Extrema of functions of several variables. A necessary condition for an extremum. Sufficient condition for an extremum. Conditional extremum. Lagrange multiplier method. Finding the largest and smallest values.

Lecture 5.

Definition 5.1. A point $M_0(x_0, y_0)$ is called a maximum point of the function $z = f(x, y)$ if $f(x_0, y_0) > f(x, y)$ for all points $(x, y)$ from some neighborhood of the point $M_0$.

Definition 5.2. A point $M_0(x_0, y_0)$ is called a minimum point of the function $z = f(x, y)$ if $f(x_0, y_0) < f(x, y)$ for all points $(x, y)$ from some neighborhood of the point $M_0$.

Note 1. Maximum and minimum points are called extremum points of a function of several variables.

Remark 2. The extremum point for a function of any number of variables is determined in a similar way.

Theorem 5.1 (necessary conditions for an extremum). If $M_0(x_0, y_0)$ is an extremum point of the function $z = f(x, y)$, then at this point the first-order partial derivatives of this function are equal to zero or do not exist.

Proof.

Let us fix the value of the variable $y$, setting $y = y_0$. Then the function $f(x, y_0)$ is a function of the one variable $x$, for which $x = x_0$ is an extremum point. Therefore, by Fermat's theorem, $\frac{\partial f}{\partial x}(x_0, y_0) = 0$ or does not exist. The same statement is proved similarly for $\frac{\partial f}{\partial y}$.

Definition 5.3. Points belonging to the domain of a function of several variables at which the partial derivatives of the function are equal to zero or do not exist are called stationary points of this function.

Comment. Thus, the extremum can only be reached at stationary points, but it is not necessarily observed at each of them.

Theorem 5.2 (sufficient conditions for an extremum). Let the function $z = f(x, y)$ have continuous partial derivatives up to the third order inclusive in some neighborhood of the point $M_0(x_0, y_0)$, which is a stationary point of $z = f(x, y)$. Denote $A = f_{xx}(M_0)$, $B = f_{xy}(M_0)$, $C = f_{yy}(M_0)$. Then:

1) $f(x,y)$ has a maximum at the point $M_0$ if $AC - B^2 > 0$, $A < 0$;

2) $f(x,y)$ has a minimum at the point $M_0$ if $AC - B^2 > 0$, $A > 0$;

3) there is no extremum at the critical point if $AC - B^2 < 0$;

4) if $AC - B^2 = 0$, further research is needed.

Proof.

Let us write the second-order Taylor formula for the function $f(x, y)$, remembering that at a stationary point the first-order partial derivatives are equal to zero:

$$\Delta f = f(x_0 + \Delta x,\, y_0 + \Delta y) - f(x_0, y_0) = \frac{1}{2}\left(A\,\Delta x^2 + 2B\,\Delta x\,\Delta y + C\,\Delta y^2\right) + o(\rho^2), \qquad \rho = \sqrt{\Delta x^2 + \Delta y^2},$$

where $A$, $B$, $C$ are the values of the second derivatives introduced above. If the angle between the segment $M_0M$, where $M(x_0 + \Delta x,\, y_0 + \Delta y)$, and the axis $Ox$ is denoted by $\varphi$, then $\Delta x = \rho\cos\varphi$, $\Delta y = \rho\sin\varphi$, and Taylor's formula takes the form

$$\Delta f = \frac{\rho^2}{2}\left(A\cos^2\varphi + 2B\cos\varphi\sin\varphi + C\sin^2\varphi\right) + o(\rho^2). \qquad (5.1)$$

Let $A \ne 0$. Then we can divide and multiply the expression in brackets by $A$. We get

$$\Delta f = \frac{\rho^2}{2A}\left[\left(A\cos\varphi + B\sin\varphi\right)^2 + (AC - B^2)\sin^2\varphi\right] + o(\rho^2).$$

Let us now consider four possible cases:

1) $AC - B^2 > 0$, $A < 0$. Then the expression in square brackets is positive and $\Delta f < 0$ for sufficiently small $\rho$. Therefore, in some neighborhood of $M_0$ we have $f(x_0 + \Delta x,\, y_0 + \Delta y) < f(x_0, y_0)$, that is, $M_0$ is a maximum point.

2) Let $AC - B^2 > 0$, $A > 0$. Then $\Delta f > 0$ for sufficiently small $\rho$, and $M_0$ is a minimum point.

3) Let $AC - B^2 < 0$, $A > 0$. Consider the increment of the arguments along the ray $\varphi = 0$. Then from (5.1) it follows that $\Delta f = \frac{\rho^2}{2}A + o(\rho^2) > 0$, that is, when moving along this ray the function increases. If instead we move along a ray such that $\tan\varphi_0 = -A/B$, then $A\cos\varphi_0 + B\sin\varphi_0 = 0$ and $\Delta f = \frac{\rho^2}{2A}(AC - B^2)\sin^2\varphi_0 + o(\rho^2) < 0$, so when moving along this ray the function decreases. Hence the point $M_0$ is not an extremum point.

3′) When $AC - B^2 < 0$, $A < 0$, the proof of the absence of an extremum is carried out similarly to the previous case.

3″) If $AC - B^2 < 0$, $A = 0$, then $B \ne 0$ (otherwise $AC - B^2$ would equal zero), and (5.1) takes the form $\Delta f = \frac{\rho^2}{2}\sin\varphi\left(2B\cos\varphi + C\sin\varphi\right) + o(\rho^2)$. For sufficiently small $\varphi$ the expression $2B\cos\varphi + C\sin\varphi$ is close to $2B$, that is, it keeps a constant sign, while $\sin\varphi$ changes sign near $\varphi = 0$. This means that the increment of the function changes sign in any neighborhood of the stationary point, which therefore is not an extremum point.

4) If $AC - B^2 = 0$ and $A \ne 0$, then $\Delta f = \frac{\rho^2}{2A}\left(A\cos\varphi + B\sin\varphi\right)^2 + o(\rho^2)$; along the ray $\tan\varphi_0 = -A/B$ the squared term vanishes, and the sign of the increment is then determined by the higher-order terms. In this case further research is necessary to settle the question of the existence of an extremum.

Example. Let us find the extremum points of the function $z = x^2 - 2xy + 2y^2 + 2x$. To find the stationary points, we solve the system $z_x = 2x - 2y + 2 = 0$, $z_y = -2x + 4y = 0$. So the stationary point is $(-2, -1)$. Here $A = 2$, $B = -2$, $C = 4$. Then $AC - B^2 = 4 > 0$; therefore, an extremum is attained at the stationary point, namely a minimum (since $A > 0$), with $z_{\min} = z(-2, -1) = -2$.
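
The same example can be checked mechanically; a minimal sympy sketch of the $AC - B^2$ test:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x**2 - 2*x*y + 2*y**2 + 2*x

stationary = sp.solve([sp.diff(z, x), sp.diff(z, y)], (x, y), dict=True)
for p in stationary:                               # [{x: -2, y: -1}]
    A = sp.diff(z, x, 2).subs(p)                   # A = 2
    B = sp.diff(z, x, y).subs(p)                   # B = -2
    C = sp.diff(z, y, 2).subs(p)                   # C = 4
    D = A*C - B**2                                 # D = 4 > 0, A > 0 -> minimum
    if D > 0:
        kind = "minimum" if A > 0 else "maximum"
    elif D < 0:
        kind = "no extremum"
    else:
        kind = "further study needed"
    print(p, "D =", D, "->", kind, ", z =", z.subs(p))   # z = -2 at (-2, -1)
```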

Definition 5.4. If the arguments of the function $f(x_1, x_2, \ldots, x_n)$ are bound by additional conditions in the form of $m$ equations ($m < n$):

$$\varphi_1(x_1, x_2, \ldots, x_n) = 0, \quad \varphi_2(x_1, x_2, \ldots, x_n) = 0, \quad \ldots, \quad \varphi_m(x_1, x_2, \ldots, x_n) = 0, \qquad (5.2)$$

where the functions $\varphi_i$ have continuous partial derivatives, then equations (5.2) are called connection equations.

Definition 5.5. An extremum of the function $f(x_1, x_2, \ldots, x_n)$ attained when conditions (5.2) are satisfied is called a conditional extremum.

Comment. The following geometric interpretation of the conditional extremum of a function of two variables can be offered: let the arguments of the function $f(x, y)$ be related by the equation $\varphi(x, y) = 0$, which defines some curve in the $Oxy$ plane. Erecting perpendiculars to the $Oxy$ plane from each point of this curve until they intersect the surface $z = f(x, y)$, we obtain a spatial curve lying on the surface above the curve $\varphi(x, y) = 0$. The task is to find the extremum points of the resulting curve, which, of course, in general do not coincide with the unconditional extremum points of the function $f(x, y)$.

Let us determine the necessary conditions for a conditional extremum for a function of two variables by first introducing the following definition:

Definition 5.6. The function

$$L(x_1, x_2, \ldots, x_n) = f(x_1, x_2, \ldots, x_n) + \lambda_1\varphi_1(x_1, x_2, \ldots, x_n) + \lambda_2\varphi_2(x_1, x_2, \ldots, x_n) + \ldots + \lambda_m\varphi_m(x_1, x_2, \ldots, x_n), \qquad (5.3)$$

where the $\lambda_i$ are some constants, is called the Lagrange function, and the numbers $\lambda_i$ are called undetermined Lagrange multipliers.

Theorem 5.3 (necessary conditions for a conditional extremum). A conditional extremum of the function $z = f(x, y)$ in the presence of the connection equation $\varphi(x, y) = 0$ can be attained only at stationary points of the Lagrange function $L(x, y) = f(x, y) + \lambda\varphi(x, y)$.

Proof. The connection equation defines an implicit dependence of $y$ on $x$, so we assume that $y$ is a function of $x$: $y = y(x)$. Then $z$ is a composite function of $x$, and its critical points are determined by the condition

$$\frac{dz}{dx} = f_x + f_y\,y' = 0. \qquad (5.4)$$

From the connection equation it follows that

$$\varphi_x + \varphi_y\,y' = 0. \qquad (5.5)$$

Let us multiply equality (5.5) by some number $\lambda$ and add it to (5.4). We get

$$(f_x + \lambda\varphi_x) + (f_y + \lambda\varphi_y)\,y' = 0.$$

Choosing $\lambda$ so that $f_y + \lambda\varphi_y = 0$, we see that at a stationary point the last equality forces $f_x + \lambda\varphi_x = 0$ as well; together with the connection equation this gives

$$\begin{cases} f_x + \lambda\varphi_x = 0,\\ f_y + \lambda\varphi_y = 0,\\ \varphi(x, y) = 0. \end{cases} \qquad (5.6)$$

A system of three equations in the three unknowns $x$, $y$ and $\lambda$ is obtained, the first two equations being the conditions for a stationary point of the Lagrange function. Eliminating the auxiliary unknown $\lambda$ from system (5.6), we find the coordinates of the points at which the original function can have a conditional extremum.

Remark 1. The presence of a conditional extremum at the found point can be checked by studying the second-order partial derivatives of the Lagrange function by analogy with Theorem 5.2.

Remark 2. Points at which a conditional extremum of the function $f(x_1, x_2, \ldots, x_n)$ can be attained under conditions (5.2) can be found as solutions of the system

$$\begin{cases} \dfrac{\partial L}{\partial x_i} = 0, & i = 1, 2, \ldots, n,\\ \varphi_j(x_1, \ldots, x_n) = 0, & j = 1, 2, \ldots, m. \end{cases} \qquad (5.7)$$

Example. Let us find the conditional extremum of the function $z = xy$ given that $x + y = 1$. We compose the Lagrange function $L(x, y) = xy + \lambda(x + y - 1)$. System (5.6) looks like this:

$$\begin{cases} y + \lambda = 0,\\ x + \lambda = 0,\\ x + y = 1, \end{cases}$$

whence $-2\lambda = 1$, $\lambda = -0.5$, $x = y = -\lambda = 0.5$. On the connection $x + y = 1$ the function can be written as $z = xy = \tfrac{1}{4} - \tfrac{1}{4}(x - y)^2 \le \tfrac{1}{4}$; therefore at the found stationary point the function $z = xy$ has a conditional maximum, $z_{\max} = 0.25$.
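
A minimal sympy sketch of this example (the same Lagrange function $L = xy + \lambda(x + y - 1)$, plus the reduction to one variable on the constraint):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
L = x*y + lam*(x + y - 1)                    # the Lagrange function of the example

system = [sp.diff(L, x), sp.diff(L, y), x + y - 1]
print(sp.solve(system, (x, y, lam), dict=True))   # [{x: 1/2, y: 1/2, lambda: -1/2}]

# On the constraint y = 1 - x the objective becomes a one-variable function:
g = (x*y).subs(y, 1 - x)                     # x - x**2, maximal value 1/4 at x = 1/2
print(sp.expand(g), g.subs(x, sp.Rational(1, 2)))
```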

First, let's consider the case of a function of two variables. The conditional extremum of a function $z=f(x,y)$ at the point $M_0(x_0;y_0)$ is the extremum of this function, achieved under the condition that the variables $x$ and $y$ in the vicinity of this point satisfy the connection equation $\varphi(x,y)=0$.

The name "conditional" extremum is due to the fact that the variables are subject to the additional condition $\varphi(x,y)=0$. If one variable can be expressed from the connection equation through the other, then the problem of determining the conditional extremum is reduced to the problem of determining the usual extremum of a function of one variable. For example, if the connection equation implies $y=\psi(x)$, then substituting $y=\psi(x)$ into $z=f(x,y)$, we obtain a function of one variable $z=f\left(x,\psi(x)\right)$. In the general case, however, this method is of little use, so a different algorithm is required.

Lagrange multiplier method for functions of two variables.

The Lagrange multiplier method consists in constructing the Lagrange function used to find a conditional extremum: $F(x,y)=f(x,y)+\lambda\varphi(x,y)$ (the parameter $\lambda$ is called the Lagrange multiplier). The necessary conditions for the extremum are specified by a system of equations from which the stationary points are determined:

$$ \left\{ \begin{aligned} & \frac{\partial F}{\partial x}=0;\\ & \frac{\partial F}{\partial y}=0;\\ & \varphi(x,y)=0. \end{aligned} \right. $$

A sufficient condition from which one can determine the nature of the extremum is the sign of $d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2$. If at a stationary point $d^2F > 0$, then the function $z=f(x,y)$ has a conditional minimum at this point, and if $d^2F < 0$, a conditional maximum.

There is another way to determine the nature of the extremum. From the connection equation we obtain $\varphi_{x}^{'}dx+\varphi_{y}^{'}dy=0$, i.e. $dy=-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx$; therefore at any stationary point we have:

$$d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=F_{xx}^{''}dx^2+2F_{xy}^{''}dx\left(-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx\right)+F_{yy}^{''}\left(-\frac{\varphi_{x}^{'}}{\varphi_{y}^{'}}dx\right)^2=\\ =-\frac{dx^2}{\left(\varphi_{y}^{'}\right)^2}\cdot\left(-\left(\varphi_{y}^{'}\right)^2 F_{xx}^{''}+2\varphi_{x}^{'}\varphi_{y}^{'}F_{xy}^{''}-\left(\varphi_{x}^{'}\right)^2 F_{yy}^{''}\right)$$

The second factor (in brackets) can be represented as the determinant

$$ -\left(\varphi_{y}^{'}\right)^2 F_{xx}^{''}+2\varphi_{x}^{'}\varphi_{y}^{'}F_{xy}^{''}-\left(\varphi_{x}^{'}\right)^2 F_{yy}^{''} = \left|\begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array}\right| = H, $$

whose lower-right block $\left|\begin{array}{cc} F_{xx}^{''} & F_{xy}^{''} \\ F_{xy}^{''} & F_{yy}^{''} \end{array}\right|$ is the Hessian of the Lagrange function. If $H > 0$, then $d^2F < 0$, which indicates a conditional maximum. Similarly, if $H < 0$, then $d^2F > 0$, i.e. we have a conditional minimum of the function $z=f(x,y)$.

A note regarding the notation of the determinant $H$: in some sources $H$ is written with the opposite sign,

$$ H=-\left|\begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array} \right|. $$

In that situation the rule formulated above changes as follows: if $H > 0$, then the function has a conditional minimum, and if $H < 0$, a conditional maximum of the function $z=f(x,y)$. These nuances should be kept in mind when solving problems.

Algorithm for studying a function of two variables for a conditional extremum

  1. Compose the Lagrange function $F(x,y)=f(x,y)+\lambda\varphi(x,y)$.
  2. Solve the system $\left\{ \begin{aligned} & \frac{\partial F}{\partial x}=0;\\ & \frac{\partial F}{\partial y}=0;\\ & \varphi(x,y)=0. \end{aligned} \right.$
  3. Determine the nature of the extremum at each of the stationary points found in the previous step. To do this, use either of the following methods:
    • Compose the determinant $H$ and find out its sign (see the sketch after this list);
    • Taking into account the connection equation, calculate the sign of $d^2F$.
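
As an illustration of step 3, here is a minimal sympy sketch that composes the determinant $H$ for a given pair $f$, $\varphi$; the sample problem $f = x^2 + y^2$ with $\varphi = x + y - 2$ is an arbitrary illustrative choice.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

def conditional_extrema(f, phi):
    """Two-variable Lagrange method: stationary points and the sign of H."""
    F = f + lam*phi
    stationary = sp.solve([sp.diff(F, x), sp.diff(F, y), phi], (x, y, lam), dict=True)
    H = sp.Matrix([
        [0,               sp.diff(phi, x),  sp.diff(phi, y)],
        [sp.diff(phi, x), sp.diff(F, x, 2), sp.diff(F, x, y)],
        [sp.diff(phi, y), sp.diff(F, x, y), sp.diff(F, y, 2)],
    ]).det()
    for p in stationary:
        h = H.subs(p)
        kind = ("conditional maximum" if h > 0
                else "conditional minimum" if h < 0
                else "inconclusive")
        print(p, "H =", h, "->", kind, ", f =", f.subs(p))

# Illustrative problem only: f = x^2 + y^2 on the line x + y = 2
conditional_extrema(x**2 + y**2, x + y - 2)   # H = -4 -> conditional minimum, f = 2
```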

Lagrange multiplier method for functions of n variables

Let's say we have a function of $n$ variables $z=f(x_1,x_2,\ldots,x_n)$ and $m$ coupling equations ($n > m$):

$$\varphi_1(x_1,x_2,\ldots,x_n)=0; \; \varphi_2(x_1,x_2,\ldots,x_n)=0,\ldots,\varphi_m(x_1,x_2,\ldots,x_n)=0.$$

Denoting the Lagrange multipliers as $\lambda_1,\lambda_2,\ldots,\lambda_m$, we compose the Lagrange function:

$$F(x_1,x_2,\ldots,x_n,\lambda_1,\lambda_2,\ldots,\lambda_m)=f+\lambda_1\varphi_1+\lambda_2\varphi_2+\ldots+\lambda_m\varphi_m$$

The necessary conditions for the presence of a conditional extremum are given by a system of equations from which the coordinates of the stationary points and the values of the Lagrange multipliers are found:

$$\left\{\begin{aligned} & \frac{\partial F}{\partial x_i}=0\; (i=\overline{1,n});\\ & \varphi_j=0\; (j=\overline{1,m}). \end{aligned} \right.$$

You can find out whether the function has a conditional minimum or a conditional maximum at the found point, as before, using the sign of $d^2F$. If at the found point $d^2F > 0$, then the function has a conditional minimum, and if $d^2F < 0$, a conditional maximum. Alternatively, one can consider the bordered matrix

$$L=\left(\begin{array}{cccccc} 0 & \ldots & 0 & \frac{\partial\varphi_1}{\partial x_1} & \ldots & \frac{\partial\varphi_1}{\partial x_n}\\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\ 0 & \ldots & 0 & \frac{\partial\varphi_m}{\partial x_1} & \ldots & \frac{\partial\varphi_m}{\partial x_n}\\ \frac{\partial\varphi_1}{\partial x_1} & \ldots & \frac{\partial\varphi_m}{\partial x_1} & \frac{\partial^2F}{\partial x_1^2} & \ldots & \frac{\partial^2F}{\partial x_1\partial x_n}\\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\ \frac{\partial\varphi_1}{\partial x_n} & \ldots & \frac{\partial\varphi_m}{\partial x_n} & \frac{\partial^2F}{\partial x_n\partial x_1} & \ldots & \frac{\partial^2F}{\partial x_n^2} \end{array}\right),$$

whose upper-left $m\times m$ block is zero and whose border consists of the first-order partial derivatives of the constraint functions $\varphi_j$. The lower-right $n\times n$ block $\left(\frac{\partial^2F}{\partial x_i\,\partial x_j}\right)$ is the Hessian of the Lagrange function. We use the following rule:

  • If the signs of the leading principal minors $H_{2m+1},\; H_{2m+2},\ldots,H_{m+n}$ of the matrix $L$ coincide with the sign of $(-1)^m$, then the stationary point under study is a conditional minimum point of the function $z=f(x_1,x_2,x_3,\ldots,x_n)$.
  • If the signs of the leading principal minors $H_{2m+1},\; H_{2m+2},\ldots,H_{m+n}$ alternate, and the sign of the minor $H_{2m+1}$ coincides with the sign of the number $(-1)^{m+1}$, then the stationary point is a conditional maximum point of the function $z=f(x_1,x_2,x_3,\ldots,x_n)$ (a numerical check of this rule is sketched below).
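
The rule can be checked numerically. Below is a minimal sympy sketch that builds the bordered matrix $L$ and evaluates the minors $H_{2m+1},\ldots,H_{m+n}$; the problem $f = x_1x_2x_3$ with the single constraint $x_1+x_2+x_3-6=0$ (so $m=1$, $n=3$) and its stationary point $(2,2,2)$, $\lambda=-4$, are illustrative choices.

```python
import sympy as sp

# Illustrative problem only: f = x1*x2*x3 with one constraint x1 + x2 + x3 - 6 = 0
# (m = 1, n = 3); the stationary point x1 = x2 = x3 = 2, lambda = -4 is checked.
x1, x2, x3, lam = sp.symbols('x1 x2 x3 lambda', real=True)
xs = (x1, x2, x3)
f = x1*x2*x3
phi = x1 + x2 + x3 - 6
m, n = 1, 3

F = f + lam*phi
point = {x1: 2, x2: 2, x3: 2, lam: -4}

# Bordered matrix L: zero m-by-m block, bordered by the Jacobian of the constraint,
# with the Hessian of the Lagrange function in the lower-right n-by-n block.
J = sp.Matrix([[sp.diff(phi, v) for v in xs]])
L = sp.Matrix.vstack(sp.Matrix.hstack(sp.zeros(m, m), J),
                     sp.Matrix.hstack(J.T, sp.hessian(F, xs))).subs(point)

minors = [L[:k, :k].det() for k in range(2*m + 1, m + n + 1)]
print("H_{2m+1}..H_{m+n} =", minors)          # [4, -12]
if all(sp.sign(mi) == (-1)**m for mi in minors):
    print("conditional minimum")
elif all(sp.sign(mi) == (-1)**(m + k) for k, mi in enumerate(minors, start=1)):
    print("conditional maximum")              # signs alternate, first has sign (-1)**(m+1)
else:
    print("inconclusive")
```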

Example No. 1

Find the conditional extremum of the function $z(x,y)=x+3y$ under the condition $x^2+y^2=10$.

The geometric interpretation of this problem is as follows: it is required to find the largest and smallest values of the $z$-coordinate (applicate) of the plane $z=x+3y$ at the points of its intersection with the cylinder $x^2+y^2=10$.

It is somewhat difficult to express one variable through another from the coupling equation and substitute it into the function $z(x,y)=x+3y$, so we will use the Lagrange method.

Denoting $\varphi(x,y)=x^2+y^2-10$, we compose the Lagrange function:

$$ F(x,y)=z(x,y)+\lambda \varphi(x,y)=x+3y+\lambda(x^2+y^2-10);\\ \frac{\partial F}{\partial x}=1+2\lambda x; \quad \frac{\partial F}{\partial y}=3+2\lambda y. $$

Let us write a system of equations to determine the stationary points of the Lagrange function:

$$ \left\{ \begin{aligned} & 1+2\lambda x=0;\\ & 3+2\lambda y=0;\\ & x^2+y^2-10=0. \end{aligned}\right.$$

If we assume $\lambda=0$, then the first equation becomes $1=0$. The resulting contradiction indicates that $\lambda\neq 0$. Under the condition $\lambda\neq 0$, from the first and second equations we have $x=-\frac{1}{2\lambda}$, $y=-\frac{3}{2\lambda}$. Substituting the obtained values into the third equation, we get:

$$ \left(-\frac{1}{2\lambda} \right)^2+\left(-\frac{3}{2\lambda} \right)^2-10=0;\\ \frac{1}{4\lambda^2}+\frac{9}{4\lambda^2}=10; \quad \lambda^2=\frac{1}{4}; \quad \left[ \begin{aligned} & \lambda_1=-\frac{1}{2};\\ & \lambda_2=\frac{1}{2}. \end{aligned} \right.\\ \begin{aligned} & \lambda_1=-\frac{1}{2}; \; x_1=-\frac{1}{2\lambda_1}=1; \; y_1=-\frac{3}{2\lambda_1}=3;\\ & \lambda_2=\frac{1}{2}; \; x_2=-\frac{1}{2\lambda_2}=-1; \; y_2=-\frac{3}{2\lambda_2}=-3.\end{aligned} $$

So, the system has two solutions: $x_1=1;\; y_1=3;\; \lambda_1=-\frac{1}{2}$ and $x_2=-1;\; y_2=-3;\; \lambda_2=\frac{1}{2}$. Let us find out the nature of the extremum at each stationary point: $M_1(1;3)$ and $M_2(-1;-3)$. To do this, we calculate the determinant $H$ at each point.

$$ \varphi_{x}^{'}=2x;\; \varphi_{y}^{'}=2y;\; F_{xx}^{''}=2\lambda;\; F_{xy}^{''}=0;\; F_{yy}^{''}=2\lambda.\\ H=\left| \begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array} \right|= \left| \begin{array}{ccc} 0 & 2x & 2y\\ 2x & 2\lambda & 0 \\ 2y & 0 & 2\lambda \end{array} \right|= 8\cdot\left| \begin{array}{ccc} 0 & x & y\\ x & \lambda & 0 \\ y & 0 & \lambda \end{array} \right| $$

At the point $M_1(1;3)$ we get: $H=8\cdot\left| \begin{array}{ccc} 0 & 1 & 3\\ 1 & -1/2 & 0 \\ 3 & 0 & -1/2 \end{array} \right|=40 > 0$, so at the point $M_1(1;3)$ the function $z(x,y)=x+3y$ has a conditional maximum, $z_{\max}=z(1;3)=10$.

Similarly, at the point $M_2(-1;-3)$ we find: $H=8\cdot\left| \begin{array}{ccc} 0 & -1 & -3\\ -1 & 1/2 & 0 \\ -3 & 0 & 1/2 \end{array} \right|=-40$. Since $H < 0$, at the point $M_2(-1;-3)$ we have a conditional minimum of the function $z(x,y)=x+3y$, namely $z_{\min}=z(-1;-3)=-10$.

Note that instead of calculating the value of the determinant $H$ at each point separately, it is much more convenient to expand it in general form. In order not to clutter the text with details, this calculation is given in the note below.

Writing the determinant $H$ in general form:

$$ H=8\cdot\left|\begin{array}{ccc}0&x&y\\x&\lambda&0\\y&0&\lambda\end{array}\right| =8\cdot\left(-\lambda y^2-\lambda x^2\right) =-8\lambda\cdot\left(y^2+x^2\right). $$

In principle, it is already obvious what sign $H$ has. Since none of the points $M_1$ or $M_2$ coincides with the origin, then $y^2+x^2>0$. Therefore, the sign of $H$ is opposite to the sign of $\lambda$. You can complete the calculations:

$$ \begin{aligned} &H(M_1)=-8\cdot\left(-\frac{1}{2}\right)\cdot\left(3^2+1^2\right)=40;\\ &H(M_2)=-8\cdot\frac{1}{2}\cdot\left((-3)^2+(-1)^2\right)=-40. \end{aligned} $$

The question about the nature of the extremum at the stationary points $M_1(1;3)$ and $M_2(-1;-3)$ can be solved without using the determinant $H$. Let's find the sign of $d^2F$ at each stationary point:

$$ d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=2\lambda \left(dx^2+dy^2\right) $$

Note that the notation $dx^2$ means exactly $dx$ raised to the second power, i.e. $\left(dx\right)^2$. Hence $dx^2+dy^2>0$, and therefore for $\lambda_1=-\frac{1}{2}$ we get $d^2F < 0$. Consequently, the function has a conditional maximum at the point $M_1(1;3)$. Similarly, at the point $M_2(-1;-3)$ we obtain a conditional minimum of the function $z(x,y)=x+3y$. Note that to determine the sign of $d^2F$ we did not have to take into account the relation between $dx$ and $dy$, since the sign of $d^2F$ is obvious without additional transformations. In the next example, determining the sign of $d^2F$ will already require taking the relation between $dx$ and $dy$ into account.

Answer: at the point $(-1;-3)$ the function has a conditional minimum, $z_{\min}=-10$. At the point $(1;3)$ the function has a conditional maximum, $z_{\max}=10$.

Example No. 2

Find the conditional extremum of the function $z(x,y)=3y^3+4x^2-xy$ under the condition $x+y=0$.

First method (Lagrange multiplier method)

Denoting $\varphi(x,y)=x+y$, we compose the Lagrange function: $F(x,y)=z(x,y)+\lambda \varphi(x,y)=3y^3+4x^2 -xy+\lambda(x+y)$.

$$ \frac{\partial F}{\partial x}=8x-y+\lambda; \; \frac{\partial F}{\partial y}=9y^2-x+\lambda.\\ \left\{ \begin{aligned} & 8x-y+\lambda=0;\\ & 9y^2-x+\lambda=0; \\ & x+y=0. \end{aligned} \right. $$

Having solved the system, we get: $x_1=0$, $y_1=0$, $\lambda_1=0$ and $x_2=\frac{10}{9}$, $y_2=-\frac{10}{9}$, $\lambda_2=-10$. We have two stationary points: $M_1(0;0)$ and $M_2 \left(\frac{10}{9};-\frac{10}{9} \right)$. Let us find out the nature of the extremum at each stationary point using the determinant $H$.

$$H=\left| \begin{array}{ccc} 0 & \varphi_{x}^{'} & \varphi_{y}^{'}\\ \varphi_{x}^{'} & F_{xx}^{''} & F_{xy}^{''} \\ \varphi_{y}^{'} & F_{xy}^{''} & F_{yy}^{''} \end{array} \right|= \left| \begin{array}{ccc} 0 & 1 & 1\\ 1 & 8 & -1 \\ 1 & -1 & 18y \end{array} \right|=-10-18y $$

At the point $M_1(0;0)$ we have $H=-10-18\cdot 0=-10 < 0$, so $M_1(0;0)$ is a conditional minimum point of the function $z(x,y)=3y^3+4x^2-xy$, $z_{\min}=0$. At the point $M_2\left(\frac{10}{9};-\frac{10}{9}\right)$ we have $H=10 > 0$, therefore at this point the function has a conditional maximum, $z_{\max}=\frac{500}{243}$.

We investigate the nature of the extremum at each point using a different method, based on the sign of $d^2F$:

$$ d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=8dx^2-2dxdy+18ydy^2 $$

From the connection equation $x+y=0$ we have: $d(x+y)=0$, $dx+dy=0$, $dy=-dx$.

$$ d^2 F=8dx^2-2dxdy+18ydy^2=8dx^2-2dx(-dx)+18y(-dx)^2=(10+18y)dx^2 $$

Since $d^2F \Bigr|_{M_1}=10\,dx^2 > 0$, the point $M_1(0;0)$ is a conditional minimum point of the function $z(x,y)=3y^3+4x^2-xy$. Similarly, $d^2F \Bigr|_{M_2}=-10\,dx^2 < 0$, i.e. $M_2\left(\frac{10}{9}; -\frac{10}{9} \right)$ is a conditional maximum point.

Second method

From the connection equation $x+y=0$ we get: $y=-x$. Substituting $y=-x$ into the function $z(x,y)=3y^3+4x^2-xy$, we obtain some function of the variable $x$. Let's denote this function as $u(x)$:

$$ u(x)=z(x,-x)=3\cdot(-x)^3+4x^2-x\cdot(-x)=-3x^3+5x^2. $$

Thus, we reduced the problem of finding the conditional extremum of a function of two variables to the problem of determining the extremum of a function of one variable.

$$ u_{x}^{'}=-9x^2+10x;\\ -9x^2+10x=0; \; x\cdot(-9x+10)=0;\\ x_1=0; \; y_1=-x_1=0;\\ x_2=\frac{10}{9};\; y_2=-x_2=-\frac{10}{9}. $$

We obtained the points $M_1(0;0)$ and $M_2\left(\frac{10}{9}; -\frac{10}{9}\right)$. The further investigation is familiar from the differential calculus of functions of one variable: by examining the sign of $u_{xx}^{''}$ at each stationary point, or by checking the change of sign of $u_{x}^{'}$ at the found points, we obtain the same conclusions as in the first method. For example, let us check the sign of $u_{xx}^{''}$:

$$u_{xx}^{''}=-18x+10;\\ u_{xx}^{''}(M_1)=10;\;u_{xx}^{''}(M_2)=-10.$$

Since $u_{xx}^{''}(M_1)>0$, $M_1$ is a minimum point of the function $u(x)$, and $u_{\min}=u(0)=0$. Since $u_{xx}^{''}(M_2)<0$, $M_2$ is a maximum point of the function $u(x)$, with $u_{\max}=u\left(\frac{10}{9}\right)=\frac{500}{243}$.

The values of the function $u(x)$ under the given connection condition coincide with the values of the function $z(x,y)$, i.e. the found extrema of the function $u(x)$ are the sought conditional extrema of the function $z(x,y)$.

Answer: at the point $(0;0)$ the function has a conditional minimum, $z_{\min}=0$. At the point $\left(\frac{10}{9}; -\frac{10}{9} \right)$ the function has a conditional maximum, $z_{\max}=\frac{500}{243}$.
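
The second method also lends itself to a few lines of sympy; a minimal sketch of the substitution used above:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = 3*y**3 + 4*x**2 - x*y

u = z.subs(y, -x)                        # eliminate y using the constraint x + y = 0
critical = sp.solve(sp.diff(u, x), x)    # [0, 10/9]
for c in critical:
    second = sp.diff(u, x, 2).subs(x, c)
    kind = "minimum" if second > 0 else ("maximum" if second < 0 else "inconclusive")
    print(c, kind, u.subs(x, c))         # 0 -> minimum, 0;  10/9 -> maximum, 500/243
```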

Let's consider another example in which we will clarify the nature of the extremum by determining the sign of $d^2F$.

Example No. 3

Find the greatest and smallest values of the function $z=5xy-4$, if the variables $x$ and $y$ are positive and satisfy the connection equation $\frac{x^2}{8}+\frac{y^2}{2}-1=0$.

Let's compose the Lagrange function: $F=5xy-4+\lambda \left(\frac{x^2}{8}+\frac{y^2}{2}-1 \right)$. Let's find the stationary points of the Lagrange function:

$$ F_{x}^{'}=5y+\frac{\lambda x}{4}; \; F_{y}^{'}=5x+\lambda y.\\ \left\{ \begin{aligned} & 5y+\frac{\lambda x}{4}=0;\\ & 5x+\lambda y=0;\\ & \frac{x^2}{8}+\frac{y^2}{2}-1=0;\\ & x > 0; \;y > 0. \end{aligned} \right. $$

All further transformations are carried out taking into account $x > 0$, $y > 0$ (this is specified in the problem statement). From the second equation we express $\lambda=-\frac{5x}{y}$ and substitute the found value into the first equation: $5y-\frac{5x}{y}\cdot \frac{x}{4}=0$, $4y^2-x^2=0$, $x=2y$. Substituting $x=2y$ into the third equation, we get: $\frac{4y^2}{8}+\frac{y^2}{2}-1=0$, $y^2=1$, $y=1$.

Since $y=1$, then $x=2$, $\lambda=-10$. We determine the nature of the extremum at the point $(2;1)$ based on the sign of $d^2F$.

$$ F_(xx)^("")=\frac(\lambda)(4); \; F_(xy)^("")=5; \; F_(yy)^("")=\lambda. $$

Since $\frac{x^2}{8}+\frac{y^2}{2}-1=0$, then:

$$ d\left(\frac{x^2}{8}+\frac{y^2}{2}-1\right)=0; \; d\left(\frac{x^2}{8} \right)+d\left(\frac{y^2}{2} \right)=0; \; \frac{x}{4}dx+ydy=0; \; dy=-\frac{xdx}{4y}. $$

In principle, here you can immediately substitute the coordinates of the stationary point $x=2$, $y=1$ and the parameter $\lambda=-10$, obtaining:

$$ F_(xx)^("")=\frac(-5)(2); \; F_(xy)^("")=-10; \; dy=-\frac(dx)(2).\\ d^2 F=F_(xx)^("")dx^2+2F_(xy)^("")dxdy+F_(yy)^(" ")dy^2=-\frac(5)(2)dx^2+10dx\cdot \left(-\frac(dx)(2) \right)-10\cdot \left(-\frac(dx) (2) \right)^2=\\ =-\frac(5)(2)dx^2-5dx^2-\frac(5)(2)dx^2=-10dx^2. $$

However, in other problems on a conditional extremum there may be several stationary points. In such cases, it is better to represent $d^2F$ in general form, and then substitute the coordinates of each of the found stationary points into the resulting expression:

$$ d^2 F=F_{xx}^{''}dx^2+2F_{xy}^{''}dxdy+F_{yy}^{''}dy^2=\frac{\lambda}{4}dx^2+10\cdot dx\cdot \frac{-xdx}{4y} +\lambda\cdot \left(-\frac{xdx}{4y} \right)^2=\\ =\frac{\lambda}{4}dx^2-\frac{5x}{2y}dx^2+\lambda \cdot \frac{x^2dx^2}{16y^2}=\left(\frac{\lambda}{4}-\frac{5x}{2y}+\frac{\lambda \cdot x^2}{16y^2} \right)\cdot dx^2 $$

Substituting $x=2$, $y=1$, $\lambda=-10$, we get:

$$ d^2 F=\left(\frac{-10}{4}-\frac{10}{2}-\frac{10 \cdot 4}{16} \right)\cdot dx^2=-10dx^2. $$

Since $d^2F=-10\cdot dx^2 < 0$, the point $(2;1)$ is a conditional maximum point of the function $z=5xy-4$, with $z_{\max}=10-4=6$.

Answer: at the point $(2;1)$ the function has a conditional maximum, $z_{\max}=6$.
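
Since the constraint is an ellipse, the answer can also be checked by parametrizing it; a minimal sympy sketch (the parametrization $x = 2\sqrt{2}\cos t$, $y = \sqrt{2}\sin t$ is one convenient choice that satisfies the connection equation identically):

```python
import sympy as sp

t = sp.symbols('t', real=True)
x = 2*sp.sqrt(2)*sp.cos(t)               # then x^2/8 + y^2/2 - 1 = 0 for every t
y = sp.sqrt(2)*sp.sin(t)

z = sp.simplify(5*x*y - 4)               # simplifies to 10*sin(2*t) - 4
print(z)

# For x > 0, y > 0 take 0 < t < pi/2; the maximum of sin(2*t) there is 1 at t = pi/4:
print(z.subs(t, sp.pi/4), x.subs(t, sp.pi/4), y.subs(t, sp.pi/4))   # 6, 2, 1
```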

In the next part we will consider the application of the Lagrange method for functions of a larger number of variables.

Conditional extremum.

Extrema of a function of several variables

Least squares method.

Local extremum of a function of several variables (FNP)

Let a function $u = f(P)$, $P \in D \subset R^n$, be given, and let the point $P_0(a_1, a_2, \ldots, a_n)$ be an interior point of the set $D$.

Definition 9.4.

1) The point $P_0$ is called a maximum point of the function $u = f(P)$ if there exists a neighborhood $U(P_0) \subset D$ of this point such that for any point $P(x_1, x_2, \ldots, x_n) \in U(P_0)$, $P \neq P_0$, the condition $f(P) \le f(P_0)$ is satisfied. The value $f(P_0)$ of the function at a maximum point is called the maximum of the function and is denoted $f(P_0) = \max f(P)$.

2) The point $P_0$ is called a minimum point of the function $u = f(P)$ if there exists a neighborhood $U(P_0) \subset D$ of this point such that for any point $P(x_1, x_2, \ldots, x_n) \in U(P_0)$, $P \neq P_0$, the condition $f(P) \ge f(P_0)$ is satisfied. The value $f(P_0)$ of the function at a minimum point is called the minimum of the function and is denoted $f(P_0) = \min f(P)$.

The minimum and maximum points of a function are called its extremum points, and the values of the function at the extremum points are called extrema of the function.

As follows from the definition, the inequalities $f(P) \le f(P_0)$, $f(P) \ge f(P_0)$ must be satisfied only in a certain neighborhood of the point $P_0$, and not in the entire domain of definition of the function, which means that the function can have several extrema of the same type (several minima, several maxima). Therefore, the extrema defined above are called local extrema.

Theorem 9.1 (necessary condition for an extremum of the FNP)

If the function $u = f(x_1, x_2, \ldots, x_n)$ has an extremum at the point $P_0$, then its first-order partial derivatives at this point are either equal to zero or do not exist.

Proof. Let the function $u = f(P)$ have an extremum, for example a maximum, at the point $P_0(a_1, a_2, \ldots, a_n)$. Let us fix the arguments $x_2, \ldots, x_n$, putting $x_2 = a_2, \ldots, x_n = a_n$. Then $u = f(P) = f_1(x_1, a_2, \ldots, a_n)$ is a function of the one variable $x_1$. Since this function has an extremum (a maximum) at $x_1 = a_1$, we have $f_1'(a_1) = 0$ or it does not exist (a necessary condition for the existence of an extremum of a function of one variable). But this means that $\frac{\partial u}{\partial x_1}$ equals zero or does not exist at the extremum point $P_0$. The partial derivatives with respect to the other variables are treated similarly. QED.

Points in the domain of a function at which the first-order partial derivatives are equal to zero or do not exist are called critical points of this function.

As follows from Theorem 9.1, the extremum points of the FNP should be sought among the critical points of the function. But, as for a function of one variable, not every critical point is an extremum point.

Theorem 9.2. (sufficient condition for the extremum of the FNP)

Let $P_0$ be a critical point of the function $u = f(P)$ and let $d^2u(P_0)$ be the second-order differential of this function at this point. Then

a) if $d^2u(P_0) > 0$ for all values of the differentials $dx_1, \ldots, dx_n$ that are not all zero, then $P_0$ is a minimum point of the function $u = f(P)$;

b) if $d^2u(P_0) < 0$ for all such values of the differentials, then $P_0$ is a maximum point of the function $u = f(P)$;

c) if $d^2u(P_0)$ takes values of both signs, then $P_0$ is not an extremum point.

We will consider this theorem without proof.

Note that the theorem does not consider the case when d 2 u(P 0) = 0 or does not exist. This means that the question of the presence of an extremum at the point P 0 under such conditions remains open - additional research is needed, for example, a study of the increment of the function at this point.

In more detailed mathematics courses it is proved that, in particular for a function $z = f(x, y)$ of two variables, whose second-order differential is a sum of the form

$$d^2z = \frac{\partial^2 z}{\partial x^2}dx^2 + 2\frac{\partial^2 z}{\partial x\,\partial y}dx\,dy + \frac{\partial^2 z}{\partial y^2}dy^2,$$

the study of the presence of an extremum at the critical point $P_0$ can be simplified.

Let us denote $A = \frac{\partial^2 z}{\partial x^2}(P_0)$, $B = \frac{\partial^2 z}{\partial x\,\partial y}(P_0)$, $C = \frac{\partial^2 z}{\partial y^2}(P_0)$ and compose the determinant

$$D(P_0) = \left|\begin{array}{cc} A & B\\ B & C \end{array}\right| = AC - B^2.$$

It turns out that:

$d^2z > 0$ at the point $P_0$, i.e. $P_0$ is a minimum point, if $A(P_0) > 0$ and $D(P_0) > 0$;

$d^2z < 0$ at the point $P_0$, i.e. $P_0$ is a maximum point, if $A(P_0) < 0$ and $D(P_0) > 0$;

if $D(P_0) < 0$, then $d^2z$ changes sign in a neighborhood of the point $P_0$ and there is no extremum at the point $P_0$;

if $D(P_0) = 0$, then additional study of the function in a neighborhood of the critical point $P_0$ is also required.

Thus, for the function z = f(x,y) of two variables we have the following algorithm (let’s call it “algorithm D”) for finding an extremum:

1) Find the domain of definition $D(f)$ of the function.

2) Find the critical points, i.e. the points of $D(f)$ at which the partial derivatives $\frac{\partial z}{\partial x}$ and $\frac{\partial z}{\partial y}$ are equal to zero or do not exist.

3) At each critical point $P_0$, check the sufficient conditions for an extremum. To do this, find $A = \frac{\partial^2 z}{\partial x^2}(P_0)$, $B = \frac{\partial^2 z}{\partial x\,\partial y}(P_0)$, $C = \frac{\partial^2 z}{\partial y^2}(P_0)$, and calculate $D(P_0) = AC - B^2$ and $A(P_0)$. Then:

if $D(P_0) > 0$, then there is an extremum at the point $P_0$: a minimum if $A(P_0) > 0$ and a maximum if $A(P_0) < 0$;

if $D(P_0) < 0$, then there is no extremum at the point $P_0$;

if $D(P_0) = 0$, then additional research is needed.

4) At the found extremum points, calculate the value of the function.

Example 1.

Find the extremum of the function $z = x^3 + 8y^3 - 3xy$.

Solution. The domain of definition of this function is the entire coordinate plane. Let's find critical points.

$z_x = 3x^2 - 3y = 0$, $z_y = 24y^2 - 3x = 0$ $\;\Rightarrow\;$ $y = x^2$, $24x^4 - 3x = 0$ $\;\Rightarrow\;$ $P_0(0, 0)$, $P_1\left(\frac{1}{2}, \frac{1}{4}\right)$.

Let us check whether the sufficient conditions for an extremum are satisfied. We find

$z_{xx} = 6x$, $z_{xy} = -3$, $z_{yy} = 48y$, and $D(x, y) = z_{xx}z_{yy} - z_{xy}^2 = 288xy - 9$.

Then $D(P_0) = 288\cdot 0\cdot 0 - 9 = -9 < 0$, which means that there is no extremum at the point $P_0$.

$D(P_1) = 36 - 9 = 27 > 0$, so there is an extremum at the point $P_1$, and since $A(P_1) = 3 > 0$, this extremum is a minimum. So $\min z = z(P_1) = z\left(\frac{1}{2}, \frac{1}{4}\right) = -\frac{1}{8}$.
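
A minimal sympy sketch of "algorithm D" applied to this example:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x**3 + 8*y**3 - 3*x*y

critical = sp.solve([sp.diff(z, x), sp.diff(z, y)], (x, y), dict=True)
for p in critical:                       # real critical points: (0, 0) and (1/2, 1/4)
    A = sp.diff(z, x, 2).subs(p)
    D = (sp.diff(z, x, 2)*sp.diff(z, y, 2) - sp.diff(z, x, y)**2).subs(p)
    print(p, "A =", A, "D =", D, "z =", z.subs(p))
# at (0, 0):     D = -9 < 0            -> no extremum
# at (1/2, 1/4): D = 27 > 0, A = 3 > 0 -> minimum, z = -1/8
```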

Example 2.

Find the extremum of the function $z = x^2 + \sqrt[3]{y}$.

Solution: $D(f) = R^2$. Critical points: $z_x = 2x = 0$ at $x = 0$; $z_y = \frac{1}{3\sqrt[3]{y^2}}$ does not exist when $y = 0$, which means $P_0(0, 0)$ is the critical point of this function.

$z_{xx} = 2$, $z_{xy} = 0$, while $z_{yy}$ does not exist at $y = 0$, so $D(P_0)$ is not defined and studying its sign is impossible.

For the same reason it is impossible to apply Theorem 9.2 directly: $d^2z$ does not exist at this point.

Let us consider the increment of the function $f(x, y)$ at the point $P_0$. If $\Delta f = f(P) - f(P_0) > 0$ for all $P$ in some neighborhood, then $P_0$ is a minimum point, while if $\Delta f < 0$, then $P_0$ is a maximum point.

In our case we have

$\Delta f = f(x, y) - f(0, 0) = f(0+\Delta x,\, 0+\Delta y) - f(0, 0) = \Delta x^2 + \sqrt[3]{\Delta y}$.

For $\Delta x = 0.1$ and $\Delta y = -0.008$ we get $\Delta f = 0.01 - 0.2 < 0$, while for $\Delta x = 0.1$ and $\Delta y = 0.001$ we get $\Delta f = 0.01 + 0.1 > 0$; that is, in any neighborhood of the point $P_0$ neither the condition $\Delta f < 0$ (i.e. $f(x, y) < f(0, 0)$, so $P_0$ is not a maximum point) nor the condition $\Delta f > 0$ (i.e. $f(x, y) > f(0, 0)$, so $P_0$ is not a minimum point) holds throughout. So, by the definition of an extremum, this function has no extrema.

Conditional extremum.

The considered extremum of the function is called unconditional, since no restrictions (conditions) are imposed on the function arguments.

Definition 9.2. An extremum of the function $u = f(x_1, x_2, \ldots, x_n)$, found under the condition that its arguments $x_1, x_2, \ldots, x_n$ satisfy the equations $\varphi_1(x_1, x_2, \ldots, x_n) = 0, \;\ldots,\; \varphi_m(x_1, x_2, \ldots, x_n) = 0$, where $P(x_1, x_2, \ldots, x_n) \in D(f)$, is called a conditional extremum.

The equations $\varphi_k(x_1, x_2, \ldots, x_n) = 0$, $k = 1, 2, \ldots, m$, are called connection equations.

Let us look at a function $z = f(x, y)$ of two variables. If there is one connection equation, $\varphi(x, y) = 0$, then finding a conditional extremum means that the extremum is sought not on the entire domain of definition of the function but on some curve lying in $D(f)$ (i.e. it is not the highest or lowest points of the surface $z = f(x, y)$ that are sought, but the highest or lowest points among the points of intersection of this surface with the cylinder erected over the curve $\varphi(x, y) = 0$; Fig. 5).


A conditional extremum of a function $z = f(x, y)$ of two variables can be found in the following way (elimination method): from the equation $\varphi(x, y) = 0$, express one of the variables as a function of the other (for example, write $y = \psi(x)$) and, substituting this expression into the function, write the latter as a function of one variable (in the case considered, $z = f(x, \psi(x))$). Find the extremum of the resulting function of one variable.

Example

Find the extremum of the function $z = f(x, y)$ provided that $x$ and $y$ are related by a connection equation. Geometrically, the problem means the following: on the ellipse obtained by intersecting the cylinder defined by the connection equation with the plane $z = f(x, y)$, one needs to find the maximum or minimum value of the applicate (Fig. 9). The problem can be solved this way: from the connection equation we find $y$ as a function of $x$; substituting the found expression for $y$ into the equation of the plane, we obtain a function of one variable $x$. Thus, the problem of finding the extremum of the function $z = f(x, y)$ under the given condition is reduced to the problem of finding the extremum of a function of one variable on an interval.

So, the problem of finding a conditional extremum is the problem of finding an extremum of the objective function $z = f(x, y)$ provided that the variables $x$ and $y$ are subject to the restriction $\varphi(x, y) = 0$, called the connection equation.

We say that a point $(x_0, y_0)$ satisfying the connection equation is a point of local conditional maximum (minimum) if there is a neighborhood of it such that for any point $(x, y)$ of this neighborhood whose coordinates satisfy the connection equation, the inequality $f(x, y) \le f(x_0, y_0)$ (respectively $f(x, y) \ge f(x_0, y_0)$) is satisfied.

If from the connection equation one can find an expression for $y$, then substituting this expression into the original function turns the latter into a composite function of one variable $x$.

The general method for solving the conditional extremum problem is the Lagrange multiplier method. We form the auxiliary function $L(x, y) = f(x, y) + \lambda\varphi(x, y)$, where $\lambda$ is some number. This function is called the Lagrange function, and $\lambda$ the Lagrange multiplier. Thus, the task of finding a conditional extremum is reduced to finding local extremum points of the Lagrange function. To find the possible extremum points, one needs to solve the system of three equations $L_x = 0$, $L_y = 0$, $\varphi(x, y) = 0$ in the three unknowns $x$, $y$ and $\lambda$.

Then you should use the following sufficient condition for an extremum.

THEOREM. Let the point $(x_0, y_0)$ be a possible extremum point for the Lagrange function (a solution of the above system for some $\lambda_0$). Let us assume that in a neighborhood of the point $(x_0, y_0)$ the functions $f$ and $\varphi$ have continuous second-order partial derivatives. Denote

$$\Delta = \left|\begin{array}{ccc} 0 & \varphi_x & \varphi_y\\ \varphi_x & L_{xx} & L_{xy}\\ \varphi_y & L_{xy} & L_{yy} \end{array}\right|_{(x_0,\,y_0,\,\lambda_0)}.$$

Then, if $\Delta \neq 0$, the point $(x_0, y_0)$ is a conditional extremum point of the function $z = f(x, y)$ with the connection equation $\varphi(x, y) = 0$; moreover, if $\Delta < 0$, it is a conditional minimum point, and if $\Delta > 0$, a conditional maximum point.

§8. Gradient and directional derivative

Let a function $z = f(x, y)$ be defined in some (open) region. Consider any point $M_0(x_0, y_0)$ of this region and any directed straight line (axis) $l$ passing through this point (Fig. 1). Let $M(x, y)$ be some other point on this axis, and let $\Delta l$ be the length of the segment between $M_0$ and $M$, taken with a plus sign if the direction $M_0M$ coincides with the direction of the axis $l$, and with a minus sign if their directions are opposite.

Let $M$ approach $M_0$ indefinitely. The limit

$$\frac{\partial f}{\partial l} = \lim_{\Delta l \to 0} \frac{f(M) - f(M_0)}{\Delta l}$$

is called the derivative of the function $f$ in the direction $l$ (or along the axis $l$).

This derivative characterizes the "rate of change" of the function at the point $M_0$ in the direction $l$. In particular, the ordinary partial derivatives $\frac{\partial f}{\partial x}$, $\frac{\partial f}{\partial y}$ can also be thought of as derivatives "with respect to direction".

Let us now assume that the function $f$ has continuous partial derivatives in the region under consideration. Let the axis $l$ form angles $\alpha$ and $\beta$ with the coordinate axes. Under the assumptions made, the directional derivative exists and is expressed by the formula

$$\frac{\partial f}{\partial l} = \frac{\partial f}{\partial x}\cos\alpha + \frac{\partial f}{\partial y}\cos\beta.$$

If a vector $\vec{a}$ is given by its coordinates $(a_1; a_2)$, then the derivative of the function $z = f(x, y)$ in the direction of the vector $\vec{a}$ can be calculated using the formula

$$\frac{\partial z}{\partial \vec{a}} = \frac{\partial z}{\partial x}\cdot\frac{a_1}{|\vec{a}|} + \frac{\partial z}{\partial y}\cdot\frac{a_2}{|\vec{a}|}, \qquad |\vec{a}| = \sqrt{a_1^2 + a_2^2}.$$

The vector with coordinates $\left(\frac{\partial z}{\partial x}; \frac{\partial z}{\partial y}\right)$ is called the gradient vector of the function $z = f(x, y)$ at the point $M(x, y)$ and is denoted $\operatorname{grad} z$. The gradient vector indicates the direction of the fastest increase of the function at the given point.
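
A minimal sympy sketch of these formulas; the function $z = x^2y$, the point $(1, 1)$ and the direction vector $(3; 4)$ are illustrative choices, not the data of the example below:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x**2 * y                             # illustrative function
a = sp.Matrix([3, 4])                    # illustrative direction vector
A = {x: 1, y: 1}                         # illustrative point

grad = sp.Matrix([sp.diff(z, x), sp.diff(z, y)]).subs(A)   # grad z(A) = (2, 1)
unit = a / a.norm()                                        # unit vector (3/5, 4/5)
directional = (grad.T * unit)[0]                           # 2*3/5 + 1*4/5 = 2
print(grad.T, directional)
```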

Example

Given a function $z = f(x, y)$, a point $A(1, 1)$ and a vector $\vec{a}$. Find: 1) $\operatorname{grad} z$ at the point $A$; 2) the derivative at the point $A$ in the direction of the vector $\vec{a}$.

We compute the partial derivatives of the given function at the point $A$: $\frac{\partial z}{\partial x}(A)$ and $\frac{\partial z}{\partial y}(A)$.

Then the gradient vector of the function at this point is $\operatorname{grad} z(A) = \left(\frac{\partial z}{\partial x}(A);\; \frac{\partial z}{\partial y}(A)\right)$. The gradient vector can also be written using the decomposition in the unit vectors $\vec{i}$ and $\vec{j}$: $\operatorname{grad} z(A) = \frac{\partial z}{\partial x}(A)\,\vec{i} + \frac{\partial z}{\partial y}(A)\,\vec{j}$.

The derivative of the function in the direction of the vector $\vec{a} = (a_1; a_2)$ is then $\frac{\partial z}{\partial \vec{a}}(A) = \frac{\partial z}{\partial x}(A)\cdot\frac{a_1}{|\vec{a}|} + \frac{\partial z}{\partial y}(A)\cdot\frac{a_2}{|\vec{a}|}$.
