Example 2 Let $X = \mathbb{R}^2$; solve
$$\max_{x \in X} \ x_1 + x_2 \quad \text{subject to} \quad \|x\|_2 = 1.$$
In words, we look for the point on the unit circle in $\mathbb{R}^2$ that has the largest sum of its coordinates. It is easy to see by inspection that such a point is given by $x^* = \left[\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right]^T$. The feasible set can be given in terms of the single equality constraint
$$g(x) = x_1^2 + x_2^2 - 1 = 0.$$
From previous work, we know that the gradient of this constraint is given by $\nabla g(x) = 2x$. We can also write the objective function as
$$f(x) = x_1 + x_2 = \left\langle x, [1,1]^T \right\rangle,$$
which has gradient $\nabla f(x) = [1,1]^T$. Thus, we can write the tangent space in this case as
$$T = \left\{ h \in \mathbb{R}^2 : \langle \nabla g(x^*), h \rangle = 0 \right\} = \left\{ h : \sqrt{2}\, h_1 + \sqrt{2}\, h_2 = 0 \right\}.$$
Therefore, we can write the tangent space as $T = \left\{ [a, -a]^T : a \in \mathbb{R} \right\}$. It is easy to see at this point that for any such $h \in T$ we will have $\langle \nabla f(x^*), h \rangle = a - a = 0$, as stated by Theorem [link].
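As a quick numerical sanity check of this example, the following sketch (our own illustration in Python with numpy; the variable names are not from the text) evaluates both gradients at $x^*$ and confirms that $\langle \nabla f(x^*), h \rangle = 0$ for tangent vectors $h = [a, -a]^T$:

import numpy as np

x_star = np.array([1.0, 1.0]) / np.sqrt(2)   # extremum on the unit circle
grad_f = np.array([1.0, 1.0])                # gradient of f(x) = x_1 + x_2
grad_g = 2 * x_star                          # gradient of g(x) = x_1^2 + x_2^2 - 1

# Tangent vectors h = [a, -a]^T satisfy <grad_g(x*), h> = 0;
# the claim is that <grad_f(x*), h> = 0 as well.
for a in [1.0, -2.5, 0.3]:
    h = np.array([a, -a])
    print(np.dot(grad_g, h), np.dot(grad_f, h))  # both print 0.0 (up to rounding)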
In this section, we will develop a method to solve optimization problems with differentiable objective functions and equality constraints; the key step is a lemma about linear functionals.
Lemma 1 Let $f, g_1, \ldots, g_n$ be linear functionals on a Hilbert space $X$ and suppose $f(x) = 0$ for all $x$ such that $g_1(x) = \cdots = g_n(x) = 0$. Then there exist constants $\lambda_1, \ldots, \lambda_n$ such that $f = \sum_{i=1}^{n} \lambda_i g_i$.
Since our functionals are linear, we have $f, g_1, \ldots, g_n \in X^*$, the dual space of $X$. Define the subspace $G = \mathrm{span}(g_1, \ldots, g_n) \subseteq X^*$. Since $G$ is finite-dimensional, then $G$ is closed. We can therefore define its orthogonal complement:
$$G^\perp = \{ x \in X : g(x) = 0 \text{ for all } g \in G \}.$$
Since Hilbert spaces are self-dual, then for each functional $g_i$ there exists $w_i \in X$ such that $g_i(x) = \langle x, w_i \rangle$ (and similarly $f(x) = \langle x, v \rangle$ for some $v \in X$); therefore, we can rewrite the space above as its dual equivalent
$$W = \mathrm{span}(w_1, \ldots, w_n), \qquad G^\perp = W^\perp = \{ x \in X : \langle x, w_i \rangle = 0, \ i = 1, \ldots, n \}.$$
Now since $g_i(x) = \langle x, w_i \rangle$, it follows that for all $x \in W^\perp$ we have that $g_i(x) = 0$, $i = 1, \ldots, n$. Therefore, the hypothesis of the Lemma implies that for all $x \in W^\perp$ we have that $f(x) = \langle x, v \rangle = 0$. This implies that $v \in (W^\perp)^\perp = W$, due to $W$ being closed. Reversing to the dual space $X^*$, this implies that $f \in G$, and so we can write $f = \sum_{i=1}^{n} \lambda_i g_i$.
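To make the lemma concrete, the following sketch (a hypothetical finite-dimensional example of ours, in Python with numpy) represents $g_1, g_2$ on $\mathbb{R}^3$ by their Riesz representers $w_1, w_2$, picks an $f(x) = \langle x, v \rangle$ that vanishes on their common null space, and recovers the constants $\lambda_i$ by solving $v = \sum_i \lambda_i w_i$:

import numpy as np

# Columns are the representers w_1, w_2 of g_i(x) = <x, w_i> (hypothetical example)
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# f(x) = <x, v> vanishes whenever g_1(x) = g_2(x) = 0 (i.e., on span{e_3})
v = np.array([2.0, -3.0, 0.0])

# Solve v = W @ lam; a zero residual confirms v lies in span(w_1, w_2),
# i.e., f = lam_1 g_1 + lam_2 g_2, as the lemma asserts.
lam, residual, *_ = np.linalg.lstsq(W, v, rcond=None)
print(lam)  # [ 2. -3.]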
Theorem [link] shows that we can apply Lemma [link] to the constrained optimization problem: at an extremum $x^*$, the linear functional $h \mapsto \langle \nabla f(x^*), h \rangle$ vanishes whenever the linear functionals $h \mapsto \langle \nabla g_i(x^*), h \rangle$, $i = 1, \ldots, n$, all vanish. Thus, at the extremum $x^*$ of the constrained program there exist constants $\lambda_1, \ldots, \lambda_n$ such that for all $h \in X$ (relabeling the constants $-\lambda_i \to \lambda_i$),
$$\langle \nabla f(x^*), h \rangle + \sum_{i=1}^{n} \lambda_i \langle \nabla g_i(x^*), h \rangle = 0,$$
which is equivalent to $\nabla f(x^*) + \sum_{i=1}^{n} \lambda_i \nabla g_i(x^*) = 0$. This can be written as the gradient of the Lagrangian function
$$L(x, \lambda_1, \ldots, \lambda_n) = f(x) + \sum_{i=1}^{n} \lambda_i g_i(x).$$
Thus, we say that the Lagrangian must have a zero-valued directional derivative at the extremum in all directions or, equivalently, a zero-valued gradient there. The results obtained in this section can be collected into a proof of the following theorem.
Theorem 2 If $x^*$ is an extremum of a functional $f$ subject to the constraints $g_i(x) = 0$, $i = 1, \ldots, n$, then there exist scalars $\lambda_1, \ldots, \lambda_n$ such that the Lagrangian $L(x) = f(x) + \sum_{i=1}^{n} \lambda_i g_i(x)$ is stationary at $x^*$; that is, for all $h$ we have $\langle \nabla L(x^*), h \rangle = 0$ or, equivalently, $\nabla L(x^*) = 0$.
The constants $\lambda_1, \ldots, \lambda_n$ are known as Lagrange multipliers.
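In finite dimensions, Theorem 2 turns a constrained problem into solving $\nabla L = 0$ jointly in $x$ and the multipliers. A minimal sketch of that recipe (ours, in Python with sympy; the problem of minimizing $x_1^2 + x_2^2$ subject to $x_1 + x_2 = 1$ is an assumed illustration, not from the text):

import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)

f = x1**2 + x2**2      # objective (assumed illustrative problem)
g = x1 + x2 - 1        # equality constraint g(x) = 0

L = f + lam * g        # Lagrangian L(x, lambda)

# Stationarity in x recovers grad f + lambda * grad g = 0;
# differentiating in lambda recovers the constraint itself.
eqs = [sp.diff(L, var) for var in (x1, x2, lam)]
print(sp.solve(eqs, [x1, x2, lam]))  # {x1: 1/2, x2: 1/2, lambda: -1}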
Example 3 We want to minimize $f(x) = x_1 + x_2$ subject to $\|x\|_2 = 1$, i.e., the minimization counterpart of Example 2. The constraint function is
$$g(x) = x_1^2 + x_2^2 - 1.$$
From earlier we know that $\nabla f(x) = [1,1]^T$, while we can rewrite the constraint as
$$g(x) = \langle x, x \rangle - 1,$$
so that $\nabla g(x) = 2x$. Therefore, the extremum's condition on the gradient of the Lagrangian results in the equation
$$\nabla L(x) = \nabla f(x) + \lambda \nabla g(x) = [1,1]^T + 2\lambda x = 0;$$
the solution to this equation is $x = -\frac{1}{2\lambda} [1,1]^T$. To solve for the value of the Lagrange multiplier $\lambda$, we plug the solution into the constraint:
$$g\!\left(-\tfrac{1}{2\lambda}[1,1]^T\right) = \frac{1}{4\lambda^2} + \frac{1}{4\lambda^2} - 1 = \frac{1}{2\lambda^2} - 1 = 0,$$
which gives $\lambda = \pm \frac{1}{\sqrt{2}}$; the minimum corresponds to $\lambda = \frac{1}{\sqrt{2}}$. Therefore, we end up with the solution $x^* = \left[-\frac{1}{\sqrt{2}}, -\frac{1}{\sqrt{2}}\right]^T$.
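As a cross-check of this example (under the problem as stated above), the sketch below (ours, using scipy) minimizes the same objective over the unit circle numerically and should return approximately $[-0.7071, -0.7071]$:

import numpy as np
from scipy.optimize import minimize

# Minimize f(x) = x_1 + x_2 subject to x_1^2 + x_2^2 = 1
result = minimize(
    lambda x: x[0] + x[1],
    x0=np.array([0.0, -1.0]),  # starting point away from the maximizer
    constraints={'type': 'eq', 'fun': lambda x: x[0]**2 + x[1]**2 - 1},
)
print(result.x)            # approx [-0.7071, -0.7071]
print(-1 / np.sqrt(2))     # analytical coordinate value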