Revision 6 as of 2025-11-21 19:38:48
Lagrangian Method
The Lagrangian method, sometimes called the method of Lagrange multipliers, is an approach for solving constrained maximization problems.
Description
The method is used to identify the optimal point of a function subject to one or more constraints.
The fundamental idea is that, at the optimal point, the gradient of the function to optimize is parallel to the gradient of the constraint function. An equation can then be formulated by inserting a dummy multiplier (λ, the Lagrange multiplier) that scales one gradient to equal the other.
This is sometimes notated as formulating a Lagrangian function: L(x, λ) = f(x) + λ g(x).
Univariate Formulation
Given a problem of maximizing f(x) subject to the constraint g(x) = 0, the system can be rewritten as: L = f(x) + λ g(x). This gives a system of equations which can be solved for the point where the derivative of L with respect to x is 0 and the derivative of L with respect to λ is 0 (the latter simply recovers the constraint g(x) = 0).
The problem can also be solved if the constraint is some other constant, like g(x) = h. The Lagrangian function becomes L = f(x) + λ(g(x) - h), and the second part of the system of equations, the derivative of L with respect to λ, becomes g(x) - h = 0.
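Both univariate cases can be solved symbolically. The following is a minimal sketch using SymPy, with a hypothetical f and g chosen purely for illustration (they are not from this page):

```python
import sympy as sp

x, lam = sp.symbols('x lam')
f = -(x - 1)**2              # hypothetical function to maximize
g = x - 3                    # hypothetical constraint; g(x) = 0 forces x = 3

# Constraint g(x) = 0: solve dL/dx = 0 and dL/dlam = 0.
L = f + lam * g
sol = sp.solve([sp.diff(L, x), sp.diff(L, lam)], [x, lam], dict=True)[0]

# Constraint g(x) = h: the Lagrangian becomes f + lam*(g - h).
h = 2                        # so the constraint is x - 3 = 2, i.e. x = 5
L2 = f + lam * (g - h)
sol2 = sp.solve([sp.diff(L2, x), sp.diff(L2, lam)], [x, lam], dict=True)[0]

print(sol[x], sol[lam])      # 3 4
print(sol2[x], sol2[lam])    # 5 8
```

In both runs the λ-equation pins x to the constraint, and the x-equation then determines λ.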
Multivariate Formulation
Suppose the function to optimize takes three variables, as f(x, y, z). Suppose there are also two constraints, like g(x, y, z) = 0 and h(x, y, z) = 1.
Expressed in terms of unit vectors, the gradients of these functions are:

∇f = (∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k
∇g = (∂g/∂x)i + (∂g/∂y)j + (∂g/∂z)k
∇h = (∂h/∂x)i + (∂h/∂y)j + (∂h/∂z)k

To formulate the similarity of the function to optimize and the constraints, try:

∇f = λ1∇g + λ2∇h

Or more concretely:

(∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k = λ1[(∂g/∂x)i + (∂g/∂y)j + (∂g/∂z)k] + λ2[(∂h/∂x)i + (∂h/∂y)j + (∂h/∂z)k]

This gives the following system of equations:

∂f/∂x = λ1 ∂g/∂x + λ2 ∂h/∂x
∂f/∂y = λ1 ∂g/∂y + λ2 ∂h/∂y
∂f/∂z = λ1 ∂g/∂z + λ2 ∂h/∂z
These three, plus the two original constraint equations, give a system of five equations in five unknowns (x, y, z, λ1, λ2) which can be solved for the optimal point.
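As a concrete sketch of the five-equation system, the following uses SymPy with a hypothetical f, g, and h chosen only for illustration (maximizing x + y + z on the unit sphere, restricted to the plane z = 0):

```python
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z l1 l2', real=True)
f = x + y + z                  # hypothetical function to maximize
g = x**2 + y**2 + z**2 - 1     # hypothetical constraint g = 0 (unit sphere)
h = z                          # hypothetical constraint h = 0 (the plane z = 0)

# Three componentwise equations from grad f = l1*grad g + l2*grad h ...
eqs = [sp.diff(f, v) - l1 * sp.diff(g, v) - l2 * sp.diff(h, v)
       for v in (x, y, z)]
# ... plus the two original constraints: five equations, five unknowns.
eqs += [g, h]

sols = sp.solve(eqs, [x, y, z, l1, l2], dict=True)
best = max(sols, key=lambda s: f.subs(s))   # keep the maximizing candidate
# The maximum lies at x = y = sqrt(2)/2, z = 0.
print(best[x], best[y], best[z])
```

Note that solving the system yields every critical point (here, both the maximum and the minimum on the circle), so the candidates still have to be compared against f.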
Higher Dimensions
In higher dimensions, the abstract formulation still holds: ∇f = λ1∇g + λ2∇h
The Lagrangian function formulation also remains useful: given a formulation like L(x1, x2, ... xn, λ) = f(x1, x2, ... xn) + λ · g(x1, x2, ... xn), the optimal point exists where ∇L equals a vector of 0s.
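The ∇L = 0 formulation can be illustrated in two variables with a hypothetical f and g (not from this page):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
f = x * y                 # hypothetical function to maximize
g = x + y - 4             # hypothetical constraint: x + y = 4

# grad L = 0: partial derivatives with respect to every variable and lam.
L = f + lam * g
grad_L = [sp.diff(L, v) for v in (x, y, lam)]
sol = sp.solve(grad_L, [x, y, lam], dict=True)[0]
print(sol[x], sol[y], sol[lam])   # 2 2 -2
```

The component ∂L/∂λ = 0 is just the constraint itself, so ∇L = 0 bundles the gradient-matching equations and the constraint into one condition.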
