Lagrangian Method
The Lagrangian method, sometimes called the method of Lagrange multipliers, is an approach for solving constrained optimization problems.
Description
The method is used to identify the optimal point of a function subject to one or more constraints.
The fundamental idea is that, at the optimal point, the gradient of the function to optimize is parallel to the gradient of the constraint function. An equation can then be formulated by introducing a dummy multiplier (λ, the Lagrange multiplier) as the constant of proportionality.
This is sometimes notated as formulating a Lagrangian function: L(x, λ) = f(x) + λ g(x).
Univariate Formulation
Given the problem of maximizing f(x) subject to the constraint g(x) = 0, the system can be rewritten as: L = f(x) + λ g(x). This gives a system of equations which can be solved for the point where the derivative of L with respect to x is 0 and the derivative of L with respect to λ is 0.
The problem can also be solved if the constraint is some other constant, like g(x) = h. The Lagrangian function becomes L = f(x) + λ(g(x) - h) and the second part of the system of equations becomes:

∂L/∂λ = g(x) - h = 0
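This procedure can be sketched symbolically. The following example assumes SymPy is available; the specific choices f(x) = -x², g(x) = x, and h = 3 are made up for illustration and are not from the text.

```python
import sympy as sp

x, lam = sp.symbols("x lambda", real=True)

# Illustrative choices (not from the text): f(x) = -x**2, g(x) = x, h = 3.
f = -x**2
g = x
h = 3

# Lagrangian: L = f(x) + lambda * (g(x) - h)
L = f + lam * (g - h)

# Set dL/dx = 0 and dL/dlambda = 0 and solve the resulting system.
solutions = sp.solve([sp.diff(L, x), sp.diff(L, lam)], [x, lam], dict=True)
print(solutions)  # x = 3 (forced by the constraint), lambda = 6
```

Note that the ∂L/∂λ = 0 equation simply reproduces the constraint, which is exactly why the multiplier formulation works.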
Multivariate Formulation
Suppose the function to optimize takes three variables, as f(x, y, z). Suppose there are also two constraints, like g(x, y, z) = 0 and h(x, y, z) = 1.
Expressed in terms of the unit vectors i, j, k, the gradients of these functions are:

∇f = (∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k
∇g = (∂g/∂x)i + (∂g/∂y)j + (∂g/∂z)k
∇h = (∂h/∂x)i + (∂h/∂y)j + (∂h/∂z)k
To formulate the parallelism between the gradient of the function to optimize and the gradients of the constraints, introduce one multiplier per constraint and try:

∇f = λ1∇g + λ2∇h

Or more concretely:

(∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k = λ1[(∂g/∂x)i + (∂g/∂y)j + (∂g/∂z)k] + λ2[(∂h/∂x)i + (∂h/∂y)j + (∂h/∂z)k]

Matching components gives the following system of equations:

∂f/∂x = λ1 ∂g/∂x + λ2 ∂h/∂x
∂f/∂y = λ1 ∂g/∂y + λ2 ∂h/∂y
∂f/∂z = λ1 ∂g/∂z + λ2 ∂h/∂z
These three equations, plus the two original constraints, create a system of five equations in five unknowns (x, y, z, λ1, λ2) which can be solved for the optimal point.
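The five-equation system can be assembled and solved symbolically. The example below assumes SymPy; the problem itself — maximize f = x + y + z subject to g(x, y, z) = x² + y² - 1 = 0 and h(x, y, z) = z = 1 — is an illustrative choice, not from the text.

```python
import sympy as sp

x, y, z, l1, l2 = sp.symbols("x y z lambda1 lambda2", real=True)

# Illustrative problem (not from the text): maximize f = x + y + z
# subject to g(x, y, z) = x**2 + y**2 - 1 = 0 and h(x, y, z) = z = 1.
f = x + y + z
g = x**2 + y**2 - 1
h_expr = z

# Gradient condition, one scalar equation per variable:
# grad f = lambda1 * grad g + lambda2 * grad h
grad_eqs = [
    sp.diff(f, v) - l1 * sp.diff(g, v) - l2 * sp.diff(h_expr, v)
    for v in (x, y, z)
]

# Those three equations plus the two constraints: five equations, five unknowns.
eqs = grad_eqs + [g, h_expr - 1]
solutions = sp.solve(eqs, [x, y, z, l1, l2], dict=True)
print(solutions)
```

The solver returns both candidate points (x = y = ±√2/2, z = 1); comparing f at each identifies the maximum, f = 1 + √2.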
Higher Dimensions
In higher dimensions, the abstract formulation still holds: ∇f = λ1∇g + λ2∇h
The Lagrangian function formulation is also still useful: given a formulation like L(x1, x2, ..., xn, λ) = f(x1, x2, ..., xn) + λ g(x1, x2, ..., xn), the optimal point exists where ∇L equals a vector of 0s.
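The ∇L = 0 condition can be checked numerically at a known optimum. The sketch below assumes NumPy and uses an illustrative problem (not from the text): maximize f(x, y) = x + y on the unit circle g(x, y) = x² + y² - 1 = 0, whose maximizer is x = y = √2/2 with multiplier λ = -√2/2.

```python
import numpy as np

# Illustrative Lagrangian (not from the text): f = x + y, g = x**2 + y**2 - 1,
# so L(x, y, lam) = f + lam * g. Its gradient should vanish at the optimum.
def L(v):
    x, y, lam = v
    return (x + y) + lam * (x**2 + y**2 - 1)

def numerical_gradient(func, v, eps=1e-6):
    """Central finite-difference approximation of the gradient of func at v."""
    v = np.asarray(v, dtype=float)
    grad = np.zeros_like(v)
    for i in range(v.size):
        step = np.zeros_like(v)
        step[i] = eps
        grad[i] = (func(v + step) - func(v - step)) / (2 * eps)
    return grad

# Known maximizer of x + y on the unit circle, paired with its multiplier.
optimum = [np.sqrt(2) / 2, np.sqrt(2) / 2, -np.sqrt(2) / 2]
print(numerical_gradient(L, optimum))  # approximately [0, 0, 0]
```

The last component of ∇L is just g(x, y), so checking ∇L ≈ 0 verifies both the gradient-parallelism condition and constraint satisfaction at once.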
