While reading papers on convex optimization, I was often puzzled by the tricks the authors use to convert a complicated problem into a much easier one. I would like to understand why those tricks work and the reasoning behind them, which is why I am writing the following notes.

The Original Problem

A convex optimization problem in standard form is written as follows,

\min~f_{0}(x)

s.t.~f_{i}(x) \leq 0,~i=1,2,...,m

h_{i}(x) = 0, ~i=1,2,...,p

The equality and inequality constraints define the feasible region of the problem.
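As a concrete illustration of the standard form, here is a minimal sketch using the cvxpy modeling library; the particular objective and constraints are my own made-up toy example, not taken from any paper.

```python
# Toy standard-form convex problem (illustrative example):
#   minimize    f0(x) = x1^2 + x2^2
#   subject to  f1(x) = 1 - x1 - x2 <= 0     (inequality constraint)
#               h1(x) = x1 - 2*x2   = 0      (equality constraint)
import cvxpy as cp

x = cp.Variable(2)
f0 = cp.sum_squares(x)                # convex objective
ineq = [1 - x[0] - x[1] <= 0]         # f_i(x) <= 0
eq = [x[0] - 2 * x[1] == 0]           # h_i(x) == 0

prob = cp.Problem(cp.Minimize(f0), ineq + eq)
prob.solve()
print("optimal value:", prob.value)
print("optimal x:", x.value)
```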

About the Inequality Constraints

Toward the optimal solution, each inequality constraint either becomes irrelevant or effectively turns into an equality constraint. If the optimal solution does not lie on the boundary defined by a constraint, i.e. the constraint holds with strict inequality at the optimum, the constraint is said to be ‘inactive’: the solution does not change whether or not the constraint is present, so it can essentially be removed before optimization. Otherwise the constraint is said to be ‘active’, meaning the solution lies on its boundary, and it can be treated as an equality constraint. This dichotomy is captured by the KKT complementary slackness condition in convex optimization.
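A small numerical sketch of the active/inactive distinction (again my own toy example, with made-up numbers):

```python
# Active vs. inactive constraints on a one-dimensional toy problem:
#   minimize (x - 2)^2   subject to   x <= 1   and   x >= -5
# The optimum is x* = 1: the constraint x <= 1 is active (holds with equality),
# while x >= -5 is inactive and can be dropped without changing the solution.
import cvxpy as cp

x = cp.Variable()
active = x <= 1            # tight at the optimum
inactive = x >= -5         # far from the optimum, has no effect

prob = cp.Problem(cp.Minimize(cp.square(x - 2)), [active, inactive])
prob.solve()
print("x* with both constraints:", x.value)        # ~1.0

# Dropping the inactive constraint leaves the solution unchanged.
prob2 = cp.Problem(cp.Minimize(cp.square(x - 2)), [active])
prob2.solve()
print("x* without the inactive one:", x.value)     # still ~1.0
```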

The Lagrange Multipliers

Therefore, let us look at the Lagrangian function again (the basic idea of the Lagrangian can be found here),

L(x, \alpha, \mu) = f_{0}(x) + \sum_{i=1}^{m} \alpha_{i}f_{i}(x) + \sum_{i=1}^{p} \mu_{i} h_{i}(x)

Notice that the Lagrange multipliers are closely related to whether the constraints are active or inactive. The KKT complementary slackness condition can be expressed as follows,

\alpha_{i}f_{i}(x^{*})=0.

If a constraint is inactive, i.e. f_{i}(x^{*}) < 0, then \alpha_{i} = 0; conversely, if \alpha_{i} > 0, then f_{i}(x^{*}) = 0 and the constraint is active.
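Complementary slackness can be checked numerically via the dual values a solver returns. A minimal sketch with cvxpy, on another made-up toy problem (the numbers are assumptions for illustration):

```python
# Numerical check of complementary slackness alpha_i * f_i(x*) = 0 on a toy problem:
#   minimize x1^2 + x2^2   subject to   x1 + x2 >= 1   and   x1 <= 5
import cvxpy as cp

x = cp.Variable(2)
c1 = x[0] + x[1] >= 1      # active at the optimum x* = (0.5, 0.5)
c2 = x[0] <= 5             # inactive at the optimum
prob = cp.Problem(cp.Minimize(cp.sum_squares(x)), [c1, c2])
prob.solve()

checks = [
    ("c1", c1, 1 - x.value[0] - x.value[1]),   # f_1(x) = 1 - x1 - x2 <= 0
    ("c2", c2, x.value[0] - 5),                # f_2(x) = x1 - 5      <= 0
]
for name, con, f_val in checks:
    alpha = float(con.dual_value)              # multiplier alpha_i >= 0
    print(f"{name}: alpha = {alpha:.4f}, f(x*) = {f_val:.4f}, product = {alpha * f_val:.6f}")
```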

Equivalent Problems

There are many ways of transforming the original problem, as pointed out by [1]. Here I list some.

The discussion above suggests that the following problem shares the same solution as the original one.

\min~f_{0}(x)

s.t.~f_{i}(x) = 0,~i \in \cal{M}

h_{i}(x) = 0, ~i=1,2,...,p

Note that \cal{M} is the index set of the active inequality constraints.
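To see this equivalence on the toy problem from the complementary slackness check above (still my own example): the inequality x_{1} + x_{2} \geq 1 is active at the optimum, so replacing it with the equality x_{1} + x_{2} = 1 does not change the solution.

```python
# The "active constraints as equalities" equivalence on a toy problem:
#   minimize x1^2 + x2^2   subject to   x1 + x2 >= 1
# The constraint is active at x* = (0.5, 0.5), so forcing x1 + x2 == 1
# yields the same solution.
import cvxpy as cp

x = cp.Variable(2)

orig = cp.Problem(cp.Minimize(cp.sum_squares(x)), [x[0] + x[1] >= 1])
orig.solve()
x_ineq = x.value.copy()

equiv = cp.Problem(cp.Minimize(cp.sum_squares(x)), [x[0] + x[1] == 1])
equiv.solve()
x_eq = x.value.copy()

print("inequality form:", x_ineq)   # ~[0.5, 0.5]
print("equality form:  ", x_eq)     # same solution
```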

Let us take one step further. Recall that the Lagrangian absorbs the constraints into the objective, so the above problem can be treated as an optimization free of explicit constraints. Strictly speaking, the optimum is a saddle point of the Lagrangian: we minimize over x and maximize over the multipliers,

\min_{x}\max_{\alpha, \mu}~f_{0}(x) + \sum_{i \in \cal{M}} \alpha_{i}f_{i}(x) + \sum_{i=1}^{p} \mu_{i} h_{i}(x),

and in practice the solution is found from the stationarity conditions of L.
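As a tiny worked example (my own, for illustration): take f_{0}(x) = x_{1}^{2} + x_{2}^{2} with a single active constraint f_{1}(x) = 1 - x_{1} - x_{2} = 0 and no equality constraints. Setting the derivatives of L(x, \alpha) = x_{1}^{2} + x_{2}^{2} + \alpha(1 - x_{1} - x_{2}) to zero gives

2x_{1} - \alpha = 0, \quad 2x_{2} - \alpha = 0, \quad 1 - x_{1} - x_{2} = 0,

so x_{1}^{*} = x_{2}^{*} = 1/2 and \alpha^{*} = 1, which is exactly the solution of \min~x_{1}^{2} + x_{2}^{2} subject to x_{1} + x_{2} \geq 1.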

Alternatively, we can move only part of the constraints into the objective while keeping the rest as explicit constraints; again this is a min-max over x and the corresponding multipliers,

\min_{x}\max_{\alpha \geq 0}~f_{0}(x) + \sum_{i=1}^{m} \alpha_{i}f_{i}(x)

s.t.~h_{i}(x) = 0, ~i=1,2,...,p
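A sketch of this idea on the earlier two-constraint toy problem, under the assumption that the optimal multiplier \alpha^{*} of the inequality is reused: dualize only the inequality, keep the equality explicit, and the same solution is recovered.

```python
# Partial-dualization sketch on a toy problem: dualize only the inequality
# constraint, keep the equality constraint explicit, and check that the
# solution is recovered when the optimal multiplier alpha* is plugged in.
import cvxpy as cp

x = cp.Variable(2)
f0 = cp.sum_squares(x)
ineq = 1 - x[0] - x[1] <= 0            # f_1(x) <= 0
eq = x[0] - 2 * x[1] == 0              # h_1(x) == 0

# Solve the original problem to obtain x* and the multiplier alpha* of the inequality.
orig = cp.Problem(cp.Minimize(f0), [ineq, eq])
orig.solve()
x_star = x.value.copy()
alpha_star = float(ineq.dual_value)

# The inequality moves into the objective with weight alpha*, the equality stays.
partial = cp.Problem(cp.Minimize(f0 + alpha_star * (1 - x[0] - x[1])), [eq])
partial.solve()

print("original x*:           ", x_star)
print("partially dualized x*: ", x.value)
```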

 
