This chapter summarizes the dual-scaling algorithm for conic optimization that solves (P) and (D). It assumes that both of these programs are bounded and feasible. Note that (PP) and (DD) are bounded and feasible, so they can be expressed in this form. For simplicity, the notation refers to semidefinite programming, but the algorithm generalizes to other cones.
Let $K = \{ X \in \Re^{n \times n} : X = X^T, \ X \succeq 0 \}$ be the cone of positive semidefinite matrices. The data $C, A_1, \ldots, A_m \in \Re^{n \times n}$ are given symmetric matrices, $b \in \Re^m$ is a given vector, and the variables are $X, S \in K$ and $y \in \Re^m$. The symbol $\succeq$ ($\succ$) means the matrix is positive semidefinite (definite). Following conventional notation, let
\[
\mathcal{A}X = \left[ A_1 \bullet X \;\; \cdots \;\; A_m \bullet X \right]^T
\qquad \mbox{and} \qquad
\mathcal{A}^T y = \sum_{i=1}^m y_i A_i ,
\]
where $A \bullet B = \mathrm{tr}\,(A^T B)$.
For a feasible point $(y, S)$, let $\bar{z} = C \bullet X$ for some feasible $X$, and consider the dual potential function
\[
\psi(y) = \rho \ln (\bar{z} - b^T y) - \ln \det S , \qquad (1)
\]
where $\rho$ is a fixed parameter.
The dual-scaling algorithm reduces this function: decreasing the first term reduces the duality gap, while the second term keeps $S$ in the interior of the positive semidefinite matrix cone.
The gradient of this potential function is
\[
\nabla \psi(y) = - \frac{\rho}{\bar{z} - b^T y} \, b + \mathcal{A}(S^{-1}) . \qquad (2)
\]
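To make the notation concrete, the following NumPy sketch evaluates the operators $\mathcal{A}$ and $\mathcal{A}^T$, the potential (1), and its gradient (2) for small dense data. It is only an illustration; the names (smat_op, dual_potential, AA, zbar) are placeholders and are not part of the DSDP5 API.

import numpy as np

def smat_op(AA, X):
    """A(X) = [A_1 . X, ..., A_m . X]^T for an array AA of symmetric matrices."""
    return np.array([np.tensordot(Ai, X) for Ai in AA])

def smat_op_T(AA, y):
    """A^T(y) = sum_i y_i A_i."""
    return np.tensordot(y, AA, axes=1)

def dual_potential(AA, b, C, y, zbar, rho):
    """Evaluate psi(y) and its gradient, equations (1) and (2)."""
    S = C - smat_op_T(AA, y)                      # dual slack matrix S = C - A^T(y)
    gap = zbar - b @ y                            # bound on the duality gap
    L = np.linalg.cholesky(S)                     # fails unless S is positive definite
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    Sinv = np.linalg.inv(S)
    psi = rho * np.log(gap) - logdet              # equation (1)
    grad = -(rho / gap) * b + smat_op(AA, Sinv)   # equation (2)
    return psi, grad, S, Sinv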
Beginning with a strictly feasible point $(y, S)$ and an upper bound $\bar{z}$, each iteration solves the following problem:
\[
\begin{array}{ll}
\mbox{minimize} & \nabla \psi(y)^T \Delta y \\
\mbox{subject to} & \left\| S^{-1/2} \left( \mathcal{A}^T \Delta y \right) S^{-1/2} \right\| \le \beta ,
\end{array}
\qquad (3)
\]
where $\beta$ is a positive constant. Denoting $\bar{\Delta} = \bar{z} - b^T y$, the step direction $\Delta y$ solves the linear system
\[
\left[ \, A_i \bullet \left( S^{-1} A_j S^{-1} \right) \, \right]_{i,j=1,\ldots,m} \Delta y
\;=\; \frac{\rho}{\bar{\Delta}} \, b - \mathcal{A}(S^{-1}) . \qquad (4)
\]
The left-hand side of this linear system is a Gram matrix (of the matrices $S^{-1/2} A_i S^{-1/2}$) and is positive definite when $S \succ 0$ and the $A_i$'s are linearly independent. In this manuscript, it will sometimes be referred to as $M$.
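For small dense problems, $M$ and the right-hand side of (4) can be formed directly, as in the sketch below. This is illustrative only and does not reflect how DSDP5 exploits sparsity or low-rank structure; the name step_direction is hypothetical.

import numpy as np

def step_direction(AA, b, Sinv, gap, rho):
    """Form the Gram matrix M and solve the linear system (4) for dy.

    AA   : array of shape (m, n, n) holding the symmetric data matrices A_i
    Sinv : inverse of the dual slack matrix S
    gap  : zbar - b^T y
    """
    m = len(AA)
    W = np.array([Sinv @ Ai @ Sinv for Ai in AA])         # S^{-1} A_j S^{-1}
    # M_ij = A_i . (S^{-1} A_j S^{-1}); symmetric and positive definite
    M = np.array([[np.tensordot(AA[i], W[j]) for j in range(m)]
                  for i in range(m)])
    rhs = (rho / gap) * b - np.array([np.tensordot(Ai, Sinv) for Ai in AA])
    dy = np.linalg.solve(M, rhs)
    return dy, M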
Using the ellipsoidal constraint, the minimal solution of (3) is the positive multiple
\[
\frac{\beta}{\sqrt{\Delta y^T M \, \Delta y}} \; \Delta y
\]
of the step direction $\Delta y$ from (4).
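For completeness, the identification of the constraint in (3) with the norm induced by $M$ can be written out as follows:
\[
\left\| S^{-1/2} \left( \mathcal{A}^T \Delta y \right) S^{-1/2} \right\|^2
= \sum_{i,j} \Delta y_i \, \Delta y_j \, A_i \bullet \left( S^{-1} A_j S^{-1} \right)
= \Delta y^T M \, \Delta y ,
\]
so minimizing the linear objective $\nabla \psi(y)^T \Delta y$ over the ellipsoid $\Delta y^T M \Delta y \le \beta^2$ simply scales the solution of $M \Delta y = -\nabla \psi(y)$ onto the boundary of the ellipsoid.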
Practically, the algorithm then selects a step size $\alpha$ such that $S = C - \mathcal{A}^T (y + \alpha \Delta y)$ is positive definite and the potential function is reduced.
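A simple backtracking rule illustrates this step-size selection. DSDP5 uses its own heuristics, so the routine below (with the hypothetical name line_search) is only a sketch of the idea.

import numpy as np

def line_search(AA, b, C, y, dy, zbar, rho, alpha0=1.0, shrink=0.5, max_tries=30):
    """Backtrack from alpha0 until S stays positive definite and psi decreases."""
    def try_point(yt):
        S = C - np.tensordot(yt, AA, axes=1)      # S = C - A^T(y)
        gap = zbar - b @ yt
        if gap <= 0:
            return None
        try:
            L = np.linalg.cholesky(S)             # fails if S is not positive definite
        except np.linalg.LinAlgError:
            return None
        logdet = 2.0 * np.sum(np.log(np.diag(L)))
        return rho * np.log(gap) - logdet         # potential (1)
    psi0 = try_point(y)
    alpha = alpha0
    for _ in range(max_tries):
        psi1 = try_point(y + alpha * dy)
        if psi1 is not None and psi1 < psi0:
            return alpha
        alpha *= shrink
    return 0.0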
To find a feasible point $X$, we solve the least squares problem
\[
\begin{array}{ll}
\mbox{minimize}_{X} & \left\| \dfrac{\rho}{\bar{\Delta}} \, S^{1/2} X S^{1/2} - I \right\| \\
\mbox{subject to} & \mathcal{A}X = b .
\end{array}
\qquad (5)
\]
The answer to (5) is given explicitly by
\[
X = \frac{\bar{\Delta}}{\rho} \left( S^{-1} + S^{-1} \left( \mathcal{A}^T \Delta y \right) S^{-1} \right) . \qquad (6)
\]
Computing this matrix at each iteration is expensive and unnecessary. The next step direction only requires a new upper bound $\bar{z}$, which can be set to $C \bullet X$ if $X \succeq 0$.
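The sketch below forms the matrix in (6), verifies $\mathcal{A}X = b$, and tests whether $C \bullet X$ may serve as the new upper bound. The names (primal_estimate, AA, dy) are again illustrative, and the explicit construction of $X$ is exactly what the text says a practical implementation avoids.

import numpy as np

def primal_estimate(AA, b, C, Sinv, dy, gap, rho):
    """Build the primal estimate X of equation (6) and test the update of zbar."""
    nu = gap / rho
    AtDy = np.tensordot(dy, AA, axes=1)               # A^T(dy)
    X = nu * (Sinv + Sinv @ AtDy @ Sinv)              # equation (6)
    # Feasibility check: A(X) should equal b up to rounding error
    residual = np.array([np.tensordot(Ai, X) for Ai in AA]) - b
    # If X is positive semidefinite, C . X is a valid new upper bound zbar
    eigs = np.linalg.eigvalsh(X)
    new_zbar = float(np.tensordot(C, X)) if eigs.min() >= 0 else None
    return X, residual, new_zbar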
Alternatively, consider a barrier parameter $\nu > 0$ and the problem
\[
\begin{array}{ll}
\mbox{minimize} & C \bullet X - \nu \ln \det X \\
\mbox{subject to} & \mathcal{A}X = b, \quad X \succ 0 .
\end{array}
\qquad (7)
\]
The first-order KKT conditions are $\mathcal{A}X = b$, $\mathcal{A}^T y + S = C$, and $S X = \nu I$, and the corresponding Newton equations are
\[
\begin{array}{rcl}
\mathcal{A}(\Delta X) & = & b - \mathcal{A}X , \\
\mathcal{A}^T (\Delta y) + \Delta S & = & 0 , \\
\nu S^{-1} \Delta S \, S^{-1} + \Delta X & = & \nu S^{-1} - X .
\end{array}
\qquad (8)
\]
The Schur complement of these equations is (4) if we set $\nu = \bar{\Delta} / \rho$, and the resulting matrix $X + \Delta X = \nu \left( S^{-1} + S^{-1} \left( \mathcal{A}^T \Delta y \right) S^{-1} \right)$, which is exactly (6), satisfies the constraint $\mathcal{A}X = b$.
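The reduction from (8) to (4) can be written out in a line of algebra, consistent with the equations above:
\[
\Delta S = - \mathcal{A}^T (\Delta y), \qquad
\Delta X = \nu S^{-1} - X + \nu S^{-1} \left( \mathcal{A}^T \Delta y \right) S^{-1} ,
\]
\[
\mathcal{A}(\Delta X) = b - \mathcal{A}X
\;\; \Longrightarrow \;\;
\nu \left[ A_i \bullet \left( S^{-1} A_j S^{-1} \right) \right] \Delta y = b - \nu \, \mathcal{A}(S^{-1}) ,
\]
which is (4) after dividing by $\nu = \bar{\Delta} / \rho$.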
Given a feasible starting point and appropriate choices for the step length and the parameter $\rho$, convergence results in [6] show that either the new dual point $(y, S)$ or the new primal-dual point $(X, y, S)$ is feasible and reduces the Tanabe-Todd-Ye primal-dual potential function
\[
\Psi(X, S) = \rho \ln (X \bullet S) - \ln \det X - \ln \det S
\]
enough to achieve linear convergence. When $\rho > n$, the infimum of the potential function occurs at an optimal solution.
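The claim about $\rho > n$ follows from a standard arithmetic-geometric mean bound, recorded here for completeness:
\[
\ln \det (X S) \le n \ln \left( \frac{X \bullet S}{n} \right)
\;\; \Longrightarrow \;\;
\Psi(X, S) \ge (\rho - n) \ln (X \bullet S) + n \ln n ,
\]
so when $\rho > n$ the potential $\Psi(X, S)$ tends to $-\infty$ only if the duality gap $X \bullet S$ tends to zero.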
A more thorough explanation of the dual-scaling algorithm and its convergence properties can be found in [6].