An Effective Algorithm for Quadratic Optimization with Non-Convex Inhomogeneous Quadratic Constraints
1. Introduction
Consider the following quadratic optimization problem with non-convex inhomogeneous quadratic constraints:
(1)
where the matrices involved are symmetric positive semidefinite and the remaining data define the inhomogeneous (linear) terms. Note that in the trivial case the problem (1) is easily solved, so this case is excluded in what follows.
In general, this problem is NP-hard [1] [2] [3]. It has many applications in telecommunications, robust control, portfolio selection, and so on, which makes effective solution methods valuable.
Building on the work of Lovász and Schrijver [4], Shor [5], and others, it is known that certain NP-hard combinatorial optimization problems can be approximated by semidefinite programming (SDP) problems, for which efficient solution methods exist [6] [7]. Motivated by this, many researchers have put forward algorithms for quadratic optimization problems based on the semidefinite programming (SDP) relaxation. Recently, several results on solving different forms of quadratic problems have appeared; they can be summarized as follows.
• If all the constraint matrices are symmetric positive semidefinite matrices with positive definite sum and A is an arbitrary symmetric matrix, A. Nemirovski [8] produces a feasible solution such that, with constant probability,
(2)
• In the setting considered by Luo et al. [9], the ratio between the original optimal value and the SDP relaxation optimal value is bounded; that is, we have
(3)
• If all the constraint matrices are symmetric, and two or more of them are indefinite, S. He et al. [10] compute a feasible solution such that
(4)
• Of special interest is the case of ellipsoid constraints
(5)
where $\|\cdot\|$ denotes the Euclidean norm. Nemirovski [8] showed that if all the matrices involved are positive semidefinite, then a feasible solution can be generated from (SDP) satisfying
(6)
• In particular, if (1) has ball constraints, Ye and Zhang (Corollary 2.6 in [11]) showed that a feasible x satisfying
(7)
can be found in polynomial time.
• Ye [12] extended the above result to allow the ellipsoids not to have a common center, under an additional assumption, and showed that a feasible solution can be randomly generated such that
(8)
However, the existing algorithms address discrete or continuous problems that are mostly based on homogeneous constraints or on inhomogeneous convex constraints. For the quadratic optimization problem with non-convex inhomogeneous quadratic constraints, no very effective algorithm is available. This paper proposes a new effective algorithm to solve this problem.
This paper is organized as follows. In Section 2, we present a semidefinite programming (SDP) relaxation of (1). In Section 3, we propose a new effective algorithm to construct a feasible solution of the quadratic optimization problem (1) with non-convex inhomogeneous quadratic constraints. Finally, some conclusions and directions for future work are given in Section 4.
Notations. Throughout this paper, $\mathbb{R}^{n}$ denotes the $n$-dimensional real vector space and $\mathcal{S}^{n}_{+}$ the space of $n \times n$ positive semidefinite symmetric matrices. $A \succeq 0$ denotes that $A$ is positive semidefinite, and $\operatorname{tr}(A)$ represents the trace of a matrix $A$. The inner product of two matrices $A$ and $B$ is denoted by $\langle A, B \rangle = \operatorname{tr}(A^{\mathrm{T}}B)$. $\Pr\{\cdot\}$ stands for probability.
2. Semidefinite Programming (SDP) Relaxation
In this section, we present a semidefinite programming (SDP) relaxation of (1). To avoid trivial cases, we first make the following assumption.
Assumption. Problem (1) has a feasible solution.
Assume $t$ is a constant satisfying the appropriate bound. Then (1) is equivalent to:
(9)
Let the global optimal solution of the above problem be given, and denote its objective value accordingly. The associated lifted matrix is assumed to have the following block structure:
(10)
where
(11)
By introducing the lifted matrix variable above and dropping the rank-one constraint, the semidefinite programming relaxation of (9) can be written as follows.
(SDP)
An optimal solution of the SDP relaxation (SDP) can be computed efficiently using, say, interior-point methods; see [13] and the references therein.
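To make this step concrete, the following is a minimal sketch of how such a relaxation could be solved with CVXPY in Python. Since the exact form of (1) and of (SDP) is not reproduced above, the constraint form $x^{\mathrm T}A_i x + 2b_i^{\mathrm T}x \le c_i$, the lifted block variable, and the maximization direction are assumptions made only for illustration; the function name solve_sdp_relaxation is hypothetical.

```python
# A minimal sketch, not the paper's exact relaxation: the problem form
#   max x^T A x  s.t.  x^T A_i x + 2 b_i^T x <= c_i,  i = 1, ..., m
# and the lifted variable Y = [[X, x], [x^T, 1]] >= 0 are assumptions.
import numpy as np
import cvxpy as cp

def solve_sdp_relaxation(A, A_list, b_list, c_list):
    """Solve the SDP obtained by lifting to Y >= 0 and dropping rank(Y) = 1."""
    n = A.shape[0]
    Y = cp.Variable((n + 1, n + 1), PSD=True)   # lifted block variable
    X, x = Y[:n, :n], Y[:n, n]                  # blocks playing the roles of X and x
    cons = [Y[n, n] == 1]                       # bottom-right entry fixed to 1
    for Ai, bi, ci in zip(A_list, b_list, c_list):
        cons.append(cp.trace(Ai @ X) + 2 * bi @ x <= ci)
    prob = cp.Problem(cp.Maximize(cp.trace(A @ X)), cons)
    prob.solve()                                # interior-point / conic solver
    return Y.value, prob.value
```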
3. Algorithm
In this section, we present an effective algorithm for solving (1). The algorithm is divided into two stages. The main idea is as follows: the first stage produces a solution that satisfies the first constraint of problem (9); the second stage then makes a small change to the solution obtained in the first stage, which yields a solution of (9). The randomization algorithm is set out below; a minimal sketch of the two-stage scheme follows this paragraph.
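The sketch below only outlines the two-stage structure, assuming the SDP solution is available as a lifted matrix; the helper names gaussian_rounding and shrink_to_feasible are hypothetical and are sketched later in this section.

```python
# A minimal driver for the two-stage idea; helper names are hypothetical.
def two_stage_solution(Y_opt, constraints, num_trials=100):
    """Stage 1: randomized rounding of the SDP solution so that the first
    constraint of (9) holds; Stage 2: a small modification of that point so
    that the remaining constraints hold as well."""
    x1 = gaussian_rounding(Y_opt, constraints, num_trials)   # first stage (Section 3.1)
    x2 = shrink_to_feasible(x1, constraints)                 # second stage (Section 3.2)
    return x2
```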
3.1. The First Stage
The first stage of the algorithm uses the randomization algorithm proposed by Luo et al. [3]. We first need to obtain an optimal solution of (SDP), and then construct a feasible solution for the first constraint of problem (9) using the following randomization procedure:
First, it can be easily verified that the constructed point satisfies the first constraint of problem (9).
(12)
(12) is equivalent to:
(13)
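For illustration, here is a minimal sketch of a generic Gaussian randomization step in the spirit of the procedure of Luo et al.; since the exact steps and scaling are not reproduced above, the acceptance test and the constraint interface (a list of callables) are assumptions, and gaussian_rounding is a hypothetical name.

```python
# A generic Gaussian randomization sketch; not the paper's exact procedure.
import numpy as np

def gaussian_rounding(Y_opt, constraints, num_trials=100, rng=None):
    """Draw samples xi ~ N(0, X*) from the upper-left block X* of the lifted
    SDP solution and keep one that satisfies the first constraint of (9)."""
    rng = np.random.default_rng() if rng is None else rng
    n = Y_opt.shape[0] - 1
    X_star = Y_opt[:n, :n]                 # upper-left block of the lifted variable
    first_constraint = constraints[0]      # assumed callable: returns True if satisfied
    for _ in range(num_trials):
        xi = rng.multivariate_normal(np.zeros(n), X_star)
        if first_constraint(xi):
            return xi
    return None                            # no acceptable sample found
```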
Lemma 1. For the point generated in step 2 of the randomization procedure, we have that
(14)
Proof. By step 2, we first have
(15)
where the vector indicated has a single element equal to 1 and all the other elements equal to 0. By introducing the corresponding notation, we obtain that
(16)
By using the total probability formula for the last term in (16), we have
(17)
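For reference, the total probability formula used here, stated for an event $E$ and a partition $B_1,\dots,B_k$ of the sample space $\Omega$, reads
\[
\Pr\{E\} \;=\; \sum_{j=1}^{k} \Pr\{E \mid B_j\}\,\Pr\{B_j\},
\qquad \bigcup_{j=1}^{k} B_j = \Omega,\quad B_i \cap B_j = \emptyset \ (i \neq j).
\]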
By Lemma 3.1 and Lemma 3.2 in [10] ,
(18)
Since the computed solution is feasible for (SDP), the corresponding constraint inequalities hold for all indices, and therefore
(19)
According to Lemma 1 in [9] , we have
(20)
Thus, it follows from (17), (18), (19) and (20) that:
(21)
The proof is completed by setting the constant accordingly.
Note that by Lemma 1 and (13), it can be concluded that
(22)
So there exists a constant such that, for any point generated in this way, the following is satisfied:
(23)
3.2. The Second Stage
In this part, we make a change to the solution constructed in the first stage so that it satisfies problem (9); in this stage, we make use of the algorithm in [14].
The procedure is as follows:
Define the function as above; it can then be seen as a quadratic function of a scalar parameter. Its symmetry axis is:
(24)
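For clarity, we recall the elementary fact used here, written with generic coefficients since the exact expression is not reproduced above: for a one-variable quadratic
\[
q(\alpha) = a\,\alpha^{2} + b\,\alpha + c, \qquad a \neq 0,
\]
the symmetry axis is located at
\[
\alpha^{*} = -\frac{b}{2a}, \qquad q(\alpha^{*}) = c - \frac{b^{2}}{4a},
\]
and $q(\alpha^{*})$ is the minimum value when $a > 0$ (the maximum when $a < 0$).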
Because this point cannot make the required inequalities hold for all indices, we introduce a parameter and use it to construct a new solution, which is a feasible solution of (1); a sketch of this step is given after this paragraph.
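The following is a minimal sketch of such a modification, assuming it amounts to scaling the first-stage point until every constraint of (9) holds and that the unscaled origin is feasible; the exact parameterization in the paper is not reproduced above, so the bisection below and the name shrink_to_feasible are assumptions for illustration.

```python
# A minimal sketch: scale the first-stage point toward the origin until all
# constraints hold; the assumption that the origin is feasible is ours.
def shrink_to_feasible(x1, constraints, tol=1e-8):
    """Return s * x1 for the largest s in [0, 1] (found by bisection) such
    that s * x1 satisfies every constraint (each given as a callable)."""
    def feasible(x):
        return all(c(x) for c in constraints)
    if feasible(x1):
        return x1                       # already feasible, nothing to do
    lo, hi = 0.0, 1.0                   # invariant: lo feasible, hi infeasible
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid * x1):
            lo = mid
        else:
            hi = mid
    return lo * x1
```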
When the parameter lies in the first range, the symmetry axis satisfies:
(25)
So the required inequalities can be made to hold for all indices. This is helpful for solving the problem, because in this case we only need to find a parameter value satisfying the remaining condition.
In the other case, since the matrices involved are symmetric, the corresponding expression simplifies. To simplify the writing, we introduce the following notations:
(26)
So
(27)
According to the norm inequality:
(28)
we have
(29)
Using (28) and (29), the last term in (27) can be simplified to
(30)
Whenever the stated condition holds, it can be easily checked that the required bound follows.
From (23), we can get
(31)
and we also have
(32)
Thus
(33)
Using (31) and (32), the last term in (33) can be simplified as
(34)
We analyze (34) as follows.
First, let
(35)
we can simplify (34) as
(36)
Since the relevant condition holds, we know that the first quantity is an increasing function of the parameter. However, the second quantity depends on the parameter and on the number of constraints. According to simple calculations, we find that when the parameter satisfies the first condition, this quantity is also an increasing function of the parameter, and in this situation the required bound holds as well. When the parameter lies in the other range, the quantity becomes smaller as the parameter increases.
where
(37)
we can write (34) as a piecewise function of the two quantities above:
(38)
So it can be easily verified that
(39)
is a feasible solution of the original problem.
4. Conclusions
The quadratic optimization problem with non-convex inhomogeneous quadratic constraints is NP-hard, and no effective algorithm for solving it was previously available. In this paper, we put forward an effective algorithm through which a feasible solution of (1) can be obtained, so that many practical problems of this form can be handled. Transforming the original problem into (SDP) is an important step in solving the problem: we give the semidefinite programming (SDP) relaxation of (1) in Section 2 and then, in Section 3, propose an effective algorithm that constructs a feasible solution of (1).
In the future, we will carry out the following work: analyze the quality of the feasible solution of (1), give some numerical experiments to verify it, and consider the problem with an inhomogeneous objective function. For that problem, we aim to find a solution algorithm built on the effective algorithm put forward in this paper.