Quadratic constrained quadratic programming

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
Revision as of 01:08, 7 December 2024

Introduction

Algorithm Discussion

Numerical Example 1 (KKT Approach)

Consider the following Quadratically Constrained Quadratic Programming (QCQP) problem to gain a better understanding:

$ \begin{aligned} \text{minimize} \quad & f_0(x) = (x_1 - 2)^2 + x_2^2 \\ \text{subject to} \quad & f_1(x) = x_1^2 + x_2^2 - 1 \leq 0, \\ & f_2(x) = (x_1 - 1)^2 + x_2^2 - 1 \leq 0. \end{aligned} $

We will solve this QCQP problem using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary conditions for a solution in nonlinear programming to be optimal, given certain regularity conditions.

Step 1: Formulate the Lagrangian

The Lagrangian $ L $ combines the objective function and the constraints, each multiplied by a Lagrange multiplier $ \lambda_i $:

$ L(x, \lambda_1, \lambda_2) = (x_1 - 2)^2 + x_2^2 + \lambda_1 (x_1^2 + x_2^2 - 1) + \lambda_2 \left( (x_1 - 1)^2 + x_2^2 - 1 \right). $

For each constraint, the KKT conditions also require:

- Dual Feasibility and Complementary Slackness:

 $    \lambda_i \geq 0, \quad \lambda_i f_i(x) = 0, \quad \text{for } i = 1, 2.    $

- Primal Feasibility:

 $    f_i(x) \leq 0 \quad \text{for } i = 1, 2.    $

Step 2: Compute the Gradient of the Lagrangian

Compute the partial derivatives with respect to $ x_1 $ and $ x_2 $:

- Partial Derivative with respect to $ x_1 $:

 $    \frac{\partial L}{\partial x_1} = 2(x_1 - 2) + 2\lambda_1 x_1 + 2\lambda_2 (x_1 - 1).    $

- Partial Derivative with respect to $ x_2 $:

 $    \frac{\partial L}{\partial x_2} = 2x_2 + 2\lambda_1 x_2 + 2\lambda_2 x_2.    $

Step 3: Stationarity Conditions

Set the gradients to zero:

- Equation (1):

 $    2(x_1 - 2) + 2\lambda_1 x_1 + 2\lambda_2 (x_1 - 1) = 0.    $

- Equation (2):

 $    2x_2 + 2\lambda_1 x_2 + 2\lambda_2 x_2 = 0.    $

From Equation (2), $ x_2 (1 + \lambda_1 + \lambda_2) = 0 $. Since $ \lambda_i \geq 0 $ for $ i = 1, 2 $ implies $ 1 + \lambda_1 + \lambda_2 \geq 1 > 0 $, it follows that:

$ x_2 = 0. $

Substitute $ x_2 = 0 $ into the constraints:

$ \begin{aligned} x_1^2 - 1 &\leq 0 \quad \Rightarrow \quad x_1 \in [-1, 1], \\ (x_1 - 1)^2 - 1 &\leq 0 \quad \Rightarrow \quad x_1 \in [0, 2]. \end{aligned} $

Combining both constraints:

$ x_1 \in [0, 1]. $

Step 4: Solve the Problem Using Equation (1)

Substitute $ x_2 = 0 $ into Equation (1) and divide through by 2:

$ (x_1 - 2) + \lambda_1 x_1 + \lambda_2 (x_1 - 1) = 0. $

Assume $ \lambda_1 > 0 $; then complementary slackness forces Constraint 1 to be active:

$ x_1^2 - 1 = 0 \quad \Rightarrow \quad x_1 = \pm 1. $

From the feasible range $ x_1 \in [0, 1] $, we take $ x_1 = 1 $.

At $ x_1 = 1 $, Constraint 2 gives $ (1 - 1)^2 + 0 - 1 = -1 < 0 $, so it is inactive and complementary slackness forces $ \lambda_2 = 0 $.

Substitute $ x_1 = 1 $ and $ \lambda_2 = 0 $ into the equation:

$ (1 - 2) + \lambda_1 (1) + 0 = 0 \quad \Rightarrow \quad \lambda_1 = 1. $

Since $ \lambda_1 = 1 > 0 $, this is consistent with the assumption that Constraint 1 is active.

Step 5: Verify Complementary Slackness

- Constraint 1:

 $    \lambda_1 (x_1^2 - 1) = 1 \times (1 - 1) = 0.    $

- Constraint 2:

 $    \lambda_2 \left( (x_1 - 1)^2 + x_2^2 - 1 \right) = 0 \times (-1) = 0.    $

Step 6: Verify Primal Feasibility

- Constraint 1:

 $    x_1^2 - 1 = 1 - 1 = 0 \leq 0.    $

- Constraint 2:

 $    (x_1 - 1)^2 + x_2^2 - 1 = -1 \leq 0.    $

Step 7: Conclusion

- Optimal Solution:

 $    x_1^* = 1, \quad x_2^* = 0.    $

- Minimum Objective Value:

 $    f_0^*(x) = (1 - 2)^2 + 0 = 1.    $
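The KKT certificate above can be checked numerically. The following sketch (plain NumPy; the variable names are illustrative, not part of the derivation) verifies stationarity, primal and dual feasibility, and complementary slackness at $ x^* = (1, 0) $ with $ \lambda_1 = 1 $, $ \lambda_2 = 0 $:

```python
import numpy as np

# Candidate primal point and multipliers from the KKT analysis
x1, x2 = 1.0, 0.0
lam1, lam2 = 1.0, 0.0

# Constraint values f_1(x) and f_2(x)
f1 = x1**2 + x2**2 - 1
f2 = (x1 - 1)**2 + x2**2 - 1

# Stationarity: gradient of the Lagrangian must vanish
dL_dx1 = 2*(x1 - 2) + 2*lam1*x1 + 2*lam2*(x1 - 1)
dL_dx2 = 2*x2 + 2*lam1*x2 + 2*lam2*x2

assert np.isclose(dL_dx1, 0) and np.isclose(dL_dx2, 0)    # stationarity
assert f1 <= 0 and f2 <= 0                                # primal feasibility
assert lam1 >= 0 and lam2 >= 0                            # dual feasibility
assert np.isclose(lam1*f1, 0) and np.isclose(lam2*f2, 0)  # complementary slackness

print((x1 - 2)**2 + x2**2)  # objective value: 1.0
```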

Numerical Example 2 (SDP-Based QCQP)

Consider the following Quadratically Constrained Quadratic Programming (QCQP) problem:

$ \begin{aligned} \text{minimize} \quad & f_0(x) = x_1^2 + x_2^2 \\ \text{subject to} \quad & f_1(x) = x_1^2 + x_2^2 - 2 \leq 0, \\ & f_2(x) = -x_1 x_2 + 1 \leq 0. \end{aligned} $

Interpretation:

- The objective $ f_0(x) = x_1^2 + x_2^2 $ is the squared distance from the origin. We seek a point in the feasible region that is as close as possible to the origin.
- The constraint $ f_1(x) = x_1^2 + x_2^2 - 2 \leq 0 $ restricts $ (x_1, x_2) $ to lie inside or on a circle of radius $ \sqrt{2} $.
- The constraint $ f_2(x) = -x_1 x_2 + 1 \leq 0 \implies x_1 x_2 \geq 1 $ defines a hyperbolic region. To satisfy $ x_1 x_2 \geq 1 $, both variables must be sufficiently large in magnitude and have the same sign.

Step 1: Lifting and Reformulation

Introduce the lifted variable:

$ x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad X = x x^T = \begin{pmatrix} x_1^2 & x_1 x_2 \\ x_1 x_2 & x_2^2 \end{pmatrix}. $

If $ X = x x^T $, then $ X \succeq 0 $ (positive semidefinite) and $ X $ is rank-1.

Rewrite the objective and constraints in terms of $ X $:

Objective: $ x_1^2 + x_2^2 = \langle I, X \rangle $, where $ I $ is the 2x2 identity matrix.

Constraint 1: $ x_1^2 + x_2^2 - 2 \leq 0 \implies \langle I, X \rangle - 2 \leq 0. $

Constraint 2: $ -x_1 x_2 + 1 \leq 0 \implies X_{12} \geq 1. $
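The correspondence between the original quadratics and the lifted matrix can be checked numerically. A small NumPy sketch (the test point is arbitrary, chosen only for illustration) confirms that $ \langle I, X \rangle $ and $ X_{12} $ reproduce the original terms, and that $ X = x x^T $ is PSD and rank-1:

```python
import numpy as np

x = np.array([1.7, -0.4])   # arbitrary test point
X = np.outer(x, x)          # lifted matrix X = x x^T

# <I, X> = trace(X) equals x_1^2 + x_2^2
assert np.isclose(np.trace(X), x[0]**2 + x[1]**2)

# X_12 equals the bilinear term x_1 x_2
assert np.isclose(X[0, 1], x[0]*x[1])

# X = x x^T is positive semidefinite and rank-1
eigvals = np.linalg.eigvalsh(X)
assert eigvals.min() >= -1e-12
assert np.linalg.matrix_rank(X) == 1
```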

Step 2: SDP Relaxation

The original QCQP is non-convex due to the rank-1 condition on $ X $. Relax the rank constraint and consider only $ X \succeq 0 $:

$ \begin{aligned} \text{minimize} \quad & \langle I, X \rangle \\ \text{subject to} \quad & \langle I, X \rangle - 2 \leq 0, \\ & X_{12} \geq 1, \\ & X \succeq 0. \end{aligned} $

This is now a Semidefinite Program (SDP), which is convex and can be solved using standard SDP solvers.

Step 3: Solving the SDP and Recovering the Solution

Solving the SDP, we find a feasible solution $ X^* $ that achieves the minimum:

$ X^* = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}. $

Check that $ X^* $ is rank-1:

$ X^* = \begin{pmatrix}1 \\ 1\end{pmatrix} \begin{pmatrix}1 & 1\end{pmatrix} = x^*(x^*)^T, $ with $ x^* = (1, 1) $.

Thus, from $ X^* $, we recover the solution $ x^* = (1, 1) $ for the original QCQP.

Check feasibility:

$ x_1^2 + x_2^2 = 1 + 1 = 2 \implies f_1(x^*) = 0 \leq 0. $

$ x_1 x_2 = 1 \implies f_2(x^*) = -1 + 1 = 0 \leq 0. $

All constraints are satisfied.
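The rank-1 factorization and the feasibility checks can be reproduced in a few lines of NumPy. Note that the eigenvector is recovered only up to sign, so this sketch may return $ -(1, 1) $ instead of $ (1, 1) $; by symmetry of the problem, both points are feasible and optimal:

```python
import numpy as np

X_star = np.array([[1.0, 1.0],
                   [1.0, 1.0]])

# Eigenvalues in ascending order: exactly one is nonzero, so X* is rank-1
eigvals, eigvecs = np.linalg.eigh(X_star)
assert np.isclose(eigvals[0], 0) and np.isclose(eigvals[1], 2)

# Recover x* from the leading eigenpair (up to sign): X* = x* (x*)^T
x_star = np.sqrt(eigvals[1]) * eigvecs[:, 1]
assert np.allclose(np.outer(x_star, x_star), X_star)

# Feasibility of the recovered point for the original QCQP
x1, x2 = x_star
assert np.isclose(x1**2 + x2**2 - 2, 0)  # f_1(x*) = 0
assert np.isclose(-x1*x2 + 1, 0)         # f_2(x*) = 0
```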

Step 4: Optimal Value

The optimal objective value is: $ f_0^*(x) = x_1^{*2} + x_2^{*2} = 1 + 1 = 2. $

Conclusion

The SDP relaxation not only provides a lower bound but also recovers the global optimum for this particular QCQP. The optimal solution is $ (x_1^*, x_2^*) = (1, 1) $ with an objective value of $ 2 $.

This example demonstrates how an SDP relaxation can be used effectively to solve a non-convex QCQP and, in some cases, recover the exact optimal solution.

Application

Conclusion

Reference