Quadratic constrained quadratic programming

From Cornell University Computational Optimization Open Textbook - Optimization Wiki

==Introduction==

==Algorithm Discussion==


==Numerical Example 1 (KKT Approach)==

Consider a small '''Quadratically Constrained Quadratic Programming (QCQP)''' problem with a quadratic objective <math>f_0(x)</math> and two quadratic inequality constraints, <math>f_1(x) \leq 0</math> (Constraint 1) and <math>f_2(x) \leq 0</math> (Constraint 2), in the variables <math>x_1</math> and <math>x_2</math>. We will solve this QCQP problem using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary conditions for a solution in nonlinear programming to be optimal, given certain regularity conditions.

=== Step 1: Formulate the Lagrangian ===

The Lagrangian combines the objective function and the constraints, each multiplied by a Lagrange multiplier <math>\lambda_i \geq 0</math>:

<math>
\mathcal{L}(x, \lambda) = f_0(x) + \lambda_1 f_1(x) + \lambda_2 f_2(x).
</math>

For each constraint, the KKT conditions require:

- Complementary Slackness: <math>\lambda_i f_i(x) = 0, \quad i = 1, 2.</math>

- Primal Feasibility: <math>f_i(x) \leq 0, \quad i = 1, 2.</math>

=== Step 2: Compute the Gradient of the Lagrangian ===

Compute the partial derivatives of the Lagrangian with respect to <math>x_1</math> and <math>x_2</math>.

=== Step 3: Stationarity Conditions ===

Set both partial derivatives to zero. The derivative with respect to <math>x_1</math> gives '''Equation (1)''', and the derivative with respect to <math>x_2</math> gives '''Equation (2)'''. From Equation (2) it follows that

<math>
x_2 = 0.
</math>

Substituting <math>x_2 = 0</math> into the constraints and combining them gives the feasible range for <math>x_1</math>.

=== Step 4: Solve the Problem Using Equation (1) ===

Substitute <math>x_2 = 0</math> into '''Equation (1)'''. Assume <math>\lambda_1 > 0</math> (since Constraint 1 is active) and solve for <math>x_1</math>; the resulting value lies within the feasible range from Step 3, so it is acceptable. Assume <math>\lambda_2 = 0</math>, because Constraint 2 is not active at this point.

=== Step 5: Verify Complementary Slackness ===

- Constraint 1: <math>f_1(x^*) = 0</math>, so <math>\lambda_1 f_1(x^*) = 0.</math>

- Constraint 2: <math>\lambda_2 = 0</math>, so <math>\lambda_2 f_2(x^*) = 0.</math>

=== Step 6: Verify Primal Feasibility ===

- Constraint 1: <math>f_1(x^*) = 0 \leq 0.</math>

- Constraint 2: <math>f_2(x^*) < 0.</math>

=== Step 7: Conclusion ===

- Optimal Solution: <math>x_1^* = 1, \quad x_2^* = 0.</math>

- Minimum Objective Value:

<math>
f_0^*(x) = (1 - 2)^2 + 0 = 1.
</math>
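
The KKT conditions above can also be checked numerically. This revision does not restate the problem data for Example 1, so the short Python sketch below assumes an illustrative instance that is consistent with the reported optimum (<math>x_2^* = 0</math>, <math>f_0^* = 1</math>): <math>f_0(x) = (x_1 - 2)^2 + x_2^2</math>, <math>f_1(x) = x_1^2 + x_2^2 - 1</math>, and an inactive second constraint <math>f_2(x) = (x_1 - 1)^2 + x_2^2 - 4</math>. The function <code>check_kkt</code> is a hypothetical helper written for this sketch, not part of any library.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical problem data (this revision does not restate Example 1's data):
#   minimize  f0(x) = (x1 - 2)^2 + x2^2
#   s.t.      f1(x) = x1^2 + x2^2 - 1        <= 0   (active at the optimum)
#             f2(x) = (x1 - 1)^2 + x2^2 - 4  <= 0   (inactive at the optimum)
f = [lambda x: (x[0] - 2)**2 + x[1]**2,
     lambda x: x[0]**2 + x[1]**2 - 1,
     lambda x: (x[0] - 1)**2 + x[1]**2 - 4]
grad = [lambda x: np.array([2*(x[0] - 2), 2*x[1]]),
        lambda x: np.array([2*x[0], 2*x[1]]),
        lambda x: np.array([2*(x[0] - 1), 2*x[1]])]

def check_kkt(x, lam, tol=1e-8):
    """Verify stationarity, primal/dual feasibility, and complementary slackness."""
    stationarity = grad[0](x) + lam[0]*grad[1](x) + lam[1]*grad[2](x)
    primal = all(f[i](x) <= tol for i in (1, 2))                 # f_i(x) <= 0
    dual = all(l >= -tol for l in lam)                           # lambda_i >= 0
    comp = all(abs(lam[i - 1]*f[i](x)) <= tol for i in (1, 2))   # lambda_i f_i(x) = 0
    return bool(np.allclose(stationarity, 0.0, atol=tol) and primal and dual and comp)

x_star = np.array([1.0, 0.0])     # candidate from the KKT analysis (x2* = 0)
lam_star = np.array([1.0, 0.0])   # lambda_1 > 0 (Constraint 1 active), lambda_2 = 0
print(check_kkt(x_star, lam_star))   # True
</syntaxhighlight>
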
== Numerical Example 2 (SDP-Based QCQP) ==
Consider the following '''Quadratically Constrained Quadratic Programming (QCQP)''' problem:
<math> \begin{aligned} \text{minimize} \quad & f_0(x) = x_1^2 + x_2^2 \\ \text{subject to} \quad & f_1(x) = x_1^2 + x_2^2 - 2 \leq 0, \\ & f_2(x) = -x_1 x_2 + 1 \leq 0. \end{aligned} </math>
Interpretation:
The objective <math>f_0(x) = x_1^2 + x_2^2</math> is the squared distance from the origin. We seek a point in the feasible region that is as close as possible to the origin.
The constraint <math>f_1(x) = x_1^2 + x_2^2 - 2 \leq 0</math> restricts <math>(x_1, x_2)</math> to lie inside or on a circle of radius <math>\sqrt{2}</math>.
The constraint <math>f_2(x) = -x_1 x_2 + 1 \leq 0 \implies x_1 x_2 \geq 1</math> defines a hyperbolic region. To satisfy <math>x_1 x_2 \geq 1</math>, both variables must be sufficiently large in magnitude and have the same sign.
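
As a quick numerical illustration of this feasible region (a minimal sketch added here, not part of the original example), the snippet below evaluates <math>f_0</math>, <math>f_1</math>, and <math>f_2</math> at a few test points:

<syntaxhighlight lang="python">
import numpy as np

def f0(x): return x[0]**2 + x[1]**2
def f1(x): return x[0]**2 + x[1]**2 - 2      # must be <= 0 (inside the circle of radius sqrt(2))
def f2(x): return -x[0]*x[1] + 1             # must be <= 0 (equivalently x1*x2 >= 1)

for p in [np.array([1.0, 1.0]),      # same sign and large enough: feasible
          np.array([0.5, 0.5]),      # x1*x2 = 0.25 < 1: violates f2 <= 0
          np.array([-1.0, 1.0])]:    # opposite signs: violates f2 <= 0
    print(p, f0(p), f1(p) <= 0, f2(p) <= 0)
</syntaxhighlight>
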
=== Step 1: Lifting and Reformulation ===
Introduce the lifted variable:
<math> x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad X = x x^T = \begin{pmatrix} x_1^2 & x_1 x_2 \\ x_1 x_2 & x_2^2 \end{pmatrix}. </math>
If <math>X = x x^T</math>, then <math>X \succeq 0</math> (positive semidefinite) and <math>X</math> is rank-1.
Rewrite the objective and constraints in terms of <math>X</math>:
Objective: <math>x_1^2 + x_2^2 = \langle I, X \rangle</math>, where <math>I</math> is the 2x2 identity matrix.
Constraint 1: <math>x_1^2 + x_2^2 - 2 \leq 0 \implies \langle I, X \rangle - 2 \leq 0.</math>
Constraint 2: <math>-x_1 x_2 + 1 \leq 0 \implies X_{12} \geq 1.</math>
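
To see that the lifted quantities reproduce the original quadratic terms, here is a minimal NumPy check (illustrative only, for an arbitrary point <math>x</math>):

<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 1.0])     # any candidate point
X = np.outer(x, x)           # lifted matrix X = x x^T (PSD and rank-1)

print(np.trace(X), x[0]**2 + x[1]**2)     # <I, X> equals x1^2 + x2^2
print(X[0, 1], x[0]*x[1])                 # X_12 equals x1*x2
print(np.linalg.matrix_rank(X))           # 1
</syntaxhighlight>
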
=== Step 2: SDP Relaxation ===
The lifted problem is equivalent to the original QCQP but remains non-convex because of the rank-1 condition on <math>X</math>. Relax the problem by dropping the rank constraint and keeping only <math>X \succeq 0</math>:
<math> \begin{aligned} \text{minimize} \quad & \langle I, X \rangle \\ \text{subject to} \quad & \langle I, X \rangle - 2 \leq 0, \\ & X_{12} \geq 1, \\ & X \succeq 0. \end{aligned} </math>
This is now a Semidefinite Program (SDP), which is convex and can be solved using standard SDP solvers.
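
For concreteness, one way to solve this relaxation is with a modeling tool such as CVXPY (an assumption of this sketch; the example only requires some standard SDP solver):

<syntaxhighlight lang="python">
import cvxpy as cp

# SDP relaxation: minimize <I, X> subject to <I, X> <= 2, X_12 >= 1, X PSD
X = cp.Variable((2, 2), symmetric=True)
constraints = [cp.trace(X) <= 2,   # <I, X> - 2 <= 0
               X[0, 1] >= 1,       # X_12 >= 1
               X >> 0]             # X positive semidefinite
problem = cp.Problem(cp.Minimize(cp.trace(X)), constraints)
problem.solve()

print(problem.value)   # 2.0 (up to solver tolerance)
print(X.value)         # approximately [[1, 1], [1, 1]]
</syntaxhighlight>
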
=== Step 3: Solving the SDP and Recovering the Solution ===
Solving the SDP, we find a feasible solution <math>X^*</math> that achieves the minimum:
<math> X^* = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}. </math>
Check that <math>X^*</math> is rank-1:
<math> X^* = \begin{pmatrix}1 \\ 1\end{pmatrix} \begin{pmatrix}1 & 1\end{pmatrix} = x^*(x^*)^T, </math>
with <math>x^* = (1, 1)</math>.
Thus, from <math>X^*</math> we recover the solution <math>x^* = (1, 1)</math> for the original QCQP.
Check feasibility:
<math>x_1^2 + x_2^2 = 1 + 1 = 2 \implies f_1(x^*) = 0 \leq 0.</math>
<math>x_1 x_2 = 1 \implies f_2(x^*) = -1 + 1 = 0 \leq 0.</math>
All constraints are satisfied.
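
The rank-one factorization and the feasibility check can also be reproduced numerically; the NumPy sketch below (illustrative only) recovers <math>x^*</math> from <math>X^*</math> via an eigendecomposition:

<syntaxhighlight lang="python">
import numpy as np

X_star = np.array([[1.0, 1.0],
                   [1.0, 1.0]])

# Rank-one recovery: X* = x* (x*)^T, so x* = sqrt(lambda_max) * v_max
w, V = np.linalg.eigh(X_star)          # eigenvalues in ascending order
x_star = np.sqrt(w[-1]) * V[:, -1]
if x_star[0] < 0:                      # +/- x give the same X; pick the (1, 1) branch
    x_star = -x_star

print(np.linalg.matrix_rank(X_star))          # 1
print(x_star)                                 # [1. 1.]
print(x_star @ x_star - 2 <= 1e-9)            # f1(x*) <= 0
print(-x_star[0]*x_star[1] + 1 <= 1e-9)       # f2(x*) <= 0
print(x_star @ x_star)                        # objective value 2.0
</syntaxhighlight>
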
=== Step 4: Optimal Value ===
The optimal objective value is: <math>f_0^*(x) = x_1^{*2} + x_2^{*2} = 1 + 1 = 2.</math>
=== Conclusion ===
The SDP relaxation not only provides a lower bound but also recovers the global optimum for this particular QCQP. The optimal solution is <math>(x_1^*, x_2^*) = (1, 1)</math> with an objective value of <math>2</math>.
This example demonstrates how an SDP relaxation can be used effectively to solve a non-convex QCQP and, in some cases, recover the exact optimal solution.


==Application==


==Conclusion==

==Reference==
