Quadratic constrained quadratic programming: Difference between revisions

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
==Algorithm Discussion==


==Numerical Example==
 
To build intuition, consider the following '''Quadratically Constrained Quadratic Programming (QCQP)''' problem:
 
<math>
\begin{aligned}
\text{minimize} \quad & f_0(x) = (x_1 - 2)^2 + x_2^2 \\
\text{subject to} \quad & f_1(x) = x_1^2 + x_2^2 - 1 \leq 0, \\
& f_2(x) = (x_1 - 1)^2 + x_2^2 - 1 \leq 0.
\end{aligned}
</math>
 
We will solve this QCQP problem using the '''Karush-Kuhn-Tucker (KKT) conditions''', which are necessary conditions for a solution in nonlinear programming to be optimal, given certain regularity conditions.
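Before working through the conditions by hand, the answer can be cross-checked numerically. The sketch below uses SciPy's SLSQP solver; the solver choice and starting point are illustrative assumptions, not part of the derivation.

```python
# Numerical cross-check of the QCQP with SciPy's SLSQP solver
# (an illustrative sketch, independent of the KKT derivation below).
import numpy as np
from scipy.optimize import minimize

f0 = lambda x: (x[0] - 2)**2 + x[1]**2

constraints = [
    # SciPy expects g(x) >= 0, so each f_i(x) <= 0 becomes -f_i(x) >= 0.
    {"type": "ineq", "fun": lambda x: 1 - x[0]**2 - x[1]**2},
    {"type": "ineq", "fun": lambda x: 1 - (x[0] - 1)**2 - x[1]**2},
]

res = minimize(f0, x0=np.array([0.5, 0.5]), method="SLSQP",
               constraints=constraints)
print(res.x, res.fun)  # expect approximately [1, 0] and 1
```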
 
=== Step 1: Formulate the Lagrangian ===
 
The Lagrangian <math>L</math> combines the objective function and the constraints, each multiplied by a Lagrange multiplier <math>\lambda_i</math>:
 
<math>
L(x, \lambda_1, \lambda_2) = (x_1 - 2)^2 + x_2^2 + \lambda_1 (x_1^2 + x_2^2 - 1) + \lambda_2 \left( (x_1 - 1)^2 + x_2^2 - 1 \right).
</math>
 
For each constraint, the KKT conditions also require:
 
- '''Dual Feasibility and Complementary Slackness''':
  <math>
  \lambda_i \geq 0, \quad \lambda_i f_i(x) = 0, \quad \text{for } i = 1, 2.
  </math>
 
- '''Primal Feasibility''':
  <math>
  f_i(x) \leq 0 \quad \text{for } i = 1, 2.
  </math>
 
=== Step 2: Compute the Gradient of the Lagrangian ===
 
Compute the partial derivatives with respect to <math>x_1</math> and <math>x_2</math>:
 
- '''Partial Derivative with respect to <math>x_1</math>''':
  <math>
  \frac{\partial L}{\partial x_1} = 2(x_1 - 2) + 2\lambda_1 x_1 + 2\lambda_2 (x_1 - 1).
  </math>
 
- '''Partial Derivative with respect to <math>x_2</math>''':
  <math>
  \frac{\partial L}{\partial x_2} = 2x_2 + 2\lambda_1 x_2 + 2\lambda_2 x_2.
  </math>
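These partial derivatives can be verified symbolically, for instance with SymPy (a sketch; the symbol names are ours):

```python
# Symbolic check of the Lagrangian's partial derivatives with SymPy.
import sympy as sp

x1, x2, l1, l2 = sp.symbols('x1 x2 lambda1 lambda2', real=True)
L = ((x1 - 2)**2 + x2**2
     + l1*(x1**2 + x2**2 - 1)
     + l2*((x1 - 1)**2 + x2**2 - 1))

dL_dx1 = sp.diff(L, x1)  # 2(x1 - 2) + 2*lambda1*x1 + 2*lambda2*(x1 - 1)
dL_dx2 = sp.diff(L, x2)  # 2*x2 + 2*lambda1*x2 + 2*lambda2*x2

print(sp.expand(dL_dx1))
print(sp.factor(dL_dx2))  # 2*x2*(lambda1 + lambda2 + 1)
```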
 
=== Step 3: Stationarity Conditions ===
 
Set the gradients to zero:
 
- '''Equation (1)''':
  <math>
  2(x_1 - 2) + 2\lambda_1 x_1 + 2\lambda_2 (x_1 - 1) = 0.
  </math>
 
- '''Equation (2)''':
  <math>
  2x_2 + 2\lambda_1 x_2 + 2\lambda_2 x_2 = 0.
  </math>
 
From '''Equation (2)''', <math>x_2 (1 + \lambda_1 + \lambda_2) = 0</math>. Since <math>\lambda_i \geq 0</math> for <math>i = 1, 2</math>, the factor <math>1 + \lambda_1 + \lambda_2 \geq 1</math> is strictly positive, so it follows that:
 
<math>
x_2 = 0.
</math>
 
Substitute <math>x_2 = 0</math> into the constraints:
 
<math>
\begin{aligned}
x_1^2 - 1 &\leq 0 \quad \Rightarrow \quad x_1 \in [-1, 1], \\
(x_1 - 1)^2 - 1 &\leq 0 \quad \Rightarrow \quad x_1 \in [0, 2].
\end{aligned}
</math>
 
'''Combining both constraints''':
 
<math>
x_1 \in [0, 1].
</math>
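The interval intersection above can be confirmed with SymPy's inequality solver (a sketch, assuming <math>x_2 = 0</math> as derived):

```python
# Confirm the feasible range for x1 (with x2 = 0) by intersecting the
# solution sets of the two constraint inequalities.
import sympy as sp

x1 = sp.Symbol('x1', real=True)
s1 = sp.solve_univariate_inequality(x1**2 - 1 <= 0, x1,
                                    relational=False)        # [-1, 1]
s2 = sp.solve_univariate_inequality((x1 - 1)**2 - 1 <= 0, x1,
                                    relational=False)        # [0, 2]
print(s1.intersect(s2))  # Interval(0, 1)
```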
 
=== Step 4: Solve for <math>x_1</math> Using Equation (1) ===
 
Substitute <math>x_2 = 0</math> into '''Equation (1)''' and divide by 2:
 
<math>
(x_1 - 2) + \lambda_1 x_1 + \lambda_2 (x_1 - 1) = 0.
</math>
 
'''Assume <math>\lambda_1 > 0</math>''' (Constraint 1 is active). Complementary slackness then gives:
 
<math>
x_1^2 - 1 = 0 \quad \Rightarrow \quad x_1 = \pm 1.
</math>
 
From the feasible range <math>x_1 \in [0, 1]</math>, only <math>x_1 = 1</math> remains.
 
At <math>x_1 = 1</math>, Constraint 2 is inactive, since <math>(1 - 1)^2 + 0^2 - 1 = -1 < 0</math>, so complementary slackness forces <math>\lambda_2 = 0</math>.
 
Substitute <math>x_1 = 1</math> and <math>\lambda_2 = 0</math> into the equation:
 
<math>
(1 - 2) + \lambda_1 = 0 \quad \Rightarrow \quad \lambda_1 = 1.
</math>
 
Since <math>\lambda_1 = 1 > 0</math>, dual feasibility holds and the assumption is consistent.
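This active/inactive case analysis can be reproduced symbolically. The sketch below solves the stationarity equation together with the active-constraint equation (symbol names are our own):

```python
# Solve the case where constraint 1 is active (x1**2 = 1, lambda1 > 0)
# and constraint 2 is inactive (lambda2 = 0), with x2 = 0.
import sympy as sp

x1, l1 = sp.symbols('x1 lambda1', real=True)
stationarity = (x1 - 2) + l1*x1  # Equation (1) with x2 = 0, lambda2 = 0
active = x1**2 - 1               # constraint 1 holds with equality

sols = sp.solve([stationarity, active], [x1, l1], dict=True)
# Includes {x1: 1, lambda1: 1}; the x1 = -1 branch gives lambda1 = -3 < 0,
# which violates dual feasibility and is discarded.
print(sols)
```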
 
=== Step 5: Verify Complementary Slackness ===
 
- '''Constraint 1''':
 
  <math>
  \lambda_1 (x_1^2 - 1) = 1 \times (1 - 1) = 0.
  </math>
 
- '''Constraint 2''':
 
  <math>
  \lambda_2 \left( (x_1 - 1)^2 + x_2^2 - 1 \right) = 0 \times (-1) = 0.
  </math>
 
=== Step 6: Verify Primal Feasibility ===
 
- '''Constraint 1''':
 
  <math>
  x_1^2 - 1 = 1 - 1 = 0 \leq 0.
  </math>
 
- '''Constraint 2''':
 
  <math>
  (x_1 - 1)^2 + x_2^2 - 1 = -1 \leq 0.
  </math>
 
=== Step 7: Conclusion ===
 
- '''Optimal Solution''':
  <math>
  x_1^* = 1, \quad x_2^* = 0.
  </math>
 
- '''Minimum Objective Value''':
  <math>
  f_0^*(x) = (1 - 2)^2 + 0 = 1.
  </math>
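The full set of KKT conditions at the candidate optimum can be checked in a few lines:

```python
# Verify primal feasibility, complementary slackness, stationarity, and
# the objective value at (x1, x2) = (1, 0) with (lambda1, lambda2) = (1, 0).
x1, x2 = 1.0, 0.0
lam1, lam2 = 1.0, 0.0

f0 = (x1 - 2)**2 + x2**2
f1 = x1**2 + x2**2 - 1
f2 = (x1 - 1)**2 + x2**2 - 1

assert f1 <= 0 and f2 <= 0                 # primal feasibility
assert lam1 * f1 == 0 and lam2 * f2 == 0   # complementary slackness
assert 2*(x1 - 2) + 2*lam1*x1 + 2*lam2*(x1 - 1) == 0  # stationarity (x1)
assert 2*x2 + 2*lam1*x2 + 2*lam2*x2 == 0              # stationarity (x2)
print(f0)  # 1.0
```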


==Application==


==Conclusion==

Revision as of 00:40, 6 December 2024
