Quadratically Constrained Quadratic Programming
Introduction
Algorithm Discussion
Numerical Example
To gain a better understanding, consider the following Quadratically Constrained Quadratic Programming (QCQP) problem:
$ \begin{aligned} \text{minimize} \quad & f_0(x) = (x_1 - 2)^2 + x_2^2 \\ \text{subject to} \quad & f_1(x) = x_1^2 + x_2^2 - 1 \leq 0, \\ & f_2(x) = (x_1 - 1)^2 + x_2^2 - 1 \leq 0. \end{aligned} $
We will solve this QCQP problem using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary conditions for a solution in nonlinear programming to be optimal, given certain regularity conditions.
Step 1: Formulate the Lagrangian
The Lagrangian $ L $ combines the objective function and the constraints, each multiplied by a Lagrange multiplier $ \lambda_i $:
$ L(x, \lambda_1, \lambda_2) = (x_1 - 2)^2 + x_2^2 + \lambda_1 (x_1^2 + x_2^2 - 1) + \lambda_2 \left( (x_1 - 1)^2 + x_2^2 - 1 \right). $
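The Lagrangian above can be sketched as a plain Python function; the names `f0`, `f1`, `f2`, and `lagrangian` are illustrative, not from any library:

```python
# Sketch of the example QCQP's objective, constraints, and Lagrangian.

def f0(x1, x2):
    """Objective: (x1 - 2)^2 + x2^2."""
    return (x1 - 2) ** 2 + x2 ** 2

def f1(x1, x2):
    """Constraint 1: x1^2 + x2^2 - 1 <= 0 (unit disk at the origin)."""
    return x1 ** 2 + x2 ** 2 - 1

def f2(x1, x2):
    """Constraint 2: (x1 - 1)^2 + x2^2 - 1 <= 0 (unit disk at (1, 0))."""
    return (x1 - 1) ** 2 + x2 ** 2 - 1

def lagrangian(x1, x2, lam1, lam2):
    """L(x, lambda) = f0(x) + lam1 * f1(x) + lam2 * f2(x)."""
    return f0(x1, x2) + lam1 * f1(x1, x2) + lam2 * f2(x1, x2)
```

For example, at the point $ x = (1, 0) $ with $ \lambda = (1, 0) $, both constraint terms vanish and the Lagrangian equals the objective value.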
For each constraint, the KKT conditions also require:
- Dual Feasibility and Complementary Slackness:
$ \lambda_i \geq 0, \quad \lambda_i f_i(x) = 0, \quad \text{for } i = 1, 2. $
- Primal Feasibility:
$ f_i(x) \leq 0 \quad \text{for } i = 1, 2. $
Step 2: Compute the Gradient of the Lagrangian
Compute the partial derivatives with respect to $ x_1 $ and $ x_2 $:
- Partial Derivative with respect to $ x_1 $:
$ \frac{\partial L}{\partial x_1} = 2(x_1 - 2) + 2\lambda_1 x_1 + 2\lambda_2 (x_1 - 1). $
- Partial Derivative with respect to $ x_2 $:
$ \frac{\partial L}{\partial x_2} = 2x_2 + 2\lambda_1 x_2 + 2\lambda_2 x_2. $
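The two partial derivatives can be spot-checked against a central finite difference; this is a minimal sketch, with all names illustrative:

```python
# Verify the analytic gradient of the Lagrangian with central differences.

def lagrangian(x1, x2, lam1, lam2):
    return ((x1 - 2) ** 2 + x2 ** 2
            + lam1 * (x1 ** 2 + x2 ** 2 - 1)
            + lam2 * ((x1 - 1) ** 2 + x2 ** 2 - 1))

def grad_analytic(x1, x2, lam1, lam2):
    # The two partials derived above.
    dx1 = 2 * (x1 - 2) + 2 * lam1 * x1 + 2 * lam2 * (x1 - 1)
    dx2 = 2 * x2 + 2 * lam1 * x2 + 2 * lam2 * x2
    return dx1, dx2

def grad_numeric(x1, x2, lam1, lam2, h=1e-6):
    # Central differences in x1 and x2.
    dx1 = (lagrangian(x1 + h, x2, lam1, lam2)
           - lagrangian(x1 - h, x2, lam1, lam2)) / (2 * h)
    dx2 = (lagrangian(x1, x2 + h, lam1, lam2)
           - lagrangian(x1, x2 - h, lam1, lam2)) / (2 * h)
    return dx1, dx2

# Spot check at an arbitrary point.
a = grad_analytic(0.5, 0.3, 0.7, 0.2)
n = grad_numeric(0.5, 0.3, 0.7, 0.2)
assert all(abs(ai - ni) < 1e-5 for ai, ni in zip(a, n))
```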
Step 3: Stationarity Conditions
Set the gradients to zero:
- Equation (1):
$ 2(x_1 - 2) + 2\lambda_1 x_1 + 2\lambda_2 (x_1 - 1) = 0. $
- Equation (2):
$ 2x_2 + 2\lambda_1 x_2 + 2\lambda_2 x_2 = 0. $
Equation (2) factors as $ 2 x_2 (1 + \lambda_1 + \lambda_2) = 0 $. Since $ \lambda_1, \lambda_2 \geq 0 $, the second factor satisfies $ 1 + \lambda_1 + \lambda_2 \geq 1 > 0 $, so it follows that:
$ x_2 = 0. $
Substitute $ x_2 = 0 $ into the constraints:
$ \begin{aligned} x_1^2 - 1 &\leq 0 \quad \Rightarrow \quad x_1 \in [-1, 1], \\ (x_1 - 1)^2 - 1 &\leq 0 \quad \Rightarrow \quad x_1 \in [0, 2]. \end{aligned} $
Combining both constraints:
$ x_1 \in [0, 1]. $
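A quick numerical scan confirms that, with $ x_2 = 0 $, the feasible values of $ x_1 $ form exactly the interval $ [0, 1] $; the grid resolution below is an arbitrary choice:

```python
# Scan x1 in [-2, 2] on a grid of step 0.001 and keep the feasible points.
feasible = [
    x / 1000
    for x in range(-2000, 2001)
    if (x / 1000) ** 2 - 1 <= 0            # constraint 1 with x2 = 0
    and ((x / 1000) - 1) ** 2 - 1 <= 0     # constraint 2 with x2 = 0
]
assert min(feasible) == 0.0 and max(feasible) == 1.0
```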
Step 4: Solve for $ x_1 $ Using Equation (1)
Substitute $ x_2 = 0 $ into Equation (1) and divide by 2:
$ (x_1 - 2) + \lambda_1 x_1 + \lambda_2 (x_1 - 1) = 0. $
Assume Constraint 1 is active, so $ \lambda_1 > 0 $; complementary slackness then forces $ f_1(x) = 0 $, i.e. (with $ x_2 = 0 $):
$ x_1^2 - 1 = 0 \quad \Rightarrow \quad x_1 = \pm 1. $
From the feasible range $ x_1 \in [0, 1] $, only $ x_1 = 1 $ qualifies.
Substitute $ x_1 = 1 $ into the stationarity equation; the $ \lambda_2 $ coefficient $ (x_1 - 1) $ vanishes, leaving $ -1 + \lambda_1 = 0 $, so:
$ \lambda_1 = 1. $
This satisfies dual feasibility ($ \lambda_1 \geq 0 $).
At $ x_1 = 1 $, Constraint 2 is inactive ($ f_2(1, 0) = -1 < 0 $), so complementary slackness requires $ \lambda_2 = 0 $.
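The multiplier solve in this step can be checked numerically; a minimal sketch, with variable names chosen for illustration:

```python
# Solve the stationarity equation for lam1 at the candidate point,
# given x1 = 1, x2 = 0, lam2 = 0:
#   (x1 - 2) + lam1 * x1 + lam2 * (x1 - 1) = 0
#   =>  lam1 = (2 - x1 - lam2 * (x1 - 1)) / x1
x1, lam2 = 1.0, 0.0
lam1 = (2 - x1 - lam2 * (x1 - 1)) / x1
assert lam1 == 1.0 and lam1 >= 0  # dual feasibility holds
```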
Step 5: Verify Complementary Slackness
- Constraint 1:
$ \lambda_1 (x_1^2 - 1) = 1 \times (1 - 1) = 0. $
- Constraint 2:
$ \lambda_2 \left( (x_1 - 1)^2 + x_2^2 - 1 \right) = 0 \times (-1) = 0. $
Step 6: Verify Primal Feasibility
- Constraint 1:
$ x_1^2 - 1 = 1 - 1 = 0 \leq 0. $
- Constraint 2:
$ (x_1 - 1)^2 + x_2^2 - 1 = 0 + 0 - 1 = -1 \leq 0. $
Step 7: Conclusion
- Optimal Solution:
$ x_1^* = 1, \quad x_2^* = 0. $
- Minimum Objective Value:
$ f_0(x^*) = (1 - 2)^2 + 0^2 = 1. $
Since the objective and both constraints are convex and Slater's condition holds (e.g., $ x = (0.5, 0) $ is strictly feasible), the KKT conditions are also sufficient here, so $ x^* = (1, 0) $ is the global minimum.
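All four KKT conditions at the candidate point can be bundled into one check; this is a sketch with an illustrative function name, not a general-purpose verifier:

```python
# Verify stationarity, primal/dual feasibility, and complementary slackness
# at the candidate (x1, x2, lam1, lam2) = (1, 0, 1, 0).

def check_kkt(x1, x2, lam1, lam2, tol=1e-9):
    f1 = x1 ** 2 + x2 ** 2 - 1
    f2 = (x1 - 1) ** 2 + x2 ** 2 - 1
    # Stationarity: the gradient of the Lagrangian vanishes.
    stat = (abs(2 * (x1 - 2) + 2 * lam1 * x1 + 2 * lam2 * (x1 - 1)) < tol
            and abs(2 * x2 + 2 * lam1 * x2 + 2 * lam2 * x2) < tol)
    primal = f1 <= tol and f2 <= tol
    dual = lam1 >= 0 and lam2 >= 0
    slack = abs(lam1 * f1) < tol and abs(lam2 * f2) < tol
    return stat and primal and dual and slack

assert check_kkt(1, 0, 1, 0)          # the candidate passes all conditions
assert not check_kkt(0.5, 0, 0, 0)    # an interior point fails stationarity
```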