Quadratically constrained quadratic programming
Author: Jialiang Wang (jw2697), Jiaxin Zhang (jz2289), Wenke Du (wd275), Yufan Zhu (yz2899), David Berroa (deb336) (ChemE 6800 Fall 2024)

Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu
==Introduction==
'''[[wikipedia:Quadratic_programming|Quadratic programming (QP)]]''' is one of the oldest topics in the field of optimization, studied by researchers since the twentieth century. The basic QP, where the objective function is quadratic and the constraints are linear, paved the way for other forms, such as QCQPs, which also have quadratic constraints (McCarl, Moskowitz et al., 1977).

A '''[[wikipedia:Quadratically_constrained_quadratic_program|Quadratically Constrained Quadratic Program (QCQP)]]''' is an optimization problem in which both the objective function and the constraints are quadratic. It emerged as [[wikipedia:Mathematical_optimization|optimization theory]] grew to address more realistic, [https://zhengy09.github.io/ECE285/lectures/L6.pdf complex problems] with non-linear objectives and constraints. In particular, the problem involves minimizing (or maximizing) a [[wikipedia:Convex_function|quadratic function]] of decision variables subject to quadratic constraints. This class of problems is well suited to finance (Zenios, 1993), engineering, machine learning, and agriculture, where quadratic functions often capture the relationships between variables naturally.
* The desire to study QCQPs stems from their ability to model practical optimization problems that involve stochasticity in risk, resources, production, and decision-making. For example, in agriculture, QCQPs can help determine the best crop to grow given the expected profits and the uncertainties of price changes and unfavourable weather conditions (Floudas, 1995).
* In finance, QCQPs are applied in portfolio construction to maximize expected returns while controlling the covariance between assets. It is crucial to understand QCQPs and the methods for solving them, such as '''[[wikipedia:Karush–Kuhn–Tucker_conditions|KKT (Karush-Kuhn-Tucker)]]''' conditions and '''[[wikipedia:Semidefinite_programming|SDP (Semidefinite Programming)]]''' relaxations, in order to address problems that linear models cannot effectively solve (Bao & Sahinidis, 2011).
{| class="wikitable"
|+
!Name
!Brief info
|-
|'''KKT (Karush-Kuhn-Tucker)'''
|KKT is a mathematical optimization method used to solve constrained optimization problems (Ghojogh, Karray et al., 2021).
It builds upon the method of Lagrange multipliers by introducing necessary conditions for optimality that incorporate primal and dual variables.
The KKT conditions include stationarity, primal feasibility, dual feasibility, and complementary slackness, making them particularly effective for solving problems with nonlinear constraints.
|-
|'''SDP (Semidefinite Programming)'''
|SDP reformulates a QCQP problem as a semidefinite programming relaxation (Freund, 2004).
By "lifting" the problem to a higher-dimensional space and applying SDP relaxation, this approach provides a tractable way to solve or approximate solutions to non-convex QCQP problems.
It is widely used in areas where global optimization or approximations to non-convex problems are necessary.
|}
In general, analyzing QCQPs is important in order to apply knowledge-based decision-making and enhance the performance and stability of optimization methods in different fields (Zenios, 1993).
==Algorithm Discussion==
The two approaches discussed in this article are the KKT conditions and SDP relaxation.

'''KKT conditions.''' For a QCQP with objective <math>f_0(x)</math>, inequality constraints <math>f_i(x) \leq 0</math>, and equality constraints <math>Ax = b</math>, the KKT conditions state that at a local optimum <math>x^*</math> there exist multipliers <math>\lambda_i \geq 0</math> and <math>\nu</math> satisfying:
* Stationarity: <math>\nabla f_0(x^*) + \sum_{i=1}^m \lambda_i \nabla f_i(x^*) + A^T \nu = 0</math>
* Primal feasibility: <math>f_i(x^*) \leq 0</math> for <math>i = 1, \ldots, m</math> and <math>Ax^* = b</math>
* Dual feasibility: <math>\lambda_i \geq 0</math>
* Complementary slackness: <math>\lambda_i f_i(x^*) = 0</math>
For convex QCQPs (all <math>P_i</math> positive semidefinite), the KKT conditions are both necessary and sufficient for global optimality.

'''SDP relaxation.''' By introducing the lifted variable <math>X = xx^T</math>, the quadratic terms become linear in <math>X</math>; relaxing the rank-1 condition to <math>X \succeq 0</math> yields a convex semidefinite program whose optimal value bounds that of the original QCQP. When the relaxation's solution is rank-1, an exact solution of the original problem can be recovered.
== Numerical Example ==
A Quadratically Constrained Quadratic Program (QCQP) has the general form:

<math>
\begin{array}{ll}
\operatorname{minimize} & \frac{1}{2} x^{\mathrm{T}} P_0 x+q_0^{\mathrm{T}} x \\
\text { subject to } & \frac{1}{2} x^{\mathrm{T}} P_i x+q_i^{\mathrm{T}} x+r_i \leq 0 \quad \text { for } i=1, \ldots, m \\
& A x=b,
\end{array}
</math>
where <math>P_0, \ldots, P_m</math> are ''n''-by-''n'' matrices and <math>x \in \mathbf{R}^n</math> is the optimization variable. If <math>P_0, \ldots, P_m</math> are all positive semidefinite, then the problem is convex. If these matrices are neither positive nor negative semidefinite, the problem is non-convex. If <math>P_1, \ldots, P_m</math> are all zero, then the constraints are in fact linear and the problem is a quadratic program.
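The convexity test described above can be sketched numerically: a QCQP is convex exactly when every <math>P_i</math> is positive semidefinite. This is an illustrative helper (the function name is ours, not from the article), assuming NumPy:

```python
import numpy as np

def is_convex_qcqp(P_list, tol=1e-9):
    """A QCQP is convex when every P_i is positive semidefinite,
    i.e. all eigenvalues of each (symmetrized) P_i are >= 0."""
    for P in P_list:
        P = np.asarray(P, dtype=float)
        sym = (P + P.T) / 2  # only the symmetric part matters in x^T P x
        if np.linalg.eigvalsh(sym).min() < -tol:
            return False
    return True

# P0 = I is PSD; P1 has eigenvalues +1 and -1, so it is indefinite
P0 = np.eye(2)
P1 = np.array([[0.0, 1.0], [1.0, 0.0]])
print(is_convex_qcqp([P0]))       # True  -> convex problem
print(is_convex_qcqp([P0, P1]))   # False -> non-convex problem
```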
=== Example 1: KKT Approach ===
Consider the following numerical example:

<math>
\begin{aligned}
\text{minimize} \quad & f_0(x) = (x_1 - 2)^2 + x_2^2 \\
\text{subject to} \quad & f_1(x) = x_1^2 + x_2^2 - 1 \leq 0, \\
& f_2(x) = (x_1 - 1)^2 + x_2^2 - 1 \leq 0.
\end{aligned}
</math>

Steps:
* Formulate the [[wikipedia:Lagrangian_mechanics|Lagrangian]] and compute the gradients
* Apply the stationarity conditions
* Determine the active constraints using complementary slackness
==== 1.1 Lagrangian Formulation ====
The '''Lagrangian formulation''' in optimization is a mathematical framework used to solve constrained optimization problems by incorporating both the objective function and the constraints into a single scalar function, called the '''Lagrangian''' <math>L</math>. This formulation introduces '''Lagrange multipliers''' <math>\lambda_i</math> for each constraint, enabling the transformation of a constrained optimization problem into an unconstrained one as follows:

<math>L(x, \lambda, \nu)=f_0(x)+\sum_{i=1}^m \lambda_i f_i(x)+\nu^T(A x-b)</math>

where:
* <math>f_0(x)</math> is the objective function to be minimized,
* <math>f_i(x) \leq 0</math> are the inequality constraints,
* <math>A x=b</math> represents the equality constraints,
* <math>\lambda_i \geq 0</math> are the Lagrange multipliers associated with the inequality constraints,
* <math>\nu</math> is the Lagrange multiplier vector for the equality constraints.
For this example, the Lagrangian is:

<math>
L(x, \lambda_1, \lambda_2) = (x_1 - 2)^2 + x_2^2 + \lambda_1 (x_1^2 + x_2^2 - 1) + \lambda_2 \left( (x_1 - 1)^2 + x_2^2 - 1 \right).
</math>

For each constraint,
* the complementary slackness condition is <math> \lambda_i \geq 0, \quad \lambda_i f_i(x)=0, \quad \text { for } i=1,2 </math>
* the primal feasibility condition is <math> f_i(x) \leq 0 \quad \text { for } i=1,2</math>.
The results of the gradient computation are:
* the partial derivative with respect to <math>x_1</math>: <math> \frac{\partial L}{\partial x_1}=2\left(x_1-2\right)+2 \lambda_1 x_1+2 \lambda_2\left(x_1-1\right)</math>
* the partial derivative with respect to <math>x_2</math>: <math> \frac{\partial L}{\partial x_2} = 2x_2 + 2\lambda_1 x_2 + 2\lambda_2 x_2. </math>
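The two partial derivatives above can be sanity-checked against central finite differences. A minimal sketch (the helper names are ours), assuming NumPy:

```python
import numpy as np

# Evaluate the example's Lagrangian and its analytic gradient,
# then compare the analytic partials against central finite differences.
def lagrangian(x, lam1, lam2):
    x1, x2 = x
    return ((x1 - 2)**2 + x2**2
            + lam1 * (x1**2 + x2**2 - 1)
            + lam2 * ((x1 - 1)**2 + x2**2 - 1))

def grad_lagrangian(x, lam1, lam2):
    x1, x2 = x
    dx1 = 2*(x1 - 2) + 2*lam1*x1 + 2*lam2*(x1 - 1)
    dx2 = 2*x2 + 2*lam1*x2 + 2*lam2*x2
    return np.array([dx1, dx2])

x, lam1, lam2, h = np.array([0.3, -0.7]), 0.5, 1.2, 1e-6
fd = np.array([
    (lagrangian(x + [h, 0], lam1, lam2) - lagrangian(x - [h, 0], lam1, lam2)) / (2*h),
    (lagrangian(x + [0, h], lam1, lam2) - lagrangian(x - [0, h], lam1, lam2)) / (2*h),
])
assert np.allclose(fd, grad_lagrangian(x, lam1, lam2), atol=1e-4)
```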
==== 1.2 Applying the Stationarity Conditions ====
===== 1.2.1 Setting the gradients to zero =====
* <math>
2(x_1 - 2) + 2\lambda_1 x_1 + 2\lambda_2 (x_1 - 1) = 0
</math>
* <math>
2x_2 + 2\lambda_1 x_2 + 2\lambda_2 x_2 = 0
</math>
** since this factors as <math>x_2 (1 + \lambda_1 + \lambda_2) = 0</math> and <math>\lambda_i \geq 0</math> for <math>i = 1, 2</math>, it follows that <math>
x_2 = 0.
</math>
* together with the constraints, <math>
x_1 \in [0, 1].
</math>
===== 1.2.2 Substituting =====
Substituting <math>x_2 = 0</math> into the constraints gives

<math>
\begin{aligned}
& x_1^2 - 1 \leq 0 \quad \Rightarrow \quad -1 \leq x_1 \leq 1, \\
& (x_1 - 1)^2 - 1 \leq 0 \quad \Rightarrow \quad 0 \leq x_1 \leq 2,
\end{aligned}
</math>

so the feasible range is <math>x_1 \in [0, 1]</math>.
===== 1.2.3 Problem Solving =====
* Substitute <math>x_2 = 0</math> into Equation (1): <math>
(x_1 - 2) + \lambda_1 x_1 + \lambda_2 (x_1 - 1) = 0.
</math>
** Assume <math>\lambda_1 > 0</math> (since Constraint 1 is active): <math>
x_1^2 - 1 = 0 \quad \Rightarrow \quad x_1 = \pm 1.
</math>
* From the feasible range, <math>x_1 = 1</math>.
** Substituting <math>x_1 = 1</math> (with <math>\lambda_2 = 0</math>) into the equation gives <math>
\lambda_1 = 1.
</math>
*** Since <math>\lambda_1 \geq 0</math>, this is acceptable.
* Assume <math>\lambda_2 = 0</math> because Constraint 2 is not active at <math>x_1 = 1</math>.
==== 1.3 Verification ====
===== 1.3.1 Complementary Slackness Verification =====
* Constraint 1: <math>
\lambda_1 (x_1^2 - 1) = 1 \times (1 - 1) = 0.
</math>
* Constraint 2: <math>
\lambda_2 \left( (x_1 - 1)^2 + x_2^2 - 1 \right) = 0 \times (-1) = 0.
</math>
===== 1.3.2 Primal Feasibility Verification =====
* Constraint 1: <math>
x_1^2 - 1 = 1 - 1 = 0 \leq 0.
</math>
* Constraint 2: <math>
(x_1 - 1)^2 + x_2^2 - 1 = -1 \leq 0.
</math>
==== 1.4 Conclusion ====
* Optimal Solution: <math>
x_1^* = 1, \quad x_2^* = 0.
</math>
* Minimum Objective Value: <math>
f_0^*(x) = (1 - 2)^2 + 0 = 1.
</math>
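The KKT point derived above can be verified numerically. A minimal sketch (variable names are ours, NumPy assumed) that checks all four KKT conditions at <math>x^* = (1, 0)</math> with <math>\lambda = (1, 0)</math>:

```python
import numpy as np

# Numerical check of the KKT solution: x* = (1, 0), lambda = (1, 0).
x1, x2 = 1.0, 0.0
lam = np.array([1.0, 0.0])

f = [x1**2 + x2**2 - 1,          # constraint 1
     (x1 - 1)**2 + x2**2 - 1]    # constraint 2

# Stationarity: gradient of the Lagrangian must vanish at x*
grad_L = np.array([
    2*(x1 - 2) + 2*lam[0]*x1 + 2*lam[1]*(x1 - 1),
    2*x2 + 2*lam[0]*x2 + 2*lam[1]*x2,
])

assert np.allclose(grad_L, 0)                           # stationarity
assert all(fi <= 1e-12 for fi in f)                     # primal feasibility
assert all(lam >= 0)                                    # dual feasibility
assert all(abs(l*fi) < 1e-12 for l, fi in zip(lam, f))  # complementary slackness
print("objective:", (x1 - 2)**2 + x2**2)                # prints 1.0
```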
=== Example 2: SDP-Based QCQP ===
'''SDP (Semidefinite Programming)''' is used here as a convex optimization technique that relaxes the original problem into a semidefinite form.
The difference:
* For a QCQP problem, the formulation is typically:
<math>\operatorname{minimize} f_0(x)=\frac{1}{2} x^T P_0 x+q_0^T x+r_0</math>
<math>\text { subject to } f_i(x)=\frac{1}{2} x^T P_i x+q_i^T x+r_i \leq 0, \quad i=1, \ldots, m, </math>
<math>A x=b </math>
* SDP relaxes the problem by introducing a symmetric matrix <math>X = x x^T</math> and reformulating the problem over the semidefinite cone (where <math>X \succeq 0</math> ensures <math>X</math> is positive semidefinite):
<math>\operatorname{minimize}\left\langle P_0, X\right\rangle+q_0^T x+r_0,</math>
<math>\text { subject to }\left\langle P_i, X\right\rangle+q_i^T x+r_i \leq 0, \quad i=1, \ldots, m, </math>
<math>X \succeq x x^T, \quad A x=b.</math>
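The lifting step rests on the identity <math>x^T P x = \langle P, xx^T \rangle = \operatorname{tr}(P X)</math>, which makes the quadratic terms linear in <math>X</math>. A quick numerical sketch of this identity (random data, NumPy assumed):

```python
import numpy as np

# For X = x x^T, the quadratic form x^T P x equals the inner
# product <P, X> = trace(P X), which is linear in X.
rng = np.random.default_rng(0)
x = rng.standard_normal(3)
P = rng.standard_normal((3, 3))
P = (P + P.T) / 2          # symmetrize

X = np.outer(x, x)         # lifted variable X = x x^T
quad_form = x @ P @ x
lifted = np.trace(P @ X)   # <P, X>

assert np.isclose(quad_form, lifted)
```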
Consider the following numerical example:

<math> \begin{aligned} \text{minimize} \quad & f_0(x) = x_1^2 + x_2^2 \\ \text{subject to} \quad & f_1(x) = x_1^2 + x_2^2 - 2 \leq 0, \\ & f_2(x) = -x_1 x_2 + 1 \leq 0. \end{aligned} </math>

Interpretation:
'''Objective:'''
* <math>f_0(x) = x_1^2 + x_2^2</math> is the squared distance from the origin.
* The goal is to find a point in the feasible region as close as possible to the origin.
'''Constraints:'''
* <math>f_1(x) = x_1^2 + x_2^2 - 2 \leq 0</math> restricts <math>(x_1, x_2)</math> to lie inside or on a circle of radius <math>\sqrt{2}</math>.
* <math>f_2(x) = -x_1 x_2 + 1 \leq 0 \implies x_1 x_2 \geq 1</math> defines a hyperbolic region.
* To satisfy <math>x_1 x_2 \geq 1</math>, both variables must be sufficiently large in magnitude and have the same sign.
'''Calculation Steps:'''
* Lifting and Reformulation
* SDP Relaxation
* Solver Application and Solution Recovery
* Value Optimization
==== 2.1 Lifting and Reformulation ====
* Lifted variable introduction: <math> x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \quad X = x x^T = \begin{pmatrix} x_1^2 & x_1 x_2 \\ x_1 x_2 & x_2^2 \end{pmatrix}. </math>
** If <math>X = x x^T</math>, then <math>X \succeq 0</math> (positive semidefinite) and <math>X</math> is rank-1.
* Rewriting the objective and constraints in terms of <math>X</math>:
** Objective: <math>x_1^2 + x_2^2 = \langle I, X \rangle</math>, where <math>I</math> is the 2×2 identity matrix.
** Constraint 1: <math>x_1^2 + x_2^2 - 2 \leq 0 \implies \langle I, X \rangle - 2 \leq 0.</math>
** Constraint 2: <math>-x_1 x_2 + 1 \leq 0 \implies X_{12} \geq 1.</math>
==== 2.2 SDP Relaxation ====
The original QCQP is non-convex due to the rank-1 condition on <math>X</math>.
Relaxing the rank constraint and keeping only <math>X \succeq 0</math> gives:

<math> \begin{aligned} \text{minimize} \quad & \langle I, X \rangle \\ \text{subject to} \quad & \langle I, X \rangle - 2 \leq 0, \\ & X_{12} \geq 1, \\ & X \succeq 0. \end{aligned} </math>
==== 2.3 Solver Application and Solution Recovery ====
Solving the SDP yields a feasible solution <math>X^*</math> that achieves the minimum:

<math> X^* = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}. </math>

* Check that <math>X^*</math> is rank-1:
<math> X^* = \begin{pmatrix}1 \\ 1\end{pmatrix} \begin{pmatrix}1 & 1\end{pmatrix} = x^*(x^*)^T, </math>
* with <math>x^* = (1, 1)</math>.
==== 2.4 Value Optimization ====
The original QCQP's optimal solution is <math>x^* = (1, 1)</math>.
* Check feasibility:
** <math>x_1^2 + x_2^2 = 1 + 1 = 2 \implies f_1(x^*) = 0 \leq 0.</math>
** <math>x_1 x_2 = 1 \implies f_2(x^*) = -1 + 1 = 0 \leq 0.</math>
** Result: all constraints are satisfied.
The optimal objective value is: <math>f_0^*(x) = x_1^{*2} + x_2^{*2} = 1 + 1 = 2.</math>
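The quoted relaxation solution <math>X^*</math> can be checked directly. A minimal sketch (NumPy assumed) verifying positive semidefiniteness, feasibility, rank-1 structure, and the recovery of <math>x^*</math>:

```python
import numpy as np

# Check the relaxation solution X* quoted above: PSD, feasible, rank-1,
# and objective value <I, X*> = trace(X*) = 2.
X_star = np.array([[1.0, 1.0],
                   [1.0, 1.0]])

eigvals = np.linalg.eigvalsh(X_star)
assert eigvals.min() >= -1e-12             # X* is positive semidefinite
assert np.linalg.matrix_rank(X_star) == 1  # rank-1, so x* can be recovered
assert np.trace(X_star) - 2 <= 1e-12       # <I, X> - 2 <= 0
assert X_star[0, 1] >= 1                   # X_12 >= 1

# Recover x* from the rank-1 factorization X* = x x^T
w, V = np.linalg.eigh(X_star)
x_star = np.sqrt(w[-1]) * V[:, -1]
assert np.allclose(np.outer(x_star, x_star), X_star)
print("objective:", np.trace(X_star))      # prints 2.0
```

Note that `eigh` may return the eigenvector with either sign; both <math>(1,1)</math> and <math>(-1,-1)</math> reproduce <math>X^*</math> and are feasible here.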
=== Comparison between the Two Examples ===
* '''Accuracy''': Both the KKT and SDP methods yielded the exact solution for this convex problem. However, SDP relaxation has the added advantage of handling certain non-convexities under specific conditions, where KKT may fail.
* '''Efficiency''': KKT conditions are computationally faster, making them suitable for real-time applications. In contrast, SDP relaxations are resource-intensive, limiting their use in high-dimensional problems.
* '''Scalability''': The performance of SDP relaxations deteriorates as the problem size increases due to the reliance on matrix computations.
==Conclusion==
In conclusion, Quadratically Constrained Quadratic Programs (QCQPs) are a significant class of optimization problems extending quadratic programming by incorporating quadratic constraints (Bao & Sahinidis, 2011). These problems are essential for modeling complex real-world scenarios where both the objective function and the constraints are quadratic. QCQPs are widely applicable in areas such as agriculture, finance, production planning, and machine learning, where they help optimize decisions by balancing competing factors such as profitability, risk, and resource constraints.

The study and solution of QCQPs are critical due to their ability to capture complex relationships and non-linearities, offering a more realistic representation of many practical problems than simpler linear models. Techniques such as '''Karush-Kuhn-Tucker (KKT) conditions and semidefinite programming (SDP) relaxations''' provide effective tools for solving QCQPs (Elloumi & Lambert, 2019), offering both exact and approximate solutions depending on the problem's structure. These methods allow for efficient handling of the challenges posed by quadratic constraints and non-linearities.

Looking forward, there are several potential areas for improvement in QCQP algorithms. One direction is the development of more efficient relaxation techniques for solving non-convex QCQPs, especially in large-scale problems where computational efficiency becomes critical. Additionally, there is ongoing research into hybrid methods that combine the strengths of different optimization techniques, such as SDP and machine learning, to improve the robustness and speed of solving QCQPs in dynamic environments. As optimization problems become increasingly complex and data-rich, advancements in QCQP algorithms will continue to play a crucial role in making informed, optimal decisions in diverse applications.
==Reference==
[1] Agarwal, D., Singh, P., & El Sayed, M. A. (2023). The Karush–Kuhn–Tucker (KKT) optimality conditions for fuzzy-valued fractional optimization problems. ''Mathematics and Computers in Simulation'', 205, 861-877. DOI: [https://www.researchgate.net/publication/365290227_The_Karush-Kuhn-Tucker_KKT_optimality_conditions_for_fuzzy-valued_fractional_optimization_problems 10.1016/j.matcom.2022.10.024]

[2] Bao, X., Sahinidis, N. V., & Tawarmalani, M. (2011). [https://link.springer.com/article/10.1007/s10107-011-0462-2 Semidefinite relaxations for quadratically constrained quadratic programming: A review and comparisons] (PDF). ''Mathematical Programming'', 129, 129-157.

[3] Bose, S., Gayme, D. F., Chandy, K. M., & Low, S. H. (2015). Quadratically constrained quadratic programs on acyclic graphs with application to power flow. ''IEEE Transactions on Control of Network Systems'', 2(3), 278-287. [https://ieeexplore.ieee.org/document/7035094 DOI: 10.1109/TCNS.2015.2401172]

[4] Elloumi, S., & Lambert, A. (2019). Global solution of non-convex quadratically constrained quadratic programs. ''Optimization Methods and Software'', 34(1), 98-114. DOI: https://doi.org/10.1080/10556788.2017.1350675

[5] Freund, R. M. (2004). [https://ocw.mit.edu/courses/15-084j-nonlinear-programming-spring-2004/a632b565602fd2eb3be574c537eea095_lec23_semidef_opt.pdf Introduction to semidefinite programming (SDP)] (PDF). Massachusetts Institute of Technology, 8-11.

[6] Ghojogh, B., Ghodsi, A., Karray, F., & Crowley, M. (2021). KKT conditions, first-order and second-order optimization, and distributed optimization: tutorial and survey. arXiv preprint arXiv:2110.01858.

[7] McCarl, B. A., Moskowitz, H., & Furtan, H. (1977). Quadratic programming applications. ''Omega'', 5(1), 43-55.

[8] Zenios, S. A. (Ed.). (1993). ''Financial Optimization''. Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511522130
==Further Reading==
In Algorithm:
* [1] Floudas, C. A., & Visweswaran, V. (1995). [https://link.springer.com/chapter/10.1007/978-1-4615-2025-2_5 Quadratic optimization. ''Handbook of Global Optimization''] (PDF), 217-269.
* [2] Xu, H. K. (2003). [https://link.springer.com/article/10.1023/A:1023073621589 An iterative approach to quadratic optimization. ''Journal of Optimization Theory and Applications''] (PDF), 116, 659-678.
In Finance Portfolio Construction:
* [1] [https://link.springer.com/article/10.1007/s10462-022-10273-7 Gunjan, A., & Bhattacharyya, S. (2023). A brief review of portfolio optimization techniques] (PDF). ''Artificial Intelligence Review'', 56(5), 3847-3886. DOI: https://doi.org/10.1007/s10462-022-10273-7
* [2] Xu, H. K. (2003). An iterative approach to quadratic optimization. ''Journal of Optimization Theory and Applications'', 116, 659-678.
==External Links==
* [[wikipedia:Convex_function|Convex Quadratic Function]]
* [[wikipedia:Lagrangian_mechanics|Lagrangian mechanics]]
* [[wikipedia:Karush–Kuhn–Tucker_conditions|Karush–Kuhn–Tucker conditions]]
* [[wikipedia:Semidefinite_programming|Semidefinite programming]]
* [[wikipedia:Quadratically_constrained_quadratic_program|Quadratically constrained quadratic program]]
* [https://link.springer.com/article/10.1007/s10107-020-01589-9 Semidefinite Programming]
* [https://nag.com/solving-quadratically-constrained-quadratic-programming-qcqp-problems/ Solving quadratically constrained quadratic programming (QCQP) problems]
{| class="wikitable"
|+
!Category: '''[[NonLinear Programming (NLP)]] - [[Quadratic programming]]'''
|}
<references />
Latest revision as of 12:02, 11 December 2024