Quadratic programming

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
Authors: Kayleigh Calder (kjc263), Colton Jacobucci (cdj59), Carolyn Johnson (cj456), Caleb McKinney (cdm235), Olivia Thomas (oat9) (ChemE 6800 Fall 2024)<br />
Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu


=Introduction=
A quadratic program is an optimization problem with a quadratic objective function subject to linear constraints.<sup>1</sup> Quadratic programming (QP) is a common type of non-linear programming (NLP) used to solve such problems.
 
One of the earliest known theories for QP was documented in 1943 by Columbia University’s H.B. Mann<sup>2,3</sup>, but many are credited for early contributions to the field, such as E. W. Barankin and R. Dorfman in their naval research throughout the 1950s<sup>4</sup> and Princeton University’s Frank and Wolfe for their research in 1956.<sup>5</sup> The field has since made other prominent strides, such as when Harry Markowitz received the Nobel Prize in Economics in 1990 for portfolio theory, which applies QP to balance financial risk and reward.<sup>6</sup>
 
QP is essential to the field of optimization for multiple reasons. Firstly, quadratic models arise naturally in practice because variance, the sum-of-squares deviation used to represent uncertainty, is itself quadratic.<sup>6</sup> QP applies to a wide variety of real-world problems such as scheduling, planning, flow computations, engineering modeling, design and control, game theory, and economics.<sup>7</sup> Secondly, QP is commonly used as a stepping stone to more general optimization methods such as sequential quadratic programming and augmented Lagrangian methods.<sup>1</sup>
 
=Algorithm Discussion=
 
=== General Problem Formulation ===
Quadratic programming problems are typically formatted as minimization problems, and the general mathematical formulation is:
 
'''minimize '''<math>q(x)=\frac{1}{2}x^TQx+c^Tx</math>
 
'''subject to:'''
 
<math>Ax \geq b</math>
 
<math>x \geq 0</math>
 
Where:
 
* <math>x\in R^n</math> is the '''decision variable''' vector.
* <math>Q\in R^{n\times n}</math> is a symmetric, positive semi-definite <math>n \times n</math> matrix representing the quadratic coefficients.
* <math>A\in R^{m\times n}</math> is the inequality constraint matrix.
* <math>b \in R^m</math> is an <math>m</math> dimensional vector representing a constraint boundary.
* <math>c \in R^n</math> is a linear coefficient vector.
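As an illustration, a problem in this form can be handed to a general-purpose solver. The sketch below uses SciPy's <code>minimize</code> (an assumption; any QP-capable solver works) on small illustrative data, not taken from the text:

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

# Illustrative data (chosen for this sketch, not from the text):
# minimize 1/2 x'Qx + c'x  subject to  Ax >= b  and  x >= 0.
Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])            # symmetric positive definite
c = np.array([-8.0, -6.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def q(x):
    return 0.5 * x @ Q @ x + c @ x    # quadratic objective

def grad(x):
    return Q @ x + c                  # its gradient

res = minimize(q, x0=np.ones(2), jac=grad,
               constraints=[LinearConstraint(A, lb=b)],  # Ax >= b
               bounds=[(0, None)] * 2)                   # x >= 0
print(res.x, res.fun)
```

Here the inequality <math>Ax \geq b</math> is inactive at the optimum, so the solver recovers the unconstrained minimizer of the quadratic.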
 
=== Dual Formulation ===
The general mathematical formulation for the QP dual is:
 
'''maximize '''<math>q(x,y) = b^Ty-\frac{1}{2}x^TQx</math>
 
'''subject to:'''
 
<math>A^T y - Qx + s =c</math>
 
<math>y, s\geq 0</math>
 
Where:
* <math>y\in R^m</math> is the <math>m</math>-dimensional dual variable vector.
* <math>x\in R^n</math> is the primal decision variable.
* <math>s\in R^n</math> is the slack variable.
* <math>Q\in R^{n\times n}</math> is a symmetric, positive semi-definite <math>n \times n</math> matrix representing the quadratic coefficients.
* <math>A\in R^{m\times n}</math> is the inequality constraint matrix.
* <math>b \in R^m</math> is an <math>m</math> dimensional vector representing a constraint boundary.
* <math>c \in R^n</math> is the linear coefficient vector from the primal objective.
 
The general conditions for using QP begin with a quadratic objective function accompanied by linear constraints. For the problem to be convex, the matrix <math>Q</math> defined above must be positive semi-definite; if it is not, the problem is non-convex and may have multiple local minima. As later sections discuss, problem dimensions may vary widely in size, but this is manageable because certain QP algorithms are tailored to large-scale computational demands. Finally, the feasible region must be non-empty; otherwise the problem has no solution.
 
==Active Set Methods==
The active set algorithm is a method used to solve quadratic programming problems by iteratively identifying and working with the set of constraints that are most likely to influence the solution, called the active set. More specifically, the algorithm maintains a subset of constraints that are treated as equalities at each step, solving a simplified quadratic programming problem for this subset. Constraints can be added or removed from the active set as the solution progresses until optimality conditions are satisfied.
 
=== When to Use ===
Active set methods are well suited to small- and medium-scale quadratic programs, as they exploit the problem's structure and update estimates of the active constraints iteratively. They generalize the simplex approach for linear programming to handle quadratic objectives, which makes them useful when the constraint structure is informative. For large-scale problems, or when degeneracy complicates the constraint structure, alternative methods such as interior-point techniques may be more appropriate.
 
=== Implementation Steps ===
 
# Start with an Initial Solution and Active Set
#* Begin with a feasible point that satisfies all constraints.
#* Identify which constraints are active (equality constraints or inequality constraints that are tight).
# Iterative Process:
#* Solve a Reduced Problem: Fix the active constraints as equalities and solve the resulting smaller QP problem.
#* Check Optimality: Verify if the current solution satisfies the Karush-Kuhn-Tucker conditions.
#* Update the Active Set:
#** Add violated constraints to the active set if they are not satisfied.
#** Remove constraints from the active set if they are no longer binding.
# Repeat Until Convergence:
#* Iterate through the process until the optimal solution is found, ensuring all constraints are satisfied.
 
=== Pseudocode ===
 
This is the pseudocode for the active-set method for convex QP, where:
 
* <math>x\in R^n</math> is the '''decision variable''' vector.
* <math>G\in R^{n\times n}</math> is a symmetric, positive semi-definite <math>n \times n</math> matrix representing the quadratic coefficients.
* <math>c\in R^n</math> is a linear coefficient vector.
* <math>W</math> represents the active constraints.
 
 
<code> Compute a feasible starting point <math>x_0</math>;

Set <math>W_0</math> to be a subset of the active constraints at <math>x_0</math>;

'''for''' <math>k=0,1,2,...</math>

Solve the equality-constrained subproblem <math>\min_p \tfrac{1}{2}p^TGp+g_k^Tp</math> subject to <math>a_i^Tp=0, \ i\in W_k</math>, where <math>g_k=Gx_k+c</math>, to obtain <math>p_k</math>;

'''if''' <math>p_k=0</math>

Compute Lagrange multipliers <math>\hat{\lambda}_i</math> that satisfy <math>\sum_{i\in\hat{W}}a_i\hat{\lambda}_i=g=Gx_k+c,</math>

with <math>\hat{W}=W_k</math>

'''if''' <math>\hat{\lambda}_i\geq0</math> for all <math>i \in W_k \cap \mathcal{I}</math>, where <math>\mathcal{I}</math> indexes the inequality constraints,

'''stop''' with solution <math>x^*=x_k; </math>

'''else'''

<math>j \longleftarrow \arg\min_{i \in W_k \cap \mathcal{I}} \hat{\lambda}_i; </math>

<math>x_{k+1} \longleftarrow x_k; \ W_{k+1} \longleftarrow W_k \setminus \{j\}; </math>

'''else''' <math>(p_k \neq 0) </math>

Compute step length <math>\alpha_k = \min \left(1, \ \min_{i \notin W_k, \ a_i^Tp_k<0}\frac{b_i-a_i^Tx_k}{a_i^Tp_k}\right) </math>

<math>x_{k+1} \longleftarrow x_k + \alpha_kp_k; </math>

'''if''' there are blocking constraints:  

obtain <math>W_{k+1} </math> by adding one of the blocking constraints to <math>W_k; </math>

'''else'''

<math>W_{k+1}\longleftarrow W_k; </math>

'''end (for)''' </code>
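The equality-constrained subproblem solved at each iteration can be handled directly through its KKT linear system. A minimal sketch in Python (assuming NumPy; the data is illustrative, not from the text):

```python
import numpy as np

def solve_eqp(G, c, A, b):
    """Solve the equality-constrained QP  min 1/2 x'Gx + c'x  s.t.  Ax = b
    by solving its KKT system directly (assumes G positive definite on ker(A))."""
    n, m = G.shape[0], A.shape[0]
    K = np.block([[G, A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-c, b]))
    x, lam = sol[:n], -sol[n:]      # sign flip so that Gx + c = A' lam
    return x, lam

# illustrative subproblem: min x1^2 + x2^2 - 2x1 - 5x2  s.t.  x1 + x2 = 3
x, lam = solve_eqp(2 * np.eye(2), np.array([-2.0, -5.0]),
                   np.array([[1.0, 1.0]]), np.array([3.0]))
print(x, lam)
```

The sign of the returned multiplier matches the optimality test in the pseudocode above: a negative component signals a constraint that should leave the working set.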
 
==Interior Point Methods==
Interior-point methods, developed in the 1980s, are a class of algorithms for solving large-scale linear and quadratic programs using concepts from nonlinear programming. Unlike active set methods, which navigate along the boundary of the feasible region, interior-point methods approach the solution from within the interior, avoiding the boundary until convergence. These methods were inspired by the simplex method's poor worst-case complexity and the desire for algorithms with stronger theoretical guarantees, such as the ellipsoid method and Karmarkar's algorithm.
 
=== When to Use ===
Interior-point methods are particularly effective for large-scale optimization problems where the simplex method's boundary-focused approach may struggle. They require fewer iterations than simplex, although each iteration is computationally more intensive. Modern primal-dual interior-point methods are efficient, robust, and form the foundation of many current optimization software tools, making them the preferred choice for solving large or complex linear programs.
 
=== Implementation Steps (Convex Quadratic Program with a Positive Semi-Definite Matrix) ===
 
# Initialize
#* Analyze the optimization problem and constraints
#* Add slack variables to convert inequality constraints to equalities (if applicable), write the problem in general form, and take the barrier approach by adding a logarithmic barrier term
# Iterative Process
#* Define the KKT conditions
#* Define a complementary measure
#* Define the perturbed KKT condition
#* Apply Newton’s method to formulate a system of linear equations
# Solve
#* Solve and iterate until convergence.
 
=== Pseudocode ===
This is the pseudocode for the Basic Interior-Point Algorithm where:
 
* <math>A</math> is a matrix of constraint coefficients.
* <math>b</math> is a vector of constraints.
* <math>l</math> represents the lower bounds on the decision variables.
* <math>u</math> represents the upper bounds on the decision variables.
* <math>\mu</math> is the barrier parameter; a small positive scalar controlling the size of the barrier term added to enforce inequality constraints.
* <math>\epsilon</math> is the convergence tolerance for stopping criteria.
* <math>\alpha</math> is the step size for the line search during iterations.
* <math>k</math> is the iteration number.
* <math>Q</math> is the quadratic term coefficient.
* <math>c</math> is the linear term coefficient.
 
 
Choose <math>x_0</math> and <math>s_0 > 0 </math>, and compute initial values for the multipliers <math>y_0</math> and <math>z_0 > 0 </math>.
 
Select an initial barrier parameter <math>\mu_0>0</math> and parameters <math>\sigma, \tau \in (0,1)</math>. Set <math>k\leftarrow 0</math>.
 
'''repeat''' until a stopping test for the nonlinear program is satisfied.
 
'''repeat''' until <math>E(x_k, s_k, y_k, z_k; \mu_k) \leq \mu_k</math>
 
Solve <math>\begin{bmatrix} \bigtriangledown_{xx}^2 L & 0 & -A_E^T(x) & -A_i^T(x) \\ 0 & Z & 0 & S \\ A_E(x) & 0 & 0 & 0 \\ A_i(x) & -I & 0 & 0 \end{bmatrix}</math><math>\begin{bmatrix} p_x  \\ p_s \\ p_y \\ p_z \end{bmatrix}</math><math>=-\begin{bmatrix} \bigtriangledown f(x) - A_E^T (x) y - A_i^T(x)z \\ Sz-\mu e \\ c_E(x) \\ c_i(x) - s \end{bmatrix}</math>to obtain the search direction <math>p=(p_x, p_s, p_y, p_z);</math>
 
Compute <math>\alpha_s^{max}, \alpha_z^{max}</math> using:

<math>\alpha_s^{max} = \max \{ \alpha \in (0,1]: s + \alpha p_s \geq (1- \tau) s \}</math>

<math>\alpha_z^{max} = \max \{ \alpha \in (0,1]: z + \alpha p_z \geq (1- \tau) z \}</math>

Compute <math>(x_{k+1}, s_{k+1}, y_{k+1}, z_{k+1})</math> using:

<math>x^+ = x + \alpha_s^{max} p_x, \ s^+ = s + \alpha_s^{max} p_s , </math>

<math>y^+ = y + \alpha_z^{max} p_y, \ z^+ = z + \alpha_z^{max} p_z , </math>
 
Set <math>\mu_{k+1} \leftarrow \mu_k  </math> and <math>k\leftarrow k + 1; </math>

'''end'''

Choose <math>\mu_{k} \in (0, \sigma\mu_{k-1}); </math>

'''end'''
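The inner Newton loop above can be sketched for the simplest case, <math>\min \tfrac{1}{2}x^TQx+c^Tx</math> subject to <math>x \geq 0</math>, where the only constraints are the bounds. This is an illustrative simplification of the general algorithm, not the full method (assuming NumPy):

```python
import numpy as np

def qp_ipm_nonneg(Q, c, sigma=0.1, tau=0.995, tol=1e-9, max_iter=200):
    """Primal-dual interior-point sketch for: min 1/2 x'Qx + c'x  s.t.  x >= 0.
    KKT conditions: Qx + c - z = 0,  x_i z_i = 0,  x >= 0, z >= 0."""
    n = len(c)
    x = np.ones(n)          # primal iterate, kept strictly positive
    z = np.ones(n)          # dual multiplier for x >= 0
    for _ in range(max_iter):
        mu = x @ z / n                       # complementarity measure
        r_dual = Q @ x + c - z               # stationarity residual
        if np.linalg.norm(r_dual) < tol and mu < tol:
            break
        # Newton step on the perturbed KKT system:
        #   [Q  -I] [dx]    -[r_dual              ]
        #   [Z   X] [dz]  =  [x*z - sigma*mu      ]
        K = np.block([[Q, -np.eye(n)],
                      [np.diag(z), np.diag(x)]])
        rhs = -np.concatenate([r_dual, x * z - sigma * mu])
        d = np.linalg.solve(K, rhs)
        dx, dz = d[:n], d[n:]
        # fraction-to-boundary rule keeps x and z strictly positive
        alpha = 1.0
        for v, dv in ((x, dx), (z, dz)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, tau * np.min(-v[neg] / dv[neg]))
        x = x + alpha * dx
        z = z + alpha * dz
    return x

# illustrative problem: min x1^2 + x2^2 - 2x1 - 5x2  s.t.  x >= 0
x_star = qp_ipm_nonneg(2 * np.eye(2), np.array([-2.0, -5.0]))
print(x_star)
```

Each pass reduces the complementarity measure <math>\mu</math> by roughly the factor <math>\sigma</math>, mirroring the barrier-parameter update in the pseudocode.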
 
==Gradient Projection Methods==
The gradient projection method is an optimization technique used to solve constrained problems, particularly those with simple bound or inequality constraints. It combines gradient descent with a projection step that ensures iterates remain within the feasible region by projecting them back onto the constraint set. The method is iterative and adjusts the search direction based on the negative gradient of the objective function, while ensuring feasibility at each step. This approach is widely used for convex problems and has extensions for handling more complex constraint sets.
 
=== When to Use ===
The gradient projection method is best suited for problems with simple constraints, such as box constraints or convex inequality constraints, where projection onto the feasible region can be performed efficiently. It is effective for convex optimization problems where gradient-based methods converge to global optima. This method is often employed in large-scale optimization, machine learning, and signal processing applications due to its simplicity and ease of implementation. However, it may be less efficient for problems with complex or nonconvex constraints. Instead, other methods, such as active-set or interior-point methods, may be more appropriate.
 
=== Implementation Steps ===
 
# Initial
## Start by analyzing and visualizing the constraints. Pay special attention to constraints that are box-defined (e.g., variable bounds) or convex inequalities, as these are essential for the efficient execution of the gradient projection method. In this case, the box-defined constraints establish the feasibility region, referred to as the "box."
## Choose an initial feasible point that satisfies the constraints of the problem; this will be the starting point of the piecewise-linear path developed below
## Find the gradient of the objective function
# Develop Piecewise linear path          
## Insert the initial feasible point into the gradient of the objective function.
## Define the descent direction and project onto the feasible region with initial feasible point. This line will define the piecewise-linear path.
# Find the Cauchy point
## Analyze the path where the line segment breaks or changes its projected path on the feasibility region
## Compute the value of each breakpoint by substituting it into the piecewise-linear path equation
## Write the quadratic of this line segment with respect to these breakpoints
## Differentiate the quadratic with respect to the new breakpoint
## Identify if minimizer was found, if not, analyze the next breakpoint set
# Refine and execute
## Update the piecewise-linear path decision variable to include the minimizer
## Update the active set based on the new solution found with the minimizer
## Find the new gradient value with this point, if converged then the minimum has been found. If not, iterate and repeat steps 1.2-4.3.
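For a pure box constraint, the steps above reduce to a projected gradient iteration: step along the negative gradient, then clip back into the box. A minimal sketch (assuming NumPy; the fixed step size is a simplification of the Cauchy-point search described above):

```python
import numpy as np

def projected_gradient_qp(G, c, l, u, x0, tol=1e-8, max_iter=10000):
    """Gradient projection sketch for  min 1/2 x'Gx + c'x  s.t.  l <= x <= u.
    Each iteration takes a gradient step, then projects (clips) onto the box."""
    x = np.clip(x0, l, u)
    alpha = 1.0 / np.linalg.eigvalsh(G)[-1]    # safe step: 1 / Lipschitz constant
    for _ in range(max_iter):
        g = G @ x + c                          # gradient of the quadratic
        x_new = np.clip(x - alpha * g, l, u)   # projection onto the feasible box
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# illustrative problem: min x1^2 + x2^2 - 2x1 - 5x2  with 0 <= x1 <= 10, 0 <= x2 <= 2
x_star = projected_gradient_qp(2 * np.eye(2), np.array([-2.0, -5.0]),
                               np.zeros(2), np.array([10.0, 2.0]), x0=np.zeros(2))
print(x_star)
```

The projection step is what keeps every iterate feasible; here the upper bound on <math>x_2</math> becomes active at the solution.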
 
=== Pseudocode ===
 
This is the pseudocode for the gradient projection method algorithm where:
 
* <math>A </math> is a matrix of constraint coefficients.
* <math>x \in R^n </math> is the '''decision variable''' vector.
* <math>G \in R^{n \times n} </math> is a symmetric, positive semi-definite <math>n \times n</math> matrix representing the quadratic coefficients.
* <math>c \in R^n </math> is a linear coefficient vector.
* <math>l </math> represents the lower bounds on the decision variables.
* <math>u </math> represents the upper bounds on the decision variables.
 
Compute a feasible starting point <math>x_0; </math>
 
'''for''' <math>k = 0, 1, 2,... </math>
 
'''if''' <math>x^k </math> satisfies the KKT conditions for the quadratic programming problem:
 
minimize <math>q = \frac{1}{2}x^TGx+x^Tc </math>
 
subject to <math>l \leq x \leq u </math>
 
'''stop''' with solution <math>x^* = x^k; </math>
 
Set <math>x = x^k </math> and find the Cauchy point <math>x^c; </math>
 
Find an approximate solution <math>x^+  </math> of
 
minimize <math>q = \frac{1}{2}x^TGx + x^Tc </math>
 
subject to <math>x_i = x_i^c, i \in A(x^c), </math>
 
<math>l_i \leq x_i \leq u_i, i \not\in A(x^c) </math>
 
such that <math>q(x^+) \leq q(x^c)  </math> and <math>x^+ </math> is feasible;
 
<math>x^{k+1} \leftarrow x^+; </math>
 
'''end(for)'''
 
==Convexity in Quadratic Programming==
Convex quadratic programming problems occur when Q is positive semi-definite, which guarantees that any local minimum is a global minimum. Non-convex quadratic programming problems occur when Q is not positive semi-definite; the solutions may involve local minima, requiring more advanced techniques for global optimization (e.g., branch-and-bound, global optimization methods).
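Convexity can be checked numerically by inspecting the eigenvalues of <math>Q</math>. A small sketch (assuming NumPy; the matrices are illustrative):

```python
import numpy as np

def is_convex_qp(Q, tol=1e-10):
    """The objective 1/2 x'Qx + c'x is convex exactly when Q is positive
    semi-definite, i.e. every eigenvalue of the symmetrized Q is nonnegative."""
    Q_sym = 0.5 * (Q + Q.T)            # symmetrize before the eigenvalue check
    return bool(np.all(np.linalg.eigvalsh(Q_sym) >= -tol))

print(is_convex_qp(np.array([[10.0, -10.0], [-10.0, 16.0]])))  # True: convex
print(is_convex_qp(np.array([[-2.0, 0.0], [0.0, -2.0]])))      # False: non-convex
```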
 
=Numerical Examples=
Examples are provided for solving convex and non-convex quadratic programs using the active set, simplex-like, and branch and bound methods.
 
==Active Set Method Example==
Step-by-step instructions for determining the optimal solution of the following quadratic programming problem are provided:
 
'''minimize '''<math>f(x_1, x_2) = 5x_1^2+8x_2^2-10x_1x_2-50x_1-80x_2</math>
 
'''subject to the constraints:'''
 
# <math>g_1(x)=x_1+x_2-10\leq0</math>
# <math>g_2(x)=-x_1-x_2+5\leq0</math>
# <math>g_3(x)=x_1-3x_2+15\leq0</math>
 
=== Step 1: Reformat Constraints as Necessary ===
Constraints should be reformatted to be <math>\leq0</math>. All constraints are currently in this format; therefore, no changes are needed.
 
=== Step 2: Construct the Gradients ===
Compute the gradients for the objective function and constraints.
 
* <math>\bigtriangledown f = \begin{bmatrix} 10x_1-10x_2-50 \\ 16x_2-10x_1-80 \end{bmatrix}</math>
* <math>\bigtriangledown g_1 = \begin{bmatrix} 1  \\ 1 \end{bmatrix}</math>
* <math>\bigtriangledown g_2 = \begin{bmatrix} -1  \\ -1 \end{bmatrix}</math>
* <math>\bigtriangledown g_3 = \begin{bmatrix} 1  \\ -3 \end{bmatrix}</math>
 
=== Step 3: Determine if There is a Feasible Solution with All Constraints Set to Active ===
The system of equations can be solved to determine if there is a feasible solution with all 3 constraints set to active. First, solve for <math>x_1</math> using constraint (1) and substitute it into constraint (2).
 
From <math>g_1(x) = 0, x_1 + x_2 = 10 \rightarrow x_1 = -x_2 + 10</math>
 
<math>-x_1-x_2+5 = 0</math>
 
<math>-(-x_2+10)-x_2+5=0</math>
 
<math>x_2-10-x_2+5=0</math>
 
<math>-10+5=0</math>
 
<math>-5=0</math>
 
Solving the system with all constraints active is infeasible: the rows of the coefficient matrix are linearly dependent while the right-hand sides are inconsistent, so the system has no solution. Thus, the next steps will be to reduce the number of active constraints to determine a feasible solution.
 
=== Step 4: Determine Solution with Constraints (1) and (2) Set to Active ===
As determined in Step 3, solving with constraints (1) and (2) active results in a contradiction. Thus, this solution is infeasible.
 
=== Step 5: Determine Solution with Constraints (1) and (3) Set to Active ===
Substitute <math>x_1</math> using constraint (1) into constraint (3).
 
From <math>g_1(x) = 0, x_1 + x_2 = 10 \rightarrow x_1 = -x_2 + 10</math>
 
<math>x_1-3x_2 + 15 = 0</math>
 
<math>(-x_2 + 10) - 3x_2 + 15 = 0</math>
 
<math>-4x_2 + 25 = 0</math>
 
<math>-4x_2 = -25</math>
 
<math>x_2 = \frac{25}{4} = 6.25</math>
 
 
Substitute <math>x_2 = 6.25</math> into constraint (1).
 
<math>x_1 = -x_2 + 10</math>
 
<math>x_1 = -(6.25) + 10</math>
 
<math>x_1 = 3.75</math>
 
 
Check whether the solution satisfies primal feasibility for constraint (2), meaning <math>g_2 \leq0</math>:
 
<math>g_2(3.75, 6.25) = -(3.75) - (6.25) + 5 = -5</math>
 
 
Primal feasibility is satisfied since <math>g_2(3.75, 6.25) \leq 0</math>. Thus, the solution <math>(x_1, x_2) = (3.75, 6.25)</math> is feasible.
 
=== Step 6: Determine Solution with Constraints (2) and (3) Set to Active ===
Substitute <math>x_1</math> using constraint (2) into constraint (3).
 
From <math>g_2(x) = 0, -x_1-x_2+5=0 \rightarrow x_1 = -x_2 + 5</math>
 
<math>x_1 - 3x_2 + 15 = 0</math>
 
<math>(-x_2 +5) - 3x_2 + 15 = 0</math>
 
<math>-4x_2 + 20 = 0</math>
 
<math>-4x_2 = -20</math>
 
<math>x_2 = \frac{20}{4} = 5</math>
 
 
Substitute <math>x_2=5</math> into constraint (2).
 
<math>x_1 = -x_2 + 5</math>
 
<math>x_1 = -(5) + 5 = 0</math>
 
 
Check whether the solution satisfies primal feasibility for constraint (1), meaning <math>g_1 \leq0</math>:
 
<math>g_1(0, 5) = 0 + 5 - 10 = -5</math>
 
 
Primal feasibility is satisfied since <math>g_1(0,5) \leq 0</math>. Thus, the solution <math>(x_1, x_2) = (0, 5)</math> is feasible.
 
=== Step 7: Solve Objective Function Using Feasible Solutions ===
<math>f(x_1, x_2) = f(3.75, 6.25)</math>
 
<math>f(3.75, 6.25) = 5(3.75)^2 + 8(6.25)^2 - 10(3.75)(6.25) - 50(3.75) - 80(6.25) = -539.0625</math>
 
<math>f(x_1, x_2) = f(0, 5)</math>
 
<math>f(0,5) = 5(0)^2 + 8(5)^2-10(0)-50(0)-80(5) = -200</math>
 
=== Step 8: Determine Optimal Solution ===
The feasible solution with the minimum value is <math>f(3.75, 6.25) = -539.0625</math>. Thus, <math>(x_1, x_2) = (3.75, 6.25)</math> is the optimal solution.
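Since the objective is convex (its Hessian is positive definite), the result can be cross-checked with a general-purpose solver. A sketch using SciPy (an assumption; the starting point is chosen arbitrarily inside the feasible region):

```python
from scipy.optimize import minimize

def f(x):
    return 5*x[0]**2 + 8*x[1]**2 - 10*x[0]*x[1] - 50*x[0] - 80*x[1]

# SciPy's "ineq" convention is fun(x) >= 0, so each g_i <= 0 becomes -g_i >= 0
cons = [
    {"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 10)},    # g1
    {"type": "ineq", "fun": lambda x: -(-x[0] - x[1] + 5)},    # g2
    {"type": "ineq", "fun": lambda x: -(x[0] - 3*x[1] + 15)},  # g3
]
res = minimize(f, x0=[3.0, 7.0], constraints=cons)
print(res.x, res.fun)   # expect approximately (3.75, 6.25) and -539.0625
```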
==Simplex-Like Method Example==
Step-by-step instructions for determining the optimal solution of the following quadratic programming problem are provided:
 
'''Minimize''' <math>f(x_1, x_2) = \frac{1}{2} (x_1^2 + 3x_2^2) + 2x_1 + 6x_2 </math>
 
Subject to the constraints:
# <math> x_1 + x_2 \leq 5 </math>
# <math> 2x_1 + x_2 \leq 8 </math>
# <math> x_1 \geq0 </math>
# <math> x_2 \geq0 </math>
 
=== Step 1: Reformat Constraints as Necessary ===
Inequality constraints should be written in <math>\leq</math> form for the standard formulation. All constraints are currently in this format; therefore, no changes are needed. Non-negativity constraints should remain <math>\geq0</math>.
 
=== Step 2: Determine if the Objective Function and Constraints are Convex ===
To use the Simplex-like Method, both the objective function and constraints must be convex. For the objective function, convexity is proven by identifying if the eigenvalues of the Hessian matrix are <math>\geq0</math>. For the constraints, convexity is proven by identifying if the constraints are linear.
 
==== Step 2.1 Find the Eigenvalues of the Hessian Matrix for the Objective Function ====
Solve for the first order partial derivatives.

<math> f(x_1, x_2) = \frac{1}{2} (x_1^2 + 3x_2^2) + 2x_1 + 6x_2 </math>

<math>{\partial f\over\partial x_1}= x_1 + 2</math>

<math>{\partial f\over\partial x_2}= 3x_2 + 6</math>
 
Solve for the second order partial derivatives.
 
<math>{\partial^2 f\over\partial x_1^2}= 1</math>
 
<math>{\partial^2 f\over\partial x_2^2}= 3</math>
 
<math>{\partial^2 f\over\partial x_1 \partial x_2}=0</math>
 
<math>{\partial^2 f\over\partial x_2 \partial x_1}=0</math>
 
Construct the Hessian Matrix
 
<math>H = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}</math>
 
Determine the eigenvalues
 
<math>\lambda_1 = 1</math>
 
<math>\lambda_2 = 3</math>
 
The objective function is convex as <math>\lambda_1, \lambda_2 \geq0</math>.
 
==== Step 2.2 Determine if the Constraints are Convex ====
# <math>x_1 + x_2 \leq5</math>
# <math>2x_1 + x_2 \leq8</math>
# <math>x_1 \geq0</math>
# <math>x_2 \geq0</math>
 
All constraints are linear and therefore convex. This method can be applied to solve as both the objective function and constraints are convex.
 
=== Step 3: Determine the Feasible Region ===
Change the inequalities to equalities.
# <math>x_1 + x_2 = 5</math>
# <math>2x_1 + x_2 = 8</math>
# <math>x_1 = 0</math>
# <math>x_2 = 0</math>
 
Solve for the intersection points using all combinations of the constraints, and keep only those that satisfy every original inequality.
 
* <math> x_1 = 0 </math> and <math> x_2 = 0 </math>
** Intersection at (0,0); feasible since <math> 0 \leq 5 </math> and <math> 0 \leq 8 </math>
* <math> x_1 = 0 </math> and <math>x_1 + x_2 = 5</math>
** <math> 0 + x_2 = 5 </math>
** Intersection at (0,5); feasible since <math> 2(0) + 5 = 5 \leq 8 </math>
* <math> x_1 = 0 </math> and <math> 2x_1 + x_2 = 8 </math>
** <math> 2(0) + x_2 = 8 </math>
** Intersection at (0,8); infeasible since <math> 0 + 8 = 8 > 5 </math> violates constraint (1)
* <math> x_1 + x_2 = 5 </math> and <math> x_2 = 0 </math>
** <math> x_1 + 0 = 5 </math>
** Intersection at (5,0); infeasible since <math> 2(5) + 0 = 10 > 8 </math> violates constraint (2)
* <math> 2x_1 + x_2 = 8 </math> and <math> x_2 = 0 </math>
** <math> 2x_1 + 0 = 8 </math>
** <math> x_1 = \frac{8}{2} = 4 </math>
** Intersection at (4,0); feasible since <math> 4 + 0 = 4 \leq 5 </math>
* <math> x_1 + x_2 = 5 </math> and <math> 2x_1 + x_2 = 8 </math>
** <math>(2x_1 + x_2) - (x_1 + x_2) = 8 - 5 </math>
** <math> x_1 = 3 </math>
** Substitute <math> x_1 = 3 </math> into the first equation: <math> 3 + x_2 = 5 \rightarrow x_2 = 2 </math>
** Intersection at (3,2); feasible since both constraints hold with equality
 
The feasible region is the bounded, convex polygon with vertices (0,0), (4,0), (3,2), and (0,5).
 
=== Step 4: Solve Objective Function using Feasible Solutions ===
<math> f(x_1, x_2) = \frac{1}{2}(x_1^2 + 3x_2^2) + 2x_1 + 6x_2 </math>
* Solve at (0,0)
** <math> f(0,0) = \frac{1}{2} (0^2 + 3(0)^2) + 2(0) + 6(0) = 0 </math>
* Solve at (4,0)
** <math> f(4,0) = \frac{1}{2} (4^2 + 3(0)^2) + 2(4) + 6(0) </math>
** <math> f(4,0) = \frac{1}{2} (16) + 8 + 0 </math>
** <math> f(4,0) = 16 </math>
* Solve at (3,2)
** <math> f(3,2) = \frac{1}{2} (3^2 + 3(2)^2) + 2(3) + 6(2) </math>
** <math> f(3,2) = \frac{1}{2} (21) + 6 + 12 </math>
** <math> f(3,2) = 28.5 </math>
* Solve at (0,5)
** <math> f(0,5) = \frac{1}{2} (0^2 + 3(5)^2) + 2(0) + 6(5) </math>
** <math> f(0,5) = \frac{1}{2} (75) + 0 + 30 </math>
** <math> f(0,5) = 67.5 </math>
 
=== Step 5: Identify Edge Solutions along the Feasible Region ===
* <math> P_1 = (0,0) </math> and <math> P_2 = (4,0) </math>
* Since <math> x_2 = 0 </math> is fixed along this edge, solve the objective function directly
** <math> f(x_1, 0) = \frac{1}{2} (x_1^2 + 0) + 2x_1 + 0 = \frac{1}{2} x_1^2 + 2x_1 </math>
** <math> \frac{\partial f}{\partial x_1} = x_1 + 2 = 0 </math>
*** <math> x_1 = -2 </math> which is not within [0,4] and therefore not a solution
** At <math> x_1 = 0, f(0,0) = 0 </math>
** At <math> x_1 = 4, f(4,0) = 16 </math>
** The minimum along this edge is <math> f(0,0) = 0 </math>
* <math> P_1 = (4,0) </math> and <math> P_2 = (3,2) </math>
* Parameterize the edge to represent the points along it
** <math> x_1(t) = 4 - t </math>
** <math> x_2(t) = 2t </math>
** <math> 0 \leq t \leq 1 </math>
** Use the parameterization in the objective function
** <math> f(x_1(t), x_2(t)) = \frac{1}{2} ((4 - t)^2 + 3(2t)^2) + 2(4 - t) + 6(2t) </math>
*** <math> = \frac{1}{2} (16 - 8t + t^2 + 12t^2) + 8 - 2t + 12t </math>
*** <math> = \frac{13}{2}t^2 + 6t + 16 </math>
** <math> \frac{\partial f}{\partial t} = 13t + 6 = 0 </math>
*** <math> t = -\frac{6}{13} </math> which is not within [0,1] and therefore not a solution
** At <math> t = 0, f(4,0) = 16 </math>
** At <math> t = 1, f(3,2) = 28.5 </math>
** The minimum along this edge is <math> f(4,0) = 16 </math>
* <math> P_1 = (3,2) </math> and <math> P_2 = (0,5) </math>
** <math> x_1(t) = 3 - 3t </math>
** <math> x_2(t) = 2 + 3t </math>
** <math> 0 \leq t \leq 1 </math>
** <math> f(x_1(t), x_2(t)) = \frac{1}{2} ((3 - 3t)^2 + 3(2 + 3t)^2) + 2(3 - 3t) + 6(2 + 3t) </math>
*** <math> = \frac{1}{2} (9 - 18t + 9t^2 + 12 + 36t + 27t^2) + 6 - 6t + 12 + 18t </math>
*** <math> = \frac{1}{2} (21 + 18t + 36t^2) + 18 + 12t </math>
*** <math> = 18t^2 + 21t + 28.5 </math>
** <math> \frac{\partial f}{\partial t} = 36t + 21 = 0 </math>
*** <math> t = - \frac{7}{12} </math> which is not within [0,1] and therefore not a solution
** At <math> t = 0, f(3,2) = 28.5 </math>
** At <math> t = 1, f(0,5) = 67.5 </math>
** The minimum along this edge is <math> f(3,2) = 28.5 </math>
* <math> P_1 = (0,5) </math> and <math> P_2 = (0,0) </math>
* Since <math> x_1 = 0 </math> is fixed along this edge, solve the objective function directly
** <math> f(0, x_2) = \frac{1}{2} (0^2 + 3x_2^2) + 2(0) + 6x_2 = \frac{3}{2} x_2^2 + 6x_2</math>
** <math> \frac{\partial f}{\partial x_2} = 3x_2 + 6 = 0 </math>
*** <math> x_2 = -2 </math> which is not within [0,5] and therefore not a solution
** At <math> x_2 = 0, f(0,0) = 0 </math>
** At <math> x_2 = 5, f(0,5) = 67.5 </math>
** The minimum along this edge is <math> f(0,0) = 0 </math>
 
=== Step 6: Determine the Optimal Solution ===
Compare all vertex and edge solutions to minimize the objective function.
<math> P_1 = (0,0) </math> and <math> P_2 = (4,0) </math>
* The minimum solution is at <math> f(0,0) = 0 </math>
<math> P_1 = (4,0) </math> and <math> P_2 = (3,2) </math>
* The minimum solution is at <math> f(4,0) = 16 </math>
<math> P_1 = (3,2) </math> and <math> P_2 = (0,5) </math>
* The minimum solution is at <math> f(3,2) = 28.5 </math>
<math> P_1 = (0,5) </math> and <math> P_2 = (0,0) </math>
* The minimum solution is at <math> f(0,0) = 0 </math>
The optimal solution occurs at <math> f(0,0) = 0 </math>, i.e., at <math> (x_1, x_2) = (0,0) </math>.
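As a cross-check, the same program can be handed to a general-purpose solver. A sketch using SciPy (an assumption; any convex QP solver would do):

```python
from scipy.optimize import minimize

def f(x):
    return 0.5*(x[0]**2 + 3*x[1]**2) + 2*x[0] + 6*x[1]

cons = [
    {"type": "ineq", "fun": lambda x: 5 - x[0] - x[1]},    # x1 + x2 <= 5
    {"type": "ineq", "fun": lambda x: 8 - 2*x[0] - x[1]},  # 2x1 + x2 <= 8
]
res = minimize(f, x0=[1.0, 1.0], bounds=[(0, None), (0, None)], constraints=cons)
print(res.x, res.fun)
```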
 
==Non-Convex Example Using Branch and Bound==
Step-by-step instructions for determining the optimal solution of the following quadratic programming problem using branch and bound are provided:
 
'''maximize '''<math>f(x_1, x_2) = -x_1^2-x_2^2+5x_1+4x_2</math>
 
'''subject to the constraints:'''
 
# <math>x_1 \in \{0, 1, 2\}</math>
# <math>0 \leq x_2 \leq3</math>
 
=== Step 1: Determine if the Objective Function and Constraints are Convex ===
To determine whether the objective function is convex, the eigenvalues of its Hessian matrix are determined. The constraints are then analyzed for linearity.


==== Step 1.1 Find the Eigenvalues of the Hessian Matrix for the Objective Function ====
[[File:QPPic.jpg|frame|Optimization of a quadratic function.<span style="font-size: 8pt; position:relative; bottom: 0.3em;">12</span>]]
Solve for the first order partial derivatives.
 
<math>f(x_1, x_2) = -x_1^2-x_2^2+5x_1+4x_2</math>
 
<math>{\partial f\over\partial x_1}=-2x_1+5</math>
 
<math>{\partial f\over\partial x_2}=-2x_2+4</math>
 
 
Solve for the second order partial derivatives.
 
<math>{\partial^2 f\over\partial x_1^2}=-2</math>
 
<math>{\partial^2 f\over\partial x_2^2}=-2</math>
 
<math>{\partial^2 f\over\partial x_1 \partial x_2}=0</math>
 
<math>{\partial^2 f\over\partial x_2 \partial x_1}=0</math>
 
 
Construct the Hessian Matrix.
 
<math>H = \begin{bmatrix} -2 & 0 \\ 0 & -2 \end{bmatrix}</math>
 
 
Determine the eigenvalues.
 
<math>\lambda_1 = -2</math>
 
<math>\lambda_2 = -2</math>
 
The Hessian is negative definite since both eigenvalues are negative, making the objective function concave (and therefore non-convex as a function).
 
==== Step 1.2 Determine if the Constraints are Convex ====
The integer constraint on <math>x_1</math> makes the feasible region, and hence this quadratic programming problem, non-convex.
 
=== Step 2: Determine the Relaxed Solution ===
To help determine whether to prune future branches, the optimization will be solved by relaxing the integer constraint from <math>x_1\in\{0,1,2\}</math>  to <math>0\leq x_1 \leq2</math>. Thus, the relaxed problem formulation is as follows:
 
'''maximize '''<math>f(x_1, x_2) = -x_1^2-x_2^2+5x_1+4x_2</math>
 
'''subject to the constraints:'''
 
# <math>0 \leq x_1 \leq 2</math>
# <math>0 \leq x_2 \leq 3</math>
 
To solve the relaxed formulation, the gradient is set to 0 and the resulting system of equations is solved.

From <math>{\partial f\over\partial x_1}=0: -2x_1 + 5 = 0 \rightarrow x_1 = 2.5</math>

From <math>{\partial f\over\partial x_2}=0: -2x_2 + 4 = 0 \rightarrow x_2 = 2</math>
 
<math>f(2.5, 2) = -(2.5)^2 - (2)^2 + 5(2.5) + 4(2)</math>
 
<math>f(2.5, 2) = -6.25 - 4 + 12.5 + 8 = 10.25</math>
 
The relaxed solution results in an optimal value of 10.25. However, <math>x_1</math> was a non-integer value. Therefore, the branch and bound method will be applied to find a solution that satisfies all constraints. The optimal value must be no greater than 10.25 since this was the solution found by relaxing constraints.
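The relaxed stationary point and the resulting upper bound can be verified with a few lines of NumPy (a minimal sketch of the gradient solve above):

```python
import numpy as np

# Setting the gradient of f(x1, x2) = -x1^2 - x2^2 + 5*x1 + 4*x2 to zero
# gives the linear system -2*I x = (-5, -4).
A = np.array([[-2.0, 0.0],
              [0.0, -2.0]])
x = np.linalg.solve(A, np.array([-5.0, -4.0]))  # stationary point (2.5, 2.0)

def f(x1, x2):
    return -x1**2 - x2**2 + 5*x1 + 4*x2

upper_bound = f(*x)   # 10.25, the bound used to prune branches
print(x, upper_bound)
```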
 
=== Step 3: Apply Branch and Bound ===
Since <math>x_1</math> is an integer, the selected branches are <math>x_1=0, x_1=1,</math> and <math>x_1=2</math>.
 
==== Step 3.1: Branch at <math>x_1 = 0</math> ====
Substitute <math>x_1 = 0</math> into the objective function.
 
<math>f(0, x_2) = -(0)^2 - x_2^2 + 5(0) + 4x_2</math>
 
<math>f(0, x_2) = -x_2^2 + 4x_2</math>
 
 
Determine the critical points by setting the first derivative to 0.
 
<math>{d \over dx_2}(-x_2^2+4x_2) = -2x_2 + 4 =0</math>
 
<math>-2x_2=-4</math>
 
<math>x_2=2</math>
 
 
Evaluate the objective function at the critical point <math>(x_2 = 2)</math> and boundary points <math>(x_2 = 0, x_2 = 3)</math>.  
 
<math>f(0, 0) = -(0)^2 - (0)^2 + 5(0) + 4(0) = 0</math>
<math>f(0,2) = -(0)^2 - (2)^2 + 5(0) +4(2) = 4</math>
<math>f(0, 3) = -(0)^2 - (3)^2 + 5(0) + 4(3) = 3</math>
 
The maximum value of these solutions is 4, which happens when <math>x_1 = 0</math> and <math>x_2 = 2</math>.
 
 
==== Step 3.2: Branch at <math>x_1 = 1</math> ====
 
Substitute <math>x_1 = 1</math> into the objective function.  
 
<math>f(1, x_2) = -(1)^2 - x_2^2 + 5(1) + 4x_2</math>
 
<math>f(1, x_2) = -x_2^2 + 4x_2 + 4</math>
 
 
 
Determine the critical points by setting the first derivative to 0.


<math>{d \over dx_2}(-x_2^2 + 4x_2 + 4) = -2x_2 + 4 = 0</math>

<math>-2x_2 = -4</math>

<math>x_2 = 2</math>


Evaluate the objective function at the critical point <math>(x_2 = 2)</math> and boundary points <math>(x_2 = 0, x_2 = 3)</math>.

<math>f(1,0) = -(1)^2 - (0)^2 + 5(1) + 4(0) = 4</math>

<math>f(1,2) = -(1)^2 - (2)^2 + 5(1) + 4(2) = 8</math>

<math>f(1,3) = -(1)^2 - (3)^2 + 5(1) + 4(3) = 7</math>

The maximum value of these solutions is 8, which happens when <math>x_1 = 1</math> and <math>x_2 = 2</math>.

==== Step 3.3: Branch at <math>x_1 = 2</math> ====
Substitute <math>x_1 = 2</math> into the objective function.

<math>f(2, x_2) = -(2)^2 - x_2^2 + 5(2) + 4x_2</math>

<math>f(2, x_2) = -x_2^2 + 4x_2 + 6</math>


Determine the critical points by setting the first derivative to 0.

<math>{d \over dx_2}(-x_2^2 + 4x_2 + 6) = -2x_2 + 4 = 0</math>

<math>-2x_2 = -4</math>

<math>x_2 = 2</math>


Evaluate the objective function at the critical point <math>(x_2 = 2)</math> and boundary points <math>(x_2 = 0, x_2 = 3)</math>.

<math>f(2,0) = -(2)^2 - (0)^2 + 5(2) + 4(0) = 6</math>

<math>f(2,2) = -(2)^2 - (2)^2 + 5(2) + 4(2) = 10</math>

<math>f(2,3) = -(2)^2 - (3)^2 + 5(2) + 4(3) = 9</math>

The maximum value of these solutions is 10, which happens when <math>x_1 = 2</math> and <math>x_2 = 2</math>.

==== Step 3.4: Compare Branches ====

The best solution across all branches was <math>x_1 = 2</math> and <math>x_2 = 2</math>, resulting in an optimal value of 10.
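The branch comparison can be double-checked by enumerating the integer branches directly; in each branch the one-dimensional concave maximizer <math>x_2 = 2</math> is simply clamped to the interval [0, 3] (a minimal sketch):

```python
def f(x1, x2):
    return -x1**2 - x2**2 + 5*x1 + 4*x2

best = None
for x1 in (0, 1, 2):                 # the three integer branches
    x2 = min(max(2.0, 0.0), 3.0)     # unconstrained maximizer in x2, clamped to [0, 3]
    value = f(x1, x2)
    if best is None or value > best[0]:
        best = (value, x1, x2)

print(best)   # (10.0, 2, 2.0), matching the branch-and-bound result
```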


=Applications=
This section explores applications of quadratic programming in fields including finance, logistics management, data science, and various branches of engineering.


==Finance: Portfolio Optimization==
In finance, quadratic programming is commonly used when formulating an investment portfolio that balances returns and risk to meet the investor's goal. Nobel Prize recipient Harry Markowitz developed a portfolio model using quadratic programming to maximize returns while minimizing risk, a model that would later become a cornerstone of Modern Portfolio Theory.<sup>10</sup> Markowitz defined risk as the variance of the portfolio's return, which captures the portfolio's return volatility. His method estimates stock returns from historical data and formulates risk from the variance between projected and realized returns. Given a budget and a risk tolerance, the model then selects the combination of stocks that maximizes returns within the defined constraints.<sup>10,11</sup>
==Quadratic Knapsack Problem (QKP)==
The traditional knapsack problem considers items <math>i</math> with associated values <math>v_i</math> and weights <math>w_i</math>, and its objective is to maximize the value of the items held in the knapsack without exceeding the knapsack's weight limit. The decision variables define which items to include in the knapsack. This formulation has proven beneficial when optimizing cargo loads and establishing transportation routes.

The QKP is an augmentation of the classic knapsack problem in which the objective function is quadratic, containing pairwise products of the decision variables, but it retains the same goal of maximizing the value of the items in the knapsack. This extension, initially proposed by Gallo in 1977, makes it possible to model items whose values depend on which other items are selected.<sup>12</sup> The QKP has applications in solving more complicated cargo-load problems, telecommunications, and electrical engineering.
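As an illustration, a tiny QKP instance can be solved by exhaustive search; the item values, pairwise bonuses, weights, and capacity below are made up for this sketch, and brute force is only viable for very small instances:

```python
from itertools import combinations

v = [4, 2, 1, 5]                       # item values (hypothetical data)
p = {(0, 1): 3, (1, 2): 1, (0, 3): 2}  # extra value earned when both items are packed
w = [3, 2, 2, 4]                       # item weights (hypothetical data)
C = 7                                  # knapsack capacity

def value(items):
    base = sum(v[i] for i in items)
    pairs = sum(q for (i, j), q in p.items() if i in items and j in items)
    return base + pairs

feasible = (s for r in range(len(v) + 1)
            for s in combinations(range(len(v)), r)
            if sum(w[i] for i in s) <= C)
best = max(feasible, key=value)
print(best, value(best))   # a best feasible subset; its value is 11
```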
To expand on an electrical engineering application: in microchip design, quadratic programming has been used to optimize the placement of chip components to minimize costs and component connections, a problem similar in nature to the QKP.<sup>13</sup>


==Support Vector Machines (SVMs)==
In machine learning, SVMs are learning models used for classification and regression tasks; use cases include facial and text recognition. SVMs use quadratic programming to optimize a hyperplane in a recognition task, where the objective is to maximize the margin between classes while minimizing classification errors, commonly subject to linear constraints. Algorithms such as sequential minimal optimization (SMO) have emerged to keep the computation tractable as data sets grow, by breaking the problem into smaller quadratic programming subproblems.<sup>14</sup>

==Plug-in Hybrid Electric Vehicle Power Management==
Plug-in hybrid electric vehicles (PHEVs) rely on an optimized power-management program to operate efficiently. A PHEV draws power from its batteries until they are depleted before switching to its hybrid combustion-electric power cycle. Power management for PHEVs becomes a quadratic programming problem in which the objective is to minimize a quadratic function of fuel costs, optimizing the power cycles while meeting performance demands.<sup>15</sup>


{| class = "wikitable" style="margin:auto"
|+ Software tools that can be used in Quadratic Programming
|-
! Tool !! Description
|-
| Python || Python has multiple packages available to handle quadratic programming, such as Pyomo, SciPy, quadprog, and CVXPY, with each library bringing its own strengths to a problem.
|-
| MATLAB || MathWorks' MATLAB Optimization Toolbox is an add-on package suitable for solving problems requiring quadratic programming, with either a problem-based or solver-based approach. If solver-based, there are three algorithms available with quadprog: interior-point-convex, trust-region-reflective, and active-set.
|-
| GAMS || GAMS Gurobi suite contains several algorithms that are suitable for quadratic programming. For example, to solve a convex QCP, the algorithm used is the parallel barrier algorithm.
|-
| Excel Solver || The Solver add-in is suitable for solving linear and nonlinear problems, which can include quadratic programming problems.
|-
| WOLFRAM || The Wolfram Language & System solver is a versatile tool and can solve nonlinear quadratic problems.
|-
| SNOPT || Uses sequential quadratic programming, which solves multiple smaller quadratic subproblems.
|-
|Julia || Julia offers the JuMP and Convex.jl packages, which leverage solvers like Gurobi and MOSEK to support quadratic programming problems.
|}


=Conclusion=
Quadratic programming and its ability to optimize quadratic functions is a vastly useful tool for solving real-world problems, which are often modeled by quadratic functions. For this reason, QP has been used to model a variety of problems from signal processing to game theory to economics. As one of the simplest types of non-linear programming, QP can be used to solve quadratic objectives with linear constraints, which is especially straightforward when the objective function is convex, and it can also serve as a step in more complicated optimization methods such as sequential quadratic programming. Though its theory dates only to the 1940s and 1950s, QP's application has broadened to many fields including finance, scheduling, and engineering modeling.


=References=
1. J. Nocedal and S. J. Wright. Numerical Optimization. ''Springer'' (1999). Web. 2 December 2024.<br/>
2. H. B. Mann. Quadratic Forms with Linear Constraints. ''The American Mathematical Monthly'' (1943). Web. 2 December 2024.<br/>
3. N. I. M. Gould and P. L. Toint. A Quadratic Programming Bibliography. ''RAL Numerical Analysis Group Internal Report'' (2012). Web. 2 December 2024.<br/>
4. E. W. Barankin and R. Dorfman. Toward Quadratic Programming. Report to the Logistics Branch, Office of Naval Research (1955). Web. 2 December 2024.<br/>
5. M. Frank and P. Wolfe. An Algorithm for Quadratic Programming. ''Naval Research Logistics Quarterly'', 3 (1956). Web. 2 December 2024.<br/>
6. R. J. Vanderbei. Linear Programming: Foundations and Extensions. ''Springer International Publishing'' (2008). Web. 2 December 2024.<br/>
7. C. Floudas and V. Visweswaran. Quadratic Optimization. ''Nonconvex Optimization and Its Applications'', 2 (1995). Web. 2 December 2024.<br/>
8. R. J. Vanderbei. Linear Programming: International Series in Operations Research & Management Science. ''Springer International Publishing'' (2008). Web. 2 December 2024.<br/>
9. T. V. Mikosch, S. Resnick, B. Zwart and T. Dieker. Numerical Optimization: Springer Series in Operations Research and Financial Engineering. ''Springer International Publishing'' (2006). Web. 2 December 2024.<br/>
10. B. McCarl, et al. Quadratic Programming Applications. ''Omega'', vol. 5, no. 1 (1977). Web. 2 December 2024.<br/>
11. S. Boyd and L. Vandenberghe. Convex Optimization. ''Cambridge University Press'' (2004). Web. 24 November 2024.<br/>
12. G. Gallo, P. L. Hammer and B. Simeone. Quadratic Knapsack Problems. In: Padberg, M. W. (ed.) Combinatorial Optimization. Mathematical Programming Studies, vol. 12. ''Springer, Berlin, Heidelberg'' (1980). Web. 24 November 2024.<br/>
13. N. A. Sherwani. ''Algorithms for VLSI Physical Design Automation'' (1993). Web. 24 November 2024.<br/>
14. J. Platt. Fast Training of Support Vector Machines Using Sequential Minimal Optimization. ''Microsoft Research'', 17 Oct. 2018, www.microsoft.com/en-us/research/publication/fast-training-of-support-vector-machines-using-sequential-minimal-optimization/.<br/>
15. Z. Zhou and C. Mi. Power Management of PHEV Using Quadratic Programming. ''Int. J. Electric and Hybrid Vehicles'', Vol. 3, No. 3 (2011). Web. 24 November 2024.

Revision as of 13:38, 10 December 2024


Algorithm Discussion

General Problem Formulation

Quadratic programming problems are typically formatted as minimization problems, and the general mathematical formulation is:

minimize f(x) = (1/2) x^T Q x + c^T x

subject to: Ax ≤ b, x ≥ 0

Where:

  • x is the decision variable vector.
  • Q is a symmetric, positive semi-definite matrix representing the quadratic coefficients.
  • A is the inequality constraint matrix.
  • b is an m-dimensional vector representing a constraint boundary.
  • c is a linear coefficient vector.
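A tiny numerical instance of the general form min (1/2) x^T Q x + c^T x, with arbitrarily chosen data: when no constraints are binding, the minimizer satisfies the stationarity condition Qx + c = 0, which is a single linear solve.

```python
import numpy as np

Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])   # symmetric, positive definite
c = np.array([-2.0, -4.0])

x_star = np.linalg.solve(Q, -c)                   # gradient Qx + c vanishes here
objective = 0.5 * x_star @ Q @ x_star + c @ x_star
print(x_star, objective)                          # [1. 2.] -5.0
```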

Dual Formulation

The general mathematical formulation for the QP dual is:

maximize -(1/2) x^T Q x - b^T λ

subject to: Q x + A^T λ + c = s, with λ ≥ 0 and s ≥ 0

Where:

  • λ is an m-dimensional dual variable vector.
  • x is the primal decision variable.
  • s is the slack variable.
  • Q is a symmetric, positive semi-definite matrix representing the quadratic coefficients.
  • A is the inequality constraint matrix.
  • b is an m-dimensional vector representing a constraint boundary.
  • c is the linear coefficient vector carried over from the primal objective.

The general conditions for using QP for an optimization problem begin with having a quadratic objective function accompanied by linear constraints. For convex problems, the matrix Q defined above must be positive semi-definite; if it is not, the problem is non-convex and may have multiple local minima. As later sections discuss, problem dimensions may vary in size, but this is rarely an obstacle, since certain quadratic programming algorithms are tailored to large-scale computational demands. Finally, the feasible region must be non-empty; otherwise there is no solution.

Active Set Methods

The active set algorithm is a method used to solve quadratic programming problems by iteratively identifying and working with the set of constraints that are most likely to influence the solution, called the active set. More specifically, the algorithm maintains a subset of constraints that are treated as equalities at each step, solving a simplified quadratic programming problem for this subset. Constraints can be added or removed from the active set as the solution progresses until optimality conditions are satisfied.

When to Use

Active set methods are best suited for quadratic programming problems with manageable dimensions, as they exploit the problem's structure and update estimates of the active constraints iteratively. For problems where nonlinearity or degeneracy complicates the constraint structure, active set methods remain useful as a broader class, since they generalize the simplex approach to handle quadratic or nonlinear constraints. For large-scale problems, or when simplex-like behavior degrades due to its exponential worst-case complexity, alternative methods such as interior-point techniques may be more appropriate.

Implementation Steps

  1. Start with an Initial Solution and Active Set
    • Begin with a feasible point that satisfies all constraints.
    • Identify which constraints are active (equality constraints or inequality constraints that are tight).
  2. Iterative Process:
    • Solve a Reduced Problem: Fix the active constraints as equalities and solve the resulting smaller QP problem.
    • Check Optimality: Verify if the current solution satisfies the Karush-Kuhn-Tucker conditions.
    • Update the Active Set:
      • Add violated constraints to the active set if they are not satisfied.
      • Remove constraints from the active set if they are no longer binding.
  3. Repeat Until Convergence:
    • Iterate through the process until the optimal solution is found, ensuring all constraints are satisfied.

Pseudocode

This is the algorithm for the Active-Set Method for convex QP, where:

  • x is the decision variable vector.
  • Q is a symmetric, positive semi-definite matrix representing the quadratic coefficients.
  • c is the linear coefficient vector.
  • W is the working set of active constraints.


Compute a feasible starting point x_0;
Set W_0 to be a subset of the active constraints at x_0;
for k = 0, 1, 2, ...
    Solve the equality-constrained subproblem for the step p_k;
    if p_k = 0
        Compute Lagrange multipliers λ_i that satisfy Σ_{i ∈ W_k} a_i λ_i = Q x_k + c,
        with i ∈ W_k;
        if λ_i ≥ 0 for all i ∈ W_k
            stop with solution x* = x_k;
        else
            j ← arg min_{j ∈ W_k} λ_j; x_{k+1} ← x_k; W_{k+1} ← W_k \ {j};
    else
        Compute the step length α_k and set x_{k+1} ← x_k + α_k p_k;
        if there are blocking constraints:
            obtain W_{k+1} by adding one of the blocking constraints to W_k;
        else
            W_{k+1} ← W_k;
end (for)
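The active-set procedure described above can be sketched as a small solver for convex QPs of the form minimize ½xᵀQx + cᵀx subject to Ax ≤ b. This is a minimal illustration with numpy, not production code; the test problem and starting working set are adapted from a standard textbook example, and all names are our own:

```python
import numpy as np

def active_set_qp(Q, c, A, b, x0, W0, max_iter=100, tol=1e-9):
    """Minimize 0.5 x'Qx + c'x subject to A x <= b, starting from a
    feasible point x0 with an initial working set W0 (row indices of A)."""
    x, W, n = x0.astype(float).copy(), list(W0), len(x0)
    for _ in range(max_iter):
        g = Q @ x + c
        if W:
            Aw, k = A[W], len(W)
            # KKT system of the equality-constrained subproblem:
            # minimize 0.5 p'Qp + g'p  subject to  Aw p = 0
            K = np.block([[Q, Aw.T], [Aw, np.zeros((k, k))]])
            sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(k)]))
            p, lam = sol[:n], sol[n:]
        else:
            p, lam = np.linalg.solve(Q, -g), np.array([])
        if np.linalg.norm(p) < tol:
            if lam.size == 0 or lam.min() >= -tol:
                return x                    # all multipliers nonnegative: optimal
            W.pop(int(np.argmin(lam)))      # drop the most negative multiplier
        else:
            alpha, blocking = 1.0, None     # longest feasible step along p
            for i in range(len(b)):
                if i not in W and A[i] @ p > tol:
                    ratio = (b[i] - A[i] @ x) / (A[i] @ p)
                    if ratio < alpha:
                        alpha, blocking = ratio, i
            x = x + alpha * p
            if blocking is not None:
                W.append(blocking)          # add the blocking constraint
    return x

# Test problem: minimize (x1 - 1)^2 + (x2 - 2.5)^2 over a pentagon.
Q = 2 * np.eye(2)
c = np.array([-2.0, -5.0])
A = np.array([[-1., 2.], [1., 2.], [1., -2.], [-1., 0.], [0., -1.]])
b = np.array([2., 6., 2., 0., 0.])
x_opt = active_set_qp(Q, c, A, b, x0=np.array([2., 0.]), W0=[2, 4])  # -> [1.4, 1.7]
```

Starting at the vertex (2, 0), the solver drops both initial working constraints, takes two unblocked steps, picks up one blocking constraint, and verifies optimality via nonnegative multipliers.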

Interior Point Methods

Interior-point methods, developed in the 1980s, are a class of algorithms for solving large-scale linear programs using concepts from nonlinear programming. Unlike the active set methods, which navigate along the boundaries of the feasible region by testing vertices, interior-point methods approach the solution from within or near the interior, avoiding boundary constraints. These methods were inspired by the simplex method's poor worst-case complexity and the desire for algorithms with stronger theoretical guarantees, such as the ellipsoid method and Karmarkar's algorithm.

When to Use

Interior-point methods are particularly effective for large-scale optimization problems where the simplex method's boundary-focused approach may struggle. They require fewer iterations than simplex, although each iteration is computationally more intensive. Modern primal-dual interior-point methods are efficient, robust, and form the foundation of many current optimization software tools, making them the preferred choice for solving large or complex linear programs.

Implementation Steps (Convex Quadratic Program with a Positive Semi-Definite Matrix)

  1. Initialize
    • Analyze the optimization problem and its constraints.
    • Add slack variables to account for any inequalities, write the problem in general form, and take the barrier approach by adding a logarithmic barrier term.
  2. Iterative Process
    • Define the KKT conditions
    • Define a complementarity measure
    • Define the perturbed KKT condition
    • Apply Newton’s method to formulate a system of linear equations
  3. Solve
    • Solve and iterate until convergence.

Pseudocode

This is the pseudocode for the Basic Interior-Point Algorithm, where:

  • A is a matrix of constraint coefficients.
  • b is a vector of constraint bounds.
  • l represents the lower bounds on the decision variables.
  • u represents the upper bounds on the decision variables.
  • μ is the barrier parameter; a small positive scalar controlling the size of the barrier term added to enforce inequality constraints.
  • ε is the convergence tolerance for the stopping criteria.
  • α is the step size for the line search during iterations.
  • k is the iteration number.
  • Q is the quadratic term coefficient matrix.
  • c is the linear term coefficient vector.


Choose x_0 and s_0 > 0, and compute initial values for the multipliers y_0 and z_0 > 0.
Select an initial barrier parameter μ_0 > 0 and parameters σ, τ ∈ (0, 1). Set k ← 0.
repeat until a stopping test for the nonlinear program is satisfied
    repeat until the perturbed KKT residual satisfies E(x_k, s_k, y_k, z_k; μ_k) ≤ μ_k
        Solve the primal-dual Newton system to obtain the search direction p = (p_x, p_s, p_y, p_z);
        Compute the primal step length α_s using the fraction-to-the-boundary rule;
        Compute the dual step length α_z using the fraction-to-the-boundary rule;
        Set (x_{k+1}, s_{k+1}) ← (x_k, s_k) + α_s (p_x, p_s), (y_{k+1}, z_{k+1}) ← (y_k, z_k) + α_z (p_y, p_z), and k ← k + 1;
    end
    Choose μ_{k+1} ∈ (0, σ μ_k);
end
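The barrier idea in the implementation steps can be illustrated with a simplified primal log-barrier method (a sketch only, not the full primal-dual algorithm): minimize t·f(x) plus a logarithmic barrier on the slacks with damped Newton steps, then increase t until the gap bound m/t is below tolerance. All data and names are illustrative:

```python
import numpy as np

def barrier_qp(Q, c, A, b, x0, t0=1.0, mu=10.0, tol=1e-8):
    """Minimize 0.5 x'Qx + c'x subject to A x <= b with a primal log-barrier
    method; x0 must be strictly feasible (A x0 < b)."""
    x, t, m = x0.astype(float).copy(), t0, len(b)
    while m / t > tol:                      # m/t bounds the suboptimality
        def phi(y):                         # barrier-augmented objective
            s = b - A @ y
            if np.any(s <= 0):
                return np.inf               # outside the strict interior
            return t * (0.5 * y @ Q @ y + c @ y) - np.sum(np.log(s))
        for _ in range(100):                # damped Newton inner loop
            s = b - A @ x                   # slacks stay strictly positive
            grad = t * (Q @ x + c) + A.T @ (1.0 / s)
            hess = t * Q + A.T @ np.diag(1.0 / s**2) @ A
            dx = np.linalg.solve(hess, -grad)
            if np.linalg.norm(dx) < 1e-10:
                break
            alpha = 1.0                     # backtracking line search
            while phi(x + alpha * dx) > phi(x) + 1e-4 * alpha * (grad @ dx):
                alpha *= 0.5
            x = x + alpha * dx
        t *= mu                             # tighten the barrier
    return x

# Test problem: minimize (x1 - 1)^2 + (x2 - 2.5)^2 over a pentagon;
# the point (0.5, 0.5) lies strictly inside the feasible region.
Q = 2 * np.eye(2)
c = np.array([-2.0, -5.0])
A = np.array([[-1., 2.], [1., 2.], [1., -2.], [-1., 0.], [0., -1.]])
b = np.array([2., 6., 2., 0., 0.])
x_ip = barrier_qp(Q, c, A, b, x0=np.array([0.5, 0.5]))
```

Unlike an active-set iteration, the iterates approach the solution through the interior of the feasible region rather than along its boundary.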

Gradient Projection Methods

The gradient projection method is an optimization technique used to solve constrained problems, particularly those with simple bound or inequality constraints. It combines gradient descent with a projection step that ensures iterates remain within the feasible region by projecting them back onto the constraint set. The method is iterative and adjusts the search direction based on the negative gradient of the objective function, while ensuring feasibility at each step. This approach is widely used for convex problems and has extensions for handling more complex constraint sets.

When to Use

The gradient projection method is best suited for problems with simple constraints, such as box constraints or convex inequality constraints, where projection onto the feasible region can be performed efficiently. It is effective for convex optimization problems where gradient-based methods converge to global optima. This method is often employed in large-scale optimization, machine learning, and signal processing applications due to its simplicity and ease of implementation. However, it may be less efficient for problems with complex or nonconvex constraints. Instead, other methods, such as active-set or interior-point methods, may be more appropriate.

Implementation Steps

  1. Initialize
    1. Start by analyzing and visualizing the constraints. Pay special attention to constraints that are box-defined (e.g., variable bounds) or convex inequalities, as these are essential for the efficient execution of the gradient projection method. In this case, the box-defined constraints establish the feasibility region, referred to as the "box."
    2. Choose an initial feasible point within the constraints of the problem; this will be the starting point of the piecewise-linear path developed next.
    3. Find the gradient of the objective function.
  2. Develop the piecewise-linear path
    1. Insert the initial feasible point into the gradient of the objective function.
    2. Define the descent direction and project it onto the feasible region from the initial feasible point. This projected line defines the piecewise-linear path.
  3. Find the Cauchy point
    1. Analyze the path where the line segment breaks or changes its projected direction on the feasible region.
    2. Compute the value at each breakpoint by substituting it into the piecewise-linear path equation.
    3. Write the quadratic along this line segment with respect to these breakpoints.
    4. Differentiate the quadratic with respect to the new breakpoint.
    5. Determine whether a minimizer was found; if not, analyze the next breakpoint set.
  4. Refine and execute
    1. Update the piecewise-linear path decision variable to include the minimizer.
    2. Update the active set based on the new solution found with the minimizer.
    3. Find the new gradient value at this point; if it has converged, the minimum has been found. If not, repeat steps 1.2 through 4.3.

Pseudocode

This is the pseudocode for the gradient projection method algorithm where:

  • A is a matrix of constraint coefficients.
  • x is the decision variable vector.
  • Q is a symmetric, positive semi-definite n x n matrix representing the quadratic coefficients.
  • c is a linear coefficient vector.
  • l represents the lower bounds on the decision variables.
  • u represents the upper bounds on the decision variables.

Compute a feasible starting point x_0;
for k = 0, 1, 2, ...
    if x_k satisfies the KKT conditions for the quadratic programming problem
        minimize q(x) = (1/2) xᵀQx + cᵀx
        subject to l ≤ x ≤ u,
    stop with solution x* = x_k;
    Set x = x_k and find the Cauchy point x^C;
    Find an approximate solution x^+ of
        minimize q(x)
        subject to x_i = x_i^C for each component i active at the Cauchy point,
    such that q(x^+) ≤ q(x^C) and x^+ is feasible;
    x_{k+1} ← x^+;
end (for)
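A simplified projected-gradient variant for box constraints illustrates the core idea (project each gradient step back onto the box by clipping); it omits the Cauchy-point machinery of the full method. Names and data are illustrative:

```python
import numpy as np

def projected_gradient_box_qp(Q, c, lo, hi, x0, max_iter=5000, tol=1e-12):
    """Minimize 0.5 x'Qx + c'x subject to lo <= x <= hi by projected gradient
    descent: step along the negative gradient, then clip back into the box."""
    x = np.clip(x0.astype(float), lo, hi)
    step = 1.0 / np.linalg.eigvalsh(Q).max()    # 1/L step size for convex Q
    for _ in range(max_iter):
        x_new = np.clip(x - step * (Q @ x + c), lo, hi)  # projection = clip
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: minimize (x1 - 1)^2 + (x2 - 2.5)^2 over the box [0, 2] x [0, 2];
# the unconstrained minimizer (1, 2.5) projects onto the boundary at (1, 2).
Q = 2 * np.eye(2)
c = np.array([-2.0, -5.0])
x_pg = projected_gradient_box_qp(Q, c, np.zeros(2), 2 * np.ones(2), np.zeros(2))
```

For box constraints, projection is a cheap componentwise clip, which is exactly why this method is attractive when the feasible set is simple.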

Convexity in Quadratic Programming

Convex quadratic programming problems occur when Q is positive semi-definite, which guarantees that the problem has a global minimum. Non-convex quadratic programming problems occur when Q is not positive semi-definite; the solutions may involve local minima, requiring more advanced techniques for global optimization (e.g., branch-and-bound, global optimization methods).

Numerical Examples

Examples are provided for solving convex and non-convex quadratic program optimizations using Active Set, Simplex-Like, and Branch and Bound methods.

Active Set Method Example

Step-by-step instructions for determining the optimal solution of the following quadratic programming problem are provided:

minimize

subject to the constraints:

Step 1: Reformat Constraints as Necessary

Constraints should be reformatted to be . All constraints are currently in this format; therefore, no changes are needed.

Step 2: Construct the Gradients

Compute the gradients for the objective function and constraints.

Step 3: Determine if There is a Feasible Solution with All Constraints Set to Active

The system of equations can be solved to determine if there is a feasible solution with all 3 constraints set to active. First, solve for  using constraint (1) and substitute it into constraint (2).

From

Solving the system with all constraints active is infeasible. This is because the rows of the coefficient matrix are linearly dependent. The system does not have a unique solution. Thus, the next steps will be to reduce the number of active constraints to determine a feasible solution.

Step 4: Determine Solution with Constraints (1) and (2) Set to Active

As determined in Step 3, solving with constraints (1) and (2) active results in a contradiction. Thus, this solution is infeasible.

Step 5: Determine Solution with Constraints (1) and (3) Set to Active

Substitute  using constraint (1) into constraint (3).

From


Substitute into constraint (1).


Check whether the solution satisfies primal feasibility for constraint (2), meaning :


Primal feasibility is satisfied since . Thus, the solution is feasible.

Step 6: Determine Solution with Constraints (2) and (3) Set to Active

Substitute  using constraint (2) into constraint (3).

From


Substitute into constraint (2).

= 0


Check whether the solution satisfies primal feasibility for constraint (1), meaning :


Primal feasibility is satisfied since . Thus, the solution is feasible.

Step 7: Solve Objective Function Using Feasible Solutions

Step 8: Determine Optimal Solution

The feasible solution with the minimum value is at . Thus, this is the optimal solution.

Simplex-Like Method Example

Step-by-step instructions for determining the optimal solution of the following quadratic programming problem are provided:

Minimize

Subject to the constraints:

Step 1: Reformat Constraints as Necessary

Equation constraints should be reformatted to be  to solve for standard form. All constraints are currently in this format; therefore, no changes are needed. Non-negativity constraints should remain .

Step 2: Determine if the Objective Function and Constraints are Convex

To use the Simplex-like Method, both the objective function and constraints must be convex. For the objective function, convexity is proven by verifying that the eigenvalues of the Hessian matrix are non-negative. For the constraints, convexity is proven by identifying that the constraints are linear.

Step 2.1 Find the Eigenvalues of the Hessian Matrix for the Objective Function

Solve for the first order partial derivatives.

Solve for the second order partial derivatives.

Construct the Hessian Matrix

Determine the eigenvalues

The objective function is convex, as both eigenvalues are non-negative.

Step 2.2 Determine if the Constraints are Convex

All constraints are linear and therefore convex. This method can be applied because both the objective function and the constraints are convex.

Step 3: Determine the Feasible Region

Change the inequalities to equalities.

Solve for the intersection points using all combinations of the constraints.

  • and
    • Intersection at (0,5)
  • and
    • Intersection at (0,8)
  • and
    • Intersection at (5,0)
  • and
    • Intersection at (4,0)
  • and
  • Substitute into the first equation
    • Intersection at (3,2)

The feasible region is bounded and convex, with corner points (0,5), (0,8), (5,0), (4,0), and (3,2).

Step 4: Solve Objective Function using Feasible Solutions

  • Solve at (0,5)
  • Solve at (0,8)
  • Solve at (5,0)
  • Solve at (4,0)
  • Solve at (3,2)

Step 5: Identify Edge Solutions along the Feasible Region

  • and
  • Parameterize the edge to represent the points along each edge
    • Use the parameterization in the objective function
    • Solve for using the parameterization objective function
      • which is within [0,1] and therefore a solution.
    • Solve for and using the parameterization equations and value
    • Solve the objective function for the edge solution
    • Verify the minimum objective solution
      • At
      • At
      • At
      • The minimum solution is at

Repeat process for other edges

  • and
      • which is not within [0,1] and therefore not a solution
    • At
    • At
    • The minimum solution is at
  • and
      • which is not within [0,1] and therefore not a solution
    • At
    • At
    • The minimum solution is at
  • and
  • Since is fixed in both and , solve objective function directly
      • which is not within [5,8] and therefore not a solution
    • At
    • At
      • The minimum solution is at
  • and
  • Since is fixed in both and , solve objective function directly
      • which is not within [4,5] and therefore not a solution
    • At
    • At
    • The minimum solution is at

Step 6: Determine the Optimal Solution

Compare all intersection and edge solutions to minimize the objective function. and

  • The minimum solution is at

and

  • The minimum solution is at f(3,2) = 28.5

and

  • The minimum solution is at

and

  • The minimum solution is at

and

  • The minimum solution is at

The optimal solution occurs at

Non-Convex Example Using Branch and Bound

Step-by-step instructions for determining the optimal solution of the following quadratic programming problem using branch and bound are provided:

maximize

subject to the constraints:

Step 1: Determine if the Objective Function and Constraints are Convex

To determine if the objective function is convex, its eigenvalues are determined. The constraints are then analyzed for linearity.

Step 1.1 Find the Eigenvalues of the Hessian Matrix for the Objective Function

Solve for the first order partial derivatives.


Solve for the second order partial derivatives.


Construct the Hessian Matrix.


Determine the eigenvalues.

The Hessian is negative definite since both eigenvalues are negative, meaning the objective function is concave and therefore non-convex.

Step 1.2 Determine if the Constraints are Convex

The non-binary integer constraint for this given quadratic programming problem makes this example non-convex.

Step 2: Determine the Relaxed Solution

To help determine whether to prune future branches, the optimization will be solved by relaxing the integer constraint from   to . Thus, the relaxed problem formulation is as follows:

maximize

subject to the constraints:

To solve the relaxed formulation the gradients are set to 0 and the system of equations is solved.

From

From

The relaxed solution results in an optimal value of 10.25. However,  was a non-integer value. Therefore, the branch and bound method will be applied to find a solution that satisfies all constraints. The optimal value must be no greater than 10.25 since this was the solution found by relaxing constraints.

Step 3: Apply Branch and Bound

Since  is an integer, the selected branches are and .

Step 3.1: Branch at

Substitute into the objective function.


Determine the critical points by setting the first derivative to 0.


Evaluate the objective function at the critical point and boundary points .

The maximum value of these solutions is 4, which happens when and .


Step 3.2: Branch at

Substitute into the objective function.


Determine the critical points by setting the first derivative to 0.


Evaluate the objective function at the critical point and boundary points .

The maximum value of these solutions is 8, which happens when and .


Step 3.3: Branch at

Substitute into the objective function.


Determine the critical points by setting the first derivative to 0.


Evaluate the objective function at the critical point and boundary points .

The maximum value of these solutions is 10, which happens when and .


Step 3.4: Compare Branches

The best solution across all branches was and , resulting in an optimal value of 10.
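The branching pattern above — solve the relaxation for an upper bound, branch on the fractional integer variable, and keep the best integer-feasible incumbent — can be sketched on a small hypothetical problem (the data below is illustrative and is not the worked example's):

```python
import numpy as np

def max_over_x(y):
    """With the integer variable y fixed, maximize the concave quadratic
    f(x, y) = -(x - 2)^2 - (y - 2.5)^2 + 10 over x in [0, 4]; the
    unconstrained maximizer x = 2 is clipped into the interval."""
    x = float(np.clip(2.0, 0.0, 4.0))
    return x, -(x - 2.0) ** 2 - (y - 2.5) ** 2 + 10.0

# Step 1: the relaxation (y continuous) peaks at (2, 2.5) with value 10,
# so 10 is an upper bound on any integer-feasible solution.
upper_bound = 10.0

# Step 2: branch on the fractional variable: y <= 2 and y >= 3.
best_val, best_point = -np.inf, None
for y in (2, 3):
    x, val = max_over_x(y)
    if val > best_val:
        best_val, best_point = val, (x, y)

# Step 3: no further branch can exceed the bound, so the incumbent is optimal.
assert best_val <= upper_bound  # here best_val = 9.75 at (2.0, 2)
```

Fixing the integer variable reduces each branch to a one-dimensional concave quadratic with a closed-form maximizer, mirroring the substitution used in Steps 3.1 through 3.3.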

Applications

In this section, we will explore the wide applications of quadratic programming in fields ranging from finance and logistics management to data science and various engineering fields.

Finance-Portfolio Optimization

In finance, quadratic programming is commonly used when formulating an investment portfolio that balances returns and risk to meet the investor's goal. Nobel Prize recipient Harry Markowitz developed a portfolio model using quadratic programming to maximize returns while minimizing risks, a model that would later become a cornerstone in Modern Portfolio Theory.10 Markowitz defined risk as a variance of the portfolio’s return, which captures an essence of a portfolio’s symmetric return volatility. Markowitz's method involved estimating stock returns based on historical data, analyzing variances from historical projections and its realized returns to formulate risk. When provided a budget and risk tolerance, the model would now be able to select a combination of stocks that would maximize returns within the defined constraints.10,11
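With only equality constraints (a budget constraint and a target return, short sales allowed), the Markowitz mean-variance problem reduces to a single linear KKT system. A sketch with hypothetical asset data follows; a real model would typically add a no-short-sales constraint w ≥ 0, which requires an inequality-capable QP solver:

```python
import numpy as np

# Hypothetical data: expected returns and return covariance for three assets.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.16, 0.06],
                  [0.04, 0.06, 0.09]])
r_target = 0.10

# Minimize portfolio variance w' Sigma w subject to sum(w) = 1 (budget)
# and mu' w = r_target (target return). With equality constraints only,
# the KKT conditions form one linear system:
#   [2*Sigma  A^T] [w  ]   [0  ]
#   [A        0  ] [lam] = [rhs]
A = np.vstack([np.ones(3), mu])
rhs = np.array([1.0, r_target])
K = np.block([[2 * Sigma, A.T], [A, np.zeros((2, 2))]])
sol = np.linalg.solve(K, np.concatenate([np.zeros(3), rhs]))
w = sol[:3]        # minimum-variance weights hitting the target return
```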

Quadratic Knapsack Problem (QKP)

The traditional knapsack optimization problem considers items (i) with associated values (v) and weights (w) and has the objective function to maximize the value of the items held in the knapsack while not exceeding its weight constraint. The decision variables in this scenario define items to include and not to include in the knapsack. This problem has proven to be beneficial when optimizing cargo loads and establishing transportation routes.

The QKP is an augmentation of the classic knapsack problem, where the objective function is quadratic and contains pairwise variables but still contains the same objective of maximizing value of items contained in the knapsack. The extension of this knapsack, initially proposed by Gallo in 1977, introduces the ability to optimize values with items that may have a unique relationship affecting their values.12 QKP has applications in solving more complicated cargo load problems, telecommunications, and electrical engineering.

To expand on an electrical engineering application, in microchip design, quadratic programming has been used to optimize chip component placement to minimize costs and component connections. This is similar in nature to the QKP.13
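For a tiny QKP instance, brute-force enumeration makes the quadratic objective concrete (the data below is hypothetical; realistic instances are NP-hard and require specialized algorithms):

```python
from itertools import combinations

# Hypothetical tiny instance: linear item values v, pairwise bonuses p
# earned only when both items are packed, weights w, capacity C.
v = [6, 5, 8]
p = {(0, 1): 4, (0, 2): 1, (1, 2): 7}
w = [3, 4, 5]
C = 8

def qkp_value(items):
    """Quadratic knapsack objective: item values plus pairwise interaction terms."""
    items = sorted(items)
    return (sum(v[i] for i in items)
            + sum(p[i, j] for i, j in combinations(items, 2)))

# Brute force over all weight-feasible subsets (fine only for tiny instances).
best = max(
    (s for r in range(len(v) + 1) for s in combinations(range(len(v)), r)
     if sum(w[i] for i in s) <= C),
    key=qkp_value,
)
# best = (0, 1): value 6 + 5 + 4 = 15 within the weight limit 3 + 4 = 7 <= 8.
```

The pairwise term p[i, j] is what distinguishes the QKP from the classic knapsack: the value of packing item 1 depends on whether item 0 is also packed.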

Support Vector Machines (SVMs)

In machine learning, SVMs are supervised learning models used for classification and regression tasks; use cases include facial and text recognition. SVMs utilize quadratic programming to optimize a hyperplane, where the objective is to maximize the margin between classes while minimizing classification errors, commonly subject to linear constraints. Algorithms like sequential minimal optimization (SMO) have emerged to make computation on growing data sets more tractable by breaking the problem into smaller quadratic programming subproblems.14

Plug in Hybrid Electric Vehicle Power Management

Plug-in hybrid electric vehicles (PHEVs) rely on an optimized power-management program to operate efficiently. A PHEV draws power from its batteries until they are depleted before switching to its hybrid combustion-electric power cycle. Power management thus becomes a quadratic programming problem in which the objective is to minimize a quadratic fuel-cost function, optimizing the power cycles while meeting performance demands.15


Software tools that can be used in Quadratic Programming
  • Python: Multiple packages handle quadratic programming, such as Pyomo, SciPy, quadprog, and CVXPY, with each library bringing its own strengths to a problem.
  • MATLAB: The MathWorks MATLAB Optimization Toolbox is an add-on package suitable for quadratic programming problems with either a problem-based or solver-based approach. The solver-based approach offers three algorithms through quadprog: interior-point-convex, trust-region-reflective, and active-set.
  • GAMS: The GAMS Gurobi suite contains several algorithms suitable for quadratic programming; for example, a convex QCP is solved with the parallel barrier algorithm.
  • Excel Solver: Suitable for solving linear and nonlinear problems, which can include quadratic programming problems.
  • Wolfram: The Wolfram Language & System solver is a versatile tool that can solve nonlinear quadratic problems.
  • SNOPT: Uses sequential quadratic programming, which solves a sequence of smaller quadratic subproblems.
  • Julia: Julia offers the JuMP and Convex.jl packages, which leverage solvers like Gurobi and MOSEK to support quadratic programming problems.

Conclusion

Quadratic programming and its ability to optimize quadratic functions is a widely useful tool for solving real-world problems, which are often depicted as quadratic functions. For this reason, QP has been used to model a variety of problems from signal processing to game theory to economics. As one of the simplest types of non-linear programming, QP can be used to solve quadratic functions with linear bounds, made especially straightforward with a convex objective function, and can serve as a step in more complicated optimization problems such as sequential quadratic programming. Though its theory was first developed in the 1940s and 1950s, QP application has since broadened to many fields including finance, scheduling, and engineering modeling.

References

1. J. Nocedal and S. J. Wright. Numerical Optimization. Springer (1999): Web. 2 December 2024.
2. H. B. Mann. Quadratic forms with linear constraints. The American Mathematical Monthly (1943): Web. 2 December 2024.
3. N. I. M. Gould and P. L. Toint. A Quadratic Programming Bibliography. RAL Numerical Analysis Group Internal Report (2012): Web. 2 December 2024.
4. E. W. Barankin and R. Dorfman. Toward quadratic programming. Report to the logistics branch, Office of Naval Research (1955): Web. 2 December 2024.
5. M. Frank and P. Wolfe. An Algorithm for Quadratic Programming. Naval Research Logistics Quarterly 3 (1956): Web. 2 December 2024.
6. R. J. Vanderbei. Linear Programming: Foundations and Extensions. Springer International Publishing (2008): Web. 2 December 2024.
7. C. Floudas and V. Visweswaran. Quadratic Optimization. Nonconvex Optimization and Its Applications, 2 (1995): Web. 2 December 2024.
8. R. J. Vanderbei. Linear Programming: International Series in Operations Research & Management Science. Springer International Publishing (2008): Web. 2 December 2024.
9. T. V. Mikosch, S. Resnick, B. Zwart and T. Dieker. Numerical Optimization: Springer Series in Operations Research and Financial Engineering. Springer International Publishing (2006): Web. 2 December 2024.
10. B. McCarl, et al. Quadratic Programming Applications. Omega, vol. 5, no. 1 (1977): Web. 2 December 2024.
11. S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press (2004): Web. 24 November 2024.
12. G. Gallo, P. L. Hammer and B. Simeone. Quadratic knapsack problems. In: Padberg, M.W. (eds) Combinatorial Optimization. Mathematical Programming Studies, vol 12. Springer, Berlin, Heidelberg (1980): Web. 24 November 2024.
13. N.A. Sherwani. Algorithms for VLSI Physical Design Automation (1993): Web. 24 November 2024.
14. J. Platt. Fast Training of Support Vector Machines Using Sequential Minimal Optimization. Microsoft Research, 17 Oct. 2018, www.microsoft.com/en-us/research/publication/fast-training-of-support-vector-machines-using-sequential-minimal-optimization/.
15. Z. Zhou and C. Mi. ‘Power management of PHEV using quadratic programming’, Int. J. Electric and Hybrid Vehicles, Vol. 3, No. 3, (2011): Web. 24 November 2024.