Quadratic programming
Authors: Kayleigh Calder (kjc263), Colton Jacobucci (cdj59), Carolyn Johnson (cj456), Caleb McKinney (cdm235), Olivia Thomas (oat9)
Steward: Fengqi You
Introduction
A quadratic program is an optimization problem with a quadratic objective function subject to linear constraints.1 Quadratic Programming (QP) is a common type of non-linear programming (NLP) used to solve such problems.
One of the earliest known theories for QP was documented in 1943 by Columbia University’s H. B. Mann,2,3 but many others are credited with early contributions to the field, such as E. W. Barankin and R. Dorfman for their naval research throughout the 1950s4 and Princeton University’s Frank and Wolfe for their 1956 algorithm.5 The field has since made other prominent strides, as when Harry Markowitz famously received the Nobel Prize in Economics in 1990 for his application of QP to balancing risk and reward in financial portfolios.6
QP is essential to the field of optimization for multiple reasons. First, many real-world problems are naturally quadratic because variance, the sum-of-squares deviation used to represent uncertainty, is itself quadratic.6 QP can be applied to a wide variety of real-world problems such as scheduling, planning, flow computations, engineering modeling, design and control, game theory, and economics.7 Second, QP commonly serves as a stepping stone toward more general optimization methods such as sequential quadratic programming and augmented Lagrangian methods.1
Algorithm Discussion
Quadratic programming problems are typically formatted as minimization problems, and the general mathematical formulation is:

\[
\min_{x}\; \tfrac{1}{2}\,x^{\top}Qx + c^{\top}x \quad \text{subject to} \quad Ax \le b, \;\; Ex = d
\]

where Q is a symmetric n-by-n matrix, c is the vector of linear cost coefficients, and the linear inequality and equality constraints are collected in Ax ≤ b and Ex = d.
The general conditions for using QP begin with a quadratic objective function accompanied by linear constraints. For the problem to be convex, the matrix Q defined in the formulation above must be positive semi-definite; if it is not, the problem is non-convex and may have multiple local solutions meeting the minimization criteria. As later sections discuss, problem dimensions may vary in size, but this is not an obstacle, since certain quadratic algorithms are tailored to meet the computational demand. Finally, the feasible region must be non-empty; otherwise there is no solution.
Active Set Methods
The active set algorithm is a method used to solve quadratic programming problems by iteratively identifying and working with the set of constraints that are most likely to influence the solution, called the active set. More specifically, the algorithm maintains a subset of constraints that are treated as equalities at each step, solving a simplified quadratic programming problem for this subset. Constraints can be added or removed from the active set as the solution progresses until optimality conditions are satisfied.
When to Use
Active set methods are well suited to small- and medium-scale quadratic programs, as they exploit the problem's structure and update an estimate of the active constraints iteratively. Because they generalize the simplex approach of linear programming, they are also useful when nonlinearity or degeneracy complicates the constraint structure. For large-scale problems, or where vertex-following strategies suffer from exponential worst-case complexity, alternative methods like interior-point techniques may be more appropriate.
Implementation Steps
- Start with an Initial Solution and Active Set
  - Begin with a feasible point that satisfies all constraints.
  - Identify which constraints are active (equality constraints, or inequality constraints that are tight).
- Iterative Process
  - Solve a reduced problem: fix the active constraints as equalities and solve the resulting smaller QP problem.
  - Check optimality: verify whether the current solution satisfies the Karush-Kuhn-Tucker (KKT) conditions.
  - Update the active set: add any violated constraints, and remove constraints whose multipliers show they are no longer binding.
- Repeat Until Convergence
  - Iterate through the process until the optimal solution is found, ensuring all constraints are satisfied.
Pseudocode
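The original pseudocode is not reproduced here; in its place, the following is a minimal Python sketch of the primal active-set iteration described above, assuming a positive definite Q, constraints in the form Ax ≤ b, a feasible starting point, and linearly independent working-set rows. The function name and tolerances are illustrative, not from any particular library.

```python
import numpy as np

def active_set_qp(Q, c, A, b, x, tol=1e-10, max_iter=100):
    """Minimize 0.5*x'Qx + c'x subject to A x <= b, starting from a
    feasible point x. Assumes Q is positive definite and the rows of
    A indexed by the working set remain linearly independent."""
    n = len(x)
    # Working set: inequality constraints that are tight at x.
    W = [i for i in range(len(b)) if np.isclose(A[i] @ x, b[i])]
    for _ in range(max_iter):
        # Equality-constrained subproblem via its KKT system:
        #   [ Q   Aw' ] [ p   ]   [ -(Qx + c) ]
        #   [ Aw  0   ] [ lam ] = [     0     ]
        Aw = A[W] if W else np.zeros((0, n))
        K = np.block([[Q, Aw.T], [Aw, np.zeros((len(W), len(W)))]])
        rhs = np.concatenate([-(Q @ x + c), np.zeros(len(W))])
        sol = np.linalg.solve(K, rhs)
        p, lam = sol[:n], sol[n:]
        if np.linalg.norm(p) < tol:
            # Stationary on the working set: inspect the multipliers.
            if len(W) == 0 or lam.min() >= -tol:
                return x, W                  # KKT conditions satisfied
            W.pop(int(np.argmin(lam)))       # drop most negative multiplier
        else:
            # Ratio test: step until the nearest inactive constraint blocks.
            alpha, blocking = 1.0, None
            for i in range(len(b)):
                if i not in W and A[i] @ p > tol:
                    ratio = (b[i] - A[i] @ x) / (A[i] @ p)
                    if ratio < alpha:
                        alpha, blocking = ratio, i
            x = x + alpha * p
            if blocking is not None:
                W.append(blocking)
    return x, W
```

Each pass either moves along the equality-constrained step until a blocking constraint is hit, or drops the working-set constraint with the most negative multiplier, mirroring the add/remove logic in the steps above.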
Interior Point Methods
Interior-point methods, developed in the 1980s, are a class of algorithms for solving large-scale linear and quadratic programs using concepts from nonlinear programming. Unlike active set methods, which navigate along the boundaries of the feasible region by testing vertices, interior-point methods approach the solution from within the interior of the feasible region, avoiding its combinatorial boundary structure. They were motivated by the simplex method's poor worst-case complexity and by the search for algorithms with stronger theoretical guarantees, such as the ellipsoid method and Karmarkar's algorithm.
When to Use
Interior-point methods are particularly effective for large-scale optimization problems where the simplex method's boundary-focused approach may struggle. They typically require far fewer iterations than the simplex method, although each iteration is computationally more intensive. Modern primal-dual interior-point methods are efficient and robust, and they form the foundation of many current optimization software tools, making them the preferred choice for solving large or complex programs.
Implementation Steps (Convex Quadratic Program with a Positive Semi-Definite Matrix)
- Initial
  - Analyze the optimization problem and constraints.
  - Add slack variables to account for any inequalities, write the problem in general form, and take the barrier approach by adding a logarithmic component.
- Iterative Process
  - Define the KKT conditions.
  - Define a complementarity measure.
  - Define the perturbed KKT conditions.
  - Apply Newton's method to formulate a system of linear equations.
- Solve
  - Solve the system and iterate until convergence.
Pseudocode
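Again in place of the original pseudocode figure, here is a compact Python sketch of the primal-dual iteration outlined above, for minimizing 0.5·x'Qx + c'x subject to Ax ≤ b with slack variables s and multipliers z kept strictly positive. The fixed centering parameter sigma and the naive dense linear solve are simplifications for illustration, not features of any production solver.

```python
import numpy as np

def interior_point_qp(Q, c, A, b, sigma=0.2, tol=1e-8, max_iter=50):
    """Minimize 0.5*x'Qx + c'x subject to A x <= b using a basic
    primal-dual interior-point iteration (Q positive semi-definite)."""
    n, m = len(c), len(b)
    x = np.zeros(n)
    s = np.ones(m)                          # slacks for Ax + s = b, kept > 0
    z = np.ones(m)                          # dual multipliers, kept > 0
    for _ in range(max_iter):
        mu = s @ z / m                      # complementarity measure
        r_d = Q @ x + c + A.T @ z           # dual feasibility residual
        r_p = A @ x + s - b                 # primal feasibility residual
        r_c = s * z - sigma * mu            # perturbed complementarity
        if max(np.abs(r_d).max(), np.abs(r_p).max(), mu) < tol:
            break
        # Newton system for the perturbed KKT conditions.
        K = np.block([
            [Q,                np.zeros((n, m)), A.T             ],
            [A,                np.eye(m),        np.zeros((m, m))],
            [np.zeros((m, n)), np.diag(z),       np.diag(s)      ]])
        d = np.linalg.solve(K, -np.concatenate([r_d, r_p, r_c]))
        dx, ds, dz = d[:n], d[n:n + m], d[n + m:]
        # Damped step keeping slacks and multipliers strictly positive.
        alpha = 1.0
        for v, dv in ((s, ds), (z, dz)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.99 * np.min(-v[neg] / dv[neg]))
        x, s, z = x + alpha * dx, s + alpha * ds, z + alpha * dz
    return x
```

The loop follows the steps above directly: form the perturbed KKT residuals, take one Newton step, and damp the step so the iterates stay in the interior.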
Gradient Projection Methods
The gradient projection method is an optimization technique used to solve constrained problems, particularly those with simple bound or inequality constraints. It combines gradient descent with a projection step that ensures iterates remain within the feasible region by projecting them back onto the constraint set. The method is iterative and adjusts the search direction based on the negative gradient of the objective function, while ensuring feasibility at each step. This approach is widely used for convex problems and has extensions for handling more complex constraint sets.
When to Use
The gradient projection method is best suited for problems with simple constraints, such as box constraints or convex inequality constraints, where projection onto the feasible region can be performed efficiently. It is effective for convex optimization problems, where gradient-based methods converge to global optima, and it is often employed in large-scale optimization, machine learning, and signal processing applications due to its simplicity and ease of implementation. It may, however, be less efficient for problems with complex or nonconvex constraints, where active-set or interior-point methods may be more appropriate.
Implementation Steps
- Initial
  - Start by analyzing and visualizing the constraints. Pay special attention to constraints that are box-defined (e.g., variable bounds) or convex inequalities, as these are essential for the efficient execution of the gradient projection method. The box-defined constraints establish the feasible region, referred to as the "box."
  - Choose an initial feasible point within the constraints of the problem; this will be the starting point of the piecewise-linear path developed next.
  - Find the gradient of the objective function.
- Develop the Piecewise-Linear Path
  - Insert the initial feasible point into the gradient of the objective function.
  - Define the descent direction and project it onto the feasible region from the initial feasible point. This projected line defines the piecewise-linear path.
- Find the Cauchy Point
  - Identify the breakpoints where the projected path changes direction along the boundary of the feasible region.
  - Compute the value of each breakpoint by substituting it into the piecewise-linear path equation.
  - Write the quadratic of the objective along each segment with respect to these breakpoints.
  - Differentiate the quadratic with respect to the segment parameter.
  - Check whether a minimizer was found; if not, analyze the next breakpoint segment.
- Refine and Execute
  - Update the piecewise-linear path decision variable to include the minimizer.
  - Update the active set based on the new solution found with the minimizer.
  - Evaluate the gradient at this new point; if it has converged, the minimum has been found. If not, repeat the iterative steps above from the path construction onward.
Pseudocode
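In place of the pseudocode, here is a sketch of a simpler fixed-step projected-gradient variant for a box-constrained QP; it applies a plain projection after each gradient step rather than the exact piecewise-linear Cauchy-point search described above. The example data at the bottom are hypothetical.

```python
import numpy as np

def projected_gradient_qp(Q, c, lo, hi, x0, step=None, tol=1e-8, max_iter=5000):
    """Minimize 0.5*x'Qx + c'x subject to box constraints lo <= x <= hi
    by gradient descent with projection back onto the box."""
    x = np.clip(x0, lo, hi)                    # start from a feasible point
    if step is None:
        # Safe fixed step for convex Q: 1/L with L the largest eigenvalue.
        step = 1.0 / np.linalg.eigvalsh(Q).max()
    for _ in range(max_iter):
        g = Q @ x + c                          # gradient of the objective
        x_new = np.clip(x - step * g, lo, hi)  # descend, then project
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical example: minimize (x1-3)^2 + (x2+1)^2 over the unit box.
Q = 2 * np.eye(2)
c = np.array([-6.0, 2.0])
print(projected_gradient_qp(Q, c, lo=0.0, hi=1.0, x0=np.zeros(2)))
# -> approximately [1, 0]
```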
Convexity in Quadratic Programming
Convex quadratic programming problems occur when Q is positive semi-definite, which guarantees that the problem has a global minimum. Non-convex quadratic programming problems occur when Q is not positive semi-definite; their solutions may involve local minima, requiring more advanced techniques for global optimization (e.g., branch-and-bound or other global optimization methods).
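Since this check determines which algorithms apply, a quick numerical test is often useful in practice; the sketch below (assuming numpy) tests the eigenvalues of the symmetrized Q.

```python
import numpy as np

def is_convex_qp(Q, tol=1e-10):
    """A QP with objective 0.5*x'Qx + c'x is convex iff the symmetric
    part of Q is positive semi-definite (all eigenvalues >= 0)."""
    Q_sym = 0.5 * (Q + Q.T)                  # symmetrize before testing
    return np.linalg.eigvalsh(Q_sym).min() >= -tol

print(is_convex_qp(np.array([[2.0, 0.0], [0.0, 1.0]])))   # True  (convex)
print(is_convex_qp(np.array([[1.0, 0.0], [0.0, -3.0]])))  # False (non-convex)
```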
Numerical Examples
Active Set Method Example
Step-by-step instructions for determining the optimal solution of the following quadratic programming problem are provided:
Step 1: Reformat Constraints as Necessary
Constraints should be reformatted into standard inequality form (e.g., aᵢᵀx ≤ bᵢ). All constraints are currently in this format; therefore, no changes are needed.
Step 2: Construct the Gradients
Compute the gradients for the objective function and constraints.
Step 3: Determine if There is a Feasible Solution with All Constraints Set to Active
The system of equations can be solved to determine if there is a feasible solution with all 3 constraints set to active. First, solve for one variable using constraint (1) and substitute the result into constraint (2).
Solving the system with all constraints active is infeasible because the rows of the coefficient matrix are linearly dependent, so the system has no unique solution. Thus, the next steps reduce the number of active constraints to find a feasible solution.
Step 4: Determine Solution with Constraints (1) and (2) Set to Active
As determined in Step 3, solving with constraints (1) and (2) active results in a contradiction. Thus, this solution is infeasible.
Step 5: Determine Solution with Constraints (1) and (3) Set to Active
Substitute the expression obtained from constraint (1) into constraint (3).
Step 6: Determine Solution with Constraints (2) and (3) Set to Active
Substitute the expression obtained from constraint (2) into constraint (3).
Step 7: Solve Objective Function Using Feasible Solutions
Step 8: Determine Optimal Solution
The feasible solution with the minimum objective value is the optimal solution.
Simplex-Like Method Example
Step-by-step instructions for determining the optimal solution of the following quadratic programming problem are provided:
Step 1: Reformat Constraints as Necessary
Inequality constraints should be reformatted into standard form (≤) before solving. All constraints are currently in this format; therefore, no changes are needed. Non-negativity constraints should remain as xᵢ ≥ 0.
Step 2: Determine if the Objective Function and Constraints are Convex
To use the Simplex-like Method, both the objective function and the constraints must be convex. For the objective function, convexity is proven by verifying that the eigenvalues of its Hessian matrix are non-negative (≥ 0). For the constraints, convexity is proven by verifying that they are linear.
Step 2.1 Find the Eigenvalues of the Hessian Matrix for the Objective Function
Solve for the first-order partial derivatives, then form the Hessian matrix from the second-order partials and compute its eigenvalues.
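The article's worked equations are not reproduced here, but the convexity check can be illustrated symbolically with sympy; the objective f below is a hypothetical stand-in, not the example's actual function.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = 2*x1**2 + x2**2 + x1*x2            # hypothetical objective

grad = [sp.diff(f, v) for v in (x1, x2)]   # first-order partials
H = sp.hessian(f, (x1, x2))                # matrix of second-order partials
eigs = H.eigenvals()                       # {eigenvalue: multiplicity}

print(grad)   # [4*x1 + x2, x1 + 2*x2]
print(eigs)   # 3 +/- sqrt(2): both positive -> f is convex
```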
Step 2.2 Determine if the Constraints are Convex
All constraints are linear and therefore convex. This method can be applied to solve as both the objective function and constraints are convex.
Step 3: Determine the Feasible Region
Set the inequality constraints to equalities to trace the boundary of the feasible region and locate its corner points.
Step 4: Solve Objective Function using Feasible Solutions
Step 5: Identify Edge Solutions along the Feasible Region
Step 6: Determine the Optimal Solution
Compare all intersection and edge solutions to minimize the objective function.
Non-Convex Example Using Branch and Bound
Step-by-step instructions for determining the optimal solution of the following quadratic programming problem using branch and bound are provided:
Step 1: Determine if the Objective Function and Constraints are Convex
To determine whether the objective function is convex, the eigenvalues of its Hessian matrix are computed. The constraints are then analyzed for linearity.
Step 1.1 Find the Eigenvalues of the Hessian Matrix for the Objective Function
Solve for the first-order partial derivatives, then form the Hessian matrix and compute its eigenvalues.
Step 1.2 Determine if the Constraints are Convex
The (non-binary) integer constraint in this quadratic programming problem makes the example non-convex.
Step 2: Determine the Relaxed Solution
To help determine whether to prune future branches, the optimization is first solved with the integer constraint relaxed, allowing the variable to take continuous values. Thus, the relaxed problem formulation is as follows:
Step 3: Apply Branch and Bound
Since the variable must take an integer value, branches are created at integer values adjacent to the relaxed solution.
Step 3.1: Branch at
Substitute into the objective function.
Step 3.2: Branch at
Substitute into the objective function.
Step 3.3: Branch at
Substitute into the objective function.
Step 4: Determine Optimal Solution
The best solution across all branches yields an optimal objective value of 10.
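The numbers above are specific to the article's example; the sketch below illustrates the general branch-and-bound pattern on a hypothetical one-variable integer QP, using scipy only for the continuous relaxation.

```python
import math
from scipy.optimize import minimize_scalar

# Hypothetical problem: minimize f(x) = (x - 2.6)^2 + 4
# subject to 0 <= x <= 5 and x integer.
f = lambda x: (x - 2.6)**2 + 4

# Step 1: solve the continuous relaxation to get a lower bound.
relaxed = minimize_scalar(f, bounds=(0, 5), method='bounded')
x_rel = relaxed.x                      # about 2.6, not an integer

# Step 2: branch on the fractional variable at its floor and ceiling.
candidates = [math.floor(x_rel), math.ceil(x_rel)]

# Step 3: evaluate each branch; with one variable the subproblems are
# just function evaluations (in general, each branch is a smaller QP).
best_x = min(candidates, key=f)
print(best_x, f(best_x))               # 3 4.16 -> the x = 3 branch wins
```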
Application
In this section, we explore applications of quadratic programming in fields including finance, logistics management, data science, and various engineering disciplines.
Finance-Portfolio Optimization
In finance, quadratic programming is commonly used to formulate an investment portfolio that balances return and risk according to the investor's preferences. Nobel laureate Harry Markowitz developed a portfolio model using quadratic programming to maximize returns while minimizing investment risk, and this became a cornerstone of Modern Portfolio Theory.10 Markowitz defined risk as the variance of the portfolio's return, which captures the portfolio's symmetric return volatility. He projected the return of a given stock from historical data, analyzed the variance between projections and realized returns to estimate risk, and, subject to budget and risk-level constraints, used his quadratic model to select stocks that maximize expected return while staying within the predefined risk margin.10,11
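As an illustration of the mean-variance model (not Markowitz's original data), here is a minimal sketch using the cvxpy modeling library, assumed installed; the expected returns, covariance matrix, and return target are placeholder values.

```python
import numpy as np
import cvxpy as cp

# Placeholder data: expected returns and return covariance for 3 assets.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.09]])
r_min = 0.10                           # required expected return

w = cp.Variable(3)                     # portfolio weights
risk = cp.quad_form(w, Sigma)          # portfolio variance (quadratic objective)
problem = cp.Problem(cp.Minimize(risk),
                     [cp.sum(w) == 1,    # fully invested
                      mu @ w >= r_min,   # return target
                      w >= 0])           # no short selling
problem.solve()
print(np.round(w.value, 3), problem.value)
```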
Quadratic Knapsack Problem (QKP)
The traditional knapsack optimization problem considers items (i) with associated values (v) and weights (w), and its objective function maximizes the value of the items held in the knapsack without exceeding its weight constraint. The decision variables in this scenario define which items to include in the knapsack and which to leave out. This problem has proven to be beneficial in optimizing cargo loads and establishing transportation routes.
The QKP is an augmentation of the classic knapsack problem in which the objective function is quadratic and contains pairwise terms, while retaining the same goal of maximizing the value of the items in the knapsack. This extension, initially proposed by Gallo, Hammer, and Simeone, introduces the ability to optimize over items whose values are affected by unique pairwise relationships.12 QKP has applications in more complicated cargo-load problems, telecommunications, and electrical engineering.
To expand on an electrical engineering application, in microchip design, quadratic programming has been used to optimize chip component placement to minimize costs and component connections. This is similar in nature to the QKP.13
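To make the QKP structure concrete, the brute-force sketch below enumerates all subsets of a tiny hypothetical instance; real instances are NP-hard and require specialized exact or heuristic solvers.

```python
from itertools import product

# Hypothetical instance: item values v, pairwise bonuses p, weights w, capacity C.
v = [6, 5, 8]                           # individual item values
p = {(0, 1): 2, (0, 2): 4, (1, 2): 1}   # extra value when both items are packed
w = [3, 4, 5]
C = 8

best_value, best_x = -1, None
for x in product((0, 1), repeat=len(v)):              # all item subsets
    if sum(wi * xi for wi, xi in zip(w, x)) > C:      # weight constraint
        continue
    value = sum(vi * xi for vi, xi in zip(v, x))      # linear terms
    value += sum(p[i, j] for (i, j) in p if x[i] and x[j])  # quadratic terms
    if value > best_value:
        best_value, best_x = value, x
print(best_x, best_value)   # (1, 0, 1) with value 18 for this data
```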
Support Vector Machines (SVMs)
In machine learning, SVMs are learning models tasked with classification and regression. Use cases include facial and text recognition. SVMs use quadratic programming to optimize a hyperplane for the recognition task: the objective is to maximize the margin between classes while minimizing classification errors, commonly subject to linear constraints. Algorithms such as sequential minimal optimization (SMO) have emerged to make training on growing data sets computationally tractable by breaking the problem into small QP subproblems.14
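For reference, the standard soft-margin SVM training problem is itself a convex QP, where the regularization constant C trades margin width against slack penalties:

\[
\min_{w,\,b,\,\xi}\; \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{n}\xi_i
\quad \text{subject to} \quad y_i\,(w^{\top}x_i + b) \ge 1 - \xi_i,\;\; \xi_i \ge 0,\;\; i = 1,\dots,n
\]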
Plug in Hybrid Electric Vehicle Power Management
An optimized power management program is critical for plug-in hybrid electric vehicles (PHEVs) to operate efficiently. The power management problem is more complex in PHEVs than in traditional hybrid or purely electric vehicles: a PHEV relies on its grid-charged battery for its initial range, then drives like an HEV, alternating between fuel and the charge generated while the fuel is burned. The power management problem can be described by a quadratic polynomial and solved with quadratic programming, with the optimization goal of minimizing fuel consumption.15
Conclusion
Quadratic programming, with its ability to optimize quadratic functions, is a vastly useful tool because real-world problems are often naturally quadratic. For this reason, QP has been used to model a variety of problems from signal processing to game theory to economics. As one of the simplest types of non-linear programming, QP can be used both to solve quadratic objectives under linear constraints, which is especially straightforward with a convex objective function, and as a step within more complicated optimization methods such as sequential quadratic programming. Though its foundations were laid only in the 1940s and 1950s, QP has broadened to applications in many fields including finance, scheduling, and engineering modeling.
References
1. J. Nocedal and S. J. Wright. Numerical Optimization. Springer (1999): Web. 2 December 2024.
2. H. B. Mann. Quadratic forms with linear constraints. The American Mathematical Monthly (1943): Web. 2 December 2024.
3. N. I. M. Gould and P. L. Toint. A Quadratic Programming Bibliography. RAL Numerical Analysis Group Internal Report (2012): Web. 2 December 2024.
4. E. W. Barankin and R. Dorfman. Toward quadratic programming. Report to the logistics branch, Office of Naval Research (1955): Web. 2 December 2024.
5. M. Frank and P. Wolfe. An Algorithm for Quadratic Programming. Naval Research Logistics Quarterly 3 (1956): Web. 2 December 2024.
6. R. J. Vanderbei. Linear Programming: Foundations and Extensions. Springer International Publishing (2008): Web. 2 December 2024.
7. C. Floudas and V. Visweswaran. Quadratic Optimization. Nonconvex Optimization and Its Applications, 2 (1995): Web. 2 December 2024.
8. R. J. Vanderbei. Linear Programming: International Series in Operations Research & Management Science. Springer International Publishing (2008): Web. 2 December 2024.
9. T. V. Mikosch, S. Resnick, B. Zwart and T. Dieker. Numerical Optimization: Springer Series in Operations Research and Financial Engineering. Springer International Publishing (2006): Web. 2 December 2024.
10. B. McCarl, et al. Quadratic Programming Applications. Omega, vol. 5, no. 1 (1977): Web. 2 December 2024.
11. S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press (2004): Web. 24 November 2024.
12. G. Gallo, P. L. Hammer and B. Simeone. Quadratic knapsack problems. In: Padberg, M.W. (eds) Combinatorial Optimization. Mathematical Programming Studies, vol 12. Springer, Berlin, Heidelberg (1980): Web. 24 November 2024.
13. N.A. Sherwani. Algorithms for VLSI Physical Design Automation (1993): Web. 24 November 2024.
14. J. Platt. Fast Training of Support Vector Machines Using Sequential Minimal Optimization. Microsoft Research, 17 Oct. 2018, www.microsoft.com/en-us/research/publication/fast-training-of-support-vector-machines-using-sequential-minimal-optimization/.
15. Z. Zhou and C. Mi. ‘Power management of PHEV using quadratic programming’, Int. J. Electric and Hybrid Vehicles, Vol. 3, No. 3, (2011): Web. 24 November 2024.