Local branching

From Cornell University Computational Optimization Open Textbook - Optimization Wiki
== Introduction ==
Mixed-Integer Linear Programming (MILP) is a sophisticated optimization model aimed at solving problems that involve both discrete and continuous decision variables, where the discrete variables are constrained to take integer values<ref name="ref1">Fischetti, M., & Lodi, A. (2003). Local branching. Mathematical Programming, 98(1), 23–47. https://doi.org/10.1007/s10107-003-0395-5</ref>. Local Branching is a heuristic developed explicitly to enhance the effectiveness of solving MILP problems by focusing the optimization search on a defined neighborhood around the current incumbent solution<ref name="ref2">Glover, F., & Laguna, M. (1997). General purpose heuristics for integer programming—Part I. Journal of Heuristics, 2, 343–358. https://link.springer.com/article/10.1007/BF00132504</ref>. This approach has special advantages for large-scale, computationally intensive MILP instances because it explores the solution space in a more focused manner, improving solution quality while keeping the computations tractable.

Local branching was introduced as a refinement heuristic to address the computational tractability issues of global optimization methods such as branch-and-bound, which become resource-consuming for large problem instances. One of the main motivations for studying Local Branching is its capacity for iterative solution improvement, which makes it possible to obtain good solutions quickly with MILP solvers<ref name="ref10">Liu, D., Fischetti, M., & Lodi, A. (2022). Learning to Search in Local Branching. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4), 2224–2232. https://doi.org/10.1609/aaai.v36i4.20294</ref>. Local branching is therefore especially valuable in domains demanding quick and reliable decision-making, such as logistics optimization, production scheduling, and resource planning.


== Algorithm Discussion ==
=== Formal Description ===


Local Branching is an optimization strategy designed for MILP problems. It iteratively refines solutions by restricting the search space to a local neighborhood around the current best solution. This approach aims to improve computational efficiency while achieving better-quality solutions<ref name="ref9">Stidsen, T. (n.d.). Local Branching – A Brand New Method. Informatics and Mathematical Modelling / Operations Research, Technical University of Denmark. https://www2.imm.dtu.dk/courses/02719/lbr/lbr.pdf</ref>.
 


=== Steps of the Algorithm ===


==== Step 1: Initialization ====
* Obtain an '''initial feasible solution''' <math>\mathbf{x}^*</math> using any MILP solver or heuristic method.
* Define the '''local neighborhood''' using a Hamming distance constraint:


<math>
\sum_{i \in S} (1 - x_i) + \sum_{i \notin S} x_i \leq k, \qquad S = \{\, i : x_i^* = 1 \,\}
</math>


Here, <math>k</math> is a predefined parameter that controls the size of the neighborhood.
 


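In words, the left-hand side of the constraint counts how many binary variables change value relative to <math>\mathbf{x}^*</math>; it equals the Hamming distance between the two vectors. A standalone check (the function name is illustrative, not part of the article's code):

```python
def lb_distance(x, x_star):
    # Left-hand side of the local branching constraint:
    # variables at 1 in x_star that switch off, plus
    # variables at 0 in x_star that switch on.
    switched_off = sum(1 - xi for xi, xs in zip(x, x_star) if xs == 1)
    switched_on = sum(xi for xi, xs in zip(x, x_star) if xs == 0)
    return switched_off + switched_on

# Equals the Hamming distance for any pair of binary vectors
assert lb_distance([1, 0, 1], [1, 1, 0]) == 2
assert lb_distance([1, 1, 0], [1, 1, 0]) == 0
```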
==== Step 2: Local Optimization ====
* Solve the MILP problem restricted by the local neighborhood constraint.
* Use the objective function <math>z</math> to find a better solution <math>\mathbf{x}'</math> within the neighborhood.


==== Step 3: Update Solution ====
* If the local optimization improves the objective function, i.e. <math>Z(\mathbf{x}') > Z(\mathbf{x}^*)</math>, update the best-known solution: <math>\mathbf{x}^* = \mathbf{x}'</math>
* If no improvement is found, relax or adjust the neighborhood size <math>k</math> and repeat the process.


==== Step 4: Termination ====
* Stop the algorithm when:
** No further improvement in <math>z</math> is achieved after a set number of iterations.
** Computational time exceeds a predefined limit.
** The solution meets predefined quality criteria (e.g., a threshold objective value).
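Taken together, Steps 1–4 form a simple improve-and-recenter loop. The sketch below is a toy illustration on a small 0/1 knapsack, not the article's Pyomo code: it brute-forces the restricted subproblem inside the Hamming-distance neighborhood, whereas a real implementation would hand that subproblem to a MILP solver. Function names and the starting solution are illustrative assumptions.

```python
from itertools import combinations

def best_in_neighborhood(x_star, k, weights, values, capacity):
    # Brute-force stand-in for the restricted sub-MILP (Step 2): best
    # feasible point within Hamming distance k of the incumbent x_star.
    n = len(x_star)
    best_x, best_v = None, -1
    for d in range(k + 1):
        for flips in combinations(range(n), d):
            x = list(x_star)
            for i in flips:
                x[i] = 1 - x[i]
            weight = sum(weights[i] * x[i] for i in range(n))
            value = sum(values[i] * x[i] for i in range(n))
            if weight <= capacity and value > best_v:
                best_x, best_v = x, value
    return best_x, best_v

def local_branching(x0, k, weights, values, capacity):
    # Steps 1-4: start from a feasible incumbent, re-solve in its
    # k-neighborhood, recenter on any improvement, stop otherwise.
    x_star = list(x0)
    z_star = sum(values[i] * x_star[i] for i in range(len(x0)))
    while True:
        x_new, z_new = best_in_neighborhood(x_star, k, weights, values, capacity)
        if z_new > z_star:      # Step 3: improvement found, recenter
            x_star, z_star = x_new, z_new
        else:                   # Step 4: no improvement, terminate
            return x_star, z_star

# Example run on the small knapsack instance used later in this article,
# starting from the feasible (but suboptimal) incumbent [1, 1, 1, 0, 0]
x, z = local_branching([1, 1, 1, 0, 0], 2,
                       [2, 3, 1, 4, 5], [10, 15, 7, 25, 30], 8)
```

With <math>k=2</math> the loop escapes the value-32 starting point and reaches the optimal value of 47 for this instance.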


=== Assumptions and Conditions ===


# '''Feasibility''': An initial feasible solution <math>\mathbf{x}^*</math> must exist before local branching can be applied.
# '''Neighborhood Size''': The parameter <math>k</math> significantly influences performance; a smaller <math>k</math> focuses on local improvements, while a larger <math>k</math> allows broader exploration.
# '''MILP Solver''': Requires a reliable MILP solver to handle the subproblems.
# '''Problem Structure''': Works best for problems where solutions are sparse and where local improvements can significantly impact the objective.
 
== Numerical Examples ==
 
==='''Example: Knapsack Refinement'''===
A thief wants to steal items from a shop but is limited by a maximum weight capacity. There are 5 items available, each with a specific weight and value. The objective is to maximize the value of items stolen while staying within the weight limit.
 
==='''Item Details:'''===
* Weights: [2,3,1,4,5]
* Values: [10,15,7,25,30]
* Weight Capacity: 8
 
=== '''Tasks:''' ===
# Solve the knapsack problem using a binary MIP formulation:
#: <math>\text{Maximize: } \sum_{i=1}^{5} v_i x_i</math>
#: <math>\text{Subject to: } \sum_{i=1}^{5} w_i x_i \leq 8, \, x_i \in \{0, 1\}</math>
# Define a local branching constraint for a neighborhood where at most 1 item can be swapped:
#: <math>\sum_{i \in S} (1 - x_i) + \sum_{i \notin S} x_i \leq 1</math>
# Solve to refine the solution.
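Because this instance has only <math>2^5 = 32</math> candidate subsets, the Task 1 optimum can be confirmed by plain enumeration (a sanity check for the example, not part of the local branching method):

```python
from itertools import product

weights = [2, 3, 1, 4, 5]
values = [10, 15, 7, 25, 30]
capacity = 8

# Enumerate all 32 item subsets and keep the best feasible value
best_value = max(
    sum(v * xi for v, xi in zip(values, x))
    for x in product((0, 1), repeat=5)
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity
)
```

The optimum for this data is 47, which is the benchmark any refinement step should be measured against.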
 
=== Problem Resolution ===
[[File:Flowchart for the Example.png|thumb]]
 
==== Step 1: Formulate the Problem ====
Let:
 
<math>\ x_i \in \{0, 1\}</math>: Decision variable, where <math>\ x_i = 1</math> if item <math>\ i</math> is stolen, and <math>\ x_i =0</math> otherwise.
 
<math>\ w_i</math>: Weight of item <math>\ i</math>
 
<math>\ v_i</math>: Value of item <math>\ i</math>
 
<math>\ W</math>: Maximum weight capacity of the knapsack.
 
Objective Function:
 
Maximize:  <math>\sum_{i=1}^{n} v_i x_i</math>
 
Constraints:
 
Total weight: <math>\sum_{i=1}^{n} w_i x_i \leq W</math>
 
Binary decision variables: <math>x_i \in \{0, 1\}, \ \forall i</math>
 
==== Step 2: Problem Data ====
Items data:
 
Weights: <math>\ w</math> = [2,3,1,4,5]
 
Values: <math>\ v</math> = [10,15,7,25,30]
 
Knapsack capacity: <math>\ W</math> = 8
 
==== Step 3: Code Implementation ====
<pre>
from pyomo.environ import *

# Data
weights = [2, 3, 1, 4, 5]     # Weights of items
values = [10, 15, 7, 25, 30]  # Values of items
capacity = 8                  # Maximum capacity of the knapsack
num_items = len(weights)

# Step 1: Initial Problem Setup
model = ConcreteModel()

# Decision variables: x[i] = 1 if item i is stolen, 0 otherwise
model.x = Var(range(num_items), domain=Binary)

# Objective: Maximize total value
model.profit = Objective(expr=sum(values[i] * model.x[i] for i in range(num_items)), sense=maximize)

# Knapsack capacity constraint
model.capacity = Constraint(expr=sum(weights[i] * model.x[i] for i in range(num_items)) <= capacity)

# Solve the initial problem
solver = SolverFactory('glpk')
solver.solve(model)

# Extract the initial solution
initial_solution = [int(model.x[i]()) for i in range(num_items)]
initial_value = sum(values[i] * initial_solution[i] for i in range(num_items))
print("Initial Solution:", initial_solution)
print("Initial Value:", initial_value)

# Step 2: Add Local Branching Constraint
# Auxiliary variables for absolute value linearization
model.z = Var(range(num_items), domain=NonNegativeReals)

# Linearize |x[i] - initial_solution[i]| via two inequalities
model.abs_constraints = ConstraintList()
for i in range(num_items):
    model.abs_constraints.add(model.z[i] >= model.x[i] - initial_solution[i])
    model.abs_constraints.add(model.z[i] >= initial_solution[i] - model.x[i])

# Local branching constraint: at most k changes allowed
k = 1
model.local_branching = Constraint(expr=sum(model.z[i] for i in range(num_items)) <= k)

# Solve the problem with the local branching constraint
solver.solve(model)

# Extract the refined solution
refined_solution = [int(model.x[i]()) for i in range(num_items)]
refined_value = sum(values[i] * refined_solution[i] for i in range(num_items))
print("Refined Solution:", refined_solution)
print("Refined Value:", refined_value)
</pre>
 
==== Step 4: Add Local Branching Constraint ====
After obtaining the initial solution, the thief refines it by exploring a neighborhood around the initial solution. The local branching constraint is:
 
<math>
\sum_{i \in S} (1 - x_i) + \sum_{i \notin S} x_i \leq k
</math>
 
'''For linearization:''' add auxiliary variables <math>z_i</math> to represent the absolute difference between the initial solution and the current solution:


<math>
z_i \geq x_i - x_{i}^\text{initial}
</math> 
<math>
z_i \geq x_{i}^\text{initial} - x_i
</math>
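These two inequalities only force <math>z_i</math> from below; because each <math>z_i</math> appears in a <math>\leq k</math> constraint, the solver pushes it down to exactly <math>|x_i - x_i^\text{initial}|</math>. A quick standalone check over the four binary cases:

```python
# For binary x and x_init, the tightest z satisfying
# z >= x - x_init and z >= x_init - x is their maximum,
# which equals |x - x_init| in every case.
for x in (0, 1):
    for x_init in (0, 1):
        z_min = max(x - x_init, x_init - x)
        assert z_min == abs(x - x_init)
```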
==== Step 5: Analyze Results ====
Refined Solution (with <math>k</math>=1):
* Neighborhood restricted to solutions with at most one item added or removed.
* If a better solution exists in this neighborhood, it will be found.
* If no better solution exists within the local branching neighborhood, the initial solution is already optimal within the restricted region.
{| class="wikitable"
! Metric !! Initial Solution !! Refined Solution (k=1)
|-
| Selected Items || [1, 0, 1, 0, 1] || [1, 0, 1, 0, 1]
|-
| Total Value || 47 || 47
|-
| Computation Time || 0.065 seconds || 0.012 seconds
|}
This indicates that the local branching solve is more time-efficient: restricting the search to a small neighborhood lets the solver finish much faster than the initial global solve.

==== Effect of Changing k ====
* Smaller <math>k</math> (e.g., <math>k</math>=1):
** Only solutions that are very similar to the initial solution are explored.
** The optimization focuses on fine-tuning the solution by making small adjustments, such as adding or removing a single item.
** This is faster since fewer nodes in the search tree are explored.
** However, the chance of finding significant improvements is lower, as the search is restricted.
* Larger <math>k</math> (e.g., <math>k</math>=2,3):
** The search neighborhood becomes broader, allowing more variables (items in this case) to change.
** This increases the chance of finding better solutions but also increases the computational time.
** Larger neighborhoods resemble the global search space, reducing the speed advantage of local branching.
* Very Large <math>k</math>:
** If <math>k</math> is too large, the local branching constraint becomes irrelevant, and the method effectively turns into a global search.
** This negates the benefits of local branching, as the entire feasible region is explored.
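The growth of the neighborhood with <math>k</math> can be quantified: for <math>n</math> binary variables, the number of candidate solutions within Hamming distance <math>k</math> is <math>\sum_{d=0}^{k} \binom{n}{d}</math>. An illustrative calculation for the five-item knapsack:

```python
from math import comb

n = 5  # number of binary variables (items)

# Neighborhood size = number of 0/1 vectors within Hamming distance k
sizes = [sum(comb(n, d) for d in range(k + 1)) for k in (1, 2, 3, 5)]
# k=1 -> 6, k=2 -> 16, k=3 -> 26, k=5 -> 32 (the whole 0/1 space)
```

At <math>k = n</math> the neighborhood is the entire 0/1 space, which is exactly why a very large <math>k</math> degenerates into global search.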
==== Trade-Offs in This Problem ====
* Speed vs. Solution Quality:
** Smaller k results in faster optimization because the solver only explores a small subset of the feasible region.
** Larger k allows for broader exploration, increasing the chance of finding better solutions but also increasing computational effort.
* Knapsack-Specific Considerations:
** The knapsack problem often has multiple near-optimal solutions with small differences in total value.
** Local branching works well here because it fine-tunes the solution by exploring close variations (e.g., swapping high-value, low-weight items).
* When Local Branching is Effective:
** A good initial solution is available.
** The feasible region is large, making global search slow.
** Small changes in the solution can yield significant improvements.


== Application ==
Local Branching is a powerful optimization strategy for solving MILP problems, with applications spanning industries such as logistics, manufacturing, energy, data science, and engineering<ref name="ref3">Samsatli, S., & Samsatli, N. J. (2018). A multi-objective MILP model for the design and operation of future integrated multi-vector energy networks capturing detailed spatio-temporal dependencies. Applied Energy, 220, 893–920. https://www.sciencedirect.com/science/article/pii/S0306261917313375</ref>. It is particularly useful for challenges such as scheduling, production planning, nesting, and transportation logistics, where decision-makers look for high-quality solutions within reasonable computational times<ref name="ref4">Kim, K., Kwon, S., & Choi, M. (2024). Optimization of Production Scheduling for the Additive Manufacturing of Ship Models Using a Hybrid Method. Journal of Marine Science and Engineering, 12(11), 1961. https://doi.org/10.3390/jmse12111961</ref>. By focusing on local improvements, organizations can achieve better performance at lower cost.

'''Railway Crew Scheduling:'''
Local branching has advantages over traditional optimization methods that rely on branch-and-bound or cutting-plane approaches<ref name="ref1" />. In railway crew scheduling, traditional methods struggle with the sheer size and complexity of the problem due to the large number of binary decision variables. Local branching introduces constraints that confine the search to a smaller, more manageable sample space without sacrificing solution quality<ref name="ref5">Heil, J., Hoffmann, K., & Buscher, U. (2020). Railway crew scheduling: Models, methods and applications. European Journal of Operational Research, 283(2), 405–425. https://doi.org/10.1016/j.ejor.2019.06.016</ref>. This approach enables the solver to find solutions with fewer scheduling conflicts and better adherence to operational constraints than standard methods.

'''Inventory Routing Problems:'''
A study examined how local branching can lead to better solutions for inventory routing problems, which often involve constant changes<ref name="ref6">Fischetti, M., & Lodi, A. (2008). Repairing MIP infeasibility through local branching. Computers & Operations Research, 35(5), 1436–1445. https://doi.org/10.1016/j.cor.2006.08.004</ref>. Results showed that local branching reached alternative solutions quickly and effectively with small changes to the current solution, since it searched within a small range of the solution space. It found routes and schedules that better balanced inventory costs and routing efficiency, leading to substantial savings in operational costs<ref name="ref7">Federgruen, A., & Simchi-Levi, D. (1995). Chapter 4: Analysis of vehicle routing and inventory-routing problems. In M. O. Ball, T. L. Magnanti, C. L. Monma, & G. L. Nemhauser (Eds.), Network Routing (Vol. 8, pp. 297–373). Elsevier. https://doi.org/10.1016/S0927-0507(05)80108-2</ref>.

Well-known software platforms supporting this work include ILOG CPLEX, which has been used in railway crew scheduling, and Gurobi, a leading MIP solver that has been used in adaptations of Local Branching for modern applications<ref name="ref8">Ernst, A., Jiang, H., Krishnamoorthy, M., Nott, H., & Sier, D. (2001). Rail crew scheduling and rostering optimization algorithms. In Computer-Aided Scheduling of Public Transport (pp. 53–71). Berlin, Heidelberg: Springer. https://link.springer.com/chapter/10.1007/978-3-642-56423-9_4</ref>.


== Conclusion ==
Local Branching is a major advance in the area of mixed-integer linear programming, providing a strong method for solving large and complex optimization problems. By constraining the search space to a region around the current solution, it increases computational efficiency while retaining or even improving the solution quality. For these reasons, the approach is particularly useful in practical applications where fast convergence is vital. Local Branching is also effective when applied to other areas such as logistics, manufacturing, and energy management in solving real-world optimization problems involving scheduling, resource allocation, and operational efficiency.
Local Branching also aids the optimization process by reducing the search space, yielding faster convergence and often better solutions than classical global approaches. Future research on Local Branching could benefit from incorporating machine-learning methods that dynamically adjust neighborhood sizes, increasing its adaptability, efficiency, and scalability across different optimization problems.


== References ==

Latest revision as of 19:13, 15 December 2024

Author: Bruce Wang (bw549), Ashley Yang (yy2333), Pufan You (py234), Sandro Xu (sx289), Yihan Wang (yw2744) (ChemE 6800 Fall 2020)

# Fischetti, M., & Lodi, A. (2003). Local branching. Mathematical Programming, 98(1), 23–47. https://doi.org/10.1007/s10107-003-0395-5
# Glover, F., & Laguna, M. (1997). General purpose heuristics for integer programming—Part I. Journal of Heuristics, 2, 343–358. https://link.springer.com/article/10.1007/BF00132504
# Liu, D., Fischetti, M., & Lodi, A. (2022). Learning to Search in Local Branching. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4), 2224–2232. https://doi.org/10.1609/aaai.v36i4.20294
# Stidsen, T. (n.d.). Local Branching – A Brand New Method. Informatics and Mathematical Modelling / Operations Research, Technical University of Denmark. https://www2.imm.dtu.dk/courses/02719/lbr/lbr.pdf
# Samsatli, S., & Samsatli, N. J. (2018). A multi-objective MILP model for the design and operation of future integrated multi-vector energy networks capturing detailed spatio-temporal dependencies. Applied Energy, 220, 893–920. https://www.sciencedirect.com/science/article/pii/S0306261917313375
# Kim, K., Kwon, S., & Choi, M. (2024). Optimization of Production Scheduling for the Additive Manufacturing of Ship Models Using a Hybrid Method. Journal of Marine Science and Engineering, 12(11), 1961. https://doi.org/10.3390/jmse12111961
# Heil, J., Hoffmann, K., & Buscher, U. (2020). Railway crew scheduling: Models, methods and applications. European Journal of Operational Research, 283(2), 405–425. https://doi.org/10.1016/j.ejor.2019.06.016
# Fischetti, M., & Lodi, A. (2008). Repairing MIP infeasibility through local branching. Computers & Operations Research, 35(5), 1436–1445. https://doi.org/10.1016/j.cor.2006.08.004
# Federgruen, A., & Simchi-Levi, D. (1995). Chapter 4: Analysis of vehicle routing and inventory-routing problems. In M. O. Ball, T. L. Magnanti, C. L. Monma, & G. L. Nemhauser (Eds.), Network Routing (Vol. 8, pp. 297–373). Elsevier. https://doi.org/10.1016/S0927-0507(05)80108-2
# Ernst, A., Jiang, H., Krishnamoorthy, M., Nott, H., & Sier, D. (2001). Rail crew scheduling and rostering optimization algorithms. In Computer-Aided Scheduling of Public Transport (pp. 53–71). Berlin, Heidelberg: Springer. https://link.springer.com/chapter/10.1007/978-3-642-56423-9_4



Stewards: Nathan Preuss, Wei-Han Chen, Tianqi Xiao, Guoqing Hu