# Eight step procedures


*Latest revision as of 06:35, 21 December 2020*

Author: Eljona Pushaj, Diana Bogdanowich, Stephanie Keomany (SysEn 5800 Fall 2020)

# Introduction

The eight-step procedures are a simplified, multi-stage approach for determining optimal solutions in mathematical optimization. Dynamic programming, developed by Richard Bellman in the 1950s^{[1]}, is used to solve for the maximization or minimization of the objective function by transforming the problem into smaller steps and enumerating all the different possible solutions and finding the optimal solution.

In the eight-step procedure, a problem can be broken down into subproblems to solve. Using the solutions from the subproblems in a recursive manner, the solution can be determined after all the solutions of the subproblems are calculated to find the best solution, which demonstrates the principle of optimality: Any optimal policy has the property that, whatever the current state and current decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the current decision.^{[2]} Such a standard framework is used so that dynamic programming stores the values of the subproblems to avoid recomputing them, and thus reduces the time needed to solve the problem.^{[3]}

# Theory, Methodology, and/or Algorithmic Discussion

### Methodology

To solve a problem using the eight-step procedure, one must use the following steps:

**Step 1: Specify the stages of the problem**

The stages of a dynamic programming problem can be defined as points where decisions are made. Specifying the stages also divides the problem into smaller pieces.

**Step 2: Specify the states for each stage**

The states of a problem are defined as the knowledge necessary to make a decision. There are multiple states for each stage. In general, the states consist of the information that is needed to solve the smaller problem within each stage.^{[4]}

**Step 3: Specify the allowable actions for each state in each stage**

The allowable actions define the decision that must be made at each stage.

**Step 4: Describe the optimization function using an English-language description.**

A plain-language statement of what is being optimized at each stage grounds the mathematical formulation in the later steps.

**Step 5: Define the boundary conditions**

This establishes a starting point for solving the problem.

**Step 6: Define the recurrence relation**

This is often denoted with a function, and shows the relationship between the value of a decision at a particular stage and the value of the optimal decisions made at previous stages.

**Step 7: Compute the optimal value from the bottom-up**

This step can be done manually or by using programming. Note that for each state, an optimal decision made at the remaining stages of the problem is independent of the decisions made at previous states.

**Step 8: Arrive at the optimal solution**

This is the final step for solving a problem using the eight-step procedure.
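As an illustration of how the eight steps fit together, the sketch below applies them to a small, hypothetical coin-change minimization (the denominations and target amount are assumptions for illustration, not from the article); comments mark where each step appears.

```python
# Hypothetical illustration of the eight-step procedure:
# minimize the number of coins needed to reach a target amount.

def min_coins(coins, target):
    INF = float("inf")
    # Steps 1-2: stages are the amounts 0..target; the state is the amount left.
    # Step 3: the allowable actions are the coins that fit in the amount.
    # Step 4: in English -- use as few coins as possible to reach the target.
    # Step 5: boundary condition -- zero coins are needed for amount 0.
    f = [0] + [INF] * target
    # Step 7: compute the optimal values bottom-up over all states.
    for amount in range(1, target + 1):
        for c in coins:
            if c <= amount and f[amount - c] + 1 < f[amount]:
                # Step 6: recurrence f(s) = min over coins c of f(s - c) + 1.
                f[amount] = f[amount - c] + 1
    # Step 8: read off the optimal solution for the full problem.
    return f[target]

print(min_coins([1, 3, 4], 6))  # 2 coins: 3 + 3
```

Storing `f` and reusing it, rather than recursing from the top, is exactly the reuse of subproblem values that the introduction describes.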

# Numerical Example

*Suppose we have a knapsack with a weight capacity of C=5 and N=2 types of items. An item of type n weighs w[n] and generates a benefit of b[n, j] when packing j items of type n into the knapsack; however, only a[n] units of this item are available.*

To solve a knapsack problem, we use the following steps:

**Step 1: Specify the stages of the problem**

Weight capacity C = 5 and N = 2 item types

**Step 2: Specify the states for each stage**

Item types are stages: n = 1, 2

Remaining capacity: s = 1, 2, 3, 4, 5

**Step 3: Specify the allowable actions for each state in each stage**

$U_{2}(5) = 0, 1, \ldots, \min\left\{ a[2], \left\lfloor \tfrac{5}{w[2]} \right\rfloor \right\}$ = **{0, 1, 2}**

**Step 4: Describe the optimization function using an English-language description.**

Maximize the total benefit of the items packed without exceeding the remaining capacity.

**Step 5: Define the boundary conditions**

Boundary Conditions:

$f^{*}_{n+1}(s) = 0$, *s = 0, 1, 2, 3, 4, 5*, *C = 5*

**Step 6: Define the recurrence relation**

$f^{*}_{2}(5) = \max\left\{ b[2,j] + f^{*}_{3}(5 - j \cdot w[2]) \right\}$

**Step 7: Compute the optimal value from the bottom-up**

$f^{*}_{2}(5) = \max\left\{ b[2,j] + f^{*}_{3}(5 - j \cdot w[2]) \right\}$

$f^{*}_{n+1}(s) = 0$, *s = 0, 1, 2, 3, 4, 5*, *C = 5*

| Unused Capacity s | $f^{*}_{1}(s)$ | Type 1 opt | $f^{*}_{2}(s)$ | Type 2 opt | $f^{*}_{3}(s)$ |
|---|---|---|---|---|---|
| 5 | 9 | 0 | 9 | 2 | 0 |
| 4 | 9 | 0 | 9 | 2 | 0 |
| 3 | 4 | 0 | 4 | 1 | 0 |
| 2 | 4 | 0 | 4 | 1 | 0 |
| 1 | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 0 | 0 | 0 | 0 |

**Step 8: Arrive at the optimal solution**

Reading the table at full capacity *s* = 5 gives the optimal total benefit of 9.
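The bottom-up computation of Steps 5–7 can be sketched in code. The item weights, availabilities, and benefit values below are assumptions chosen to be consistent with the table of optimal values above, since the article does not state them explicitly:

```python
# Bottom-up DP for the bounded knapsack of the example (C = 5, N = 2).
# The weights, availabilities, and benefits below are assumed for
# illustration; the article does not list them explicitly.

C = 5
w = {1: 1, 2: 2}             # assumed item weights w[n]
a = {1: 3, 2: 2}             # assumed availabilities a[n]
b = {                        # assumed benefits b[n][j] for packing j items of type n
    1: {0: 0, 1: 0, 2: 0, 3: 0},
    2: {0: 0, 1: 4, 2: 9},
}

# Boundary condition (Step 5): f[N+1][s] = 0 for every remaining capacity s.
f = {3: {s: 0 for s in range(C + 1)}}

# Recurrence (Step 6), computed bottom-up over the stages (Step 7):
# f[n][s] = max over j of b[n][j] + f[n+1][s - j*w[n]]
for n in (2, 1):
    f[n] = {}
    for s in range(C + 1):
        j_max = min(a[n], s // w[n])   # allowable actions (Step 3)
        f[n][s] = max(b[n][j] + f[n + 1][s - j * w[n]] for j in range(j_max + 1))

print(f[2][5])  # optimal benefit from stage 2 with full capacity: 9
print(f[1][5])  # optimal benefit over both item types (Step 8)
```

With these assumed inputs the computed values `f[2][s]` match the table row by row, e.g. `f[2][3] = 4` with one type-2 item packed.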

# Applications

The following are some applications where dynamic programming is used. Dynamic programming applies to an optimization problem when the objective involves maximization, minimization, or counting, and when the optimal solution can be determined by systematically enumerating the solutions of its subproblems.

**Shortest/ Longest Path Problem**

In the shortest path problem, the path with the lowest cost must be determined in a problem with multiple nodes between the beginning node *s* and the final node *e*. Travelling from one node to another incurs a cost *c(p, q)*, and the objective is to reach *e* with the smallest total cost possible. The eight-step procedure can be used to enumerate the candidate paths from which the optimal solution is determined.^{[5]}

Likewise, as a maximization problem, the longest path can be determined by finding the solution with the highest cost to travel from node *s* to node *e*.
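A minimal sketch of the backward recursion for the shortest path is shown below; the graph, node names, and edge costs are hypothetical. Swapping `min` for `max` gives the longest-path variant.

```python
# Shortest path from s to e by backward DP on a small acyclic graph.
# The graph and the costs c(p, q) below are hypothetical.

cost = {
    ("s", "a"): 2, ("s", "b"): 5,
    ("a", "b"): 1, ("a", "e"): 7,
    ("b", "e"): 2,
}
order = ["e", "b", "a", "s"]  # reverse topological order of the graph

# Boundary condition: zero cost to reach e from itself.
f = {"e": 0}
for p in order[1:]:
    # Recurrence: f(p) = min over successors q of c(p, q) + f(q).
    f[p] = min(c + f[q] for (u, q), c in cost.items() if u == p and q in f)

print(f["s"])  # minimum cost from s to e: 5 (via a then b)
```

Each node's value is computed once and reused by every predecessor, which is what makes the DP cheaper than enumerating all paths individually.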

**Knapsack Problem**

The knapsack problem is an example of determining the distribution of effort when there are limited resources to be shared with competing entities, and the goal is to maximize the benefit of the distribution. Dynamic programming is used when the increase in benefit from increasing the quantity of resources is not linearly proportional. The volume may also be considered in addition to the weight of the resources. A volume constraint is added to the problem and represented in the state at stage *n* by an ordered pair (*s, v*) for remaining weight and volume. With *d* constraints, the number of states can grow exponentially with a *d*-dimensional state space even if the value of *d* is small. The problem becomes infeasible to solve and is referred to as the curse of dimensionality. However, the curse has faded due to advances in computational power.^{[6]}

**Inventory Planning Problem**

In inventory management, dynamic programming is used to determine how to meet anticipated and unexpected demand in order to minimize overall costs. Tracking an inventory system involves establishing a set of policies that monitor and control the levels of inventory, determining when a stock must be replenished, and the quantity of parts to order. For example, a production schedule can be computationally solved by knowing the demand, unit production costs, and inventory supply limits in order to keep the production costs below a certain rate.^{[7]}
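A minimal sketch of such an inventory model, with hypothetical demand and cost figures, is shown below; it uses memoized recursion, which stores each subproblem value just as the bottom-up computation of Step 7 would.

```python
# Minimal sketch of an inventory/production-planning DP: choose a
# production level each period to meet demand at minimum total
# production-plus-holding cost. All figures below are hypothetical.

from functools import lru_cache

demand = [2, 3, 2]   # demand per period
max_prod = 4         # production capacity per period
max_inv = 4          # storage capacity
c_prod = 3           # unit production cost
c_hold = 1           # unit holding cost per period

@lru_cache(maxsize=None)
def f(t, inv):
    """Minimum cost from period t onward with inv units in stock."""
    if t == len(demand):              # boundary condition: end of horizon
        return 0
    best = float("inf")
    for x in range(max_prod + 1):     # allowable actions: production level x
        end_inv = inv + x - demand[t]
        if 0 <= end_inv <= max_inv:   # demand must be met, storage respected
            best = min(best, c_prod * x + c_hold * end_inv + f(t + 1, end_inv))
    return best

print(f(0, 0))  # minimum total cost starting with empty stock: 21
```

Here producing exactly to demand each period is optimal because holding stock only adds cost; with a setup cost per production run, the DP would instead favor batching.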

**Needleman-Wunsch Algorithm (Global Sequence Alignment)**

Developed by Saul B. Needleman and Christian D. Wunsch in 1970, the Needleman-Wunsch algorithm, also known as global sequence alignment, is used to find similarities within protein or nucleotide sequences. This algorithm is an application of dynamic programming that divides a large problem, such as a long sequence, into smaller subproblems, and the solutions of the subproblems are used to find the optimal alignments with the highest scores. A matrix is constructed from the strings of the protein or nucleotide sequences. A scoring system is determined for each of the nucleotide pairs (adenine, guanine, cytosine, thymine), where a pair can be a match (+1), a mismatch (-1), or a gap (-1). The sum of the scores determines the value of the entire alignment. The scores are calculated for the pairs and filled out in the matrix. To find the optimal alignment, one performs a "traceback" starting at the bottom-right cell of the matrix and working back to the upper left. The algorithm is limited in that it aligns entire sequences rather than finding local regions of similarity.^{[8]}
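A sketch of the scoring phase, using the match (+1), mismatch (-1), and gap (-1) scores from the text; the example sequences are hypothetical.

```python
# Needleman-Wunsch global alignment score with the scoring from the text:
# match +1, mismatch -1, gap -1. The example sequences are hypothetical.

def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # Boundary conditions: aligning a prefix against an empty string
    # costs one gap per character.
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    # Recurrence: each cell takes the best of a diagonal step
    # (match/mismatch) or a gap in either sequence.
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = F[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            F[i][j] = max(diag, F[i - 1][j] + gap, F[i][j - 1] + gap)
    # A traceback from F[n][m] to F[0][0] would recover the alignment itself.
    return F[n][m]

print(nw_score("GATTACA", "GCATGCU"))  # optimal global alignment score: 0
```

The matrix `F` holds the subproblem values (best score for each pair of prefixes), so each is computed exactly once rather than re-derived for every candidate alignment.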

# Conclusion

The eight-step procedure is an approach used in dynamic programming to transform a problem into simpler problems to yield an optimal solution. The recursive nature of the procedure allows for the optimization problems to be solved using computational models that reduce time and effort and can be used in many applications across many industries.

# References

- ↑ Bellman, Richard. “The Theory of Dynamic Programming.” Bulletin of the American Mathematical Society, vol. 60, 1954, pp. 503–515, https://www.ams.org/journals/bull/1954-60-06/S0002-9904-1954-09848-8/S0002-9904-1954-09848-8.pdf. Accessed 18 Nov 2020.
- ↑ Bradley, Stephen P. Applied Mathematical Programming. Addison-Wesley, 1977, pp. 320–342. Accessed 18 Nov 2020.
- ↑ Gavin-Hughes, Sam. “Dynamic Programming for Interviews.” Byte by Byte, https://www.byte-by-byte.com/dpbook/. Accessed 18 Nov 2020.
- ↑ Chinneck, J. W. “Chapter 15: Dynamic Programming.” 2015, https://www.sce.carleton.ca/faculty/chinneck/po/Chapter15.pdf.
- ↑ Neumann, K. “Dynamic Programming: Basic Concepts and Applications.” Optimization in Planning and Operations of Electric Power Systems, Physica, Heidelberg, 1993, pp. 31–56.
- ↑ Taylor, C. Robert. Applications of Dynamic Programming to Agricultural Decision Problems. CRC Press, 2019.
- ↑ Bellman, Richard. “Dynamic Programming Approach to Optimal Inventory Processes with Delay in Delivery.” Quarterly of Applied Mathematics, vol. 18, 1961, pp. 399–403, https://www.ams.org/journals/qam/1961-18-04/S0033-569X-1961-0118516-2/S0033-569X-1961-0118516-2.pdf. Accessed 19 Nov 2020.
- ↑ Needleman, S. B., and Wunsch, C. D. “A General Method Applicable to the Search for Similarities in the Amino Acid Sequence of Two Proteins.” Journal of Molecular Biology, vol. 48, 1970, pp. 443–453.