Stochastic dynamic programming

Authors: Bo Yuan, Ali Amadeh, Max Greenberg, Raquel Sarabia Soto and Claudia Valero De la Flor (CHEME/SYSEN 6800, Fall 2021)

Theory, methodology and algorithm discussion

Theory

Stochastic dynamic programming combines stochastic programming and dynamic programming. To understand it better, we first give two definitions:

  • Stochastic programming. Unlike in a deterministic problem, where a decision’s outcome is determined only by the decision itself and all the parameters are known, in stochastic programming there is uncertainty, and a decision results in a distribution of possible transformations.
  • Dynamic programming. An optimization method that divides a complex problem into simpler subproblems and solves them recursively, so that the optimal sub-solutions combine into the optimum of the original problem.

In any stochastic dynamic programming problem, we must define the following concepts:

  • Policy, which is the set of rules used to make a decision.
  • Initial vector, <math>p</math>, where <math>p\in D</math> and <math>D</math> is a finite closed region.
  • Choice made, <math>q</math>, where <math>q\in S</math> and <math>S</math> is a set of possible choices.
  • Stochastic vector, <math>z</math>.
  • Distribution function <math>G_q\left(p,z\right)</math>, associated with <math>z</math> and dependent on <math>p</math> and <math>q</math>.
  • Return, which is the expected value of the function of the final state, <math>R\left(p_N\right)</math>.

In a stochastic dynamic programming problem, we assume that <math>z</math> is known after the decision of stage <math>n</math> has been made and before the decision of stage <math>n+1</math> has to be made.

Methodology and algorithm

First, we define the N-stage return <math>f_N\left(p\right)</math>, obtained using the optimal policy and starting with vector <math>p</math>:

<math>f_N\left(p\right)=\max_{\pi}{E\left\{R\left(p_N\right)\right\}}</math>

where <math>R\left(p_N\right)</math> is the function of the final state <math>p_N</math>.

Second, we define the initial transformation as <math>T_q</math>, and <math>p_1=T_q\left(p,z\right)</math>, as the state resulting from it. The return after the remaining <math>N-1</math> stages will be <math>f_{N-1}\left(p_1\right)</math>, using the optimal policy. Therefore, we can formulate the expected return due to the initial choice <math>q</math> made in state <math>p</math>:

<math>\int{f_{N-1}\left(T_q\left(p,z\right)\right)dG_q\left(p,z\right)}</math>

Having defined that, the recurrence relation can be expressed as:

<math>f_N\left(p\right)=\max_{q\in S}{\int{f_{N-1}\left(T_q\left(p,z\right)\right)dG_q\left(p,z\right)}},\qquad N\geq 2</math>

With:

<math>f_1\left(p\right)=\max_{q\in S}{\int{R\left(T_q\left(p,z\right)\right)dG_q\left(p,z\right)}}</math>
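To make the recurrence concrete, below is a minimal sketch of the backward recursion in Python, assuming a finite state set, a finite choice set, and a stochastic vector <math>z</math> discretized to finitely many values with known probabilities; the transformation <math>T_q\left(p,z\right)</math> and terminal return <math>R\left(p\right)</math> are supplied by the user, and all names in the sketch are illustrative rather than standard.

<syntaxhighlight lang="python">
def solve_stochastic_dp(states, choices, z_values, z_probs, T, R, N):
    """Sketch: f[n][p] = (optimal n-stage return from state p, best first choice)."""
    # Boundary: with zero stages to go, the return is the terminal function R(p).
    f = {0: {p: (R(p), None) for p in states}}
    for n in range(1, N + 1):
        f[n] = {}
        for p in states:
            best_q, best_val = None, float("-inf")
            for q in choices:
                # Expected value of applying transformation T_q(p, z): the
                # discretized analogue of the integral against dG_q(p, z).
                expected = sum(prob * f[n - 1][T(p, q, z)][0]
                               for z, prob in zip(z_values, z_probs))
                if expected > best_val:
                    best_q, best_val = q, expected
            # Recurrence: f_n(p) = max over q of E{ f_{n-1}(T_q(p, z)) }
            f[n][p] = (best_val, best_q)
    return f
</syntaxhighlight>

Here f[N][p][0] plays the role of the N-stage return <math>f_N\left(p\right)</math>, and f[N][p][1] is the corresponding optimal first choice.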

This formulation is very general, and different models have been developed depending on the problem characteristics. For this reason, we present the algorithms of two different models as examples: a finite-stage model and a model for Approximate Dynamic Programming (ADP).

Finite-stage model: a stock-option model

This model was created to maximize the expected profit that we can obtain in <math>N</math> days (stages) from buying and selling stocks. It is considered a finite-stage model because we know in advance for how many days we are calculating the expected profit.

First, we define the stock price on the <math>n</math>th day as <math>s_n</math>. We assume the following:

<math>s_{n+1}=s_n+X_{n+1}</math>

where <math>X_1,X_2,\ldots</math> are independent of <math>s_n</math> and of each other, and identically distributed with distribution <math>F</math>.

Second, we also assume that we have the chance to buy a stock at a fixed price <math>c</math> and that this stock can be sold at the current price <math>s_n</math>. We then define <math>V_n\left(s\right)</math> as the maximal expected profit when the current price is <math>s</math> and <math>n</math> days remain, and it satisfies the following optimality equation:

<math>V_n\left(s\right)=\max\left\{s-c,\ \int{V_{n-1}\left(s+x\right)dF\left(x\right)}\right\}</math>

And the boundary condition is the following:

<math>V_0\left(s\right)=\max\left\{s-c,\ 0\right\}</math>
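As a worked illustration of the optimality equation, the following is a minimal sketch in Python, assuming the daily price change <math>X</math> takes only finitely many values with known probabilities; the exercise price, support, and probabilities used here are hypothetical.

<syntaxhighlight lang="python">
from functools import lru_cache

c = 10.0                           # hypothetical fixed purchase (exercise) price
x_values = [-1.0, 0.0, 1.0]        # hypothetical support of the daily change X
x_probs = [0.3, 0.4, 0.3]          # hypothetical probabilities (must sum to 1)

@lru_cache(maxsize=None)
def V(n, s):
    """Maximal expected profit when the current price is s and n days remain."""
    if n == 0:
        return max(s - c, 0.0)     # boundary condition V_0(s) = max(s - c, 0)
    sell_now = s - c               # exercise today: buy at c, sell at s
    wait = sum(p * V(n - 1, s + x) for x, p in zip(x_values, x_probs))
    return max(sell_now, wait)     # optimality equation for V_n(s)

print(V(5, 9.0))                   # e.g., expected profit with 5 days left and price 9
</syntaxhighlight>

Because the assumed price changes lie on a lattice, memoization keeps the recursion from revisiting the same price and day more than once.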

Approximate Dynamic Programming

Approximate dynamic programming (ADP) is an algorithmic strategy for solving complex problems that can be stochastic. Since the topic of this page is stochastic dynamic programming, we will discuss ADP from this perspective.

To develop the ADP algorithm, we present Bellman’s equation using the expectation form:

<math>V_t\left(S_t\right)=\max_{x_t}{\left(C\left(S_t,x_t\right)+\gamma\ E\left\{V_{t+1}\left(S_{t+1}\right)|S_t\right\}\right)}</math>

where <math>S_{t+1}=S^M\left(S_t,x_t,W_{t+1}\right)</math> and <math>x_t=X^\pi\left(S_t\right)</math>.

The variables used and their meanings are the following:

  • State of the system, <math>S_t</math>
  • Function <math>X^\pi\left(S_t\right)</math>. It represents the policy to make a decision.
  • Transition function, <math>S^M\left(S_t,x_t,W_{t+1}\right)</math>. It describes the transition from state <math>S_t</math> to state <math>S_{t+1}</math>.
  • Action taken in state <math>S_t</math>, <math>x_t</math>
  • Information observed after taking action <math>x_t</math>, <math>W_{t+1}</math>
  • <math>V_t\left(S_t\right)</math> gives the expected value of being in state <math>S_t</math> at time <math>t</math> and making a decision following the optimal policy.

The goal of ADP is to replace the value of <math>V_t\left(S_t\right)</math> with a statistical approximation <math>{\bar{V}}_t\left(S_t\right)</math>. Therefore, after iteration <math>n</math>, we have an approximation <math>{\bar{V}}_t^n\left(S_t\right)</math>. Another feature of ADP is that it steps forward in time. To go from one iteration to the following, we define our decision function as:

<math>X^\pi\left(S_t^n\right)=\max_{x_t}{\left(C\left(S_t^n,\ x_t\right)+\gamma\ E\left\{{\bar{V}}_{t+1}^{n-1}\left(S_{t+1}\right)|S_t^n\right\}\right)}</math>

Next, we define <math>x_t^n</math> as the value of <math>x_t</math> that solves this problem and <math>{\hat{v}}_t^n</math> as the estimated value of being in state <math>S_t^n</math>:

<math>{\hat{v}}_t^n=\max_{x_t}{\left(C\left(S_t^n,\ x_t\right)+\gamma\ E\left\{{\bar{V}}_{t+1}^{n-1}\left(S_{t+1}\right)|S_t^n\right\}\right)}</math>
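As a rough illustration of this forward-stepping strategy, the sketch below uses a lookup-table approximation of <math>{\bar{V}}_t\left(S_t\right)</math> that is updated by smoothing <math>{\hat{v}}_t^n</math> with a stepsize, and it replaces the expectation over <math>W_{t+1}</math> by a single sample for simplicity; the contribution function, transition function, sampling routine, and all names are assumptions made for the example, not part of a standard ADP library.

<syntaxhighlight lang="python">
from collections import defaultdict

def adp(S0, actions, C, SM, sample_W, T, n_iters, gamma=0.95, alpha=0.1):
    """Sketch of a forward-pass ADP loop with a lookup-table value approximation."""
    V_bar = defaultdict(float)                     # V_bar[(t, S)] approximates V_t(S)
    for n in range(n_iters):
        S = S0                                     # start each iteration from the initial state
        for t in range(T):
            def objective(x):
                # Approximate Bellman value of action x, with the expectation over
                # W_{t+1} replaced by one sampled outcome for simplicity.
                W = sample_W(S, x)
                return C(S, x) + gamma * V_bar[(t + 1, SM(S, x, W))]
            x_n = max(actions, key=objective)      # x_t^n: action that solves the problem
            v_hat = objective(x_n)                 # v_hat_t^n: estimated value of state S_t^n
            # Smooth the new observation into the approximation of V_t at the visited state.
            V_bar[(t, S)] = (1 - alpha) * V_bar[(t, S)] + alpha * v_hat
            # Step forward in time to S_{t+1} using freshly sampled information W_{t+1}.
            S = SM(S, x_n, sample_W(S, x_n))
    return V_bar
</syntaxhighlight>

In practice, the expectation would be estimated more carefully (for example, by averaging several samples of <math>W_{t+1}</math>), and the stepsize would typically decrease with the iteration counter <math>n</math>.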