Markov decision process

Author: Eric Berg (eb645) (SysEn 5800 Fall 2020)
 


= Introduction =
A Markov Decision Process (MDP) is a stochastic sequential decision making method.<math>^1</math> Sequential decision making is applicable any time there is a dynamic system controlled by a decision maker, where decisions are made sequentially over time. MDPs can be used to determine what action the decision maker should take given the current state of the system and its environment. This decision making process takes into account information from the environment, actions performed by the agent, and rewards in order to decide the optimal next action. MDPs can be characterized as either finite or infinite and either continuous or discrete, depending on the set of actions and states available and the decision making frequency.<math>^1</math> This article focuses on discrete MDPs with finite states and finite actions for the sake of simplified calculations and numerical examples. The name Markov refers to the Russian mathematician Andrey Markov, since the MDP is based on the Markov Property. In the past, MDPs have been used to solve problems such as inventory control, queuing optimization, and routing problems.<math>^2</math> Today, MDPs are often used as a method for decision making in reinforcement learning applications, serving as the framework that guides the machine to make decisions and "learn" how to behave in order to achieve its goal.


= Theory and Methodology =
An MDP makes decisions using information about the system's current state, the actions being performed by the agent, and the rewards earned based on states and actions.


The MDP is made up of multiple fundamental elements: the agent, states, a model, actions, rewards, and a policy.<math>^1</math> The agent is the object or system being controlled that has to make decisions and perform actions. The agent lives in an environment that can be described using states, which contain information about the agent and the environment. The model determines the rules of the world in which the agent lives, in other words, how certain states and actions lead to other states. The agent can perform a fixed set of actions in any given state. The agent receives rewards based on its current state. A policy is a function that determines the agent's next action based on its current state. [[File:Reinforcement Learning.png|thumb|Reinforcement Learning framework used in Markov Decision Processes]]

'''MDP Framework:'''
* <math>S</math> : States (<math>s \in S</math>)
* <math>A</math> : Actions (<math>a \in A</math>)
* <math>P(s_{t+1} | s_t, a_t)</math> : Model determining transition probabilities
* <math>R(s)</math> : Reward
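
To make the framework concrete, the following sketch encodes a small MDP in plain Python dictionaries. The state names, action names, transition probabilities, and rewards are hypothetical and chosen only for illustration; they are not part of the examples discussed later in this article.

<syntaxhighlight lang="python">
import random

# Hypothetical encoding of the MDP framework (S, A, P, R) for a toy battery-charging agent.
states = ["low", "high"]                         # S
actions = ["wait", "charge"]                     # A

# P[(s, a)] maps each possible next state s' to P(s' | s, a)
P = {
    ("low", "wait"):    {"low": 0.8, "high": 0.2},
    ("low", "charge"):  {"high": 1.0},
    ("high", "wait"):   {"high": 0.7, "low": 0.3},
    ("high", "charge"): {"high": 1.0},
}

R = {"low": -1.0, "high": 2.0}                   # R(s): reward received in state s

policy = {"low": "charge", "high": "wait"}       # a (deterministic) policy maps states to actions

def step(s):
    """Apply the policy in state s and sample the next state from the model."""
    a = policy[s]
    next_states, probs = zip(*P[(s, a)].items())
    s_next = random.choices(next_states, weights=probs)[0]
    return a, s_next, R[s_next]

print(step("low"))   # e.g. ('charge', 'high', 2.0)
</syntaxhighlight>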
In order to understand how the MDP works, first the Markov Property must be defined. The Markov Property states that the future is independent of the past given the present.<math>^4</math> In other words, only the present is needed to determine the future, since the present contains all necessary information from the past. The Markov Property can be described in mathematical terms below:


<math display="inline">P[S_{t+1} | S_t] = P[S_{t+1} | S_1, S_2, \ldots, S_t]</math>


The above notation conveys that the probability of the next state given the current state is equal to the probability of the next state given all previous states. The Markov Property is relevant to the MDP because only the current state is used to determine the next action; the previous states and actions are not needed.
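
The snippet below is a minimal illustration of this property using a hypothetical two-state transition model: the next state is sampled using only the current state, and the accumulated history is never consulted.

<syntaxhighlight lang="python">
import random

# Hypothetical transition model: P[s][s'] = P(s' | s)
P = {"sunny": {"sunny": 0.9, "rainy": 0.1},
     "rainy": {"sunny": 0.5, "rainy": 0.5}}

def next_state(current, history):
    """history is deliberately ignored: P(S_{t+1} | S_t) fully determines the draw."""
    candidates, probs = zip(*P[current].items())
    return random.choices(candidates, weights=probs)[0]

s, history = "sunny", []
for _ in range(5):
    history.append(s)
    s = next_state(s, history)   # only the current state s matters
print(history + [s])
</syntaxhighlight>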


'''The Policy and Value Function'''


The policy, <math>\Pi</math>, is a function that maps states to actions. The policy determines the optimal action to take given the current state in order to achieve the maximum total reward.


<math>\Pi : S \rightarrow A </math>


Before the best policy can be determined, a goal or return must be defined to quantify rewards at every state. There are various ways to define the return. Each variation of the return function tries to maximize rewards in some way, but differs in which accumulation of rewards should be maximized.<math>^2</math> The first method is to choose the action that maximizes the expected reward given the current state. This is the myopic method, which weighs each time-step decision equally.<math>^2</math> Next is the finite-horizon method, which tries to maximize the accumulated reward over a fixed number of time steps.<math>^2</math> Because many applications have infinite horizons, meaning the agent will always have to make decisions and continuously try to maximize its reward, another method is commonly used, known as the infinite-horizon method. In the infinite-horizon method, the goal is to maximize the expected sum of rewards over all steps in the future.<math>^2</math> When performing an infinite sum of rewards that are all weighed equally, the results may not converge and the policy algorithm may get stuck in a loop. In order to avoid this, and to be able to prioritize short-term or long-term rewards, a discount factor, <math>\gamma</math>, is added.<math>^3</math> If <math>\gamma</math> is closer to 0, the policy will choose actions that prioritize more immediate rewards; if <math>\gamma</math> is closer to 1, long-term rewards are prioritized.

Return/Goal Variations:


* Myopic: Maximize <math>E[ r_t  |  \Pi , s_t ]</math>, the expected reward for each state
* Finite-horizon: Maximize <math>E[ \textstyle \sum_{t=0}^k \displaystyle r_t  |  \Pi , s_t ]</math>, the sum of expected rewards over a finite horizon
* Discounted Infinite-horizon: Maximize <math>E[ \textstyle \sum_{t=0}^\infty \displaystyle \gamma^t r_t  |  \Pi , s_t ]</math> with <math>\gamma \in [0,1]</math>, the sum of discounted expected rewards over an infinite horizon
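
The difference between these return definitions can be seen on a short, hypothetical reward sequence; the rewards, horizon, and discount factor below are made up purely for illustration.

<syntaxhighlight lang="python">
# Hypothetical reward sequence r_0, r_1, ... observed under some policy.
rewards = [1.0, 0.0, 0.0, 10.0, 0.0, 0.0]
gamma = 0.9   # discount factor
k = 3         # finite horizon length

myopic = rewards[0]                                              # immediate reward only
finite_horizon = sum(rewards[:k])                                # equally weighted sum over k steps
discounted = sum(gamma**t * r for t, r in enumerate(rewards))    # discounted sum (truncated here)

print(myopic, finite_horizon, discounted)   # 1.0 1.0 8.29 (approximately)
</syntaxhighlight>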
The value function, <math>V(s)</math>, characterizes the return at a given state. Most commonly, the discounted infinite-horizon return is used to determine the best policy. Below, the value function is defined as the expected sum of discounted future rewards:


<math>V(s) = E[ \textstyle \sum_{t=0}^\infty \displaystyle \gamma^t r_t  | s_t ]</math>
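
Because the value function is an expectation, it can be approximated by averaging sampled discounted returns. The sketch below assumes a hypothetical <code>sample_episode(s)</code> helper that simulates one rollout from state <math>s</math> under a fixed policy and returns the list of rewards observed.

<syntaxhighlight lang="python">
# Monte-Carlo style approximation of V(s) = E[ sum_t gamma^t r_t | s_0 = s ].
# sample_episode is a hypothetical helper: it rolls out the process from state s
# under a fixed policy and returns the rewards [r_0, r_1, ...] it observed.
def estimate_value(s, sample_episode, gamma=0.9, n_episodes=1000):
    total = 0.0
    for _ in range(n_episodes):
        rewards = sample_episode(s)
        total += sum(gamma**t * r for t, r in enumerate(rewards))
    return total / n_episodes
</syntaxhighlight>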


The value function can be decomposed into two parts: the immediate reward of the current state, and the discounted value of the next state. This decomposition leads to the Bellman Equation for a given policy, shown in equation (1). Because the actions and rewards are dependent on the policy, the value function of an MDP is associated with a given policy.

<math>V(s) = E[ r_{t+1} + \gamma V(s_{t+1}) | s_t]</math> , where <math>s_{t+1} = s'</math>




<math>V^{\Pi}(s) = R(s,\Pi(s)) + \gamma \sum_{s' \in S}P(s' | s,\Pi(s))V^{\Pi}(s')</math>      (1)


<math>V^{*}(s) = \max_a [R(s, a) + \gamma \sum_{s' \in S}P(s' | s, a)V^*(s')]</math>      (2)


The optimal value function can be solved using iterative methods such as dynamic programming, Monte-Carlo evaluation, or temporal-difference learning.<math>^5</math>
 
The optimal policy is one that chooses the action with the largest optimal value given the current state:
 
<math>\Pi^*(s) = \arg\max_a [R(s,a) + \gamma \sum_{s' \in S}P(s' | s, a)V^*(s')]</math>      (3)
 
The policy is a function of the current state, meaning at each time step a new policy is calculated considering the present information. The optimal policy function can be solved using methods such as value iteration, policy iteration, Q-learning, or linear programming. <math>^{5,6}</math>
 
'''Algorithms'''
 
The first method for solving the optimality equation (2) is value iteration, also known as successive approximation, backwards induction, or dynamic programming.<math>^{1,6}</math>
 
Value Iteration Algorithm:
 
# Initialization: Set <math>V^{*}_0(s) = 0</math> for all <math>s \in S</math>, choose <math>\varepsilon > 0</math>, and set <math>n = 0</math>.
# Value Update: For each <math>s \in S</math>, compute: <math>V^{*}_{n+1}(s) = \max_a [R(s, a) + \gamma \sum_{s' \in S}P(s' | s, a)V^*_n(s')]</math>
# If <math>\max_s | V_{n+1}(s) - V_n(s) | < \varepsilon</math>, the algorithm has converged and the optimal value function, <math>V^*</math>, has been determined; otherwise return to step 2 and increment <math>n</math> by 1.
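
A compact sketch of the value iteration algorithm above is shown below. It assumes the dictionary-based model used in the earlier sketches, where <code>P[(s, a)]</code> maps next states to transition probabilities and <code>R[s]</code> is the reward for being in state <math>s</math>; it is an illustrative implementation, not the only way to organize the computation.

<syntaxhighlight lang="python">
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Value iteration for a finite MDP with state-based rewards R[s]."""
    V = {s: 0.0 for s in states}                             # step 1: V_0(s) = 0
    while True:
        # step 2: Bellman optimality backup for every state
        V_new = {
            s: max(
                R[s] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items())
                for a in actions if (s, a) in P
            )
            for s in states
        }
        # step 3: stop once the largest change is below the tolerance
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            V = V_new
            break
        V = V_new

    # greedy policy extraction, as in equation (3)
    policy = {
        s: max(
            (a for a in actions if (s, a) in P),
            key=lambda a: R[s] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items()),
        )
        for s in states
    }
    return V, policy
</syntaxhighlight>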
The value function approximation becomes more accurate at each iteration because more future states are considered. The value iteration algorithm can be slow to converge in certain situations, so an alternative algorithm, policy iteration, can be used, which often converges more quickly.
 
Policy Iteration Algorithm:
 
# Initialization: Set an arbitrary policy <math>\Pi(s)</math> and value function <math>V(s)</math> for all <math>s \in S</math>, choose <math>\varepsilon > 0</math>, and set <math>n = 0</math>.
# Policy Evaluation: For each <math>s \in S</math>, compute: <math>V^{\Pi}_{n+1}(s) = R(s,\Pi(s)) + \gamma \sum_{s' \in S}P(s' | s,\Pi(s))V^{\Pi}_n(s')</math>
# If <math>\max_s | V^{\Pi}_{n+1}(s) - V^{\Pi}_n(s) | < \varepsilon</math>, the value function of the current policy, <math>V^{\Pi}</math>, has been determined; continue to the next step. Otherwise return to step 2 and increment <math>n</math> by 1.
# Policy Update: For each <math>s \in S</math>, compute: <math>\Pi_{n+1}(s) = \arg\max_a [R(s,a) + \gamma \sum_{s' \in S}P(s' | s, a)V^{\Pi}_n(s')]</math>
# If <math>\Pi_{n+1} = \Pi_n</math>, the algorithm has converged and the optimal policy, <math>\Pi^*</math>, has been determined; otherwise return to step 2 and increment <math>n</math> by 1.
 
With each iteration the optimal policy is improved using the previous policy and value function until the algorithm converges and the optimal policy is found.
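
The corresponding sketch for policy iteration is given below, using the same dictionary-based <code>P</code> and <code>R</code> conventions as above. It evaluates the current policy iteratively (steps 2 and 3), then performs a greedy policy update (step 4), and stops when the policy no longer changes (step 5).

<syntaxhighlight lang="python">
def policy_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Policy iteration for a finite MDP with state-based rewards R[s]."""
    # step 1: arbitrary initial policy and value function
    policy = {s: next(a for a in actions if (s, a) in P) for s in states}
    V = {s: 0.0 for s in states}

    while True:
        # steps 2-3: iterative evaluation of the current policy
        while True:
            V_new = {
                s: R[s] + gamma * sum(p * V[s2] for s2, p in P[(s, policy[s])].items())
                for s in states
            }
            if max(abs(V_new[s] - V[s]) for s in states) < eps:
                V = V_new
                break
            V = V_new

        # step 4: greedy policy update with respect to the evaluated value function
        new_policy = {
            s: max(
                (a for a in actions if (s, a) in P),
                key=lambda a: R[s] + gamma * sum(p * V[s2] for s2, p in P[(s, a)].items()),
            )
            for s in states
        }

        # step 5: converged when the policy stops changing
        if new_policy == policy:
            return V, policy
        policy = new_policy
</syntaxhighlight>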


= Numerical Example =
[[File:Markov Decision Process Example 2.png|alt=|thumb|499x499px|A Markov Decision Process describing a college student's hypothetical situation.]]
As an example, the MDP can be applied to a college student, depicted to the right. In this case, the agent is the student. The states are the circles and squares in the diagram, and the arrows are the actions. For example, the action connecting the Work and School states is to leave work and go to school. In the state where the student is at school, the allowable actions are to go to the bar, enjoy their hobby, or sleep. In this example, the probability assigned to each transition, given the previous state and action, is 1. The rewards associated with each state are written in red.


Assume <math>P(s' | s, a) = 1.0</math> and <math>\gamma = 1</math>.

First, the optimal value function must be calculated for each state using equation (2):

<math>V^{*}(s) = \max_a [R(s, a) + \gamma \sum_{s' \in S}P(s' | s, a)V^*(s')]</math>

<math>V^{*}(Hobby) = \max_a [3 + 1(1.0 \times 0)] = 3</math>

<math>V^{*}(Bar) = \max_a [2 + 1(1.0 \times 0)] = 2</math>

<math>V^*(Sleep) = \max_a [0 + 1(1.0 \times 0)] = 0</math>

<math>V^*(School) = \max_a [-2 + 1(1.0 \times 2) , -2 + 1(1.0 \times 0) , -2 + 1(1.0 \times 3)] = 1</math>

<math>V^*(YouTube) = \max_a [-1 + 1(1.0 \times (-1)) , -1 + 1(1.0 \times 1)] = 0</math>

<math>V^*(Work) = \max_a [1 + 1(1.0 \times 0) , 1 + 1(1.0 \times 1)] = 2</math>

Then, the optimal policy at each state chooses the action that generates the highest value, following equation (3):

<math>\Pi^*(s) = \arg\max_a [R(s,a) + \gamma \sum_{s' \in S}P(s' | s, a)V^*(s')]</math>

<math>\Pi^*(YouTube) = \arg\max_a [0, 2] \rightarrow a = </math> Work

<math>\Pi^*(Work) = \arg\max_a [0, 1] \rightarrow a = </math> School

<math>\Pi^*(School) = \arg\max_a [0, 2, 3] \rightarrow a = </math> Hobby

Therefore, the optimal policy in each state provides a sequence of decisions that generates the optimal path through this decision process. As a result, if the student starts in the Work state, they should choose to go to school, then enjoy their hobby, then go to sleep.
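
The arithmetic above can be spot-checked with a few lines of code. The snippet below applies the Bellman optimality backup with <math>\gamma = 1</math> to two of the states, using the rewards and successor values taken directly from the calculations above; the helper function itself is only an illustrative sketch.

<syntaxhighlight lang="python">
# Bellman optimality backup for one state with deterministic transitions (P = 1.0) and gamma = 1.
def backup(reward, successor_values, gamma=1.0):
    return max(reward + gamma * v for v in successor_values)

# V*(School): reward -2, successors Bar, Sleep, Hobby with V* = 2, 0, 3 (from the calculations above)
print(backup(-2, [2, 0, 3]))   # 1, matching V*(School)

# V*(Work): reward 1, two successor states with V* = 0 and 1 (from the calculation above)
print(backup(1, [0, 1]))       # 2, matching V*(Work)
</syntaxhighlight>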


= Applications =
[[File:Pong.jpg|thumb|Computer playing Pong arcade game by Atari using reinforcement learning]]
MDPs have been applied in various fields including operations research, electrical engineering, computer science, manufacturing, economics, finance, and telecommunication.<math>^2</math> For example, the sequential decision making process described by an MDP can be used to solve routing problems such as the [[Traveling salesman problem]]. In this case, the agent is the salesman, the actions available are the routes available to take from the current state, the rewards are the costs of taking each route, and the goal is to determine the optimal policy that minimizes the cost function over the duration of the trip. Another application example is maintenance and repair problems, in which a dynamic system such as a vehicle deteriorates over time due to its actions and the environment, and the available decisions at every time epoch are to do nothing, repair, or replace a certain component of the system.<math>^2</math> This problem can be formulated as an MDP to choose the actions that minimize the cost of maintenance over the life of the vehicle. MDPs have also been applied to optimize telecommunication protocols, stock trading, and queue control in manufacturing environments.<math>^2</math>
 
Given the significant advancements in artificial intelligence and machine learning over the past decade, MDPs are being applied in fields such as robotics, automated systems, autonomous vehicles, and other complex autonomous systems. MDPs have been used widely within reinforcement learning to teach robots or other computer-based systems how to do something they were previously unable to do. For example, MDPs have been used to teach a computer how to play games like Pong, Pac-Man, or Go.<math>^{7,8}</math> DeepMind Technologies, owned by Google, used the MDP framework in conjunction with neural networks to play Atari games better than human experts.<math>^7</math> In this application, only the raw pixel input of the game screen was used, and a neural network was used to estimate the value function for each state and choose the next action.<math>^7</math> MDPs have also been used in more advanced applications to teach a simulated humanoid robot how to walk and run and a real legged robot how to walk.<math>^9</math>
[[File:Google Deepmind.jpg|thumb|Google's DeepMind uses reinforcement learning to teach AI how to walk]]


= Conclusion =


An MDP is a stochastic, sequential decision-making method based on the Markov Property. MDPs can be used to make optimal decisions for a dynamic system given information about its current state and its environment. This process is fundamental in reinforcement learning applications and is a core method for developing artificially intelligent systems. MDPs have been applied to a wide variety of industries and fields including robotics, operations research, manufacturing, economics, and finance.


= References =


<references />
# Puterman, M. L. (1990). Chapter 8 Markov decision processes. In ''Handbooks in Operations Research and Management Science'' (Vol. 2, pp. 331–434). Elsevier. <nowiki>https://doi.org/10.1016/S0927-0507(05)80172-0</nowiki>
# Feinberg, E. A., & Shwartz, A. (2012). ''Handbook of Markov Decision Processes: Methods and Applications''. Springer Science & Business Media.
# Howard, R. A. (1960). ''Dynamic programming and Markov processes.'' John Wiley.
# Ashraf, M. (2018, April 11). ''Reinforcement Learning Demystified: Markov Decision Processes (Part 1)''. Medium. <nowiki>https://towardsdatascience.com/reinforcement-learning-demystified-markov-decision-processes-part-1-bf00dda41690</nowiki>
# Bertsekas, D. P. (2011). ''Dynamic Programming and Optimal Control, 3rd Edition, Volume II''. Massachusetts Institute of Technology.
# Littman, M. L. (2001). Markov Decision Processes. In N. J. Smelser & P. B. Baltes (Eds.), ''International Encyclopedia of the Social & Behavioral Sciences'' (pp. 9240–9242). Pergamon. <nowiki>https://doi.org/10.1016/B0-08-043076-7/00614-8</nowiki>
# Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing Atari with Deep Reinforcement Learning. ''ArXiv:1312.5602 [Cs]''. <nowiki>http://arxiv.org/abs/1312.5602</nowiki>
# Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. ''Science'', ''362''(6419), 1140–1144. <nowiki>https://doi.org/10.1126/science.aar6404</nowiki>
# Ha, S., Xu, P., Tan, Z., Levine, S., & Tan, J. (2020). Learning to Walk in the Real World with Minimal Human Effort. ''ArXiv:2002.08550 [Cs]''. <nowiki>http://arxiv.org/abs/2002.08550</nowiki>
# Bellman, R. (1966). Dynamic Programming. ''Science'', ''153''(3731), 34–37. <nowiki>https://doi.org/10.1126/science.153.3731.34</nowiki>
# Abbeel, P. (2016). ''Markov Decision Processes and Exact Solution Methods'' [Lecture slides].
# Silver, D. (2015). ''Lecture 2: Markov Decision Processes'' [Lecture slides].
