I was studying reinforcement learning a while ago, attempting to educate myself about deep Q learning. As part of that effort, I read through the first few chapters of Reinforcement Learning: An Introduction by Sutton and Barto. Here are my notes on Chapter 3. Like all of my other notes, these were never intended to be shared, so apologies in advance if they make no sense to anyone.
Chapter 3
Agent-Environment Interface
They mainly cover basic terminology here. The main difference from bandit problems is that the state can change with each action.
- $\mathcal{S}$ = state space, $\mathcal{A}$ = action space, $\mathcal{R}$ = reward space. These are all finite, with $\mathcal{R} \subset \mathbb{R}$.
- As with the bandit problems, we have $A_t, R_t$ as the actions and rewards at time $t$. The book uses $R_{t+1}$ to denote the reward given for action $A_t$. MDPs introduce another time series, $S_t$, to denote the state at time $t$. Thus $S_t$ and $A_t$ "go together" and $R_{t+1}$ and $S_{t+1}$ are "jointly determined."
- The probability distributions governing the dynamics of an MDP are given by the density function:

$$p(s', r \mid s, a) \doteq \Pr\{S_t = s', R_t = r \mid S_{t-1} = s, A_{t-1} = a\}$$

Other useful equations are:

$$p(s' \mid s, a) = \sum_{r \in \mathcal{R}} p(s', r \mid s, a)$$

$$r(s, a) = \mathbb{E}[R_t \mid S_{t-1} = s, A_{t-1} = a] = \sum_{r \in \mathcal{R}} r \sum_{s' \in \mathcal{S}} p(s', r \mid s, a)$$

$$r(s, a, s') = \mathbb{E}[R_t \mid S_{t-1} = s, A_{t-1} = a, S_t = s'] = \sum_{r \in \mathcal{R}} r \, \frac{p(s', r \mid s, a)}{p(s' \mid s, a)}$$
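To keep these equations straight, here is a quick sketch of my own (not from the book) of a toy MDP with its dynamics stored as a table. The states, actions, rewards, and probabilities are all made up; the two helpers just implement the sums for $p(s' \mid s, a)$ and $r(s, a)$ above.

```python
# Hypothetical two-state, two-action MDP; every number here is made up.
# p[(s, a)] lists the possible (s', r, probability) outcomes.
p = {
    ("s0", "a0"): [("s0", 0.0, 0.7), ("s1", 1.0, 0.3)],
    ("s0", "a1"): [("s1", 1.0, 0.9), ("s0", 0.0, 0.1)],
    ("s1", "a0"): [("s0", 0.0, 1.0)],
    ("s1", "a1"): [("s1", 2.0, 0.5), ("s0", 0.0, 0.5)],
}

def state_transition_prob(s, a, s_next):
    """p(s'|s,a): marginalize the joint p(s',r|s,a) over rewards."""
    return sum(prob for sn, _, prob in p[(s, a)] if sn == s_next)

def expected_reward(s, a):
    """r(s,a): expected reward for taking action a in state s."""
    return sum(r * prob for _, r, prob in p[(s, a)])

print(state_transition_prob("s0", "a0", "s1"))  # 0.3
print(expected_reward("s0", "a0"))              # 0.0*0.7 + 1.0*0.3 = 0.3
```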
Goals and Rewards
Note that as with bandit problems, $R_t$ is stochastic. But this is also the only thing we can really tune about a given system. In practice, reward is based on the full state-action-state transition, and therefore the randomness comes from the environment.
Key insight: keep rewards simple, with small, finite support. For some reason, I think of this as an extension of defining really simple prior distributions, since in this case the value (return) is determined by percolating rewards backwards from terminal states.
Returns and Episodes
Define a new random variable $G_t$ to be the return at time $t$. So if an agent interacts for $T$ time steps, this would be defined

$$G_t \doteq R_{t+1} + R_{t+2} + R_{t+3} + \cdots + R_T$$
Unified Notation for Episodic and Continuing Tasks
Here, the book allows $T$ to be infinite. In this case, we need a discounting factor for future returns, or otherwise the return would be a potentially divergent series. Let $\gamma \in [0, 1]$ be the discount factor (possibly equal to 1 for finite episodes), so that

$$G_t \doteq \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$$
This unified notation is defined after discussing terminal states, which help to deal with the problem of finite episodes. A terminal state is a sink in the state-action graph, whose reward is always zero. This allows us to always use infinite sums even for finite episodes.
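The recursion $G_t = R_{t+1} + \gamma G_{t+1}$ makes this easy to compute for a finite episode by sweeping backwards from the terminal state. A minimal sketch of mine, with made-up rewards:

```python
# Sketch: compute G_t = R_{t+1} + gamma * G_{t+1} for every t of a
# finite episode by a single backward sweep. The rewards are made up.
def discounted_returns(rewards, gamma):
    """rewards[t] is R_{t+1}; the result's entry t is G_t."""
    returns = [0.0] * len(rewards)
    g = 0.0  # the return from beyond the terminal state is zero
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

print(discounted_returns([1.0, 0.0, 0.0, 2.0], gamma=0.9))
# approximately [2.458, 1.62, 1.8, 2.0], i.e. G_0 through G_3
```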
Policies and Value Functions
A policy is a conditional distribution over actions, conditioned on a state:

$$\pi(a \mid s) \doteq \Pr\{A_t = a \mid S_t = s\}$$

The value of a state is the expected return, with respect to the policy distribution:

$$v_\pi(s) \doteq \mathbb{E}_\pi[G_t \mid S_t = s]$$

The quality of an action $a$ at state $s$ is the expected return

$$q_\pi(s, a) \doteq \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a]$$

We call $q_\pi$ the action-value function.
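As a sanity check on these definitions, here is a rough Monte Carlo estimate of $v_\pi$ (my own sketch, not from the book) on the toy MDP from the earlier snippet, under a uniform random policy. Episodes are truncated at a fixed horizon, which is an approximation; with $\gamma = 0.9$ the truncation error is negligible.

```python
# Sketch: estimate v_pi(s) by averaging sampled discounted returns,
# following a uniform random policy on the made-up MDP from above.
import random

p = {
    ("s0", "a0"): [("s0", 0.0, 0.7), ("s1", 1.0, 0.3)],
    ("s0", "a1"): [("s1", 1.0, 0.9), ("s0", 0.0, 0.1)],
    ("s1", "a0"): [("s0", 0.0, 1.0)],
    ("s1", "a1"): [("s1", 2.0, 0.5), ("s0", 0.0, 0.5)],
}
actions = ["a0", "a1"]
gamma = 0.9

def step(s, a):
    """Sample (s', r) from p(s', r | s, a)."""
    outcomes = p[(s, a)]
    probs = [prob for _, _, prob in outcomes]
    s_next, r, _ = random.choices(outcomes, weights=probs, k=1)[0]
    return s_next, r

def rollout_return(s, horizon=100):
    """One sampled discounted return G_t starting from state s."""
    g, discount = 0.0, 1.0
    for _ in range(horizon):
        a = random.choice(actions)  # uniform random policy pi
        s, r = step(s, a)
        g += discount * r
        discount *= gamma
    return g

estimates = {s: sum(rollout_return(s) for _ in range(2000)) / 2000
             for s in ("s0", "s1")}
print(estimates)  # rough Monte Carlo estimates of v_pi(s0), v_pi(s1)
```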
Exercise 3.12
Give an equation for $v_\pi$ in terms of $q_\pi$ and $\pi$.

Solution:

$$v_\pi(s) = \mathbb{E}_\pi[G_t \mid S_t = s] = \sum_a \pi(a \mid s) \, \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a] = \sum_a \pi(a \mid s) \, q_\pi(s, a)$$
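A tiny numeric check of this identity, with made-up values of $\pi(a \mid s)$ and $q_\pi(s, a)$ for a single state:

```python
# Made-up policy and action values for one state s; just checks the
# identity v_pi(s) = sum_a pi(a|s) * q_pi(s, a) derived above.
pi = {"a0": 0.25, "a1": 0.75}   # hypothetical pi(a|s)
q = {"a0": 1.0, "a1": 3.0}      # hypothetical q_pi(s, a)

v = sum(pi[a] * q[a] for a in pi)
print(v)  # 0.25*1.0 + 0.75*3.0 = 2.5
```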
Exercise 3.13
Give an equation for $q_\pi$ in terms of $v_\pi$ and the four-argument $p$.

Solution:

$$\begin{aligned}
q_\pi(s, a) &= \mathbb{E}_\pi[G_t \mid S_t = s, A_t = a] \\
&= \mathbb{E}_\pi[R_{t+1} + \gamma G_{t+1} \mid S_t = s, A_t = a] \\
&= \sum_{s', r} p(s', r \mid s, a) \left( r + \gamma \, \mathbb{E}_\pi[G_{t+1} \mid S_{t+1} = s', S_t = s, A_t = a, R_{t+1} = r] \right) \\
&= \sum_{s', r} p(s', r \mid s, a) \left( r + \gamma \, \mathbb{E}_\pi[G_{t+1} \mid S_{t+1} = s'] \right) \\
&= \sum_{s', r} p(s', r \mid s, a) \left[ r + \gamma \, v_\pi(s') \right]
\end{aligned}$$
The fourth line follows from the third because of the Markov property.
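To see both exercises at work on the toy MDP from the earlier snippets (again my own sketch, with made-up numbers): solve the Bellman equations for $v_\pi$ under a uniform random policy with a linear solve, recover $q_\pi$ via the Exercise 3.13 formula, and confirm that averaging $q_\pi$ over actions with $\pi$ gives back $v_\pi$ (Exercise 3.12).

```python
# Sketch: exact v_pi for the made-up MDP under a uniform random policy,
# then q_pi(s,a) = sum_{s',r} p(s',r|s,a) [r + gamma * v_pi(s')].
import numpy as np

p = {
    ("s0", "a0"): [("s0", 0.0, 0.7), ("s1", 1.0, 0.3)],
    ("s0", "a1"): [("s1", 1.0, 0.9), ("s0", 0.0, 0.1)],
    ("s1", "a0"): [("s0", 0.0, 1.0)],
    ("s1", "a1"): [("s1", 2.0, 0.5), ("s0", 0.0, 0.5)],
}
states, actions, gamma = ["s0", "s1"], ["a0", "a1"], 0.9
pi = 1.0 / len(actions)  # uniform random policy: pi(a|s) = 0.5

# Build P_pi[s, s'] and r_pi[s], then solve (I - gamma * P_pi) v = r_pi.
idx = {s: i for i, s in enumerate(states)}
P_pi = np.zeros((len(states), len(states)))
r_pi = np.zeros(len(states))
for s in states:
    for a in actions:
        for s_next, r, prob in p[(s, a)]:
            P_pi[idx[s], idx[s_next]] += pi * prob
            r_pi[idx[s]] += pi * prob * r
v = np.linalg.solve(np.eye(len(states)) - gamma * P_pi, r_pi)

# Exercise 3.13: q_pi from v_pi and the four-argument p.
q = {(s, a): sum(prob * (r + gamma * v[idx[s_next]])
                 for s_next, r, prob in p[(s, a)])
     for s in states for a in actions}

# Exercise 3.12 check: v_pi(s) equals sum_a pi(a|s) * q_pi(s, a).
assert all(abs(v[idx[s]] - sum(pi * q[(s, a)] for a in actions)) < 1e-9
           for s in states)
print(dict(zip(states, v)), q)
```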