CSE510 Deep Reinforcement Learning (Lecture 22)
Note: due to a lapse in my attention, this lecture note was generated by ChatGPT as a continuation of the previous lecture note.
Offline Reinforcement Learning: Introduction and Challenges
Offline reinforcement learning (offline RL), also called batch RL, aims to learn an optimal policy *without* interacting with the environment. Instead, the agent is given a fixed dataset of transitions collected by an unknown behavior policy $\pi_\beta$.
The Offline RL Dataset
We are given a static dataset:

$$\mathcal{D} = \{(s_i, a_i, r_i, s'_i)\}_{i=1}^{N}$$

Parameter explanations:
- $s_i$: state sampled from the behavior policy's state distribution.
- $a_i$: action selected by the behavior policy $\pi_\beta$.
- $s'_i$: next state sampled from the environment dynamics $p(s' \mid s, a)$.
- $r_i$: reward observed for the transition $(s_i, a_i)$.
- $N$: total number of transitions in the dataset.
- $\mathcal{D}$: full offline dataset used for training.
The goal is to learn a new policy maximizing the expected discounted return using only $\mathcal{D}$:

$$\max_\pi \; \mathbb{E}_\pi\!\left[\sum_{t=0}^{T} \gamma^t\, r(s_t, a_t)\right]$$

Parameter explanations:
- $\pi$: policy we want to learn.
- $r(s_t, a_t)$: reward received for the state-action pair.
- $\gamma$: discount factor controlling the weight of future rewards.
- $T$: horizon or trajectory length.
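As a concrete illustration, here is a minimal sketch of how such a static dataset is typically stored and sampled for training; the `OfflineDataset` class, dimensions, and random arrays are placeholders rather than anything from the lecture, and nothing in it touches an environment.

```python
import numpy as np

class OfflineDataset:
    """Static buffer of (s, a, r, s', done) transitions; no environment access."""

    def __init__(self, states, actions, rewards, next_states, dones):
        self.states, self.actions = states, actions
        self.rewards, self.next_states, self.dones = rewards, next_states, dones
        self.size = len(states)

    def sample(self, batch_size, rng):
        idx = rng.integers(0, self.size, size=batch_size)
        return (self.states[idx], self.actions[idx], self.rewards[idx],
                self.next_states[idx], self.dones[idx])

# Hypothetical toy data standing in for logged behavior-policy experience.
rng = np.random.default_rng(0)
N, state_dim, action_dim = 10_000, 4, 2
data = OfflineDataset(
    states=rng.normal(size=(N, state_dim)).astype(np.float32),
    actions=rng.normal(size=(N, action_dim)).astype(np.float32),
    rewards=rng.normal(size=(N,)).astype(np.float32),
    next_states=rng.normal(size=(N, state_dim)).astype(np.float32),
    dones=np.zeros(N, dtype=np.float32),
)
batch = data.sample(256, rng)  # all training uses batches drawn from this fixed dataset
```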
Why Offline RL Is Difficult
Offline RL is fundamentally harder than online RL because:
- The agent cannot try new actions to fix wrong value estimates.
- The policy may choose out-of-distribution actions not present in $\mathcal{D}$.
- Q-value estimates for unseen actions can be arbitrarily incorrect.
- Bootstrapping on wrong Q-values can cause divergence.
This leads to two major failure modes:
- **Distribution shift**: new policy actions differ from dataset actions.
- **Extrapolation error**: the Q-function guesses values for unseen actions.
Extrapolation Error Problem
In standard Q-learning, the Bellman backup is:

$$Q(s, a) \leftarrow r + \gamma \max_{a'} Q(s', a')$$

Parameter explanations:
- $Q(s, a)$: estimated value of taking action $a$ in state $s$.
- $\max_{a'}$: maximum over possible next actions.
- $a'$: candidate next action evaluated in the backup step.
If $a'$ was rarely or never taken in the dataset, $Q(s', a')$ is poorly estimated, so Q-learning bootstraps off invalid values, causing instability.
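To make the failure concrete, here is a toy tabular sketch (my own construction, not from the lecture): one action never appears in the data, its Q-value stays at an arbitrary optimistic initialization, and the max in the backup pulls every other estimate toward that guess.

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.99
# Arbitrary optimistic initialization; action 1 never appears in the dataset.
Q = np.full((n_states, n_actions), 10.0)

# Dataset only contains action 0: transitions (s, a=0, r=0, s'=(s+1) % 3).
dataset = [(s, 0, 0.0, (s + 1) % n_states) for s in range(n_states)]

for _ in range(100):
    for s, a, r, s_next in dataset:
        # The max over a' includes the never-seen action, whose value is pure guesswork.
        Q[s, a] = r + gamma * Q[s_next].max()

print(Q)  # seen-action values stay near 9.9, inflated by the unseen action's estimate of 10
```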
Behavior Cloning (BC): The Safest Baseline
The simplest offline method is to imitate the behavior policy:

$$\min_\theta \; \mathbb{E}_{(s, a) \sim \mathcal{D}}\big[-\log \pi_\theta(a \mid s)\big]$$

Parameter explanations:
- $\theta$: neural network parameters of the cloned policy.
- $\pi_\theta$: learned policy approximating the behavior policy.
- $-\log \pi_\theta(a \mid s)$: negative log-likelihood loss.
Pros:
- Does not suffer from extrapolation error.
- Extremely stable.
Cons:
- Cannot outperform the behavior policy.
- Ignores reward information entirely.
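Below is a minimal behavior-cloning sketch in PyTorch, assuming continuous actions and a fixed-variance Gaussian policy so the negative log-likelihood reduces to mean-squared error up to a constant; the network size and the random tensors standing in for $\mathcal{D}$ are placeholders.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2
policy = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                       nn.Linear(256, action_dim))   # predicts the mean action
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Placeholder tensors standing in for the offline dataset D.
states = torch.randn(10_000, state_dim)
actions = torch.randn(10_000, action_dim)

for step in range(1000):
    idx = torch.randint(0, states.shape[0], (256,))
    pred = policy(states[idx])
    # With a fixed-variance Gaussian, -log pi(a|s) is MSE to the dataset action (+ const).
    loss = ((pred - actions[idx]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```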
Naive Offline Q-Learning Fails
Directly applying off-policy Q-learning on $\mathcal{D}$ generally leads to:
- Overestimation of unseen actions.
- Divergence due to extrapolation error.
- Policies worse than behavior cloning.
Strategies for Safe Offline RL
There are two primary families of solutions:
- **Policy constraint methods**
- **Conservative value estimation methods**
1. Policy Constraint Methods
These methods restrict the learned policy to stay close to the behavior policy so it does not take unsupported actions.
Advantage Weighted Regression (AWR / AWAC)
Policy update:

$$\pi_{\text{new}}(a \mid s) \propto \pi_\beta(a \mid s)\, \exp\!\left(\tfrac{1}{\lambda} A(s, a)\right)$$

Parameter explanations:
- $\pi_\beta$: behavior policy used to collect the dataset.
- $A(s, a)$: advantage function derived from Q or V estimates.
- $\lambda$: temperature controlling the strength of advantage weighting.
- $\exp\!\left(\tfrac{1}{\lambda} A(s, a)\right)$: positive weighting on high-advantage actions.
Properties:
- Uses advantages to filter good and bad actions.
- Improves beyond behavior policy while staying safe.
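A sketch of one advantage-weighted update step, under the same fixed-variance Gaussian assumption as the behavior-cloning example; the advantages are placeholder values that a critic would normally provide, `lam` is the temperature $\lambda$, and the exponentiated weights are clipped for numerical stability (a common implementation choice, not something stated in the lecture).

```python
import torch
import torch.nn as nn

state_dim, action_dim, lam = 4, 2, 1.0
policy = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                       nn.Linear(256, action_dim))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Placeholders: dataset states/actions and their estimated advantages A(s, a).
states, actions = torch.randn(256, state_dim), torch.randn(256, action_dim)
advantages = torch.randn(256)

# exp(A / lam), clipped for stability, weights the imitation loss:
# high-advantage dataset actions are imitated more strongly.
weights = torch.clamp(torch.exp(advantages / lam), max=100.0)
log_lik = -((policy(states) - actions) ** 2).sum(dim=-1)  # Gaussian log-lik up to a constant
loss = -(weights.detach() * log_lik).mean()

opt.zero_grad()
loss.backward()
opt.step()
```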
Batch-Constrained Q-learning (BCQ)
BCQ constrains the policy using a generative model:
- Train a VAE $G_\omega$ to model dataset actions $a$ given $s$.
- Train a small perturbation model $\xi_\phi(s, a)$.
- Limit the policy to $a = G_\omega(s) + \xi_\phi(s, a)$.
Parameter explanations:
- $G_\omega(s)$: VAE-generated action similar to data actions.
- $\omega$: VAE parameters.
- $\xi_\phi(s, a)$: small correction to generated actions.
- $a$: final policy action constrained near the dataset distribution.
BCQ avoids selecting unseen actions and strongly reduces extrapolation.
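A sketch of BCQ-style action selection at evaluation time. The generator, perturbation network, and Q-network are stubbed out with plain linear layers (a real implementation would use a trained VAE decoder with a latent input), so treat this as an illustration of the selection logic only: the policy only chooses among slightly perturbed copies of data-like actions.

```python
import torch
import torch.nn as nn

state_dim, action_dim, n_candidates, max_perturb = 4, 2, 10, 0.05

# Stand-ins for the trained components.
generator = nn.Linear(state_dim, action_dim)                 # G_omega(s): data-like actions
perturber = nn.Linear(state_dim + action_dim, action_dim)    # xi_phi(s, a): small correction
q_net = nn.Linear(state_dim + action_dim, 1)                 # Q(s, a)

def select_action(state):
    # Generate several candidate actions near the dataset distribution.
    s = state.unsqueeze(0).repeat(n_candidates, 1)
    a = generator(s)
    # A bounded perturbation keeps candidates close to the generated actions.
    a = a + max_perturb * torch.tanh(perturber(torch.cat([s, a], dim=-1)))
    # Pick the candidate with the highest Q-value.
    q = q_net(torch.cat([s, a], dim=-1)).squeeze(-1)
    return a[q.argmax()]

action = select_action(torch.randn(state_dim))
```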
BEAR (Bootstrapping Error Accumulation Reduction)
BEAR adds an explicit constraint to the policy improvement step:

$$\max_\pi \; \mathbb{E}\big[Q(s, a)\big] \quad \text{s.t.} \quad \mathrm{MMD}\big(\pi(\cdot \mid s),\, \pi_\beta(\cdot \mid s)\big) \le \epsilon$$

Parameter explanations:
- $\mathrm{MMD}$: Maximum Mean Discrepancy distance between action distributions.
- $\epsilon$: threshold restricting policy deviation from the behavior policy.
BEAR controls distribution shift more tightly than BCQ.
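A sketch of the MMD term BEAR penalizes, computed with a Gaussian kernel between a handful of actions sampled from the learned policy and from the behavior policy; the kernel bandwidth `sigma` and the random action samples are assumptions.

```python
import torch

def mmd_squared(x, y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD with a Gaussian kernel."""
    def kernel(a, b):
        d = ((a.unsqueeze(1) - b.unsqueeze(0)) ** 2).sum(-1)  # pairwise squared distances
        return torch.exp(-d / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

policy_actions = torch.randn(8, 2)    # samples from pi(.|s)
behavior_actions = torch.randn(8, 2)  # samples from the dataset / pi_beta(.|s)
penalty = mmd_squared(policy_actions, behavior_actions)  # constrained to be <= epsilon
```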
2. Conservative Value Function Methods
These methods modify Q-learning so Q-values of unseen actions are *underestimated*, preventing the policy from exploiting overestimated values.
Conservative Q-Learning (CQL)
One formulation is:

$$\min_Q \; \alpha \left(\mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi}\big[Q(s, a)\big] - \mathbb{E}_{(s, a) \sim \mathcal{D}}\big[Q(s, a)\big]\right) + \mathcal{L}_{\text{TD}}(Q)$$

Parameter explanations:
- $\mathcal{L}_{\text{TD}}(Q)$: standard Bellman TD loss.
- $\alpha$: weight of the conservatism penalty.
- $\mathbb{E}_{a \sim \pi}\big[Q(s, a)\big]$: expectation over policy-chosen actions.
- $\mathbb{E}_{(s, a) \sim \mathcal{D}}\big[Q(s, a)\big]$: expectation over dataset actions.
Effect:
- Increases Q-values of dataset actions.
- Decreases Q-values of out-of-distribution actions.
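A sketch of the CQL regularizer for a discrete-action Q-network: a soft maximum (logsumexp) of Q over all actions is pushed down while Q on the dataset actions is pushed up. The tensors and `alpha` are placeholders, and the standard TD loss (not shown) would be added on top.

```python
import torch
import torch.nn as nn

state_dim, n_actions, alpha = 4, 6, 1.0
q_net = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                      nn.Linear(256, n_actions))

states = torch.randn(256, state_dim)                    # batch of dataset states
dataset_actions = torch.randint(0, n_actions, (256,))   # actions actually taken in D

q_all = q_net(states)                                               # Q(s, .) for every action
q_data = q_all.gather(1, dataset_actions.unsqueeze(1)).squeeze(1)   # Q(s, a) on dataset actions

# Conservatism penalty: soft max over all actions minus the dataset-action value.
cql_penalty = (torch.logsumexp(q_all, dim=1) - q_data).mean()
# total_loss = td_loss + alpha * cql_penalty   # td_loss: standard Bellman error (not shown)
```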
Implicit Q-Learning (IQL)
IQL avoids constraints entirely by using expectile regression:
Value regression:

$$\mathcal{L}_V = \mathbb{E}_{(s, a) \sim \mathcal{D}}\Big[L_2^\tau\big(Q(s, a) - V(s)\big)\Big], \qquad L_2^\tau(u) = \big|\tau - \mathbb{1}(u < 0)\big|\, u^2$$

Parameter explanations:
- $V(s)$: scalar value estimate for state $s$.
- $L_2^\tau$: expectile regression loss.
- $\tau$: expectile parameter controlling conservatism.
- $Q(s, a)$: Q-value estimate.
Key idea:
- For $\tau < 1$, IQL reduces sensitivity to large (possibly incorrect) Q-values.
- Implicitly conservative without special constraints.
IQL often achieves state-of-the-art performance due to simplicity and stability.
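A sketch of the expectile-regression step that fits $V(s)$ in IQL, with `tau` as the expectile parameter; the Q-values are placeholder tensors, and the `detach` reflects that $V$ is regressed toward a frozen Q estimate.

```python
import torch
import torch.nn as nn

state_dim, tau = 4, 0.7
v_net = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(v_net.parameters(), lr=3e-4)

states = torch.randn(256, state_dim)   # dataset states
q_values = torch.randn(256)            # placeholder Q(s, a) for the dataset actions

def expectile_loss(diff, tau):
    # |tau - 1(diff < 0)| * diff^2: asymmetric squared loss on diff = Q - V.
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff ** 2).mean()

v = v_net(states).squeeze(-1)
loss = expectile_loss(q_values.detach() - v, tau)
opt.zero_grad()
loss.backward()
opt.step()
```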
Model-Based Offline RL
Forward Model-Based RL
Train a dynamics model:

$$\hat{p}_\psi(s' \mid s, a)$$

Parameter explanations:
- $\hat{p}_\psi$: learned transition model.
- $\psi$: parameters of the transition model.
We can generate synthetic transitions using $\hat{p}_\psi$, but model error accumulates.
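A minimal sketch of fitting the forward model, simplified to a deterministic network trained with mean-squared error on dataset transitions (an assumed stand-in for the usual probabilistic maximum-likelihood objective).

```python
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2
dynamics = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                         nn.Linear(256, state_dim))
opt = torch.optim.Adam(dynamics.parameters(), lr=1e-3)

# Placeholders for dataset transitions (s, a, s').
states, actions = torch.randn(10_000, state_dim), torch.randn(10_000, action_dim)
next_states = torch.randn(10_000, state_dim)

for step in range(1000):
    idx = torch.randint(0, states.shape[0], (256,))
    pred = dynamics(torch.cat([states[idx], actions[idx]], dim=-1))
    loss = ((pred - next_states[idx]) ** 2).mean()  # MSE surrogate for the likelihood objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```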
Penalty-Based Model Approaches (MOPO, MOReL)
Add an uncertainty penalty to the reward used in model rollouts:

$$\tilde{r}(s, a) = r(s, a) - \lambda\, u(s, a)$$

Parameter explanations:
- $\tilde{r}(s, a)$: penalized reward for model rollouts.
- $u(s, a)$: model uncertainty estimate.
- $\lambda$: penalty coefficient.
These methods limit exploration into unknown model regions.
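A sketch of an uncertainty-penalized reward for model rollouts, using disagreement across a small ensemble of placeholder dynamics models as the uncertainty estimate $u(s, a)$; MOPO itself derives the penalty from the predicted variance of a probabilistic model, so this is an assumed simplification.

```python
import torch
import torch.nn as nn

state_dim, action_dim, lam = 4, 2, 1.0

# Placeholder ensemble of deterministic dynamics models s' = f_k(s, a).
ensemble = [nn.Linear(state_dim + action_dim, state_dim) for _ in range(5)]

def penalized_reward(state, action, reward):
    x = torch.cat([state, action], dim=-1)
    preds = torch.stack([m(x) for m in ensemble])   # (ensemble, batch, state_dim)
    # Disagreement between ensemble members as a crude uncertainty proxy u(s, a).
    uncertainty = preds.std(dim=0).mean(dim=-1)
    return reward - lam * uncertainty               # r~(s, a) = r(s, a) - lam * u(s, a)

s, a, r = torch.randn(32, state_dim), torch.randn(32, action_dim), torch.randn(32)
r_pen = penalized_reward(s, a, r)
```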
Reverse Model-Based Imagination (ROMI)
ROMI generates new training data by *backward* imagination.
Reverse Dynamics Model
ROMI learns a reverse dynamics model:

$$\hat{p}_\phi(s \mid s', a)$$

Parameter explanations:
- $\phi$: parameters of the reverse dynamics model.
- $s'$: later state.
- $a$: action taken leading to $s'$.
- $s$: predicted predecessor state.
ROMI also learns a reverse policy for sampling likely predecessor actions.
Reverse Imagination Process
Given a goal state $s'$:
- Sample a predecessor action $a$ from the reverse policy.
- Predict the predecessor state $s$ from the reverse dynamics model.
- Form the imagined transition $(s, a, r, s')$.
- Repeat to build longer imagined trajectories.
Benefits:
- Imagined transitions end in real states, ensuring grounding.
- Completes missing parts of dataset.
- Helps propagate reward backward reliably.
ROMI combined with conservative RL often outperforms standard offline methods.
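A sketch of the backward-imagination loop only: the reverse policy and reverse dynamics model are stubbed out with untrained linear layers and the reward estimate is a placeholder constant, so this illustrates the rollout structure rather than ROMI's training procedure.

```python
import torch
import torch.nn as nn

state_dim, action_dim, horizon = 4, 2, 3

# Stand-ins for the trained reverse policy pi_rev(a | s') and reverse model p_rev(s | s', a).
reverse_policy = nn.Linear(state_dim, action_dim)
reverse_model = nn.Linear(state_dim + action_dim, state_dim)

def backward_imagine(goal_state):
    """Roll backward from a real dataset state, producing imagined transitions (s, a, r, s')."""
    transitions, s_next = [], goal_state
    for _ in range(horizon):
        a = reverse_policy(s_next)                              # likely predecessor action
        s = reverse_model(torch.cat([s_next, a], dim=-1))       # predicted predecessor state
        r = torch.zeros(())                                     # placeholder reward estimate
        transitions.append((s.detach(), a.detach(), r, s_next.detach()))
        s_next = s                                              # step one transition further back
    return transitions  # every imagined trajectory ends in the real goal_state

imagined = backward_imagine(torch.randn(state_dim))
```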
Summary of Lecture 22
Offline RL requires balancing:
- Improvement beyond dataset behavior.
- Avoiding unsafe extrapolation to unseen actions.
Three major families of solutions:
- Policy constraints (BCQ, BEAR, AWR)
- Conservative Q-learning (CQL, IQL)
- Model-based conservatism and imagination (MOPO, MOReL, ROMI)
Offline RL is becoming practical for real-world domains such as healthcare, robotics, autonomous driving, and recommender systems.