
CSE510 Deep Reinforcement Learning (Lecture 20)

Exploration in RL

Motivations

Exploration vs. Exploitation Dilemma

Online decision-making involves a fundamental choice:

  • Exploration: trying out new things (new behaviors), with the hope of discovering higher rewards
  • Exploitation: doing what you know will yield the highest reward

The best long-term strategy may involve short-term sacrifices

Gather enough knowledge early to make the best long-term decisions

Example

Restaurant Selection
  • Exploitation: Go to your favorite restaurant
  • Exploration: Try a new restaurant

Oil Drilling

  • Exploitation: Drill at the best known location
  • Exploration: Drill at a new location

Game Playing

  • Exploitation: Play the move you believe is best
  • Exploration: Play an experimental move

Breakout vs. Montezuma’s Revenge

Property          | Breakout                              | Montezuma's Revenge
Reward frequency  | Dense (every brick hit gives points)  | Extremely sparse (only after collecting a key or treasure)
State space       | Simple (ball, paddle, bricks)         | Complex (many rooms, objects, ladders, timing)
Action relevance  | Almost any action affects reward soon | Most actions have no immediate feedback
Exploration depth | Shallow (few steps to reward)         | Deep (dozens/hundreds of steps before reward)
Determinism       | Mostly deterministic dynamics         | Deterministic, but requires long sequences of precise actions
Credit assignment | Easy (short time gap)                 | Very hard (long delay from cause to effect)

Motivation

  • Motivation: “Forces” that energize an organism to act and that direct its activity
  • Extrinsic Motivation: being motivated to do something because of some external reward ($, a prize, food, water, etc.)
  • Intrinsic Motivation: being motivated to do something because it is inherently enjoyable (curiosity, exploration, novelty, surprise, incongruity, complexity…)

Intuitive Exploration Strategy

  • Intrinsic motivation drives the exploration for unknowns

  • Intuitively, we explore efficiently once we know what we do not know, and target our exploration efforts to the unknown part of the space.

  • All non-naive exploration methods rely on some form of uncertainty estimation: over the states (or state-action pairs) visited so far, over the transition dynamics, or over the Q-functions.

  • Methods that are optimal in small settings do not carry over directly to large ones, but they can inspire methods that do scale

  • Practical implementations may rely on some hacks

Classes of Exploration Methods in Deep RL

  • Optimistic exploration
    • Uncertainty about states
    • Visiting novel states (state visitation counting)
  • Information state search
    • Uncertainty about state transitions or dynamics
    • Dynamics prediction error or Information gain for dynamics learning
  • Posterior sampling
    • Uncertainty about Q-value functions or policies
    • Selecting actions according to the probability that they are best (see the sketch below)
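
As a minimal illustration of the posterior-sampling idea, the sketch below implements Thompson sampling for a Bernoulli bandit: it keeps a Beta posterior per action, samples one plausible mean reward from each posterior, and acts greedily on the samples, so each action is chosen with exactly the posterior probability that it is the best. The bandit environment and arm probabilities here are made up for illustration; deep-RL versions of this idea maintain posteriors over Q-functions or policies instead.

```python
import numpy as np

def thompson_sampling_bernoulli(true_probs, n_steps=1000, seed=0):
    """Thompson sampling for a Bernoulli bandit with Beta(1, 1) priors."""
    rng = np.random.default_rng(seed)
    n_arms = len(true_probs)
    successes = np.ones(n_arms)   # Beta alpha parameters
    failures = np.ones(n_arms)    # Beta beta parameters
    total_reward = 0.0
    for _ in range(n_steps):
        # Sample one plausible mean reward per arm from its posterior.
        samples = rng.beta(successes, failures)
        # Act greedily w.r.t. the sampled means: this picks each arm with
        # the posterior probability that it is the best arm.
        arm = int(np.argmax(samples))
        reward = float(rng.random() < true_probs[arm])
        successes[arm] += reward
        failures[arm] += 1.0 - reward
        total_reward += reward
    return total_reward

# Hypothetical arm success probabilities.
print(thompson_sampling_bernoulli([0.2, 0.5, 0.7]))
```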

Optimistic Exploration

Count-Based Exploration in Small MDPs

Book-keep state visitation counts $N(s)$ and add exploration reward bonuses that encourage policies to visit states with lower counts:

$$R(s,a,s') = r(s,a,s') + \mathcal{B}(N(s))$$

where $\mathcal{B}(N(s))$ is the intrinsic exploration reward bonus.

  • UCB: $\mathcal{B}(N(s)) = \sqrt{\frac{2\ln n}{N(s)}}$ (more aggressive exploration)

  • MBIE-EB (Strehl & Littman): $\mathcal{B}(N(s)) = \sqrt{\frac{1}{N(s)}}$

  • BEB (Kolter & Ng): $\mathcal{B}(N(s)) = \frac{1}{N(s)}$

  • We want to come up with something that rewards states that we have not visited often.

  • But in large MDPs, we rarely visit a state twice!

  • We need to capture a notion of state similarity, and reward states that are most dissimilar to what we have seen so far

    • as opposed to merely different (new states will always be different in a large state space).
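
Before turning to large state spaces, here is what the tabular count-based bonus above looks like in code. This is a minimal sketch, assuming a Gym-style discrete environment with hashable states and an MBIE-EB-style bonus; the learning rate, discount, and bonus coefficient are placeholder values.

```python
import math
from collections import defaultdict

def q_learning_with_count_bonus(env, n_episodes=500, alpha=0.1, gamma=0.99,
                                bonus_coef=1.0):
    """Tabular Q-learning with an MBIE-EB-style bonus B(N(s)) = c / sqrt(N(s)).

    `env` is assumed to follow the Gym-style reset()/step() interface with
    discrete, hashable states and a discrete action space.
    """
    Q = defaultdict(float)   # Q[(state, action)]
    N = defaultdict(int)     # state visitation counts
    actions = range(env.action_space.n)

    for _ in range(n_episodes):
        s, _ = env.reset()
        done = False
        while not done:
            # Act greedily w.r.t. the bonus-augmented Q-function;
            # the optimism in the bonus drives exploration.
            a = max(actions, key=lambda a_: Q[(s, a_)])
            s_next, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated

            N[s_next] += 1
            # Augment the environment reward with the count-based bonus.
            r_plus = r + bonus_coef / math.sqrt(N[s_next])

            target = r_plus if done else r_plus + gamma * max(
                Q[(s_next, a_)] for a_ in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q
```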

Fitting Generative Models

Idea: fit a density model $p_\theta(s)$ (or $p_\theta(s,a)$) to the states seen so far.

$p_\theta(s)$ might be high even for a new $s$, if $s$ is similar to previously seen states. Can we use $p_\theta(s)$ to get a "pseudo-count" for $s$?

In a small MDP, the true (empirical) visitation probability is

$$P(s) = \frac{N(s)}{n}$$

where $N(s)$ is the number of times $s$ has been visited and $n$ is the total number of state visits.

After visiting $s$ once more,

$$P'(s) = \frac{N(s)+1}{n+1}$$

The pseudo-count procedure:

  1. Fit a density model $p_\theta(s)$ to all states $\mathcal{D}$ seen so far.
  2. Take a step $i$ and observe $s_i$.
  3. Fit a new model $p_{\theta'}(s)$ to $\mathcal{D} \cup \{s_i\}$.
  4. Use $p_\theta(s_i)$ and $p_{\theta'}(s_i)$ to estimate the pseudo-count $\hat{N}(s_i)$.
  5. Set $r_i^+ = r_i + \mathcal{B}(\hat{N}(s_i))$.
  6. Go to step 1.

How do we get $\hat{N}(s_i)$? Use the two equations

$$p_\theta(s_i) = \frac{\hat{N}(s_i)}{\hat{n}}, \qquad p_{\theta'}(s_i) = \frac{\hat{N}(s_i)+1}{\hat{n}+1}$$

and solve for the two unknowns $\hat{N}(s_i)$ and $\hat{n}$.
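
Solving the two equations gives $\hat{n} = \frac{1 - p_{\theta'}(s_i)}{p_{\theta'}(s_i) - p_\theta(s_i)}$ and hence $\hat{N}(s_i) = p_\theta(s_i)\,\hat{n}$. Below is a minimal sketch of this computation and of the resulting reward bonus; the bonus form and constants are illustrative choices, not taken from the lecture or paper.

```python
import math

def pseudo_count(p_old, p_new, eps=1e-12):
    """Pseudo-count from density evaluations before (p_old = p_theta(s))
    and after (p_new = p_theta'(s)) observing s, using
        p_old = N / n,   p_new = (N + 1) / (n + 1).
    Solving for N gives N = p_old * (1 - p_new) / (p_new - p_old).
    """
    gain = max(p_new - p_old, eps)  # the model should assign s more mass after seeing it
    return p_old * (1.0 - p_new) / gain

def exploration_bonus(p_old, p_new, coef=0.1):
    """MBIE-EB-style bonus coef / sqrt(N_hat + 0.01); the 0.01 avoids division by zero."""
    n_hat = pseudo_count(p_old, p_new)
    return coef / math.sqrt(n_hat + 0.01)
```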

link to the paper 

Density models

link to the paper 

State Counting with Deep Hashing

  • We still count states (images), but in a compressed latent space rather than in pixel space.
  • Compress $s$ into a latent code, then count occurrences of the code (see the sketch below).
  • How do we get the image encoding? e.g., using autoencoders.
  • There is no guarantee that such a reconstruction loss will capture the important features that make two states similar.
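
One simple way to obtain such codes is a SimHash-style random projection of a state feature vector, keeping only the signs; the feature vector could equally be an autoencoder's latent code, as mentioned above. The sketch below is a minimal illustration and assumes a user-supplied encode(state) feature function.

```python
import math
from collections import defaultdict

import numpy as np

class HashingStateCounter:
    """Count states in a compressed latent space via a SimHash-style random projection.

    `encode` maps a raw state (e.g., an image) to a feature vector; it could be
    an autoencoder's latent code or any fixed feature map (an assumption here).
    """
    def __init__(self, encode, feature_dim, code_bits=32, bonus_coef=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.encode = encode
        self.A = rng.standard_normal((code_bits, feature_dim))  # fixed random projection
        self.counts = defaultdict(int)
        self.bonus_coef = bonus_coef

    def _code(self, state):
        # Binary hash code: signs of the projected feature vector.
        phi = np.asarray(self.encode(state), dtype=np.float64)
        return tuple((self.A @ phi > 0).astype(np.int8))

    def bonus(self, state):
        """Update the count for the state's hash code and return the bonus c / sqrt(n)."""
        code = self._code(state)
        self.counts[code] += 1
        return self.bonus_coef / math.sqrt(self.counts[code])
```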