
Introduction to Reinforcement Learning


Presentation Transcript


  1. Introduction to Reinforcement Learning E0397 Lecture Slides by Richard Sutton (With small changes)

  2. Reinforcement Learning: Basic Idea • Learn to take correct actions over time through experience • Similar to how humans learn: “trial and error” • Try an action: “see” what happens, “remember” what happens • Use this to change the choice of action the next time you are in the same situation • “Converge” to learning the correct actions • Focus on long-term return, not immediate benefit • An action may not seem beneficial in the short term, but its long-term return may still be good.

  3. RL in Cloud Computing • Why are we talking about RL in this class? • Which problems do you think can be solved by a learning approach? • Is a focus on long-term benefit appropriate?

  4. RL Agent • Objective is to affect the environment • Environment is stochastic and uncertain • [Diagram: the agent sends an action to the environment; the environment returns a state and a reward to the agent]

  5. What is Reinforcement Learning? • An approach to Artificial Intelligence • Learning from interaction • Goal-oriented learning • Learning about, from, and while interacting with an external environment • Learning what to do—how to map situations to actions—so as to maximize a numerical reward signal

  6. Key Features of RL • Learner is not told which actions to take • Trial-and-Error search • Possibility of delayed reward • Sacrifice short-term gains for greater long-term gains • The need to explore and exploit • Considers the whole problem of a goal-directed agent interacting with an uncertain environment

  7. Examples of Reinforcement Learning • Robocup Soccer Teams Stone & Veloso, Reidmiller et al. • World’s best player of simulated soccer, 1999; Runner-up 2000 • Inventory Management Van Roy, Bertsekas, Lee & Tsitsiklis • 10-15% improvement over industry standard methods • Dynamic Channel Assignment Singh & Bertsekas, Nie & Haykin • World's best assigner of radio channels to mobile telephone calls • Elevator Control Crites & Barto • (Probably) world's best down-peak elevator controller • Many Robots • navigation, bi-pedal walking, grasping, switching between skills... • TD-Gammon and Jellyfish Tesauro, Dahl • World's best backgammon player

  8. Supervised Learning • Training info = desired (target) outputs • [Diagram: Inputs → Supervised Learning System → Outputs; Error = (target output – actual output)] • Best examples: classification/identification systems, e.g. fault classification, WiFi location determination, workload identification • E.g. classification systems: give the agent lots of data that has already been classified; the agent can “train” itself using this knowledge • The training phase is non-trivial • (Normally) no learning happens in the on-line phase

  9. Reinforcement Learning • Training info = evaluations (“rewards” / “penalties”) • [Diagram: Inputs → RL System → Outputs (“actions”); objective: get as much reward as possible] • Absolutely no “already existing training data” • Agent learns ONLY by experience

  10. Elements of RL • State: state of the system • Actions: Suggested by RL agent, taken by actuators in the system • Actions change state (deterministically, or stochastically) • Reward: Immediate gain when taking action a in state s • Value: Long term benefit of taking action a in state s • Rewards gained over a long time horizon

  11. The n-Armed Bandit Problem • Choose repeatedly from one of n actions; each choice is called a play • After each play a_t, you get a reward r_t, where E[r_t | a_t] = Q*(a_t) • These Q*(a) are the unknown action values • The distribution of r_t depends only on a_t • Objective is to maximize the reward in the long term, e.g., over 1000 plays • To solve the n-armed bandit problem, you must explore a variety of actions and exploit the best of them

  12. The Exploration/Exploitation Dilemma • Suppose you form action-value estimates Q_t(a) ≈ Q*(a) • The greedy action at t is a_t* = argmax_a Q_t(a); choosing a_t = a_t* is exploitation, choosing a_t ≠ a_t* is exploration • You can’t exploit all the time; you can’t explore all the time • You can never stop exploring; but you should always reduce exploring. Maybe.

  13. Action-Value Methods • Methods that adapt action-value estimates and nothing else, e.g.: suppose that by the t-th play, action a had been chosen k_a times, producing rewards r_1, r_2, …, r_{k_a}; then Q_t(a) = (r_1 + r_2 + … + r_{k_a}) / k_a, the “sample average” • As k_a → ∞, Q_t(a) converges to Q*(a)

  14. ε-Greedy Action Selection • Greedy action selection: a_t = a_t* = argmax_a Q_t(a) • ε-Greedy: with probability 1 − ε choose the greedy action a_t*; with probability ε choose an action at random • … the simplest way to balance exploration and exploitation
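A minimal sketch of ε-greedy selection over action-value estimates, in Python with NumPy; the names (epsilon_greedy, Q, rng) are illustrative and not from the slides:

```python
import numpy as np

def epsilon_greedy(Q, epsilon, rng):
    """Pick a random action with probability epsilon, else the greedy action."""
    if rng.random() < epsilon:
        return int(rng.integers(len(Q)))   # explore: any action uniformly at random
    return int(np.argmax(Q))               # exploit: current best estimate

rng = np.random.default_rng(0)
Q = np.zeros(10)                           # action-value estimates for n = 10 arms
action = epsilon_greedy(Q, epsilon=0.1, rng=rng)
```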

  15. 10-Armed Testbed • n = 10 possible actions • Each true value Q*(a) is chosen randomly from a normal distribution N(0, 1) • Each reward r_t is also normal: N(Q*(a_t), 1) • 1000 plays • Repeat the whole thing 2000 times and average the results
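A compact sketch of one testbed run under these assumptions (the slides average 2000 such runs); the function name and defaults are illustrative:

```python
import numpy as np

def run_bandit(n=10, plays=1000, epsilon=0.1, seed=0):
    """One run of an epsilon-greedy agent on the 10-armed testbed."""
    rng = np.random.default_rng(seed)
    q_true = rng.normal(0.0, 1.0, n)      # true action values Q*(a) ~ N(0, 1)
    Q = np.zeros(n)                       # sample-average estimates
    counts = np.zeros(n)
    rewards = np.empty(plays)
    for t in range(plays):
        if rng.random() < epsilon:
            a = int(rng.integers(n))      # explore
        else:
            a = int(np.argmax(Q))         # exploit
        r = rng.normal(q_true[a], 1.0)    # reward ~ N(Q*(a), 1)
        counts[a] += 1
        Q[a] += (r - Q[a]) / counts[a]    # incremental sample average
        rewards[t] = r
    return rewards

print(run_bandit().mean())                # average reward over one 1000-play run
```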

  16. ε-Greedy Methods on the 10-Armed Testbed

  17. Incremental Implementation • Recall the sample-average estimation method: the average of the first k rewards is (dropping the dependence on the action a) Q_k = (r_1 + r_2 + … + r_k) / k • Can we do this incrementally (without storing all the rewards)? • We could keep a running sum and count, or, equivalently: Q_{k+1} = Q_k + (1 / (k + 1)) [r_{k+1} − Q_k] • This is a common form for update rules: NewEstimate = OldEstimate + StepSize [Target − OldEstimate]
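A sketch of the incremental update form, with a step size of 1/k recovering the plain sample average (names are illustrative):

```python
def update_estimate(old_estimate, target, step_size):
    """NewEstimate = OldEstimate + StepSize * (Target - OldEstimate)."""
    return old_estimate + step_size * (target - old_estimate)

# Sample-average case: after the k-th reward, use step size 1 / k.
Q, k = 0.0, 0
for r in [1.0, 0.0, 2.0]:
    k += 1
    Q = update_estimate(Q, r, 1.0 / k)
print(Q)   # 1.0, the average of the three rewards
```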

  18. Tracking a Nonstationary Problem • Choosing Q_k to be a sample average is appropriate in a stationary problem, i.e., when none of the Q*(a) change over time • But not in a nonstationary problem • Better in the nonstationary case is Q_{k+1} = Q_k + α [r_{k+1} − Q_k] for constant step size α, 0 < α ≤ 1: an exponential, recency-weighted average
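The same update form with a constant step size α gives the recency-weighted average; a short illustrative sketch:

```python
def constant_alpha_update(Q, r, alpha=0.1):
    """Exponential, recency-weighted average: recent rewards get larger weights."""
    return Q + alpha * (r - Q)

Q = 0.0
for r in [1.0, 1.0, 5.0]:
    Q = constant_alpha_update(Q, r)
print(Q)   # 0.671: reward r_k is weighted by alpha * (1 - alpha)**(n - k)
```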

  19. Formal Description of RL

  20. The Agent-Environment Interface • Agent and environment interact at discrete time steps t = 0, 1, 2, … • At each step the agent observes state s_t, selects action a_t, then receives reward r_{t+1} and the next state s_{t+1} • This produces the trajectory …, s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}, r_{t+2}, s_{t+2}, a_{t+2}, r_{t+3}, s_{t+3}, …
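A sketch of this interaction loop in Python, assuming a hypothetical `env` with `reset()` and `step(action)` methods and an `agent` with `act(state)` and `learn(...)`; none of these names come from the slides:

```python
def run_episode(env, agent, max_steps=1000):
    """One episode of the interaction s_t, a_t, r_{t+1}, s_{t+1}, ..."""
    state = env.reset()                                 # observe s_0
    total_reward = 0.0
    for t in range(max_steps):
        action = agent.act(state)                       # choose a_t from s_t
        next_state, reward, done = env.step(action)     # get r_{t+1}, s_{t+1}
        agent.learn(state, action, reward, next_state)  # update from experience
        total_reward += reward
        state = next_state
        if done:                                        # terminal state reached
            break
    return total_reward
```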

  21. The Agent Learns a Policy • Policy at step t, π_t: a mapping from states to action probabilities; π_t(s, a) = probability that a_t = a when s_t = s • Reinforcement learning methods specify how the agent changes its policy as a result of experience. • Roughly, the agent’s goal is to get as much reward as it can over the long run.

  22. Getting the Degree of Abstraction Right • Time steps need not refer to fixed intervals of real time. • Actions can be low level (e.g., voltages to motors), or high level (e.g., accept a job offer), “mental” (e.g., shift in focus of attention), etc. • States can be low-level “sensations”, or they can be abstract, symbolic, based on memory, or subjective (e.g., the state of being “surprised” or “lost”). • An RL agent is not like a whole animal or robot. • Reward computation is in the agent’s environment because the agent cannot change it arbitrarily. • The environment is not necessarily unknown to the agent, only incompletely controllable.

  23. Goals and Rewards • Is a scalar reward signal an adequate notion of a goal?—maybe not, but it is surprisingly flexible. • A goal should specify what we want to achieve, not how we want to achieve it. • A goal must be outside the agent’s direct control—thus outside the agent. • The agent must be able to measure success: • explicitly; • frequently during its lifespan.

  24. Returns • Suppose the sequence of rewards after step t is r_{t+1}, r_{t+2}, r_{t+3}, …; in general we want to maximize the expected return E[R_t] for each step t • Episodic tasks: interaction breaks naturally into episodes, e.g., plays of a game, trips through a maze • In episodic tasks the return is the total reward R_t = r_{t+1} + r_{t+2} + … + r_T, where T is a final time step at which a terminal state is reached, ending an episode

  25. Returns for Continuing Tasks • Continuing tasks: interaction does not have natural episodes • Discounted return: R_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + … = Σ_{k=0}^{∞} γ^k r_{t+k+1}, where γ, 0 ≤ γ ≤ 1, is the discount rate • Shortsighted (γ → 0) versus farsighted (γ → 1)
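A small sketch of computing a discounted return from a finite reward sequence (rewards after the list are assumed to be zero; names are illustrative):

```python
def discounted_return(rewards, gamma=0.9):
    """Return r_1 + gamma*r_2 + gamma^2*r_3 + ... for the given reward sequence."""
    G = 0.0
    for r in reversed(rewards):   # work backwards: G <- r + gamma * G
        G = r + gamma * G
    return G

print(discounted_return([1.0, 1.0, 1.0], gamma=0.9))   # 1 + 0.9 + 0.81 = 2.71
```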

  26. An Example: Pole Balancing • Avoid failure: the pole falling beyond a critical angle, or the cart hitting the end of the track • As an episodic task where the episode ends upon failure: reward = +1 for each step before failure, so return = number of steps before failure • As a continuing task with discounted return: reward = −1 upon failure, 0 otherwise, so return = −γ^k for k steps before failure • In either case, the return is maximized by avoiding failure for as long as possible

  27. A Unified Notation • In episodic tasks, we number the time steps of each episode starting from zero. • We usually do not have to distinguish between episodes, so we write s_t instead of s_{t, j} for the state at step t of episode j. • Think of each episode as ending in an absorbing state that always produces a reward of zero. • We can cover both cases by writing R_t = Σ_{k=0}^{∞} γ^k r_{t+k+1}, where γ can be 1 only if a zero-reward absorbing state is always reached.

  28. The Markov Property • “The state” at step t means whatever information is available to the agent at step t about its environment. • The state can include immediate “sensations,” highly processed sensations, and structures built up over time from sequences of sensations. • Ideally, a state should summarize past sensations so as to retain all “essential” information, i.e., it should have the Markov Property: Pr{s_{t+1} = s′, r_{t+1} = r | s_t, a_t, r_t, s_{t−1}, a_{t−1}, …, r_1, s_0, a_0} = Pr{s_{t+1} = s′, r_{t+1} = r | s_t, a_t} for all s′, r, and histories.

  29. Markov Decision Processes • If a reinforcement learning task has the Markov Property, it is basically a Markov Decision Process (MDP). • If the state and action sets are finite, it is a finite MDP. • To define a finite MDP, you need to give: • state and action sets • one-step “dynamics” defined by transition probabilities P^a_{ss′} = Pr{s_{t+1} = s′ | s_t = s, a_t = a} • expected values of reward R^a_{ss′} = E[r_{t+1} | s_t = s, a_t = a, s_{t+1} = s′]
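As a concrete picture, a finite MDP can be stored as two NumPy arrays indexed by (s, a, s′): one for the transition probabilities P and one for the expected rewards R. The two-state, two-action numbers below are made up purely for illustration:

```python
import numpy as np

n_states, n_actions = 2, 2

# P[s, a, s'] = Pr{ s_{t+1} = s' | s_t = s, a_t = a }
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])

# R[s, a, s'] = E[ r_{t+1} | s_t = s, a_t = a, s_{t+1} = s' ]
R = np.array([[[ 1.0, 0.0], [ 0.0, 2.0]],
              [[-1.0, 0.0], [ 0.0, 0.5]]])

assert np.allclose(P.sum(axis=2), 1.0)   # each (s, a) row is a probability distribution
```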

  30. An Example Finite MDP: Recycling Robot • At each step, the robot has to decide whether it should (1) actively search for a can, (2) wait for someone to bring it a can, or (3) go to home base and recharge. • Searching is better but runs down the battery; if the robot runs out of power while searching, it has to be rescued (which is bad). • Decisions are made on the basis of the current energy level: high, low. • Reward = number of cans collected

  31. Recycling Robot MDP

  32. Value Functions • The value of a state is the expected return starting from that state; it depends on the agent’s policy: V^π(s) = E_π[R_t | s_t = s] = E_π[Σ_{k=0}^{∞} γ^k r_{t+k+1} | s_t = s] • The value of taking an action in a state under policy π is the expected return starting from that state, taking that action, and thereafter following π: Q^π(s, a) = E_π[R_t | s_t = s, a_t = a]

  33. Bellman Equation for a Policy π • The basic idea: R_t = r_{t+1} + γ r_{t+2} + γ² r_{t+3} + … = r_{t+1} + γ R_{t+1} • So: V^π(s) = E_π[R_t | s_t = s] = E_π[r_{t+1} + γ V^π(s_{t+1}) | s_t = s] • Or, without the expectation operator: V^π(s) = Σ_a π(s, a) Σ_{s′} P^a_{ss′} [R^a_{ss′} + γ V^π(s′)]

  34. More on the Bellman Equation • This is a set of equations (in fact, linear), one for each state. The value function for π is its unique solution. • [Backup diagrams for V^π and Q^π]
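A sketch of iterative policy evaluation, which solves these equations for V^π by sweeping the Bellman backup to convergence; it reuses the (s, a, s′)-indexed P and R arrays described earlier, and the toy numbers and names are illustrative:

```python
import numpy as np

def policy_evaluation(P, R, policy, gamma=0.9, theta=1e-8):
    """Iterate V(s) <- sum_a pi(s,a) sum_s' P[s,a,s'] (R[s,a,s'] + gamma V(s'))."""
    V = np.zeros(P.shape[0])
    while True:
        # Q[s, a] = expected one-step reward plus discounted value of the successor
        Q = np.einsum('sap,sap->sa', P, R + gamma * V[None, None, :])
        V_new = np.sum(policy * Q, axis=1)       # average over actions under pi
        if np.max(np.abs(V_new - V)) < theta:
            return V_new
        V = V_new

# Example: equiprobable random policy on a toy 2-state, 2-action MDP.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.ones((2, 2, 2))                           # reward 1 on every transition
policy = np.full((2, 2), 0.5)
print(policy_evaluation(P, R, policy))           # both values approach 1 / (1 - 0.9) = 10
```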

  35. Optimal Value Functions • For finite MDPs, policies can be partially ordered: π ≥ π′ if and only if V^π(s) ≥ V^{π′}(s) for all s • There are always one or more policies that are better than or equal to all the others. These are the optimal policies. We denote them all π*. • Optimal policies share the same optimal state-value function: V*(s) = max_π V^π(s) for all s • Optimal policies also share the same optimal action-value function: Q*(s, a) = max_π Q^π(s, a) for all s and a. This is the expected return for taking action a in state s and thereafter following an optimal policy.

  36. Bellman Optimality Equation for V* • The value of a state under an optimal policy must equal the expected return for the best action from that state: V*(s) = max_a Q^{π*}(s, a) = max_a E[r_{t+1} + γ V*(s_{t+1}) | s_t = s, a_t = a] = max_a Σ_{s′} P^a_{ss′} [R^a_{ss′} + γ V*(s′)] • [The relevant backup diagram] • V* is the unique solution of this system of nonlinear equations.

  37. Bellman Optimality Equation for Q* • Q*(s, a) = E[r_{t+1} + γ max_{a′} Q*(s_{t+1}, a′) | s_t = s, a_t = a] = Σ_{s′} P^a_{ss′} [R^a_{ss′} + γ max_{a′} Q*(s′, a′)] • [The relevant backup diagram] • Q* is the unique solution of this system of nonlinear equations.
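A sketch of value iteration, which turns the Bellman optimality equation for V* into an update rule and reads Q* off the converged result; the toy MDP arrays and names are illustrative:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, theta=1e-8):
    """Iterate V(s) <- max_a sum_s' P[s,a,s'] (R[s,a,s'] + gamma V(s')) to convergence."""
    V = np.zeros(P.shape[0])
    while True:
        Q = np.einsum('sap,sap->sa', P, R + gamma * V[None, None, :])
        V_new = Q.max(axis=1)                    # greedy backup: best action per state
        if np.max(np.abs(V_new - V)) < theta:
            return V_new, Q                      # approximations of V* and Q*
        V = V_new

P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.6, 0.4], [0.1, 0.9]]])
R = np.array([[[1.0, 0.0], [0.0, 0.0]],
              [[0.0, 0.0], [0.0, 2.0]]])
V_star, Q_star = value_iteration(P, R)
print(V_star, Q_star.argmax(axis=1))             # optimal values and a greedy (optimal) policy
```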

  38. Gridworld • Actions: north, south, east, west; deterministic. • If an action would take the agent off the grid: no move, but reward = –1 • Other actions produce reward = 0, except actions that move the agent out of the special states A and B as shown. • State-value function for the equiprobable random policy; γ = 0.9

  39. Why Optimal State-Value Functions are Useful • Any policy that is greedy with respect to V* is an optimal policy. • Therefore, given V*, one-step-ahead search produces the long-term optimal actions. • E.g., back to the gridworld.

  40. What About Optimal Action-Value Functions? • Given Q*, the agent does not even have to do a one-step-ahead search: π*(s) = argmax_a Q*(s, a)
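A sketch contrasting the two cases: with V* you still need a one-step lookahead through the model, while with Q* a simple argmax over actions suffices. The P, R, V_star, Q_star arrays follow the earlier illustrative layout; the tiny one-state model at the end is made up:

```python
import numpy as np

def greedy_from_V(P, R, V_star, s, gamma=0.9):
    """One-step-ahead search: needs the model (P, R) as well as V*."""
    q = np.einsum('ap,ap->a', P[s], R[s] + gamma * V_star[None, :])
    return int(np.argmax(q))

def greedy_from_Q(Q_star, s):
    """No lookahead and no model: just pick the best stored action value."""
    return int(np.argmax(Q_star[s]))

# Toy check: 1 state, 2 actions, reward 0 vs 1, gamma = 0.9, so V* = 10.
P = np.array([[[1.0], [1.0]]]); R = np.array([[[0.0], [1.0]]])
V_star = np.array([10.0]); Q_star = np.array([[9.0, 10.0]])
print(greedy_from_V(P, R, V_star, 0), greedy_from_Q(Q_star, 0))   # both choose action 1
```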

  41. Solving the Bellman Optimality Equation • Finding an optimal policy by solving the Bellman Optimality Equation requires the following: • accurate knowledge of environment dynamics; • enough space and time to do the computation; • the Markov Property. • How much space and time do we need? • polynomial in the number of states (via dynamic programming methods; Chapter 4), • BUT the number of states is often huge (e.g., backgammon has about 10^20 states). • We usually have to settle for approximations. • Many RL methods can be understood as approximately solving the Bellman Optimality Equation.

  42. Solving MDPs • The goal in an MDP is to find the policy that maximizes long-term reward (value) • If all the information (transition probabilities and expected rewards) is available, this can be done by solving the equations explicitly or by iterative methods

  43. Policy Iteration • Alternate two steps: policy evaluation (compute V^π for the current policy π) and policy improvement (“greedification”: make the policy greedy with respect to V^π) • π_0 → V^{π_0} → π_1 → V^{π_1} → … → π* → V*

  44. Policy Iteration
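A sketch of policy iteration that alternates the two steps, reusing the iterative evaluation idea from the earlier sketch; the toy arrays and names are illustrative:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9, theta=1e-8):
    """Alternate policy evaluation and greedy policy improvement until stable."""
    n_states = P.shape[0]
    policy = np.zeros(n_states, dtype=int)           # arbitrary deterministic start policy
    while True:
        # Policy evaluation: compute V^pi for the current deterministic policy.
        V = np.zeros(n_states)
        while True:
            Q = np.einsum('sap,sap->sa', P, R + gamma * V[None, None, :])
            V_new = Q[np.arange(n_states), policy]
            if np.max(np.abs(V_new - V)) < theta:
                break
            V = V_new
        # Policy improvement ("greedification"): act greedily with respect to V^pi.
        new_policy = Q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            return policy, V                         # policy is stable, hence optimal
        policy = new_policy

P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.6, 0.4], [0.1, 0.9]]])
R = np.array([[[1.0, 0.0], [0.0, 0.0]],
              [[0.0, 0.0], [0.0, 2.0]]])
print(policy_iteration(P, R))
```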

  45. From MDPs to reinforcement learning

  46. MDPs vs RL • MDP methods are great if we know the transition probabilities • And the expected immediate rewards • But in real-life tasks this is precisely what we do NOT know! • This is what turns the task into a learning problem • Simplest method to learn: Monte Carlo

  47. Monte Carlo Methods • Monte Carlo methods learn from complete sample returns • Only defined for episodic tasks • Monte Carlo methods learn directly from experience • On-line: No model necessary and still attains optimality • Simulated: No need for a full model

  48. Monte Carlo Policy Evaluation • Goal: learn V^π(s) • Given: some number of episodes under π which contain s • Idea: average returns observed after visits to s • Every-Visit MC: average returns for every time s is visited in an episode • First-Visit MC: average returns only for the first time s is visited in an episode • Both converge asymptotically

  49. First-visit Monte Carlo policy evaluation
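A sketch of first-visit Monte Carlo policy evaluation. It assumes each episode is given as a list of (state, reward) pairs, where the reward is the one received after leaving that state; this representation, the sample data, and the names are illustrative, not from the slides:

```python
from collections import defaultdict

def first_visit_mc(episodes, gamma=0.9):
    """Average the return following the *first* visit to each state in each episode."""
    returns = defaultdict(list)               # state -> list of observed returns
    for episode in episodes:
        # Compute the returns G_t = r_{t+1} + gamma * G_{t+1} by working backwards.
        G = [0.0] * (len(episode) + 1)
        for t in reversed(range(len(episode))):
            _, r = episode[t]
            G[t] = r + gamma * G[t + 1]
        # Record the return only for the first time each state appears.
        seen = set()
        for t, (s, _) in enumerate(episode):
            if s not in seen:
                seen.add(s)
                returns[s].append(G[t])
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}

# Two tiny made-up episodes generated under some fixed policy pi.
episodes = [[("A", 1.0), ("B", 0.0), ("A", 2.0)],
            [("B", 1.0), ("A", 1.0)]]
print(first_visit_mc(episodes))    # estimated V^pi for states "A" and "B"
```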

  50. Monte Carlo Estimation of Action Values (Q) • Monte Carlo is most useful when a model is not available • We want to learn Q* • Q^π(s, a): average return starting from state s and action a, following π • Also converges asymptotically if every state-action pair is visited • Exploring starts: every state-action pair has a non-zero probability of being the starting pair
