
ONLINE Q-LEARNER USING MOVING PROTOTYPES


Presentation Transcript


  1. ONLINE Q-LEARNER USING MOVING PROTOTYPES by Miguel Ángel Soto Santibáñez

  2. Reinforcement Learning What does it do? Tackles the problem of learning control strategies for autonomous agents. What is the goal? The goal of the agent is to learn an action policy that maximizes the total reward it will receive from any starting state.

  3. Reinforcement Learning What does it need? This method assumes that training information is available in the form of a real-valued reward signal given for each state-action transition. i.e. (s, a, r) What problems? Very often, reinforcement learning fits a problem setting known as a Markov decision process (MDP).

  4. Reinforcement Learning vs. Dynamic programming reward function: r(s, a) → r; state transition function: δ(s, a) → s’. (In reinforcement learning, unlike dynamic programming, these two functions are not known to the agent in advance.)

  5. Q-learning An off-policy control algorithm. Advantage: Converges to an optimal policy in both deterministic and nondeterministic MDPs. Disadvantage: Only practical for a small number of problems.

  6. Q-learning Algorithm
     Initialize Q(s, a) arbitrarily
     Repeat (for each episode):
         Initialize s
         Repeat (for each step of the episode):
             Choose a from s using an exploratory policy
             Take action a, observe r, s’
             Q(s, a) ← Q(s, a) + α[r + γ max_a’ Q(s’, a’) − Q(s, a)]
             s ← s’
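
For readers who prefer running code to the pseudocode above, the same loop can be sketched in Python roughly as follows. This is only an illustrative sketch of standard tabular Q-learning, not the thesis implementation; the environment interface (reset(), step(), an actions list) and the ε-greedy exploration policy are assumptions made for the example.

    import random
    from collections import defaultdict

    def q_learning(env, episodes, alpha=0.1, gamma=0.9, epsilon=0.1):
        """Tabular Q-learning, mirroring the pseudocode on slide 6 (illustrative sketch).

        `env` is assumed to expose reset() -> s, step(a) -> (s_next, r, done)
        and a list of discrete actions in env.actions.
        """
        Q = defaultdict(float)                   # Q(s, a), initialized arbitrarily (here: 0)
        for _ in range(episodes):                # for each episode
            s = env.reset()                      # initialize s
            done = False
            while not done:                      # for each step of the episode
                # Choose a from s using an exploratory (epsilon-greedy) policy
                if random.random() < epsilon:
                    a = random.choice(env.actions)
                else:
                    a = max(env.actions, key=lambda act: Q[(s, act)])
                s_next, r, done = env.step(a)    # take action a, observe r, s'
                best_next = max(Q[(s_next, act)] for act in env.actions)
                # Q(s, a) <- Q(s, a) + alpha * [r + gamma * max_a' Q(s', a') - Q(s, a)]
                Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
                s = s_next                       # s <- s'
        return Q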

  7. Introduction to Q-learning Algorithm • An episode: { (s1, a1, r1), (s2, a2, r2), … , (sn, an, rn) } • s’: the next state, given by δ(s, a) → s’ • Q(s, a): the learned estimate of the expected discounted reward for taking action a in state s • γ, α: the discount factor and the learning rate

  8. A Sample Problem (figure: a small grid world with locations A and B; rewards r = 8, r = 0 and r = −8)

  9. States and actions (figure; the available actions are N, S, E, W)

  10. The Q(s, a) function (table: one column per state, one row per action)
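
As a concrete illustration of such a table, the Q-function for a small grid world can be held in a nested dictionary keyed first by state and then by action, as in the Python snippet below. Only the action set {N, S, E, W} comes from the slides; the 3×3 grid and the zero initialization are assumptions made for the example.

    # Hypothetical 3x3 grid of states; only the N/S/E/W action set is taken from the slides.
    ACTIONS = ["N", "S", "E", "W"]
    STATES = [(row, col) for row in range(3) for col in range(3)]

    # The Q(s, a) "table": one real value per state-action pair.
    Q = {s: {a: 0.0 for a in ACTIONS} for s in STATES}

    print(Q[(0, 0)]["E"])   # value of moving East from the top-left cell: 0.0 before learning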

  11. Q-learning Algorithm
      Initialize Q(s, a) arbitrarily
      Repeat (for each episode):
          Initialize s
          Repeat (for each step of the episode):
              Choose a from s using an exploratory policy
              Take action a, observe r, s’
              Q(s, a) ← Q(s, a) + α[r + γ max_a’ Q(s’, a’) − Q(s, a)]
              s ← s’

  12. Initializing the Q(s, a) function (table: one column per state, one row per action)

  13. Q-learning Algorithm
      Initialize Q(s, a) arbitrarily
      Repeat (for each episode):
          Initialize s
          Repeat (for each step of the episode):
              Choose a from s using an exploratory policy
              Take action a, observe r, s’
              Q(s, a) ← Q(s, a) + α[r + γ max_a’ Q(s’, a’) − Q(s, a)]
              s ← s’

  14. An episode

  15. Q-learning Algorithm
      Initialize Q(s, a) arbitrarily
      Repeat (for each episode):
          Initialize s
          Repeat (for each step of the episode):
              Choose a from s using an exploratory policy
              Take action a, observe r, s’
              Q(s, a) ← Q(s, a) + α[r + γ max_a’ Q(s’, a’) − Q(s, a)]
              s ← s’

  16. Calculating new Q(s, a) values (1st step, 2nd step, 3rd step, 4th step worked on the figure)
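
To make one such step concrete, here is a hypothetical worked update. None of the numbers below (α = 0.5, γ = 0.9, r = 0, the current estimates 2.0 and 8.0) come from the slides; they are chosen only to show the arithmetic.

    Q(s, a) ← Q(s, a) + α[r + γ max_a’ Q(s’, a’) − Q(s, a)]
            = 2.0 + 0.5 · [0 + 0.9 · 8.0 − 2.0]
            = 2.0 + 0.5 · 5.2
            = 4.6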

  17. The Q(s, a) function after the first episode (table: one column per state, one row per action)

  18. A second episode

  19. Calculating new Q(s, a) values (1st step, 2nd step, 3rd step, 4th step worked on the figure)

  20. The Q(s, a) function after the second episode (table: one column per state, one row per action)

  21. The Q(s, a) function after a few episodes (table: one column per state, one row per action)

  22. One of the optimal policies (table: one column per state, one row per action)

  23. An optimal policy graphically

  24. Another of the optimal policies (table: one column per state, one row per action)

  25. Another optimal policy graphically

  26. The problem with tabular Q-learning What is the problem? It is only practical for a small number of problems because: a) Q-learning can require many thousands of training iterations to converge even in modest-sized problems. b) Very often, the memory resources required by this method become too large.

  27. Solution What can we do about it? Use generalization. What are some examples? Tile coding, Radial Basis Functions, Fuzzy function approximation, Hashing, Artificial Neural Networks, LSPI, Regression Trees, Kanerva coding, etc.

  28. Shortcomings • Tile coding: curse of dimensionality. • Kanerva coding: static prototypes. • LSPI: requires a priori knowledge of the Q-function. • ANN: requires a large number of learning experiences. • Batch + regression trees: slow and requires lots of memory.

  29. Needed properties 1) Memory requirements should not explode exponentially with the dimensionality of the problem. 2) It should tackle the pitfalls caused by the use of “static prototypes”. 3) It should try to reduce the number of learning experiences required to generate an acceptable policy. NOTE: All this without requiring a priori knowledge of the Q-function.

  30. Overview of the proposed method 1) The proposed method limits the number of prototypes available to describe the Q-function (as in Kanerva coding). 2) The Q-function is modeled using a regression tree (as in the batch method proposed by Sridharan and Tesauro). 3) Unlike in Kanerva coding, however, the prototypes are not static but dynamic. 4) The proposed method can update the Q-function once for every available learning experience (it can be an online learner).

  31. Changes to the normal regression tree

  32. Basic operations in the regression tree: rupture and merging
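
These two operations can be pictured with a minimal binary regression-tree sketch: each leaf stores a bounded number of prototype (point, value) pairs, “rupture” splits a leaf that has grown too large, and “merging” collapses two sibling leaves whose stored values have become similar. Everything in the Python sketch below (class and method names, the capacity and tolerance constants, the split and merge rules) is a hypothetical illustration of the idea, not the data structure from the thesis.

    class Node:
        """Illustrative regression-tree node: leaves hold 'prototype' (point, value) pairs."""

        MAX_PROTOTYPES = 4      # hypothetical leaf capacity before a rupture (split)
        MERGE_TOLERANCE = 0.1   # hypothetical value gap below which two sibling leaves merge

        def __init__(self, prototypes=None):
            self.prototypes = prototypes if prototypes is not None else []  # leaf payload
            self.split_dim = None   # set only on internal nodes
            self.split_val = None
            self.left = None
            self.right = None

        def is_leaf(self):
            return self.left is None

        def insert(self, point, value):
            """Store a prototype; rupture (split) the leaf if it exceeds its capacity."""
            if self.is_leaf():
                self.prototypes.append((point, value))
                if len(self.prototypes) > self.MAX_PROTOTYPES:
                    self._rupture()
            else:
                child = self.left if point[self.split_dim] <= self.split_val else self.right
                child.insert(point, value)

        def _rupture(self):
            """Split this leaf at the midpoint of the dimension with the widest spread."""
            def spread(d):
                return (max(p[0][d] for p in self.prototypes)
                        - min(p[0][d] for p in self.prototypes))
            self.split_dim = max(range(len(self.prototypes[0][0])), key=spread)
            lo = min(p[0][self.split_dim] for p in self.prototypes)
            hi = max(p[0][self.split_dim] for p in self.prototypes)
            self.split_val = (lo + hi) / 2.0
            left = [p for p in self.prototypes if p[0][self.split_dim] <= self.split_val]
            right = [p for p in self.prototypes if p[0][self.split_dim] > self.split_val]
            self.left, self.right, self.prototypes = Node(left), Node(right), []

        def try_merge(self):
            """Merge two sibling leaves whose average stored values are close."""
            if self.is_leaf() or not (self.left.is_leaf() and self.right.is_leaf()):
                return
            def avg(leaf):
                return sum(v for _, v in leaf.prototypes) / len(leaf.prototypes) if leaf.prototypes else 0.0
            if abs(avg(self.left) - avg(self.right)) < self.MERGE_TOLERANCE:
                self.prototypes = self.left.prototypes + self.right.prototypes
                self.left = self.right = None
                self.split_dim = self.split_val = None

Intuitively, rupture adds resolution where new learning experiences demand it, while merging reclaims prototypes where neighbouring regions agree, so the total number of prototypes stays bounded while their placement keeps adapting (the “moving prototypes” of the title).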

  33. Impossible Merging

  34. Rules for a sound tree (figure: parent and children nodes)

  35. Impossible Merging

  36. Sample Merging: the “smallest predecessor”

  37. Sample Merging (figure: List 1)

  38. Sample Merging: the node to be inserted

  39. Sample Merging (figure: List 1, List 1.1, List 1.2)

  40. Sample Merging

  41. Sample Merging

  42. Sample Merging

  43. The agent (figure: the agent receives the reward and the detectors’ signals and produces the actuators’ signals)

  44. Applications (figure: BOOK STORE)

  45. Results first application

  46. Results first application (details)

  47. Results second application

  48. Results second application (details)

  49. Results third application Reason for this experiment: Evaluate the performance of the proposed method in a scenario that we consider ideal for this method, namely one for which there is no application-specific knowledge available. What it took to learn a good policy: • Less than 2 minutes of CPU time. • Less than 25,000 learning experiences. • Less than 900 state-action-value tuples.

  50. Swimmer first movie
