
Implementing DHP in Software: Taking Control of the Pole-Cart System



  1. Implementing DHP in Software: Taking Control of the Pole-Cart System Lars Holmstrom

  2. Overview • Provides a brief overview of Dual Heuristic Programming (DHP) • Describes a software implementation of DHP for designing a non-linear controller for the pole-cart system • Follows the methodology outlined in • Lendaris, G.G. & J.S. Neidhoefer, 2004, "Guidance in the Use of Adaptive Critics for Control," Ch. 4 in Handbook of Learning and Approximate Dynamic Programming, Si et al., Eds., IEEE Press & Wiley-Interscience, pp. 97–124.

  3. DHP Foundations • Reinforcement Learning • A process in which an agent learns behaviors through trial-and-error interactions with its environment, based on "reinforcement" signals acquired over time • In contrast to supervised learning, where an error signal based on the desired outcome of an action is available, reinforcement signals only indicate a "better" or "worse" action to take rather than the "best" one

  4. DHP Foundations (continued) • Dynamic Programming • Provides a mathematical formalism for finding optimal solutions to control problems within a Markovian decision process • “Cost to Go” Function • Bellman’s Recursion
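The two quantities named on this slide can be written out explicitly. Using the discounted formulation from the Lendaris & Neidhoefer chapter cited on slide 2 (utility U, discount factor γ assumed):

```latex
% "Cost to Go" function: total discounted utility from time t onward
J(t) = \sum_{k=0}^{\infty} \gamma^{k}\, U(t+k), \qquad 0 < \gamma \le 1

% Bellman's recursion: J at time t in terms of J at time t+1
J(t) = U(t) + \gamma\, J(t+1)
```

Dynamic programming exploits the recursion: if J is known for every successor state, the optimal action at time t is the one that optimizes the right-hand side.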

  5. DHP Foundations (continued) • Adaptive Critics • An application of Reinforcement Learning for solving Dynamic Programming problems • The Critic is charged with the task of estimating J for a particular control policy π • The Critic’s knowledge about J, in turn, allows us to improve the control policy π • This process is iterated until the optimal J surface, J*, is found along with the associated optimal control policy π*

  6. DHP Architecture

  7. Weight Update Calculation for the Action Network
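The equations on this slide are not reproduced in the transcript. In DHP the action network is trained by gradient descent/ascent on J with respect to its weights, which in a standard formulation (sign shown for the cost-minimization convention; flip it if U is a reward to be maximized) takes the form:

```latex
\Delta w_{ij} = -\eta \sum_{m}
    \frac{\partial J(t)}{\partial u_{m}(t)}\,
    \frac{\partial u_{m}(t)}{\partial w_{ij}},
\quad\text{where}\quad
\frac{\partial J(t)}{\partial u_{m}(t)}
  = \frac{\partial U(t)}{\partial u_{m}(t)}
  + \gamma \sum_{k} \lambda_{k}(t+1)\,
    \frac{\partial x_{k}(t+1)}{\partial u_{m}(t)}
```

Here λ(t+1) is the critic's estimate of ∂J(t+1)/∂x(t+1), and ∂x(t+1)/∂u(t) is the control Jacobian of the plant model.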

  8. Calculating the Critic Targets
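Again the slide's equation is an image not captured in the transcript; the standard DHP critic target, as given in the Lendaris & Neidhoefer chapter, is the quantity the critic output λ(t) is trained toward:

```latex
\lambda_{j}^{\circ}(t)
  = \frac{\partial U(t)}{\partial x_{j}(t)}
  + \sum_{m} \frac{\partial U(t)}{\partial u_{m}(t)}
             \frac{\partial u_{m}(t)}{\partial x_{j}(t)}
  + \gamma \sum_{k} \lambda_{k}(t+1)
    \left[ \frac{\partial x_{k}(t+1)}{\partial x_{j}(t)}
         + \sum_{m} \frac{\partial x_{k}(t+1)}{\partial u_{m}(t)}
                    \frac{\partial u_{m}(t)}{\partial x_{j}(t)} \right]
```

The bracketed term chains the model Jacobians through both the direct state path and the path through the controller.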

  9. The Pole Cart Problem • The dynamical system (plant) consists of a cart on a length of track with an inverted pendulum attached to it. • The control problem is to balance the inverted pendulum while keeping the cart near the center of the track by applying a horizontal force to the cart. • Pole Cart Animation

  10. Simulating the Plant

  11. Calculating the Instantaneous Derivative
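The presentation's own code is not included in the transcript. A common way to write the instantaneous state derivative uses the standard Barto/Sutton cart-pole equations of motion; the physical constants below are typical textbook values, not necessarily the ones used in the talk:

```python
import math

# Plant constants (assumed typical values; the talk's actual constants
# are not given in the transcript)
GRAVITY = 9.8      # m/s^2
MASS_CART = 1.0    # kg
MASS_POLE = 0.1    # kg
LENGTH = 0.5       # half the pole length, m

def state_derivative(state, force):
    """Instantaneous derivative of the pole-cart state.

    state = (x, x_dot, theta, theta_dot); returns the time derivatives
    (x_dot, x_acc, theta_dot, theta_acc) under the applied horizontal force.
    """
    x, x_dot, theta, theta_dot = state
    total_mass = MASS_CART + MASS_POLE
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    temp = (force + MASS_POLE * LENGTH * theta_dot**2 * sin_t) / total_mass
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        LENGTH * (4.0 / 3.0 - MASS_POLE * cos_t**2 / total_mass))
    x_acc = temp - MASS_POLE * LENGTH * theta_acc * cos_t / total_mass
    return (x_dot, x_acc, theta_dot, theta_acc)
```

At the upright equilibrium with no applied force every derivative is zero; pushing the cart to the right accelerates it rightward and tips the pole back to the left.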

  12. Iterating One Step In Time

  13. Iterating the Model Over a Trajectory
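Slides 12 and 13 can be sketched together: one first-order Euler step, then a loop that rolls the model out over a trajectory under a control policy. This is a sketch only; the toolkit's actual integrator, time step, and constants are not shown in the transcript:

```python
import math

# Assumed plant constants (typical textbook values)
GRAVITY, MASS_CART, MASS_POLE, LENGTH = 9.8, 1.0, 0.1, 0.5

def state_derivative(state, force):
    # Standard cart-pole dynamics; state = (x, x_dot, theta, theta_dot)
    x, x_dot, theta, theta_dot = state
    total = MASS_CART + MASS_POLE
    sin_t, cos_t = math.sin(theta), math.cos(theta)
    temp = (force + MASS_POLE * LENGTH * theta_dot**2 * sin_t) / total
    theta_acc = (GRAVITY * sin_t - cos_t * temp) / (
        LENGTH * (4.0 / 3.0 - MASS_POLE * cos_t**2 / total))
    x_acc = temp - MASS_POLE * LENGTH * theta_acc * cos_t / total
    return (x_dot, x_acc, theta_dot, theta_acc)

def euler_step(state, force, dt=0.02):
    # Iterating one step in time: x(t+1) = x(t) + dt * dx/dt
    deriv = state_derivative(state, force)
    return tuple(s + dt * d for s, d in zip(state, deriv))

def rollout(state, controller, steps, dt=0.02):
    # Iterating the model over a trajectory under a control policy
    trajectory = [state]
    for _ in range(steps):
        state = euler_step(state, controller(state), dt)
        trajectory.append(state)
    return trajectory

# With no control force, a small initial tilt grows as the pole falls
traj = rollout((0.0, 0.0, 0.1, 0.0), lambda s: 0.0, steps=50)
```

A fixed-step Euler integrator is the simplest choice; a production simulation might use Runge-Kutta for accuracy, but the DHP training loop only needs the one-step map x(t) → x(t+1).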

  14. Running the Simulation

  15. Calculating the Model Jacobians • Analytically • Numerical approximation • Backpropagation
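Of the three options listed, numerical approximation is the easiest to sketch. A central-difference Jacobian helper (a hypothetical illustration, not the toolkit's code) works for any differentiable vector-valued model:

```python
def jacobian_fd(f, x, eps=1e-6):
    """Approximate the Jacobian of f at x by central finite differences.

    f maps a list of n floats to a sequence of m floats; returns an
    m-by-n matrix J with J[i][j] = df_i / dx_j.
    """
    fx = f(x)
    n, m = len(x), len(fx)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x); xp[j] += eps   # perturb input j up...
        xm = list(x); xm[j] -= eps   # ...and down
        fp, fm = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

# Example: f(x0, x1) = (x0*x1, x0 + x1) has Jacobian [[x1, x0], [1, 1]]
J = jacobian_fd(lambda x: (x[0] * x[1], x[0] + x[1]), [2.0, 3.0])
```

In the DHP loop the same helper could approximate both the model Jacobian ∂x(t+1)/∂x(t) and the control Jacobian ∂x(t+1)/∂u(t); the analytic and backpropagation routes trade this simplicity for speed and exactness.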

  16. Defining a Utility Function • The utility function, together with the plant dynamics, defines the optimal control policy • For this example, I will choose • Note: there is no penalty for effort, horizontal velocity (the cart), or angular velocity (the pole)
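The chosen formula itself is an image not captured in the transcript. A hypothetical quadratic utility consistent with the slide's note (only cart position and pole angle are penalized; effort and the two velocities are not) might look like this, with the coefficients being assumptions:

```python
def utility(state):
    # Hypothetical quadratic utility: penalize only cart position x and
    # pole angle theta, per the slide's note that effort, cart velocity,
    # and pole angular velocity carry no penalty.  Unit weights assumed.
    x, x_dot, theta, theta_dot = state
    return -(x**2 + theta**2)

def utility_derivative(state):
    # dU/dx(t): the term the DHP critic-target calculation needs.
    # Entries for the two velocity components are identically zero.
    x, x_dot, theta, theta_dot = state
    return (-2.0 * x, 0.0, -2.0 * theta, 0.0)
```

Because U is independent of the control, the ∂U/∂u terms drop out of the action-network gradient and the critic target, simplifying both calculations.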

  17. Setting Up the DHP Training Loop • For each training iteration (step in time) • Measure the current state • Calculate the control to apply • Calculate the control Jacobian • Iterate the model • Calculate the model Jacobian • Calculate the utility derivative • Calculate the present lambda • Calculate the future lambda • Calculate the reinforcement signal for the controller • Train the controller • Calculate the desired target for the critic • Train the critic
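The steps above can be sketched end to end. This is a minimal numpy sketch, not the toolkit's implementation: it assumes linear action and critic "networks," a quadratic utility penalizing only position and angle, finite-difference model Jacobians, and typical cart-pole constants:

```python
import numpy as np

# Assumed plant constants and Euler time step
G, MC, MP, L, DT, GAMMA = 9.8, 1.0, 0.1, 0.5, 0.02, 0.95

def step(x, u):
    """One Euler step of the plant; x = [pos, vel, theta, theta_dot]."""
    pos, vel, th, thd = x
    total = MC + MP
    s, c = np.sin(th), np.cos(th)
    tmp = (u + MP * L * thd**2 * s) / total
    thacc = (G * s - c * tmp) / (L * (4.0 / 3.0 - MP * c**2 / total))
    xacc = tmp - MP * L * thacc * c / total
    return x + DT * np.array([vel, xacc, thd, thacc])

def jacobians(x, u, eps=1e-5):
    """Model Jacobian A = dx'/dx and control Jacobian B = dx'/du."""
    A = np.zeros((4, 4))
    for j in range(4):
        d = np.zeros(4); d[j] = eps
        A[:, j] = (step(x + d, u) - step(x - d, u)) / (2 * eps)
    B = (step(x, u + eps) - step(x, u - eps)) / (2 * eps)
    return A, B

def dU_dx(x):
    # Utility U = -(pos^2 + theta^2); no penalty on effort or velocities
    return np.array([-2 * x[0], 0.0, -2 * x[2], 0.0])

rng = np.random.default_rng(0)
Wa = rng.normal(0, 0.1, 4)         # linear action network: u = Wa . x
Wc = rng.normal(0, 0.1, (4, 4))    # linear critic: lambda = Wc @ x
lr_a, lr_c = 1e-3, 1e-3

x = np.array([0.0, 0.0, 0.05, 0.0])     # measure the current state
for t in range(200):
    u = Wa @ x                          # calculate the control to apply
    x_next = step(x, u)                 # iterate the model
    A, B = jacobians(x, u)              # model and control Jacobians
    lam_next = Wc @ x_next              # future lambda
    dJ_du = GAMMA * B @ lam_next        # reinforcement signal for controller
    Wa += lr_a * dJ_du * x              # train the controller (ascent on J)
    target = dU_dx(x) + GAMMA * (A + np.outer(B, Wa)).T @ lam_next
    lam = Wc @ x                        # present lambda
    Wc -= lr_c * np.outer(lam - target, x)   # train the critic toward target
    x = x_next
    if abs(x[2]) > 0.5:                 # restart the lesson if the pole falls
        x = np.array([0.0, 0.0, 0.05, 0.0])
```

Real experiments would use multilayer action and critic networks trained by backpropagation; the linear forms here just keep the loop's per-step calculations visible in the order the slide lists them.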

  18. Defining an Experiment • Define the neural network architecture for the action and critic networks • Define the constants to be used for the model • Set up the lesson plan • Define incremental steps in the learning process • Set up a test plan

  19. Defining an Experiment in the DHP Toolkit

  20. Training Step 1 : 2 Degrees

  21. Training Step 2 : -5 Degrees

  22. Training Step 2 : 15 Degrees

  23. Training Step 2 : -30 Degrees

  24. Testing Step 2 : 20 Degrees

  25. Testing Step 2 : 30 Degrees

  26. Software Availability • This software is available to anyone who would like to make use of it • We also have software available for performing backpropagation through time (BPTT) experiments • Set up an appointment with me or come in during my office hours to get more information about the software
