
CAP6938 Neuroevolution and Developmental Encoding Basic Concepts


Presentation Transcript


  1. CAP6938 Neuroevolution and Developmental Encoding: Basic Concepts. Dr. Kenneth Stanley, August 23, 2006

  2. We Care About Evolving Complexity, So Why Neural Networks?
  • Historical origin of ideas in evolving complexity
  • Representative of a broad class of structures
  • Illustrative of general challenges
  • Clear beneficiary of high complexity

  3. How Do NNs Work?
  [Diagram: networks with input nodes feeding output nodes]

  4. How Do NNs Work? Example
  [Diagram: a robot control network; outputs (effectors/controls): Forward, Left, Right; inputs (sensors): Front, Left, Right, Back]

  5. What Exactly Happens Inside the Network?
  • Network activation. Neuron j computes a weighted sum of its inputs and passes it through an activation function: $H_j = \sigma\big(\sum_i w_{ij}\, x_i\big)$, where $x_i$ are the inputs to the neuron and $\sigma$ is the activation function
  [Diagram: inputs X1, X2 feeding hidden neurons H1, H2 through weights w11, w12, w21, w22, producing out1, out2]
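A minimal sketch of this activation in Python, assuming a logistic sigmoid for σ (the slide does not fix a particular activation function); all weight values are illustrative:

```python
import math

def sigmoid(x):
    """Logistic activation; assumed here, since the slide leaves sigma unspecified."""
    return 1.0 / (1.0 + math.exp(-x))

def activate_neuron(inputs, weights):
    """Weighted sum of inputs followed by the activation function:
    H_j = sigma(sum_i w_ij * x_i)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(total)

# Two inputs feeding two hidden neurons, as in the slide's diagram
x = [0.5, -1.0]                       # X1, X2
h1 = activate_neuron(x, [0.8, 0.2])   # w11, w21 (illustrative values)
h2 = activate_neuron(x, [-0.4, 0.9])  # w12, w22 (illustrative values)
```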

  6. Recurrent Connections
  [Diagram: inputs X1, X2 feed hidden neuron H through w11, w21; H feeds the output through w_H-out, and a recurrent connection w_out-H runs backward from the output to H]
  • Recurrent connections are backward connections in the network
  • They allow feedback
  • Recurrence is a type of memory

  7. Activating Networks of Arbitrary Topology
  • Standard method makes no distinction between feedforward and recurrent connections (see the sketch below)
  • The network is then usually activated once per time tick
  • The number of activations per tick can be thought of as the speed of thought
  • Thinking fast is expensive
  [Diagram: the same recurrent network as on the previous slide]
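Here is a sketch of this standard method, under the convention stated above: feedforward and recurrent connections are treated identically, and each tick synchronously updates every neuron from the previous tick's activations, so a recurrent link behaves as a one-tick delay line. The node names mirror the diagram, but the weight values and the helper names are assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tick(connections, state, sensor_values):
    """One activation pass over a network of arbitrary topology.

    connections: (source, target, weight) triples; recurrent links
                 need no special case.
    state: dict of node id -> activation from the previous tick.
    sensor_values: dict of input node id -> current sensor reading.
    """
    state = dict(state)
    state.update(sensor_values)        # clamp input nodes to the sensors
    sums = {}
    for src, dst, w in connections:    # sums use last tick's activations
        sums[dst] = sums.get(dst, 0.0) + w * state[src]
    for node, total in sums.items():   # synchronous update of all neurons
        state[node] = sigmoid(total)
    return state

# Illustrative network matching the slide's diagram (weights assumed):
conns = [("X1", "H", 0.5), ("X2", "H", -0.3),
         ("H", "out", 1.2), ("out", "H", 0.4)]  # last one is recurrent
state = {"X1": 0.0, "X2": 0.0, "H": 0.0, "out": 0.0}
state = tick(conns, state, {"X1": 1.0, "X2": 0.5})
```

Calling tick several times per time step corresponds to more activations per tick, i.e. faster "thought" at a higher computational cost, which is the trade-off the slide names.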

  8. Arbitrary Topology Activation Controversy
  • The standard method is not necessarily the best
  • It allows "delay-line" memory and a very simple activation algorithm with no special case for recurrence
  • However, "all-at-once" activation utilizes the entire net in each tick with no extra cost
  • This issue is unsettled

  9. The Big Questions
  • What is the topology that works?
  • What are the weights that work?
  [Diagram: a network whose connections and weights are all marked with question marks]

  10. Problem Dimensionality
  • Each connection (weight) in the network is a dimension in a search space (see the count sketch below)
  • The space you're in matters: optimization is not the only issue!
  • Topology defines the space
  [Diagram: a large network labeled "21-dimensional space" beside a small network labeled "3-dimensional space"]
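To make the dimension count concrete, a trivial sketch with a hypothetical topology: a fixed topology is just a list of connections, and its weight vector has one dimension per connection, so changing the topology changes the search space itself.

```python
def dimensionality(topology):
    """Each connection contributes one weight, i.e. one search dimension."""
    return len(topology)

# Hypothetical three-connection network: a 3-dimensional weight space
topology = [("x1", "h1"), ("x2", "h1"), ("h1", "out")]
print(dimensionality(topology))  # 3

# Adding a single connection grows the search space by one dimension
topology.append(("x1", "out"))
print(dimensionality(topology))  # 4
```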

  11. High-Dimensional Space Is Hard to Search
  • 3-dimensional: easy
  • 100-dimensional: need a good optimization method
  • 10,000-dimensional: very hard
  • 1,000,000-dimensional: very, very hard
  • 100,000,000,000,000-dimensional: forget it

  12. Bad News
  • Most interesting solutions are high-D:
    • Robotic Maid
    • World Champion Go Player
    • Autonomous Automobile
    • Human-level AI
    • Great Composer
  • We need to get into high-D space

  13. A Solution (Preview)
  • Complexification: instead of searching directly in the space of the solution, start in a smaller, related space and build up to the solution
  • Complexification is evident in many examples of social and biological progress

  14. So How Do Computers Optimize Those Weights Anyway?
  • Depends on the type of problem
  • Supervised: learn from input/output examples
  • Reinforcement learning: sparse feedback
  • Self-organization: no teacher
  • In general, the more feedback you get, the easier the learning problem
  • Humans learn language without supervision

  15. Significant Weight Optimization Techniques
  • Backpropagation: changes weights based on their contribution to error
  • Hebbian learning: changes weights based on firing correlations between connected neurons (both rules are sketched below)
  Homework:
  - Fausett pp. 39-80 (in Chapter 2)
  - Fausett pp. 289-316 (in Chapter 6)
  - Online intro chapter on RL
  - Optional RL survey
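Minimal sketches of the two update rules, with an illustrative learning rate: the Hebbian rule is the classic Δw = η·x·y correlation form, and the backpropagation function shows only the output-layer update for a squared-error loss with a sigmoid unit (the full algorithm propagates the error signal backward through the hidden layers as well):

```python
LEARNING_RATE = 0.1  # illustrative value

def hebbian_update(w, pre, post):
    """Hebbian learning: strengthen a weight in proportion to the
    correlation of pre- and post-synaptic activity (dw = eta * x * y)."""
    return w + LEARNING_RATE * pre * post

def backprop_output_update(w, x, y, target):
    """Backpropagation, output layer only: the weight changes in
    proportion to its contribution to the squared error, using the
    sigmoid derivative y * (1 - y)."""
    delta = (target - y) * y * (1.0 - y)  # error signal at the output
    return w + LEARNING_RATE * delta * x
```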
