
Backbone Structure of Hairy Memory

ICANN 2006, Greece. Cheng-Yuan Liou, Department of Computer Science and Information Engineering, National Taiwan University.


Presentation Transcript


  1. ICANN 2006, Greece. Backbone Structure of Hairy Memory. Cheng-Yuan Liou, Department of Computer Science and Information Engineering, National Taiwan University.

  2. Discussions • The patterns in {N_i,p and N_i,n} are the backbones of the Hopfield model; together they form the backbone structure of the model. • The hairy model is a homeostatic system. • All four methods, et-AM, e-AM, g-AM, and b-AM, derive asymmetric weight matrices with nonzero diagonal elements while preserving Hebb's postulate. • In almost all of our simulations, the evolution of states converged in a single iteration (basin-1) during recall after learning; this is very different from the evolutionary recall process of many other models.
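The basin-1 behaviour noted above can be checked with an ordinary Hopfield-style recall loop. The sketch below assumes a standard synchronous sign-threshold update and a plain Hebbian test matrix; the names `recall` and `probe` and the random setup are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def recall(W, state, max_sweeps=100):
    """Synchronous sign-threshold recall.
    Returns the final state and the number of sweeps in which it changed."""
    s = state.copy()
    for sweep in range(1, max_sweeps + 1):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1               # break ties toward +1
        if np.array_equal(s_new, s):
            return s, sweep - 1             # state stopped changing
        s = s_new
    return s, max_sweeps

# Illustrative check: a Hebbian outer-product matrix (diagonal kept)
# and a stored pattern corrupted by a few bit flips.
N, P = 64, 5
patterns = rng.choice([-1, 1], size=(P, N)).astype(float)
W = (patterns.T @ patterns) / N             # note: wii is not zeroed here
probe = patterns[0].copy()
probe[rng.choice(N, 4, replace=False)] *= -1
final, sweeps = recall(W, probe)
print("state stopped changing after", sweeps, "sweep(s)")
```

For weights trained as described on the following slide, the slide reports that this sweep count is almost always one after learning.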

  3. Discussions • All three methods, et-AM, e-AM, and g-AM, operate in one shift: each hyperplane is adjusted in turn, and each iteration improves the location of a single hyperplane. • Each hyperplane is independent of all the others during learning; this localizes neuron damage and localizes learning. • The computational cost is linearly proportional to the network size, N, and the number of patterns, P.
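The locality and the O(N·P) cost per pass can be pictured with a generic perceptron-style rule applied to one row of the weight matrix at a time. This is only a sketch of the "one hyperplane per neuron, trained independently" idea; the function name and parameters are assumptions, and the actual et-AM, e-AM, and g-AM rules place the hyperplanes differently.

```python
import numpy as np

def train_rows_independently(patterns, epochs=50, lr=0.1):
    """Adjust each neuron's hyperplane (one row of W) on its own.
    patterns: P x N array of bipolar (+/-1) memories.
    A generic perceptron-style correction is used purely as an illustration."""
    P, N = patterns.shape
    W = np.zeros((N, N))
    for i in range(N):                        # each hyperplane is independent
        for _ in range(epochs):
            changed = False
            for p in patterns:                # one pass over P patterns, each O(N)
                if p[i] * (W[i] @ p) <= 0:    # pattern not yet on the correct side
                    W[i] += lr * p[i] * p     # Hebb-like correction (wii included)
                    changed = True
            if not changed:
                break                         # row i already stores every pattern
    return W
```

Because row i never reads any other row, damage to one neuron's weights leaves the remaining hyperplanes untouched, which is the locality noted in the slide.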

  4. Discussions • All of the methods, et-AM, e-AM, g-AM, and b-AM, give non-zero values to the self-connections, wii ≠ 0, which is very different from Hopfield's setting, wii = 0. • We are still attempting to understand and clarify the meaning of the setting wii = 0, in which newborn neurons start learning from full self-reference, wii = 1, and end with whole-network reference, wii = 0. • This is beneficial for cultured neurons working as a whole. It implies that stabilizing memory might not be the only purpose of learning and evolution.
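The two conventions for the diagonal can be contrasted directly in the usual Hebbian outer-product construction. The sketch below is generic and not specific to the hairy model; the flag name is an assumption.

```python
import numpy as np

def hebbian_weights(patterns, zero_diagonal=True):
    """Outer-product (Hebbian) weight matrix for bipolar patterns.
    zero_diagonal=True  reproduces Hopfield's convention, wii = 0.
    zero_diagonal=False keeps the self-connections (wii != 0); the term
    wii * s_i then adds a self-reinforcing bias to neuron i's input."""
    P, N = patterns.shape
    W = (patterns.T @ patterns) / N
    if zero_diagonal:
        np.fill_diagonal(W, 0.0)
    return W
```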

  5. Discussions • The Boltzmann machine can be designed according to et-AM, e-AM, or g-AM.
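As an illustration of how such a weight matrix could drive a Boltzmann machine, the sketch below uses the textbook stochastic update for bipolar units. Nothing in it is specific to et-AM, e-AM, or g-AM, and exact Gibbs sampling strictly requires a symmetric W.

```python
import numpy as np

def boltzmann_sweep(W, s, T=1.0, rng=None):
    """One asynchronous stochastic sweep over all bipolar (+/-1) units.
    Each unit is set to +1 with probability sigmoid(2 * h_i / T), where
    h_i excludes the self-term (for +/-1 units the diagonal contributes
    only a constant to the energy, so it drops out of the conditional)."""
    rng = np.random.default_rng() if rng is None else rng
    s = s.copy()
    for i in rng.permutation(len(s)):
        h = W[i] @ s - W[i, i] * s[i]
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * h / T))
        s[i] = 1 if rng.random() < p_plus else -1
    return s
```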
