Associative Memories

Presentation Transcript


  1. Associative Memories: A Morphological Approach

  2. Outline • Associative Memories • Motivation • Capacity Vs. Robustness Challenges • Morphological Memories • Improving Limitations • Experiment • Results • Summary • References

  3. Associative Memories • Motivation • The human ability to retrieve information from an associated stimulus • Ex. Recalling one’s relationship with a person not seen for several years, despite that person’s physical changes (aging, facial hair, etc.) • Enhances human awareness and deduction and efficiently organizes vast amounts of information • Why not replicate this ability with computers? • Such an ability would be a valuable addition for the Artificial Intelligence community in developing rational, goal-oriented, problem-solving agents

  4. Capacity Vs. Robustness Challenges • In early memory models, capacity was limited to the length of the memory, and only negligible input distortion was tolerated • Ex. Linear Associative Memory • More recent models increased robustness but sacrificed capacity • J. J. Hopfield’s proposed Hopfield Network • Capacity: approximately n / (4 log n) patterns (McEliece et al.), where n is the memory length • Current research offers a solution that maximizes memory capacity while still allowing for input distortion • Morphological Neural Model • Capacity: essentially limitless (2^n in the binary case) • Allows for Input Distortion • One Step Convergence
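The capacity gap above can be made concrete with a quick back-of-the-envelope calculation. The short Python sketch below assumes a 5x7 binary letter bitmap (n = 35) purely for illustration; the bound used for the Hopfield side is the asymptotic perfect-recall estimate from McEliece et al.

    import math

    # Hedged illustration: compare the Hopfield capacity estimate with the
    # 2^n distinct binary patterns a morphological memory can in principle hold.
    n = 35                                    # assumed 5x7 bitmap length
    hopfield = n / (4 * math.log(n))          # asymptotic bound, McEliece et al.
    morphological = 2 ** n                    # every binary pattern of length n
    print(round(hopfield, 1), morphological)  # ~2.5 vs. 34359738368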

  5. Morphological Memories • Formulated using Mathematical Morphology techniques • Image Dilation • Image Erosion • Training constructs two memories: M and W • M is used for recalling dilated patterns • W is used for recalling eroded patterns • M and W alone are not sufficient…Why? • General distorted patterns are both dilated and eroded • Solution: a hybrid approach • Incorporate a kernel matrix, Z, into M and W • General distorted pattern recall is now possible! • Input → M_Z → W_Z → Output
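To make the construction concrete, here is a minimal Python sketch of the auto-associative case following Ritter, Sussner, and Diaz de Leon: M and W are built from max/min componentwise differences over the stored patterns, M is recalled with the min-product (tolerates dilated inputs) and W with the max-product (tolerates eroded inputs). The 6-bit toy patterns and function names are illustrative assumptions, and the kernel (M_Z / W_Z) stage is omitted here.

    import numpy as np

    def train(X):
        # X: one stored binary pattern per column (n x k).
        # m_ij = max over patterns of (x_i - x_j); w_ij = min over patterns of (x_i - x_j).
        diffs = X[:, None, :] - X[None, :, :]          # (n, n, k) pairwise differences
        return diffs.max(axis=2), diffs.min(axis=2)    # M, W

    def recall_M(M, x):
        # Min-product (M ∧ x)_i = min_j (m_ij + x_j): handles dilative (0 -> 1) noise.
        return (M + x[None, :]).min(axis=1)

    def recall_W(W, x):
        # Max-product (W ∨ x)_i = max_j (w_ij + x_j): handles erosive (1 -> 0) noise.
        return (W + x[None, :]).max(axis=1)

    # Toy usage: store two 6-bit patterns, then recall the first from an eroded copy.
    X = np.array([[1, 0],
                  [1, 1],
                  [0, 1],
                  [1, 0],
                  [0, 1],
                  [1, 1]])
    M, W = train(X)
    eroded = X[:, 0].copy()
    eroded[3] = 0                                      # erosive noise: clear a set bit
    print(recall_W(W, eroded))                         # [1 1 0 1 0 1], the stored pattern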

  6. Improving Limitations • Experiment • Construct a binary morphological auto-associative memory to recall bitmap images of capital alphabetic letters • Use the Hopfield Model as a baseline • Construct letters using the Microsoft Sans Serif font (block letters) and the Math5 font (cursive letters) • Attempt recall 5 times for each pattern at image distortion levels of 0%, 2%, 4%, 8%, 10%, 15%, 20%, and 25% • Use different memory sizes: 5, 10, 26, and 52 images • Use the Average Recall Rate per memory size as the performance measure, where a recall is correct if and only if it is perfect
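A minimal Python sketch of this measurement loop is below. The arguments `patterns` (one letter bitmap per column) and `recall` (the memory under test) are assumed to be provided elsewhere, and the distortion model (flipping a randomly chosen fraction of bits) is an assumption consistent with the percentage levels listed above.

    import numpy as np

    DISTORTION_LEVELS = [0.00, 0.02, 0.04, 0.08, 0.10, 0.15, 0.20, 0.25]
    TRIALS_PER_PATTERN = 5

    def distort(x, rate, rng):
        # Flip a random fraction `rate` of the bits (both 1 -> 0 and 0 -> 1).
        noisy = x.copy()
        flips = rng.choice(x.size, size=int(round(rate * x.size)), replace=False)
        noisy[flips] ^= 1
        return noisy

    def average_recall_rate(patterns, recall, rate, rng):
        # A recall counts as correct only if the output equals the stored pattern exactly.
        hits, total = 0, 0
        for x in patterns.T:
            for _ in range(TRIALS_PER_PATTERN):
                total += 1
                hits += np.array_equal(recall(distort(x, rate, rng)), x)
        return hits / total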

  7. Results • Morphological Model and Hopfield Model: • Both degraded in performance as memory size increased • Both recalled letters in the Microsoft Sans Serif font better than in the Math5 font • Morphological Model: • Always achieved perfect recall with 0% image distortion • Performance degraded smoothly as memory size and distortion increased • Hopfield Model: • Never correctly recalled images when the memory contained more than 5 images

  8. Results using 5 Images (chart) • MNN = Morphological Neural Network • HOP = Hopfield Neural Network • MSS = Microsoft Sans Serif font • M5 = Math5 font

  9. Results using 26 Images (chart) • MNN = Morphological Neural Network • HOP = Hopfield Neural Network • MSS = Microsoft Sans Serif font • M5 = Math5 font

  10. Summary • The ability of humans to retrieve information from associated stimuli continues to elicit great interest among researchers • Progress continues with the development of enhanced neural models • Linear Associative Memory → Hopfield Model → Morphological Model • Using the Morphological Model: • Essentially Limitless Capacity • Guaranteed Perfect Recall with Undistorted Input • One Step Convergence

  11. References • Y. H. Hu. Associative Learning and Principal Component Analysis. Lecture 6 Notes, 2003. • R. P. Lippmann. An Introduction to Computing with Neural Nets. IEEE ASSP Magazine, 4(2):4-22, 1987. • R. J. McEliece et al. The Capacity of the Hopfield Associative Memory. IEEE Transactions on Information Theory, 33(4):461-482, 1987. • G. X. Ritter and P. Sussner. An Introduction to Morphological Neural Networks. In Proceedings of the 13th International Conference on Pattern Recognition, pages 709-711, Vienna, Austria, 1996. • G. X. Ritter, P. Sussner, and J. L. Diaz de Leon. Morphological Associative Memories. IEEE Transactions on Neural Networks, 9(2):281-293, March 1998.
